LangTime: A Language-Guided Unified Model for Time Series Forecasting with Proximal Policy Optimization
Accept (poster)
Summary: This paper introduces LangTime, a language-guided unified model for time series forecasting. It integrates an LLM with RL-based fine-tuning using PPO for time series analysis. Among the designed components, the Temporal Comprehension Prompts (TCPs) align time series data with the LLM by embedding dataset-specific and channel-specific instructions, and the time series data is compressed into a single token to facilitate the LLM's understanding. The proposed TimePPO addresses error accumulation in autoregressive forecasting and introduces a multi-dimensional reward function for better predictions. LangTime outperforms existing LLM-based time series models and shows strong zero-shot transferability.

Claims And Evidence: Yes, the claims are clear and convincing.

Methods And Evaluation Criteria: Yes, the method and evaluation metrics are aligned.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experiments are sound and complete.

Supplementary Material: Yes, I have reviewed the code in the anonymous GitHub repository.

Relation To Broader Scientific Literature: It involves leveraging LLMs for time series analysis through RL-based fine-tuning, with the potential to serve as a foundation model for time series forecasting.

Essential References Not Discussed: The related work is complete.

Other Strengths And Weaknesses:

Strengths:
1. Integrating time series pre-training with PPO (reinforcement learning) is an innovative approach, offering a novel solution for aligning time series data with LLMs. In particular, TimePPO effectively mitigates accumulated errors in autoregressive forecasting.
2. The experiments are comprehensive and complete, showing the effectiveness of the proposed framework.
3. The paper presents visualizations of compressed tokens to validate the effectiveness of the approach.

Weaknesses:
1. The paper does not discuss training efficiency, especially given the autoregressive output approach.
2. On which datasets did you pre-train LangTime?
The pre-training process is unclear to me, making it difficult to assess and potentially unfair to compare with zero-shot performance.

Other Comments Or Suggestions: Figure 2 contains too much compressed information, making it difficult to follow. Consider splitting it into two figures: one illustrating the overall pipeline and another focusing specifically on the TimePPO algorithm.

Questions For Authors: 1. Is there a specific reason for selecting Qwen2-0.5B-Instruction as the backbone LLM? I noticed that this choice differs from previous works.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for the thorough review and insightful comments. In our response, our model was jointly pre-trained on the ETTh1 and Weather datasets, and experiments involving the TimePPO stage were fine-tuned on individual datasets. We present the average of the results; detailed experimental results are available at [link](https://anonymous.4open.science/r/full-E4EE/README.md).

>Suggestions: Figure 2 contains too much compressed information, making it difficult to follow.

Thank you for your suggestion. We have re-drawn Figure 2 and presented it as Figure 1 in the [link](https://anonymous.4open.science/r/full-E4EE/README.md). In the next version, we will update some images in the paper for clearer expression.

>W1: Do not mention the training efficiency, especially with the autoregressive output approach.

Our model achieved the results in the paper with just 1 epoch of pre-training, partially compensating for the slower training speed. Due to the complex procedure of the PPO algorithm, the TimePPO fine-tuning phase is slow. However, our experiments demonstrate that fine-tuning with limited data (Tab.6 in our response to reviewer **5obH**'s **Q4**) and mostly frozen parameters (Tab.2) still yields good performance, enhancing its scalability under resource constraints.

Table 1: Training Speeds of Different Models

| | Ours (pt) | UniTime | AutoTimes | TimeLLM |
|-|-|-|-|-|
| Speed (ms/iter) | 218 | 8 | 89 | 212 |
| Parameters | 0.5B | 0.13B | 7B | 7B |

Table 2: Impact of Fine-tuning Data Volume on ETTh1

| data rate | PT | 5% | 10% | 15% | 20% |
|-|-|-|-|-|-|
| MSE | 0.439 | 0.437 | 0.437 | 0.435 | **0.435** |
| MAE | 0.432 | 0.432 | 0.431 | 0.430 | **0.430** |

>W2: The pre-training process is unclear.

We pre-trained on 7 datasets (the four ETT subsets, Weather, Exchange, and Electricity) and conducted zero-shot experiments on two unseen datasets (Traffic, Illness). All models in Table 5 of the paper were trained under the same dataset settings, making the comparison fair.
>Q1: Is there a specific reason for selecting Qwen2-0.5B-Instruction as the backbone LLM?

Compared to GPT2, Qwen is pre-trained on larger datasets, offering better language understanding and stronger instruction-following abilities to better guide the LLM in interpreting time series. While the common Llama-7B also has powerful capabilities, its large number of parameters led us to choose the smaller Qwen2-0.5B-Instruction model as the backbone. In Table 3, we compared different backbones; GPT2's performance on Weather is close to Qwen's, demonstrating our framework's effectiveness, but its performance on ETTh1 reveals its limitations.

Table 3: Performance of Different Backbones

| backbones | GPT2 | | Qwen | | Linear | |
|-|-|-|-|-|-|-|
| | MSE | MAE | MSE | MAE | MSE | MAE |
| ETTh1 | 0.492 | 0.447 | **0.448** | **0.431** | 0.723 | 0.588 |
| Weather | 0.275 | 0.294 | **0.270** | **0.277** | 0.301 | 0.333 |

---

Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. I will maintain my score, as the paper indeed presents a novel and promising framework to tackle the challenges in this area. One minor suggestion: during pretraining, it may be beneficial to consider using larger-scale datasets, such as those provided in Chronos or Moirai, to further enhance the model's robustness and performance. This is an interesting and valuable contribution. Looking forward to the release of the pretrained model!

---

Reply to Comment 1.1.1: Comment: Dear Reviewer bFpj,

Thank you for your constructive feedback and suggestions. We sincerely appreciate your positive assessment of our work's contribution. We will actively explore the use of larger-scale datasets during pretraining in our future research to enhance the framework's robustness. We are grateful for your insightful suggestions to strengthen our methodology.

Best regards,
The Authors
Summary: This paper presents LangTime, a novel language-guided model for time series forecasting that addresses key challenges in leveraging large language models for this task. Specifically, the authors construct Temporal Comprehension Prompts to help LLMs understand domain-specific time series data, along with a new reinforcement learning-based fine-tuning algorithm, TimePPO, designed to mitigate error accumulation in autoregressive models. Extensive experiments on seven time series datasets demonstrate that LangTime achieves state-of-the-art forecasting performance, surpassing previous methods in both cross-domain generalization and autoregressive forecasting stability.

## Update after rebuttal

I support acceptance.

Claims And Evidence: The claims made in the paper regarding the novel application of LLMs to time series forecasting are well-supported by the presented evidence. In addition, the detailed experimental setup and the availability of the source code of the proposed model enhance the study's reproducibility.

Methods And Evaluation Criteria: The methods presented in the paper are robust and well-suited for time series forecasting tasks, effectively addressing key challenges such as cross-domain generalization, cross-modality alignment, and error accumulation. In addition, the benchmark datasets are carefully selected to cover a wide range of time series characteristics, and the evaluation criteria are well-defined, allowing for a thorough assessment of the model's performance.

Theoretical Claims: The theoretical contributions of the paper are sound, well-supported, and free from any issues.

Experimental Designs Or Analyses: The experimental design is comprehensive and well-executed. However, there are several issues that need to be addressed: 1.
The paper introduces TimePPO and provides valuable results; however, the authors need to clarify the choice and tuning of the hyperparameters, such as $\gamma$ and $\xi$ in Eq. (7) and $r(\theta)$ and $\eta$ in Eq. (8). More detailed explanations are needed regarding how these parameters are selected and how they impact model performance.

2. While the authors have demonstrated LangTime's performance on a number of datasets, why did they not attempt to train LangTime using other large language models to assess its generalizability and adaptability?

Supplementary Material: The supplementary material, including the appendix, code repository, and detailed experimental setup, is sufficient and enhances the transparency and reproducibility of the study.

Relation To Broader Scientific Literature: LangTime situates itself within the growing body of work that applies large language models to time series forecasting. The authors provide a thorough discussion of related work, highlighting existing methods and clearly explaining how their approach advances the field by addressing the unique challenges of multi-domain generalization, cross-modality alignment, and error accumulation in autoregressive predictions.

Essential References Not Discussed: The authors have adequately discussed relevant works on large language models for time series forecasting and reinforcement learning within the context of large language models.

Other Strengths And Weaknesses:

Strengths:
1. The introduction of Temporal Comprehension Prompts to guide large language models in understanding domain-specific time series data is a novel contribution. It effectively enhances the model's ability to interpret and process time series information.
2. The proposed TimePPO algorithm, designed to improve autoregressive forecasting by mitigating error accumulation, is an innovative approach. It addresses key challenges in long-term forecasting, ensuring more stable and accurate predictions over extended horizons.
Weaknesses:
1. While the model shows impressive forecasting accuracy, the complexity of its architecture and training process, particularly with the use of reinforcement learning, may raise concerns regarding scalability and efficiency. A discussion of its training time and computational requirements would be valuable.

Other Comments Or Suggestions: The authors provide informative figures, but additional explanatory captions or annotations are needed. For example, in Figure 6 (t-SNE visualization), a clearer distinction between the domains being compared and an explanation of why this visualization is important for understanding the model's capabilities would be helpful. It is recommended that the authors review the manuscript and make the necessary adjustments.

Questions For Authors:
1. In Section 1 and Figure 1, the paper mentions cross-modality alignment. Could the authors elaborate on the difference between using language as prefixes and the proposed language-guided strategy for aligning modalities?
2. In Section 3.3, the authors adopt the previous token forms <|EMB|> and <|OUT|> as the compressed token and prediction token. Could the authors explain why these specific token forms were chosen and what advantages they offer over alternative tokenization strategies?
3. The authors state that the mask rate is set to 0.4. Does this mask rate influence how the model learns and understands time series data? Specifically, how does the choice of mask rate impact the model's ability to extract temporal patterns and maintain prediction accuracy over longer horizons?
4. Does TimePPO fine-tune all parameters of the model, or are specific parts of the model frozen during fine-tuning?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for the thorough review and insightful comments. In our response, models were jointly pre-trained on the ETTh1 and Weather datasets, and experiments involving the TimePPO stage were fine-tuned on individual datasets. We present the average of the results; full results are available at [link](https://anonymous.4open.science/r/full-E4EE/README.md).

>E1: Hyperparameter Explanation

For Eq. (7), we introduce the parameter $\xi$ based on GAE [1] to address error-accumulation effects. When the number of prediction steps increases, model errors often exceed those of repeating previous steps, causing underestimated advantages. The coefficient $\xi$ (< 1) relaxes this constraint to better reflect relative prediction quality. For the hyperparameters in Eq. (6) and Eq. (8), $\beta$ controls the MSE penalty term, aiming to regulate the extent of policy updates from the reward-score perspective. For $\eta$, we referenced the alignment-tax design in InstructGPT, enhancing the model's prediction capability and training stability. Sensitivity analysis experiments showed no significant impact.

Tab.1: Sensitivity analysis for $\beta$

| | | PT | 0 | 0.01 | 0.03 | 0.05 | 0.1 | 0.3 | 0.5 |
|-|-|-|-|-|-|-|-|-|-|
| ETTh1 | MSE | 0.447 | 0.440 | 0.440 | **0.438** | 0.438 | 0.439 | 0.440 | 0.440 |
| | MAE | 0.435 | 0.432 | 0.432 | **0.431** | 0.432 | 0.433 | 0.433 | 0.433 |

Tab.2: Sensitivity analysis for $\eta$

| | | PT | 0 | 0.1 | 0.5 | 0.7 |
|-|-|-|-|-|-|-|
| ETTh1 | MSE | 0.447 | 0.439 | 0.440 | **0.438** | 0.440 |
| | MAE | 0.435 | 0.434 | 0.433 | **0.431** | 0.432 |

A detailed parameter discussion will be added in the paper's next version.

[1] Schulman, J. et al. High-Dimensional Continuous Control Using Generalized Advantage Estimation. ICLR 2016.

>E2: Performance Across Backbones

The rationale for selecting Qwen and experimental results with other LLMs are detailed in **Q1** of our response to Reviewer **bFpj**, demonstrating adaptability across LLMs and GPT2's limitations.
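The GAE modification described in E1 can be sketched as follows. This is a minimal illustration, assuming $\xi$ enters as an extra per-step decay on the accumulated advantage; the exact placement of $\xi$ in the paper's Eq. (7), the function name, and the default values are assumptions, not the authors' implementation.

```python
# Hedged sketch: GAE (Schulman et al., 2016) with an extra decay
# coefficient xi (< 1). Where exactly xi enters Eq. (7) is an
# assumption; here it damps long-horizon credit assignment.
def gae_advantages(rewards, values, gamma=0.99, lam=0.95, xi=0.9):
    """rewards: list of r_t; values: list of V(s_t) with one extra
    bootstrap value appended (len(values) == len(rewards) + 1)."""
    advantages = [0.0] * len(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        # one-step TD residual
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        # xi relaxes the accumulation of future residuals so that
        # late-horizon errors do not dominate the advantage estimate
        gae = delta + gamma * lam * xi * gae
        advantages[t] = gae
    return advantages
```

With $\xi = 1$ this reduces to standard GAE; shrinking $\xi$ shortens the effective credit-assignment horizon, which matches the stated motivation of counteracting underestimated advantages at long prediction horizons.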
>W1: Training Efficiency Discussion

For computational efficiency concerns, please refer to our response **W1** to Reviewer **bFpj**.

>Q1: Difference Between Language Prefixes and Language-Guided Strategy

Using language as prefixes inadequately describes the cross-modal relationship between language and time series, hindering LLMs' comprehension of unseen time series data during pre-training and causing modality misalignment. LangTime provides equivalent domain information while guiding LLMs through task-specific instructions (compression and prediction) and dual training objectives (reconstruction and prediction) to achieve alignment. For additional details, see our response to **Q2** from Reviewer **BdAz**. To validate LangTime's effectiveness in enhancing LLMs' time series understanding, we replaced the LLM with a linear layer. As shown in Tab.3, the significant performance decline confirms the necessity of our language-guided strategy.

Tab.3: Impact of Removing LLM

| backbones | Qwen | | Linear | |
|-|-|-|-|-|
| | MSE | MAE | MSE | MAE |
| ETTh1 | **0.448** | **0.431** | 0.723 | 0.588 |
| Weather | **0.270** | **0.277** | 0.301 | 0.333 |

>Q2: Could the authors explain why these specific token forms were chosen and what advantages they offer over alternative tokenization strategies?

The special tokens <|EMB|> and <|OUT|> are newly added learnable tokens without predefined meanings for LLMs. Their specific forms were chosen primarily for human readability rather than offering inherent advantages in modality alignment. To verify this, we conducted experiments with alternative tokens (<|ABC|> and <|DEF|>), which showed no significant performance differences (Tab.4).

Tab.4: Impact of Token Replacement

| ETTh1 | MSE | MAE |
|-|-|-|
| Original | 0.448 | 0.431 |
| Modified | 0.448 | 0.432 |

>Q3: How does the choice of mask rate impact the model's ability to extract temporal patterns and maintain prediction accuracy over longer horizons?
The mask rate balances two objectives: (1) preventing overfitting in datasets with varying convergence speeds, and (2) enhancing temporal pattern learning through reconstruction tasks. Experiments show that extreme mask rates (too low or too high) degrade performance: low rates fail to improve temporal pattern extraction, while high rates impair long-horizon predictions. Shorter prediction lengths suffer more from error accumulation (Tab.5).

Tab.5: Impact of Mask Rate and Prediction Length on ETTh1

| Length | 24 | | 96 | | 192 | |
|-|-|-|-|-|-|-|
| Mask Rate | MSE | MAE | MSE | MAE | MSE | MAE |
| 0 | 0.466 | 0.442 | 0.444 | 0.431 | 0.460 | 0.437 |
| 0.2 | 0.451 | 0.438 | 0.441 | 0.432 | 0.453 | 0.433 |
| 0.4 | 0.455 | 0.435 | 0.440 | 0.430 | 0.458 | 0.435 |
| 0.6 | 0.461 | 0.438 | 0.444 | 0.432 | 0.451 | 0.435 |

>Q4: Does TimePPO fine-tune all parameters of the model, or are specific parts of the model frozen during fine-tuning?

In our experiments, TimePPO fine-tunes the full parameters. However, freezing certain components still achieves similar performance, demonstrating the method's scalability.

Tab.6: Effect of Fine-tuning Parameters

| Fine-tuned Parameters | MSE | MAE |
|-|-|-|
| pre-trained | 0.439 | 0.432 |
| Full | **0.435** | **0.431** |
| LLM (>97%) | **0.435** | **0.431** |
| TE (<3%) | 0.437 | 0.432 |

---

Rebuttal Comment 1.1: Comment: The authors present a well-motivated and methodologically sound contribution to the emerging area of LLM-based time series forecasting. They have clearly addressed the concerns raised during the review process and provided strong justifications for their design choices. The proposed LangTime framework effectively tackles key challenges and demonstrates strong performance across diverse benchmarks. I support acceptance.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer 5obH,

Thank you for reviewing our manuscript and providing constructive feedback. We sincerely appreciate your recognition of our work's motivation and methodology, as well as your insightful comments that strengthened the research's rigor.
We will carefully incorporate your suggestions to further refine the manuscript's technical depth and clarity.

Best regards,
The Authors
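As a rough illustration of the masked-reconstruction setup discussed in the authors' Q3 response above, the following sketch randomly masks a fraction of input patches for a reconstruction objective. The patch granularity, mask value, and helper name are assumptions for illustration, not LangTime's actual code; only the 0.4 default mirrors the paper's stated mask rate.

```python
import random

# Hedged sketch: random patch masking for a reconstruction objective.
# The 0.4 default matches the paper's mask rate; everything else here
# (patch representation, mask value of 0.0) is an assumption.
def mask_patches(patches, mask_rate=0.4, mask_value=0.0, seed=None):
    """Replace ~mask_rate of the patches with mask_value; return the
    masked copy and a boolean mask (True = masked, i.e. to reconstruct)."""
    rng = random.Random(seed)
    n = len(patches)
    k = int(n * mask_rate)
    masked_idx = set(rng.sample(range(n), k))
    masked = [mask_value if i in masked_idx else p
              for i, p in enumerate(patches)]
    mask = [i in masked_idx for i in range(n)]
    return masked, mask
```

The reconstruction loss would then be computed only on positions where the mask is True, which is what makes the mask rate a lever between "too little signal to learn temporal patterns" (low rate) and "too little context to reconstruct" (high rate).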
Summary: This paper introduces LangTime, a unified framework that leverages large language models (LLMs) for time series forecasting across multiple domains and modalities. The authors identify key challenges in applying LLMs to temporal data: cross-domain generalization, cross-modality alignment, and error accumulation in autoregressive frameworks. To address these, LangTime integrates Temporal Comprehension Prompts (TCPs), which serve as structured inputs to guide LLMs in interpreting time series data by embedding dataset-specific and variable-level context into a compact representation. Furthermore, the paper presents TimePPO, a reinforcement learning-based fine-tuning strategy tailored for time series, which introduces a multi-dimensional reward mechanism and repeat-based value estimation to improve long-horizon prediction robustness. Through comprehensive empirical evaluation, LangTime demonstrates superior forecasting accuracy and strong generalization to previously unseen datasets. The study concludes by suggesting that future developments will explore extending LangTime's capabilities to broader time series analysis applications.

Claims And Evidence: The paper lacks a detailed explanation of how the dataset-wise and channel-wise information used in the Temporal Comprehension Prompts (TCPs) is generated. Additionally, it is unclear whether different formulations or descriptions of the data might affect the forecasting performance of LangTime. If the model is sensitive to these variations, how can its performance and generalizability be consistently guaranteed?

Methods And Evaluation Criteria: Yes. The methods and evaluation criteria are appropriate.

Theoretical Claims: I have checked the correctness of the theoretical claims.

Experimental Designs Or Analyses: I have checked the soundness of the experimental designs. The design of the reconstruction loss can well enhance the alignment between time series and context information.
Supplementary Material: I have reviewed the method design details in the supplementary material.

Relation To Broader Scientific Literature: In fact, a growing body of work has explored the integration of language models with time series forecasting, including approaches such as MetaTST, TimeLLM, UniTime, and TimeMMD. It would be beneficial for the paper to further clarify its distinctions and advancements over these existing methods.

Essential References Not Discussed: See above.

Other Strengths And Weaknesses:

Strengths: LangTime demonstrates state-of-the-art performance on established benchmark datasets and shows strong zero-shot generalization capabilities across previously unseen domains.

Weaknesses:
1) The novelty of the proposed method is limited, as similar concepts and methodologies have been presented in recent works such as TimeLLM, UniTime, and MetaTST.
2) The paper does not provide a clear explanation of the methodology used for constructing the prompts, and it also lacks an evaluation of the model's generalizability under slight variations in the language prompt, which is important for assessing its robustness in real-world applications.

Other Comments Or Suggestions: See above.

Questions For Authors:
1) Could you provide a detailed explanation of the prompt construction process in your approach, and further discuss the model's generalizability when faced with slight variations in the prompt formulations?
2) In terms of aligning language with time series data, what are the key innovations of your method compared to existing approaches such as TimeLLM, UniTime, and MetaTST?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank you for the thorough review and insightful comments. In our response, our model was jointly pre-trained on the ETTh1 and Weather datasets, and experiments involving the TimePPO stage were fine-tuned on individual datasets. We present the average of the results; detailed experimental results are available at [link](https://anonymous.4open.science/r/full-E4EE/README.md).

>Q1: Could you provide a detailed explanation of the prompt construction process in your approach, and further discuss the model's generalizability when faced with slight variations in the prompt formulations?

The prompts in our approach consist of two parts. The first part includes the common domain-descriptive information used in existing methods to provide richer linguistic information. The second part has two task instructions to guide the model in understanding the time series data and generating predictions. We compared the impact on performance when these two parts varied separately, as shown in Table 1. When the expression changes but the basic meaning remains consistent, modifying the domain description and instructions does not significantly affect the model's performance.

Table 1: Impact of language prompt modification on model capability

| ETTh1 | MSE | MAE |
|---|---|---|
| Original | 0.448 | 0.431 |
| Instruction Prompt | 0.449 | 0.433 |
| Data Description Prompt | 0.450 | 0.434 |

In Table 1, `Instruction Prompt` means changing the TCP to the following format:

```
The details of the provided time series: Period: <Timestamp>, Dataset: <Dataset Information>, Channel: <Channel Information>, Value: <Time Series Representation>, Please summarize this series in a single term: <|EMB|>.
Using the given details, forecast the upcoming <N> values: <|OUT|>
```

`Data Description Prompt` means modifying the descriptive information of the following two datasets:

```json
{
  "ETTh1": {
    "Original": "An hourly-sampled electricity transformer dataset intended for electrical asset monitoring, collected from one area in a province in China.",
    "Modified": "An hourly-sampled dataset of electricity transformers designed for monitoring electrical assets."
  },
  "Weather": {
    "Original": "Meteorological indicator data with ten minute sample rate.",
    "Modified": "Data on meteorological indicators sampled every ten minutes."
  }
}
```

>**Q2:** In terms of aligning language with time series data, what are the key innovations of your method compared to existing approaches such as TimeLLM, UniTime, and MetaTST?

1. **Characteristics of Existing Methods:** Existing approaches like TimeLLM and UniTime integrate linguistic information with time series data merely by concatenation, failing to emphasize **the relationship between the two parts**. This can make it challenging for LLMs, which have not encountered time series data during pre-training, to understand sequential correlations. MetaTST, on the other hand, does not employ an LLM as its backbone architecture; therefore it does not leverage LLMs' comprehension capabilities for time series data. While these methods leverage the comprehensive pre-trained knowledge of LLMs by providing linguistic information, they overlook the powerful instruction-following abilities of LLMs.

2. **Innovations of Our Method:** Our approach introduces diverse domain information via the TCP and uses two directives to present LLMs with dual tasks: first, compress the presented information to generate a model understanding; then, predict future outcomes based on this understanding (the compressed token) by following the instructions.
Benefiting from LLMs' robust instruction-following capacity, our language-guided approach aids LLMs in comprehending time series data by integrating reconstruction and prediction training objectives to fully exploit the LLM's time series understanding and prediction abilities. Additionally, LangTime employs an autoregressive structure, allowing flexible prediction lengths. Inspired by RLHF and considering the characteristics of time series data, we designed a reward function with multi-angle assessment and a repetition-strategy-based value function. Fine-tuned via the PPO algorithm, this enhances the model's ability to combat cumulative errors. To validate our method's effectiveness in enhancing LLMs' time series understanding, we replaced the LLM with a linear layer. As shown in Table 2, the significant performance decline confirms the necessity of our language-guided strategy.

Table 2: Impact of Removing LLM

| backbones | Qwen | | Linear | |
|-|-|-|-|-|
| | MSE | MAE | MSE | MAE |
| ETTh1 | **0.448** | **0.431** | 0.723 | 0.588 |
| Weather | **0.270** | **0.277** | 0.301 | 0.333 |
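The instruction-prompt template quoted in this rebuttal can be assembled with a small formatting helper. This is a hypothetical sketch: `build_tcp` and its argument names are illustrative, and it follows the variant template shown above rather than the paper's exact prompt.

```python
# Hedged sketch: assembling a Temporal Comprehension Prompt from the
# template variant shown in the rebuttal. The function name and
# argument names are illustrative, not the authors' API.
def build_tcp(timestamp, dataset_info, channel_info, series_repr, horizon):
    """Fill the TCP template with dataset-wise and channel-wise context,
    ending with the learnable <|EMB|> (compression) and <|OUT|>
    (prediction) special tokens described in the paper."""
    return (
        "The details of the provided time series: "
        f"Period: {timestamp}, Dataset: {dataset_info}, "
        f"Channel: {channel_info}, Value: {series_repr}, "
        "Please summarize this series in a single term: <|EMB|>. "
        f"Using the given details, forecast the upcoming {horizon} values: <|OUT|>"
    )
```

Example: `build_tcp("2016-07-01 00:00", "ETTh1", "oil temperature", "<Time Series Representation>", 96)` yields a single prompt string in which the two special tokens anchor the compression and prediction tasks.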
Summary: The paper introduces LangTime, an approach that builds on top of existing large language models (LLMs) to effectively perform time series forecasting. The paper identifies 3 crucial problems with adapting LLMs for forecasting tasks: cross-domain generalization, cross-modality alignment, and error accumulation in autoregressive frameworks. LangTime uses a temporal encoder to convert non-overlapping patches of the time series into tokens. These tokens are provided as inputs to the LLM, and the model is additionally trained to reconstruct the time series from the tokens apart from forecasting. For cross-modality alignment, the paper proposes using Temporal Comprehension Prompts (TCPs) to provide additional information about the time series domain and specific channel characteristics. To reduce error accumulation, the paper introduces a novel training procedure using Proximal Policy Optimization (TimePPO), similar to RLHF in Natural Language Processing.

## Update after rebuttal

The authors have addressed my concerns and I am satisfied with the response. Accordingly, I have increased the score from 2 to 3.

Claims And Evidence: Using TCPs (guidance through detailed textual descriptions) helps improve cross-modal alignment, as witnessed in Table 3. However, similar approaches are used in TimeLLM and UniTime with respect to using text prompts to describe the domain of the time series. Additionally, UniTime also employs similar reconstruction losses. In my opinion, a comparison of LangTime SFT (supervised fine-tuning) results with UniTime SFT and TimeLLM to show the quantitative improvements provided by TCPs would be useful. The comparison is available in Tables 1 and 2, but the results look mixed, with LangTime SFT underperforming in multiple cases. Similarly, the effect of TimePPO on minimizing error accumulation is not clearly explained. For example, if $\eta$ is a relatively large value, then isn't the loss essentially the same as in the SFT case?
Also, does the predicted output sequence length affect the error accumulation? This has been shown in recent time series foundation models like TimesFM.

Methods And Evaluation Criteria: Yes, the paper tests the proposed approach on standard time series forecasting datasets. Additionally, the chosen baselines are valid.

Theoretical Claims: There are no theoretical claims in the paper.

Experimental Designs Or Analyses: Yes, I have checked the soundness/validity of the experimental designs. Tables 2, 3, and 4 showcase the ablations with respect to loss functions (SFT vs. PPO), language guidance, and reward functions for PPO, respectively. However, a few more ablations are required to highlight the different contributions in the paper. For example, what are the effects of $\beta$ and $\eta$ in equations 5 and 8, respectively?

Supplementary Material: There is no additional supplementary material provided. The paper has a link to the implementation of the proposed approach. I have reviewed the entire Appendix, as most of the text in the method and experiments sections refers to various subsections in the Appendix.

Relation To Broader Scientific Literature: Leveraging LLMs trained on large-scale internet textual data effectively for forecasting is a key problem. The effective adaptation of LLMs, through the approaches shown in the paper, can unlock the ability to obtain accurate forecasts with limited fine-tuning. Additionally, this allows for future work in the direction of reasoning about the generated forecasts in natural language. Specifically, from my domain knowledge in forecasting, the application of PPO to improve forecasts is novel and interesting. And from the experimental results, such fine-tuning shows promising results.

Essential References Not Discussed: The paper covers multiple SOTA prior works related to adapting LLMs for forecasting.

Other Strengths And Weaknesses:

Strengths:
1.
The paper provides a set of extensive experimental results that showcase the effectiveness of the proposed approach.
2. The provided ablations are useful in understanding the overall approach.
3. The qualitative analyses through t-SNE and attention maps provide more interpretability to the proposed approach.

For weaknesses, please check the "Claims and Evidence" and "Experimental Designs" sections. One of the main weaknesses is the delineation of the 3 contributions. The paper highlights cross-domain generalization and cross-modal alignment as challenges, but it is unclear which components address the cross-domain generalization part. The temporal encoders and reconstruction loss help in cross-modal alignment, and the TimePPO objective helps with better forecasting. Also, since the overall setup is trained on a per-dataset basis, why is there a need for cross-domain generalization? Additionally, the experimental results do not showcase cross-domain generalization. I also think the paper might benefit from zero-shot comparisons against other time series foundation models like TimesFM, Chronos, etc.

Other Comments Or Suggestions: The TimePPO section (Section 3.4) requires more clarity. Some coefficients ($\beta$, $\gamma$, $\lambda$, $\tau$, $\eta$) are not clearly explained. In Algorithm 1, line 6, if the objective in Eq. 8 has to be maximized, doesn't that imply maximizing $\|y - \hat{y}\|$? Eq. 6 overflows the column width.

Questions For Authors:
1. How is cross-domain generalization addressed/achieved in this paper?
2. How does the TCP used in this paper differ from the language prompts used in UniTime and TimeLLM?
3. What are the effects of $\beta$ and $\eta$ in equations 5 and 8, respectively?
4. What is the performance comparison of the pre-trained model against zero-shot time series foundation models?
5. Why does TimePPO specifically contribute towards reducing error accumulation? Is there any experimental evidence to support this claim?
6.
Can the authors describe the output sequence length used in the paper, and how does that affect the error accumulation? If the authors can respond to 1,2,5, and 6, I am willing to raise the score. Code Of Conduct: Affirmed. Overall Recommendation: 3
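The sign question raised above about Eq. 8 hinges on the convention that a maximized PPO objective needs a reward equal to the *negative* error. A minimal sketch of that convention (hypothetical illustration, not the authors' `ppo_trainer.py` code):

```python
import numpy as np

def timeppo_reward(y, y_hat):
    # Hypothetical per-step reward: negative squared L2 error, so that
    # maximizing the objective minimizes the forecast error.
    return -float(np.sum((y - y_hat) ** 2))

y = np.array([1.0, 2.0, 3.0])
close = np.array([1.1, 2.0, 2.9])
far = np.array([2.0, 0.0, 5.0])

# A more accurate forecast receives a strictly higher reward.
assert timeppo_reward(y, close) > timeppo_reward(y, far)
```

Under this convention, "maximize the objective" and "minimize $\|y - \hat{y}\|$" are the same thing.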
Rebuttal 1:

Rebuttal: As for Eq. (8), it should actually be $-||y-\hat{y}||^2_2$ (refer to `actor_loss_fn` in `ppo_trainer.py`). We will revise the description of the hyperparameters and Eqs. (6) and (8) in the next version of the paper. Full results are available at https://anonymous.4open.science/r/full-E4EE/README.md.

> C1: Comparison of LangTime SFT with UniTime SFT

Unlike UniTime and TimeLLM, LangTime uses a more flexible autoregressive structure. SFT focuses on the process of generating the next prediction, which has limited effectiveness on the continuous generation process of autoregressive models. This explains why LangTime performs poorly during SFT.

> Q1: Cross-Domain Generalization

We need to clarify that LangTime undergoes joint **pre-training on 7 datasets**, which enables it to understand time series across different domains. When facing unseen domains, we provide description information for each channel in the new dataset via TCP. Leveraging the rich language knowledge of the LLM and the understanding of time series across different domains acquired during pre-training, LangTime addresses the cross-domain generalization issue. We demonstrate its generalization capability in Table 5 of the paper. We conducted pre-training on ETTh1 and ETTm1, and evaluated on 3 other datasets.

Tab. 1: Cross-domain generalization ability

| | LangTime | | UniTime | |
|-|-|-|-|-|
| | MSE | MAE | MSE | MAE |
| ETTm2 | **0.301** | **0.335** | 0.306 | 0.343 |
| Weather | **0.320** | 0.335 | 0.323 | **0.334** |
| ECL | **0.377** | **0.442** | 0.458 | 0.529 |

> Q2: How does the TCP used in this paper differ from the language prompts used in UniTime and TimeLLM?

The prompt-as-prefix approach (UniTime/TimeLLM) cannot fully describe **the connection between the language part and the time series part**, making it difficult for the LLM to understand time series data that it has never seen during pre-training, leading to challenges in modality alignment.
LangTime provides the same domain information as the prompt-as-prefix approach while guiding the LLM to **compress and predict time series through two task instructions**. Modality alignment is achieved through reconstruction and prediction tasks. You can also refer to our response to reviewer **BdAz**'s **Q2** for more details.

> Q3: Effects of $\beta$ and $\eta$

Due to length constraints, please refer to our response to reviewer **5obH**'s **E1**.

> Q4: Comparison of LangTime against foundation models

Because TimesFM is pre-trained on a large number of datasets, to maintain the invisibility of the test domain, we re-trained LangTime on the Weather, ECL, and Exchange datasets, and conducted zero-shot testing on the ETT datasets. As shown in Tab. 2, LangTime achieved better performance on most datasets, demonstrating its effectiveness in cross-domain generalization.

Tab. 2: Zero-shot performance comparison of LangTime and a foundation model

| | LangTime | | TimesFM | |
|-|-|-|-|-|
| | MSE | MAE | MSE | MAE |
| h1 | **0.537** | **0.481** | 0.671 | 0.502 |
| h2 | **0.416** | **0.429** | 0.471 | 0.436 |
| m1 | 0.907 | 0.615 | **0.789** | **0.561** |
| m2 | **0.316** | **0.358** | 0.422 | 0.386 |

> Q5: Why does TimePPO specifically contribute towards reducing error accumulation?

Autoregressive models have flexible prediction lengths but are significantly affected by accumulated errors because they lack the ability to discern whether the output of the previous step is reliable. TimePPO estimates the return for the entire sequence through the value function and evaluates the long-term value of the current step relative to the estimated level using advantages. These designs optimize the prediction of the entire sequence rather than just focusing on the model's ability to predict the next step. Thus, TimePPO helps alleviate the cumulative error issue in autoregressive models. To verify this, we compared the model's metrics at the last step of long-term predictions (336, 720).
As shown in Tab. 3, TimePPO performs better in the final-step predictions most affected by accumulated errors.

Tab. 3: TimePPO's role in error accumulation

| | ETTh1 | | Weather | |
|-|-|-|-|-|
| | MSE | MAE | MSE | MAE |
| PT | 0.534 | 0.491 | 0.415 | 0.385 |
| SFT | 0.533 | 0.490 | 0.419 | 0.386 |
| TimePPO | **0.528** | **0.489** | **0.410** | **0.381** |

> Q6: How does the output sequence length affect the error accumulation?

The output sequence length used in this paper is 96 (the prediction length for each step in autoregression). Since errors occur during each prediction, they accumulate continuously across iterations. If the single output sequence length is small, more steps are needed to predict the same length, making accumulated errors more pronounced and prediction performance relatively poorer. Conversely, although it reduces the number of steps, predicting a longer sequence in a single prediction may lead to worse results. To verify this, we compared the impact of different single prediction lengths on final performance.

Tab. 4: Impact of single prediction length

| ETTh1 | 24 | 48 | 96 | 144 | 192 |
|-|-|-|-|-|-|
| MSE | 0.461 | **0.448** | **0.448** | 0.451 | 0.458 |
| MAE | 0.438 | 0.432 | **0.431** | 0.434 | 0.435 |

---

Rebuttal Comment 1.1:

Comment: The authors have addressed my concerns and I am satisfied with the response. Accordingly, I will increase the score.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer 8SdX,

Thank you for taking the time to review our rebuttal and adjust the score. We sincerely appreciate your constructive feedback, which has greatly helped improve our work. Your expertise and thoughtful evaluation are invaluable to us. Thanks again for your time and consideration.

Best regards,
The Authors
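The compounding behavior discussed in Q5/Q6 can be illustrated with a toy model (an assumption-laden sketch, not LangTime's actual dynamics): each autoregressive call contributes an error that is fed back as input to the next call, so a shorter single-output length means more iterations and more compounded error over the same total horizon.

```python
def rollout_error(total_horizon, chunk_len, per_step_err=0.01):
    # Toy autoregressive rollout: one "call" per chunk, each adding a
    # fixed relative error on top of the already-drifted input.
    steps = total_horizon // chunk_len
    err = 0.0
    for _ in range(steps):
        err = err + per_step_err * (1.0 + err)
    return err

# Forecasting 960 points in chunks of 24 (40 calls) compounds more
# error than doing it in chunks of 96 (10 calls).
assert rollout_error(960, 24) > rollout_error(960, 48) > rollout_error(960, 96)
```

This captures only the iteration-count side of the trade-off; as Tab. 4 shows, very long single-shot predictions degrade for a different reason, so the optimum sits in between.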
A Closer Look at Multimodal Representation Collapse
Accept (spotlight poster)
Summary: In this paper, the authors contribute a theoretical understanding of the phenomenon of modality collapse in multimodal representation learning models. In particular, the authors show that modality collapse occurs when the predictive features of a given modality become entangled with noise features of another, effectively leading to the collapse of the former. Furthermore, the authors demonstrate that this cross-modal entanglement emerges from faulty neural capacity allocation and that knowledge distillation from the joint encoder into the modality encoder suffering collapse can avert this phenomenon. Based on these insights, the authors propose a novel method, Explicit Basis Reallocation (EBR), that promotes the disentanglement and denoising of the multimodal embeddings. The authors extensively evaluate their method across two datasets and highlight how EBR achieves SOTA results in scenarios with missing modalities at test time. Claims And Evidence: The major claim of the paper is that modality collapse emerges from the entanglement of predictive features from one modality with noisy features of another modality. This claim is significantly supported by both theory and experimental evidence: the authors start by demonstrating in Section 3.1 that the proportion of cross-modal polysemantic neurons increases as the number of modalities increases. This theoretical observation is demonstrated empirically in Section 4.1. Similarly, in Section 4.1 the authors demonstrate that cross-modal interference results from the low-rank simplicity bias. Once again, this claim is empirically evaluated in Section 4.2. Finally, the authors propose Explicit Basis Reallocation (EBR) to address modality collapse. The authors evaluate the effectiveness of EBR extensively in Section 4.3, including results on dealing with missing modalities at inference time.
Methods And Evaluation Criteria: The proposed method appears sound and is based on theoretical insights previously discussed in the paper. The evaluation datasets and baseline metrics are also in line with previous literature. Theoretical Claims: I reviewed the theoretical claims at a high level but did not verify the correctness of the proofs in detail. The arguments appear well-structured, though a deeper formal verification would be needed to confirm full correctness. Experimental Designs Or Analyses: The experimental setup presented in Section 4 is sound. Moreover, the authors do a great job of analyzing each result in depth and provide interesting insights. Supplementary Material: I reviewed the proofs at a high level in Appendix B, and they appear sound. Relation To Broader Scientific Literature: This paper contributes a theoretical framework for understanding modality collapse in multimodal representation learning models. As such, it builds on previous conjectures explored in Javaloy et al., 2022, and Ma et al., 2022. The insights from this work can also be applied to the development of novel multimodal representation learning methods, especially those that deal with a large number of modalities. Essential References Not Discussed: [1] also proposes a method to deal with missing modality information at test time, using a cross-modal contrastive loss. [1] Poklukar, Petra, et al. "Geometric multimodal contrastive representation learning." International Conference on Machine Learning. PMLR, 2022. Other Strengths And Weaknesses: One significant strength of the paper is the overall quality and completeness of the work: the authors present both interesting theoretical results and an extensive experimental setup.
Other Comments Or Suggestions: It would further strengthen the paper if the evaluation presented in Sections 4.1 and 4.2 also considered other multimodal representation learning models, such as multimodal variational autoencoders [1-3], where the phenomenon of modality collapse has been observed, and other contrastive learning models [4]. [1] - Shi, Yuge, Brooks Paige, and Philip Torr. "Variational mixture-of-experts autoencoders for multi-modal deep generative models." Advances in Neural Information Processing Systems 32 (2019). [2] - Wu, Mike, and Noah Goodman. "Multimodal generative models for scalable weakly-supervised learning." Advances in Neural Information Processing Systems 31 (2018). [3] - Javaloy, Adrián, Maryam Meghdadi, and Isabel Valera. "Mitigating modality collapse in multimodal VAEs via impartial optimization." International Conference on Machine Learning. PMLR, 2022. [4] - Poklukar, Petra, et al. "Geometric multimodal contrastive representation learning." International Conference on Machine Learning. PMLR, 2022. Questions For Authors: None. ## Post Rebuttal Comment I thank the authors for their hard work in the rebuttal and for addressing my comments! I maintain my score for now, great job on the paper! Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for recognizing the novelty and thoroughness of our work, as well as pointing us to important adjoining multimodal learning literature that observes modality collapse. Below, we aim to address their concerns, which we will also incorporate in the final version of the manuscript.

**Comparison with generative and contrastive models:** For the result in Section 4.1, we evaluate [4] by applying their proposed contrastive objective to our baseline representation learning setting on MIMIC-IV and report the results below in terms of the lowest achieved training semantic loss.

| | Number of Modalities | | | |
| :---- | :---: | :---: | :---: | :---: |
| | **2** | **3** | **4** | **5** |
| **Multimodal Prefix** | 27.68 | 52.90 | 91.20 | 167.30 |
| **Unimodal Baseline** | 7.97 | 6.55 | 5.33 | 9.55 |

As we can see, trends similar to those of our original setting reported in the main manuscript, in the semantic loss gap between the Multimodal Prefix and the Unimodal Baseline, play out when we perform a contrastive-objective-based fusion as reported in [4]. This further supports the claims in Lemma 1 and Theorem 1 that as the number of modalities increases, the modality undergoing collapse contributes less and less to the downstream representation used to encode the semantics, irrespective of the fusion strategy.

Unfortunately, we could not perform this experiment on the generative models [1-3], as it would require training each of the generative models from scratch for a number of multimodal combinations, which would take several weeks on our available compute resources, and hence, is infeasible during the rebuttal timeline. However, we were able to perform the rank evaluation in both the contrastive and generative settings, since pretrained models are available in some cases for the latter. Due to time constraints of the rebuttal period, we chose [1] as our representative generative model.
Since the objective of generative modelling is somewhat different from the downstream application that we experimented with, to analyze [1], we performed the experiment on their proposed MNIST-SVHN dataset, while for [4], since it is for general representation learning, we applied their proposed contrastive objective to our baseline setting on MIMIC-IV. Below, we report the results of our experiment, where the vanilla setting refers to the original model, without KD or EBR.

| | | Beta | | | | |
| :---- | :---- | :---: | :---: | :---: | :---: | :---: |
| | | **0** | **2** | **4** | **6** | **8** |
| | **[1] Unimodal Baseline** | | | 198 | | |
| | **[1] Vanilla** | 477 | 421 | 398 | 110 | 96 |
| | **[1] + KD** | 482 | 465 | 390 | 298 | 270 |
| **Rank** | **[1] + EBR** | 485 | 477 | 431 | 405 | 395 |
| | **[4] Unimodal Baseline** | | | 1255 | | |
| | **[4] Vanilla** | 1877 | 1330 | 1146 | 930 | 872 |
| | **[4] + KD** | 1905 | 1676 | 1533 | 1427 | 1390 |
| | **[4] + EBR** | 1912 | 1825 | 1709 | 1600 | 1588 |

We can see that in both the generative and contrastive settings, the ranks consistently drop as the strength $\beta$ (written as Beta in the table) of the modality undergoing collapse is increased. The drop is sharp around a critical point, where the rank goes below the unimodal baseline, depicting a form of phase transition, a phenomenon also observed in our original experiments (Sec. 4.2, Observations and Analyses). Finally, the dropping rank can be counteracted by implicit (KD), and even more effectively, by explicit basis reallocation (EBR), which results in a much more stable rank across the range of different values of $\beta$.
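The rank statistic reported in these tables can be made concrete with a small sketch: the numerical rank of a representation matrix, counted as the number of singular values above a tolerance relative to the largest one. The tolerance and matrix shapes below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def numerical_rank(Z, tol=1e-6):
    # Z: (num_samples, embedding_dim) representation matrix.
    # Count singular values above tol * (largest singular value).
    s = np.linalg.svd(Z, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

rng = np.random.default_rng(0)
full = rng.normal(size=(100, 32))                           # generic matrix: full rank
low = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 32))  # rank-4 by construction

assert numerical_rank(full) == 32
assert numerical_rank(low) == 4
```

A collapsed multimodal representation behaves like the second matrix: it lives in a low-dimensional subspace of the embedding space, which is exactly what the dropping "Rank" rows above quantify.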
Summary: The authors propose a new explanation for the difficult problem of modality collapse. Their argument is that the low-rank bias of neural networks leads them to learn low-rank polysemantic neurons rather than high-rank monosemantic neurons. This is a problem since, as the proportion of cross-modal polysemantic features increases, it prevents the learning of conjugate features that generalize. Claims And Evidence: * The results in general support the authors' claims regarding noisy features and low-rank bias leading to modality collapse. * The results also highlight the ability of KD/EBR to suppress this behavior and learn higher-rank features. * However, I think the most crucial result is not very convincing, which is that in Table 1 the downstream performance (AUROC/AUPRC) is not better than that of the strongest baseline (CM-AE). * I'm convinced by the evidence that low-rank bias leads to modality collapse. However, a large emphasis is placed on polysemantic features in the writing, but I don't see this addressed much in the experiments. Methods And Evaluation Criteria: * The authors used conventional datasets for evaluating multimodal learning. Theoretical Claims: N/A Experimental Designs Or Analyses: * Figure 4 shows that the ability of the multimodal prefix (the modality that is known to collapse) to predict the target diminishes as the number of modalities increases. It seems like a bit of a logical jump to say that this verifies the claims of Theorems 1 and 2. * $\beta$ is defined as the "amount of upweighting needed to force the multimodal model to incorporate the modality that it would otherwise eliminate under collapse." I don't see an explanation of what this is exactly. What is being upweighted? * In Table 1, MUSE is referred to as being the SOTA, but it appears CM-AE is the strongest baseline, and is arguably stronger than EBR as well.
Supplementary Material: N/A Relation To Broader Scientific Literature: * The problem of modality collapse is extensively studied, well-cited in this paper, and the ideas brought by this paper seem to be a novel direction. Essential References Not Discussed: * It would improve readability to define key terminology such as "polysemantic neurons" for those who are not familiar with the mechanistic interpretability literature. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: * Some essential details are missing, such as: in the datasets considered, what are the inputs, what are the targets, what is the size of the dataset, etc. It's simply stated as "For MIMIC-IV, we follow the same settings as (Wu et al., 2024)," which is fine, but you should at least include this information in the supplementary material. Questions For Authors: * In definition 1, are $\mathbf{z}$ and $\mathbf{z}^*$ both in $R^d$? If so, what is $\mathbf{z} \mathbf{z}^*$? Intuitively, I see what you're saying but the clarity could improve if you defined variables and functions more precisely. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for noting important gaps in our initial submission. Below, we aim to address them; the fixes will be included in the final version.

**CM-AE:** We apologize for the confusion caused here and we thank the reviewer for pointing out what was an error in our reporting. While we evaluated all the remaining baselines across the 5 missingness rates (Sec. 4.4), we accidentally added the numbers for CM-AE for different random seeds but no missing modality, which is the same as the first row in Tab. 1 of the MUSE paper. Below, we correct this and report the numbers on CM-AE evaluated in the same missing-modality setting as that of the others.

| | Mort. | | Rdmn. | |
| :---- | ----- | ----- | ----- | ----- |
| | **ROC** | **PRC** | **ROC** | **PRC** |
| **CM-AE** | 0.7873 ± 0.40 | 0.3620 ± 0.22 | 0.6007 ± 0.31 | 0.3355 ± 0.25 |

In line with the other baselines, CM-AE, too, performs much worse in the missing-modality setting and remains significantly below our proposed EBR.

**Polysemanticity:** Considering the results in Fig. 5 (a) and (c), Fig. 7, and Sec. 4.3 (Denoising Effect of Basis Reallocation), since there is no external source of noise in the fusion head, and encouraging monosemanticity through basis reallocation has a denoising effect, *the noise that leads to the observed collapse must come from some cross-modal polysemantic interference.* To provide further evidence, we adapt the definition of polysemanticity based on neural capacity allocation from Scherlis et al., 2022, to measure cross-modal polysemanticity as the amount of uncertainty in the assignment of a neuron to a particular modality. We train a two-layer ReLU network on weights from unimodal models to classify which modality the input models are optimized on. Next, we apply this modality classifier to the weights of our multimodal fusion head and record the average cross-entropy (CE) in its outputs.
Higher values of cross-entropy indicate higher levels of cross-modal polysemanticity, since the probability masses are spread out across multiple modalities. Below, we report the results on bi-modal training:

| | CE |
| ----- | :---: |
| Vanilla | 5.66 |
| KD | 2.09 |
| **EBR** | **0.59** |

The sharply lower relative CE for KD and EBR directly indicates the reduced cross-modal polysemantic interference under basis reallocation.

**Empirical validation of Thms 1 and 2:** We do not claim that Fig. 4, just by itself, verifies Thms 1 and 2. Below, we provide a more comprehensive explanation:

**Thm 1:** Thm 1 can be factorized as: (i) as the number of modalities increases, (ii) the predictive value of the weaker modality decreases. (ii) is validated by Fig. 4, where the gap in semantic loss between the multimodal prefix and the unimodal baseline increases as the number of modalities increases. (i) is validated by Section 4.3 (Denoising Effect of Basis Reallocation), which shows that this drop in predictive value is indeed caused by interference from noisy features. Combined, they imply that as the number of modalities increases, predictive features of some modalities increasingly get entangled with noisy features of another, leading to the collapse of the former.

**Thm 2:** Note that by definition, the lower-rank parameterization predicted by Thm 2 is likely to be polysemantic, since it has to fit more features than the number of available dimensions. Fig. 5 (a) and (c) establish the decreasing nature of multimodal rank, and Fig. 4 establishes the predictivity degradation implying increased noisy polysemanticity due to the former, i.e., decreased rank. This is further discussed in Sec. 4.2 and L351-355: “The rank of the default multimodal representation being bounded above by that of the unimodal baseline beyond the phase transition around the critical point, is a consequence of the upper-bound presented in Thm 2”.
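The intuition behind the modality-classifier probe described under **Polysemanticity** above can be sketched in miniature: when a neuron's modality assignment is ambiguous, the classifier's probability mass spreads across modalities, raising the average entropy of its outputs. The example distributions below are made up purely for illustration.

```python
import math

def mean_entropy(prob_rows):
    # Average Shannon entropy of the classifier's output distributions;
    # higher entropy = probability mass spread across modalities,
    # i.e. more cross-modal polysemanticity.
    def H(p):
        return -sum(pi * math.log(pi) for pi in p if pi > 0.0)
    return sum(H(p) for p in prob_rows) / len(prob_rows)

monosemantic = [[0.98, 0.02], [0.01, 0.99]]  # confident, modality-specific neurons
polysemantic = [[0.50, 0.50], [0.60, 0.40]]  # ambiguous, cross-modal neurons

assert mean_entropy(polysemantic) > mean_entropy(monosemantic)
```

The CE numbers in the table above follow the same logic: lower values under KD/EBR mean more confident, modality-specific assignments.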
**Beta:** We apologize for the ambiguity in the definition of $\beta$. A clearer definition is provided in Section 4.2, stating that $\beta$ “is the strength of the modality that gets eliminated by default”. Specifically, $\beta$ allows for a custom weighting of the modality that would get eliminated by default, and hence, increasing $\beta$ forces the model to incorporate it.

**Terminologies and dataset details:** We thank the reviewer for pointing out these missing details, which we will incorporate in the final version.

**$\mathbf{z}$, $\mathbf{z^*}$, and $\mathbf{zz^*}$:** $z$ and $z^*$ are latent factors of arbitrary dimensionality, and $zz^*$ refers to the inner product between $z$ and $z^*$ in the space of latent factors. Since we try to keep our results agnostic of the vector space from which the latent factors originate, we did not concretize $zz^*$ any further. The dimensionality of $zz^*$ would depend on the nature of the inner product of the task-specific latent space. However, we agree that it is worth clarifying this point, which we will do in the final version.
Summary: The manuscript introduces an Explicit Basis Reallocation (EBR) approach to mitigate multimodal collapse. It first explains that multimodal collapse is driven by polysemantic neurons—which increase with the number of modalities—leading these neurons to converge into a low-rank polysemantic subspace, ultimately causing collapse. To address this, the manuscript initially proposes using knowledge distillation (KD) from the "strongest" modality to those that are "weakest". It then presents EBR as a better alternative that accelerates convergence and eliminates the need for separate modality-specific knowledge distillation. The experimental results on avMNIST and MIMIC-IV indicate that EBR outperforms standard and KD-based training strategies. ## update after rebuttal The score was increased from 3: Weak Accept to 4: Accept. The authors addressed my concern about multicollinearity, added statistical comparisons, and improved the connection between theoretical and empirical results. Claims And Evidence: I think the paper would benefit from linking multimodal collapse to the multicollinearity problem seen in logistic regression, since the classification head that takes concatenated features from different modalities acts like a logistic regression. Explaining how multicollinearity might cause collapse would make the claims more convincing. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense for the problem. EBR was compared on two multimodal datasets (avMNIST and MIMIC-IV) and against multiple baselines. Since EBR is a model-agnostic training strategy that is applied on top of existing backbones like MUSE, the approach builds on previous work and the evaluation is well-suited. Theoretical Claims: - Definition 1 is vaguely defined; $I$ is used but not defined as the predictive value. Conjugate features are defined but then not used in the discussion of the experiments.
Experimental Designs Or Analyses: - Table 1 lacks a statistical comparison between the best and the other models. Usually, this requires running a Wilcoxon rank test with a correction for multiple comparisons (Holm). Supplementary Material: Yes, I reviewed the proofs (B) and additional experimental results (C). Relation To Broader Scientific Literature: The solution is similar to the ideas of DCCAE (Wang et al., 2015), where each unimodal encoder has its own additional unsupervised training objective and CCA is used for capturing joint information. This manuscript uses supervised objectives for each unimodal encoder and the fusion joint head. Additionally, it introduces modality-specific encoder-decoder heads, which mimic an autoencoder structure. Overall, both previous attempts and this strategy promote the regularization of individual unimodal encoders to ensure that we capture non-shared features from different modalities. Wang, Weiran, et al. "On deep multi-view representation learning." International conference on machine learning. PMLR, 2015. Similar is the idea of inter- and intra-modality dependencies (e.g., Madaan et al., 2024). Madaan, Divyam, et al. "Jointly Modeling Inter-& Intra-Modality Dependencies for Multi-modal Learning." Advances in Neural Information Processing Systems 37 (2024): 116084-116105. Essential References Not Discussed: It would be great to explore related work on multicollinearity and inter- and intra-modality dependencies. Other Strengths And Weaknesses: Strengths: - The idea is interesting from an empirical results perspective - Multiple baselines - Two multimodal datasets Weaknesses: - I did not feel that the theoretical component was well connected to the empirical results, other than the rank results. Other Comments Or Suggestions: - It will be hard to reproduce the experiments since there are no details on the "simple two-layer MLPs" and training schedules.
I assume most of it can be found in MUSE, or other backbones, but it is essential to include the details in the appendix. Questions For Authors: Overall, it is an interesting approach. I will increase the score if you can reduce confusion and clarify the details. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for taking the time to thoroughly understand our paper and providing important comments, which we believe have helped significantly solidify our findings. Below, we provide our response, which we will also incorporate in the final version of our manuscript.

**Multicollinearity:** We indeed expect to see increased levels of multicollinearity as the number of modalities increases, if the dimensionality of the representation space remains constant. As correctly conjectured by the reviewer, we would expect multicollinearity to be more pronounced in the deeper layers of the fusion head. The reason behind this is that although there may be dependencies among features across modalities, they may not be exactly linear. As they propagate deeper into the fusion head, it is more likely that those non-linear dependencies would be resolved and linearized in the final representation space prior to classification. Theoretically, the bound in Thm 2 is derived based on the AGOP, i.e., $\nabla\varphi_W({x}) \nabla\varphi_W({x})^T$, being a low-rank subspace in $W$ (corresponding to an independent set of features), as discussed in Radhakrishnan et al., 2024, which is also required since one-to-one dimension-to-feature mappings needed to detect the presence of multicollinearity may exist in neural networks [a]. This aligns with the condition for regression multicollinearity that $X^TX$ should not be a full-rank matrix. To empirically confirm this, we calculate the variance inflation factor (VIF) with an increasing number of modalities on our trained representation space. We report the average VIF across features below.

| | | # Modalities | | |
| :---- | ----- | ----- | ----- | ----- |
| | **2** | **3** | **4** | **5** |
| **Vanilla** | 1.15 | 2.68 | 3.51 | 4.70 |
| **w/ KD** | 1.09 | 1.90 | 2.30 | 2.68 |
| **w/ EBR** | 1.05 | 1.26 | 1.32 | 1.55 |

With the increasing number of modalities, multicollinearity (VIF) increases in all cases.
However, basis reallocation encourages cross-modal features to be encoded independently, with the explicit EBR being more efficient in controlling the level of multicollinearity relative to the implicit KD.

[a] Veaux and Ungar. “Multicollinearity: A tale of two nonparametric regressions.” Lecture Notes in Statistics, 1994.

**Definition 1:** We apologize for not clarifying the details. $I(z)$ refers to the mutual information between a feature $z$ and the target label $y$, an abbreviation of $I(z; y)$ for notational minimality. Thm 1 holds in the context of Definition 1. The latter is validated through our experiments in Fig. 4, Sec. 4.3. The observations therein necessitate the existence of conjugate pairs $zz^*$ across modalities, which have the capacity to cancel each other out.

**Statistical comparisons:** Below we report the resulting p-values of performing the Wilcoxon rank test with Holm–Bonferroni correction (significance level $\alpha$ = 0.05) on the Table 1 results, between our proposed EBR and the other baseline methods.

| Method | Mortality | | Readmission | |
| :---- | ----- | ----- | ----- | ----- |
| | **AUC-ROC** | **AUC-PRC** | **AUC-ROC** | **AUC-PRC** |
| **CM-AE** | 0.0090 | 0.0077 | 0.0065 | 0.0035 |
| **SMIL** | 0.0066 | 0.0053 | 0.0042 | 0.0066 |
| **MT** | 0.0083 | 0.0082 | 0.0077 | 0.0065 |
| **Grape** | 0.0027 | 0.0057 | 0.0058 | 0.0042 |
| **M3-Care** | 0.0079 | 0.0031 | 0.0069 | 0.0039 |
| **ShaSpec** | 0.0085 | 0.0062 | 0.0049 | 0.0075 |
| **MUSE** | 0.0088 | 0.0079 | 0.0086 | 0.0089 |

The null hypothesis that the proposed EBR and the other models follow the same distribution of AUC-ROC and AUC-PRC at the chosen missingness rates was rejected for both the Mortality and Readmission prediction tasks across all baselines, most often with significantly low p-values, which in all cases were lower than 0.01.
This further provides evidence in support of the uniqueness of EBR in leveraging basis reallocation to free up rank bottlenecks as a novel mechanism to tackle missing modalities.

**Connection between theoretical and empirical results:** We provide the connections for Thms 1 and 2 in our response to Reviewer xQ3V (Empirical validation of Thms 1 and 2). Here, we provide the same for Thm 3. In Sec. 4.3, in addition to the results on rank and representation similarity (Fig. 5), we show the difference in the loss landscape geometry between implicit and explicit basis reallocation. Further, we provide evidence that collapse happens specifically due to cross-modal noisy interference, and that basis reallocation, by freeing up rank bottlenecks, allows the new dimensions to be used for denoising, leading to its effectiveness.

**Experimental details:**
- $\psi$: 512 -> 256
- $h$: 1024 -> 512
- $h^{-1}$: 512 -> 1024
- \# Epochs: 1200
- LR: initially 0.01, decayed at a rate of 0.9 every 100 epochs
- We interleave the optimization of $L_{md}$ and $L_{sem}$ every 10 epochs.

---

Rebuttal Comment 1.1:

Comment: Thank you for addressing my concerns and questions. I have increased my score to "Accept" to reflect this.
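The Holm–Bonferroni step-down procedure used in the statistical comparisons above can be sketched as follows (a minimal illustration of the correction, not the authors' analysis script): sort the p-values ascending and compare the i-th smallest against $\alpha / (m - i)$, stopping at the first failure.

```python
def holm_reject(pvals, alpha=0.05):
    # Holm-Bonferroni step-down: test p-values in ascending order
    # against increasingly lenient thresholds alpha/m, alpha/(m-1), ...
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

# Mortality AUC-ROC p-values from the table above: every null survives
# the step-down thresholds, so all comparisons remain significant.
mortality_roc = [0.0090, 0.0066, 0.0083, 0.0027, 0.0079, 0.0085, 0.0088]
assert all(holm_reject(mortality_roc))
```

Because Holm is uniformly more powerful than plain Bonferroni while still controlling the family-wise error rate, it is a common choice when comparing one proposed method against several baselines.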
Summary: The paper investigates modality collapse in multimodal learning, where models rely only on a subset of modalities. It shows that this collapse occurs due to entanglement of noisy features from one modality with predictive features from another, leading to suboptimal solutions. The authors propose Explicit Basis Reallocation (EBR) to prevent collapse by reallocating basis vectors in the latent space. Extensive experiments validate the theoretical claims and demonstrate state-of-the-art performance in handling missing modalities. Claims And Evidence: The claims are well-supported by both theoretical analysis and empirical evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: I did not check the correctness of the proofs in detail, but the theoretical claims including Lemma 1, Theorem 1, and Theorem 2, appear to be sound based on the provided derivations and explanations. Experimental Designs Or Analyses: Yes. The authors systematically analyze the impact of increasing modalities, noise levels, and missing modalities on modality collapse. The evaluation of both implicit (knowledge distillation) and explicit (EBR) basis reallocation are presented clearly. Supplementary Material: I reviewed the additional experimental results of the supplementary material. Relation To Broader Scientific Literature: The paper builds on prior work in multimodal learning, specifically addressing the problem of modality collapse. The proposed EBR algorithm contributes to the broader literature by offering new insights into improving the robustness of multimodal models. Essential References Not Discussed: N/A Other Strengths And Weaknesses: - **Strengths**: The paper provides a comprehensive theoretical analysis and empirical validation of modality collapse, a significant issue in multimodal learning. The proposed EBR algorithm is effective and achieves state-of-the-art results in handling missing modalities. 
- **Weaknesses**: The paper assumes that the latent factors are identifiable up to certain symmetries, which might not always hold in practice and could limit the generalizability. Other Comments Or Suggestions: N/A Questions For Authors: - Q1: How do the theoretical results change when the reduction in conditional cross-entropy provided by each feature is not the same across features? Could you provide some insights or preliminary results in this direction? - Q2: In real-world applications, how would you address the potential issue of non-identifiable latent factors? Are there any practical methods to ensure identifiability up to the required symmetries? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the theoretical and empirical contributions of our work towards understanding modality collapse and providing valuable feedback. Below, we address their concerns, which we will incorporate in our final version. **Identifiability:** Indeed, there are practical ways of ensuring identifiability of latent factors by ruling out common symmetries. For instance, Gulrajani & Hashimoto, 2022 show that if the CP (Canonical Polyadic) decomposition of the third moment tensor of the distribution of the underlying latent factors is unique and its rank is equal to the dimensionality of the vector space in which the factors lie, then general linear symmetries can be ruled out, ensuring identifiability, for which they also provide an efficient algorithm. Ahuja et al., 2022 also show that looking at inductive biases of the causal mechanisms instead of the true underlying latent factors is sufficient for identifiable representation learning up to any equivariances shared by the mechanisms. So, our assumption on identifiability up to required symmetries is indeed a widespread one in the literature, and there are many practical methods to ensure this as well. **Unequal conditional cross-entropy across features:** According to the condition $I(x; y|z_1) = I(x; y|z_2) = ... = I(x; y|z_k)$, since basins corresponding to multimodal combinations all lie at the same depth, their empirical risks are essentially the same, and so are the gradients from the ERM term. Now, as a result of modality collapse, we know that one of the basins is steeper than the rest, meaning it has a higher local gradient. Since the empirical risk is constant across all the basins / multimodal combinations, the steepness must come from the rank minimization term in Theorem 4 (Depth-Rank Duality (Sreelatha et al., 2024)). Therefore, the combination with a steep entry must lead to a lower rank solution.
When the equality is not met across all features, the low-rank / steepness condition is trivially satisfied by the existence of a lower-dimensional subspace of $z_i$s that has a lower conditional mutual information $I(x; y|z_i)$, and deriving the upper bound on the rank in terms of the AGOP is no longer necessary. The rank of the subspace comprising features with lower relative mutual information could act as a reasonable estimate of the rank of the final weights that SGD would converge to. By considering the condition with the equality, we analyze the boundary case: even when such a subspace with low conditional mutual information cannot be identified, it is still possible to upper-bound the rank of the weight matrix. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response, which has resolved my concerns. As a result, I will be upgrading my score to Accept.
DiffusionVLA: Scaling Robot Foundation Models via Unified Diffusion and Autoregression
Accept (poster)
Summary: This paper proposes a novel VLA framework that integrates NTP with a diffusion process. The writing is logical, and the experimental evaluation is thorough. While the core idea is intriguing and somewhat similar to $\pi_0$ [1], the latter was released in October, close to the ICML submission deadline in January, which may explain the absence of a direct comparison. However, it would still strengthen the paper if the authors highlighted the specific differences between their approach and $\pi_0$ [1]. [1] Black, Kevin, et al. "$\pi_0$: A Vision-Language-Action Flow Model for General Robot Control." arXiv preprint arXiv:2410.24164 (2024). Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: Yes. The Efficient Inference part Relation To Broader Scientific Literature: Related Essential References Not Discussed: Black, Kevin, et al. "$\pi_0$: A Vision-Language-Action Flow Model for General Robot Control." arXiv preprint arXiv:2410.24164 (2024). Other Strengths And Weaknesses: Refer to Other Comments Or Suggestions Other Comments Or Suggestions: - Inference speed: After reviewing the supplementary materials, it appears that the high inference speed primarily arises from the use of the vLLM framework rather than the design of the method itself. To enhance clarity, it would be beneficial to explain in the main paper how vLLM is used to speed up inference and provide more details about its underlying implementation. - Method details: 1. The paper introduces a new VLA architecture but does not include a detailed architectural diagram. Specifically, the "Projection layer" and "action decoder" mentioned starting at Line 212 are absent from Figure 1, making it difficult to understand their functions and how they interact with other components.
Including a comprehensive architecture figure that clearly illustrates these modules and their connections would greatly improve clarity. 2. In Figure 1, it appears that the reasoning tokens are transformed into language output. However, the method section does not mention any language output. Additionally, it remains unclear whether the ground truth (GT) for $L_{NTP }$ is based on action supervision or language supervision. - Others: 1. The paper references Table 1 on page 6, but the table is actually located on page 4, which disrupts the flow of the experimental discussion. A similar issue occurs with Figure 3. Please ensure that the references match their corresponding figures/tables to improve readability. 2. Although the model can perform corrections, it remains unclear why certain failures still occur. The authors should include a failure case analysis section to clarify the limitations of the model and suggest potential improvements. Questions For Authors: Refer to Other Comments Or Suggestions Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable feedback. We have addressed each point below. Please see the following responses for details. ## 1. The inference speed of DiVLA (same as R3#1) This issue was partially addressed in our response to R13rJ#1. Below, we first clarify why DiVLA-7B has a similar number of parameters to OpenVLA-7B but is 8 times faster, followed by an explanation of how vLLM optimizes inference efficiency in VLMs. 1) **Inference Speed.** Diffusion-based VLA operates significantly faster than autoregressive VLA in robot action generation. In robotics, inference speed refers to the number of actions produced per second. For instance, using the Franka Emika with 7 degrees of freedom, OpenVLA must predict 7 tokens for each action. In a typical setup, the model needs to predict 30–60 actions for each incoming observational state, requiring 210–420 tokens for prediction. In contrast, diffusion-based VLA, such as DiVLA, only needs 10 denoising steps to predict 30–60 actions. This conclusion is supported by FAST [1], which finds that diffusion-based VLA is much faster than autoregressive VLA. In summary, DiVLA benefits from significantly faster inference times due to its use of a diffusion process, confirming the accuracy of our inference tests. 2) **DiVLA with vLLM.** With vLLM, the inference speed of the VLM achieves approximately a 2× speedup. vLLM enhances VLM inference efficiency through PagedAttention, which prevents memory fragmentation by storing the KV cache in fixed-size pages, reducing memory overhead and improving stability for long-sequence generation. Additionally, optimized CUDA kernels accelerate attention computation, resulting in significantly faster inference compared to traditional frameworks, often achieving 2–3× speedup.
| Model | T_ntp \ T_diff (s) | Chunk Size | Inference speed | | --- | --- | --- | --- | | OpenVLA | 0.27 \ - | 1 | 4 Hz | | DiVLA w/o vLLM | 0.5 \ 0.06 | 16 | 29 Hz | | DiVLA w/ vLLM | 0.3 \ 0.07 | 16 | 43 Hz | ## 2. Detailed architectural diagram for DiVLA We apologize for the absence of a detailed architectural diagram. As you correctly pointed out, the "projection layer" and "action decoder" are crucial components of our framework but are not represented in Figure 1. To address this, we will include a more comprehensive diagram that clearly illustrates the entire architecture, including these specific modules and their interactions. The projection layer consists of two MLP layers with LayerNorm. It bridges the VLM's embedding to the diffusion model, aligning their dimensions. Additionally, our action decoder follows the standard diffusion policy design [2]. It takes the embeddings from the VLM as conditions and predicts the noise based on the noisy actions, repeating this process *k* times to obtain the final action chunk. ## 3. Clarification on language generation and the L_NTP loss function In Section 3.2 Model Design Choices, we state that our training loss comprises two parts: the diffusion loss L_diff and the NTP loss L_ntp. The ground truth for L_ntp is based on language supervision. Consequently, we can generate reasoning tokens (i.e., language output) during inference. ## 4. Failure case analysis We have a failure case analysis (**page 7 lines 378-384**) which systematically examines how dynamic reasoning enables action self-correction. To extend this discussion, we propose a **dual-aspect failure taxonomy**: **erroneous reasoning chains** and **action execution errors**. As demonstrated in the table below through controlled experiments on the bin-picking task, we evaluate 4 trials per object across both seen and unseen categories. Seen objects appear in the robot data while unseen objects do not.
Our model successfully picks up seen objects, and we empirically observe that the action always fails when the model **cannot recognize the unseen object**. We attribute the recognition degradation to robot-data overfitting: exclusive training on robotic demonstrations diminishes the base VLM's open-vocabulary recognition capacity. Thus a potential solution is co-training the VLA with robot data and vision-text data, preserving both generalizable visual concepts and action-specific skills. | Success rate (%) / object | Watermelon (unseen) | Dragon (unseen) | Orange (unseen) | Toy car (seen) | Hex key (seen) | | --- | --- | --- | --- | --- | --- | | Reasoning | 50 | 0 | 75 | 100 | 100 | | Action | 50 | 0 | 50 | 100 | 100 | ## 5. Typo problems Thank you for your careful review. We appreciate your feedback regarding the misreferenced table and figure. These issues have been corrected in the next version of the manuscript, which will be updated accordingly. [1]. FAST: Efficient Action Tokenization for Vision-Language-Action Models [2]. Diffusion policy: Visuomotor policy learning via action diffusion.
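The action decoder described in point 2 of the rebuttal above (start from noise, apply *k* conditioned refinement steps, output an action chunk) can be illustrated with a toy pure-Python loop. Here `predict_noise` is a hypothetical stand-in for the learned network, so this is a sketch of the iterative scheme only, not the authors' implementation:

```python
import random

def predict_noise(noisy_action, condition):
    # Hypothetical noise estimator: the real decoder is a neural network
    # conditioned on VLM embeddings; here the condition is a single target value.
    return [a - condition for a in noisy_action]

def denoise(condition, dim=7, k=10, step_size=0.1):
    # Start from pure noise and apply k conditioned denoising updates,
    # mirroring the "repeat k times" description of the action decoder.
    action = [random.gauss(0.0, 1.0) for _ in range(dim)]
    for _ in range(k):
        eps = predict_noise(action, condition)
        action = [a - step_size * e for a, e in zip(action, eps)]
    return action
```

Each update contracts the sample toward the conditioning signal, so after *k* steps the chunk of `dim` action values reflects the condition rather than the initial noise.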
Summary: DiVLA is a VLA model that connects a VLM with a diffusion model to enable both reasoning and action generation in robotics. It builds upon a pre-trained Vision-Language Model (VLM) for text-based reasoning while incorporating a diffusion model to learn robotic actions through a noise-denoising process. DiVLA introduces a reasoning injection module, which embeds reasoning outputs directly into the policy head, enhancing decision-making and interpretability. This framework demonstrates strong generalization and robustness in basic tasks (e.g., pick and place) compared to previous methods. Claims And Evidence: yes Methods And Evaluation Criteria: Please refer to 'Other Strengths and Weaknesses.' Theoretical Claims: Application paper without making theoretical claims. Experimental Designs Or Analyses: Please refer to 'Other Strengths and Weaknesses.' Supplementary Material: The video in the Supplementary Material showcases a robust real-world demo. Relation To Broader Scientific Literature: Effectively constructing a VLA model for robotics. Essential References Not Discussed: This paper provides a thorough discussion of related works. Other Strengths And Weaknesses: Strengths: 1. Attaching the diffusion module after the VLM to construct a VLA model is the mainstream action prediction approach, similar to TinyVLA and Pi_0. 2. Unlike previous diffusion-based methods, DiVLA further incorporates Reasoning Tokens to enhance action generation, representing an innovative exploration. 3. DiVLA exhibits strong zero-shot bin-picking capabilities. Weaknesses: Q1 My primary concern is the inference speed. 1.1 The authors claim that DiVLA-7B's inference speed is 8 times faster than OpenVLA at the same model size, which appears unrealistic. OpenVLA performs next-token prediction for 7 action tokens (discrete end-effector poses), whereas DiVLA additionally requires language next-token prediction (e.g., "Pick up the Rubik’s cube"). 
Given this, the inference speeds of both models should be comparable. The authors should retest and verify DiVLA's speed. 1.2 If LLM acceleration tools such as vLLM were utilized, this should be explicitly stated in the main text. Q2 Are Reasoning Tokens and Action Tokens both used as conditions for the Diffusion Model? If so, while Reasoning Tokens are obtained through next-token prediction, how are Action Tokens generated? Are they directly derived from the question token output of the VLM? Q3 DiVLA-2B and DiVLA-7B were pretrained on the Droid dataset, which is relatively limited compared to OXE. Could the model’s zero-shot bin-picking ability be attributed to its memory of the gray experiment tray (e.g., picking all objects within the tray) rather than genuine generalization to object semantics? For example, how would the model perform if objects were placed at different positions on the table without the tray? Other Comments Or Suggestions: It is recommended that the authors properly retest the model's inference speed. If DiVLA generates language reasoning during inference using an autoregressive next-token prediction approach, then DiVLA-7B's inference speed should be comparable to OpenVLA. For instance, the process of DiVLA-7B generating 7 words in its LLM follows a similar inference logic to OpenVLA generating 7 action bins. However, if DiVLA does not generate language reasoning during inference, then how are Reasoning Tokens constructed? Questions For Authors: Please refer to 'Other Strengths and Weaknesses.' Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your careful review and valuable comments. We address each question below. ## 1. Why does DiVLA-7B have a similar number of parameters to OpenVLA-7B but run 8 times faster? Thank you for pointing out this question. 1) Diffusion-based VLA operates significantly faster than autoregressive VLA in robot action generation. In robotics, inference speed refers to the number of actions produced per second. For instance, using the Franka Emika with 7 degrees of freedom, OpenVLA must predict 7 tokens for each action. In a typical setup, the model needs to predict 30–60 actions for each incoming observational state, requiring 210–420 tokens for prediction. In contrast, diffusion-based VLA, such as DiVLA, only needs 10 denoising steps to predict 30–60 actions. This conclusion is supported by FAST [1], which finds that diffusion-based VLA is much faster than autoregressive VLA. In summary, DiVLA benefits from significantly faster inference times due to its use of a diffusion process, confirming the accuracy of our inference tests. 2) To further ablate the impact of vLLM, we report the following test on an A6000 GPU, where T_ntp and T_diff represent the time spent on the NTP and diffusion processes, respectively. | Model | T_ntp \ T_diff (s) | Chunk Size | Inference speed | | --- | --- | --- | --- | | OpenVLA | 0.27 \ - | 1 | 4 Hz | | DiVLA w/o vLLM | 0.5 \ 0.06 | 16 | 29 Hz | | DiVLA w/ vLLM | 0.3 \ 0.07 | 16 | 43 Hz | ## 2. Clarify the role of reasoning and action tokens in DiVLA We appreciate you highlighting this. 1). **Are both tokens used as conditions for the diffusion model?** Yes, both action and reasoning tokens serve as different conditions for action denoising. Specifically, action tokens mainly include visual embeddings providing global observations and raw instruction tokens. On the other hand, reasoning tokens encapsulate hierarchical substep-level features that decompose complex tasks into temporally executable steps. 2).
**How are action tokens generated?** Action tokens refer to the vanilla visual tokens and the tokens from the raw instruction processed by the VLM backbone. They are not generated autoregressively. ## 3. Is DiVLA's ability to perform zero-shot bin picking attributed to its generalization capability or to spatial memory? Thank you for your insightful question. DiVLA's ability to perform zero-shot bin picking definitely stems from its strong generalization capability. As outlined on page 8, lines 410-420, the bin-picking experiment used 102 unique objects that were entirely absent from the training set, ensuring that the model had to generalize to novel objects. In Figure 8, we show that the objects have varying sizes and heights. An object like the white tape is only 1 cm high, while the toy dragon is 10 cm. Also, in Figure 7, we present some unseen objects that we used for zero-shot bin-picking tasks. It can be observed that these objects exhibited substantial variations in shape, size, color, texture, and deformability, ensuring that the model could not rely on simple heuristics such as shape consistency or fixed spatial features. Our model, DiVLA, achieved a 63.7% success rate, substantially outperforming baselines (Diffusion Policy: 8.9%, Octo: 19.6%, TinyVLA: 23.5%, OpenVLA: 28.4%), underscoring its superior generalization to unseen objects. [1]. FAST: Efficient Action Tokenization for Vision-Language-Action Models --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I intend to keep my rating (weak reject) and have a few questions to discuss with the authors. **1. Reasoning Tokens and Inference Speed** First, I understand that DDIM-based action denoising for the diffusion head (e.g., Pi₀, CogAct, TinyVLA) is generally faster than autoregressive action generation through LLMs (e.g., RT-2, OpenVLA).
However, according to Figure 1 and the description in the Method section, if the DiVLA model performs language reasoning, it would still require autoregressive next-token prediction. In other words, since action tokens already include all feedforward visual and language tokens, how are the reasoning tokens generated? If the language reasoning outputs are not autoregressively generated and the corresponding reasoning tokens are not reused as conditions, then what fundamentally distinguishes DiVLA from Pi₀ or TinyVLA? Therefore, I would appreciate a direct and detailed clarification from the authors on how the reasoning tokens are obtained. If it is difficult to explain the generation process of reasoning tokens in words, could the authors kindly share the relevant code for clarification? Second, if DiVLA requires generating reasoning tokens through autoregressive next-token prediction, this would significantly slow down the model’s inference speed. If the fast inference speed is due to the use of action chunking (where the control frequency equals the model inference speed multiplied by the action chunk size), this should be clearly stated in the paper. Additionally, it would be helpful to clarify whether the method employs temporal ensembling as in ACT, or instead directly executes a sequence of 16 actions before receiving a new image observation and language instruction. If temporal ensembling is used, the robot's control frequency would be reduced. If not, how is temporal consistency in the action outputs ensured? **2. Generalization in Real-World Scenarios** Through both qualitative and quantitative evaluations, the paper demonstrates that DiVLA exhibits generalization capabilities across diverse object shapes, sizes, colors, textures, and deformability. 
However, could the authors further clarify whether in the unseen-object real-world task shown in Figure 7, the performance remains robust even when the object is placed directly on the table, rather than inside a tray? **3. Lack of detailed information** Finally, I believe that incorporating language reasoning for substep-level or atomic tasks is a valuable contribution that sets this work apart from other diffusion-based VLA methods. However, I hope the authors can further improve the paper to meet the standards of the ICML conference, as it currently lacks several important details. For instance, how is the substep-level language planning GT constructed in DiVLA? Is substep reasoning only performed at the beginning of each task, or does it accompany the entire task process? Are action chunking and temporal ensembling techniques used? Which subsets of the OXE dataset were used for pretraining, and how many trajectories were included? How many iterations and GPU hours were required for pretraining? Please feel free to correct me if I have misunderstood any part of the paper. --- Reply to Comment 1.1.1: Comment: Thanks for your careful review and valuable comments. We address each question below. **1.1 Direct and detailed clarification on how the reasoning tokens are obtained.** Thank you for raising this point. Reasoning tokens are generated autoregressively, same as standard LLMs/VLMs, and are then reused as conditioning inputs for the diffusion process via our novel reasoning injection module. For a more direct understanding, we have provided the source code: [https://anonymous.4open.science/r/divla_anonymous_icml-BFD4/README.md](https://anonymous.4open.science/r/divla_anonymous_icml-BFD4/README.md). 
**1.2 Why does DiVLA output reasoning (in language) but run much faster than OpenVLA?** Our initial rebuttal explained that diffusion-based methods generate more action steps per second and are therefore much faster than autoregressive VLA methods like OpenVLA. The same holds for reasoning generation. For instance, to predict 80 actions, OpenVLA generates 7 tokens per action, resulting in 560 tokens in total. In contrast, DiVLA generates only 100 reasoning tokens, making it much faster than OpenVLA. Specifically, DiVLA predicts 16 future actions across 10 denoising steps, but executes only the first 8. To predict 80 executable actions, this demands 10 (80/8) diffusion processes and reasoning generations. Since the average length of the reasoning tokens is 10, DiVLA's total token count is 100 (10×10), less than 1/5 of OpenVLA's count. Thus, DiVLA generates far fewer tokens than OpenVLA for the same sequence of actions and can process multiple actions concurrently, enabling it to achieve significantly faster inference speeds. **1.3 Does DiVLA use temporal ensembling?** We appreciate your point. Our method does not employ temporal ensembling and utilizes only action chunking, a common practice in prior works such as Pi0, TinyVLA, DP3 [1], and DP [2]. There might be a misunderstanding where responsiveness is being conflated with temporal consistency. As clearly stated in DP [2], action chunking involves predicting a receding future action sequence, which inherently promotes temporal action consistency. Responsiveness refers to the speed at which the model reacts to observation updates, which is weakened by a large action chunk size. DP has conducted experiments exploring the trade-off between action horizon and responsiveness. To further optimize this balance, our method predicts the next 16 actions but executes only the initial 8 at a 16 Hz control frequency. This results in observation updates every 0.5 seconds, aligning with Pi0.
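The token-count arithmetic in 1.2 above can be written out as a quick sanity check. This is a sketch using the numbers stated in the reply: 7 tokens per action for OpenVLA, and for DiVLA 8 executed actions plus roughly 10 reasoning tokens per chunk.

```python
def openvla_tokens(n_actions, tokens_per_action=7):
    # Autoregressive VLA: every action is decoded token by token.
    return n_actions * tokens_per_action

def divla_tokens(n_actions, executed_per_chunk=8, reasoning_per_chunk=10):
    # Diffusion VLA: tokens are spent only on reasoning, once per action chunk;
    # the actions themselves come from the (non-token) denoising process.
    chunks = -(-n_actions // executed_per_chunk)  # ceiling division
    return chunks * reasoning_per_chunk
```

For 80 executable actions this reproduces the 560-versus-100 token comparison quoted in the reply.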
**2 Does DiVLA remain robust when the object is placed on the table?** Thanks for pointing this out. Yes, DiVLA remains robust even when the object is placed directly on the table. As demonstrated below, DiVLA achieves a 47.1% success rate, tripling the performance of OpenVLA. Furthermore, DiVLA exhibits greater robustness, with a performance decline of only 26.1% compared to OpenVLA's 44.7%. These outcomes highlight DiVLA's capability to handle complex generalization settings. | Models | placed in tray | placed on table | Relative Decrease ↓ (%) | | --- | --- | --- | --- | | DiVLA | 63.7 | 47.1 | 26.1 | | OpenVLA | 28.4 | 15.7 | 44.7 | **3.1 How is the GT substep reasoning constructed?** Thanks for pointing this out. We converted all data into video format and employed Gemini to annotate the robot's actions in the videos. To ensure consistency, we predefined multiple sets of substep templates for each task, allowing Gemini to randomly select the template for annotation. **3.2 How is substep reasoning performed?** Thanks for the feedback. Our DiVLA performs substep reasoning throughout the entire task process. Notably, DiVLA generates one substep at a time based on the current observation rather than producing all substeps at once. This approach allows DiVLA to gain a clear understanding of the task's progress by evaluating its current state and determining the next appropriate substep. **3.3 Which subset of OXE is used and how many trajectories are included?** Thank you for pointing this out. Since OXE includes a wide variety of embodiments across different settings, and some of the data is of low quality (e.g., super low resolution, 80x80), we filtered the data based on sufficient resolution, language annotations, single-arm configuration, and an appropriate task duration. As a result, approximately 9K trajectories are included. **3.4 Pretraining time (GPU hours).** Thank you for your feedback.
We have had a comprehensive discussion on training efficiency in response to **Reviewer fKzV#1**, and you can refer to that section for details. For simplicity, DiVLA-2B was pretrained for only 2.5 epochs, equivalent to 155 H800 GPU hours. [1]. 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations RSS 2024 [2]. Diffusion Policy Visuomotor Policy Learning via Action Diffusion RSS 2023
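The control frequencies and the 0.5 s observation interval quoted in these replies follow from simple ratios. A minimal sketch, using the latency figures from the rebuttal table above as assumed per-chunk NTP and diffusion times:

```python
def control_hz(t_ntp, t_diff, chunk_size):
    # Actions delivered per second: chunk size over total per-chunk latency.
    return chunk_size / (t_ntp + t_diff)

def observation_interval(executed_actions=8, control_freq=16):
    # Seconds between observation updates when only the first
    # `executed_actions` of each predicted chunk run at `control_freq` Hz.
    return executed_actions / control_freq
```

With the reported numbers, `control_hz(0.3, 0.07, 16)` is about 43 Hz for DiVLA with vLLM, `control_hz(0.27, 0.0, 1)` about 4 Hz for OpenVLA, and `observation_interval()` gives the 0.5 s update cycle.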
Summary: The authors propose combining the reasoning capabilities of LLMs with the robot action generalization capabilities of diffusion models, creating DiVLA. DiVLA extracts and interleaves tokens from visual input and text using SigLIP, concatenates them, and processes them in a VLM. The VLM generates action tokens that are projected and processed by the diffusion model. FiLM layers are used to incorporate reasoning tokens output by the VLM into the diffusion model. DiVLA is compared to previous VLAs on two real-world robot settings for various pick-and-place tasks, including seen and unseen objects and including a bimanual robot. Claims And Evidence: Since the paper claims that reasoning contributes to the model's robustness (l. 365-368, col. 2), this should be validated. Methods And Evaluation Criteria: Yes, but there are no synthetic benchmarks, implying that the model may not perform well in an environment with a vastly different set-up than the real-world experiments currently used. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: DiVLA builds directly off of Diffusion Policy and previous work in VLMs. It tackles a worthwhile problem of combining the best of VLMs and diffusion models for VLAs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: S1: Strong performance on two real-world settings, especially on generalization to unseen objects. Weaknesses: W1: The components of DiVLA are not ablated. Since the paper claims that reasoning contributes to the model's robustness (l. 365-368, col. 2), this should be validated. Additionally, it is important to validate the choice of model architectures, since the main contribution of DiVLA is in the combination of existing components. W2: No synthetic benchmarks, such as Libero [1], that are standard for pick-and-place tasks in robotics.
This implies that the model may not perform well on an environment with a vastly different set-up than the real-world experiments currently used. [1] LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning. Liu et al. arxiv preprint arxiv:2306.03310, 2023. Other Comments Or Suggestions: - l. 306 col. 1 “pretraiend”→”pretrained” - l. 309 col. 1 “the setting as pi_0”→”pi_0’s setting” Questions For Authors: 1. What is the difference between points 1 and 6 in the introduction section? 2. l. 140-144 “Research has shown … various embodiments.” Can you give some citations with evidence for this? 3. What are problems with current diffusion-based VLAs? 4. l. 248-252 col. 2 “To … reasoning.” What prompts do you send to GPT-4o? 5. l. 364 - 376 Does this pose a problem when there are two or more similar objects in the scene, i.e. does the model confuse similar objects when both exist in the scene? 6. How does the DiVLA’s size compare to other VLAs such as Diffusion Policy, Octo, OpenVLA etc? How long does it take to infer a single image and to train compared to the other models? 7. What is the difference between reasoning and action tokens? How is the VLM trained to output them separately? Is there a different number of each type of token generated? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable feedback. ## 1. Simulated evaluation Real-world evaluation is more challenging than simulation. While our work emphasizes complex real-world tasks like long-horizon bin picking and bimanual table bussing, we also evaluate DiVLA on two standard simulation benchmarks, Calvin and Libero. Compared with baselines including Diffusion Policy, Octo, OpenVLA, RT-1, Robo-Flamingo, and GR-1, DiVLA achieves the best performance under a unified training setup (results: https://i.postimg.cc/xTybS5p3/simulation.png), demonstrating its robustness across both real and simulated environments. ## 2. Ablation on key modules We agree that an ablation study is necessary to better support our claim. We have addressed this problem in response to **Reviewer fKzV#3**. Please refer to that section for details. ## 3. Clarify the difference between points 1 and 6 in the introduction section We are sorry for the confusion between point 1 and point 6. Generalizing manipulation skills to novel objects and robustness to dynamic environments are the two main challenges in robot learning. Point 1 focuses on DiVLA's ability to recognize novel objects through self-generated reasoning, while Point 6 highlights its robustness to distractors and dynamic environments. ## 4. Citation for "Research has shown … various embodiments." Thanks for pointing this out. The next-token-prediction method used in OpenVLA struggles to complete tasks when adapting to new embodiments [1] and performs poorly when learning dexterous skills with high-frequency control [2]. ## 5. Limitations of current diffusion-based VLAs and superiority of DiVLA Thanks for your insightful question. We have addressed this problem in response to **Reviewer fKzV#5**. Please refer to that section for details. ## 6. Prompt for GPT-4o Thanks for pointing this out. The prompt is available here: https://i.postimg.cc/kXcDH4Dj/prompt.jpg. ## 7.
Clarify whether the model will confuse similar objects or not Thanks for your reminder. While DiVLA shows strong generalization, it occasionally misclassifies unseen objects, such as confusing small stuffed toys with toy cars. However, DiVLA handles shape-similar, color-distinct objects well, thanks to its robust color perception. Real-time reasoning traces help diagnose misclassifications and offer insights for further improvement. ## 8. Comparison of model size, inference speed, and training speed with baselines Thanks for pointing this out. We will discuss the model size and inference speed separately. 1). **Model size:** Our DiVLA, which uses only 2B parameters (1/3 of OpenVLA's), achieves the best performance across all tasks, demonstrating our method's model efficiency. | model | DP | Octo | OpenVLA | DiVLA-2B | | --- | --- | --- | --- | --- | | size | 153M | 93M | 7B | 2B | 2). **Inference speed:** As shown in the table, DiVLA-2B can achieve an 82 Hz control frequency at test time with vLLM and action chunking. This is 20× faster than OpenVLA. The results highlight the inference efficiency of diffusion-based VLAs over autoregressive VLAs. | model | DP | Octo | OpenVLA | DiVLA-2B | | --- | --- | --- | --- | --- | | inference speed (Hz) | 122 | 105 | 4 | 82 | 3). **Training speed:** We have addressed this problem in response to **Reviewer fKzV#1**. Please refer to that section for details. ## 9. Clarify details on reasoning and action tokens Thanks for your feedback. 1). **Difference between reasoning and action tokens:** Action and reasoning tokens serve as different conditions for action denoising. Specifically, action tokens mainly include visual embeddings providing global observations and raw instruction tokens. On the other hand, reasoning tokens encapsulate hierarchical substep-level features that decompose complex tasks into temporally executable steps. 2).
**How is VLM trained to output reasoning and action tokens:** Action tokens refer to vanilla vision tokens and tokens from the raw instruction processed by the VLM backbone, while reasoning tokens are task-specific language generated by the VLM. Thus, we only use the standard next-token prediction loss for supervising reasoning generation.

3). **Number of both tokens:** As previously noted, action tokens comprise tokens from the raw instruction and visual tokens, whereas reasoning tokens are autonomously generated by the VLM. Action tokens are much more numerous than reasoning tokens because they include a large number of vision tokens.

## 10. Typo problems

Thanks for your feedback. We will fix them in the updated version.

[1]. TinyVLA: Towards Fast, Data-Efficient Vision-Language-Action Models for Robotic Manipulation
[2]. FAST: Efficient Action Tokenization for Vision-Language-Action Models
[3]. CogACT: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation
[4]. RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation
[5]. π0: A Vision-Language-Action Flow Model for General Robot Control
Summary: DiffusionVLA unifies autoregressive reasoning with diffusion-based action policies to build robust vision–language–action models for robotic control. By injecting self-generated reasoning directly into the policy head, the framework improves interpretability and decision-making. Extensive experiments on tasks like factory sorting and zero-shot bin picking demonstrate strong generalization and fast inference, outperforming several state-of-the-art baselines. Claims And Evidence: - The paper claims that combining autoregressive reasoning with diffusion policies and a dedicated reasoning injection module yields superior and interpretable robotic control compared with prior VLA models. While the empirical results show improved task performance and interpretability, similar capabilities (e.g., natural language rationale generation) have been explored in comparable works. A more detailed ablation or comparison that isolates the effect of the reasoning module would strengthen the claim (e.g., the choice of projector, the effect of data source). Methods And Evaluation Criteria: - Training Efficiency and Model Size: How does DiffusionVLA’s training efficiency and model size compare against baseline methods? Quantitative comparisons (e.g., training time, parameter counts, memory usage) would provide valuable context. Theoretical Claims: There are no novel theoretical contributions. Experimental Designs Or Analyses: - Projection Layer for Action Tokens: Have the authors experimented with different projection layers, such as FiLM, instead of the two-MLP+LayerNorm design? A comparison would help justify the module selection. - Reasoning Injection Module: Similarly, for the reasoning injection module, did the authors try simpler alternatives (e.g., a plain MLP or an attention-based mechanism) and compare their effectiveness? How do the authors guarantee that the chosen design is optimal? 
- Learning Rate Details: Is the fixed learning rate (2e-5) uniformly applied to all layers, including the newly initialized projection modules? Supplementary Material: I reviewed all supplementary material. Relation To Broader Scientific Literature: A deeper discussion is needed on how this work situates itself within the broader literature on diffusion policies and autoregressive reasoning in robotics. How do its capabilities (especially regarding interpretability and efficiency) compare with models like OpenVLA, or π0? Essential References Not Discussed: The reference list is comprehensive and appropriately covers related work. Other Strengths And Weaknesses: ### Strengths: - The unified framework effectively combines reasoning and diffusion-based action generation. - The reasoning injection module adds interpretability by generating natural language rationales alongside actions. - Empirical results show robust performance across varied real-world tasks with fast inference speeds. ### Weaknesses: - Limited quantitative discussion on training efficiency and model size compared with baseline methods. - The paper does not provide comparative experiments on alternative projection or reasoning modules (e.g., using FiLM or plain MLPs). Other Comments Or Suggestions: - Add a period after “randomly initialized weights” for clarity. - In Figure 1, annotate the x-axis to indicate model size. - Consider renaming Section 3.2 from “Model Design Choices” to a title that more accurately reflects its focus on architecture and training objectives. - Clarify whether the fixed learning rate (2e-5) is applied uniformly across all layers, including newly initialized projection modules. - The claim that the model “generates natural language rationales alongside its output actions” is interesting; please discuss if any baseline models offer similar interpretability features and how DiffusionVLA’s approach compares. Questions For Authors: See the comments and questions above. 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your careful review and valuable comments. We address each question below.

## 1. Training efficiency for DiVLA compared to baselines

Thank you for your valuable feedback and insightful advice. Since DP and Octo perform significantly worse across most tasks, our comparison focuses on competitive VLAs.

1). **Pretraining Computations:** OpenVLA requires 27 epochs (21,500 A100 GPU hours) for action token prediction on the OXE dataset, while DiVLA achieves language rationale generation in just 2.5 epochs (155 H800 GPU hours) on part of the Droid dataset. DiVLA uses only 39K pretraining samples, 25 times fewer than OpenVLA's 970K, yet still delivers interpretable reasoning, demonstrating superior efficiency.

| model | Pre-trained Data | GPU hours |
| --- | --- | --- |
| TinyVLA | / | / |
| OpenVLA | 970K | 21500 |
| DiVLA-2B | 39K | 155 |

2). **Finetuning Computations:** All models are trained with the same batch size on 8×H800 GPUs. TinyVLA takes 12 hours to adapt, while DiVLA outperforms it by 29.8% in factory sorting and reduces finetuning time compared to OpenVLA, achieving a 20.9% improvement. DiVLA shows superior efficiency and adaptability in finetuning.

| model | OpenVLA | TinyVLA | DiVLA-2B |
| --- | --- | --- | --- |
| Finetune Time (hours) | 25 | 12 | 15 |
| size | 7B | 1.3B | 2B |
| GPU memory (GB) | 66 | 29 | 38 |

3). **Model Size and Memory Usage:** As shown in the table, DiVLA is smaller than OpenVLA (2B vs. 7B parameters) and requires less GPU memory, making it more reproducible and accessible. Overall, DiVLA demonstrates superior computational efficiency in both pretraining and finetuning, achieving significant performance gains and highlighting the advantage of our approach.

## 2. Ablation on projection layer of action tokens

Thanks for pointing it out. We ablate two projection layers for action tokens, as requested, on 3 tasks (described on page 4, lines 189-199). Each variant is evaluated for 11 trials on each task.
| task/module | FiLM | two-MLP+LayerNorm (ours) |
| --- | --- | --- |
| task 1 | 63.7 | 100 |
| task 2 | 45.4 | 100 |
| task 3 | 36.4 | 63.6 |
| Average | 48.5 | 87.9 |

As shown in the table, using two-MLP+LayerNorm as the projection layer performs considerably better than FiLM, which confirms that our choice is effective.

## 3. Ablation on three reasoning injection modules (MLP, Q-Former, and FiLM)

We thank the reviewer for pointing this out. We ablate three reasoning injection modules: 1) a plain MLP as in LLaVA [1], 2) a Q-Former as in BLIP [2], and 3) FiLM. Each variant is evaluated on three tasks, each with 11 trials.

| task/module | Plain MLP | Q-Former | FiLM (ours) |
| --- | --- | --- | --- |
| Task 1 | 36.4 | 18.2 | 100 |
| Task 2 | 45.4 | 45.4 | 100 |
| Task 3 | 18.2 | 27.3 | 63.6 |
| Average | 33.3 | 30.1 | 87.9 |

As demonstrated in the table, DiVLA with the FiLM module significantly outperforms the plain MLP and Q-Former, confirming the efficacy of our design. Intuitively, reasoning tokens act as conditioning factors, enhancing robot action generalization without dominating predictions. The FiLM architecture effectively fulfills this conditioning role.

## 4. Do we use 2e-5 for all modules?

Yes, we apply an initial learning rate of 2e-5 to all layers, including the newly initialized projection modules. We did not perform a hyper-parameter search in our experiments.

## 5. It is interesting that Diffusion-VLA can generate language rationales alongside its robot actions; how does it compare to OpenVLA and pi0 in terms of interpretability?

Thank you for raising this point. DiVLA is the only model that generates language rationales alongside robot actions, making it interpretable. In contrast, OpenVLA and Pi0 cannot generate language. DiVLA uses reasoning for dynamic scene understanding and task planning, revealing the model's decision-making process and improving long-horizon task completion by making failure reasons visible to users.
Our experiments show the importance of making VLAs interpretable. Specifically, as illustrated in Figure 3, on the sorting task involving 4 progressively challenging settings, DiVLA can generate real-time reasoning traces (e.g., identifying objects and categorizing them). It achieves a 20.9% improvement over OpenVLA across all settings. Furthermore, as shown in Figure 4, DiVLA exhibits remarkable zero-shot generalization, outperforming OpenVLA by 35.3% on 102 unseen objects. These experimental results show that introducing interpretability to VLAs significantly enhances their ability to execute long-horizon tasks and generalize to novel objects.

## 6. For formatting, section renaming and figure annotation

Thanks for pointing this out. We will address these formatting issues, the section renaming, and the figure annotation in the updated version.

[1]. Improved Baselines with Visual Instruction Tuning
[2]. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
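As an aside on the FiLM-based reasoning injection discussed in Q3 above, feature-wise linear modulation is straightforward to sketch. The NumPy snippet below is a generic illustration with made-up dimensions and weights, not the authors' implementation: a pooled reasoning embedding produces a per-channel scale (gamma) and shift (beta) that modulate the action-token features.

```python
import numpy as np

rng = np.random.default_rng(0)
d_reason, d_act, n_tokens = 16, 32, 8  # hypothetical sizes for illustration

# Hypothetical projection weights mapping a pooled reasoning embedding to
# per-channel scale and shift; small init keeps modulation near identity.
W_gamma = rng.normal(scale=0.02, size=(d_reason, d_act))
W_beta = rng.normal(scale=0.02, size=(d_reason, d_act))

def film(action_feats, reasoning_emb):
    """FiLM conditioning: out = gamma * h + beta, broadcast over tokens.

    gamma is centered at 1, so a zero reasoning embedding leaves the
    action features unchanged (identity conditioning).
    """
    gamma = 1.0 + reasoning_emb @ W_gamma   # (d_act,)
    beta = reasoning_emb @ W_beta           # (d_act,)
    return gamma * action_feats + beta      # broadcasts over (n_tokens, d_act)

h = rng.normal(size=(n_tokens, d_act))      # action-token features
c = rng.normal(size=(d_reason,))            # pooled reasoning embedding

out = film(h, c)
assert out.shape == h.shape
# Zero conditioning gives gamma=1, beta=0: the features pass through intact.
assert np.allclose(film(h, np.zeros(d_reason)), h)
```

The identity check at the end reflects the rebuttal's intuition: the reasoning signal modulates the action features rather than replacing them, so it conditions without dominating predictions.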
Towards Lifelong Model Editing via Simulating Ideal Editor
Accept (poster)
Summary: This paper introduces Simulating Ideal Editor (SimIE), a framework that extends standard parameter-modifying methods to lifelong scenarios. SimIE computes the ideal parameter shift as the minimum-norm solution of a linear system using the Moore-Penrose inverse, and allows recursive updates by truncating its limiting expression under mild assumptions. The theoretical analysis shows that even when assumptions fail, SimIE remains near-optimal or stable, balancing optimality and robustness. Extensive experiments confirm that SimIE achieves performance comparable to specialized lifelong editing methods. Claims And Evidence: The claims are supported by the experiments. Methods And Evaluation Criteria: The proposed method makes sense for the problem. Theoretical Claims: I only partially checked the formulas and principles, and some parts are not very easy to understand. Experimental Designs Or Analyses: The experiments are sound. I suggest that the authors include additional case studies and evaluate the method's performance by calculating metrics without relying on teacher forcing. Supplementary Material: There is no code. Relation To Broader Scientific Literature: SimIE formulates the ideal parameter shift as the minimum-norm solution to a linear system constructed using the Moore-Penrose inverse, and it subsequently enables recursive updates by truncating the inverse’s limiting expression under two mild assumptions. Essential References Not Discussed: There are many papers on lifelong model editing; it is recommended to carefully discuss the differences from previous work. Other Strengths And Weaknesses: Strengths: SimIE effectively connects standard model editing with lifelong model editing, leveraging advancements from both fields. The framework is underpinned by a clear theoretical formulation using the Moore-Penrose inverse, providing insights into the optimality and stability of the solution.
By reformulating the ideal parameter shift as a minimum-norm solution, SimIE generalizes parameter-modifying methods to lifelong scenarios, potentially broadening their applicability. Weaknesses: The recursive update mechanism depends on two mild assumptions. If these assumptions are not met, the method may face a trade-off between optimality and stability. The paper could benefit from a deeper comparative discussion with existing lifelong model editing approaches to highlight distinctive advantages and potential limitations. The theoretical guarantees are tied to the specific conditions assumed; deviations in real-world scenarios might challenge the robustness of the proposed method. Other Comments Or Suggestions: No. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are deeply grateful to Reviewer 1PGC for the careful and insightful comments on our manuscript. Our detailed responses to your questions are outlined below.

## Q1: evaluating the method's performance without relying on teacher forcing

Following your suggestion, we conduct additional experiments on the recently proposed QAEdit benchmark [1] (released on February 16, 2025), which adopts real-world evaluation metrics that do not rely on teacher forcing. Specifically, we take FT, ROME, AlphaEdit$^-$, WISE, and AlphaEdit as baselines and perform $T=1000$ sequential edits on Llama-2. The results are summarized in the table:

| Method | FT | ROME | ROME+SimIE | AlphaEdit$^-$ | AlphaEdit$^-$+SimIE | WISE | AlphaEdit |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Real-world Rel | 0.00 | 0.00 | 0.20 | 0.00 | 0.32 | 0.14 | 0.27 |
| Real-world Gen | 0.00 | 0.00 | 0.13 | 0.00 | 0.17 | 0.08 | 0.18 |
| Real-world Loc | 0.00 | 0.00 | 0.34 | 0.00 | 0.35 | 0.22 | 0.03 |
| Real-world Avg | 0.00 | 0.00 | 0.22 | 0.00 | **0.28** | 0.15 | 0.16 |

We observe that all methods, particularly lifelong editing approaches, suffered significant performance degradation in the more realistic evaluation setting. In contrast, standard algorithms enhanced with SimIE consistently outperformed these specialized lifelong editing methods, demonstrating greater robustness. Although SimIE's performance is still limited by the effectiveness of the fundamental editor, its core strength lies in its generality: SimIE enables lifelong editing to benefit directly from any future improvements in standard editors without the need for specialized redesign.

## Q2: if these assumptions are not met

We provide a more in-depth empirical analysis of the two assumptions, measuring their violation in practice. Please refer to Reviewer hGQy's Q2 for the details.
## Q3: a deeper discussion and code

We will expand our Related Work and Limitations sections to offer a deeper discussion, highlighting both the advantages of SimIE (e.g., leveraging advances in standard editors without specialized redesign) and its limitations (e.g., performance may be limited by the fundamental editor). As for the code, we have provided an anonymous GitHub repository in Section 4, which will be moved to the end of the abstract to ensure better visibility. Thank you once again for your insightful feedback, which has significantly enhanced and refined our work.

[1] Yang, Wanli, et al. The mirage of model editing: Revisiting evaluation in the wild. arXiv preprint arXiv:2502.11177 (2025).
Summary: This paper introduces "Simulating Ideal Editor" (SimIE), a general framework that enables standard model editing methods to perform effectively in lifelong editing scenarios. The authors formulate the ideal parameter shift as the minimum-norm solution to a linear system constructed using the Moore-Penrose inverse, and develop a recursive update mechanism that approximates this solution through sequential edits. Their theoretical analysis demonstrates that even when key assumptions (over-parameterization and key-value invariance) are violated, SimIE remains either near-optimal or stable against perturbations. Extensive experiments on GPT2-XL, LLaMA-2, and Mistral models show that standard algorithms enhanced with SimIE achieve comparable performance to specialized lifelong editing methods, with minimal implementation. The framework effectively bridges the gap between standard and lifelong model editing paradigms, allowing lifelong editing to benefit from ongoing advancements in standard editing techniques. Claims And Evidence: Yes, I did not find problems in this part. Methods And Evaluation Criteria: This part is ok. The base models, baseline methods, benchmarks, and metrics in general look good to me in Sec 5. There are additional benchmark datasets that are not included in this paper. Theoretical Claims: There are many theorems in this paper, and most of the proofs are in the Appendix. The natural-language introduction of the ideas behind the theorems looks reasonable to me, but I did not check the exact correctness of the theorems in Sec 4. Experimental Designs Or Analyses: According to the results reported by the authors, I feel that there is a certain correlation between model size and performance. Therefore, seeing how larger models perform with the proposed SimIE method would provide a deeper understanding of its generalizability. Supplementary Material: Yes. "D. More experimental details and results."
Relation To Broader Scientific Literature: This paper proposes SimIE, a general framework that bridges the gap between standard model editing and lifelong model editing, enabling standard methods to retain strong performance in lifelong scenarios. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: A key strength of the SimIE framework lies in its strong theoretical foundation that supports each part of the method. By offering clear mathematical proofs for each step of the algorithm, the authors build a solid basis for their approach and achieve good performance on all three backbone models. Weakness: 1. While SimIE is applied across multiple layers independently, the analysis doesn't address how assumption violations cascade through the network, e.g., edits to layer 5 alter the hidden states flowing to layer 6, thereby affecting key representations in layer 6. This oversight limits our understanding of why the approach succeeds in practice despite lacking theoretical guarantees for such complex inter-layer dependencies. 2. Despite the theoretical contributions, Table 1 shows that SimIE does not significantly outperform specialized lifelong editing methods like WISE and AlphaEdit. This raises questions about the practical necessity of the proposed approach. Without demonstrating clear advantages over existing methods, the authors' justification for "bridging the gap" between standard and lifelong editing remains primarily theoretical rather than performance-driven. Other Comments Or Suggestions: N/A Questions For Authors: 1. How extensively are the over-parameterization and key-value invariance assumptions violated in your experiments? 2. After comparing Table 3 with Table 1, I notice an interesting pattern: SimIE demonstrates superior performance over lifelong editing baselines on the smaller GPT2-XL (1.5B) model, but this advantage becomes less significant with the larger 7B models like LLaMA-2 and Mistral.
This raises an important question: Is SimIE's effectiveness inversely related to model size? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer hGQy for your insightful and constructive comments. We have carefully considered your feedback and responded to each of your points below.

## Q1: larger models perform with the proposed SimIE

We conduct additional evaluations on larger LLMs, specifically Llama-3 (8B) and Qwen2.5 (7B). For detailed experimental results, please refer to Reviewer nKJ9's Q1.

## Q2: How extensively are the over-parameterization and key-value invariance assumptions violated in your experiments?

We conduct two analyses to assess the extent to which the assumptions are violated in practice. Specifically, we utilize AlphaEdit+SimIE to edit Llama-2 using the ZsRE dataset. At each time step $t$, we track the updated key-value pairs $(k_t, v_t)$, alongside their original counterparts $(k^{\prime}_t, v^{\prime}_t)$ from the initial unedited model. For the over-parameterization assumption, we quantify how effectively each layer can fit the desired update by computing the optimal residual $R_{\mathrm{min}}$ of the least-squares problem: $\min_{W} \frac{1}{T}\sum_{t=1}^{T}\\|Wk^{\prime}_t-v^{\prime}_t\\|^2$. The results are summarized as follows:

| Layer | 4 | 5 | 6 | 7 | 8 |
|-----|-----|-----|-----|-----|-----|
| $R_{\mathrm{min}}$ | 2.46e-02 | 3.36e-02 | 4.43e-02 | 5.19e-02 | 8.41e-02 |

Empirically, we observe that $R_{\mathrm{min}}$ tends to increase in deeper editing layers, indicating a greater deviation from the over-parameterization assumption. To assess the key-value invariance assumption, we measure the deviations $\\|k_t-k^{\prime}_t\\|$ and $\\|v_t-v^{\prime}_t\\|$, which capture the extent to which edits (both from previous time steps and preceding layers) perturb the original key-value representations. For visualization, we divide the $T=1000$ edits uniformly into $100$ intervals and plot their average values within each.
The detailed visualizations are available in the Assumption_analysis.PDF at [SimIE](https://anonymous.4open.science/r/SimIE) (our code link), and a partial summary of key deviations for Llama-2 is provided below:

| Layer | 1~10 | 191~200 | 391~400 | 591~600 | 791~800 | 991~1000 |
|-----|-----|-----|-----|-----|-----|-----|
| 4 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| 5 | 0.088 | 0.306 | 0.335 | 0.364 | 0.376 | 0.373 |
| 6 | 0.165 | 0.612 | 0.687 | 0.760 | 0.730 | 0.730 |
| 7 | 0.248 | 0.885 | 1.025 | 1.136 | 1.052 | 1.137 |
| 8 | 0.323 | 1.227 | 1.476 | 1.608 | 1.472 | 1.626 |

The findings demonstrate that deviations gradually increase with both editing layer depth and the time step, aligning with your speculation about the perturbation cascade phenomenon. Overall, both assumptions are indeed violated to some degree in practical scenarios, which emphasizes the trade-off between optimality and sensitivity established by Theorem 4.3. However, the observed violations appear manageable under the current experimental setup, as they do not lead to the empirical failure of SimIE. In scenarios involving longer edit sequences or more editing layers, the violations of the assumptions may become severe. These new analyses provide insight into both why SimIE currently succeeds and where it might face challenges going forward, which will be integrated into the manuscript.

## Q3: practical necessity of the proposed approach

We would like to clarify that SimIE's primary practical value lies in enabling lifelong editing to benefit from ongoing advances in standard editing research. Indeed, on more challenging datasets and more realistic evaluations recently proposed, specialized lifelong algorithms can exhibit significant performance deterioration, whereas SimIE-enhanced standard methods provide more robust performance (please refer to Reviewer 1PGC's Q1).

## Q4: is SimIE's effectiveness inversely related to model size?
Our results generally indicate that the performance of SimIE depends more on the fundamental editor, which may be affected by various aspects like model architecture, dataset, hyperparameters, etc. Although we do observe some performance variations at different scales, there is no definitive evidence of an inverse correlation between model size and SimIE's effectiveness. For instance, on Llama-3 (8B), ROME+SimIE achieves an average metric of $0.75$, outperforming the $0.63$ average observed on the smaller Llama-2 (7B). Additionally, in our new experiments on the other 7B model, Qwen2.5, SimIE improves performance by an average of $4.8\\%$ over existing SOTA lifelong methods. These results suggest that SimIE can remain effective even for larger LLMs, provided the underlying editor performs robustly. Thank you once again for your thoughtful and detailed feedback, which offers valuable guidance for the improvement of our paper.
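The per-layer residual probe from Q2 above can be reproduced in a few lines. The following is a generic NumPy sketch with synthetic data; in the actual analysis the key-value pairs $(k^{\prime}_t, v^{\prime}_t)$ would be extracted from the editing layers of the unedited model, and all sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, T = 64, 32, 1000   # hypothetical layer width and edit count

K = rng.normal(size=(d_in, T))    # columns play the role of keys k'_t
V = rng.normal(size=(d_out, T))   # columns play the role of values v'_t

# Least squares over the layer matrix W: min_W (1/T) * sum_t ||W k_t - v_t||^2.
# Solving K^T W^T = V^T with lstsq yields W^T column-block-wise.
Wt, *_ = np.linalg.lstsq(K.T, V.T, rcond=None)
W = Wt.T

R_min = np.mean(np.sum((W @ K - V) ** 2, axis=0))
print(f"R_min = {R_min:.4e}")   # positive: 1000 pairs exceed 64 key dims

# Sanity check: with T <= d_in and generic keys, the layer is expressive
# enough to fit every pair exactly, so the residual vanishes.
K_small, V_small = K[:, :d_in], V[:, :d_in]
W_small = np.linalg.lstsq(K_small.T, V_small.T, rcond=None)[0].T
assert np.allclose(W_small @ K_small, V_small)
```

The sanity check mirrors the over-parameterization assumption: when the number of edits is small relative to the key dimension, $R_{\mathrm{min}} \approx 0$, and the growth of $R_{\mathrm{min}}$ is exactly what the rebuttal's table tracks per layer.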
Summary: Standard model editing techniques suffer significant performance degradation in the sequential editing setting due to model drift and catastrophic forgetting. To tackle the issue, this paper proposes a general framework, i.e., SimIE, to restore the performance of **any** standard model editing technique in lifelong editing. The key insight is that the desired parameter shift $S_t=B_t K_t^T(K_tK_t^T+\lambda I)^{-1}$ can be written in the recurrence form $S_t=S_{t-1}+\Delta W_t k_tk_t^TP_t^{-1}$, where $\Delta W_t$ is the parameter shift produced by the standard model editing method. SimIE is empirically evaluated by being applied to four standard model editing methods, i.e., MEND, ROME, MEMIT, and AlphaEdit, on zsRE and CounterFact datasets, where it significantly improves the performance of the standard model editing methods from nearly complete failure to a level comparable to the state-of-the-art lifelong editing methods. Claims And Evidence: Following Figure 2, the performance of four standard model editing methods is significantly improved in the sequential editing setting, which strongly supports the claim of the paper. Methods And Evaluation Criteria: The proposed method is reasonable and the evaluation setting is standard, though I would expect to see more (recent) datasets or metrics. Theoretical Claims: The proposed method is built upon solid theoretical analyses. I have carefully checked the core part and found no problem. Experimental Designs Or Analyses: See Methods and Evaluation Criteria Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths * This paper is very well written and organized. The notations are clearly defined. The theoretical analyses are insightful and reasonable. I enjoyed reading the paper. * The idea is quite novel and interesting.
Instead of proposing a sequential editing method, this paper introduces a framework to convert any standard model editing method into a lifelong editing method, making it general and potentially impactful. * The method is simple, requiring only a single line of code to implement, yet effective. Weakness The main text of this paper leans more toward theoretical analysis than empirical evaluation. I noticed that some experiments are placed in the Appendix. I recommend incorporating less theoretical analysis and more empirical evaluation in the main text to increase visibility (as not all readers will go through the Appendix) and to favor a broader audience (as I assume that all readers are interested in the empirical evaluation but only a limited number care about full theoretical justifications). Simpler theoretical analyses may make the main text clearer. The authors may directly consider the least-squares problem in [1], leading to the approximation after Lemma 3.5 (so that the two assumptions and Lemma 3.5 can be moved to the Appendix). Then, by writing the optimal solution to the least-squares problem in a recurrence form, the core method of the paper, i.e., Formula 3.5, is obtained. The regularization term in the least-squares formulation also provides a straightforward explanation for the phenomena observed in the ablation study. [1] Massive Editing for Large Language Models via Meta Learning, ICLR 2024 Other Comments Or Suggestions: * I recommend placing the code link at the end of the abstract for better visibility. * Please unify the usage of ^\top and ^T in Lemma 3.5. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We extend our heartfelt thanks to Reviewer 7yH3 for your thorough and thoughtful review of our manuscript. Following are our responses to each individual comment.

## Q1: use more recent datasets or metrics

We conduct experiments on a new benchmark, QAEdit, which is tailored for real-world QA tasks and adopts real-world evaluation metrics (released on February 16, 2025). For further details, please refer to our response to Reviewer 1PGC's Q1.

## Q2: simpler theoretical analyses may make the main text clearer

We agree that the least-squares problem in [1] offers a more succinct path toward deriving the update rule (Equation 3.5). That said, we find that keeping the two assumptions in the main text helps bridge our theoretical arguments, and retaining Lemma 3.5 can aid readers in understanding the motivation (i.e., simulating the ideal editor). Nonetheless, we will streamline the theory to retain only the elements essential to guide readers through Equation 3.5.

## Q3: code link and transpose symbol

We will move the code link to the end of the abstract and correct the notation $A^T$ to $A^\top$ throughout the manuscript. Thank you once again for your valuable time and effort in reviewing our work. This insightful feedback will significantly enhance the quality and clarity of our paper.

[1] Tan, Chenmien, Ge Zhang, and Jie Fu. Massive Editing for Large Language Models via Meta Learning. ICLR (2024).
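As a numerical companion to Q2 above: the point that the regularized least-squares solution admits an exact recurrence is easy to verify directly. The sketch below maintains running statistics $C_t=B_tK_t^\top$ and $P_t=K_tK_t^\top+\lambda I$ and checks that the sequential result matches the batch formula; it is a generic illustration of the recurrence idea under made-up dimensions, not the paper's Equation 3.5, which additionally approximates via the standard editor's $\Delta W_t$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, T, lam = 16, 8, 50, 0.1   # hypothetical dimensions and ridge weight

ks = rng.normal(size=(T, d))    # keys k_t
bs = rng.normal(size=(T, m))    # residual targets b_t (e.g. v_t - W_0 k_t)

# Batch ridge solution: S_T = B K^T (K K^T + lam * I)^{-1}.
K = ks.T                        # (d, T), columns are keys
B = bs.T                        # (m, T), columns are targets
S_batch = B @ K.T @ np.linalg.inv(K @ K.T + lam * np.eye(d))

# Sequential form: C_t = C_{t-1} + b_t k_t^T, P_t = P_{t-1} + k_t k_t^T,
# with C_0 = 0 and P_0 = lam * I; then S_t = C_t P_t^{-1}.
C = np.zeros((m, d))
P = lam * np.eye(d)
for k, b in zip(ks, bs):
    C += np.outer(b, k)
    P += np.outer(k, k)
S_seq = C @ np.linalg.inv(P)

# The two routes agree exactly (up to floating point).
assert np.allclose(S_batch, S_seq)
```

The agreement holds because $B K^\top = \sum_t b_t k_t^\top$ and $K K^\top + \lambda I = \lambda I + \sum_t k_t k_t^\top$ accumulate term by term, which is precisely the observation that makes a streaming (lifelong) formulation of the batch solution possible.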
Summary: The paper studies the problem of *lifelong* model editing, wherein a model has to *sequentially* incorporate new knowledge without retraining and without altering its behavior on unrelated tasks. Specifically, the authors tackle the known issue where consecutive edits cause the model to forget previous edits or compromise its general accuracy - as opposed to batch edits where the model can better account for all examples. The paper proposes a framework (SimIE) for this problem that formulates the task of sequential edits as imitating batch edits, i.e. trying to perform sequential changes that add up to the same effect as though the edited examples were processed jointly. The authors design SimIE to be applied on top of multiple/any parameter-modifying editing algorithms to improve their ability to deal with lifelong (sequential) edits. They evaluate their approach on a range of popular past-generation LLMs: Llama 2, Mistral, etc., where it augments multiple different editing methods (ROME, MEND, MEMIT, etc.) and improves their performance in a lifelong setting - from abysmal to non-abysmal. Claims And Evidence: To the best of my understanding, the main claims in the paper are posed around the proposed SimIE framework:

- that it can improve the ability to perform sequential (lifelong) edits in terms of standard statistics (locality, generality, reliability)
- that it is method-agnostic in the sense that it applies to any parameter-editing algorithms
- the theoretical analysis of SimIE optimality under relaxed assumptions

The first two claims are, from my perspective, properly supported, though this could be enhanced by editing more recent models. The theoretical claims appear outwardly sound, but could be better supported by testing whether the assumptions hold in practice (e.g. the degree of overparameterization for the latest LLMs). Though, I believe that the paper is worth accepting even without these components.
Methods And Evaluation Criteria: Authors evaluate the traditional metrics (locality, generality, reliability) on popular LLM knowledge editing tasks for several models. There are always more datasets and models to further improve the analysis, but the current scope appears sufficient to verify the claims. I am slightly concerned by the fact that the paper **focuses on the arithmetic mean between locality, generality and reliability** in their tables. Since the best results are, in many cases, sub-67% accuracy, this allows a method to win by abandoning one of three criteria - e.g. locality - which would make it a terrible editor. Perhaps it would be better to report (a) geometric mean? (b) conditioned metrics, e.g. best accuracy when locality > 0.8? (or whichever other means authors devise to better represent the method's utility for practitioners) Theoretical Claims: Authors formulate a number of theorems regarding the stability and optimality of SimIE under a number of assumptions. Unfortunately, I only managed to follow the rough outline and did not verify every detail of the proof. Though, I believe that the paper is, in its current form, worth accepting even for its empirical contributions. Experimental Designs Or Analyses: Their evaluation criteria appear sound, though they could be improved by using more recent models. Supplementary Material: I have reviewed additional experimental results in the supplementary materials, but not the proofs. I have read the supplementary code linked on L328 (right) and was able to reproduce the results for MEMIT on Llama 2 7B. I commend the authors for a well documented supplementary code and the reproducibility techniques (e.g. specific dependency versions in requirements). While this wasn't the most important factor in my recommendation to accept the paper, it was certainly one of the factors. 
Relation To Broader Scientific Literature: To the best of my knowledge, the contributions presented in the paper account for the main backbone editing methods from the broader literature / prior work. That said, there are some related works that may deserve additional discussion (e.g. https://arxiv.org/pdf/2405.03279 performs lifelong editing through prompt 'learning' - as opposed to parameter editing), but the relation is debatable. Essential References Not Discussed: The idea of model editing was concurrently introduced in [1] and [2]. You cite [2], but it is probably best to also include [1], since it was published about a year earlier. [1] https://openreview.net/forum?id=HJedXaEtvS [2] https://aclanthology.org/2021.emnlp-main.522 Other Strengths And Weaknesses: The proposed framework is general, making it possible for future editing algorithms to be evaluated in a lifelong setting without modifications. Though, there is still a gap between editor-agnostic SimIE and specialized lifelong editors. As I stated earlier, the paper would also become more convincing if the authors evaluated on newer *and, importantly, more accurate* LLMs, e.g. Llama-3.x, Qwen 2.5, deepseek R1 (if enough hardware), as of the time of reviewing. This is because more accurate LLMs are known to be easier to break with model perturbation (e.g. quantization https://arxiv.org/abs/2404.14047 ). Hence, it may be easier to notice a loss of generality / locality / reliability there. Other Comments Or Suggestions: Minor: you often capitalize Llama-2 as 'LLaMA-2'. This capitalization was dropped in the second version of the model ( https://arxiv.org/abs/2307.09288 ) and, to the best of my knowledge, did not reappear ever since. Questions For Authors: No questions that would change my evaluation of the paper (as per the reviewing guidelines). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are deeply grateful to Reviewer nKJ9 for the detailed and constructive feedback on our manuscript. Below, we address your questions point-by-point.

## Q1: evaluate on newer and, importantly, more accurate LLMs

We conduct additional evaluations on Llama-3 (8B) and Qwen2.5 (7B). We select FT, ROME, AlphaEdit$^-$, WISE, and AlphaEdit as baselines and perform $T=1000$ sequential edits using the ZsRE dataset. The experimental results are presented in the following table, where the first four metric columns report Llama-3 (8B) and the last four report Qwen2.5 (7B):

| Method | Rel | Gen | Loc | Avg | Rel | Gen | Loc | Avg |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| FT | 0.17 | 0.14 | 0.01 | 0.10 | 0.09 | 0.08 | 0.02 | 0.07 |
| ROME | 0.09 | 0.08 | 0.01 | 0.06 | 0.22 | 0.22 | 0.09 | 0.18 |
| ROME+SimIE | 0.75 | 0.72 | 0.80 | **0.75** | 0.92 | 0.87 | 0.72 | 0.84 |
| AlphaEdit$^-$ | 0.06 | 0.06 | 0.03 | 0.05 | 0.67 | 0.63 | 0.76 | 0.68 |
| AlphaEdit$^-$+SimIE | 0.74 | 0.67 | 0.75 | 0.72 | 0.91 | 0.83 | 0.88 | **0.87** |
| WISE | 0.51 | 0.50 | 1.00 | 0.67 | 0.54 | 0.53 | 0.99 | 0.69 |
| AlphaEdit | 0.86 | 0.78 | 0.62 | **0.75** | 0.98 | 0.80 | 0.72 | 0.83 |

We observe that SimIE achieves competitive performance across these recent models, especially surpassing the SOTA method (AlphaEdit) by an average of $4.8\%$ on Qwen2.5. These results are consistent with those in the main paper, further confirming the effectiveness of our proposed SimIE.

## Q2: testing if the assumptions hold in practice

To provide deeper insights into our theoretical assumptions, we consider two additional analyses assessing the extent to which the assumptions are violated in practice. Please refer to hGQy’s Q2 for details.

## Q3: focuses on the arithmetic mean

Although the arithmetic mean is widely adopted, it does have the potential pitfalls you mention. 
Inspired by your suggestion, we introduce a new composite metric defined as $$e^{\alpha(\mathrm{Loc}-1)}(\mathrm{Rel}\cdot\mathrm{Gen}).$$ This metric takes locality as a penalty term, serving as a smoothed condition factor. Meanwhile, by using the (squared) geometric mean of reliability and generality, it prevents methods from abandoning either of them. We will incorporate this new metric into our paper, thereby offering a clearer view of practical utility.

## Q4: still a gap between SimIE and specialized lifelong editors

We recognize that SimIE may underperform some specialized lifelong approaches under certain scenarios. Nevertheless, as increasingly challenging datasets and more realistic evaluations emerge, SimIE provides greater robustness compared to elaborate lifelong algorithms (refer also to our response to Reviewer 1PGC’s Q1). These findings reinforce our central claim: SimIE enables lifelong editing to benefit directly from the ongoing advances in standard editing research, thereby bridging the gap between these two paradigms.

## Q5: related works and minor mistakes

All the relevant literature [1,2] mentioned will be thoroughly integrated into our discussion, and the capitalization of Llama-2 will be made consistent. Thank you once again for your careful and insightful comments, which provide valuable insights for further refinement of our work. [1] Chen, Qizhou, et al. Lifelong knowledge editing for llms with retrieval-augmented continuous prompt learning. arXiv preprint arXiv:2405.03279 (2024). [2] Sinitsin, Anton, et al. Editable neural networks. ICLR (2020).
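The composite metric from Q3 can be implemented in a few lines; this is a sketch only: the function name and the value of alpha are illustrative, since the rebuttal leaves alpha unspecified.

```python
import math

def composite_score(rel, gen, loc, alpha=5.0):
    """e^{alpha*(loc-1)} * (rel * gen): locality acts as a smooth
    penalty factor, and the rel*gen product prevents a method from
    abandoning either reliability or generality."""
    return math.exp(alpha * (loc - 1.0)) * (rel * gen)

# Perfect locality leaves the rel*gen product untouched...
assert composite_score(0.9, 0.8, 1.0) == 0.9 * 0.8
# ...while abandoning locality drives the score below a balanced editor's.
assert composite_score(0.95, 0.95, 0.2) < composite_score(0.7, 0.7, 0.9)
```

Larger alpha makes the locality penalty sharper, approaching a hard condition such as "locality > threshold".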
The Hidden Joules: Evaluating the Energy Consumption of Vision Backbones for Progress Towards More Efficient Model Inference
Accept (poster)
Summary: This paper introduces an energy efficiency scoring system and develops a corresponding interactive web application for users to compare models based on accuracy and energy consumption. The experimental results show that the proposed scoring system is reasonably accurate when estimating energy consumption from GPU throughput and TDP. Claims And Evidence: Claims with clear evidence: 1. energy consumption cannot be evaluated by FLOPs alone: the empirical data show that memory access, activation size, and hardware optimizations can cause large differences in energy consumption on a given platform. 2. trade-offs between accuracy and efficiency: the work shows multiple results for this. 3. effectiveness of the proposed energy efficiency scoring system: the energy computed from throughput and GPU thermal design power shows a strong correlation with the actual energy consumption. Methods And Evaluation Criteria: The proposed energy efficiency scoring system is quite useful for hardware-deployment-related research, as this work provides a simple and effective framework for researchers to quickly evaluate the power consumption of their designed models. Theoretical Claims: This paper does not contain formal theoretical proofs. Experimental Designs Or Analyses: This work mainly focuses on experimental results with ImageNet and GPUs (mainly the H100 and A100). Also, this work takes FLOPs, activations, and throughput into consideration for the energy assessment, which is more accurate than other works that adopt FLOPs as the major factor. Supplementary Material: I reviewed the configurations. Relation To Broader Scientific Literature: The method proposed by this work could be adopted in typical hardware-oriented design work to quickly evaluate the power efficiency of a designed model, which is convenient. 
Essential References Not Discussed: This work may need to discuss works related to large models, such as [1]. [1] Carbon Emissions and Large Neural Network Training Other Strengths And Weaknesses: Strength: 1. the energy efficiency scoring system shows the effectiveness of evaluating energy efficiency for vision models on ImageNet. Weakness: 1. This work did not include the energy efficiency of downstream tasks such as object detection and semantic segmentation. 2. This work only includes GPU platforms (H100 & A100), while results for edge devices, such as FPGAs, NVIDIA Jetson, and mobile devices, are more important, especially because battery-powered devices are more sensitive to energy efficiency. Other Comments Or Suggestions: 1. provide more results for downstream tasks such as object detection and semantic segmentation. 2. provide more results for edge devices. Questions For Authors: This work did not take large language models (LLMs) into consideration; can the framework be used for large-scale LLMs? Code Of Conduct: Affirmed. Overall Recommendation: 3
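For reference, the throughput/TDP estimate described in the Summary can be sketched as follows: a rough approximation that assumes the GPU draws close to its TDP during inference (the function name and example numbers below are mine, not from the paper).

```python
def estimated_energy_per_image_j(tdp_watts: float, throughput_img_per_s: float) -> float:
    """Rough per-image energy estimate: joules = watts / (images/second).

    Assumes the GPU runs near its thermal design power while saturated,
    which is why the score correlates with measured energy at high
    utilization but is only an approximation.
    """
    return tdp_watts / throughput_img_per_s

# Example: a 400 W GPU serving 2000 img/s would be scored at 0.2 J/image.
assert estimated_energy_per_image_j(400.0, 2000.0) == 0.2
```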
Rebuttal 1: Rebuttal: - Reference: Carbon Emissions and Large Neural Network Training We appreciate your suggestion to cite this important work. We actually referenced the updated version, "The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink," which significantly inspired our research. - More variety of hardware and edge devices Battery-powered devices are indeed sensitive to energy efficiency. While A100 and H100 are the most representative GPUs in the AI industry, we agree and recognize the importance of evaluating diverse hardware. To this end, we recently tested approximately 100 models near the efficient frontier on four additional GPUs: RTX 3090, 4090, 5090, and mobile 1650Ti. Results (https://github.com/clinic-engulf-trowel-huihac/gurgling-theology-margarita/blob/main/rebuttal/new_gpus.png) strongly confirm our original trends, further validating the broad applicability of our findings. We also observed that the 5090 did not provide any significant efficiency gains over the 4090, further highlighting the importance of our research (we can’t just wait for new GPUs and expect them to be more efficient). - Downstream tasks We fully acknowledge the importance of evaluating downstream tasks like object detection and semantic segmentation for a comprehensive assessment of energy efficiency. While our study directly evaluates image classification backbones, many popular detection and segmentation models (e.g., Faster R-CNN, Mask R-CNN, RetinaNet, DeepLab) commonly reuse these vision backbones as feature extractors. Given this standard practice, the energy efficiency characteristics we observed for various architectures—such as CNN, Transformer, and Hybrid models—serve as meaningful indicators for downstream task efficiency. For instance, backbones identified as energy-efficient at classification tasks could likely translate to efficient feature extraction in detection and segmentation pipelines, influencing overall inference efficiency. 
However, we do recognize the additional computational complexity and varying bottlenecks introduced by downstream modules (e.g., region proposal networks, mask heads, upsampling layers). Hence, explicitly measuring these downstream tasks remains important for precise validation. We plan to extend our methodology to directly measure and confirm how the observed backbone energy characteristics generalize to these more complex tasks in future work. In the revised manuscript, we will more explicitly highlight this link between vision backbones and downstream models, clarifying both the potential insights our current analysis provides and the scope for further empirical validation. - Can the framework be used for the LLMs? Thank you for raising this important consideration. Our current evaluation framework was specifically tailored toward vision backbones. However, we fully recognize the importance and widespread use of large language models (LLMs) in various real-world applications, making their energy efficiency evaluation critically relevant to the sustainable AI community. Although our current study does not include LLMs explicitly, the core principles of our measurement methodology—such as GPU utilization optimization, real-time energy monitoring, and accuracy-performance trade-off metrics—are inherently transferable to large-scale LLM inference scenarios. To adapt our framework effectively to LLMs, we would primarily need to consider: 1. Different inference characteristics: Token-based generation and longer context windows in LLM inference, compared to fixed-size image inference in vision models. 2. Adaptation of evaluation metrics: Metrics such as perplexity, generation quality (e.g., BLEU, ROUGE), or task-specific evaluations (e.g., accuracy on reasoning benchmarks) instead of classification accuracy. 3. 
Adjustments in batching strategies: Optimal GPU utilization patterns for LLM inference, including considerations for sequence length and context size variability. In future research, we plan to explicitly extend our methodology to evaluate and analyze energy-accuracy trade-offs for large-scale language models, providing analogous insights that could significantly benefit the LLM research and deployment community. We will clearly articulate this limitation and the intended future extension to LLMs in our manuscript, emphasizing the broader applicability of our proposed framework toward sustainable AI practices beyond vision tasks. - Efficiency Score and Practical Applications We appreciate your recognition of our proposed efficiency scoring system’s practical value. We will explicitly reinforce in the manuscript how this system facilitates immediate, actionable insights for hardware-oriented researchers and deployments. We will explicitly highlight how our efficiency scores can guide practical decisions, such as serving as an optimization objective in Neural Architecture Search, and influence future architecture design, directly promoting sustainable AI practices.
Summary: This paper evaluates the energy consumption of ImageNet classification models, focusing on their efficiency across different architectures, datasets, and optimization techniques. The study aims to provide a more accurate assessment of energy consumption in deep learning models and examines accuracy gains relative to increased energy consumption. The findings emphasize the importance of optimizing GPU utilization and inference configurations for energy efficiency. Finally, this work provides an interactive web application to compare models based on accuracy and energy consumption. Claims And Evidence: Yes, the claims and insights made in this paper are drawn from the analysis and empirical results presented in the paper. Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria, including the evaluated models, datasets, and the way of measuring energy consumption, make sense. Theoretical Claims: The paper does not include any proofs or theoretical claims. Experimental Designs Or Analyses: Yes, the experiments appear to be sound. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper builds on prior work in energy-efficient deep learning by extending the analysis beyond FLOPs and latency to actual energy consumption. It aligns with studies on hardware-aware model optimization and inference efficiency but differentiates itself by incorporating real-world GPU utilization metrics. However, it could better contextualize its findings with prior work on energy-proportional computing and sustainable AI practices to strengthen its contributions. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength: - The paper addresses an important and relevant problem for the ML community. 
Energy efficiency is an increasingly important issue as models grow in size, and focusing on energy rather than traditional metrics like FLOPs or latency not only motivates optimizations for model efficiency but also contributes to sustainable AI. - The study evaluates a large set of models on relevant datasets and optimization techniques. - The analysis provides valuable insights on the trade-offs between model architecture, size, and batch sizes on accuracy and energy consumption. - The work proposes a web application which can be a useful resource for the community. Weaknesses - The comparison to prior works is limited. It would be beneficial to more clearly discuss the difference between this work and prior works in the area. How is this work different from prior work except for the larger number of models evaluated? - The study primarily uses ImageNet; only one study uses 5 different datasets. When evaluating a set of different datasets, we observe a shift in the observed results. I wonder how the evaluation results generalize to real-life workloads. - The evaluation is limited to just two types of GPUs: A100 and H100. Although the evaluation is mainly focused on model architectures, to evaluate trends and the impact of energy efficiency, it would be beneficial to include a wider variety of hardware (potentially also AMD GPUs) to see if the observed trends generalize to a larger set of hardware. - It is interesting to see the large number of experiments and data collected in the direction of energy consumption of models. However, it is unclear what the key insights or takeaways of the paper are, and how they can help advance sustainable designs of ML models. Other Comments Or Suggestions: N/A Questions For Authors: 1. Do the observed trends in the analysis generalize beyond the two GPU types and the ImageNet dataset? 2. How does quantization impact energy consumption across different models? 3. 
The paper discusses the impact of model size (number of parameters), such as in Figure 5, but it seems to compare across multiple different model architectures. Does the analysis also compare models of the same or similar architecture? Have the authors considered the impact of model size in terms of pruning, and how does that affect energy consumption? Perhaps pruning could be used to optimize energy efficiency. 4. To my understanding, the power measurement period for NVIDIA-SMI's power sensor typically operates at a frequency of 10 Hz, meaning it updates every 100 milliseconds. Section 3 of the paper states "We recorded power usage and other GPU metrics at 100 Hz, then logged the data for subsequent analysis." Is there a possibility of getting repeated values since the work queries at a higher frequency than the sensor updates, and does that affect the interpretation of the results in any way? Code Of Conduct: Affirmed. Overall Recommendation: 3
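Regarding Question 4: the reported energy is effectively an integral of the sampled power trace over time, so polling faster than a 10 Hz sensor mainly yields repeated readings rather than a biased total. A minimal sketch with synthetic samples (not the authors' measurement code; the function name is mine):

```python
def trace_energy_joules(power_watts, dt_s):
    """Trapezoidal integration of a power trace sampled every dt_s seconds."""
    total = 0.0
    for p0, p1 in zip(power_watts, power_watts[1:]):
        total += 0.5 * (p0 + p1) * dt_s
    return total

# A constant 300 W load over 1 s integrates to ~300 J whether sampled
# at 10 Hz (dt = 0.1 s) or "oversampled" at 100 Hz with repeated readings.
coarse = [300.0] * 11   # 10 Hz samples spanning 1 s
fine = [300.0] * 101    # 100 Hz samples spanning 1 s, repeated sensor values
assert abs(trace_energy_joules(coarse, 0.1) - 300.0) < 1e-9
assert abs(trace_energy_joules(fine, 0.01) - 300.0) < 1e-9
```

Repeated readings only matter when power changes within a sensor update window, and even then the error is bounded by one 100 ms interval per transition.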
Rebuttal 1: Rebuttal: - How is this work different? In addition to evaluating a substantially larger number of models (50–100x more), our work differs from prior studies in several ways: 1-We identified key methodological flaws from previous studies: Henderson et al. (2020) using batch size = 1, and Desislavov et al. (2023) relying on TDP-based energy estimates. Our careful methodology ensures accurate and realistic conclusions. 2-Previous studies have lacked clarity regarding the specifics of deployment. Our research showed that running models in PyTorch can be up to an order of magnitude more energy consuming compared to TensorRT. We conducted our measurements under an industry-standard inference deployment scenario, thereby enhancing the relevance of our results for real-world applications and practitioners. 3-The models and GPUs prior work evaluated are outdated, particularly in light of rapid advancements in the field. By expanding experiments using the RTX 4090 and 5090, and models published until 2024, our study stands as the most comprehensive study to date. Furthermore, our framework, publicly available on GitHub, is designed for easily evaluating new models or GPUs. We encourage the community to contribute by utilizing our framework to assess their models and GPUs and contribute to our GitHub repo and web app. We will clearly articulate these in our revised manuscript. - Energy-proportional computing and Sustainable AI While we briefly discussed sustainable AI at the beginning of the related works, we agree with your suggestions. We will expand the discussion section of our paper along these lines: Our study empirically confirms that current deep learning models are significantly less energy-proportional than idealized hardware expectations suggest, aligning with insights from Barroso and Hölzle (2007). The steep diminishing returns we identify reinforce the need for more energy-proportional model architectures and inference methods. 
Our findings directly support the goals outlined by Schwartz et al. (2020), advocating for Green AI principles that prioritize energy efficiency alongside accuracy. By demonstrating the marginal gains in accuracy versus exponential energy demands, our work underscores the urgency emphasized by the Sustainable AI movement (Van Wynsberghe, 2021) to embed sustainability criteria into AI model evaluation and selection. Unlike prior studies that relied on theoretical calculations of FLOPs or idealized TDP-based estimations (Henderson et al., 2020; Desislavov et al., 2023), our real-world measurements provide realistic insights into the practical energy consumption patterns of models, bridging a critical gap in sustainable AI practices. - How the evaluation results generalize for real-life workloads We agree that evaluating beyond standard ImageNet accuracy is essential for understanding model performance in more diverse, realistic scenarios. To directly address this, our study included evaluations on five additional datasets specifically designed to assess robustness and out-of-distribution generalization. These datasets intentionally differ from standard ImageNet conditions, capturing real-world challenges such as distribution shifts and adversarial conditions. Please refer to the answer to reviewer 3dgS for more robustness analysis. While we recognize practical deployment evaluations (such as robotic applications) are valuable, they exceed the practical scope of our current study. We believe our extensive robustness analysis already significantly extends practical insights beyond standard benchmarks. - More variety of hardware: Please see answer to Reviewer b1xy. - Takeaways for the paper: Please see answer to Reviewer 3dgS. - Quantization and pruning We acknowledge the ability of quantization and pruning to improve efficiency. The TensorRT inference setup used FP16. 
We excluded a more detailed investigation because: 1-Quantization and pruning are not architecture changes, but rather optimization techniques applied to any existing architectures. Our study focuses on the inherent energy efficiency characteristics of different architectures themselves. 2-Quantization often needs post-quantization fine-tuning or QAT, and similarly for pruning. This process is infeasible for a large-scale study like this. 3-There are some existing works already. To name a few: Understanding the impact of precision quantization on the accuracy and energy of neural networks; Pruning Deep Neural Networks for Green Energy-Efficient Models: A Survey. We will clarify this scope in the manuscript and cite these works. - nvidia-smi measurement Your point about measurement frequency is valid. Querying nvidia-smi at 100 Hz was intentional, balancing the need to capture recent updates without unnecessary polling overhead. According to the Nyquist theorem, sampling above twice the signal frequency preserves all information. A higher sampling frequency would not do any harm. We will clarify this in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for the response. The authors have addressed most of my concerns. I will raise my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your thoughtful suggestions and for raising your score. We are glad that we were able to address most of your concerns. We believe that energy efficiency and sustainability in machine learning is a critically underexplored area. As AI models continue to scale, the energy demands of data centers have become increasingly unsustainable - even leading to AC power distortions in major US cities (https://www.bloomberg.com/graphics/2024-ai-power-home-appliances/). 
Companies like Microsoft are now taking the unprecedented step of supporting nuclear power plants to meet these demands (https://www.technologyreview.com/2024/09/26/1104516/three-mile-island-microsoft/, https://www.theregister.com/2025/03/12/push_for_nuclear/). The electricity consumption of modern AI systems is reaching alarming levels, and continues to increase at even faster rates. Yet, this issue remains largely overlooked within the ML research community. Overlooking this aspect will inevitably bite back and hinder progress in ML development in the not-so-distant future. As the first large-scale and systematic analysis focused on this challenge, we hope our work can draw attention to this problem, raise awareness, and inspire further research. We believe our work will have a long-lasting impact on sustainable AI development. We believe that ICML, with its broad and influential audience, would be an ideal venue to help catalyze this conversation. Thank you again for your time and consideration!
Summary: This paper conducts a comprehensive analysis of the energy efficiency of image classification models, covering around 1200 ImageNet classifiers. The authors found that there are steep diminishing returns in accuracy gains relative to the increase in energy use. They further identified key factors contributing to the energy consumption of deep learning models. Claims And Evidence: This paper suggests that there are diminishing returns in accuracy when it comes to energy consumption. However, much of the conclusions are drawn from results on ImageNet accuracy, and I am wondering if this is the correct metric to look at. When it comes to scaling deep learning models, model size and data both play a crucial role, but this paper seems to tell us that there are diminishing returns. How do the authors reconcile this contradiction? Methods And Evaluation Criteria: - The reported results in Figure 1 are focused on energy consumption per image. I wonder if a batched inference setting will affect the results. Would it be better to measure the energy consumption for running inference on a fixed set of images? - How does the size of pre-training datasets play a role in determining the accuracy and energy efficiency tradeoff? I think the authors should provide more information on the pretraining data for the evaluated models. Theoretical Claims: This paper does not contain any theoretical proofs. Experimental Designs Or Analyses: - Most of the analysis in this paper is focused on the accuracy and energy consumption tradeoff. I wonder if the authors have looked at the training methodologies. Over the last 10 years, there has been a significant paradigm shift from supervised learning to self-supervised learning. I feel the authors could provide some analysis on characterizing the energy/accuracy tradeoff under different training schemes. - CLIP models also achieve competitive ImageNet accuracy. Also, they are mostly better on out-of-distribution datasets. 
I wonder if the authors have evaluated CLIP models in the selected 1200 models. - A major part of the analysis in section 4.1 is focused on characterizing architectures into three categories: Convnet, Transformers and Hybrid models. Could the authors provide some insights or suggestions on designing better model architectures in terms of energy efficiency and accuracy? From the provided results, it seems that Transformers are still the most promising architecture? Supplementary Material: I have viewed the supplementary material. Relation To Broader Scientific Literature: This paper provides evidence on the energy vs accuracy tradeoff of ImageNet classifiers, which should be of interest to broader machine learning industry. Essential References Not Discussed: This paper focuses on the inference energy efficiency of ImageNet classification models, which goes beyond the standard accuracy evaluation. It would be good to discuss this related work on efficiency evaluation of deep learning models [1]. [1] The Efficiency Misnomer. 2021. Other Strengths And Weaknesses: Regarding the significance, I think the idea of evaluating energy consumption of deep learning models is a realistic problem to look at. This is an important problem given that AI is growing every year and would cost more electricity to run these models. I would like the authors to provide some discussions on the implications of the results for AI development. Like going forward, is there anything we can do to improve energy efficiency of these models? Other Comments Or Suggestions: Could the authors provide a complete list of the models evaluated in text? I want to check if the authors have included certain models but couldn't do it because no information is provided. It is much harder to check by going through each points in the provided website. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: - Batched inference setting Thank you for raising the important issue of batched inference. Our measurement methodology explicitly focuses on maximizing GPU utilization to ensure fair comparisons across models by testing a variety of different batch sizes and selecting the most energy-efficient inference settings. In fact, we deliberately critique Henderson et al. (2020), who ran all their experiments with batch size = 1 and concluded that FLOPs and Params have no correlation with energy. We will clarify and reinforce this point in the manuscript, ensuring readers understand our decision to measure energy per image at optimal GPU utilization. - CLIP models and robustness We did not test any multimodal CLIP models, but we tested several models that were CLIP-style trained and then fine-tuned for the ImageNet 1k dataset. We additionally evaluated three multimodal CLIP models (ViT-B-32, ViT-B-16, ViT-L-14) in a zero-shot scenario. Compared to their fine-tuned counterparts (text encoder removed, linear head added), their energy consumption per image remains largely similar (within 2%), while accuracy is much lower. We also performed some further analysis on robustness: comparing original ImageNet validation accuracy with the average accuracy across five robustness datasets reveals that CNNs drop 29.7%, hybrid models 27.6%, and Transformers 24.0%. The CLIP models only drop 15%, demonstrating notably stronger out-of-distribution robustness. (https://github.com/clinic-engulf-trowel-huihac/gurgling-theology-margarita/blob/main/rebuttal/robust2.png) Additionally, we grouped models into 10 deciles by energy consumption and observed that higher-energy-consuming models experience smaller accuracy drops, aligning with our findings presented in paper Fig 3. 
(https://github.com/clinic-engulf-trowel-huihac/gurgling-theology-margarita/blob/main/rebuttal/robust!.png) - More information on training, how does training affect energy and accuracy tradeoff, training has changed over the past 10 years We agree that training has significantly evolved over the past decade, involving larger datasets (LAION), self-supervised methods, and various training recipes. We've analyzed publication dates of the evaluated models and found that while accuracy consistently improves year over year in high-accuracy regions, improvements in the high-efficiency region appear sporadic. Notably, the most efficient models tend to be older, dating back 2–3 years. (https://github.com/clinic-engulf-trowel-huihac/gurgling-theology-margarita/blob/main/rebuttal/yearly.pdf) We will provide more explicit information on model training in the appendix. vit_base_patch16_clip_224 trained with laion2b is 0.2% more accurate than the openai version, but with significantly more training data. While selecting a pretrained model for inference, one would pick the higher-accuracy version; the question of whether consuming significantly more energy during training for marginal accuracy gains (e.g., 0.2%) is justified is beyond the scope of this work. This topic alone deserves a dedicated future investigation. - Going forward, is there anything we can do to improve energy efficiency? Based on our findings, we suggest several practical guidelines: Avoid unnecessarily increasing input image sizes, as this significantly increases energy consumption with minimal accuracy improvements. Fully utilize GPU capabilities through optimized batching. Leverage our interactive web tool to select and compare models effectively. Utilize our efficiency scoring system to evaluate models, potentially integrating it as an optimization loss function for neural architecture searches and designing new energy-efficient models. 
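The batching guideline above (fully utilizing the GPU through optimized batching) amounts to a simple sweep that keeps the configuration with the lowest measured energy per image. The sketch below uses hypothetical numbers, not measurements from the paper:

```python
# Hypothetical (batch size -> joules per image) measurements for one model;
# in practice each value would come from a real power-monitoring run.
measurements = {1: 1.20, 8: 0.35, 32: 0.21, 128: 0.19, 256: 0.22}

def best_batch_size(energy_per_image):
    """Pick the batch size minimizing measured energy per image."""
    return min(energy_per_image, key=energy_per_image.get)

assert best_batch_size(measurements) == 128
```

Note the typical shape of such a sweep: energy per image falls steeply as the GPU saturates, then flattens or rises slightly once other bottlenecks dominate, which is why batch size = 1 measurements (as in Henderson et al., per the rebuttal) are misleading.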
Overall, we laid solid groundwork in the much-neglected field of energy-efficient machine learning and sustainable AI through this comprehensive study: we analyzed current trends and the factors that influence energy consumption, rectified existing misconceptions, distilled best practices for efficient model deployment, and provided various tools to estimate energy consumption and its tradeoff with accuracy. The data, results, and conclusions presented in our study are entirely novel, offering groundbreaking insights into the energy efficiency of deep learning models. We hope this work serves as a catalyst for the development of sustainable AI. - Could the authors provide a complete list of the models evaluated in the text? The complete list of evaluated models is currently available in our GitHub repo (https://github.com/clinic-engulf-trowel-huihac/gurgling-theology-margarita/blob/main/source/model_db.csv). We will include this list directly in the appendix for the reader's convenience. --- Rebuttal Comment 1.1: Comment: Thanks for the response. My concern is mostly addressed. Thus I maintain my positive rating. --- Reply to Comment 1.1.1: Comment: Thank you for your prompt response and for maintaining a positive assessment of our paper. We’d like to kindly point out that some additional experiments and clarifications—especially our evaluation on four more GPUs (RTX 3090, 4090, 5000, and a mobile 1650Ti)—were included in our responses to other reviewers. These results demonstrate that our conclusions generalize across newer GPU architectures (Ada Lovelace and Blackwell) as well as different form factors (e.g., mobile GPUs on battery), which we believe significantly broadens the impact and relevance of our work. We also further clarified our distinctions from prior work, highlighted connections to energy-proportional computing and sustainable AI, and discussed extensions to downstream tasks and LLMs. 
We’d greatly appreciate it if you had a moment to look over these additions, and if you find them compelling, we’d be grateful if you’d consider adjusting your score accordingly. Thank you again for your thoughtful review and engagement.
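As a concrete illustration of the batched-inference methodology discussed in this thread, here is a minimal sketch (our own construction with hypothetical numbers, not the authors' measurement code) of selecting the most energy-efficient batch size from per-run energy readings:

```python
# Illustrative sketch only: given hypothetical (batch_size, total_joules,
# images_processed) measurements, pick the batch size with the lowest
# energy per image, mirroring the per-image methodology described above.

def energy_per_image(measurements):
    """measurements: list of (batch_size, total_joules, images_processed)."""
    return {b: joules / images for b, joules, images in measurements}

def best_batch_size(measurements):
    per_image = energy_per_image(measurements)
    return min(per_image, key=per_image.get)

# Hypothetical readings for illustration only.
runs = [(1, 500.0, 1000), (32, 120.0, 1000), (256, 90.0, 1000)]
print(best_batch_size(runs))  # 256: the setting with lowest joules per image
```

On real hardware, the `total_joules` values would come from a power meter or the GPU's energy counters; the selection logic stays the same.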
Attributes Shape the Embedding Space of Face Recognition Models
Accept (poster)
Summary: The paper studies the organization of embeddings in the face recognition task with respect to facial attributes. The study is mainly focused on the ArcFace and FaceNet models, and reports results on LFW and CelebA. ## Update after rebuttal In my initial review, most of the local issues raised were w.r.t. clarity. They have been answered, and while I cannot pinpoint one as being wrong or as not having completely addressed the issues, the answers have only partially improved the clarity. The explanations are ample, with many mathematical formulations that need to be matched with the larger explanation from the paper. Thus the paper needs to be rebuilt with all the new information in its place to be better understood. Overall, I have raised my grade from 1 to 2, but I am not confident raising it into the "acceptance range" because: - while the authors have provided their arguments that the FR theme is treated at NIPS and ICML, I maintain that this paper is better suited to a "Face" or "Biometrics" conference. The mentioned papers are scarce and distributed over several years. In my view, a paper would need to be very strong on a niche theme to attract an audience at ICML. - the clarity is improved, but now there are too many pieces on the table. I feel that I would need to reread a complete paper to be certain that the paper is clear. I am still not convinced that the energy has value for a reader. Claims And Evidence: The paper makes relatively vague claims. They can be inferred from page 2, l88-l108, left column. They would be: 1. "a procedure to check if attributes shaping the relations inter-identities are the ones most deterministically linked to an identity" 2. "an invariance energy measure to quantify the invariance of the embedding model to each attribute" A problem with both is that steps are taken towards achieving them, but the results are not clear. 
For claim 1, this is treated in Section 3.2, but the results are in Appendix B (outside the paper and therefore not mandatory reading for review). In Appendix B (where space is unlimited) we find Figure 6 (where the writing and all graphical signs are too small), given without explanation. What is better? What is worse? In Section 3.2 it is mentioned that there are two models, yet in the appendix there is a single set of results. Some results are shown again in Figure 1d, but that is again too small and it is not quite clear what is there (w.r.t. what is expected). For claim 2, the paper indeed proposes an energy, but the derivation, results, and interpretation are not very appealing. Following Section 4, it is not clear to me whether "larger is better" or vice versa. Methods And Evaluation Criteria: The benchmarks make sense. The metrics make less sense. Alternatively, the paper might revisit the explanation and make it clearer. Theoretical Claims: The paper does not contribute significantly on the theoretical side. Experimental Designs Or Analyses: The experimental design is arguable. While the benchmark and the problem make sense, the adopted methodology, the presentation of the results, and the explanations provided are not fluent. The proposed energy is not something that a reader can take away and use to explore their own models. Supplementary Material: The supplementary material contains the bulk of the results, but again it lacks explanation and is therefore less useful. I have checked Appendices A-D carefully. Relation To Broader Scientific Literature: The paper approaches a deconstruction of the model used to provide embeddings in face recognition tasks. There are previous results within the same trend, but the precise direction of the paper has not, to my best knowledge, been previously explored. On the other hand, regarding the specifics of the paper, the findings... 
I am not sure that ICML is the best place to present them; they seem more suitable for a face-dedicated conference (IEEE Face & Gesture, Biometrics, etc.). Essential References Not Discussed: I believe the paper is fine on this criterion. Other Strengths And Weaknesses: The paper approaches an interesting direction that is definitely worthy of investigation. My concerns are the following: 1. instead of ICML, the paper might be better suited to a "Face"- or "Biometrics"-dedicated conference. The significance of the results is less relevant for the general machine learning community and more for face recognition. 2. the paper is not clear. The method used to investigate the relevance of attributes and their impact on the mapping that provides the embeddings lacks clarity. On the formal side, the paper sends the reader to the appendices too many times, but some details are omitted there, too. Figures are too small, and it is not explained what should be seen. On the content side, the paper chooses a mathematical formulation but often fails to clarify things. For instance, why do we need to define a space of transforms of a specific attribute, which is then simulated with a GAN? 3. The findings are not clear. There are conclusions that try to summarize the findings, but they are vague and poorly correlated with the results (or with the presentation of the results?). Other Comments Or Suggestions: I have not noticed typos. Questions For Authors: - What is meant by "macroscale" and, respectively, by "microscale"? - What distance (i.e., L2, cosine, etc.) is used for Table 1? And on which database was the pretraining done? This may be related to information presented in Appendix A ("The models where the embedding space is equipped with cosine distance (AdaFace, ArcFace, SphereFaceR) show the distance in degrees"), but a plain explanation would be better. - Similar questions for Table 2. 
The caption and text explanation make things very vague ("we find significant negative Spearman correlations between the intra-entropies for each attribute and the previously obtained KSa (Table 2)") - Eq. (2): what is meant by "\circ" between f and \alpha? Ethical Review Concerns: The paper approaches the face recognition theme, which is a sensitive one, but in my view the approach, the proposal, and the findings do not raise any ethical flag. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Here $E$=embedding space; $A$=attribute/s; $MS$=macroscale; $ms$=microscale 1.*"Figures too small, unreadable"* Thank you for the comment. Anonymous repo shows improved Figs and Tables: https://shorturl.at/12ZFg 2.*"Paper uses ArcFace and FaceNet while uses results on LFW and CelebA. Meaning of $MS$ and $ms$"* We point to the summaries provided by the 3 other reviewers. In brief, we study how interpretable $A$s of images impact the geometry of the $E$ of FR models. We clarify that we use LFW to compare the intra or inter-class distances ($d_b$, $\overline{d}$) for various models (Table 1). This motivates us to study $E$ through the lenses of $A$s at two scales (lines 137-143): $ms$ and $MS$, focusing on intra or inter-class geometry. The $ms$ relates to single identity point clouds (images as points); $MS$ to the whole $E$, (identities as points). We use CelebA to analyze $MS$ as it is endowed with interpretable $A$s. We explore the $ms$ through mapping augmented data on $E$ according to $A$s given by the latent space of GAN-Control and other $A$s (brightness, contrast, hue). 3.*"Vague claim 1; unclear Fig 1d. The paper sends the reader to Appendix. Vague Table 2"* We agree to improve Sec. 3 discussing $MS$ (see anon. repo) and the Appendix. However, we clarify that the results are in the main paper: Table 2 and Fig. 1d. Due to space constraints, Sec. 3 shows: (a) How to compare the $E$ of ArcFace and FaceNet through KSdist from the lenses of binary interpretable $A$s. (b) How FaceNet differs from ArcFace since the KSdist are higher (see new Fig. 1d, scatterplot higher than the diagonal), showing that for FaceNet, the $E$ has higher structural dependency on the $A$s. (c) Significant negative correlation (Table 2) between the intra-entropy of an $A$ (= *how much the $A$ varies inside each identity point cloud*) and the KSstat (= *how much an $A$ shapes the global geometry of $E$*), confirmed by the "bubble-size" of the scatterplot of Fig. 
1d: the more the $A$s vary within identities, the less they shape $E$ at $MS$. Fig. 6 analyzes only the CelebA $A$s, with no mapping on $E$: the $A$s with lowest (bald, male) or greatest (mouth-slightly-open, smiling) intra-entropy from the new Fig. 6b have high and low KSstat on Fig. 1d, respectively. 4.*"Claim 2: derivation and interpretation of the energy not appealing. From Sec. 4, unclear if "larger is better"* We refer to Sec. 2, then Sec. 4.1 and Fig. 2c. Sec. 2 observes that FR models achieving better identity recognition should be approximately invariant to $A$s. Thus, in Sec. 4.1, we conjecture that an FR model has high sensitivity to an $A$ if the associated local vector field derived from the data augmentation on the same $A$ has low energy (Fig. 2c "Aligned" $\mathcal{E}=0.0$ compared to $\mathcal{E}=0.75$ "Unaligned"). We conclude that the model is more invariant to $A$s if the energy is higher ("higher energy is better"). 5.*"Energy is not something that one can take away to explore their models."* Our methods rely on either (1) an attributed dataset or (2) controllable data augmentation. If one wishes to diagnose how a particular $A$ is treated by a model, one needs to measure and/or control $A$. Since metadata labeling is expensive, controllable augmentation is the easiest way. As noted by Reviewer EQbt, one use of the energy is to guide the fine-tuning process to achieve a better FR model. The energy measures the $A$s over which an FR model is less invariant. Thus, it not only shows the model's limitations but is also a quantitative tool for investigating potential biases and providing directions for model improvement and resilience. 6.*"Paper might be better suited to FR conference. 
Results less relevant for the general ML community."* We note that our references encompass previous works covering solely the FR domain and appearing in general ML conferences: (a) ICML2021 "Larnet: Lie algebra residual network for FR"; (b) NeurIPS2024 "TopoFR: A closer look at topology alignment on FR". Inspired by those, we suggest that FR is a significant domain for general ML development, specifically for representation learning. Indeed, it deals with a high-dimensional domain, where the low-dimensional manifold hypothesis has been verified (e.g., "The intrinsic dimension of images and its impact on learning." ICLR2021), so that embedding projection and metric learning are meaningful approaches. In addition, it motivates us to interpret $E$ because of the sensitive nature of identity recognition for face images. Furthermore, past works have driven a valid effort to collect images with metadata, thus enabling further analysis of $E$. Moreover, the hierarchical distinction of $MS$ and $ms$ can be extended to other open-set metric learning frameworks. 7.*"Distance for Table 1? Pretraining on which database?"* See Table 5 in the anon. repo listing the used models. FaceNet with Euclidean, the others with cosine. 8.*"Eq2 circle"* Composition of functions.
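To make the macroscale comparison via KS distances discussed in this rebuttal concrete, here is a minimal sketch (our illustrative reading with toy numbers, not the authors' code): for a binary attribute, compare the distribution of pairwise distances between identities that share the attribute against those that differ on it, using the two-sample Kolmogorov-Smirnov statistic.

```python
# Illustrative sketch only: two-sample KS statistic as the maximum gap
# between empirical CDFs, the kind of quantity used to measure how strongly
# an attribute shapes the macroscale geometry of the embedding space.

def ks_statistic(xs, ys):
    """Two-sample KS statistic: max |ECDF_x(t) - ECDF_y(t)| over the pooled sample."""
    xs, ys = sorted(xs), sorted(ys)
    grid = sorted(set(xs) | set(ys))

    def ecdf(sample, t):
        return sum(v <= t for v in sample) / len(sample)

    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in grid)

# Hypothetical distances: same-attribute identity pairs sit closer together
# than different-attribute pairs, so the attribute shapes the space strongly.
same_attr = [0.2, 0.3, 0.35, 0.4]
diff_attr = [0.6, 0.7, 0.75, 0.8]
print(ks_statistic(same_attr, diff_attr))  # 1.0: the attribute fully separates the pairs
```

A KS statistic near 0 would instead indicate that the attribute barely influences the inter-identity distance distribution.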
Summary: This paper provides a comprehensive analysis of the embedding space of face recognition models. Through a geometric-structure perspective, this work analyses the macroscale and microscale structures of the embedding space and quantifies how human-interpretable attributes influence these structures. Experimental results show that the proposed "invariance measure" can effectively quantify the sensitivity of FR models to different attributes, and a targeted fine-tuning strategy guided by the "invariance measure" effectively brings performance gains on targeted attributes. ## update after rebuttal The authors have provided detailed and reasonable responses. Though it would be better if the constraints of the proposed method could be relaxed and more interesting analyses could be observed, this paper in its current form is OK for ICML. So, I would like to keep the "Weak accept" rating unchanged. Claims And Evidence: N/A Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: [Strengths] - This work provides deeper and interesting interpretability of FR embeddings from a geometric-structure perspective. - The proposed "invariance measure" can effectively quantify the sensitivity of FR models to different attributes, and so can suggest new avenues for improving the robustness of FR systems. - This paper is well written and easy to follow. [Weaknesses] - The calculation of the "invariance measure" relies on strictly controllable face images, so that it can only be calculated and analysed on GAN-generated images currently, which may limit its real-world applications. - Though it sounds reasonable, the observations in Figure 4 are not surprising. 
Analysis of the sensitivity of face recognition models with respect to attributes can also be accomplished by constructing face recognition test sets featuring specific face attributes. Some "non-trivial" observations obtained through the proposed "invariance measure" may help highlight the contribution of this work. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: 1. *"The calculation of the 'invariance measure' relies on strictly controllable face images, so that it can only be calculated and analysed on GAN-generated images currently, which may limit its real-world applications."* Our work requires the ability to act meaningfully on the input space, which can be challenging. However, while GANs provide a powerful tool for generating controlled variations in complex attributes, they are not strictly necessary. Low-level data augmentation techniques, such as rotations or brightness adjustments, can also be used effectively to create variations in input data. The brightness, contrast, and hue in Fig. 4 belong to this class of attributes. Moreover, our proposed methodology is not limited to image data. It can be applied to interpretable inputs, such as tabular data (cf. our toy model validation experiment, where we don't use a GAN). In this case, sensitivity analysis methods are well established, but our approach, specifically aimed at measuring invariances for a set of continuous attributes, could be complementary. Finally, transforming complex input data meaningfully remains challenging, but architectures like GAN-Control show that the ability to predict is enough to gain some controllability. Rapid advances in generative modeling may further enhance the applicability of our approach. 2. *"Unsurprising results for the invariance energy."* In Fig. 4, panels c and d are about the fine-tuning validation, so we assume your comment refers to panels a and b. The results shown in panels a and b align with expectations and, in fact, mirror findings from the macroscale experiment: they show that interpretable attributes have a greater influence on FaceNet's embedding space compared to ArcFace or AdaFace. However, the point we want to highlight is the methodology, which is different. The invariance measure allows us to analyze, in a principled manner, continuous data for which scales are incomparable. 
For instance, it's challenging to directly compare the scale of age with that of hair color. By focusing solely on directional information, our approach provides coherent results without assuming a relation between data scales. Concretely, while both the macro- and microscale experiments use test image sets, the tools used to analyze them differ. 3. Unrelatedly, here we provide a link to an anonymous repository https://shorturl.at/12ZFg to share updated figures and tables following the comments of other reviewers --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. Most of my concerns have been addressed.
Summary: This paper investigates the geometric structure of the embedding space in Face Recognition (FR) models, focusing on how human-interpretable facial and image attributes influence the learned representations. FR models, which use deep learning and contrastive losses, aim to map images of the same identity closer together in a high-dimensional space. However, the learned embeddings also encode other attributes, such as hair color or image contrast, which can impact model performance and fairness. The study introduces a new physics-inspired alignment metric to analyze the dependence of FR models on these attributes, or their invariance to them. Claims And Evidence: Yes. Methods And Evaluation Criteria: The paper presents a comprehensive methodology for analyzing the geometric structure of the embedding space in Face Recognition (FR) models, with a focus on attribute invariance. However, there are several areas where both the methods and experiments could be improved or expanded: 1. The invariance energy measure is primarily evaluated on the CelebA dataset, which, while popular, may not encompass the full range of variability seen in real-world face recognition tasks. The dataset is limited in terms of facial attributes and may not capture all the complexities of face recognition across different demographics, environments, and conditions. 2. While the study emphasizes the geometric structure of the embedding space, it could further enhance the interpretability of the results by visualizing the learned embeddings in a more user-friendly way. 3. The study compares a few models (FaceNet, ArcFace, AdaFace), but it would benefit from a more thorough cross-model analysis. 4. While the study focuses on fine-tuning with single-attribute augmentation, it would be better to fully explore the effect of multi-attribute data augmentation on the embedding space. 5. 
Although the paper provides some insights into the geometric structure of the embedding space, it could be difficult for non-experts to intuitively understand how the changes in embedding space occur due to different attributes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Please see "Methods And Evaluation Criteria". Supplementary Material: No supplementary material. Relation To Broader Scientific Literature: / Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. The writing of this paper is good, and the structure is easy to follow. 2. The proposed method sounds reasonable. 3. The experimental results look good. Weaknesses: 1. The invariance energy measure is primarily evaluated on the CelebA dataset, which, while popular, may not encompass the full range of variability seen in real-world face recognition tasks. The dataset is limited in terms of facial attributes and may not capture all the complexities of face recognition across different demographics, environments, and conditions. 2. While the study emphasizes the geometric structure of the embedding space, it could further enhance the interpretability of the results by visualizing the learned embeddings in a more user-friendly way. 3. The study compares a few models (FaceNet, ArcFace, AdaFace), but it would benefit from a more thorough cross-model analysis. 4. While the study focuses on fine-tuning with single attribute augmentation, it would be better to fully explore the effect of multi-attribute data augmentation on the embedding space. 5. Although the paper provides some insights into the geometric structure of the embedding space, it could be difficult for non-experts to intuitively understand how the changes in embedding space occur due to different attributes. Other Comments Or Suggestions: none Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: 1. *"The invariance energy measure is primarily evaluated on the CelebA dataset, which, while popular, may not encompass the full range of variability seen in real-world face recognition tasks."* We want to clarify that we use CelebA for the macroscale experiment to compute the KS statistic on a real-world dataset, but GAN-Control is our main source of images for the invariance energy experiments. GAN-Control allows us to create a large number of variations of an image in a systematic and scalable way, which would not be possible with real images. In particular, we obtained more than 120K images per identity by varying 8 continuous latent variables corresponding to the 8 attributes. We put the details in Appendix C.2 of the submitted paper. Still, we acknowledge that the generated variability for each attribute is limited by the quality of GAN-Control's generation. 2. *"the study ... could further enhance the interpretability of the results by visualizing the learned embeddings in a more user-friendly way."* The high-dimensional nature of embedding spaces in state-of-the-art models inherently limits the fidelity of visualizations. However, for the microscale analysis, we believe that visualizing the vector field over the embedding space, rather than just the embeddings themselves, can enhance interpretability. In Figure 2c, we illustrate this concept using a toy model in a low-dimensional embedding space. Although this simplification doesn't fully capture the complexity of high-dimensional face recognition models, it provides a conceptual framework. For instance, an attribute like *contrast* might appear more "disordered" compared to "head angle," aiding readers in understanding the underlying measure. 3. *"The study compares a few models."* We kindly refer to answer 4 given to reviewer EQbt, who raised a similar point. 4. 
*"multi-attribute data augmentation on the embedding space."* When creating synthetic identity point clouds, we augment the starting image with all combinations of attributes (see Appendix C.2 for details). However, when computing the invariance measure, we sample curves with "infinitesimal" variations along a single attribute to approximate the natural definition of the vector field. While considering multiple attributes simultaneously is intriguing, it is unclear what geometric construct would best represent this scenario. One could explore the energy of a k-vector field or average the vector fields across different attribute augmentations. Although this presents an interesting research direction, we defer this investigation to future work. 5. *"Intuitive understanding of the change in embedding space."* Visualizing changes in high-dimensional spaces can be challenging, even for experts. Our goal in this work is to provide quantitative insights on how embeddings behave when slightly varying an interpretable attribute. If modifying an attribute shifts the embeddings in a "random" way, the energy will be high. Conversely, an attribute modification yielding very predictable changes in embeddings would correspond to an ordered embedding space.
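To make the "ordered vs. disordered vector field" intuition above concrete, here is a toy alignment energy of our own construction (not the paper's exact definition): unit displacement vectors that all point the same way give an energy of 0, while incoherent directions push the energy toward 1, matching the "higher energy = more invariant" reading given in the rebuttal.

```python
# Toy illustration, not the paper's formula: score how "aligned" the
# embedding displacements induced by varying one attribute are. The energy
# is 1 minus the norm of the mean unit displacement, so perfectly aligned
# fields score 0 and fully cancelling directions score 1.
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def alignment_energy(displacements):
    units = [unit(d) for d in displacements]
    mean = [sum(col) / len(units) for col in zip(*units)]
    return 1.0 - math.sqrt(sum(x * x for x in mean))

aligned = [(1.0, 0.0), (2.0, 0.0), (0.5, 0.0)]  # all point the same way
opposed = [(1.0, 0.0), (-1.0, 0.0)]             # directions cancel out
print(alignment_energy(aligned), alignment_energy(opposed))  # 0.0 1.0
```

Under this toy reading, an attribute whose small variations move embeddings in unpredictable directions (high energy) is one the model is comparatively invariant to.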
Summary: The paper describes a multi-scale geometric structure in the embedding space created by Face Recognition (FR) models' feature embeddings. The paper proposes a geometry-based approach to understand the influence of facial and image attributes on FR models. A physics-inspired alignment metric is also introduced. The main findings help in understanding the degree to which the models are invariant across various attributes, leading to deeper interpretability of the models' strengths and weaknesses. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: There are no theoretical claims and proofs given in the paper. The paper defines metrics and formulas to obtain numbers for experimental analysis. Experimental Designs Or Analyses: Yes. The macroscale and microscale analyses make sense, as they look at different scales with regard to identity: global (across multiple identities) vs. local (within one identity). Supplementary Material: No. I didn't review the supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper provide some basics and metrics that can be used to understand any issues of the pre-trained FR models and how fine-tuning may help to improve them. Essential References Not Discussed: None Other Strengths And Weaknesses: One potential strength of the invariance energy measure is that it could help guide the fine-tuning process to achieve a better FR model. However, the paper has some weaknesses/limitations as follows. 1. Lack of further analysis on some attributes overlapping between the macroscale and microscale analyses, e.g. hair color, age, expression. 2. Missing a reference to a sub-figure in the paper, i.e. Fig. 1c. Other Comments Or Suggestions: There should be a section concluding on the influence of attributes at both the macroscale and microscale, with specific examples and further analysis as suggested in the weaknesses section. Questions For Authors: 1. 
Why are only two FR models chosen for the macroscale and microscale analysis in the paper? How about the other models shown in Table 1? 2. What is the architecture of the ArcFace used in the analysis? Is it ResNet18 or ResNet50? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: 1. *"There should be a section concluding on the influence of attributes at both the macroscale and microscale, with specific examples and further analysis as suggested in the weaknesses section... Lack of further analysis on some attributes overlapping between the macroscale and microscale analyses, e.g., hair color, age, expression."* Thank you for the interesting comment. We don't have a detailed answer for the current work, as the data type differs between the macroscale and microscale analyses (binary vs. continuous); in addition, for the macroscale the attributes come from CelebA metadata, whereas for the microscale the attributes are derived from GAN-Control; therefore, we think the mapping between both experiments is not straightforward. In general, however, the attributes at the micro- and macroscale are explored in different ways, even if they share the same name. The microscale analysis describes the effect of *small* variations of images of the same identity, while the macroscale describes the effect of *large* variations for images belonging to different identities. 2. *"Missing reference to Fig. 1c."* Thank you for the suggestion. We will include the missing reference in Sec. 3.1. 3. *"One potential strength of the invariance energy measure is that it could help guide the fine-tuning process to achieve a better FR model."* We thank the reviewer for the comment. Indeed, one of the possibilities for making use of the proposed microscale invariance energy measure is to improve recognition performance after measuring the invariance energy on meaningful attributes. In particular, as suggested, we can think of using the invariance measure to drive further fine-tuning on specific attributes. 4. *"Why are only two FR models chosen for the macroscale and microscale analysis in the paper? 
How about the other models shown in Table 1?"* For the macroscale experiment, we filled the gap and conducted the experiment with all the models mentioned, as it is computationally feasible. Results are reported in the updated Table 2 in this anonymized repository: https://shorturl.at/12ZFg. The conclusions remain consistent: the embedding spaces' macroscale structures are most significantly influenced by the attributes with the lowest intra-entropy, i.e., those most deterministically linked to an identity. For the microscale experiment, we computed the energy for four models (AdaFace, FaceNet, SphereFaceR, ArcFace-ResNet18), as shown in Figure 12 of the Appendix. However, due to computational constraints, we performed the fine-tuning validation only on two of the best-performing models. A comprehensive analysis of all combinations of losses and backbone architectures for the FR models is beyond the scope of this paper. We believe the results discussed are sufficient to demonstrate that the proposed microscale and macroscale analyses are significant and valuable for exploring the embedding space through interpretable attributes. 5. *"What is the architecture of the ArcFace used in the analysis? Is it ResNet18 or ResNet50?"* For the macroscale, Table 2 of the paper originally reported results for ArcFace with a ResNet50 backbone. The updated Table 2 in the anonymized repository (same link as in the previous answer) now reports results for both backbones and other models. For the microscale analysis, AdaFace and ArcFace have the ResNet18 backbone. --- Rebuttal Comment 1.1: Comment: Thanks for your feedback. It would be great for the authors to include those updated results in the main paper. Following up on point #1 raised: Thank you for the detailed explanation regarding the separate nature of the micro- and macroscale analyses and the current limitations (data type/source mismatch). 
While a direct comparison isn't feasible now, could you speculate on the potential relationship between attribute effects at these different scales? For future work aiming to overcome the binary vs. continuous challenge, do you think employing generation methods based on descriptive inputs, like those using large language models, could be a viable path to create controlled binary variations at the micro-level, thus allowing for a more direct comparison with the binary macro-scale findings you presented? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the comment. If accepted, we will include the updated results in the main paper. We speculate that the attributes affecting the macroscale the most are less impactful on the microscale. With all the limitations already discussed and only referring to the analyzed models, we can describe the most discriminative attributes between identities as more impactful for the macroscale. At the same time, we can speculate that the model has learned to be invariant to the same attributes (e.g., gender) inside the embedding region corresponding to a single identity since they should remain approximately constant for a specific identity. We note that some attributes are continuous and can be finely discretized (face orientation), whereas others are inherently categorical (e.g., wearing eyeglasses). For continuous attributes, we further note that while observing the microscale requires relatively small variations of the attributes, the macroscale is generally affected by large variations since it looks at the embedding space across identities (e.g., gender change = "big" attribute variation). However, for some categorical attributes (wearing a necklace), we might speculate that the microscale is affected, whereas the macroscale is not. The invariance energy can capture this behavior inside the identity regions at the microscale. 
Note also that in the microscale experiments with GanControl, we applied a refined discretization of the latent space to compute the local displacements relative to the single attributes (see Appendix C.2), attaining identity retention on augmented data. During the design of our experiments, at a preliminary phase, we considered using multimodal LLM generative models for image generation to obtain controlled data augmentation. We found it challenging to obtain a programmatically controllable and reproducible augmentation providing a refined discretization that represents local variations of intrinsically continuous attributes. On the other hand, at the macroscale, we think that using multimodal LLMs to obtain augmented images is more manageable. However, it remains challenging to achieve controllable and reproducible data augmentation. To make the LLM generation more robust, it is generally possible to use a classifier to verify the attributes of the generated images. Nonetheless, we observe that the process of attribute definition, at least for face images, can be inherently multimodal because it can involve describing images through text. Thus, we agree that using multimodal LLMs for refined augmentation to analyze the embedding space at the microscale and the macroscale is a meaningful suggestion for future research to enable direct comparison at the two scales. As already pointed out to Rev 43PU, we think that the analysis of embeddings in FR can be a playground for the development of ML methods useful beyond the FR domain, including controllable augmentation via multimodal LLMs.
Learning Monotonic Probabilities with a Generative Cost Model
Accept (poster)
Summary: The paper tackles the problem of enforcing monotonicity in predictions as a function of some variables if the true underlying function also abides by the monotonicity argument. Given the prediction target $y$ that is supposed to be monotonic with respect to the variable $r$, the paper relies on the intuitive observation that this monotonicity condition can be translated to some additional auxiliary variable as long as that variable satisfies a conditional independence condition. The paper then proposes a generative model to learn this auxiliary variable using variational inference. Experiments are done on a synthetic dataset as well as on other real-world datasets. Claims And Evidence: 1. Theoretical claims are well supported, and the paper is generally well-motivated in that sense. 2. I like the synthetic experiment on quantile regression to tractably study the problem setting; however, I could not verify the insights in Figure 6. The paper claims estimated quantile values maintain strict monotonicity; I think I'd be helped if the authors could provide an elaborate and annotated version of Figure 6 so that I can compare the violations of monotonicity for baseline methods. 3. In a similar sense, I think I'd be helped if more intuition could be provided for the real-world dataset experiments in Section 5.2. The paper currently compares standard metrics like AUC and RMSE, but I'd appreciate it if the authors could clarify how the monotonicity assumption or requirement factors in to improve these metrics, as the paper claims or the experiments suggest. Methods And Evaluation Criteria: I like the synthetic experiment as a tractable study, and it makes a lot of sense to study quantile regression as a test-bed for monotonicity (however, I believe the experimental insights can be presented more meaningfully), and I would appreciate clarification on the evaluation on real-world datasets (see above).
I'd personally also like to see comparisons on the efficiency or the computational aspects of the inference mechanism as proposed in the paper compared to the baselines. I see Table 2, and to me, the reported numbers live close to each other, or the improvements are not that significant; I don't want to play devil's advocate here as I don't necessarily think that's a strong concern, but I'd love to get some more insights into what other potential improvements the proposed methodology offers: computational, ease of applicability, etc. Theoretical Claims: While I do get the intuition behind their theoretical claims (Lemma 4.1 and Lemma 4.2), I haven't very rigorously verified the notational description of the proofs. But to me the results are basic (in a positive sense) and correct. Experimental Designs Or Analyses: Some questions remain (see above). Supplementary Material: No. Relation To Broader Scientific Literature: The paper presents an intuitive insight and builds an inference mechanism around it to enforce monotonicity in the probabilistic predictions. Compared to the literature in the paper, it Essential References Not Discussed: None that I'm aware of. Other Strengths And Weaknesses: The paper is generally written well, but maybe the proofs can go to the appendix, to make space for more experimental insights. In addition to that, the paper is a bit heavy on the notation, and could use some intuition to motivate the main insights. And maybe positioning it more broadly in the current literature could help strengthen the contributions. Other Comments Or Suggestions: None. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
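The quantile-regression test-bed discussed in this review can be made concrete with a small sketch of the pinball (quantile) loss and of the monotonicity property that estimated quantile curves should satisfy; this is an illustrative aside, not code from the paper:

```python
import numpy as np

def pinball_loss(y_true, y_pred, r):
    """Pinball (quantile) loss for quantile level r in (0, 1); minimizing it
    over a constant prediction recovers the r-th empirical quantile."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(r * diff, (r - 1) * diff)))

# Monotonicity of true quantiles: for r2 > r1, the r2-quantile must lie at or
# above the r1-quantile -- the property the estimated curves should preserve.
rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
levels = [0.1, 0.3, 0.5, 0.7, 0.9]
qs = [np.quantile(y, r) for r in levels]
assert all(a <= b for a, b in zip(qs, qs[1:]))
```

Independently trained quantile estimators carry no such guarantee, which is why "quantile crossing" is the natural monotonicity violation to look for in the baselines.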
Rebuttal 1: Rebuttal: Thank you for the reviews and valuable suggestions. Here are our responses to the concerns raised:

# Answer 1: Clarification on Figure 6

The scatter plot in the background represents the training instances $(x_i, y_i)$, with $p(y|x)$ defined in the southeast corner of page 7. Consider $y_r|x$ as the $r$-th quantile of $y$ given $x$; thus, $y_r$ is a function of $x$, and $(x, y_r)$ can form a curve $\\Gamma_r$ on the $x$-$y$ plane. For $r_2>r_1$, the monotonicity of quantiles ensures that the curve $\\Gamma_{r_2}$ lies above the curve $\\Gamma_{r_1}$. In Figure 6, five estimated quantile curves (red curves) are plotted for $r$ in $(0.1, 0.3, 0.5, 0.7, 0.9)$ using different monotonic modeling methods at different training stages. Each column signifies a distinct modeling approach, and each row denotes a training stage. An accurate quantile value approximation should result in each estimated quantile curve being close to its actual quantile curve. Therefore, the estimated curves should:
- Not have excessively narrow gaps between them.
- Have $\\Gamma_{0.1}$ and $\\Gamma_{0.9}$ adjacent to the edges of the background scatter plot, with limited outliers.

Observing Figure 6 reveals that:
- The red curves for PosNN, MM, SMM, and SMNN (mostly monotonic-by-construction methods) are clustered in certain areas.
- CMNN, Hint, and PWL (mostly monotonic-by-regularization methods) exhibit too many outliers.
- GCM displays distinct gaps between the red curves and the fewest outliers.

# Answer 2: More Intuition for the Experiments

**Comparison between GCM and IGCM**: Our analysis reveals that within public datasets, the IGCM outperforms the GCM. This improved performance is attributed to the fact that real-world scenarios do not always adhere to strict monotonic relationships; for instance, while BMI is correlated with diabetes, it is not necessarily causative. Typically, the selection of monotonic features is informed by experience or statistical analysis.
Under these circumstances, implicit monotonic assumptions tend to be more applicable. Our experimental data bolsters this assertion, as IGCM demonstrates superior performance to GCM in five of the six datasets. **Comparison between strict monotonic methods**: The strict monotonic methods evaluated include PosNN, MM, SMM, CMNN, SMNN, and the proposed GCM. The basic PosNN shows poor performance across most datasets, indicating that merely establishing a positively weighted neural network is insufficient for monotonic problems. Interestingly, the traditional MM method performs commendably, and its successor, SMM, exhibits a correlation, indicating that the core min-max architecture proves effective in optimizing the monotonic network. Recent developments like CMNN and SMNN introduce innovative monotonic network architectures, outperforming MM in several experiments. Our proposed GCM method consistently exhibits strong performance across all datasets, suggesting that reframing the monotonic problem as a generative one yields enhanced results compared to simply altering the network structure. **Comparison between nonstrict monotonic methods**: Hint, PWL, and IGCM are non-strict monotonic methods, yet only IGCM addresses the non-strict issue through implicit monotonic modeling. This approach consists of two strict monotonic challenges: $p(r|k)$ and $p(y|k)$, establishing a nonstrict monotonic link between $y$ and $r$, which proves beneficial when compared to traditional non-strict monotonic methods. # Answer 3: Other Potential Improvements **Rapid Decision Making**: As illustrated in Appendix D, GCM and IGCM provide the quickest inference for multiple revenue variables, providing a clear advantage in decision-making models. For instance, consider a robot tasked with determining a sequence of actions for a particular mission, where each action is directly proportional to total energy usage. 
If there is a model capable of forecasting energy consumption to help the robot minimize energy loss, this model must rapidly predict multiple action options. Here, the GCM method proves beneficial for swift decision making.

**Scalability**: GCM is inherently scalable. Unlike previous monotonic models that struggle to scale due to restrictions on non-monotonic normalization techniques (like LN, BN) and limitations on activation functions or linear projections, GCM can incorporate any structure to model $p(c|x,z)$ and $q(z|x)$. Thus, with more extensive datasets, GCM's scalability advantages may become more evident.

**Monotone Multitask Modeling**: For a set of tasks $y_1,\\cdots,y_n$ that are monotonic such that $(y_i=1)\\subset(y_{i+1}=1)$ and each $y_i$ is monotonic with respect to $r$, we can apply the monotonic cost variable sequence $c_1\\succ \\cdots \\succ c_n$ and let $y_i=\\mathbb I(r \\succ c_i)$ to address it.

We hope these clarifications address your concerns and demonstrate the validity and applicability of our proposed method. Thank you again for your valuable feedback.

---

Rebuttal Comment 1.1: Comment: Thanks for the response. I'm happy to keep my score.
Summary: The paper studies the problem of monotonic regression where the target variable should maintain a monotonic relationship with part of the input variables. It first establishes some analytical properties for the probability model underlying the monotonic regression and then it proposes a Bayesian network model to capture the relationships between different classes of variables. It provides numerical experiments to support the discussion. Claims And Evidence: Yes. I think all the claims are supported. Methods And Evaluation Criteria: The evaluation criteria make sense to me. But I disagree with the proposed method. If we step back, the monotonic regression can be viewed as a discriminative task, though there is the challenge of maintaining monotonicity. Yet the proposed solution essentially relies on a generative model, or, a probability model for the joint distribution of input variables and output variables, together with some latent variables. Generally, it's perceived that generative modeling tasks are more difficult than their discriminative counterparts. So I think the proposed method is to some extent misleading for future works or practical uses, in that it is too ambitious to assume that the joint probability distribution can be approximated/learned by the proposed probability model, and there is no way to know/check when it cannot be. Theoretical Claims: Yes, the claims are correct based on my check. Experimental Designs Or Analyses: The paper doesn't discuss the choice of functions in (4) and (8) in much detail. It mentions "All methods utilize the same foundational architecture: a three-layer perceptron network utilizing tanh activations". I think this part is done too casually. Is a three-layer network sufficient to serve as a generative model? Supplementary Material: I quickly scanned through the supplementary materials. Relation To Broader Scientific Literature: N/A. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: N/A.
Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and suggestions. We appreciate the opportunity to address the concerns raised in the review.

# Answer 1: Discriminative vs. Generative Models

We respectfully disagree with the assertion that generative models are inherently more challenging than discriminative models. The modeling difficulty depends on the task and the structure used. For example, naive Bayes, a generative model, is widely used for classification tasks and is comparable to discriminative methods such as logistic regression. In our paper, we have shown that a discriminative model $\\Pr(y=1|x,r)=G(x,r)$ can be transformed into a generative model $p(x,c)=p(c|x)p(x)$ using the relationship established in Lemma-2: $$ \\Pr( c \\prec r| x) =\\Pr( c \\prec r| x,r) = G(x, r), $$ and consequently $$ p( c | x) =\\partial G(x, c) / \\partial c. $$ This demonstrates that in monotonic modeling tasks, these approaches can be interconverted, so it is unfair to claim out of hand that generative methods are more difficult in monotonic modeling. Moreover, traditional monotonic networks face challenges such as requiring all weight matrices to be positive [1] and all activations to be monotonic and not fully convex or concave [2]. Common normalization techniques like layer-norm and batch-norm are also not allowed since they are not monotonic operators. As a result, it is hard to train a deep monotonic network. However, with the cost generative model, we can easily build a deep network for generating the cost variable $c$ and obtain the target variable $y$ via $\\{y=1\\}=\\{c\\prec r\\}$. This allows us to use unconstrained weight matrices, non-monotonic activations such as SiLU and GELU, fully convex activations such as ReLU and softplus, and normalization techniques widely used in deep networks such as LN and BN. Training this generative network is also straightforward following the loss function demonstrated in (7).

[1] Monotonic Networks.
[Link](https://proceedings.neurips.cc/paper_files/paper/1997/file/83adc9225e4deb67d7ce42d58fe5157c-Paper.pdf)
[2] Constrained Monotonic Neural Networks. [Link](https://arxiv.org/pdf/2205.11775)

# Answer 2: Challenge of Learning the Joint Probability Distribution

Previous studies have shown that learning the joint probability can perform well in machine learning tasks, such as learning the joint distribution of an image via generative methods. For simple tasks such as the MNIST dataset, a simple four-layer VAE [3] is sufficient, while more complex tasks like CIFAR10 might require deeper models such as DDPM [4]. This paper has shown that the cost variable $c$ is essential for all monotonic problems $\\Pr(y=1|x, r)$, where $r$ is the monotonic revenue variable, $x$ is the non-monotonic variable, and $y$ is the response variable determined by $c\\prec r$. Since $x,r,c$ are all real vectors, we can adopt model structures similar to image generative models. To design the generative model for $x,r,c$, we have chosen a basic structure similar to the VAE, as the tasks in our paper are not overly complex. Although $c$ is not directly observable, we can evaluate the model for $c$ using the observable variable $y=\\mathbb I(r\\succ c)$, and the testing AUC/ACC of $y$ indicates the performance of our generative model of $c$. Additionally, the ELBO $\\mathbb E_{z\\sim q}\\log p(x,r,y|z)-D_{KL}(q(z|x)\\|p(z))\\leq \\log p(x,r,y)$ serves as a lower bound for the evidence $\\log p(x,r,y)$, providing another evaluation metric.

[3] Auto-Encoding Variational Bayes. [Link](https://arxiv.org/abs/1312.6114)
[4] Denoising Diffusion Probabilistic Models. [Link](https://arxiv.org/abs/2006.11239)

# Answer 3: Choice of Functions in (4) and (8)

We employ the standard normal distribution for the priors of $z_1$ and $z_2$. Each conditional likelihood $p(a|b)$ is expressed as: $$ \\mu_a, \\log \\sigma_a = \\text{MLP}(b), \\ \\ a \\sim \\mathcal N (\\mu_a, \\sigma_a^2).
$$ In the GCM model, we use the reparameterization trick to derive the variable $a$, or we apply the CDF function of the normal distribution to compute the probability $\\Pr(a<a_0)$. The MLP here consists of one or two layers. We will provide further details in the appendix of future revisions. # Answer 4: Sufficiency of a Three-Layer Network The number of layers is not a constraint for generative models. As mentioned in Answer 2, the VAE applied to the MNIST dataset uses only two layers with tanh activation functions and two affine layers without activations, yet it successfully generates high-quality handwritten numerals. In our experiments, the dimension of the revenue variable (see Table 3 on page 8) is less than 10, much smaller than the pixel count of an MNIST image. Therefore, we chose not to use an excessively deep generative network for our study. We hope these clarifications address your concerns and demonstrate the validity and applicability of our proposed method. Thank you again for your valuable feedback.
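As a numerical illustration of the Gaussian parameterization in Answer 3, here is a minimal sketch of the reparameterization trick and the CDF-based probability; the function names and the single affine head are placeholders, not the authors' implementation:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def gaussian_head(b, W, bias):
    """Affine layer producing (mu, log_sigma) for a conditional p(a | b)."""
    out = b @ W + bias
    mu, log_sigma = np.split(out, 2, axis=-1)
    return mu, log_sigma

def sample_reparam(mu, log_sigma):
    """Reparameterization trick: a = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(log_sigma) * eps

def prob_less_than(a0, mu, sigma):
    """Pr(a < a0) for a ~ N(mu, sigma^2), via the standard normal CDF."""
    z = (a0 - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))
```

Sampling via `sample_reparam` keeps the draw differentiable with respect to `mu` and `log_sigma`, which is what makes the VAE-style training described above possible; `prob_less_than` corresponds to the CDF route for computing $\Pr(a<a_0)$.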
Summary: The paper introduces a new generative framework to model monotonic probabilities by reformulating the traditional problem into learning a latent cost variable. Instead of directly designing a monotonic function, the authors propose that for a binary outcome, the probability is given by the event that a latent cost variable is dominated by a revenue variable. Two models are presented: (1) GCM (Generative Cost Model): Targets strict monotonicity by modeling the latent cost variable via two independent latent variables. (2) IGCM (Implicit Generative Cost Model): Extends the approach to capture cases where monotonicity reflects correlation rather than a strict order. The paper validates the approach through simulated quantile regression and experiments on multiple public datasets. Results indicate that GCM/IGCM achieve superior performance compared to existing monotonic neural network methods. Claims And Evidence: I think the authors' arguments are clear and convincing. Methods And Evaluation Criteria: I think both the proposed methods and the evaluation criteria are sensible for addressing monotonicity in probabilistic predictions. Theoretical Claims: The proofs are logically coherent; they make sense to me. Experimental Designs Or Analyses: I think the experimental design and analyses are sound and effectively demonstrate the strengths of the proposed models. Supplementary Material: I roughly went through the appendix. Relation To Broader Scientific Literature: I think the authors successfully relate their contributions to prior findings and articulate how their work extends and improves upon existing methods. Essential References Not Discussed: Although I am not familiar with the relevant literature, the references cited in the paper provide sufficient context to help me understand its contributions. Other Strengths And Weaknesses: - Strengths: The paper’s main strengths lie in its originality and rigor.
It introduces a novel formulation by recasting the monotonic probability problem as one of learning a latent cost variable, thereby addressing both strict and implicit monotonicity within a unified generative framework. Their contributions are supported by solid theoretical derivations and extensive experiments on simulated and real-world datasets. - Weaknesses: Assumptions such as a bounded revenue variable and conditional independence may not always align with practical scenarios. I am wondering how sensitive the methods are to these assumptions; what would happen if the model assumptions are misspecified? Other Comments Or Suggestions: N/A Questions For Authors: 1. Could the authors clarify the practical implications of the conditional independence assumption? In particular, what types of real-world datasets or scenarios might violate this condition, and how does the model’s performance degrade in such cases? 2. The model involves variational inference with multiple latent variables, and the results appear sensitive to hyperparameter choices. Could the authors provide additional ablation studies on how these choices affect both training scalability and performance of the method? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the reviews and valuable suggestions. Here are our answers to the concerns raised: # Answer 1: Bounded Revenue Variable This issue can arise in practice, for instance, when our model is trained with $r \\in (-10, 10)$, but is used for inference with values $r \\gg 10$. Such a discrepancy can lead to an out-of-distribution (OOD) problem, resulting in $\\Pr(y=1|x,r)=\\Pr(r>c|x)\\to1$. We address this by hypothesizing a non-zero probability $\\Pr(c=+\\infty|x)=p_0>0$, which yields a bounded estimate: $$ \\Pr(y=1|x,r)=\\Pr(r>c|x)<1-p_0. $$ This upper bound is independent of $r$. Alternatively, we can replace the revenue variable $r$ with a bounded monotonic function $h(r)$. Consequently, the probability $\\Pr(y=1|x,h(r))$ will be monotonic with respect to $h(r)$, effectively transforming the original unbounded problem into a bounded one. # Answer 2: Conditional Independence The conditional independence $c\\perp r\\mid x$ is crucial for solving the strict monotonic issue of $p(y|x,r)$ by achieving $\\Pr(c\\prec r|x,r)=\\Pr(c\\prec r|x)$, with $r$ representing the revenue variable and $y$ as its monotonic outcome. As demonstrated in Lemma-2, for any strictly monotonic probability $p(y|x,r)$, such a latent variable $c$ is guaranteed to exist, allowing us to construct the $c$ model without worrying about its practical interpretation. Conversely, in non-strict monotonic problems, finding such a $c$ is impossible. For example, a consumer might prefer buying a product at a lower price, but if the price is excessively low, it might seem dubious, resulting in the consumer not buying it, illustrating that $c$ (consumer cost) is affected by $r$ (inverse of the product price). In this case, an implicit monotonic model, detailed in section-4.4, can be utilized. 
Here, the concealed kernel variable $k$ denotes the product's authenticity, and then $r$ shows monotonicity with respect to $k$, enabling us to identify a cost variable $c$ so that $c\\perp k\\mid x$, instead of $c\\perp r\\mid x$. Here, $c$ and $k$ are both latent variables, so their modeling need not follow the practical interpretation of $r$ and $y$.

# Answer 3: Ablation on Hyperparameters in Variational Inference

The ablation of $D$, the latent variable dimension, and $N$, the sampling number, is detailed in Table-5 and Table-6 within Appendix C. Testing results for GCM using the Adult dataset are as follows:

| | D=4 | D=8 | D=12 | D=16 |
| ---- | ------ | ---- | ----- | ---- |
| N=8 | 0.7834 | 0.7840 | 0.7842 | 0.7857 |
| N=16 | 0.7833 | 0.7824 | 0.7835 | 0.7851 |
| N=24 | 0.7833 | 0.7825 | 0.7816 | 0.7842 |
| N=32 | 0.7844 | 0.7822 | 0.7836 | 0.7843 |

The findings indicate that $D$, the latent dimension, impacts the test AUC, whereas the sampling number $N$ plays a lesser role. Another critical hyperparameter is $\\beta$, utilized in the ELBO as per the $\\beta$VAE approach, where the modified ELBO is expressed as follows: $$ \\text{ELBO}_\\beta=\\mathbb E_q \\log {\\text{Pr}( c\\curlyvee_y r| z, r)p({x},{r}| z)}-\\beta\\, \\mathbb E_q \\log \\frac{q( z| x)}{p( z)}. $$ The test outcomes for varying values of $\\beta$ are:

|model |AUC |
|----|-----|
|$\\beta=0$ |0.7844|
|$\\beta=1$ |0.7828|
|$\\beta=2$ |0.7844|
|$\\beta=3$ |0.7824|
|$\\beta=4$ |0.7841|
|$\\beta=5$ |0.7830|

The results reveal that $\\beta$ values within $\\{0,\\cdots,5\\}$ yield comparable results, with $\\beta=0$ achieving the best performance. This is attributed to the test AUC metric being primarily focused on the accuracy of $y$, rather than the whole generative model $p(x,r,y,z,c)$. We hope these clarifications address your concerns and demonstrate the validity and applicability of our proposed method. Thank you again for your valuable feedback.
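For concreteness, with the standard-normal prior mentioned in the rebuttal, the KL term of the β-weighted ELBO has a closed form for a diagonal Gaussian posterior; a minimal sketch with placeholder names (illustrative, not the authors' code):

```python
import numpy as np

def kl_to_standard_normal(mu, log_sigma):
    """KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian, summed over
    dimensions: 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * float(np.sum(np.exp(2 * log_sigma) + mu**2 - 1.0 - 2 * log_sigma))

def beta_elbo(expected_log_likelihood, mu, log_sigma, beta=1.0):
    """ELBO_beta = E_q[log p(data | z)] - beta * KL(q(z|x) || p(z));
    beta = 0 drops the KL regularizer entirely."""
    return expected_log_likelihood - beta * kl_to_standard_normal(mu, log_sigma)
```

With `beta=0` the objective reduces to the reconstruction term alone, which matches the rebuttal's observation that test AUC depends mainly on the accuracy of $y$ rather than on the full generative model.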
R3DM: Enabling Role Discovery and Diversity Through Dynamics Models in Multi-agent Reinforcement Learning
Accept (poster)
Summary: This paper tackles the problem of high-quality multi-agent reinforcement learning. Specifically, it allows individual agents to learn unique policies in order to better collaborate to achieve goals. The authors introduce a new approach to training, R3DM, which utilizes contrastive learning to encourage individual agents to learn different policies based on their history and other agents in their group. The method is tested on several problems and appears to outperform standard methods. ## Update after rebuttal The initial review was positive and I did not feel the need to change it after the rebuttal. Claims And Evidence: The authors provide adequate evidence for each of their claims. Methods And Evaluation Criteria: The methods appear to be sufficient and make sense for the problem being studied. Theoretical Claims: There are some theoretical claims made but they do not form the basis of the results. Experimental Designs Or Analyses: The experiments are standard in the field and therefore sound. Supplementary Material: The supplementary information contains relevant background and proofs for the mathematics introduced in the manuscript. Relation To Broader Scientific Literature: The methods introduced are of great interest to the scientific community, specifically, encouraging diverse behavior in multi-agent systems without introducing the huge overhead of individual models for each of them. Essential References Not Discussed: I do not believe any essential references were not discussed. Other Strengths And Weaknesses: The study appears quite complete. It is well written and I find the study to be innovative. Other Comments Or Suggestions: Outside of the questions posed, I do not have suggestions. Questions For Authors: 1. Can you quantify the role the contrastive learning is playing? Does this need to be performed throughout all of the training or can it be done later once shared policies have been learned? 2.
What is the cost increase of using the algorithm? 3. How diverse of a strategy can emerge from this algorithm? 4. Have the authors compared their approach against using a shared base policy with small single layers for individual agents? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback on the paper and for raising interesting and relevant questions. We aim to highlight this clearly in the camera-ready version of the paper.

1) Impact and Details on Contrastive Learning

Contrastive learning plays a crucial role in R3DM by deriving intermediate role embeddings, essential for enabling role-specific intrinsic rewards that drive diverse future behaviors. Contrastive learning is conducted throughout the training process. Specifically, following the training protocol outlined in ACORM (Hu et al., 2024), contrastive learning updates occur after every 100 episodes of data collection. We refer the reviewer to the ablation study under the rebuttal for reviewer STTJ for more details.

2) Compute Cost Increase

The proposed R3DM algorithm does not incur additional computational overhead during the testing or deployment phase, as only the learned policy is executed. The overhead introduced by R3DM arises solely during training, specifically due to the learning of an additional dynamics model required for computing intrinsic rewards. This extra step significantly enhances coordination through role-specific behavior. It moderately increases the training time of R3DM by 50-75% compared to the baselines. With more efficient implementations of the dynamics model on frameworks such as JAX, we believe that the training run-time can be optimized further.

3. Diversity of Strategy

Our approach inherently encourages strategic diversity as a mechanism to optimize coordination objectives, which in turn improves sample complexity and efficiency. Qualitative analyses provided in the paper illustrate that R3DM successfully facilitates the emergence of diverse and sophisticated strategies.
For instance, agents have learned distinct tactics such as deliberately distracting subsets of enemy agents to weaken their overall strength, thereby enabling other teammates to effectively neutralize remaining threats. 4. Have the authors compared their approach against using a shared base policy with small single layers for individual agents? Yes. In our experiments, the baseline QMIX algorithm inherently implements a shared local critic structure that agents use to formulate policies for choosing the desired actions. This implementation persists in R3DM, where we use a shared local critic and the shared role embedding network across multiple agents. We provide the anonymized code here https://anonymous.4open.science/r/R3DM-F1A0/README.md. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. I don't feel I need to change the recommendation. --- Reply to Comment 1.1.1: Comment: Thank you very much for the opportunity to improve our work. We will revise the manuscript to address your questions.
Summary: The paper proposes a method for improving the ability of agents to learn to effectively coordinate. The method is based on an existing idea that clusters learning agents into roles based on their observation history using contrastive learning but extends this idea by encouraging diversity among the roles to aid exploration and role selection. The paper evaluates the method in SMAC and SMACv2 and compares against several baselines, finding that their method outperforms the baselines in sample complexity in some scenarios (but rarely underperforms). Claims And Evidence: Overall, the paper does a poor job of communicating what its contributions are, especially in relation to ACORM. Unless I've misunderstood, the contrastive learning objective and clustering strategy (Sections 4.1 and 4.2) are identical to ACORM, but this is not made clear in the paper. If this is the case, using the terminology "building on" and "inspired by" is misleading in terms of the contribution of the paper, and 4.1 and 4.2 should be phrased as preliminary or existing work. If the details in 4.1 and 4.2 do indeed differ from ACORM, these differences should be made clear. The claim that R3DM improves in sample complexity over baselines is relatively convincing. The experiments are limited to the SMAC and closely related SMACv2 environments, so the generalizability of the claim is not clear. Methods And Evaluation Criteria: SMAC and SMACv2 are good domains for testing algorithm coordination abilities. A broader set of environments would substantially help strengthen the claims. Theoretical Claims: I did not check the correctness of any proofs. Experimental Designs Or Analyses: All of the experimental claims are somewhat limited by the fact that only SMAC and SMACv2 are used as evaluation environments. The experimental claim that R3DM is more sample efficient than existing algorithms is decently well supported, although an improvement over ACORM is limited to only a few of the scenarios.
However, it's not clear that R3DM converges to better solutions (rather than just more quickly) than other methods because training curves are cut off before convergence in most scenarios. It would be valuable to see experiments run to a larger number of environment steps. Supplementary Material: No supplementary material was provided. It would be good to see source code to reproduce the experiments. Relation To Broader Scientific Literature: The key contribution of the paper is the extension of ACORM to include a new intrinsic reward to optimize role-based diversity. The paper's evaluations show that this helps in the SMAC and SMACv2 environments. Essential References Not Discussed: One of the key points made in the paper is that the diversity of trajectories across roles can help improve learning. Existing literature [1] has explored the problems with entropy-based diversity in coordination settings. It would be interesting to see a discussion with respect to this line of work. [1] Cui, Brandon, et al. "Adversarial diversity in Hanabi." The Eleventh International Conference on Learning Representations. 2023. Other Strengths And Weaknesses: Strengths: The paper proposes a new intrinsic reward that builds on top of the SOTA method for clustering-based coordination methods, ACORM. The idea is theoretically grounded and the experiments demonstrate the method improves the SOTA. Weaknesses: The experiment settings are limited. The improvements over baselines are quite small. Other Comments Or Suggestions: 130 col. 2: "through [an] information-theoretic" 355 col. 2: spacing typo Questions For Authors: What are the "test returns" recorded in SMACv2? (mentioned on line 353) I notice that the clustering of roles changes throughout the episode visualized in Figure 4. For example, the dead agents switch between cluster 1 and cluster 2 from step 35 to 50. Also, the number of agents in the other clusters changes over time. Why does this happen?
What are the error bars in the learning curve figures? Please specify this in the paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their time in providing detailed feedback on the paper to better highlight its key contributions and novelties. We detail our responses to the questions below. 1) Clarification of the contributions Our primary contribution lies in introducing a novel information-theoretic objective specifically tailored for role-based multi-agent reinforcement learning (MARL). We theoretically demonstrate that this objective can be naturally decomposed into two complementary sub-objectives: one that derives intermediate role embeddings from observation-action histories, which can be effectively optimized using contrastive learning, and another that refines these embeddings by aligning them more closely with future behaviors through intrinsic rewards. This decomposition ensures that roles derived from past interactions are not only compact representations of historical data but also predictive of future actions, thereby enabling role-specific coordination and specialization. While our framework incorporates contrastive learning and clustering techniques borrowed from ACORM, our approach subsumes them in order to optimize the broader proposed objective; they are integral to deriving the intermediate role embeddings needed for this optimization. For the sake of completeness and transparency, we have included them in the methodology section of our paper. To avoid any potential confusion, we will make this distinction clearer in the final camera-ready version of the paper by explicitly identifying these elements as preliminary or existing work.
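The second sub-objective above, an intrinsic reward contrasting role-agnostic and role-conditioned predictions of future behavior, can be sketched numerically. The sketch below is purely illustrative and is not the authors' implementation: it assumes both dynamics models emit diagonal-Gaussian predictions of the next observation, so the entropies have a closed form.

```python
import numpy as np


def gaussian_entropy(sigma):
    """Entropy (in nats) of a diagonal Gaussian with std vector `sigma`."""
    return 0.5 * np.sum(np.log(2.0 * np.pi * np.e * sigma ** 2))


def intrinsic_reward(sigma_agnostic, sigma_conditioned):
    """Entropy of the role-agnostic prediction minus the entropy of the
    role-conditioned prediction: positive when conditioning on the role
    makes the agent's future behavior more predictable."""
    return gaussian_entropy(sigma_agnostic) - gaussian_entropy(sigma_conditioned)


# Toy example (hypothetical std values): the role-conditioned dynamics
# model is more certain about the next observation than the role-agnostic one,
# so the intrinsic reward is positive.
sigma_agnostic = np.array([1.0, 1.0, 1.0])
sigma_conditioned = np.array([0.5, 0.5, 0.5])
r_int = intrinsic_reward(sigma_agnostic, sigma_conditioned)
```

In this toy case the reward reduces to the log-ratio of the predictive stds summed over dimensions; maximizing it encourages roles that make future trajectories diverse overall yet predictable given the role.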
2) Source Code We have uploaded and anonymized source code with the necessary data used for the plots in the paper and rebuttal (ablation studies for reviewer STTJ) to the best of our ability here: https://anonymous.4open.science/r/R3DM-F1A0/README.md 3) Clarification with respect to Entropy-based Methods [1] demonstrates that training a population of policies through adversarial diversity improves robustness by ensuring cross-policy compatibility. However, their approach requires maintaining multiple policies per agent to achieve competitive performance across permutations of partner policies, which increases complexity and computational overhead. We believe that this approach solves a related problem in the field of ad hoc teamwork [2]. In contrast, our single-policy learning paradigm focuses exclusively on rapid task-reward convergence by leveraging role-driven trajectory diversity rather than optimizing for robustness across diverse partner policies. Entropy-based exploration methods such as CDS (which we benchmark against) are limited to static role assignments based on the identities of agents, which result in fixed behavior distributions that fail to adapt to evolving team dynamics. R3DM addresses this limitation by employing a mutual information objective (Theorem 4.1) to induce role emergence dynamically from observation-action histories. This allows agents to learn complementary roles based on the similarity or divergence of their trajectory histories. Additionally, R3DM’s intrinsic rewards are designed to capture diversity, specifically in future expected behaviors conditioned on the agent’s role, ensuring that roles evolve naturally and adaptively during interactions with the environment. 4) Test Returns The "test returns" mentioned in line 353 refer to cumulative rewards accumulated during testing episodes. We will clarify and correct this terminology in the final version of the paper to avoid ambiguity.
5) Changing of role-clusters Roles dynamically change throughout the episode because they depend on each individual agent’s evolving observation-action history. Thus, as agents interact with the environment and their experiences diverge over time, their corresponding roles naturally adjust, causing fluctuations in role cluster assignments, even for agents that become inactive or "dead". 6) Error Bars The error bars depicted in the learning curve figures represent the standard deviation computed across the 5 different random seeds used in our experiments. We will explicitly state this detail in the final manuscript to clarify the interpretation. [1] Cui, Brandon, et al. "Adversarial diversity in hanabi." The Eleventh International Conference on Learning Representations. 2023. [2] Mirsky, Reuth, et al. "A survey of ad hoc teamwork research." European conference on multi-agent systems. Cham: Springer International Publishing, 2022.
Summary: The authors propose R3DM, a new role-based MARL framework that enhances coordination by learning roles that shape agents' future behavior through maximizing mutual information and using intrinsic rewards derived from dynamics models. Claims And Evidence: The core claims regarding R3DM's ability to learn effective coordination and outperform baselines on SMAC and SMACv2 are strongly supported by clear and convincing evidence through quantitative results, qualitative analysis, and a theoretical framework. The acknowledged limitations suggest avenues for future improvement but do not undermine the validity of the findings. Methods And Evaluation Criteria: Generally yes, but: 1. The authors could have also shown the effect of varying the number of roles. 2. There is a lack of ablation studies, so it is unclear which components of the method are effective. Theoretical Claims: I did not check the proofs in the appendix in detail. Experimental Designs Or Analyses: There is a lack of ablation studies, so it is unclear which components of the method are effective. Supplementary Material: I only skimmed through the supplementary material. Did not look into the proofs in detail. Relation To Broader Scientific Literature: The method leverages and extends existing ideas in role-based MARL and contrastive learning, while introducing a novel approach to intrinsic reward design based on predicting the impact of roles on future trajectories using a dynamics model. This forward-looking perspective on role learning seems to be the key contribution. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well motivated in an intuitive manner and presents a novel method grounded in an information-theoretic objective. The results are impressive, but there is a lack of ablation studies and a lack of insight on how varying the number of roles affects performance. Other Comments Or Suggestions: 1.
An overall block diagram describing the method would have been useful. 2. Regarding the intrinsic reward: while the mathematical derivation is provided in the Appendix, the core idea of how such a reward encourages role-aligned diverse policies should be highlighted in the main text. 3. The qualitative results could be improved by visualising the learned policies. 4. Ablation studies showing the effect of each component of the proposed method are needed. Questions For Authors: 1. The motivating example is useful, but I was wondering: what if it is indeed better for both drones to target the same building (e.g., one drone alone cannot put out the fire)? This points to a more general question: does the approach take into consideration the difficulty of the task under consideration? How would one balance task difficulty with diversity? 2. The MI objective could be explained in further detail: why does this objective lead to the desired role properties? 3. Having a fixed number of roles is a limitation. However, while the paper shows good performance with a fixed number of roles, how would the performance vary with different numbers of predefined roles? 4. Having two dynamics models, one role-conditioned and one role-agnostic, could be expensive. Have the authors thought about how a single dynamics model may be used? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their detailed and thoughtful feedback, which has greatly contributed to improving the scientific rigour of this paper. We detail our responses to the questions below and add the required ablation studies. 1) Intrinsic Reward: We clarify the intuition behind our intrinsic rewards. The rewards encourage agents to diversify their future behaviors, conditioned explicitly on their assigned roles. Formally, the reward represents the difference between the entropy of future trajectories and the conditional entropy of these future trajectories given specific roles. Therefore, the intrinsic reward inherently promotes diversity by increasing the overall trajectory entropy, ensuring diverse future behaviors while simultaneously maintaining the coherence and alignment of behaviors with respect to roles. 2) Clarification on the MI Objective: The MI objective in R3DM is designed to ensure that an agent’s current role influences its future behavior while being grounded in its prior observations. This objective quantifies the dependency between the concatenated observation-action trajectory (which includes both the agent’s observation-action history and its future trajectory) and its role. Intuitively, maximizing this MI objective ensures that the roles derived from observation-action histories can effectively predict future actions and observations. This, in turn, enables better specialization within the team, which facilitates more sample-efficient multi-agent learning, as demonstrated in our experiments. 3) Ablations: We conduct 3 ablations on the design choices. The plots are in the link. https://drive.google.com/file/d/1b4IkSXhNhXFedi2-d8K8tTEJjCVz4hey/view?usp=sharing a) Impact of the reward imagination: We analyze the impact of the number of imagination steps in the role-conditioned future trajectory, which is used to predict intrinsic rewards. In R3DM, we use a single imagination step to compute intrinsic rewards.
We observe no statistically significant improvements when increasing the number of imagination steps to 2. However, with a higher number of imagination steps (e.g., 5 and 10), we observe a degradation in performance (more pronounced for 10 steps). We believe this degradation is due to compounding errors in the predictions by the model, which is conditioned solely on an agent’s past observations and actions. These errors lead to intrinsic rewards with higher variance and bias, ultimately degrading learning. b) Impact of the number of roles We vary the number of role clusters used in R3DM progressively from 2 to 8 for the 3s5z_vs_3s6z environment. We don’t observe a significant difference in final performance when varying the number of role clusters. We observe that learning with 3 role clusters is more sample efficient compared to other numbers of role clusters. c) Impact of Contrastive Learning (CL) We compare the influence of intrinsic rewards and CL on the performance of R3DM. The variant of our R3DM without intrinsic rewards is equivalent to ACORM, as our method builds upon it. Next, we evaluate a variant of our algorithm that includes intrinsic rewards but excludes CL, which is used to enable more distinct role clusters. We observe that excluding CL degrades performance. Interestingly, this variant still outperforms ACORM, underscoring the significant impact of our proposed intrinsic rewards. These rewards enhance the entropy of future trajectories based on roles while reducing their conditional entropy with respect to intermediate role representations. 4) Balancing task rewards with exploration diversity The reviewer raises an insightful point regarding scenarios where it would be optimal for multiple agents (e.g., drones) to collaboratively target the same objective due to task difficulty. Our proposed approach accounts for such task-specific complexities by integrating intrinsic rewards with task-specific rewards.
The balance between these two reward types is controlled by the hyperparameter $\alpha$, which can be tuned based on the requirements of the specific task. This design allows our method to flexibly adjust the trade-off between promoting diversity and fostering cooperation, depending on the inherent difficulty and coordination demands of the task. 5) Using a single dynamics model: The reviewer rightly points out that employing two separate dynamics models (one role-conditioned and one role-agnostic) increases training computational overhead. Importantly, this overhead is confined to the training phase, and dynamics models are not needed during the testing or inference phase. We have explored the possibility of using a single dynamics model with multiple forward passes using different sampled role embeddings to compute intrinsic rewards. However, empirically, we observed this approach to be computationally slower due to the necessity of these multiple forward passes when computing the intrinsic reward. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses. I have updated my score accordingly.
Enhancing Performance of Explainable AI Models with Constrained Concept Refinement
Accept (poster)
Summary: This paper proposes to tackle the potential gap between performance and understandability of concept embedding models by allowing the concept embeddings to be refined in a constrained manner, thereby providing more flexibility while retaining understandability. The authors have proposed a theoretical framework embedded in the IP-OMP framework and ran several experiments. Claims And Evidence: Honestly I find it very difficult to even identify the claims of the paper. It starts with an introduction that is well written and easy to follow, though it is somewhat generic in that it does not become clear what the authors are specifically tackling and why this is relevant. Sections 2 and 3 are then pretty much buried in mathematical statements without a high-level explanation of what they are for. The experiments look promising, but their significance falls somewhat short in how they are tied to the authors' claims (which I am still not 100% sure I have identified). I believe the claim is: "concept embedding models should allow for refining their embeddings, but these should only be refined in a small, constrained manner." In this context I think having a concrete example throughout the paper would really help in understanding the problem and its relevance in the first place, and what the method does. Another question I have is what the authors mean by "concept embeddings" in relation to previous work. It would be good if the authors defined this either in the introduction or in section 2. Methods And Evaluation Criteria: Please see my remarks in the previous box. In addition, it is not clear why the authors frame the problem within the framework of section 2, i.e. "predicting a random variable by sequentially selecting a list of related random variables, termed queries, and checking their values." However, this seems quite central to the work and more clarification would be necessary here.
Theoretical Claims: I have not been able to check the correctness of the theoretical claims. Experimental Designs Or Analyses: This seems fine to me; however, I would suggest moving some of the experimental details into the appendix in order to make room for embedding and discussing the experiments more in the context of the authors' claims. Supplementary Material: I briefly skimmed them. Relation To Broader Scientific Literature: This is a section that is largely missing in the paper. I really think it would help to move this from the supplements to the main paper. But in addition to this remark, I believe the section is very sparse; the majority of the referenced papers are repeatedly from Chattopadhyay et al. I consider it important to add references from the field of XAI, concept learning and dictionary learning to increase the breadth here, but also to make it more understandable how this work relates to them. Essential References Not Discussed: See above. Other Strengths And Weaknesses: The experimental results look valuable, but it is hard to understand them without understanding the claim and the importance of the underlying problem. Other Comments Or Suggestions: Please write a discussion and conclusion of your work. Their absence makes it additionally difficult to understand what the paper is about. Overall the authors should rework the presentation of their work. What is the goal? Maybe add a running example. How does this relate to previous work? What is the conclusion/what have we learned from each section? Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the time and effort dedicated to reading our manuscript and for providing valuable and constructive feedback. In response to the concerns raised regarding the clarity and coherence of the paper’s central ideas, we propose to make major revisions to the manuscript. The details of these modifications are outlined in the subsequent response. We hope that these changes will enhance the readability of our work and allow the reviewer to better appreciate the contributions presented in this paper. **Claims And Evidence** In response to the reviewer’s comments and upon careful reconsideration, we acknowledge that there exists a logical gap between the introductory section and the theoretical exposition, which may hinder the reader’s ability to follow the overall narrative. At a high level, the reviewer's understanding is accurate: *the principal contribution of this paper is that we propose refining the representations of concepts in interpretable AI models within a constrained deviation, thereby enabling the resulting model to achieve a more favorable trade-off between performance and interpretability.* To address the reviewer’s suggestions and improve the clarity of the manuscript, we will include the following paragraph at the end of the first section: On a high level, our main contribution is the introduction of a framework in which concept embeddings in explainable AI models are refined within a restricted parameter space to attain a better balance between predictive performance and interpretability. The remainder of the paper substantiates this claim along two key dimensions: 1. **Theoretical validation.** We demonstrate that our method is both **necessary** (Section 2.1) and **effective** (Section 3) under the theoretical framework introduced by [1], whose background is reviewed in Section 2. 
This framework serves as a suitable testbed for our contributions for two main reasons: (1) Among interpretable-by-design models [2-6], only [1] presents a generative model wherein both performance and interpretability are rigorously defined. (2) The algorithm motivated by this generative framework achieves state-of-the-art results within the class of interpretable AI methods. 2. **Empirical evaluation.** We conduct experiments on multiple benchmark datasets for image classification tasks to assess the practical effectiveness of our approach (Section 4). **Relation To Broader Scientific Literature** We thank the reviewer for their constructive suggestions regarding the improvement of the literature review section. We would also like to respectfully clarify that, given the close relationship between the motivation of our work and prior literature in explainable AI (XAI), we dedicate the majority of Section 1 to introducing the relevant background. Appendix A serves as a complementary component, offering a more in-depth discussion of works that are particularly closely aligned with our approach. Upon careful review, we believe that we have sufficiently highlighted and discussed the related literature in the areas of XAI, concept learning, and dictionary learning. If the reviewer has specific papers in mind, we would be happy to consider including them in the revised manuscript. **Other Comments Or Suggestions** Based on the reviewer's suggestion, we will add an additional section by the end of the paper in the revised manuscript. Due to the character limitations for each rebuttal, please see the first section in our response to Reviewer LpAZ for the full text. Reference: [1] A Chattopadhyay, R Pilgrim,R Vidal. Information maximization perspective of orthogonal matching pursuit with applications to explainable AI. [2] B Zhou, Y Sun, D Bau, A Torralba. Interpretable basis decomposition for visual explanation. 
[3] PW Koh, T Nguyen, YS Tang, S Mussmann, E Pierson, B Kim, P Liang. Concept bottleneck models. [4] M Yuksekgonul, M Wang, J Zou. Post-hoc concept bottleneck models [5] T Oikarinen, S Das, L Nguyen, T Weng. Label-free concept bottleneck models [6] A Chattopadhyay, K Chan, B Haeffele, D Geman, R Vidal. Variational information pursuit for interpretable predictions.
Summary: In this paper, the authors have proposed an improvement to the original structure of concept bottleneck models and CLIP-IP-OMP models, improving the interpretability and trustworthiness of the model without sacrificing its accuracy. The key idea of this method is to optimize the concept embeddings while keeping them within a constrained range, which addresses the issue of inaccurate query features generated from models like CLIP. The key contributions of this paper include theoretical justifications of the necessity of the problem, accuracy and interpretability guarantees, and empirical experiment results that show its performance compared to existing methods. Claims And Evidence: In this paper the authors make several claims, mostly supported by theoretical proofs that show the necessity of concept refinement and that CCR overcomes the problem. However, I am a little bit concerned about the authors' claim of improved interpretability using CCR. I understand the authors' claim about enhanced interpretability through the introduction of the function $\rho$ in the model. However, for a concept bottleneck model or similar concept-based models, the original input space/model has already provided sparse explanations. In other words, the output that models like CLIP provide is already sparse, so with a modification of $\Delta D$, it cannot be guaranteed that the level of interpretability (I am not sure if that means sparsity in this case) is changed. I would like to see if the authors can better define what interpretability means here, as I can see the introduction of the residual part being more correlated to the definition within robust regression [1].
To me, the results/definition seem to be aiming for a robust solution, rather than showing that it is interpretable, and the related works that the authors cite in lines 210-219 about sparse encoding also do not explain why the closer to $D^*$, the more interpretable the model would be. If the authors can provide more explanation on that, it would really strengthen the claim. [1] Xu, Huan, Constantine Caramanis, and Shie Mannor. "Robust regression and lasso." _Advances in neural information processing systems_ 21 (2008). Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem and applications. I am a little bit concerned about the interpretability metrics that the authors used both in the paper and in the criteria, as they are mostly case studies instead of quantitative metrics. I have more comments in the experiment design section regarding concerns about the method comparison. Theoretical Claims: I did check the correctness of the proofs for the theoretical claims. I have no specific questions about them. Experimental Designs Or Analyses: I think the proposed method, CCR, is well motivated for the problem of improving the accuracy of an interpretable AI model while preserving its ability to explain its decisions. However, I have several concerns about the design of the experiments: [Sparsity and $\rho$ trade-off] In CCR, the constraint $\rho$ determines how the concept embeddings can be refined. Therefore, it is actually risky when $\rho$ grows large. I am wondering if the authors have experiments showing the relationship between sparsity and the selection of $\rho$ in real-world applications, since it seems a bit hard to tune compared to setting a strict top $k$ in the sparse autoencoder setting. I am wondering if the authors can provide more guidelines on how the accuracy and sparsity ($ASR$) change with $\rho$.
[Missing baselines Metrics] In the experiments in Table 1, the authors have shown the average explanation length, sparsity, and the concept embedding deviations for CCR. However, a baseline showing how these metrics behave without CCR is missing. Since the results without CCR are already sparse, we cannot simply regard 100% as the baseline. I am wondering if the authors could provide the baseline/comparisons for other methods as a guideline in the table, so that we can see how ASR/AEL/ACED have changed through the optimization, and how much has been sacrificed/improved from the sparsity-level perspective. This comparison is also missing in the appendix, where it is hard to tell the improvement in interpretability in Appendix D.3. Supplementary Material: Yes, I have reviewed the supplemental material for proofs in Appendix C and additional experiments in Appendix D. Relation To Broader Scientific Literature: The contributions of the paper related to the broader scientific literature include: [Concept-based Model Construction] The paper builds on Concept Bottleneck Models (CBMs) [1], which introduced the idea of human-interpretable intermediate representations. In CBMs, models first predict a set of predefined concepts and then use them to make final decisions. However, a major issue with CBMs is their reliance on imperfect concept annotations, leading to performance degradation when concepts are noisy or biased. CCR directly addresses this issue by refining concept embeddings while ensuring they remain interpretable, whereas CBMs typically assume fixed, pre-defined concepts. [Theoretical proofs for misaligned/adversarial attacks leading to a high performance drop] Prior work [2] showed that explainability methods are often fragile to small changes in feature representations. CCR mitigates this fragility by ensuring that concept refinements remain interpretable. [1] Koh, Pang Wei, et al. "Concept bottleneck models."
International conference on machine learning. PMLR, 2020. [2] Ghorbani, Amirata, Abubakar Abid, and James Zou. "Interpretation of neural networks is fragile." Proceedings of the AAAI conference on artificial intelligence. Vol. 33. No. 01. 2019. Essential References Not Discussed: I don't think there are essential references that are not discussed, but as mentioned in the other strengths and weaknesses section, I am asking whether references on robust regression and its strategies should be included as related work for the paper, since the problem statements seem similar. Other Strengths And Weaknesses: I am a little bit concerned that the strategies the authors have proposed can be aligned with the robust regression fitting problem [1]. There are several recent works using the perturbation idea to solve similar issues beyond concept-based problems [2]. [1] Xu, Huan, Constantine Caramanis, and Shie Mannor. "Robust regression and lasso." _Advances in neural information processing systems_ 21 (2008). [2] Su, Peng, et al. "CR-lasso: Robust cellwise regularized sparse regression." Computational Statistics & Data Analysis 197 (2024): 107971. Other Comments Or Suggestions: No extra comments. Questions For Authors: If the authors can provide better explanations/definitions of interpretability before and after the concept refinement, that would better show the usefulness of this approach. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their detailed feedback and positive assessment. We hope that our clarifications and additional experiments could further elevate their recommendation of our work in the final review. **Claims And Evidence** We would like to offer two clarifications that may help resolve the confusion expressed by the reviewer: 1. In this work, we evaluate the performance of CCR in two distinct settings: the theoretical model proposed by [1], and the interpretable image classification task. Interpretability can be quantitatively assessed only in the former setting; for applications such as image classification, the community currently lacks universally accepted metrics for interpretability. As a result, most related works instead rely on post-hoc, case-specific analyses to illustrate interpretability [2,3,4,5]. These analyses typically involve examining the predicted output and the most salient concepts identified by the model, and assessing whether these concepts provide a human-interpretable explanation for the output. Accordingly, our claim of *improved interpretability* applies solely to the theoretical setting. In both settings, it is important to note that sparsity does not equate to interpretability. We have proposed a revision to the manuscript to make this distinction explicit; please refer to our response to Reviewer grrY for further details. 2. In the theoretical setting, the input $\mathbf{x}$ is generated as a sparse linear combination of the columns of ground truth (but unknown) $\mathbf{D}^*$. Therefore, perfect interpretability in this setting corresponds to the exact recovery of $\mathbf{D}^*$, and not sparsity. In this simplified setting, improvements in accuracy and interpretability are aligned, as demonstrated by Theorem 3.4. 
**Experimental Designs Or Analyses** [Sparsity and $\rho$ trade-off]: We have included plots that show the relationship between performance/sparsity and $\rho$ for the CIFAR-10 dataset through the following [anonymous link](https://anonymous.4open.science/r/ICML_Review-C283) (Plots 4 and 6). We will add the same experiment for all other datasets in the revised manuscript. [Missing baselines Metrics]: As mentioned above, sparsity is not used as a metric for interpretability in this work. Accordingly, metrics such as ASR, AEL, and ACED are not intended to reflect interpretability, but rather to highlight the simplicity and efficiency of our method relative to existing approaches. These metrics demonstrate that our method significantly reduces computational cost compared to state-of-the-art methods, while simultaneously achieving improved accuracy. Nonetheless, we appreciate and agree with the reviewer's suggestion that the interpretability of the baseline models needs examination. Accordingly, we will include a corresponding analysis, similar in nature to Appendix D.3, for the baseline models in the revised manuscript. **Other Strengths And Weaknesses** We appreciate the reviewer's insightful observation regarding the connection between our problem and robust regression. Using Lasso as an illustrative example, it is indeed equivalent to solving a sparse coding problem, where the goal is to represent a signal $\mathbf{x}$ using a small number of columns from a dictionary matrix $\mathbf{D}$. Nevertheless, there are two key distinctions between our setting and that of Lasso: 1. Similar to the sparse coding problem, in Lasso it is assumed that $\mathbf{D}$ is fixed and known, and the primary goal is to promote sparsity in the reconstruction. In contrast, our setting does not assume that $\mathbf{D}$ is fixed; rather, we treat the given $\mathbf{D}$ (typically obtained from CLIP) as uncertain and potentially erroneous.
Our objective is to co-optimize both $\mathbf{D}$ and the reconstruction. This formulation is more closely related to dictionary learning, a connection we discuss extensively in the paper. 2. The primary objective of Lasso is to identify a sparse code $\mathbf{s}$ such that $\|\mathbf{x} - \mathbf{D}\mathbf{s}\|_2$ is minimized. In our framework, however, the emphasis is not solely on reconstruction accuracy, but on using the resulting $\mathbf{s}$ to improve performance on downstream tasks. [1] A Chattopadhyay, R Pilgrim,R Vidal. Information maximization perspective of orthogonal matching pursuit with applications to explainable AI. [2] PW Koh, T Nguyen, YS Tang, S Mussmann, E Pierson, B Kim, P Liang. Concept bottleneck models. [3] M Yuksekgonul, M Wang, J Zou. Post-hoc concept bottleneck models [4] T Oikarinen, S Das, L Nguyen, T Weng. Label-free concept bottleneck models [5] A Chattopadhyay, K Chan, B Haeffele, D Geman, R Vidal. Variational information pursuit for interpretable predictions. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification — it addresses all my concerns. I will keep the score unchanged
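As an editorial aside, the distinction drawn in the rebuttal above between Lasso-style sparse coding (where the dictionary $\mathbf{D}$ is fixed and known) and settings that also update $\mathbf{D}$ can be sketched as follows. The ISTA solver and all names are illustrative assumptions, not part of either paper.

```python
import numpy as np


def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def lasso_ista(D, x, lam, n_iter=500):
    """Sparse coding with a *fixed* dictionary D (the Lasso setting):
    minimize 0.5*||x - D s||^2 + lam*||s||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        s = soft_threshold(s - (D.T @ (D @ s - x)) / L, lam / L)
    return s


rng = np.random.default_rng(0)
D = rng.normal(size=(20, 10))                # dictionary, trusted as-is in Lasso
s_true = np.zeros(10)
s_true[2], s_true[7] = 1.5, -2.0             # 2-sparse ground-truth code
x = D @ s_true
s_hat = lasso_ista(D, x, lam=0.1)
# s_hat should be approximately supported on the true atoms (indices 2 and 7).
# Dictionary learning would additionally alternate this coding step with an
# update of D itself; in the constrained-refinement setting, that update is
# restricted to a bounded deviation from the initial embeddings.
```

The comments at the end mark where the two settings diverge: Lasso stops after the coding step, whereas a co-optimization approach also moves $\mathbf{D}$.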
Summary: This paper addresses the challenge of balancing accuracy and interpretability in machine learning models, particularly for interpretable-by-design methods that often sacrifice accuracy for transparency. The authors identify that deviations in concept representations, a crucial component of interpretable models, can negatively impact prediction performance. To counter this, they propose a novel framework called Constrained Concept Refinement (CCR) that optimizes concept embeddings under constraints to preserve interpretability. CCR significantly improves the prediction accuracy of interpretable models like IP-OMP on image classification tasks (CIFAR 10/100, ImageNet, CUB200, and Places365) while maintaining interpretability and achieving a reduction in runtime compared to other explainable methods. ## update after rebuttal I appreciate the thoughtful and detailed responses from the authors, which successfully addressed all my concerns. Consequently, I have decided to increase my score for the paper, as I now believe it holds significant potential to contribute meaningfully to the field of explainable AI. I also hope that, if the paper is accepted, the final version will incorporate the additional results and address reviewer suggestions to further strengthen its impact. Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence, but several key assertions have limitations or rely on restrictive assumptions. 1. Assumption of Column-Orthogonality and Full Rank : The theoretical guarantees rely heavily on assumptions such as column-orthogonality and full-rank latent feature matrices (e.g., Assumption 3.2). These assumptions may not hold in realistic scenarios where concept embeddings often exhibit correlations or redundancy. The authors acknowledge this limitation briefly but do not empirically investigate how violations of these assumptions affect performance in practice. 2. 
Dependency on Pre-trained Models (CLIP) : The method relies significantly on pre-trained CLIP embeddings, which introduce biases from large-scale internet data used for training CLIP. The authors acknowledge potential biases but do not empirically analyze how these biases might impact CCR's reliability across diverse datasets or domains. 3. Interpretability Preservation : While qualitative examples (Figures 3, 5–19) show interpretable concepts, there is no quantitative metric for interpretability (e.g., human evaluation scores or concept alignment metrics). Interpretability is subjective, but the paper relies on only ad-hoc visualizations without rigorous validation. 4. Hyperparameter Choice : The paper mentions the correction radius ρ as a crucial hyperparameter that controls the trade-off between accuracy and interpretability. The process for selecting the optimal value for this and other hyperparameters is not detailed extensively, and the sensitivity of the results to these choices could be further explored. Also, optimal ρ likely varies across datasets, but the paper provides no guidance for tuning it. 5. Concept Set Dependency : Performance and interpretability depend heavily on the predefined concepts, which are treated as a black box. Here the concept set is generated using GPT-3, which may introduce bias (e.g., cultural or linguistic limitations). The paper does not analyze how concept quality affects CCR. Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper are generally well-suited to the problem of improving explainable AI (XAI) models, particularly in interpretable image classification. However, there are areas where the choices could be questioned or improved for broader applicability and robustness. 1. Interpretability Metrics: While AEL, ASR, and ACED are useful metrics, they do not fully capture human-centric notions of interpretability. 
For example, there is no user study or human evaluation to validate whether the explanations generated by CCR are genuinely understandable or actionable for end-users. 2. Limited Analysis of Misleading Concepts: Although qualitative examples highlight CCR's ability to handle misleading concepts (e.g., assigning low weights to irrelevant features), a systematic analysis of failure cases or edge cases is missing. Theoretical Claims: The theoretical claims are largely correct within their stated assumptions and provide valuable guarantees for CCR's performance under idealized conditions. However, some assumptions (e.g., orthogonality, full rank) and practical considerations (e.g., sample complexity) limit their applicability to real-world scenarios. The proofs for Theorems 2.6, 3.3, and 3.4 are mathematically rigorous to me, but I did not check them thoroughly. Experimental Designs Or Analyses: The experimental designs and analyses in this paper are generally sound and appropriate for evaluating CCR in image classification tasks. However, there are some limitations that should be addressed to strengthen the validity and generalizability of the findings. 1. The paper does not report statistical significance tests for its results (e.g., accuracy improvements over baselines). This omission makes it difficult to assess whether the observed improvements are robust across different runs or datasets. 2. The paper could benefit from ablation studies to understand the contribution of different components of the CCR framework to the overall performance improvement. Supplementary Material: The supplementary material provides detailed proofs and experimental validations that support most theoretical claims made in the paper. While I read more details about the interpretability experiments, I did not thoroughly check the proofs.
Relation To Broader Scientific Literature: The paper contributes to the field by addressing a key challenge in interpretable AI, building upon existing interpretable-by-design methods, focusing on the crucial role of concept embeddings, proposing a novel constrained optimization framework with theoretical guarantees, and demonstrating its practical effectiveness in a relevant application domain. Essential References Not Discussed: The authors have added critical references or related works that are essential to understanding the context of its key contributions. Other Strengths And Weaknesses: The paper makes significant contributions to explainable AI by introducing Constrained Concept Refinement (CCR), which improves accuracy and interpretability while maintaining computational efficiency. Its originality lies in directly optimizing concept embeddings within constrained neighborhoods, a departure from prior methods that rely on additional black-box modules. However, the paper's generalizability beyond image classification tasks, reliance on pre-trained models like CLIP, lack of human-centric interpretability validation, strong theoretical assumptions, and limited analysis of failure cases are notable weaknesses that should be addressed. Other Comments Or Suggestions: I don't have any specific comments or suggestions for the authors beyond what I already mentioned. Questions For Authors: Despite all these limitations mentioned before, the paper represents a substantial step forward in interpretable machine learning research, offering both theoretical insights and practical improvements that could inspire further advancements in the field. Hence I am leaning towards accepting this paper. But addressing these weaknesses would enhance its impact and generalizability, and improve the chance of acceptance from my side. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are sincerely grateful to the reviewer for their thorough evaluation and for the positive feedback on our work. In response to their suggestions, we have proposed additional experiments and provided further clarifications in the revised manuscript. We hope that our responses effectively address the concerns raised and that the reviewer would consider raising the score in the final assessment. **Assumption of Column-Orthogonality and Full Rank** We thank the reviewer for this insightful comment. We divide our response into two parts: Full-rank condition: This condition is fundamental and cannot be relaxed; it is not an artifact of our theoretical analysis. In particular, when the dictionary is not full-rank, our analysis shows that $\mathbf{D}$ may deviate from $\mathbf{D}^*$ in directions that are orthogonal to the column space of $\mathbf{D}^*$. Such deviations cannot be corrected by the algorithm. This behavior is further corroborated by numerical experiments. In Appendix D.1, we present synthetic data experiments that highlight the necessity of the full-rank condition. Specifically, the middle column of Figure 4 corresponds to a rank-deficient $\mathbf{D}^*$, and the results show that, in the absence of the full-rank assumption, the algorithm fails to achieve column-wise convergence. This outcome is fully consistent with the theoretical guarantees established in our main theorem. Orthogonality condition: In practical scenarios where the dictionaries are not orthogonal, we propose a novel procedure, termed *Concept Dispersion*, designed to mitigate the impact of highly correlated embeddings. The details of this sub-algorithm are provided in Appendix D.2. Its effectiveness is shown in Plot 3 at this [anonymous link](https://anonymous.4open.science/r/ICML_Review-C283), where we compare the correlation between concept embeddings before and after this process.
**Dependency on Pre-trained Models** We would like to clarify that the introduction of CCR is specifically intended to address the errors, biases, and uncertainties that arise from CLIP embeddings. The reviewer is absolutely correct in noting that CLIP embeddings are often far from accurate. This observation is, in fact, the primary motivation behind the development of CCR in our work. In particular, we demonstrate—both empirically and theoretically—that while existing methods rely solely on the raw outputs of CLIP, without any mechanism for refining the embeddings, our proposed CCR framework not only leverages CLIP-derived embeddings but also further refines them to achieve strong performance within each specific domain. **Interpretability Preservation** We agree with the reviewer that large-scale human evaluation experiments would be valuable in demonstrating the interpretability of our method. This is indeed a limitation—not only of our work, but also of most existing efforts aimed at developing interpretable machine learning models. Addressing the reviewer’s comment and incorporating large-scale human evaluation would require substantial resources, which are currently beyond our reach. In the absence of such resources, our current interpretability evaluation is based on significantly smaller scale human evaluations on a set of randomly selected images, for which we report the model’s outputs and qualitatively demonstrate their interpretability. We are grateful for the reviewer’s constructive suggestion and will strive to incorporate more comprehensive evaluation practices in future work. **Hyperparameter Choice** For an empirical sensitivity analysis of $\rho$, we refer to the Plot 4 from this [anonymous link](https://anonymous.4open.science/r/ICML_Review-C283). 
**Concept Set Dependency** Our primary motivation for adopting the concept set generated by GPT-3 is to ensure fair comparisons between our method and existing approaches, as both lf-CBM and IP-OMP utilize the same concept set. The concern raised by the reviewer is indeed valid, as the quality of the concept set can significantly influence model performance—a consideration that applies to all interpretable-by-design algorithms. While the development of improved concept sets is undoubtedly important, we regard this as beyond the scope of the current work and leave it as a direction for future research. **Interpretability Metrics & Limited Analysis of Misleading Concepts** Please see our response above for Interpretability Preservation. **Statistical Significance Test** We thank the reviewer for their valuable suggestions. To better showcase the robustness of our method, we have enhanced our experimental protocol by performing 10 independent runs for each dataset and reporting the maximum, minimum, and average performance. See Plot 5 from this [anonymous link](https://anonymous.4open.science/r/ICML_Review-C283). **Ablation Studies** We thank the reviewer for this valuable suggestion and will include an ablation study in the revised manuscript. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response, which has clarified my doubts. I appreciate the effort they put into addressing my concerns. However, there are still a couple of aspects that remain unclear: 1. The response does not mention any failure cases of the proposed method. Does this imply that no such cases were observed during your experiments? 2. Could you elaborate on your plans for including the ablation study? Specifically, what aspects of the method will be the focus of this analysis? I will maintain my current score.
--- Reply to Comment 1.1.1: Comment: We are glad to hear that our response has addressed some of the reviewer’s concerns, and we thank the reviewer for the thoughtful follow-up questions. As these questions were raised on the final day of the author-reviewer discussion period, we will do our best to address them promptly and thoroughly below: **Failure cases:** Our method does indeed exhibit failure cases. In the context of our image classification experiments, failure can be characterized in two main ways: (i) misclassification and (ii) lack of interpretability. *Misclassification:* We have already included several examples of misclassified images in the appendix (see Figures 16(c), 18(b), and 19(c)). We kindly refer the reviewer to these figures. While many similar examples exist, we believe the presented cases are representative and sufficient to demonstrate that, although our method outperforms existing approaches in terms of accuracy across most benchmarks, it is still susceptible to misclassification. *Lack of interpretability:* As discussed in our previous response under the section “Interpretability Preservation,” systematically identifying the causes of interpretability failures would require a more principled analysis—most likely involving large-scale human evaluation, as rightly suggested by the reviewer. While we currently lack the resources to conduct such a study, we will explicitly acknowledge this limitation in the impact statement of the revised manuscript. **Ablation study:** We appreciate the reviewer’s suggestion and apologize for the lack of clarity in our initial response. Conceptually, our approach comprises three distinct modules, each of which can be independently evaluated through ablation: *Concept refinement:* One of the key novelties of our method is the refinement of initial concepts via an optimization algorithm. An ablation study in this context would isolate the effect of concept refinement while keeping the other components fixed. 
This analysis has already been conducted in the main paper (although it was not explicitly labeled as an ablation study). Specifically, the blue curve (“Constrained Concept Refinement”) and the orange curve (“Baseline without CCR”) in Figure 2 illustrate the performance with and without concept refinement. We will explicitly mention this comparison as an ablation study in the revised version. *Concept dispersion:* In the anonymous link (plot_3), we have shown the effect of concept dispersion on the correlation between the concepts. We further plan to evaluate the performance of CCR with and without concept dispersion and include it in the ablation study section of the revised manuscript. *Sparse coding via hard-thresholding:* Another component of our method that can benefit from an ablation study is the use of hard-thresholding to generate the sparse code $\mathbf{s}$. In the revised manuscript, we will replace this process with the classical OMP algorithm. Note that the resulting model will still differ from IP-OMP due to the existence of the concept dispersion step. If time and resources allow, we also plan to implement the task-driven dictionary learning method of [1] in this module. Based on observations from pilot experiments, our hard-thresholding approach demonstrates strong computational efficiency compared with its counterparts. [1] Mairal, J., Bach, F., and Ponce, J. Task-driven dictionary learning. IEEE transactions on pattern analysis and machine intelligence,
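As an aside for readers, the hard-thresholding step described in the rebuttal admits a very short implementation. The sketch below is our own illustration on a random unit-norm dictionary (not the authors' code): it keeps the $k$ largest correlations between the signal and the dictionary columns and zeroes the rest, which is what makes it cheaper than the iterative column selection in OMP.

```python
import numpy as np

def hard_threshold_code(D, x, k):
    """Sparse code by hard-thresholding: keep the k entries of D^T x
    with the largest magnitude and zero out the rest."""
    c = D.T @ x                        # correlation with each dictionary column
    s = np.zeros_like(c)
    top = np.argsort(np.abs(c))[-k:]   # indices of the k largest |c_i|
    s[top] = c[top]
    return s

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 100))
D /= np.linalg.norm(D, axis=0)         # unit-norm columns (concept embeddings)
x = rng.standard_normal(64)            # input signal / image embedding

s = hard_threshold_code(D, x, k=5)
print(np.count_nonzero(s))             # exactly 5 active concepts
```

Unlike OMP, this requires a single matrix-vector product rather than one least-squares solve per selected atom, which is consistent with the efficiency argument made above.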
Summary: The paper proposes CCR to improve both prediction accuracy and model interpretability. Claims And Evidence: It does not have the impact statements and ends abruptly. Seems like a work in progress? Methods And Evaluation Criteria: - metrics for interpretability like avg explanation length and avg concept embedding deviation are proxy measures. It remains unclear how well these reflect human understanding. There is a better figure on this in the Information Maximization Perspective of Orthogonal Matching Pursuit with Applications to Explainable AI by Chattopadhyay et al. paper. - Why is the CIFAR-10 curve in Figure 2 for the baseline flat? Puts into question if this was a fair comparison. - I wonder if the theoretical analysis depends on overly strong assumptions that may not generalize Theoretical Claims: Not well enough. Experimental Designs Or Analyses: See methods and eval criteria Supplementary Material: No Relation To Broader Scientific Literature: Seems very tightly tied to Information Maximization Perspective of Orthogonal Matching Pursuit with Applications to Explainable AI by Chattopadhyay et al., including paper structure. Essential References Not Discussed: Main papers that come to mind would probably be outside the scope of this paper Other Strengths And Weaknesses: See above Other Comments Or Suggestions: Missing Impact Statements. Questions For Authors: See other sections Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the time and effort invested in evaluating our manuscript. Should the reviewer find our clarifications and revisions satisfactory, we would be grateful if they would consider raising their score in the final assessment. **Claims And Evidence** We apologize for not including the impact statement of the paper. We will include the following section in the final manuscript:

**Conclusion and Impact Statements**

This paper introduces Constrained Concept Refinement (CCR), a principled framework that helps bridge the long-standing gap between interpretability and accuracy in machine learning. By constraining the refinement of concept embeddings to lie within a small neighborhood around their initial values, CCR enables interpretable-by-design models to improve prediction performance without compromising their explainability. Our theoretical and empirical results demonstrate the effectiveness of CCR across a variety of tasks and datasets, both in terms of predictive performance and computational efficiency. The broader impact of this work lies in its potential to advance the practical adoption of interpretable machine learning methods in real-world settings. In particular, the computational efficiency afforded by CCR may facilitate the deployment of explainable artificial intelligence (XAI) techniques in resource-constrained environments. From an ethical perspective, the capacity to generate explanations that are faithful, stable, and aligned with human-interpretable concepts contributes to addressing critical concerns related to algorithmic accountability and bias. For the proposed method, hyperparameter tuning remains a crucial yet underexplored component for ensuring effective performance in practical applications. This aspect warrants careful consideration to ensure that the resulting outputs are both reliable and justifiable.

**Methods And Evaluation Criteria** > metrics for interpretability like...
We thank the reviewer for the constructive feedback. To address the comment, we conducted additional experiments to better illustrate the relationship between performance and sparsity. See the Plot 1 from this [anonymous link](https://anonymous.4open.science/r/ICML_Review-C283). > Why is CIFAR-10 curve ... The main reason for the lack of obvious improvement on the CIFAR-10 dataset is that we report test accuracy **after** each epoch. Since CIFAR-10 is relatively easy to train, its performance potential is largely realized after a single pass through the training data (i.e., after the first epoch). If we instead report test accuracy **before** each epoch, a clear improvement becomes apparent after the first epoch. This improvement is shown in the Plot 2 of this [anonymous link](https://anonymous.4open.science/r/ICML_Review-C283). > I wonder if the theoretical ... We thank the reviewer for raising this important point. Indeed, the concept of interpretability remains elusive from a theoretical perspective, and there is currently no consensus in the community on how to rigorously quantify it. As a result, existing works typically take one of two approaches: (i) relying on human annotations and evaluations to assess interpretability, while omitting theoretical guarantees [1-4]; or (ii) studying a well-defined generative model, where samples are drawn from an underlying ground-truth concept embedding, allowing interpretability to be quantified as the distance between the learned and true concept embeddings. Our work incorporates both of these perspectives. Admittedly, the generative model we consider is not fully general, and its assumptions may not hold in real-world settings. However, as we clearly state in the paper, our motivation for studying this model lies elsewhere: it provides a controlled test-bed in which the limitations of existing methods can be rigorously examined, and our proposed approach can be shown to overcome them effectively. 
Moreover, this generative model has been extensively explored in prior work. For instance, [1] uses a variant of it as a pragmatic compromise between theoretical tractability and empirical relevance. It is also closely related to a well-established line of research in dictionary learning, where similar generative models have been successfully applied. References: [1] A Chattopadhyay, R Pilgrim, R Vidal. Information maximization perspective of orthogonal matching pursuit with applications to explainable AI. [2] B Zhou, Y Sun, D Bau, A Torralba. Interpretable basis decomposition for visual explanation. [3] PW Koh, T Nguyen, YS Tang, S Mussmann, E Pierson, B Kim, P Liang. Concept bottleneck models. [4] M Yuksekgonul, M Wang, J Zou. Post-hoc concept bottleneck models. [5] T Oikarinen, S Das, L Nguyen, T Weng. Label-free concept bottleneck models.
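To make the setting concrete for readers, a generative model of the kind discussed in this rebuttal (a ground-truth dictionary with unit-norm columns, $k$-sparse codes, additive noise) can be simulated in a few lines. All dimensions and noise levels below are illustrative choices of ours, not those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, k, n = 32, 50, 3, 200            # signal dim, #concepts, sparsity, #samples

D_star = rng.standard_normal((d, m))
D_star /= np.linalg.norm(D_star, axis=0)   # ground-truth concept embeddings

X = np.empty((d, n))
for j in range(n):
    s = np.zeros(m)
    support = rng.choice(m, size=k, replace=False)   # k active concepts
    s[support] = rng.standard_normal(k)
    X[:, j] = D_star @ s + 0.01 * rng.standard_normal(d)   # small noise

print(X.shape)   # (32, 200)
```

Under such a model, interpretability can be quantified as the column-wise distance between a learned dictionary and `D_star`, which is the sense in which the theoretical guarantees are stated.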
Symmetry-Driven Discovery of Dynamical Variables in Molecular Simulations
Accept (poster)
Summary: The paper introduces a framework for discovering effective degrees of freedom (DOF) in molecular simulations by identifying approximate symmetries of the energy function. Instead of relying on simulation trajectories or training datasets, the authors formulate an optimization problem—called a “symmetry loss”—and, in one approach, use second-order (Hessian-based) information to find transformations that keep the energy nearly invariant. These transformations become a set of collective variables or DOF that help explore low-energy regions of the molecular landscape more efficiently. The authors demonstrate their methods on alanine dipeptide (in both vacuum and implicit solvent) and on chignolin, showing that the discovered DOF can reach known conformers and reveal states that are traditionally harder to sample with basic molecular dynamics. Overall, the paper’s main contribution is a data-free technique that systematically derives low-energy directions from the force field itself, offering a way to speed up or enrich sampling of molecular conformations. Claims And Evidence: The paper’s key theoretical claim—namely, that low-energy degrees of freedom can be discovered by finding approximate symmetries of the energy landscape—receives strong support through detailed derivations and proofs. The authors demonstrate mathematically how to use the gradient or Hessian of the force field to identify transformations that leave the energy almost unchanged, then validate these ideas on small systems (alanine dipeptide and chignolin). This evidence is clear and consistent for the claim that the proposed methods can uncover physically relevant low-energy directions in the configuration space. However, the assertion that these methods lead to significantly more diverse sampling or noticeable speedups compared to established enhanced-sampling techniques is less rigorously substantiated. 
The authors do present examples where their approach discovers states that standard molecular dynamics simulations take longer to find, but there is no in-depth, head-to-head comparison of sampling efficiency (e.g., run-time or number of force evaluations) against commonly used methods. Thus, while the theoretical foundation and small-scale results strongly support the symmetry-based framework, the more expansive claims about large gains in sampling diversity or efficiency remain only partially demonstrated. Methods And Evaluation Criteria: The authors focus on two well-known test systems—alanine dipeptide and chignolin—both of which are standard benchmarks in molecular dynamics. These systems provide clear baselines for verifying whether new approaches capture known conformers and relevant collective variables. Because the paper’s main claim is that approximate symmetries of the energy can yield physically meaningful DOF, using these relatively small yet widely studied molecules is a sensible choice: it makes it straightforward to check whether the discovered degrees of freedom align with known dihedral angles or populated states. That said, the evaluation consists mainly of seeing whether the method recovers recognized conformations and can occasionally reach less-populated states; it does not include more comprehensive comparisons against standard enhanced-sampling approaches. Theoretical Claims: The paper’s main derivations—specifically, using Taylor expansions around a reference configuration and linking approximate energy invariance to low-curvature or degenerate directions of the Hessian—align with standard linear algebra and perturbation results. The statement that degenerate eigenvalues in the Hessian induce rotation-like symmetries follows well-known principles. While minor indexing or notation issues are present, they do not invalidate the arguments, and the proofs appear consistent with conventional theoretical foundations. 
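To illustrate the mechanism this review describes, here is a minimal, self-contained sketch (using a made-up anisotropic toy potential, not the paper's force field): the Hessian at a reference configuration is estimated by central finite differences, and its eigendecomposition exposes the low-curvature ("soft") direction along which the energy is nearly invariant.

```python
import numpy as np

def energy(x):
    # toy anisotropic potential: stiff along x[0], soft along x[1]
    return 10.0 * x[0] ** 2 + 0.01 * x[1] ** 2

def hessian(f, x, h=1e-4):
    """Central finite-difference estimate of the Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = h
            e_j = np.zeros(n); e_j[j] = h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

x0 = np.zeros(2)                   # reference configuration
H = hessian(energy, x0)
vals, vecs = np.linalg.eigh(H)     # eigenvalues in ascending order
soft_dir = vecs[:, 0]              # lowest-curvature (near-symmetry) direction
print(vals)                        # approximately [0.02, 20.0]
```

Moving along `soft_dir` changes the energy far less than moving along the stiff direction, which is the sense in which low-curvature eigenvectors act as approximate symmetries / effective degrees of freedom.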
Experimental Designs Or Analyses: The experimental approach—focusing on alanine dipeptide (both vacuum and implicit solvent) and chignolin (implicit solvent)—is broadly sound as a preliminary demonstration that the discovered transformations can access known conformers. Both systems are widely recognized testbeds for validating the discovery of local minima and dihedral-angle variations. A long MD simulation serves as a reference to confirm that the proposed method recovers states consistent with standard MD. The procedure involves extracting approximate symmetries at a reference configuration, systematically applying transformations (on a grid) to generate new structures, and then minimizing and running short simulations to confirm low-energy stability. This is appropriate to gauge whether the method meaningfully covers conformational diversity; however, the study does not provide comparisons to established sampling methods. Moreover, while the results are promising, the scope is limited to relatively small systems, leaving open questions on scalability, performance with explicit solvent, and how comprehensively the new approach samples large, complex conformational landscapes. Supplementary Material: The supplementary material includes expansions and derivations for the Hessian-based approach, detailing second-order approximations and related notational aspects. These additions remain consistent with the main text’s framework and do not introduce evident inconsistencies. Relation To Broader Scientific Literature: The paper addresses the longstanding problem of identifying collective variables for molecular simulations, a task also tackled by methods such as metadynamics, replica-exchange MD, and various data-driven approaches. 
Unlike those methods, which often rely on extensive trajectories or predefined biases, this work derives low-energy directions straight from the force field’s gradient and Hessian, placing it closer in spirit to normal-mode analyses used in protein dynamics. Essential References Not Discussed: All the relevant references are discussed. Other Strengths And Weaknesses: List of typos: Line 34, «approximate» Line 131, missed space in front of «of» Line 275, «assume», «meaning» Line 288, «only» Line 316, «minimizing» Line 551, 569, «eq 5» and similar — no reference Line 681, «DOF» Line 715, «One» Line 718, has to be «degenerate eigenspaces for each» Other Comments Or Suggestions: I don’t think that this work lies in the field of machine learning, so I would recommend that the authors submit it to some applied biology conferences or journals. I see potential in this paper, but it requires significant refinement—especially in how the methodology is presented and how it is compared with competing approaches. Questions For Authors: No questions Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank you for your review and we hope we can address some of your concerns below: ### Quantitative Metrics and Baselines ### We understand your concerns regarding the rigor of the evaluation criteria. Please refer to the response for Reviewer 9HNJ to view a table of quantitative results that might address some of the concerns you raised. As seen in the table, the grid of initial configurations given by the discovered DoF provides coverage of the entire conformational landscape. As a result, we are able to find at least some points within each conformational basin within only 10 simulation steps. To address your second point about the lack of comparison with other enhanced sampling methods, we find our method to be orthogonal in nature to the other sampling methods available. Most enhanced sampling methods, like umbrella sampling and metadynamics-based methods, require specified CVs in order to sample the space. As a result, these methods work on a reduced low-dimensional space, unlike our proposed method, which directly works on the all-atom model. While replica exchange methods do directly sample from the high-dimensional all-atom positions, they do not involve learning any local symmetries. Furthermore, replica exchange methods require communication between different replicas and cannot explore different parts of the energy landscape simultaneously. We provide a highly parallelizable method which uses local symmetry information to efficiently index and explore conformational basins. Using parallel exploration, we only need to simulate <10 steps per starting configuration (1000 starting configurations in total) to reach all the known conformational basins for the explored systems. As a result, the method cannot be directly compared to sequential algorithms like metadynamics and replica exchange.
### Scaling to Larger Systems ### As mentioned in the paper, the direct optimization-based methods can be applied to any system immersed in a force field. Even in the presence of explicit solvent, the procedure only needs to access the force exerted on the atoms in the molecule for given configurations. As correctly stated by the reviewer, since the paper introduces the idea of discovering local DoF, we focused mostly on small systems with well-known and interpretable DoF. ### Concerns Regarding Applicability to ML ### We believe the symmetry discovery/optimization part of the procedure, as well as the problem of transfer and generalizability of learnt DoF, is fundamentally a machine learning problem, so an ML venue is better qualified to judge its validity than a biology venue. Additionally, computational biology, drug discovery, and physical simulation are among the fastest-growing and most important use cases of ML. --- Rebuttal Comment 1.1: Comment: I see potential in this paper, but it requires significant refinement—especially in how the methodology is presented and how it is compared with competing approaches. While I understand that methods such as metadynamics or replica exchange may not be directly comparable, it would be valuable for the paper to include a clear discussion of these methods' limitations and a comparison that accounts for those constraints. Without such an analysis, it is hard to fully assess the proposed method, and thus I am not inclined to change my current evaluation. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their appreciation of the potential of the paper. ### Improvement in Presentation ### We have provided some points towards improving the presentation in our responses to reviewer p5r2. We feel the additional glossary of notation and a section illustrating the use of the proposed method on a small sample system would significantly improve the presentation.
Please check our rebuttal to Reviewer p5r2 and the rebuttal response to Reviewer zCrM for an example illustrating the use of our proposed methods for symmetry discovery. We feel this example should demonstrate the key ideas of the paper.

### Comparison to Enhanced Sampling Methods ###

We understand the reviewer's suggestion to compare our methods to existing enhanced sampling methods. As noted in our earlier response, we would like to highlight that CV-based methods like umbrella sampling and metadynamics are only capable of exploring the low-dimensional spaces spanned by the CVs, whereas our current method tries to explore the high-dimensional space spanned by the all-atom positions. Thus, the only enhanced sampling method that is somewhat comparable is the family of Replica Exchange Molecular Dynamics (REMD) methods. We can add a more in-depth discussion regarding these methods.

As suggested by the reviewer, we add the REMD simulation baseline results here for alanine dipeptide to contrast them with the results we have for our proposed methods (given in our rebuttal to Reviewer 9HNJ). Here we use an REMD simulation with 8 parallel simulations at logarithmically spaced temperatures between 300 K and 500 K, with an attempted transition between adjacent temperatures every 50 steps ($0.1 ps$).
| | | 2 ps | 10 ps | 20 ps | 0.1 ns | 0.2 ns | 1 ns | 2 ns | 10 ns | 20 ns |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | $C5$ | 0.4659 | 0.3676 | 0.2463 | 0.2168 | 0.2264 | 0.2082 | 0.2232 | 0.2340 | 0.2316 |
| Alanine Dipeptide in Vacuum | $C7_{eq}$ | 0.1818 | 0.2745 | 0.3515 | 0.3745 | 0.3708 | 0.3311 | 0.3357 | 0.3367 | 0.3302 |
| | $C7_{ax}$ | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0768 | 0.0467 | 0.0226 | 0.0332 |
| | | | | | | | | | | |
| | $\\beta$ | 0.9205 | 0.7451 | 0.6052 | 0.4808 | 0.4336 | 0.4359 | 0.4422 | 0.4505 | 0.4557 |
| Alanine Dipeptide in Solvent | $\\alpha_L$ | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0017 | 0.0063 | 0.0061 | 0.0065 |
| | $\\alpha_R$ | 0.0000 | 0.0956 | 0.2166 | 0.3079 | 0.3417 | 0.3383 | 0.3238 | 0.3200 | 0.3153 |

We use smaller timescales here than the timescales in the long simulation baseline, as the convergence is faster. However, we still see that the proposed methods are faster by an order of magnitude in terms of simulation time (our proposed methods find all conformations within $20 fs$ of simulation time). Moreover, we also note that unlike REMD, which requires some interaction (exchanging) between the parallel simulations being run at different temperatures, the parallel simulations used in our method are fully independent. This makes the simulations given by our method even faster in terms of wall-clock time. We hope these additional results address the concerns raised by the reviewer.
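For readers who want to reproduce the ladder construction, the logarithmically spaced replica temperatures described in the REMD setup above can be generated as follows. This is a minimal sketch we added for illustration; the function name and the use of `numpy.geomspace` are our own choices, not from the rebuttal:

```python
import numpy as np

def replica_ladder(t_min=300.0, t_max=500.0, n_replicas=8):
    """Geometrically (logarithmically) spaced replica temperatures, in kelvin."""
    return np.geomspace(t_min, t_max, n_replicas)

temps = replica_ladder()
# Adjacent temperatures share a constant ratio, which is the usual reason for
# logarithmic spacing: swap acceptance stays roughly uniform across the ladder.
ratios = temps[1:] / temps[:-1]
```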
Summary: This paper proposes a method to identify effective degrees of freedom (DOF) of dynamics by connecting them with approximate symmetries of the energy function. This is done through identifying a state $x$ with a group element (here restricted to the general linear group) whose action on $x_0$ (a reference state) produces $x$. For $x$ sufficiently close to $x_0$, this can then be parameterised by a linear combination of basis vectors of the Lie algebra. These symmetry-based DOFs are optimised according to two principles: the transformations should not cause large changes in energy globally (so that main conformations are still distinct under this parameterisation), but should cause enough change in energy to overcome local energy barriers. This results in loss functions that can either be directly optimised, or, under a local assumption, be used to derive analytical loss expressions involving the Hessian. Both approaches are tested on two relatively simple test problems (alanine dipeptide and Chignolin), where it is shown that these eDOFs are useful in exploring the conformation landscape.

Claims And Evidence: The two examples presented demonstrate to a certain extent the effectiveness of the method, although some quantitative evaluation of the effectiveness of the DOFs would have been better. Moreover, there is no general comparison with other methods that try to construct eDOFs (such as collective variables), although such a comparison may be difficult without a well-defined evaluation criterion. At the very least, one could compare meta-dynamics driven by learned CVs and the current method on the Ramachandran plot. Note that meta-dynamics can be used to perform sampling from the correct distribution and compute the free energy, but the current method does not seem to be able to do this. The authors may wish to comment on this and perform some numerical comparison.
Methods And Evaluation Criteria: The visualisation of the Ramachandran plot is useful in demonstrating that the eDOF constructed can be used to probe the conformations, although I do not believe it is easy to perform quantitative comparisons with different methods. There is also a lack of ablation studies. For example, can one compare the Ramachandran plots of the case which carries out steps 1-5 on page 7, except in step 2 one uses either 1) two randomly sampled $L_1, L_2$; or 2) replaces $L_1 x_0$ and $L_2 x_0$ by the top two eigenvectors of the Hessian of $E$ at $x_0$?

Theoretical Claims: I checked the overall claims and they appear correct. I did not check every step in the derivation in the appendix.

Experimental Designs Or Analyses: The experiments are reasonably designed, although, as discussed above, the evaluation and ablation can be improved.

Supplementary Material: I had a brief look through the code, although I did not try to reproduce the results.

Relation To Broader Scientific Literature: The related work is generally well written and relevant.

Essential References Not Discussed: It may be useful to discuss https://arxiv.org/pdf/2307.00365 and the methods therein for learning/constructing collective variables.

Other Strengths And Weaknesses: Strengths
- The problem of finding effective ways (DOFs) to navigate the conformational landscape of macromolecules is an important application area, both for qualitative analysis and quantitative computation (sampling, free energy computation, etc.)
- The method appears quite interesting and has new ideas that connect (approximate) symmetries of the energy function to effective DOFs.
- The qualitative experiments over two simple but representative problems demonstrate the potential of the method

Weaknesses
- Lack of quantitative evaluations of their method and comparisons with related methods (e.g. learned/constructed CVs as effective DOFs)
- Lack of ablation studies
- Last but not least, while I do like the new ideas in the paper, the presentation of the paper can be *substantially* improved; there are many typos and many expressions lacking proper definition (details below in "Other Comments or Suggestions")

Other Comments Or Suggestions:
- Abstract: "... we do not require data and rely on knowledge ..." -> "... we do not require data but rely on knowledge ..."
- Page 1: "we don’t want to change in energy to be so high that it would break chemical bonds" -> Grammar issue
- Page 1: "both computationally efficient and physically insightful." -> what do you mean by physically insightful?
- Page 3: I find the notation for the general linear group strange. It is usually $\text{GL}(n)$ or $\text{GL}(n,F)$ if you want to specify the field, but $\text{GL}(\mathcal X)$ doesn't make sense to me, and moreover it is not consistent with the subsequent usage of (special) Euclidean groups (which are also not properly defined in the text).
- Page 3: The orbit definition seems trivial as written, since it is either $0$ or the whole space minus $0$. Do you mean to consider a subgroup?
- Page 3: The symbols used in the section under "Group parameters as DOF" are quite unclear to me.
  - For the notation of the Lie algebra basis $\mathbf{L}_a$, is this a single basis element, or does $a$ run through the dimension of the Lie algebra?
  - For the notation $\theta \cdot \mathbf{L}$, does this mean $\sum_a \theta_a \mathbf{L}_a$ or something else? It is not defined.
  - If $\theta$ is a vector, the Taylor expansion error term should be $O(|\theta|^2)$.
  - Since it appears to me that we are always considering the general linear group, what's the point of this abstraction to Lie groups/algebras? Why not just identify the Lie algebra with $M_n(\mathbb R)$?
- Boldfaced $\mathbf{I}$ is used but subsequently $I$ is used instead (e.g. page 4). Is there any meaning to the different notations?

Overall, I suggest the authors properly define the notations (even if this is placed in the appendix). These issues affect readability of the core parts of the paper.

- Page 4: "set of DOF at by" -> Grammar issue
- Page 4: Equation 1 is not clear: is $S$ considered a subset of $GL(n,\mathbb R)$ or $M_n(\mathbb R)$? It should be the latter given Equation 2. Also, what's the dependence of $\delta E$ on the DOF? Why not just use $E_\text{barrier}$, which is already used on the previous page, and not introduce an extra $\eta$?
- Page 4: the variables $\theta_l,\theta_s$ do not seem to appear in Equations 1 and 2. Moreover, the subsequent line with $g\approx I + \theta \mathbf{L}$ now takes $\theta$ as a number instead of a vector (as introduced on the previous page).
- Page 5: "where we wanted $\delta E$" -> "where we wanted $\delta E$ to be small"
- Page 6: "menaing" -> Spelling, "oonly" -> Spelling
- Page 6: Equation 15, should $K$ be $n_L$ as in Equation (14)?
- Page 6: "minimas" -> "minima"
- Page 6: "presense" -> "presence"
- Page 7: "to find the conformers is settings with" -> Grammar issue

Questions For Authors:
1. Can the authors give a minimal example to illustrate your approach and why the DOFs parameterised through symmetries make sense? For example, we can take $d=1$, $n=2$ and consider some potential like $E(x_1,x_2) = F(x_1-x_2)+\epsilon\sin(x_2/\epsilon)$ where $F$ is, say, a double-well potential (or something to that extent, involving potentials of different scales). We know that the effective DOF should be $x_1-x_2$. It would be helpful to illustrate that the method correctly identifies this.
2. As I understand, since you always take a linear approximation of the exponential map of the general linear group, this is the same as finding perturbation directions as $x_0 \mapsto x_0 + \sum_{i} c_i L_i x_0$. This locally yields "collective variables" in the form of $z_i = L_i x_0$, and if one can piece this together for neighbourhoods of each $x_0$, then one should arrive at some global collective variable? Is this understanding correct? If so, there should be some discussion on this point and the connection of this approach to e.g. the collective variable discovery approaches outlined in https://arxiv.org/pdf/2307.00365.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough and thoughtful review. We address some points below:

### Quantitative Metrics & Baselines ###

Please refer to our response to Reviewer 9HNJ for additional results. Although there are existing methods for CV discovery, our method considers the slightly orthogonal task of discovering local DoF. Unlike CVs, the discovered DoF are local in nature and do not necessarily correspond to any global Collective Variable. Moreover, we show that we can sample all the conformers for these molecules within 20 fs (10 steps) of starting the simulation from the sampled gridpoints. This setup is very different from the standard CV setup in metadynamics or replica exchange, where the initial configuration is fixed.

### Presentation ###

Thank you for pointing out the notational inconsistencies. We will add a glossary of the relevant notation in the supplementary material. Furthermore, we will also add a section on illustrative examples applying the proposed method to small systems. Please refer to our response to Reviewer p5r2 for a discussion of a small synthetic system similar to the one you proposed. As the Hessian based method in

### Ablation Studies ###

We will add the suggested ablation studies in the supplementary material. It is not always possible to plot the trajectories for the highest-valued eigenvectors of the Hessian, as the transformations can lead to unnatural states that cause errors in the OpenMM simulation. However, we will add a more detailed discussion in the paper.

### Local CV and Relation to Other CV Discovery Work ###

We thank the reviewer for the reference [1], but the CVs discussed do not align very well with the present work. Although one could define local variables $z_i = L_i x_0$, the resulting variables behave differently from collective variables. We treat all configurations that can be reached from $x_0$ as an equivalence class; thus, all the $z_i$'s would form a basis for this subspace. Thus, all the local $z_i$'s can be used to index the local conformation basin, but it is unclear whether these can be patched together or whether patching them would give a good global CV.

### Clarifications for Typos ###

- Page 1: "both computationally efficient and physically insightful." -> what do you mean by physically insightful?
  We mean it provides insight into the effective dynamics of the system.
- Page 3: I find the notation for the general linear group strange. … $GL(\mathcal{X})$ doesn't make sense to me, … not consistent with the subsequent usage of (special) Euclidean groups.
  Actually, $GL(V)$ is the standard way the general linear group of a vector space $V$ is denoted, though it may not look as familiar as $GL(n,R)$. We will explain and add the definition of $SE(n)$.
- Page 3: The orbit definition seems trivial as written … Do you mean to consider a subgroup?
  You are correct for $\mathcal{X} \sim R^n$. We are indeed usually concerned with a subgroup (the approximate symmetries of $E$). We will reword it.
- Page 3: symbols in "Group parameters as DOF" are quite unclear to me.
  - Lie algebra basis $L_a$: is this a single basis element, or does $a$ run through … the Lie algebra?
    $L_a$ is a single basis element.
  - For the notation $\theta \cdot L$: Yes, it is $\sum_a \theta_a L_a$.
  - If $\theta$ is a vector, the Taylor expansion error term should be $O(|\theta|^2)$.
    Any combination $\theta_a \theta_b$ is second order, but, yes, one can pull out the norm of $\theta = |\theta| \hat{\theta}$ and write the error as $O(|\theta|^2)$.
  - Since … always GL, what's the point of this abstraction to Lie groups/algebras? Why not just identify the Lie algebra with $M_n(R)$?
    They are matrices, yes, but we want a subgroup.
- Boldfaced $I$ vs. normal. Any difference?
  No, we will fix it.
- Overall, I suggest … properly define the notation.
  Thank you, we will.
- Page 4: Equation 1 is not clear: is $S$ considered a subset of $GL(n,R)$ or $M_n(R)$? It should be the latter…
  We should clarify in the paper that we are working with the canonical representation of $GL(n,R)$, which maps elements to a subset of $M_n(R)$. So, yes, everything is in $M_n(R)$.
  Also, what's the dependence of $\delta E$ on the DOF? Why not just use $E_{barrier}$, which is already used on the previous page, and not introduce an extra $\eta$?
  Eq. 3, $\delta E = \theta \nabla E \cdot Lx$ (here $\theta \in R$), shows the dependence of $\delta E$ on the DOF $L$. We can replace $\eta$ with $E_{barrier}$ in Eq. 1.
- Page 4: the variables $\theta_l,\theta_s$ do not appear in Eqs. 1 and 2.
  Sorry, we changed them to $\epsilon_l, \epsilon_s$; we will fix this.
- $g\approx I+\theta L$ now takes $\theta$ as a number instead of a vector.
  A bit of an abuse of notation here; we will fix it. Here $L$ denotes an arbitrary vector in the Lie algebra instead of a specific basis element $L_a$. Then, $\theta \in R$ is a small $\epsilon$, making clear that $g$ is near the identity.
- Page 6: Eq. 15, $K = n_L$ as in Eq. (14)?
  Yes, we will fix it.

[1] Understanding recent deep-learning techniques for identifying collective variables of molecular dynamics

---

Rebuttal Comment 1.1: Comment: I think the paper has some nice ideas and I have increased my score, assuming that the authors can fix all the notational issues and inconsistencies. Also, the simple example system to demonstrate your approach would be very beneficial. It would be nice to describe it in a reply.

---

Reply to Comment 1.1.1: Comment: Thank you for your suggestions. The simple example system is quite interesting to explore. Because of space constraints, we only discuss one possible solution to the minimization problem for two methods instead of exploring the complete solutions. We refer you to our response to Reviewer p5r2 for the setup and part of the solution using the Hessian-based approach. We explore symmetries around a given minimum $x'$.
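To make the $\theta \cdot L = \sum_a \theta_a L_a$ notation and the near-identity expansion discussed above concrete, here is a small numeric check we added for illustration (not from the paper): it verifies that $g = e^{\theta \cdot L}$ agrees with $I + \theta \cdot L$ up to an $O(|\theta|^2)$ error.

```python
import numpy as np

def expm(A, terms=30):
    """Matrix exponential via a truncated Taylor series (adequate for small matrices)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

rng = np.random.default_rng(0)
L_basis = rng.standard_normal((2, 3, 3))    # two Lie-algebra basis elements L_a
theta = np.array([1e-3, -2e-3])             # small coefficients theta_a

A = np.einsum('a,aij->ij', theta, L_basis)  # theta . L  =  sum_a theta_a L_a
g = expm(A)                                 # group element near the identity
linear = np.eye(3) + A                      # first-order approximation
err = np.linalg.norm(g - linear)            # residual, of order |theta|^2
```

The residual shrinks quadratically with $|\theta|$, which is exactly the error term the reviewer asked about.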
### Hessian-Based Method ###

One correction: the Hessian should have $\sin(x'_2/\epsilon)$ instead of $\cos(x'_2/\epsilon)$, yielding:

$$ H_E(x') = H_F(|x'_1| - x'_2) \begin{bmatrix} 1 & -\textbf{sgn}(x'_1) \\\\ -\textbf{sgn}(x'_1)& 1 \end{bmatrix} - \frac{\sin(x'_2/\epsilon)}{\epsilon} \begin{bmatrix} 0 & 0 \\\\ 0 & 1 \end{bmatrix} = H_F(|x'_1| - x'_2) \begin{bmatrix} 1 & -\textbf{sgn}(x'_1) \\\\ -\textbf{sgn}(x'_1)& 1 - \frac{-1}{\epsilon H_F(|x'_1| - x'_2)}\end{bmatrix} = C \begin{bmatrix} 1 & -\textbf{sgn}(x'_1) \\\\ -\textbf{sgn}(x'_1)& \gamma \end{bmatrix}$$

The minimum condition gives $\sin(x'_2/\epsilon) = -1$. We refer to $H_F(|x'_1| - x'_2)$ as $\rho$ for brevity. In the previous reply, we explored the case where $\gamma \approx 1$, which corresponds to $\epsilon \gg \rho^{-1}$. For $\epsilon \ll \rho^{-1}$, we have $\gamma \rightarrow \infty$, or rather the approximation $ H_E(x') \approx C \gamma \begin{bmatrix} \gamma^{-1} & -\gamma^{-1}\textbf{sgn}(x'_1) \\\\ -\gamma^{-1}\textbf{sgn}(x'_1)& 1 \end{bmatrix}$ where $C\gamma = \frac{1}{\epsilon}$. Thus, we see that taking $L^* = \begin{bmatrix} 1 & 0 \\\\ 0& 0 \end{bmatrix}$ gives $K_S = a \begin{bmatrix} \gamma^{-1} & -\frac{1}{2}\gamma^{-1}\textbf{sgn}(x'_1) \\\\ -\frac{1}{2}\gamma^{-1}\textbf{sgn}(x'_1)& 0 \end{bmatrix}$. Ignoring the constant $a$, $2 tr(K_S^2) + tr(K_S)^2 = 2 (\gamma^{-2} + \gamma^{-2}\frac{1}{4} + \gamma^{-2}\frac{1}{4} ) + \gamma^{-2} = 4 \gamma^{-2} \approx 0$. Thus, $e^{\eta L^*} x = x + (e^\eta - 1)L^* x$, which only changes $x_1$, conserving $x_2 = (e^{\eta L^*} x)_2$.

The Hessian-based method gives the symmetry present at the lowest scale. If $\epsilon$ is small, we get the symmetry $L^* = \begin{bmatrix} 1 & 0 \\\\ 0& 0 \end{bmatrix}$ $\Big( x_2 = (e^{\eta L^*} x)_2$, corresponding to $x_2$ being conserved $\Big)$.
For large $\epsilon$, $L^* = \begin{bmatrix} 1 & \textbf{sgn}(x'_1) \\\\ \textbf{sgn}(x'_1) & 1 \end{bmatrix} \Big( x_1 - \textbf{sgn}(x'_1)x_2 = (e^{\eta L^*} x)_1 - \textbf{sgn}(x'_1)(e^{\eta L^*} x)_2$ is conserved$\Big)$.

### Direct Optimization ###

For the direct optimization method, the infinite-sample-limit optimization problem can be restated as

$$ \mathbb{E}_{x \sim \mathcal{N}(x', \sigma I)} \left[(\nabla E(x)^\top L x)^2 \right] = \mathbb{E} \left[tr (x\nabla E(x)^\top L)^2 \right] = \mathbb{E} \left[vec(L)^\top vec (x\nabla E(x)^\top) vec (x\nabla E(x)^\top)^\top vec(L) \right] = vec(L)^\top \mathbb{E} \left[vec (x\nabla E(x)^\top) vec (x\nabla E(x)^\top)^\top\right] vec(L) $$

Assuming a quadratic Hessian-based approximation of $F$ and assuming $\textbf{sgn}(x_1) = \textbf{sgn}(x'_1)$, i.e., the minimum $x'$ is far from the origin, we get the following matrix

$$ H_{E, \sigma} = \begin{bmatrix} A & -\textbf{sgn}(x'_1) A \\\\ -\textbf{sgn} (x'_1) A & A \end{bmatrix} + \begin{bmatrix} 0 & -\textbf{sgn} (x'_1) B \\\\ \textbf{sgn}(x'_1) B & C - 2B \end{bmatrix} $$

where $A$ scales with multiplier $\sigma^2 H_F(|x'_1| - x'_2)^2$, $B$ scales with multiplier $\sigma H_F(|x'_1| - x'_2) \frac{\sigma}{\epsilon} e^{-\left(\frac{\sigma}{\epsilon}\right)^2}$, and $C$ scales with multiplier $1 - e^{-\left(\frac{\sigma}{\epsilon}\right)^2}$. Henceforth, we refer to $H_F(|x'_1| - x'_2)$ as $\rho$.

Case $\epsilon \ll \rho^{-1}$: for $\epsilon \ll \sigma \ll \rho^{-1}$, we have $H_{E, \sigma} \approx \begin{bmatrix} 0 & 0 \\\\ 0 & C \end{bmatrix}$, which yields $L^* = \begin{bmatrix} 1 & 0 \\\\ 0& 0 \end{bmatrix}$. However, for $ \epsilon \ll \rho^{-1} \ll \sigma $, we get $H_{E, \sigma} \approx \sigma^2H_F(|x'_1| - x'_2)^2 \begin{bmatrix} A & -\textbf{sgn}(x'_1) A \\\\ -\textbf{sgn} (x'_1) A & A \end{bmatrix} + \begin{bmatrix} 0 & 0 \\\\ 0 & C \end{bmatrix} $, which yields $L^* = \begin{bmatrix} 1 & \textbf{sgn}(x'_1) \\\\ \textbf{sgn}(x'_1) & 1 \end{bmatrix}$.
Case $\epsilon \gg \rho^{-1}$: for $\epsilon \gg \sigma \gg \rho^{-1}$, we have $H_{E, \sigma} \approx \begin{bmatrix} A & -\textbf{sgn}(x'_1) A \\\\ -\textbf{sgn} (x'_1) A & A \end{bmatrix}$, which yields $L^* = \begin{bmatrix} 1 & \textbf{sgn}(x'_1) \\\\ \textbf{sgn}(x'_1) & 1 \end{bmatrix}$. For $\sigma \gg \epsilon$, we have $H_{E, \sigma} \approx \sigma^2 \rho^2 \begin{bmatrix} A & -\textbf{sgn}(x'_1) A \\\\ -\textbf{sgn} (x'_1) A & A \end{bmatrix} + \begin{bmatrix} 0 & 0 \\\\ 0 & C \end{bmatrix} $. But the second part is much smaller than the first and can be mostly ignored. Thus, we still get $L^* = \begin{bmatrix} 1 & \textbf{sgn}(x'_1) \\\\ \textbf{sgn}(x'_1) & 1 \end{bmatrix}$. Thus, we can choose the scale at which the symmetry is to be discovered.
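As a numerical sanity check on the small-$\epsilon$ analysis above, one can evaluate the Full Hessian loss $2\,tr(K_S^2) + tr(K_S)^2$ for the two candidate generators and confirm that the $x_2$-conserving candidate wins when $\gamma$ is large. This sketch is our own illustration; $s$, $C$, and $\gamma$ are placeholder constants:

```python
import numpy as np

def full_hessian_loss(L, H):
    """2*tr(K_S^2) + tr(K_S)^2 with K_S = (L^T H + H L) / 2."""
    K = 0.5 * (L.T @ H + H @ L)
    return 2.0 * np.trace(K @ K) + np.trace(K) ** 2

s, C, gamma = 1.0, 1.0, 100.0                  # sgn(x'_1) and placeholder constants
H = C * np.array([[1.0, -s], [-s, gamma]])     # Hessian at the minimum, small-epsilon regime
L_cons = np.array([[1.0, 0.0], [0.0, 0.0]])    # candidate that conserves x_2
L_sym = 0.5 * np.array([[1.0, s], [s, 1.0]])   # candidate that conserves x_1 - s*x_2

# full_hessian_loss(L_cons, H) equals 4*C**2 = 4*(C*gamma)**2 / gamma**2,
# matching the 4*gamma^{-2} scaling (relative to a = C*gamma) derived above.
```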
Summary: This paper introduces a data-free scheme for discovering the effective degrees of freedom in a molecular simulation, which is based upon the exploration of energy landscape symmetry. The effectiveness of this framework is theoretically illustrated and experimentally validated.

Claims And Evidence: Yes. The theoretical and experimental arguments and evidence are very convincing. I especially like the idea of using just the energy, rather than data, to learn the symmetry.

Methods And Evaluation Criteria: Yes, although more thorough benchmarks using even smaller systems, such as Lennard-Jones particles or charged particles, could also be interesting.

Theoretical Claims: Yes. They appear correct and sound.

Experimental Designs Or Analyses: Yes. They look reasonable, although Figures 3-5 are somewhat too qualitative and could benefit from more quantitative arguments to be more closely related to the theoretical arguments.

Supplementary Material: No.

Relation To Broader Scientific Literature: The contributions are extremely relevant to the molecular simulation and drug discovery community.

Essential References Not Discussed: NA.

Other Strengths And Weaknesses: NA

Other Comments Or Suggestions: NA

Questions For Authors: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their comments, and we try to address some of the concerns below:

### More Quantitative Results ###

As pointed out by the reviewer, some of the results presented in the paper might look a bit qualitative. Here, we provide some more quantitative statements about the results shown in the paper. We intend to add this table to the main paper in order to better illustrate the effectiveness of the discovered DOF.

Long Simulation Results: The table below shows the frequency of observing the molecule in a specific conformation as a function of the simulation time (the simulation is run with a step size of 2 femtoseconds). We consider a molecule to be within a conformational basin if the dihedral angles are within the known range for the specific conformer.

| | | 0.25 ns | 0.5 ns | 2.5 ns | 5 ns | 25 ns | 50 ns | 250 ns | 500 ns |
| ---------------------------- | -------- | ------- | ------ | ------ | ----- | ----- | ----- | ------ | ------ |
| | $C5$ | 0.257 | 0.268 | 0.233 | 0.247 | 0.246 | 0.238 | 0.215 | 0.23 |
| Alanine Dipeptide in Vacuum | $C7_{eq}$ | 0.479 | 0.429 | 0.448 | 0.436 | 0.436 | 0.425 | 0.382 | 0.408 |
| | $C7_{ax}$ | 0 | 0 | 0 | 0 | 0 | 0.027 | 0.117 | 0.059 |
| | | | | | | | | | |
| | $\\beta$ | 0.411 | 0.424 | 0.416 | 0.404 | 0.367 | 0.39 | 0.39 | 0.39 |
| Alanine Dipeptide in Solvent | $\\alpha_L$ | 0 | 0 | 0 | 0 | 0.001 | 0.001 | 0.001 | 0.001 |
| | $\\alpha_R$ | 0.043 | 0.043 | 0.048 | 0.047 | 0.041 | 0.043 | 0.043 | 0.043 |
| | | | | | | | | | |
| Chignolin in Solvent | folded | 0.992 | 0.994 | 0.997 | 0.79 | 0.206 | 0.4 | 0.4 | 0.4 |
| | misfolded | 0 | 0 | 0 | 0.183 | 0.073 | 0.048 | 0.048 | 0.048 |

DOF-assisted Simulation Results: Under our method, we only run a simulation of 2 picoseconds for every starting configuration provided by our DOF discovery algorithm. Thus, the conformers do not have enough time to escape from one conformational basin to another.
Thus, the initial distribution of the conformers stays the same throughout the short simulation. Here, we provide the frequency of finding a molecule within a given conformational basin (within 20 ps of simulation) given the initial 1000 starting configurations.

| | | Full Hessian | Slow Hessian | Degenerate Hessian | Optimization eps 0.1_0.01 |
| --------------------------- | ----- | ------- | -------- | --------- | ------------ |
| | $C5$ | 0.175 | 0.08 | 0.302 | 0.044 |
| Alanine Dipeptide in Vacuum | $C7_{eq}$ | 0.181 | 0.194 | 0.171 | 0.268 |
| | $C7_{ax}$ | 0.214 | 0.194 | 0.134 | 0.231 |

| | | Full Hessian | Slow Hessian | Degenerate Hessian | Optimization eps 0.1_0.01 | Optimization with solvent eps 0.1_0.01 |
| ---------------------------- | -------- | ------- | -------- | --------- | ------------ | -------------------- |
| | $\\beta$ | 0.427 | 0.149 | 0.485 | 0.299 | 0.369 |
| Alanine Dipeptide in Solvent | $\\alpha_L$ | 0.069 | 0.144 | 0.022 | 0.207 | 0.157 |
| | $\\alpha_R$ | 0.106 | 0.284 | 0.105 | 0.116 | 0.105 |

| | | Full Hessian | Optimization eps 0.1_0.01 | Optimization eps 0.5_0.01 | Optimization with solvent eps 0.1_0.01 | Optimization with solvent eps 0.5_0.01 |
| -------------------- | ------ | ------- | ------------ | ------------ | -------------------- | -------------------- |
| Chignolin in Solvent | folded | 0.648 | 0.6 | 0.646 | 0.678 | 0.674 |
| | misfolded | 0.006 | 0.024 | 0.013 | 0.024 | 0.019 |

### Simulating the algorithm on additional smaller systems ###

As suggested by the reviewer, we intend to add some example applications of the algorithm for smaller analytical systems. Please check bullet point 2 in our response to Reviewer p5r2 for further details and a small example system.
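To clarify how such frequency tables are assembled, here is an illustrative sketch of the bookkeeping: trajectory frames are assigned to a named basin when their dihedral angles fall inside that basin's window, and frequencies are the resulting fractions. The `(phi, psi)` windows below are placeholders of our own, not the paper's actual basin definitions:

```python
# Placeholder (phi, psi) windows for three basins; NOT the paper's definitions.
BASINS = {
    "C5":    {"phi": (-180.0, -120.0), "psi": (120.0, 180.0)},
    "C7_eq": {"phi": (-120.0, -60.0),  "psi": (60.0, 120.0)},
    "C7_ax": {"phi": (40.0, 100.0),    "psi": (-100.0, -40.0)},
}

def classify(phi, psi):
    """Return the basin whose dihedral window contains (phi, psi), else None."""
    for name, box in BASINS.items():
        if box["phi"][0] <= phi <= box["phi"][1] and box["psi"][0] <= psi <= box["psi"][1]:
            return name
    return None

def basin_frequencies(frames):
    """Fraction of trajectory frames falling in each named basin."""
    labels = [classify(phi, psi) for phi, psi in frames]
    return {name: labels.count(name) / len(labels) for name in BASINS}

freqs = basin_frequencies([(-150.0, 150.0), (-90.0, 90.0), (0.0, 0.0), (70.0, -70.0)])
```

Frames matching no window (like the third sample above) simply dilute all frequencies, which is why the columns in the tables need not sum to one.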
Summary: This work proposes an approach for discovering degrees of freedom in MD simulations, relying primarily on the Hessian matrix rather than large simulation data. The method was applied to two prototypical peptide systems, showing that it is capable of efficiently exploring the configuration space along discovered DOFs.

Claims And Evidence: The main results are on the qualitative side. At least I am not sure how to interpret the plots in a quantitative way. There is a lack of quantitative metrics. This is perhaps understandable given that the purpose is to discover collective variables, a somewhat ambiguous goal.

Methods And Evaluation Criteria: There is no comparison with competing existing approaches. See also above.

Theoretical Claims: I tried to go through the derivations but got lost halfway. So this part of my review should be considered incomplete.

Experimental Designs Or Analyses: See above in "Claims And Evidence".

Supplementary Material: The figures.

Relation To Broader Scientific Literature: There was a literature review of several aspects, but the paper needs more discussion of existing Hessian-based methods. See below.

Essential References Not Discussed: A key literature survey of Hessian or approximate-Hessian-based methods for materials or molecular modeling is missing. Hessian-based methods are extensively used in structure optimization and transition state finding/optimization. The latter is especially relevant for this paper.

Other Strengths And Weaknesses: Strength: the method is geared towards an important scientific question of collective variable identification and conformation exploration for molecular systems, with potential applications in drug design, protein folding dynamics, etc. An advantage of the method is that it does not require big MD simulation data.

Weakness: the presentation of the methodology is not easy to follow. Honestly, I did not quite finish it.
The different methods to find the CVs were given without immediate explanation. There were some discussions/motivations scattered in other places, but it was not clear how to interpret the results properly without a clear understanding of these methods. The results lack quantitative metrics and comparison with existing methods. This paper sounds like it has a lot of potential, but the present presentation prevents me from fully appreciating it.

Other Comments Or Suggestions:
* About Equation 4: The "direct optimization" uses Eq. 4, but it was stated that Eq. 5 should be used instead of Eq. 4.
* In "Full Hessian", Eq. 11 contains both $\sigma^2$ and $\sigma^4$. Was the whole of Eq. 11 used, or just the $\sigma^4$ term?
* Typo: "a stable conformations"

Questions For Authors: An easier-to-follow presentation of the theory could be very beneficial. A lot of jargon was invoked without explanation, which does not work for everybody.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1: Rebuttal:

### Literature Review for Hessian-based Methods ###

We thank the reviewer for pointing us toward this literature. We will include a detailed overview of the work in the literature and a comparison with the current paper. For geometric structure optimization, most of the methods in the literature build upon the idea of the Newton-Raphson method, accelerating gradient descent with second-order Hessian information to obtain quadratic convergence near local minima through faster updates without calculating the full Hessian. On the other hand, transition-state-finding methods utilize information about the eigenvector corresponding to a negative eigenvalue of the Hessian to find saddle points. Some synchronous transit methods like LST and QST also use Hessian information to better estimate the energy landscape near intermediate states between two final reaction states.

While conformer discovery shares some of the same objectives as geometric structure optimization and transition state finding, unlike geometric structure optimization, we need to escape local energy basins to find a diverse set of conformers. Unlike the transition-state-finding task, we do not have a specified final reaction state that we intend to reach. Our primary objective is to discover certain local degrees of freedom to explore the energy landscape and find more conformers efficiently. In our method, we discover local DoF for a given conformer and use the discovered DoF to explore the basin near the conformer without any additional knowledge of CVs.

### Better Presentation of the Methodology ###

As suggested by the reviewer, some of the methodology is quite hard to understand from the current presentation. We plan to reduce some of the jargon used in the paper and add a table of references in the supplementary materials to help with the notation used in the paper.
Additionally, we will add a section with some illustrative examples of applying our proposed methods to small systems. We present the following 1-D, 2-particle system suggested by Reviewer zCrM as an example. Due to space constraints, we only show the discovered local DoF using the Full Hessian method. We leave a more detailed discussion for larger systems to the paper.

Consider the energy function $E(x) = F(|x_1| - x_2) + \epsilon \sin\left(\frac{x_2}{\epsilon}\right)$ for $x \in \mathbb{R}^2$, where $F$ is a 1-D potential. In this case, we can use the first-order optimality condition to see that at any minimum, $F'(|x_1| - x_2) = 0$ and $\cos\left(\frac{x_2}{\epsilon}\right) = 0$. Then, we see that the Hessian at any such minimum, $x'$, can be given as

$$ H_E(x') = H_F(|x'_1| - x'_2) \begin{bmatrix} 1 & -\textbf{sgn}(x'_1) \\\\ -\textbf{sgn}(x'_1) &1 \end{bmatrix} - \frac{\cos(x'_2/\epsilon)}{\epsilon}\begin{bmatrix} 0 & 0 \\\\ 0 & 1\end{bmatrix} = C \begin{bmatrix} 1 & -\textbf{sgn}(x'_1) \\\\ -\textbf{sgn}(x'_1) & \gamma \end{bmatrix} $$

for some constants $C, \gamma$. To find a local symmetry transformation $L$ such that $E(e^{\eta L} x) - E(x) \approx 0$ for any $x$ near $x'$, we need $(\nabla E (x)^\top L x)^2 \approx 0 \implies ((x-x')^\top H_E(x') L x)^2 \approx 0$. The Full Hessian method requires us to minimize $2tr(K_S^2) + tr(K_S)^2$ where $K_S = (L^\top H_E (x') + H_E(x') L)/2$. This results in a generally complicated equation, but we get a simple solution for $\gamma \approx 1$. Taking $L^* = \frac{1}{2} \begin{bmatrix} 1 & \textbf{sgn}(x'_1) \\\\ \textbf{sgn}(x'_1) &1 \end{bmatrix}$ gives $K_S = a\begin{bmatrix} 0 & -\textbf{sgn}(x'_1)\frac{1 - \gamma}{2} \\\\ -\textbf{sgn}(x'_1)\frac{1 - \gamma}{2} & 1-\gamma \end{bmatrix}$. Thus, $2tr(K_S^2) + tr(K_S)^2 = 4(1 - \gamma)^2 \approx 0$ for $\gamma \approx 1$. Thus, $L^*$ forms a local DoF for points around a minimum $x'$.
As ${L^*}^2 = L^*$, exponentiating this matrix gives $e^{\eta L^*} = \sum_{j=0}^\infty \frac{\eta^j}{j!}{L^*}^j = I + \left(\sum_{j=1}^\infty \frac{\eta^j}{j!}\right)L^* = I + (e^{\eta} - 1)L^*$. Thus, we have $ e^{\eta L^*} x = x + (e^\eta - 1) L^* x = \begin{bmatrix} x_1 + \frac{e^\eta - 1}{2}(x_1 + \textbf{sgn}(x'_1) x_2) & x_2 + \frac{e^\eta - 1}{2}(x_2 + \textbf{sgn}(x'_1) x_1) \end{bmatrix}^\top$. So, we see that if $x'_1 > 0$, $x_1 - x_2 = (e^{\eta L^*} x)_1 - (e^{\eta L^*} x)_2$, and if $x'_1 < 0$, $x_1 + x_2 = (e^{\eta L^*} x)_1 + (e^{\eta L^*} x)_2$, giving two different local DoF (conserved $x_1 - x_2$ when $x'_1 > 0$ and conserved $x_1 + x_2$ when $x'_1 < 0$) depending on the local structure of the Hessian. ### Quantitative Results and Baselines ### Please refer to our response to Reviewer 9HNJ for additional results. Although there are existing methods for CV discovery, our method considers the slightly orthogonal task of discovering local DoF. Unlike CVs, the discovered DoF are local in nature and do not necessarily correspond to any global collective variable. Thus, the proposed method is not directly comparable to existing baselines.
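The worked example above can be checked numerically. The following numpy sketch (the constants $C$, $\gamma$, and the test point are illustrative choices, not from the paper) verifies both the conservation law under $e^{\eta L^*}$ and that the Full Hessian objective vanishes as $\gamma \to 1$:

```python
import numpy as np

s = 1.0                                   # sgn(x'_1); here we take x'_1 > 0
Lstar = 0.5 * np.array([[1.0, s], [s, 1.0]])

# L* is idempotent (L*^2 = L*), so exp(eta L*) = I + (e^eta - 1) L*.
def expm_L(eta):
    return np.eye(2) + (np.exp(eta) - 1.0) * Lstar

# Conservation: x_1 - x_2 is invariant along the flow generated by L* (s = +1).
x = np.array([1.3, 0.9])
for eta in (-0.5, 0.1, 0.7):
    y = expm_L(eta) @ x
    assert np.isclose(y[0] - y[1], x[0] - x[1])

# Full Hessian objective 2 tr(K_S^2) + tr(K_S)^2 vanishes as gamma -> 1.
C, gamma = 2.0, 0.97                      # illustrative constants
H = C * np.array([[1.0, -s], [-s, gamma]])
K = 0.5 * (Lstar.T @ H + H @ Lstar)
obj = 2.0 * np.trace(K @ K) + np.trace(K) ** 2
assert np.isclose(obj, C**2 * (1.0 - gamma) ** 2)   # = 4 a^2 (1 - gamma)^2 with a = C/2
print("L* is a local DoF: objective =", obj)
```

The idempotency of $L^*$ (which holds because $\textbf{sgn}(x'_1)^2 = 1$) is what collapses the matrix exponential into a single rank-one correction, so the flow can be evaluated in closed form.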
Transforming Visual Classifiers for Zero-Shot Text-Based Interpretability
Reject
Summary: The authors introduce a method for increasing the interpretability of image classifiers by combining the image classifier with a text tokenizer and their own trainable FFN. Essentially, the image feature vector obtained from passing the image to the vision classifier is converted into the text embedding space by the proposed FFN. This new image/text encoded vector is compared against each of the encoded dataset classes using cosine similarity, producing a new distribution over the output classes. The FFN is trained using cross-entropy loss between the predicted class distribution and the aforementioned cosine similarity vector. The two main contributions are that this allows for an increase in model interpretability without (1) needing additional labeled training data and (2) being specific to any particular vision encoder and text tokenizer. The authors provide a number of experiments. (1) They show an average loss in accuracy of only about 0.2% when compared against the vanilla model their method is augmenting. (2) They compare against other methods they discuss in the paper. (3) They adapt their method for zero-shot captioning and provide an experiment. (4) They also provide additional experiments in the appendix. ## Update from rebuttal ## During the discussion phase, I raised a question regarding the faithfulness highlighted by one of the other reviewers. It would have been ideal if the authors had used an example with multiple classes instead of a binary-class problem when responding to this question. The use of a binary-class problem limits the insights gained from the experiment. Doesn't the use of a two-class problem essentially make the concepts and the classes equivalent? Therefore, I lowered my score to a 3. Claims And Evidence: Yes Methods And Evaluation Criteria: The paper is rigorously evaluated using many models. Theoretical Claims: The main theoretical claim they make is with the loss function for the FFN. 
While I have not checked it myself, the presented equation is intuitive and makes sense in the context. Experimental Designs Or Analyses: The experimental setup is intuitive. Supplementary Material: I skimmed it, as I felt the main text did a good job of representing the paper. Relation To Broader Scientific Literature: Much research is going into model interpretability, as neural networks are considered black boxes until some algorithm is introduced that allows the user to understand how the network came to a decision. There are many ways to approach model interpretability. This paper approaches it by examining how the features of an image, generated by a visual encoder (CNN, transformer), can be explained through text. This particular line of research has been explored previously. The key contribution of this paper is that the authors present a way to provide textual interpretations of visual features without (1) needing labeled data and (2) being dependent on a particular language-model architecture. Essential References Not Discussed: The appropriate references are included. Other Strengths And Weaknesses: Strengths: 1) I found the use of the original image output distribution as the ground truth for the loss function to be a clever way to get around having to use labeled data and become specific to the architectures in use. 2) All of the writing is very clear and easy to follow. 3) The figures are particularly useful. 4) I appreciate the authors going into detail as to why their approach did not perform as well on B@4 and Meteor metrics on the Zero-Shot evaluation. This is a very solid submission that is organized well and presented clearly. Other Comments Or Suggestions: n/a Questions For Authors: During the discussion phase, I raised a question regarding the faithfulness highlighted by one of the other reviewers. 
It would have been ideal if the authors had used an example with multiple classes instead of a binary-class problem, which limits the insights gained from the experiment. Doesn't the use of a two-class problem essentially make the concepts and the classes equivalent? Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your time and effort in reviewing our paper, and for the thoughtful review and strong accept decision. We are delighted that you found our manuscript interesting and appreciated the importance and unique aspects of our work, such as the innovative, novel, and clever solution, the rigorous evaluations, clear writing, and detailed analysis. Your acknowledgment of the importance and strengths of our work is truly encouraging, and we are grateful for your supportive comments. --- Rebuttal Comment 1.1: Comment: I think the idea of mapping the output distribution of the labels of any classifier into a text space is an important contribution. This was the main reason for my initial score of the paper. Several of the other reviewers have raised concerns regarding the faithfulness of the explanations. The paper is mainly evaluated using end-to-end results on the quantitative side. Would it be possible to perform a quantitative evaluation to assess whether the extracted concepts are faithful? For example, reviewer bjB7 mentioned the issue below. However, I understand if it is not possible to generate a full set of new results before the end of the discussion period. 1. Quantitative experimental evaluation of the reliability and interpretability of the explanation is insufficient. The paper evaluates the proposed method mainly in terms of task accuracy. However, these evaluations do not guarantee the validity of the generated explanations. A fair comparison with existing methods should be made, e.g., by evaluating the quality of concepts and interventions as done in [c] and [d]. --- Reply to Comment 1.1.1: Comment: We thank you for your comment. We address your concern below with experiments. First of all, the previous work [R1] conducts the intervention experiments using the Waterbirds-100 dataset. This is a binary classification dataset which includes two classes: waterbirds and landbirds. 
The training images of waterbirds are on water backgrounds, and the training images of landbirds are on land backgrounds. However, the validation images do not have that correlation (the validation images of waterbirds are on land backgrounds and the validation images of landbirds are on water backgrounds). The model is assumed to learn the water-land correlation for classification on this task. By building a CBM, we can correct this bias by intervening on concepts in the CB layer. However, there are two problems with using the Waterbirds dataset in our work: 1) We cannot transform a classifier trained on this dataset with our method, because it is not possible to learn meaningful text representations (our MLP) using only two text labels (waterbird, landbird). 2) We also cannot test our ImageNet-trained models on this dataset because Waterbirds is a highly artificial dataset, and many of the samples create severe OOD samples for ImageNet-trained classifiers. Therefore, we conduct the following experiment. We manually created our own dataset of waterbirds/landbirds from ImageNet validation images. Specifically, for the waterbird images, we consider classes of birds from ImageNet that are found on water backgrounds at least 90% of the time (we manually inspected 100 random training images of those birds to verify this). For those birds, we then select their images in the ImageNet validation set that are found on land backgrounds. We perform a similar procedure for the landbird class. For the landbird class, we encountered an issue. While we found many classes of birds from ImageNet to have land backgrounds in their training images (almost all the time), we could not find many validation images of those birds with water backgrounds (e.g., we could not find any image of the bird *robin* with a water background in the ImageNet validation set). 
In order to solve this issue, we utilized a text-to-image generative model (Stable Diffusion 2.1) to generate images of those birds on water backgrounds. We ensured that the generated images of those birds have the correct physical and distinctive features of the bird. For all images, we always ensure that the background (water/land) is clearly visible. This leads us to a validation dataset of 140 images (70 images for each class): - Waterbirds: 70 images from the ImageNet validation set - Landbirds: 15 images from the ImageNet validation set, and 55 generated from Stable Diffusion. We create our ZS-CBM using the two class labels: "an image of a waterbird" (for the waterbird class) and "an image of a landbird" (for the landbird class), using the same concept set from [R1]. The ZS-CBM achieves a low accuracy as shown in the Table below, which indicates the bias that the model uses. To correct this, we intervene on the concepts in the CB layer, following the setup from [R1]: - Intervention R: We zero out activations of any bird concepts from the bottleneck layer, and expect the accuracy to drop. - Intervention K: We keep activations of bird concepts as they are, but scale down the activations of all remaining concepts (we multiply them by 0.1), and expect the accuracy to increase. The results are presented below for some models and show the success of our intervention experiments: |Model| Original CBM | Intervened (R)↓ | Intervened (K) ↑ | |-|-|-|-| |BeiT-B/16| 54.29| 41.43 **(-12.86)**| 58.57 **(+4.28)** | |BeiT-L/16| 52.86| 44.29 **(-8.57)**| 58.57 **(+5.71)**| |ConvNextv2_pt@384| 53.57| 42.14 **(-11.43)**| 59.29 **(+5.72)**| |ConvNext_Base_pt |53.57| 42.86 **(-10.71)**| 58.57 **(+5.0)**| |DiNOv2|52.86|43.57 **(-9.29)**|59.29 **(+6.43)**| [R1] Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery
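For concreteness, the two interventions described above can be sketched as simple operations on the concept-bottleneck activations. This is a numpy sketch of our reading of the setup; the function name, array shapes, and concept indices are illustrative, and the 0.1 scale follows the description above:

```python
import numpy as np

def intervene(concept_acts, bird_idx, mode):
    """Interventions on concept-bottleneck (CB) activations.

    concept_acts: (n_samples, n_concepts) CB-layer activations.
    bird_idx:     indices of bird-related concepts.
    mode 'R':     zero out bird concepts (accuracy expected to drop).
    mode 'K':     keep bird concepts, scale all other concepts by 0.1
                  (accuracy expected to increase).
    """
    acts = concept_acts.copy()
    mask = np.zeros(acts.shape[1], dtype=bool)
    mask[bird_idx] = True
    if mode == "R":
        acts[:, mask] = 0.0
    elif mode == "K":
        acts[:, ~mask] *= 0.1
    return acts

# Toy example: 2 samples, 4 concepts, where concepts 0 and 2 are "bird" concepts.
acts = np.ones((2, 4))
assert np.allclose(intervene(acts, [0, 2], "R"), [[0.0, 1.0, 0.0, 1.0]] * 2)
assert np.allclose(intervene(acts, [0, 2], "K"), [[1.0, 0.1, 1.0, 0.1]] * 2)
```

The intervened activations would then be passed through the (unchanged) linear head of the CBM to recompute accuracy.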
Summary: The paper aims to make arbitrary image classifiers interpretable by textual explanations. The paper points out as a challenge that existing methods rely on CLIP, which may limit applications. To this end, the paper maps image features of arbitrary visual classifiers to text features of off-the-shelf language models by transforming the image features with MLP and training the relationship between the visual features and the class labels in the text space to be similar to the original classification result. The paper proposes a zero-shot concept bottleneck model and zero-shot text decoding as applications of this mapping. Experiments quantitatively confirm that the proposed method outperforms baselines, including CLIP-based methods, in terms of accuracy and qualitatively evaluate the output text-based explanations. ## Updates after rebuttal I would like to maintain my rating according to the evaluation below: As discussed in the Claim and Evidence section of my initial review, my primary concern is the faithfulness of the explanations provided by the proposed method. In summary, after rebuttal, there are currently three critical problems in faithfulness. - Since the proposed method aligns visual and text features at the distribution level via MLP, the guarantee of the semantic correspondence between the proposed explanation and the visual feature is weak in theory. - Quantitative experimental evaluation of the explanation's reliability and interpretability is insufficient. Task accuracy is not appropriate for evaluating explanations. - The proposed method shifts the explanation dependency from CLIP to another language model, and the essential difference from the approach that uses CLIP is not clear because there is no empirical comparison in this regard. In addition to them, it became clear in the discussion that there is a lack of verification on fine-grained datasets. 
The authors present the results of the Places dataset in the zero-shot transfer setting of the ImageNet pre-trained model in Appendix A, but they avoid evaluating the model trained on Places. If the authors' claim is correct, the proposed method should also work effectively on such fine-grained datasets. This evaluation is quite important because the main motivation of this work is to make the custom visual models explainable, and such custom models are often used for specific (e.g., fine-grained) tasks. Based on these discussions, I believe the current rating should be maintained. Claims And Evidence: - **Claim 1. Parameter efficiency**. - The paper claims that the proposed method is efficient because it optimizes only MLP. - However, there is no basis for this claim because existing methods (e.g., Text-to-Concept [a]) have the same or lower number of parameters. - **Claim 2. Label-free training** - The paper claims that the proposed method does not require any label annotations when training MLPs. - This claim is valid because the algorithm in Section 3 trains MLPs for feature projection in a manner similar to self-learning without using labels. - **Claim 3. Faithfulness of explanation** - The paper claims that projecting image features of any model onto text features can provide a reliable explanation. - This claim has not been adequately evaluated. The reasons are as follows: - The proposed method learns a mapping between the image feature space and the text feature space using the text features of the class labels, but the basis for the validity of this mapping is insufficient. That is, since the image features and text features are not learned in a one-to-one correspondence, the paper cannot guarantee that the image features projected by MLP correspond semantically exactly to text features. For example, [b] guarantees this with cycle consistency by inverse mapping (i.e., text-to-image) over the feature spaces. 
- Quantitative experimental evaluation of the reliability and interpretability of the explanation is insufficient. The paper evaluates the proposed method mainly in terms of task accuracy. However, these evaluations do not guarantee the validity of the generated explanations. A fair comparison with existing methods should be made, e.g., by evaluating the quality of concepts and interventions as done in [c] and [d]. - The paper points out in L080 (right) that existing CLIP-based methods are biased towards CLIP but lacks discussion and rationale for this negative impact and the advantages of the proposed method. The proposed method uses an external language model instead of CLIP. In other words, the feature transformation by MLP is affected by the bias of the external language model. In fact, Table 1 shows that the use of textual features degrades the performance on average, indicating that the transformation is not perfect and that some changes have occurred. Without discussing the impact of these changes on interpretability, it is difficult to claim that the explanation is faithful. - **Claim 4. Architecture Independency** - The paper claims that the proposed method is applicable to arbitrary models. - Section 5 indeed provides experimental results in terms of performance, but there is insufficient evidence to show that it can provide a reasonable explanation for interpretability for each architecture. ### Reference - [a] Moayeri, Mazda, et al. "Text-to-concept (and back) via cross-model alignment." ICML 2023. - [b] Kim, Siwon, et al. "Grounding counterfactual explanation of image classifiers to textual concept space." CVPR 2023. - [c] Koh, Pang Wei, et al. "Concept bottleneck models." ICML 2020. - [d] Yang, Yue, et al. "Language in a bottle: Language model guided concept bottlenecks for interpretable image classification." CVPR 2023. Methods And Evaluation Criteria: The proposed method is implemented by a simple MLP and cosine similarity. 
The paper quantitatively evaluates the proposed method in terms of task accuracy and qualitatively evaluates the explanations. On the other hand, since the explanations are not evaluated quantitatively, the claims regarding the quality and faithfulness of the explanations are not verified. Theoretical Claims: There is no theoretical claim in this paper. The paper claims that the trained MLP can transform image features into text features, but this has no theoretical guarantee. Experimental Designs Or Analyses: The validity of the explanations provided by the proposed method has not been experimentally evaluated. For example, examining the intervention on the concept and the rate of recovery of the true concept would be helpful in evaluating the explanations. Also, since the dataset is limited to ImageNet and COCO, it has not been verified that the method generalizes to a variety of datasets and domains. Supplementary Material: I have not checked the Supplementary Material. Relation To Broader Scientific Literature: The goal of this paper is to obtain interpretability by connecting image and language models. In this regard, the idea of mapping image features to text features with MLP or linear layers has been reported in several existing studies [a,h]. The main novelty of this paper is to show that a textual explanation can be provided by mapping image features to text features without using CLIP. However, the contribution is limited due to the lack of evidence for the validity of the provided explanation, as discussed above. ### Reference - [h] Merullo, Jack, et al. "Linearly mapping from image to text space." ICLR 2023. Essential References Not Discussed: There is a lack of discussion with existing research on making any image classifier explainable. I recommend to include the following literature in the discussion. - Kim, Siwon, et al. "Grounding counterfactual explanation of image classifiers to textual concept space." CVPR 2023. - Laguna, Sonia, et al. 
"Beyond concept bottleneck models: How to make black boxes intervenable?." NeurIPS 2024. - Yang, Xingyi, and Xinchao Wang. "Language Model as Visual Explainer." NeurIPS 2024. - Balasubramanian, Sriram, Samyadeep Basu, and Soheil Feizi. "Decomposing and interpreting image representations via text in vits beyond CLIP." NeurIPS 2024. - Tan, Andong, Fengtao Zhou, and Hao Chen. "Explain via any concept: Concept bottleneck model with open vocabulary concepts." ECCV 2024. Other Strengths And Weaknesses: Nothing to report. Other Comments Or Suggestions: Nothing to report. Questions For Authors: See the section "Claims and Evidence" and address the concerns. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for his time in reviewing our paper. We clarify your concerns below. > Existing methods (e.g., Text-to-Concept [a]) have the same or lower number of parameters. As mentioned in L70-L84 in the related work, while the parameters may be the same or less, Text-to-Concept is not faithful (largely biased towards CLIP), and not label-free (requires supervision from CLIP features as the ground-truth). Efficiency is not the sole objective of our work. > the idea of mapping image features to text features with MLP or linear layers has been reported in several existing studies [a,h] Work [a] and its drawbacks are discussed clearly in the related work. We also mention how our method is different and how it tackles the problems of [a]. We refer you to L71-L84 (right). [h] is a work that uses image annotated captions as the training objective and therefore is very similar to DeVIL and the other mentioned works in Section 2 and suffers from the same limitations. We also mention how our method is different and how it tackles their problems. We refer you to L106 (left)-L61. The issues that you have raised are inherent in the literature and are thoroughly discussed in the related work. Our method is explicitly made to tackle the problems that these methods have. > The paper points out in L080 (right) that existing CLIP-based methods are biased against CLIP but lack discussion and rationale for this negative impact and the advantages of the proposed method. We discuss this thoroughly in L21-29 (right). > Table 1 shows that the use of textual features degrades the performance on average, indicating that the transformation is not perfect and that some changes have occurred. The performance drop is negligible (0.2 points drop on average across all models). In our view, this negligible drop does not degrade performance. 
Furthermore, the negligible loss is compensated by unlocking text-based interpretations for vision models, all in a zero-shot manner. > Since the image features and text features are not learned in a one-to-one correspondence, the paper cannot guarantee that the image features projected by MLP correspond semantically exactly to text features. We explicitly align the distributions rather than the features. The loss we use guarantees that the distributions are aligned. Using a one-to-one feature-to-feature mapping as the reviewer suggests will ignore the relation of other classes to the image, and will ignore the classifier’s reasoning process. As a very simple example, consider an image of a dog, with a chair present in the background. The softmax distribution for this sample would assign a high probability to the dog class and a moderate, non-negligible probability to the chair class. Therefore, using a distribution alignment loss considers the presence of both these classes. It cares about the distribution of the image over all classes, which is also how the classifier reasons and makes decisions. That is why using a distribution alignment loss rather than a feature alignment loss is a faithful, effective, and valid choice for our approach. > It has not been verified that the method generalizes to a variety of datasets and domains. We did verify the zero-shot generalization of our MLPs trained solely on ImageNet class names, to the COCO dataset. We mention this in detail in L360-L366, where we describe how the two datasets differ in distribution. Furthermore, we showed zero-shot classification generalization experiments to other datasets (Places365) in the supplementary material (Table 5). We were also transparent and acknowledged in our manuscript that the classification generalization of our method to more fine-grained datasets is a limitation of our method. 
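To make the distribution-alignment objective discussed in this response concrete, here is a minimal numpy sketch. The MLP stand-in, the temperature `tau`, and all shapes are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def alignment_loss(img_feats, class_text_feats, mlp, classifier_logits, tau=0.07):
    """Cross-entropy between the adapter's cosine-similarity distribution
    over class names and the frozen classifier's own softmax distribution."""
    proj = mlp(img_feats)                                     # map image feats to text space
    proj = proj / np.linalg.norm(proj, axis=-1, keepdims=True)
    txt = class_text_feats / np.linalg.norm(class_text_feats, axis=-1, keepdims=True)
    sims = proj @ txt.T / tau                                 # (batch, n_classes) cosine sims
    p_adapter = softmax(sims)
    p_teacher = softmax(classifier_logits)                    # original classifier distribution
    return -(p_teacher * np.log(p_adapter + 1e-12)).sum(axis=-1).mean()

# Toy usage: 4 images, 3 classes, 8-d features; a random linear map stands in for the MLP.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
loss = alignment_loss(rng.normal(size=(4, 8)), rng.normal(size=(3, 8)),
                      lambda z: z @ W, rng.normal(size=(4, 3)))
assert loss > 0.0  # cross-entropy of non-degenerate distributions is positive
```

Note how the target is the teacher's full softmax distribution rather than a per-image feature target, which is what lets the adapter reflect relations among all classes instead of a one-to-one feature mapping.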
> Examining the intervention on the concept and the rate of recovery of the true concept would be helpful in evaluating the explanations. The claims regarding the quality and faithfulness of the explanations are not verified. We thank the reviewer for the valuable comment. We agree that evaluating the intervention on concepts could offer additional insights into the interpretability and reliability of the explanations. However, evaluating with interventions on ImageNet poses challenges due to the lack of intervention annotations (the expected biases in ImageNet classes). We therefore chose the core, standard way of evaluating CBMs, which is task accuracy. For decoding of visual features, the main issue is that we don't know what the model uses for reasoning (there is no ground truth). There is currently a gap in the literature regarding a proper benchmark for such models. We have discussed the limitation of this evaluation in L355 (left) - L359 in the main manuscript. As there is no ground truth of what the model uses for reasoning, using annotated captions is the best approximation we can have. --- Rebuttal Comment 1.1: Comment: Thank you for the response. > As mentioned in L70-L84 in the related work, while the parameters may be the same or less, Text-to-Concept is not faithful, ... and not label-free ... Efficiency is not the sole objective of our work. If there is no baseline to compare and no technical novelty, then this is simply the result of a naive implementation, and efficiency is hardly the main contribution of the paper. I would recommend introducing efficiency as a secondary benefit and lowering the tone in writing. > Work [a] and its drawbacks are discussed clearly in the related work. We also mention how our method is different and how it tackles the problems of [a]. We refer you to L71-L84 (right). ... I've already read the discussion and understood the difference between methods. 
Here, I was discussing the main technical novelty of this paper in the context of the broader scientific literature, not the lack of discussion. In this sense, the main technical novelty of this paper is to show that a textual explanation can be provided by mapping image features to text features without using CLIP. However, as mentioned in the initial review, this technical novelty has not been validated sufficiently. > We discuss this thoroughly in L21-29 (right). Of course, I've already read the mentioned discussion. However, this discussion does not seem to address my concerns about the following: "Since the linear layer maps feature to the CLIP space, T2C is strongly biased towards interpreting the CLIP model rather than the original classifier." In other words, what is the difference between the explanation given by CLIP and that given by the proposed method? And why is the proposed method "faithful" even though it also performs feature transformations in MLP? The current paper seems to provide little convincing evidence in this regard, as “faithfulness” is not defined or adequately evaluated. > The performance drop is negligible (0.2 points drop on average across all models). Sorry for the lack of clarity. Here, I discussed the faithfulness of the explanation, not the negligibility of the performance drop. If there are changes in performance, there are changes in the features. Under such a situation, how does the proposed method guarantee the correctness of the explanation? Is the explanation faithful? > That is why, using distribution alignment loss rather than feature alignment loss is a faithful, effective and valid way for our approach. Thank you for the clarification. However, faithfulness has not been proved theoretically and experimentally. 
Also, the example you presented assumes a one-to-one correspondence between visual class objects and text, but text data is flexible enough to represent the whole image, including the relationships between classes. My question is, why can the proposed method guarantee that a feature converted uni-directionally from image to text is faithful? > Furthermore, we showed zero-shot classification generalization experiments to other datasets (Places365) in the supplementary material (Table 5). We were also transparent and acknowledged in our manuscript that the classification generalization of our method to more fine-grained datasets is a limitation of our method. Thanks for the additional explanation. The limited generalization in fine-grained recognition is critical because fine-grained recognition is considered the main application of custom image recognition models in industry, which is the target of this paper as presented in the Introduction. This explanation reveals that there is a gap between the motivation of the paper and the results of the proposed method. > We therefore chose the core, standard way of evaluating CBMs which is task accuracy. Given that black-box models achieve good accuracy, task accuracy cannot be used as a proxy measure of explainability. For example, [e] reports that models that explain the prediction via text sentences can achieve high accuracy even when the textual explanation is collapsing and meaningless. Thus, in my opinion, a systematic evaluation of explainability is essential in papers claiming explainability. Without this assessment, I do not think that the paper's claims about the faithfulness of explanations (especially those provided by CBM) are valid. Some datasets, such as CUB [f], provide ground-truth concept labels (attributes) and captions [g], making it possible to perform a quantitative evaluation of the concept. 
If it is still difficult to perform a mechanical quantitative evaluation, it is possible to introduce a human evaluation, as is done in [d]. [e] Yamaguchi, Shin'ya, and Kosuke Nishida. "Explanation Bottleneck Models." AAAI 2025. [f] Wah, Catherine, et al. "The caltech-ucsd birds-200-2011 dataset." (2011). [g] Reed, Scott, et al. "Learning deep representations of fine-grained visual descriptions." CVPR 2016. --- Reply to Comment 1.1.1: Comment: We thank you for the follow-up. > If there is no baseline to compare and no technical novelty, then this is simply the result of a naive implementation, and efficiency is hardly the main contribution of the paper. The technical novelty is the adapter trained to bridge the image and text space, thus making it applicable to **arbitrary vision backbones**, not just models with a CLIP backbone. Specifically, in contrast to existing work, we propose to align the computed cosine similarities from our adapter for classes, with the original softmax distribution, in order **to be faithful to the original classifier’s distribution** and thus the original model. Reviewer HxVm called this “a clever way to get around having to use labeled data and become specific to the architectures in use”. Indeed, as we mentioned before, the efficiency **is not the main contribution of the paper**. Rather, it is the 1) faithfulness to the original classifier through our novel modeling explained above, and 2) being label-free, which makes it “both practical (requiring minimal data annotation) and broadly applicable to various existing visual classifiers” (reviewer 4kqE). Regarding comparisons to baselines: We **do compare** against **5** baselines across different architectures **including CNNs and transformers** on the main downstream task of CBMs and do compare against the two recent works on the downstream task of text decoding. Our method achieves SoA results and all in a zero-shot setting. 
We do not compare to Text-to-Concept work [a] as they do not report any quantitative results on CBMs, only showing one qualitative example on CBMs (Figure 7 in their paper). CBMs are one of the primary downstream applications of text labeling for vision models. > What is the difference between the explanation given by CLIP and that given by the proposed method? The goal of the research is to make text accessible for *any* visual classifier, as currently this is restricted to CLIP models. The explanations we therefore get reflect the classifier (e.g., DINO) rather than CLIP. Now why is that even important? Because we may wish to explain a specific classifier and understand the biases and shortcuts it learns, through text. For example, one classifier could pay attention to shape and texture cues whereas the CLIP embedding could focus on color and “internet-popular” descriptions due to the training data of CLIP. Different models offer different insights to understand. If we can only produce text explanations for CLIP (or using CLIP, as in [a]), we lose the ability to interpret specialized or fine-tuned models—where interpretability often matters most (e.g., in medical imaging). There are many other text-based interpretability applications we could perform with our method, as well as zero-shot vision-language applications that were previously limited to CLIP. > The limited generalization in fine-grained recognition is critical Please note that this is not a problem with our formulation. This is the problem of the classifier itself. If the classifier (e.g., ResNet) cannot generalize to OOD fine-grained classes (which it does not), then **we cannot expect** our text-transformed classifier to do so. This is because our transformed classifier is a **replication** of the original classifier, and therefore inherits all its limitations. 
In fact, it would be bad to expect our transformed classifier to perform better than the original classifier because this means that it does not reflect its original reasoning. Also as shown in Table 5 in the Supp., better classifiers achieve better OOD fine-grained results, which perfectly aligns with the trend in OOD literature, as shown in the blue line in Figure 1 (top left) in the work of [R1], which represents OOD generalization of ImageNet-trained classifiers. [R1] Robust fine-tuning of zero-shot models > The example you presented assumes a one-to-one correspondence between visual class objects and text, but text data is flexible enough to represent the relationship between classes. The example we gave (feature-to-feature, one-to-one correspondence) was highlighting the limitations of the feature alignment loss that you suggested. We mentioned that this way **does not** represent the relationships between classes and the classifier’s way of reasoning over them. We specifically wrote in our rebuttal: “Using a one-to-one feature-to-feature mapping as the reviewer suggests will ignore the relation of other classes to the image, and will ignore the classifier’s reasoning process. Therefore, using distribution alignment loss cares about the distribution of the image over *all classes*, which is also how the classifier reasons and makes decisions”. By *all classes*, we mean relationships between classes, which is achieved by the softmax over all classes with respect to the image in our loss function. In conclusion, the example we gave was to demonstrate the limitations of your suggested counter approach.
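For concreteness, the distribution-alignment idea quoted above (aligning the softmax over the adapter's class cosine similarities with the original classifier's softmax over *all classes*) could be sketched as follows. This is a minimal numpy illustration; the function and variable names are placeholders, not the authors' actual implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def alignment_loss(vis_feats, adapter, class_text_embs, orig_logits, temp=0.07):
    """KL(original classifier softmax || softmax of adapted cosine similarities).

    Minimizing this trains the adapter to reproduce the classifier's
    distribution over all classes, rather than matching features one-to-one.
    """
    mapped = vis_feats @ adapter                                    # (B, d_text)
    mapped = mapped / np.linalg.norm(mapped, axis=1, keepdims=True)
    txt = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    sims = (mapped @ txt.T) / temp                                  # (B, C) cosine logits
    p, q = softmax(orig_logits), softmax(sims)
    return float((p * (np.log(p + 1e-9) - np.log(q + 1e-9))).sum(axis=1).mean())
```

Because the target is the full softmax distribution, the loss is sensitive to how every class relates to the image, which is the point the rebuttal makes against a one-to-one feature loss.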
Summary: The paper introduces a method to provide text explanations for vision models. Given textual representations of the image classes, a text encoder, and a visual classifier to interpret, the authors' method associates image samples with textual classes by essentially training an MLP to calibrate the vision latent space to the text encoder latent space. After training, this MLP can be used to generate text explanations via "text queries". They demonstrate this by generating textual concepts using CBMs, and by generating a natural language explanation using a pretrained language model. Crucially, the method is zero-shot in the sense that it does not need to be trained on paired samples (images, textual explanations), providing advantages over models such as CLIP. The authors demonstrate their method on an extensive set of 40 visual classifiers. Claims And Evidence: I find the paper and the method introduced intuitive, novel to the best of my knowledge, and the evidence sufficiently compelling for the applications shown. And overall I do like very much the idea of generating faithful text explanations for a pure vision model. I do however have some concerns, which I will outline in the remainder of this section and in the next sections. 1) The authors repeat that their approach is faithful. Their experiments on models trained on ImageNet (Table 1) are convincing for a first assessment. However, I am wondering if this faithfulness always holds. Some scenarios to consider: 
- How does the faithfulness vary as a function of the number of predicted classes and the text encoder used? 
- How does the faithfulness vary as a function of the performance of the to-be-explained classifier? 
- How does faithfulness vary as a function of the MLP? 
The reason I am asking is that I assume that faithfulness depends on how well the approximation $W \approx VU^T$ holds, with $W$ and $U$ corresponding to the notation introduced by the authors in Section 3, and $V$ somewhat related to $f,\tilde{f}, MLP$. It would be interesting if the authors could comment in this direction, since faithfulness is one of the main and most repeated claims, but I do not see it addressed in the limitations in the supplementary. 2) Along the lines of faithfulness, now in the context of CBMs (first application), towards the end of Section 4.1 the authors state that their CBM derivation is faithful because they do not change $U$. However, this seems wrong to me. Faithfulness of the "CBM transformation" evidently depends on the concepts in C, no? Specifically, it will probably depend on how the selected concepts are correlated to each other, as well as how they are or are not correlated to the classes of the vision task. Am I missing something? Methods And Evaluation Criteria: I find the method simple and reasonable for the goal stated by the authors (making a visual model text-queryable to generate explanations). However, I think faithfulness should be more thoroughly assessed and discussed (see above). Theoretical Claims: No theoretical claims are stated. Experimental Designs Or Analyses: 1) I personally find the qualitative examples (e.g. Figure 5) not particularly convincing. The authors interpret the results with statements such as (L. 322) "we see that the image is predicted as a “goose” because it has duck-like features". However, I find these findings a little bit left to human interpretation. More specifically, how do we actually know that the classifier actually classifies the image as a goose because of so-called duck-like features? What if it is because in all the training dataset pictures all the ducks are on the grass, and the "duck" concept is activated because of the greenery in the image? 
Furthermore, from an interpretability perspective, simply stating "duck-like features" as an explanation may not be enough. An ornithologist might want to know which specific duck-like features are used by the model. The authors should perhaps comment on this and possibly add it as a limitation of their method. 2) Following the point above, I think the CBM use case and experimental results could be strengthened with human user studies. 3) I find the experiments for the second application "Zero-Shot Decoding of Visual Features into Text" not really aligned with the goal stated by the authors (generate text to explain a visual model). It seems to me that the experiments confound the merits of the authors' method with the generalization capability of the underlying visual model and of the pretrained LLM to the COCO dataset. Specifically, Table 3 simply shows to me that ConvNext together with an LLM, and the authors' construct, generate captions that correlate with the ground truth captions better than the CLIP-based models. However, it does not tell me anything regarding whether a human user would understand how the visual model itself "reasons" about an image to achieve a prediction. Again, I think a human user study could be a good way (although not the quickest) to evaluate if their method enables interpretability for visual models. Supplementary Material: I quickly reviewed the supplementary material, mostly to read about the limitations. I believe the authors should more extensively discuss their method's limitations. Relation To Broader Scientific Literature: The authors provide a post-hoc global explainability method for image classification models. Furthermore, among post-hoc methods, I would say that the proposed method belongs to the class of surrogate methods (e.g. LIME) which attempt to approximate the original model in order to remain faithful (in this case an MLP is trained for "cross-modality calibration"). 
I believe the idea is simple, yet not quite explored in the literature to the best of my knowledge. Essential References Not Discussed: None to my knowledge. Other Strengths And Weaknesses: The paper is fairly easy to read. I think the idea of generating textual explanations for a vision model with no ground-truth annotations is fairly unexplored in the literature, so I appreciate the originality and the simplicity of the idea. I think better experiments would strengthen the paper. Other Comments Or Suggestions: No other major comments. Questions For Authors: No further major questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
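As a companion to the "text queries" summarized in this review, the query step could be sketched as below: visual features are mapped through the trained MLP/adapter into the text space and ranked against encoded text queries by cosine similarity. All names here are illustrative placeholders rather than the authors' API.

```python
import numpy as np

def text_query(image_feat, adapter, query_embs, query_texts, k=3):
    """Rank text queries (concepts, captions, class names) against one image.

    `adapter` stands in for the trained MLP that calibrates the vision
    latent space to the text encoder's latent space.
    """
    z = image_feat @ adapter
    z = z / np.linalg.norm(z)
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    sims = q @ z                        # cosine similarity per query
    top = np.argsort(-sims)[:k]
    return [(query_texts[i], float(sims[i])) for i in top]
```

Since only cosine similarities in the text space are needed, any set of queries can be encoded and ranked at test time without retraining, which is what makes the CBM application zero-shot.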
Rebuttal 1: Rebuttal: We thank you for your time and effort in reviewing our paper, and for the valuable feedback. We are glad that you liked our paper and thank you for all the positive points you reported for our work. Below, we address your concerns. 

> How does the faithfulness vary? 

As a function of different text encoders, we showed in Table 6 of the appendix that the performance is not affected by different text encoders. The difference is at most 0.07 points for any text encoder we use. We also justified the reason for this in L703. We also test the impact of the MLP on the faithfulness through the following ablations: 
- Mean Ablation: For an image, we replace the input features to the MLP with a constant mean of the features calculated across the full ImageNet validation set. 
- Random Features: For an image, we replace the input features to the MLP with random values sampled from a normal distribution with a mean and standard deviation equal to those of the features calculated across the full ImageNet validation set. 
- Random Weight Ablation: We randomize the weights of the MLP projection. 
- Shuffled Ablation: For an image, we replace the input features to the MLP with the input features of another random image in the validation dataset. 

For all ablations, we calculate the ImageNet validation accuracy. We expect the accuracy to drop in all ablations. This is clearly shown in the table below: the accuracy drops to near zero in the ablations. 
|Model|Mean Feature|Random Features|Shuffled Features|Random Weights|
|-|-|-|-|-|
|ResNet101v2|Ours: 81.49 / Ablated: 0.10|Ours: 81.49 / Ablated: 0.11|Ours: 81.49 / Ablated: 1.70|Ours: 81.49 / Ablated: 0.11|
|ConvNeXt-Base|Ours: 83.88 / Ablated: 0.10|Ours: 83.88 / Ablated: 0.11|Ours: 83.88 / Ablated: 1.79|Ours: 83.88 / Ablated: 0.10|
|BeiT-L/16|Ours: 87.22 / Ablated: 0.10|Ours: 87.22 / Ablated: 0.11|Ours: 87.22 / Ablated: 1.87|Ours: 87.22 / Ablated: 0.11|
|DINOv2-B|Ours: 84.40 / Ablated: 0.10|Ours: 84.40 / Ablated: 0.13|Ours: 84.40 / Ablated: 1.76|Ours: 84.40 / Ablated: 0.09|

> Faithfulness of the CBM transformation depends on the concepts 

Our Zero-Shot CBM formulation is general and flexibly allows using any concept set at test time directly (on-the-fly), including future improvements on reference concept sets. Indeed, if a (very) poor concept set is used, the approach is not able to faithfully reconstruct the original distribution. We assume that for the CBM the provided concept set is sufficiently expressive, which is the case even for most automatically generated concept sets, such as those from an LLM or the top 20K most common words. In this work, we chose the 20K most common words in English as our concept set for the sole reason of establishing a fair comparison with other works. We will address the case of poor choices for concept sets in our extended limitation discussion. 

> I find the qualitative examples a little bit left to the human interpretation. 

While we acknowledge that our qualitative example in Figure 5 may appear open to human interpretation and subject to confounding factors (e.g., background cues like greenery), this observation directly reflects the inherent ambiguity of the concept set. As mentioned in the previous concern, our zero-shot CBM formulation (Eq. 
2) is independent of the concept set used (we do not train on any concept set), allowing for diverse input concept sets that can be used at test time directly (on-the-fly) and can yield more precise, attribute-specific interpretations; we refer you to C1 of Reviewer 4kqE for further clarification and qualitative examples on different concept sets attached in an anonymous link. 

> Table 3 simply shows to me that ConvNext together with LLM, and the authors construct, generate captions that correlate to the ground truth captions better than the CLIP-based models. However, it does not tell me anything regarding whether a human-user would understand how the visual model itself "reasons" about an image to achieve a prediction. 

The table indeed shows that ConvNext (and some other models) with our adaptation performs better than the widely-used CLIP-based models. We agree that a user-centric benchmark would be very valuable for evaluation; however, such a benchmark does not exist yet. The main issue is that we don't know what the model uses for reasoning (there is no ground truth), so even if a human user study were conducted, the user could only judge whether the generated text is sensible with respect to the input and the corresponding predicted output. This hence does not evaluate whether the textually-decoded visual features correspond to the network's inner reasoning. There is currently a gap in the literature regarding a proper benchmark for such models. We have a corresponding discussion of the limitation of this evaluation in L355 (left) - L359 in the main manuscript. As there is no ground truth of what the model uses for reasoning, using annotated captions is the best approximation we can have.
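The feature ablations reported in the rebuttal's table could be reproduced with a small harness along these lines. This is an illustrative numpy sketch under assumptions: the real study uses ImageNet validation features and the trained MLP, which we stand in for with synthetic features and a plain linear head.

```python
import numpy as np

def ablate_features(features, mode, rng):
    """Corrupt the MLP's input features, mirroring the rebuttal's ablations."""
    if mode == "mean":       # constant dataset-mean feature for every image
        return np.broadcast_to(features.mean(axis=0), features.shape).copy()
    if mode == "random":     # noise with the dataset's mean/std
        return rng.normal(features.mean(), features.std(), size=features.shape)
    if mode == "shuffled":   # features of another random image
        return features[rng.permutation(len(features))]
    raise ValueError(f"unknown mode: {mode}")

def accuracy(features, linear_head, labels):
    preds = (features @ linear_head).argmax(axis=1)
    return float((preds == labels).mean())
```

With an informative head, intact features should score far above any ablated variant, which is exactly the drop from ~80+% to near zero that the rebuttal's table reports.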
Summary: The authors propose a method to convert a pre-trained visual classifier into a text-based classifier that supports interpretability through natural language. Specifically, they train an MLP layer to map visual features (extracted by an existing visual classifier) into a text-embedding space, where the cosine similarity with class names reflects the original classifier’s output probabilities. This approach requires no additional annotated data. The paper demonstrates that this method can effectively produce zero-shot concept bottleneck models (CBMs), as well as zero-shot decoding of the visual features into text descriptions. ## Update after rebuttal My initial concern has been somewhat addressed through rebuttal; however, I agree with the other reviewer’s concern regarding faithfulness, which I believe was not fully resolved in the rebuttal. Therefore, I am maintaining my original score. Claims And Evidence: * The authors assert that their technique can transform any existing visual classifier into a CBM-like model. They provide a mathematical formulation in Equation (2) to justify why the resulting classifier can be interpreted under the CBM framework. * In terms of performance, they show that their approach preserves much of the original classifier’s accuracy. Through experiments on ImageNet-1K with various architectures (e.g., ViT, CNN-based models), they report minimal drops in accuracy compared to the unmodified baseline classifiers. Methods And Evaluation Criteria: * The authors use the ImageNet-1K dataset, a standard benchmark for image classification, to assess their approach across multiple architectures (ViT, CNN, etc.). * They compare top-1 classification accuracy before and after the transformation. Additionally, they evaluate zero-shot concept bottleneck capability and text decoding performance to demonstrate the broader utility of their method. 
Theoretical Claims: The paper does not provide a formal proof or convergence analysis, aside from referencing Equation (2), which explains how the learned MLP mapping can be interpreted within a CBM-like framework. Beyond that, there are no detailed theoretical proofs. Experimental Designs Or Analyses: * The experiments focus on validating that the modified classifier behaves similarly to the original, with minimal accuracy drop on ImageNet-1K. * By mapping visual features into a text space, they also showcase zero-shot text decoding. While the authors provide examples illustrating how the method can generate text from visual features, it remains an open question whether the generated text always corresponds to truly “new” concepts or simply re-labels existing classes in a less structured manner. Supplementary Material: * Appendix A discusses certain limitations regarding concept analysis, specifically when concept words map to partial or incorrect semantic associations. * Appendix E provides details about the process of zero-shot decoding of visual features into text, describing the implementation and showing more qualitative examples. Relation To Broader Scientific Literature: * The paper relates to various lines of research aiming to interpret visual classifiers via textual or concept-based explanations, such as network dissection, concept discovery, and concept bottleneck models. * Traditional classification models trained on discrete class labels give limited insight into whether the learned features encapsulate more general attributes or “concepts.” This work attempts to shed light on those features by projecting them into a text space, thereby re-purposing them to be more interpretable. Essential References Not Discussed: There appear to be no critical references missing from the paper’s discussion. Other Strengths And Weaknesses: **Strengths:** 1. 
The proposed zero-shot CBM is both practical (requiring minimal data annotation) and broadly applicable to various existing visual classifiers. 2. The experimental results show only minor performance degradation, which makes the method attractive for real-world adoption. 3. The zero-shot decoding of visual features into text introduces new possibilities for model interpretability. **Weaknesses:** 1. In the examples (e.g., Figure 6), the “concepts” often appear more like class labels rather than truly granular attributes, raising the question of how meaningful these concepts are in practice. 2. CBMs typically require that each concept is a distinct factor contributing to the final prediction. However, the paper’s examples sometimes resemble re-labeled classes rather than a deeper set of human-recognizable attributes. It’s unclear if this fully aligns with the spirit of CBMs, which aim to make the classification process interpretable through concepts that are deeply related to the classes. Other Comments Or Suggestions: No Questions For Authors: * While Equation (2) demonstrates that the concept space and class space can be projected and aligned, how does one ensure that the discovered “concepts” are truly acting as the justification or rationale for the classification rather than merely providing similar or re-labeled classes? * How does the paper ensure that these text-based “concepts” fulfill the CBM philosophy of providing a transparent, concept-level explanation for the final class decision? Code Of Conduct: Affirmed. Overall Recommendation: 3
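The transparency question raised above is typically answered by decomposing a linear head's prediction into per-concept contributions (weight times concept activation), which is also the mechanism the authors describe in their rebuttal. A hedged numpy sketch with illustrative names:

```python
import numpy as np

def explain_prediction(concept_scores, class_weights, concept_names, k=3):
    """Per-concept contributions to one class logit: contrib_i = w_i * c_i.

    The logit is exactly the sum of the contributions, so the top-k terms
    form a faithful local decomposition of the decision.
    """
    contrib = class_weights * concept_scores        # elementwise, shape (Z,)
    top = np.argsort(-contrib)[:k]
    return [(concept_names[i], float(contrib[i])) for i in top]
```

Multiplying the (global, constant) weights by the input-dependent concept activations is what makes the explanation local to the specific image being classified.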
Rebuttal 1: Rebuttal: We thank you for your time and effort in reviewing our paper and for the valuable feedback. We thank you also for acknowledging the strengths of our paper. In what follows, we address your concerns and remarks. 

> [C1] In the examples, the “concepts” often appear more like class labels rather than granular attributes, raising the question of how meaningful these concepts are in practice... the paper’s examples sometimes resemble re-labeled classes rather than a set of attributes... How does one ensure that the discovered “concepts” are not merely providing similar or re-labeled classes? 

This issue is due to the concept set we used; it is not an issue of our formulation. Unlike existing CBM works that train on a specific concept set, our zero-shot CBMs allow us to use any concept set at test time directly (on the fly); this is mentioned in L197 (left). Our method is completely independent of the concept set, and we do not train our core method (Section 3) nor our CBMs (Section 4.1) on a concept set (as mentioned in L127 (right) and detailed in Section F of the appendix). The chosen concept set is merely used at test time in the applications, and our method is flexible to any concept set, as it merely involves encoding the concept set with the text encoder. In the example in Figure 5, we used the 20K most common words in English as our concept set. The 20K concept set includes both class-like names (e.g., duck, bird, pigeon) as well as granular attributes, and the reason for using it in our work as the main concept set is to establish a fair comparison with other works that also used it [R1]. In Figure 5, as mentioned in the Figure caption, while the last three examples use the 20K concept set, the first example uses LF-ImageNet, an attribute-specific concept set extracted from ImageNet classes using a large language model. We include examples from different sets precisely to highlight the flexibility of our approach to different concept sets. 
When using the 20K-word set, class-like names often appear closer to the image in the embedding space than fine-grained attributes, which explains why they dominate the top detected concepts. In Figure 6, we again use the LF-ImageNet set, which focuses on fine-grained attributes (e.g., powerful jaws, harpoon, gills, large triangular fins, etc.). To address your concern even better, we show in an anonymous link the top-detected concepts for the second image (goose) shown in Figure 5, using exactly the same setting but with various different concept sets. We used the following concept sets: 
- Creating our own concept set specific to attributes from ImageNet by prompting GPT4o-mini with the prompt: “Which physical features and attributes make a {class name} different from others of the same type?”, where {class name} is a class name from ImageNet. 
- The concept set of LF-ImageNet. 
- Creating our own concept set specific to attributes from ImageNet by prompting GPT4o and Llama LLMs with the prompt: “What are useful visual features for distinguishing a {class name} in a photo?”, where {class name} is a class name from ImageNet. 

The anonymous link is here: https://github.com/user-attachments/assets/cd8e0ba0-062b-4d44-a541-2fda87f870a2 

As can be seen, when using a concept set of fine-grained attributes, this issue no longer exists. Again, we merely used the 20K set as our main concept set in order to establish a fair comparison with other works. 

> [C2] How does the paper ensure that these text-based “concepts” fulfill the CBM philosophy of providing a transparent, concept-level explanation for the final class decision? 

Indeed, the concepts should provide a transparent explanation for the final classification. 
Our designed approach follows this principle, where the classification layer is a simple linear layer from the concepts to the output (classes): $ pred = w_1 c_1 + w_2 c_2 + w_3 c_3 + \dots + w_Z c_Z $, where $c$ is the concept, $w$ is the weight connecting the concept to the prediction class, and $Z$ is the total number of concepts. As this is a linear layer, we can interpret the classification decision by looking at the concepts with the highest weights connecting to the prediction. However, by merely looking at the weights, we only get a global explanation (since the weights are constant and do not change as a function of the input). That is why, in order to make the explanation local to the prediction, we have to multiply those weights with their concept activation scores obtained by feeding the image (as mentioned in L320-left). Therefore, we dissect the prediction into its input elements: $ pred = [w_1 * c_1 , w_2 * c_2 , w_3 * c_3 , \dots , w_Z * c_Z] $ and then obtain the top-k elements. The numbers shown next to the concepts in Figure 5 are calculated by multiplying the weight of the concept by its concept activation score ($w_x * c_x$). [R1] DN-CBM: Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery, ECCV 2024 --- Rebuttal Comment 1.1: Comment: Thank you for your response. However, I feel that the reply does not fully address the question I raised. > This issue is due to the concept set we used, it is not an issue of our formulation. I also agree that this may not be a problem with the proposed formulation itself. However, if the concept set already includes concepts that are closely aligned with the target classes, and if class predictions are effectively determined during the concept prediction stage and directly carried over to the final class prediction, I’m not sure this can be considered a truly new framework for interpretability using CBMs. 
Since empirical validation of the proposed framework is also crucial, I believe that experiments should have been designed with this concern in mind. Furthermore, the authors highlight that one of the strengths of the proposed ZS-CBM is that it preserves performance when converting existing models into the CBM structure. But if this performance preservation is due to the reasons mentioned above, can we truly consider this a meaningful transition to a CBM in the sense that we aim for with interpretable models? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the valuable comment. We address your concern below: To ensure that the concepts are meaningful and free of terms that are overly similar or directly derived from the target classes, we apply a rigorous filtering procedure to the concept set. Specifically, we remove any terms that 1) exactly match the target class name, 2) are constituent words that form the class name (for example, eliminating “tiger” and “shark” when the class name is “tiger shark”), 3) correspond to the subparent class (e.g., “fish” for the class “tiger shark”), 4) correspond to the parent class (e.g., “animal” for the class “tiger shark”), 5) denote other species within the same category, and 6) are synonyms of the target class name. We obtain 3, 4, 5, and 6 with an LLM for scalability (we used ChatGPT4o-mini). We perform this filtering procedure on the concept set before we apply our ZS-CBM formulation (Eq. 2). We test a selection of models and present the results below. As seen, the performance is completely independent of these terms: the accuracy remains essentially the same on all models tested. 

|Model|original set|filtered set|
|-|-|-|
|ResNet50|73.9|73.9|
|ResNet50v2|78.1|78.1|
|ResNet101|75.3|75.3|
|ViT-B/16|79.3|79.3|
|ViT-B/32|73.3|73.3|
|BeiT-B/16|83.0|83.0|
|BeiT-L/16|86.2|86.2|
|ConvNeXt-Base|84.0|84.0|
|ConvNeXtV2-Bpt@384|86.3|86.4|
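The filtering procedure described in this rebuttal could be sketched as follows. The string matching covers rules 1–2 (exact match and constituent words), while the LLM-derived terms (rules 3–6: subparent class, parent class, same-category species, synonyms) are assumed to be supplied as a precomputed list; all names here are illustrative.

```python
def filter_concepts(concepts, class_name, llm_related_terms=()):
    """Drop concepts matching the class name, its constituent words,
    or LLM-provided related terms (parent classes, synonyms, etc.)."""
    banned = {class_name.lower()}
    banned |= set(class_name.lower().split())          # constituent words (rule 2)
    banned |= {t.lower() for t in llm_related_terms}   # rules 3-6, from an LLM
    return [c for c in concepts if c.lower() not in banned]
```

Running the CBM evaluation on the filtered set versus the original set is then what produces the accuracy comparison in the table above.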
Stay-Positive: A Case for Ignoring Real Image Features in Fake Image Detection
Accept (poster)
Summary: The paper addresses AI-generated image detection challenges, showing that detectors rely on spurious real-image artifacts. It introduces Stay-Positive, which retrains the last layer to focus only on fake features by enforcing non-negative weights. Key contributions: 1. Identifies spurious correlations (e.g., WEBP compression) that mislead detectors. 2. Stay-Positive algorithm improves generalization by removing reliance on real-image artifacts. 3. Enhances robustness to post-processing, resizing, and newer models (FLUX, aMUSEd). 4. Improves detection of partially AI-modified images, aiding forensic applications. The findings suggest ignoring real-image features improves fake detection and reduces misclassification risks. Claims And Evidence: The paper provides strong empirical support for its main claims. It evaluates multiple generative models (LDM, FLUX, aMUSEd, GANs) and tests post-processing artifacts, resizing, and inpainting, making the results robust. Well-Supported Claims: · Spurious Correlations in Detection: The study shows real-image artifacts (e.g., WEBP compression, downsizing) cause misclassification, supported by fake and real score distributions. · Stay-Positive Improves Robustness: The method re-trains only the last layer with non-negative weights, preventing reliance on real-image features. Tables 1 and 2 confirm its effectiveness. Claims Needing More Support: · Post-Processing Artifacts: The study focuses mainly on WEBP compression. Testing JPEG, PNG, and frequency-based artifacts is needed to rule out dataset bias. · FLUX Misclassification due to real features: FLUX is out-of-distribution for the model. Testing diverse generative models (e.g., DALL·E, Imagen) would clarify if this is a broader limitation. Methods And Evaluation Criteria: Yes, the model is trained on LSUN, COCO, Redcaps and evaluated on LDM, FLUX, aMUSEd, GANs. Which includes up-to-date models and comparison datasets. 
The Stay-Positive modification correctly re-trains only the final layer, enforcing non-negative weights by setting all negative weights (which push the prediction toward zero, i.e., real) to zero. This ensures detection relies only on fake features, improving robustness to post-processing (compression, resizing, inpainting). Theoretical Claims: The argument for using non-negative weights in the final layer appears well-founded within the framework of their proposed method. It effectively forces the model to focus exclusively on fake image features, thereby reducing reliance on spurious correlations from real data. Logically, this makes sense because a feature that contributes positively to the detection of fakeness is assigned a label of 1. However, this technique introduces a bias toward the fake distribution, potentially limiting the model’s ability to generalize across different generators. Spurious Fake Features Remain: While the method removes real-image-related biases, it does not necessarily address spurious correlations within fake features. The paper itself acknowledges this: "Our approach ensures that the detector ignores real image-specific features, but these features can still shape its notion of fakeness." Experimental Designs Or Analyses: Yes, I reviewed the experimental setups and analyses. Below are the key experiments and their validity: 1. Case Study 1: Post-Processing Artifacts a. Post-Processing Artifacts: The experiment confirms that WEBP compression causes misclassification due to spurious correlations. The results are valid. b. Issue: The study only tests WEBP; broader validation with JPEG, PNG, and other artifacts is needed. c. The resizing-based artifact analysis only uses images generated by SDv2.1, despite an existing benchmark dataset for this type of evaluation. The study should leverage an existing benchmark – for example, https://github.com/grip-unina/ClipBased-SyntheticImageDetection/tree/main 2. 
Case Study 2: Generalization Across Generators a. The experiment examines how Corvi-trained detectors struggle with FLUX-generated images, suggesting reliance on spurious real-image features. b. The analysis is well-supported by fake and real score distributions, which show FLUX images receive higher real scores than LDM ones. c. Issue: FLUX images are out-of-distribution for the model. More tests on other out-of-distribution generators (e.g., Imagen, DALL·E) could indicate whether this is a generalizable limitation. 3. Stay-Positive Algorithm Validation a. Results confirm that the method reduces misclassification caused by spurious correlations, improving detection on post-processed and partially inpainted images. Supplementary Material: Yes, I reviewed the supplementary material, specifically: 1. Implementation Details (Appendix A.1) – Covers training setup, data augmentations, and optimization for Stay-Positive. 2. Performance on Real Images (Appendix A.2) – Analyzes real image distributions to ensure representativeness. 3. GenImage Benchmark Evaluation (Appendix A.3) – Evaluates Stay-Positive's performance on diffusion-based models in the GenImage benchmark. 4. Improved Detection of GAN-generated Images (Appendix A.5) – Extends Stay-Positive to GAN-generated images, comparing a ResNet-50 trained on ProGAN images with a modified version that ignores real features (GAN-Baseline vs. GAN-Baseline Ours). Relation To Broader Scientific Literature: 1. Unlike prior methods that focus on both real and fake features, this paper argues that fake image detection should be based exclusively on generative artifacts, disregarding any patterns related to real images, using a very simple method of second-stage training. 2. Experiments show significant performance gains in detecting FLUX and aMUSEd-generated images, which were previously misclassified due to reliance on real-image artifacts. 3. 
Prior detection methods fail to identify partially AI-modified images due to their reliance on real-image features. Essential References Not Discussed: The paper compares Stay-Positive to state-of-the-art zero-shot approaches like CLIP-based detectors (CLIPDet) and universal fake image detectors (UFD). · Findings: While CLIP-based zero-shot detectors perform well on seen distributions, they struggle with new diffusion models (e.g., FLUX, aMUSEd), whereas Stay-Positive improves generalization. There are not many citations of formal statistical tests, and zero-shot works such as: · Manifold induced biases for zero-shot and few-shot detection of generated image. · ZeroFake: Zero-Shot Detection of Fake Images Generated and Edited by Text-to-Image Generation Model These align with the idea of focusing on fake features generated by LDMs. Other Strengths And Weaknesses: Strengths: The paper introduces a simple yet effective idea: classifying fake samples primarily based on fake features rather than relying on the real data distribution. This approach has several advantages: a. The second-stage fine-tuning is relatively fast to perform. b. It focuses on specific features and artifacts generated by different image generators. c. The method is practical and applicable to real-world scenarios. d. The empirical diagnostics are well-structured, intuitive, and logically sound. e. Robustness to inpainting detection. Weaknesses: The approach is supervised and trained on relatively small datasets, especially compared to zero-shot methods that leverage large-scale, pre-trained predictors. Ignoring certain real-image features may lead to misclassification in cases where "fakeness" is absent in generated samples. This poses a generalization limitation, particularly when detecting more advanced or unseen generative models. Moreover, this approach may introduce a strong bias toward the fake distribution, potentially compromising the model's generalization. 
Other Comments Or Suggestions: 1. Compare the proposed method with more zero-shot techniques to provide a broader evaluation.
2. Further test the hypothesis from Case Study 2 by applying additional post-processing techniques and evaluating more generative models. This will strengthen the analysis derived from distribution plots, as the observed differences could also be attributed to a generalization gap.
3. Some experiments were conducted on very small datasets, such as Section 5.1.2 (Resizing-based artifacts), which may impact the reliability of the findings.
4. Certain tests utilized custom-generated images from Stable Diffusion 2.1 rather than pre-existing benchmark datasets (e.g., in Section 5.1.2). For instance, WhichFaceIsReal has a benchmark dataset based on COCO, and other open-source datasets are available as well. Using standardized datasets would improve reproducibility and comparability.
Questions For Authors: --
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Further test the hypothesis from Case Study 2 by applying additional post-processing techniques and evaluating more generative models. This will strengthen the analysis derived from distribution plots, as the observed differences could also be attributed to a generalization gap** **Test on other generators**: We would like to clarify that the generalization gap could manifest in two ways. First, the learned patterns associated with fake images may be absent in FLUX images. Second, while some signs of fakeness may be present in FLUX images, the decision may be influenced by spurious signs of realness. We believe the latter issue should be avoided, which our work demonstrates. The principle that relying on real features harms the detector holds true for generators beyond FLUX. We have evaluated our approach on aMUSEd (Tables 1, 2) and observed the same issue. This effect is also evident in VQDM (Table 4) and generators like DALL-E, GLIDE and ADM[1] (https://imgur.com/a/eLTN1Kv), reinforcing our claim that this is a broader limitation of existing methods, which our approach mitigates. **Post-Processing**: We have tested our detector's sensitivity to JPEG compression, additive noise, and low-pass filtering. For the results please refer to the response to reviewer JDtd. However, we would like to note that PNG is a lossless compression format and does not introduce artifacts. We hope that this serves as evidence regarding the general nature of this limitation. **However, this technique introduces a bias toward the fake distribution, potentially limiting the model’s ability to generalize across different generators.** If an image lacks signs of fakeness, our method will not classify it as fake, which we do not consider a limitation. Given training images from a specific generator, we can only learn how that generator deviates from the real distribution. Without detectable artifacts, there is insufficient evidence to classify an image as fake. 
Our work is the first to show that when generalizing to images from a known family of generators (Introduction, lines 12–28), existing detectors underperform because they rely on real features. Without signs of “fakeness”, existing detectors may attempt to classify images as real based on learned patterns, but as shown in Sections 3 and 5.4, these features are mostly spurious, making the classification hypothesis unreliable despite potential success in some cases. Appendix A.2 shows that our detectors, Corvi+ and Rajan+, trained on COCO and LSUN real/fake images, perform similarly on real images from other domains, like GTA and artworks, despite these domains being unseen during training. This confirms that our method detects fakeness patterns, not just labeling out-of-distribution images as fake, reinforcing our core claim. **Spurious Fake Features Remain** It is true that spurious fake features can remain even after our method. However, we would like to clarify that the purpose of the study is to show that the real features learned by the neural network are spurious, which ends up harming the detector's generalization. **Use existing benchmarks for resizing studies** We thank the reviewer for sharing the Synthbuster benchmark for the resizing study. We have tested our approach on this benchmark, using a plot similar to Fig 8 from the Synthbuster paper. Results Link: https://imgur.com/a/SMYKgea The results show that our method effectively mitigates Corvi's spurious association of downsampling with realness, providing evidence of the generality of our algorithm. **Compare the proposed method with more zero-shot techniques to provide a broader evaluation.** We compare our detector with large model-based detectors like UFD and ClipDet, and observe that these methods do not perform as well as fully-trained neural networks, as shown in Table 2.
We also tested the latent diffusion specific zero-shot detector AEROBLADE (Ricker et al., 2024), which fails to match the performance of our detectors. Therefore, we do not believe leveraging pre-trained techniques offers advantages over our approach.

| AP | SD | MJ | KD | PG | PixArt | LCM | Flux | Wuerstchen | aMUSEd |
|----|----|----|----|----|--------|-----|------|------------|--------|
| AEROBLADE | 90.81 | 96.48 | 94.03 | 71.53 | 87.84 | 60.34 | 88.39 | 85.93 | 88.39 |
| Corvi + (Ours) | 98.94 | 94.92 | 97.71 | 97.87 | 98.59 | 98.74 | **94.23** | **98.16** | 95.47 |
| Rajan + (Ours) | **99.23** | **96.98** | **98.22** | **98.53** | **99.11** | **99.57** | 91.85 | 94.74 | **97.26** |

**References not discussed**
We would like to highlight the fact that we have discussed zero-shot methods (Related Work, line 417-428). However, we will also cite works such as ZeroFake in our final version.

**References**
[1] Dhariwal, P., & Nichol, A. (2021). Diffusion models beat gans on image synthesis. NeurIPS 21

---

Rebuttal Comment 1.1: Comment: The authors addressed the main concerns with additional experiments and clarifications. They expanded their evaluation to include more generative models and post-processing methods, tested on the Synthbuster benchmark for resizing artifacts, and added comparisons with zero-shot detectors such as AEROBLADE. They indicated that these results will be included in the final version. These additions help support their claims and improve the completeness of the evaluation. The core idea is simple and practical. While the method is straightforward, it appears effective in reducing misclassification caused by spurious correlations and shows some robustness across different generative models and artifacts, though it still relies on supervised training and inherits the associated limitations. Given the strengthened empirical support and broader evaluation, I am increasing my score to 3.
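For reference, the AP values exchanged in this thread are average precision over a mixed real/fake test set, with higher detector scores meaning "more likely fake". A minimal, self-contained computation of that metric (illustrative only, not the authors' evaluation code) might look like:

```python
def average_precision(scores, labels):
    """Average precision for a real(0)/fake(1) split, assuming a higher
    score means 'more likely fake'. Matches the usual step-wise
    precision-recall definition for distinct scores."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    true_pos, ap, n_pos = 0, 0.0, sum(labels)
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            true_pos += 1
            ap += true_pos / rank  # precision at each recalled fake
    return ap / n_pos

# perfect ranking -> AP = 1.0; one real image ranked above a fake lowers it
perfect = average_precision([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 0])  # 1.0
mixed = average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0])    # (1 + 2/3) / 2
```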
Summary: This paper presents a method for improving fake image detection. It makes the observation that a fake image detector may learn spurious features associated with real images, such as post-processing artifacts or image quality, which may lead to suboptimal detection performance. It thus proposes to constrain the detector to focus solely on artifacts that characterize fake images. Specifically, it focuses on the setting where the last layer of the fake image detector is a linear layer, where positive weights contribute to the likelihood of classifying as fake and negative weights reduce this likelihood. It proposes retraining this layer so that only positive weights contribute to the final prediction. Experiments show that this approach improves the robustness and consistency of fake image detectors.
## update after rebuttal
I am satisfied with the authors' rebuttal and keep my original score of 3.
Claims And Evidence: The claims made in the paper are generally backed up by experimental analyses and/or citations of findings from existing works. Methods And Evaluation Criteria: [Method] Strengths:
* The proposed method looks reasonable. Conceptually, it does address the motivation of preventing the spurious correlation associated with real images.
* It is lightweight and is compatible with other existing methods of learning fake image detectors.
Concerns:
* In Figure 7, it appears that the Redcaps images have much higher fakeness probabilities under Corvi with the proposed method than under the regular Corvi model, even at scaling factor 1. As the scaling factor increases, the fakeness probability under Corvi+proposed method further increases -- e.g. under scaling factor 1.6, a Redcaps image on average gets a fakeness probability above 0.5 under Corvi+proposed method, but only around 0.1 on average under regular Corvi. Could the authors provide more discussion about this phenomenon?
* Related to the question above, since the proposed method retrains the last layer to focus on fake attributes, would it be a concern that spurious correlations associated with fake images might inadvertently be amplified through this process? [Evaluation Criteria] The selection of benchmark datasets is reasonable and covers diverse real-world scenarios. The evaluation metrics are appropriate for this task. Theoretical Claims: N/A. This work does not involve theoretical claims or proofs. Experimental Designs Or Analyses: The experimental designs and analyses are reasonable. * The settings consider comprehensive scenarios, where the real images cover different domains and styles, and the fake images are sourced from various recent, widely used diffusion models and GAN models, with both full and partial (i.e. inpainting) generations. This covers realistic scenarios in practical applications and demonstrates the generalizability of the proposed method. Results verify that the proposed method attains consistent performance improvements across these different conditions. * Analyses are provided to support the claims that existing fake image detectors learn spurious correlations of real images, such as compression and downsizing, and show that the proposed method successfully mitigates the problem.
Other Comments Or Suggestions: [Minor Suggestions on Writing/Presentation] * The second and third paragraphs of the introduction seem a bit repetitive and read like paraphrased versions of each other. They could be condensed into one. * It could be helpful to add an “Average” column in the tables to make it easier to compare overall performance and consistency across different fake image sources among the baselines as well as the proposed method. * Typo in Limitations - L415 should be "associates upsampled images with the *fake* distribution"? Questions For Authors: * See the questions under Methods And Evaluation Criteria. * Additionally, are there any insights or analyses of what specific types of real or fake features are learned by the detectors, e.g. by inspecting the weights, using saliency maps, or other interpretability methods, etc.? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **In Figure 7, it appears that the Redcaps images have much higher fakeness probabilities under the Corvi with the proposed method model than under the regular Corvi model. Could the authors provide more discussion about this phenomenon?** We thank the reviewer for raising this point. In all our experiments (Sections 5.2, 5.3, and 5.4), we observe that Corvi+ performs better in detecting fake images, despite the apparent "higher probability of fakeness for real images" (further details in Appendix A.2). This issue arises from our use of the term "probability of fakeness," which we now recognize as a misleading interpretation. Our approach removes negative weights in the final layer, meaning it does not rely on "real features." As a result, the real score (Section 3.2, lines 156–159) is always zero, preventing our method from assigning highly negative scores for a given image. Consequently, for real images, the sigmoid output is closer to 0.5 rather than 0. This does not imply that a real image is half as likely to be fake, but rather represents a logit score that helps separate real and fake distributions. We will provide additional clarification for this in our final report. Both Corvi and Corvi+ learn the same spurious fake features, such as upsampling artifacts. However, because Corvi+ eliminates spurious correlations associated with the real distribution while retaining those linked to the fake distribution, it ultimately has fewer spurious correlations overall. This leads to better detection performance. **Related to the question above, since the proposed method retrains the last layer to focus on fake attributes, would it be a concern that spurious correlations associated with fake images might inadvertently be amplified through this process?** There is a possibility that spurious correlations associated with the fake distribution could be inadvertently amplified. 
However, the core goal of our work is to highlight that patterns associated with the real distribution are spurious and should not influence the decision. Ignoring these patterns improves the detector's performance, independent of the spurious features in the fake distribution. **Additionally, are there any insights or analyses of what specific types of real or fake features are learned by the detectors, e.g. by inspecting the weights, using saliency maps, or other interpretability methods, etc.?** We have identified and explained specific artifacts linked to each distribution. In Section 3, we show that compression artifacts can be associated with real images, and low-level artifacts may be spuriously linked to the real distribution due to quality differences in the training data (Section 3.2). Further evidence is provided in Section 5.3.2 (lines 372–384), where post-processing FLUX- and aMUSEd-generated images significantly improves the performance of the original Corvi detector (from 57.25 to 74.57 on FLUX) and Rajan detector (from 80.64 to 87.80 on FLUX), suggesting that post-processing removes spurious low-level features which the detector associates with the real distribution. We also experimented with GradCAM [1], but the activation maps did not provide clear insights, so we did not discuss them in our work. **Minor Suggestions** We thank the reviewer for providing us with these minor suggestions, we will incorporate these changes in the final version. **References** 1. Selvaraju, Ramprasaath R., et al. "Grad-cam: Visual explanations from deep networks via gradient-based localization." Proceedings of the IEEE international conference on computer vision. 2017. --- Rebuttal Comment 1.1: Comment: Thank the authors for the rebuttal. I am content with the response and will keep my score.
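The logit interpretation in the rebuttal above (negative weights removed, so the real score is fixed at zero and a real image's sigmoid output sits near 0.5 rather than 0) can be sketched with toy numbers. The feature values and weights below are hypothetical, chosen only to illustrate the mechanism, and are not the authors' code:

```python
import math

def stay_positive_score(feats, weights, bias=0.0):
    # Negative weights are dropped, so the logit only accumulates
    # evidence of fakeness; there is no "real score" to subtract.
    return sum(f * max(w, 0.0) for f, w in zip(feats, weights)) + bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

weights = [1.2, -0.8, 0.5]    # the -0.8 was a "real feature"; it is ignored
real_feats = [0.0, 3.0, 0.0]  # no fake artifacts fire on this real image
fake_feats = [2.0, 0.0, 1.0]  # fake artifacts fire on this generated image

real_out = sigmoid(stay_positive_score(real_feats, weights))  # exactly 0.5
fake_out = sigmoid(stay_positive_score(fake_feats, weights))  # well above 0.5
```

The 0.5 output for the real image is a separating logit score, not a statement that the image is "50% likely fake", which is the clarification the rebuttal makes.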
Summary: This paper introduces "Stay-Positive," an algorithm designed to improve AI-generated image detection by focusing solely on generative artifacts while disregarding features associated with real images. The authors argue that spurious correlations, such as compression artifacts that detectors mistakenly associate with real data distribution, significantly impact detection performance. Their key insight is that an image should be classified as fake if and only if it contains artifacts introduced by the generative model. The proposed method involves retraining the last layer of existing detectors to constrain them to focus exclusively on generative artifacts. Through extensive experimentation, the authors demonstrate that Stay-Positive improves detector performance in several ways: (1) reducing susceptibility to spurious correlations, (2) enhancing generalization to newer generative models within the same family, and (3) increasing robustness to post-processing operations like compression and downsizing. Notably, the authors show substantial improvements on challenging generators like FLUX and aMUSEd compared to baseline methods. Claims And Evidence: The claims made in the paper are generally well-supported by empirical evidence. The authors provide comprehensive experimental results that demonstrate: - The effectiveness of Stay-Positive in improving detection performance across multiple generative models, with particularly significant gains for challenging models like FLUX (42.08 AP improvement) and aMUSEd (10.62 AP improvement). - Enhanced robustness to post-processing operations like compression and downsizing, which is demonstrated through controlled experiments and visualization. - The premise that focusing solely on fake image features rather than real image features yields better generalization. 
These claims are supported by quantitative results presented in tables and figures, showing performance metrics (Average Precision) across different scenarios and comparing against established baselines. The experimental design includes appropriate controls and covers a wide range of generative models and post-processing techniques. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. The Stay-Positive algorithm involves retraining the last layer of existing detectors to focus exclusively on fake image features while ignoring real image features, which directly addresses the identified problem of spurious correlations. For evaluation, the authors use a diverse test set comprising:
- Real images from various sources (Redcaps, WikiArt, LAION-Aesthetics, whichfaceisreal)
- Fake images generated by multiple models (SDv1.5, MidJourney, Kandinsky, FLUX, etc.)
- Post-processed versions to simulate real-world conditions
- Partially inpainted images
The evaluation metrics (Average Precision) are standard and appropriate for binary classification tasks. The authors test across a wide range of scenarios, including different generative models and various post-processing operations, which provides a comprehensive assessment of the method's effectiveness and generalization capabilities. Theoretical Claims: The paper does not present formal mathematical proofs for theoretical claims. Instead, it focuses on empirical validation of the proposed approach through extensive experimentation. The conceptual foundation—that detectors should only focus on fake artifacts and ignore real image features—is well-articulated and supported by the experimental results, but no formal proofs are provided. Experimental Designs Or Analyses: The experimental design appears sound and comprehensive.
I examined the main experiments presented in the paper:
- Comparison with baseline detectors (Corvi and Rajan) across multiple generative models, with and without post-processing
- Robustness tests against compression and downsizing
- Performance on challenging newer models like FLUX and aMUSEd
The authors use appropriate controls, ensure diverse test datasets, and report standard metrics. The experiments effectively isolate the contribution of the Stay-Positive approach by comparing it with strong baselines using the same underlying architectures. One particularly convincing aspect is the demonstration of improved performance on generative models not seen during training, which supports the claim about better generalization capabilities.
Supplementary Material: Based on the available PDF, I reviewed the appendix section, which contains:
- Implementation details, including training recipe, batch sizes, data augmentations, and inference methodology
- Additional experiments on the performance with different types of real images (Test Real, GTA, ImageNet, Cubism, Pop Art, Modern Art)
- Validation that the test set represents various types of real image families
These supplementary materials provide important details about the experimental setup and additional validations that strengthen the main claims of the paper.
Relation To Broader Scientific Literature: This work builds upon and extends previous research in fake image detection:
- It addresses limitations identified in recent works by Corvi et al. (2023) and Rajan et al. (2024), particularly regarding robustness to post-processing and generalization to new models.
- It connects to the broader literature on spurious correlations in machine learning models, applying these concepts specifically to the fake image detection domain.
- The paper relates to work on detecting images from diffusion models, flow-based models, and other generative architectures, extending detection capabilities to newer models like FLUX and aMUSEd. Essential References Not Discussed: I think the references in the manuscript are relatively comprehensive. Other Strengths And Weaknesses: Strengths: - The proposed approach is conceptually simple yet effective, making it easy to implement on top of existing detectors - The extensive evaluation across multiple generative models and post-processing techniques demonstrates the method's practical utility - The focus on reducing spurious correlations represents a meaningful contribution to improving detector robustness Weaknesses: - The paper could benefit from a more detailed analysis of potential limitations or failure cases of the proposed approach - There is limited discussion about the computational overhead or additional training time required for implementing Stay-Positive compared to traditional approaches Other Comments Or Suggestions: Personally, I think the overall idea of "stay-positive" not only benefit the generated images detection. It would be helpful to discuss how the method might be extended to other media types (audio, video) or multimodal content. - Consider including a more detailed analysis of cases where Stay-Positive underperforms or fails, which could provide insights for future improvements - It would be beneficial to discuss the computational efficiency of the approach more explicitly, including additional training time and inference costs - The paper could be strengthened by exploring more diverse post-processing operations beyond compression and resizing, for example, random noise, etc. Questions For Authors: See "Other comments" part Ethics Expertise Needed: ['Other expertise'] Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **The paper could benefit from a more detailed analysis of potential limitations or failure cases of the proposed approach**\ In this work, we have analyzed two limitations in detail, both related to the network's potential to learn spurious fake features. In Section 6, Fig 7, we show that our improved version of Corvi can still associate upsampling artifacts with the fake distribution. Additionally, in Appendix A.4, we explain a way in which the neural network (from stage-1 in Fig 3) could have learned to associate the absence of certain features with fake images, such as the lack of WEBP compression. We hope this clarifies the drawbacks of our work. We kindly request the reviewer to highlight any specific limitations that could be further explored, and we are happy to address them. \ \ **It would be beneficial to discuss the computational efficiency of the approach more explicitly, including additional training time and inference costs** \ We thank the reviewer for raising these points. First, we would like to clarify that our method re-trains the final linear layer of the original network. Therefore, there are no additional inference costs, in comparison to the original method. We also compute the additional training time of our method (Rajan setting, only second stage), for which we use a batch size of 1024 on a single NVIDIA RTX A6000 machine. For optimal performance, we conduct stage-2 (Fig 3) re-training for 15 epochs which takes an additional 4h 8m 33s. Note that stage 1 takes 42h 5m 42s. **Personally, I think the overall idea of "stay-positive" not only benefit the generated images detection. It would be helpful to discuss how the method might be extended to other media types (audio, video) or multimodal content.**\ We agree with the reviewer’s point. 
While the scope of our current work is focused on fake image detection, we believe the core principle of ignoring features associated with the real distribution will apply to other forms of media forensics, such as video and audio. We will address this in the final version of the paper. **The paper could be strengthened by exploring more diverse post-processing operations beyond compression and resizing, for example, random noise, etc.**\ We thank the reviewer for raising this point; we have tested the sensitivity of our approach to JPEG compression, additive Gaussian noise, and low-pass filtering based on the suggestions from various reviewers. We use the same experimental setting from Sec 3.1, where we take real images from reddit (Desai et al., 2021) and fake images from Stable Diffusion 1.5. \ Link: https://imgur.com/a/HtYRCvO \ We notice that both our Corvi+ and our Rajan+ are very robust to post-processing operations. This shows that our detector can reliably detect fake images in the wild. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I do not have any further questions and will keep my score.
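The stage-2 re-training discussed in the rebuttal above (only the final linear layer is re-fit, on frozen backbone features, with its weights constrained to stay non-negative) could be sketched as a toy logistic regression. Everything below — the feature layout, data, and hyperparameters — is an illustrative assumption, not the authors' implementation:

```python
import math

def stage2_retrain(features, labels, epochs=300, lr=0.5):
    """Toy 'stay-positive' stage 2: re-fit a logistic last layer on
    frozen backbone features, clamping every weight to >= 0 after each
    update so only fake-evidence features can raise the score."""
    n, d = len(features), len(features[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * d, 0.0
        for x, y in zip(features, labels):
            z = sum(wj * xj for wj, xj in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(fake)
            for j in range(d):
                grad_w[j] += (p - y) * x[j] / n
            grad_b += (p - y) / n
        w = [max(wj - lr * g, 0.0) for wj, g in zip(w, grad_w)]  # clamp >= 0
        b -= lr * grad_b
    return w, b

# feature 0: a generator artifact (fires only on fakes)
# feature 1: a spurious "real" cue (fires only on reals)
X = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
y = [1.0, 1.0, 0.0, 0.0]
w, b = stage2_retrain(X, y)
# w[0] ends positive (fake evidence kept); w[1] is clamped to 0, so the
# spurious "real" cue can never vote an image toward "real"
```

As the rebuttal notes, because only the last layer changes, inference cost is identical to the original detector; the clamp is the only difference from plain logistic-regression fine-tuning.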
Summary: This paper proposed an algorithm designed to constrain the focus of detectors to generative artifacts while disregarding those associated with real data. This method will help the model reduce susceptibility to spurious correlations and enhance robustness.
## update after rebuttal
The authors have addressed some of my concerns, but I still have concerns about the quality of the paper, so I chose to give a weak rejection. First of all, what confuses me the most is that the authors compared with an important existing method, UFD, but, as we all know, UFD also proposed a very influential public dataset, UniversalFakeDetect, which includes 19 test settings; the authors did not conduct experiments on UniversalFakeDetect but only on a subset, CNN-generated images. Secondly, Supplementary Material is Supplementary Material, and Appendix is Appendix. There is a special submission window for Supplementary Material, and authors can choose to provide their code, demonstration videos, or other materials. Of course, I read the Appendix provided by the authors, but this paper does NOT provide Supplementary Material. In addition, the authors did not provide important ablation experiments at the beginning, and the newly added ablation experiment part cannot fully show the effectiveness and robustness of the proposed method. Finally, although these will not affect my final score, I hope the authors can further improve the quality of writing. There are only ten quotation marks in the whole paper, and the authors only need to search to find at least two errors (misuse of quotation marks: ”real” (line 402) and typographical errors: While most regions of such images are“real” (line 368)). It is best for the authors to read the paper completely from beginning to end. Claims And Evidence: The authors' point that the distribution of real samples is harmful to detectors may need further discussion and verification.
Many papers such as DIRE [1] and [2] clearly state the benefits of positive samples and even think about the problem of generated image detection from a perspective similar to anomaly detection. [1] Wang, Zhendong, et al. "Dire for diffusion-generated image detection." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [2] Li, Jiaming, et al. "Frequency-aware discriminative feature learning supervised by single-center loss for face forgery detection." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: The benchmarks provided by this paper are partially missing. Supplementary Material: None. Relation To Broader Scientific Literature: This paper may provide a new perspective in the field of generated image detection. Essential References Not Discussed: No. Other Strengths And Weaknesses: Advantages: - The motivation of this paper is clear and the authors chose a straightforward but effective method to achieve the goal. - The charts related to the experiments in the paper are relatively clear, and the organization of the charts is logical. Disadvantages: - Some of the images in the paper are not clear. Please provide images in PDF format if possible. Other Comments Or Suggestions: None. Questions For Authors: - There are many larger benchmarks in the field of fake image detection, for example in DIRE [1] and UFD [3] there are larger benchmarks than the one in this paper, but the authors did not conduct a complete test. [3] Ojha, Utkarsh, Yuheng Li, and Yong Jae Lee. "Towards universal fake image detectors that generalize across generative models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. - The robustness tests in fake image detection also include the effects of JPEG compression and Gaussian noise. The authors should provide more robustness experimental tests. 
- This paper does not provide ablation experiments to analyze the impact of different modules and parameters on the experimental results. - The writing of this paper needs further improvement, for example, some quotation marks are incorrectly written - Please unify the format of references. At least ensure that the citation formats of conferences and journals are consistent. - In the paper, the authors show many examples that perform better than other methods. It would be better if the authors could show some visualization samples where the model fails to classify and analyze the reasons. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **The authors' point that the distribution of real samples is harmful to detectors may need further discussion and verification. Prior works show the benefits of “positive samples”** \ We would like to make some clarifications. We are unsure of the exact meaning of 'positive samples,' but we assume it refers to real images. Our paper does not suggest that using real images for training harms the detector. In fact, we also incorporate real images in our training process. The key finding of our work is that associating specific patterns with the real distribution can often be spurious, which undermines the detector’s reliability. \ \ We provide concrete evidence for this claim. In Section 3.1, we show that the real distribution may contain unknown spurious correlations. Section 3.2 demonstrates that a detector trained to distinguish real images from LDM-generated ones can mistakenly associate certain features with real images. However, these same 'real' features appear in FLUX-generated images, proving the unreliability of 'real features' learned by discriminative models. Our results in Section 5 confirm this, as also noted by other reviewers (JDtd, GtjV, Qhvq). \ \ The DIRE detector suggested by the reviewer supports our argument by associating JPEG compression with real images, leading to spurious correlations, as discussed in AEROBLADE (Ricker et al., 2024) Section 9. Similarly, Work [2] attempts to reduce intra-class variability in real images but remains susceptible to the same issue. As shown in Section 3.2, the features defining 'realness' relative to a generator are often spurious. \ \ **The benchmarks provided by this paper are partially missing. There are larger benchmarks than the one in this paper.**\ We seek clarification on what the reviewer means by “partially missing.” We assume this refers to the claim of not testing our method on popular public benchmarks. However, we respectfully disagree. 
As detailed in Appendix A.3, we evaluate our model on GenImage, a widely used, modern benchmark for fake image detection. Our GAN-based detector is also tested on the established CNNDet benchmark (line 329, Appendix A.5.2). While our primary benchmark is the publicly available dataset from Rajan et al. (2024), chosen for its inclusion of images from recent generators, we have also tested our LDM-based detector on the autoregressive/diffusion models from the UFD benchmark, as suggested by the reviewer; the results can be found at https://imgur.com/a/eLTN1Kv. Our results show that the stay-positive algorithm improves detector performance across various unseen generators, hopefully addressing the reviewer's concerns.\ \ **The robustness tests in fake image detection also include the effects of JPEG compression and Gaussian noise. The authors should provide more robustness experimental tests.**\ Please refer to the response to reviewer JDtd.\ \ **Include Ablations**\ We assume the reviewer refers to ablating other design choices for stay-positive. To do so, we conducted two ablations: (i) clamping detector weights without re-training, and (ii) re-training the entire network while clamping only the last layer. Results: https://imgur.com/a/H8fuyIZ. Our results (similar to Table 2) show that clamping without re-training leads to suboptimal performance due to improper reweighting of fake features. Training the entire backbone while clamping the final layer underperforms on FLUX images, likely due to newly learnt spurious fake features. We hope this clarifies the reviewer's concerns. **The writing of this paper needs further improvement, for example, some quotation marks are incorrectly written**\ We are happy to refine the writing, but request that the reviewer specify which parts need improvement. The example regarding incorrect quotation marks is unclear; could the reviewer point to the relevant line or section?
**Supplementary Material: None.**\ We respectfully point out that this statement by the reviewer is incorrect. Our Appendix includes five sections that thoroughly discuss various details, and this has been acknowledged and confirmed by the other reviewers. **Please unify the format of references.**\ We thank the reviewer for the suggestion and will unify the citation formats in the final version.\ \ **It would be better if the authors could show some visualization samples where the model fails to classify and analyze the reasons.**\ In the Limitations section (Fig. 7), we show that our Corvi+ detector struggles with upsampled real images due to reliance on spurious features. Additionally, our models (Corvi+, Rajan+) fail to detect fake images from generators entirely different from the training distribution. Here are some qualitative examples: Firefly images (Detector trained on LDM images cannot detect these): https://imgur.com/a/GagThLg \ Real Images and Upsampled Versions (Corvi, Corvi+ can detect the original 512x512 ones but cannot detect the upscaled 1024x1024 ones): https://imgur.com/a/MHX3Py0
Info-Coevolution: An Efficient Framework for Data Model Coevolution
Accept (poster)
Summary: The paper addresses the challenge of high annotation costs and inefficiency in training models on growing datasets by proposing a framework for online selective annotation that co-evolves data and models. The method combines information gain estimation (model uncertainty and dataset locality via nearest-neighbor analysis) with Bayesian prediction fusion to merge model and data-derived predictions, reducing bias, and dynamic rechecking to update sample priorities post-annotation for class balance. Using efficient approximate nearest-neighbor search (HNSW), it scales logarithmically, enabling million-scale dataset handling. Key results include achieving lossless ImageNet-1K performance with 32% fewer annotations (68% total) and 50% with semi-supervised learning, while generalizing across datasets (CIFAR, SVHN) and architectures (ViT, ResNet). The framework integrates public data (e.g., LAION-400M) to enhance performance with minimal unlabeled data and incurs low overhead (~10 GPU hours on ImageNet). It outperforms coreset selection and active learning by avoiding distribution bias and automates annotation halting when gains plateau. Claims And Evidence: Yes Methods And Evaluation Criteria: To me, the proposed method is more like a hybrid active learning (AL) methods combining uncertainty-based AL (in this method, based on the decision boundary of a pretrained backbone) and diversity-based AL (in this method, based on the nearest neighbor data similarity) for information gain estimation. Therefore, I cannot read this work as a new paradigm for AL. It's more like an improvement work for hybrid AL. Regarding the method design, information gain has been widely recognized as an effective criterion for AL. The Bayesian Prediction Fusion of data similarity and model prediction is reasonable to me. It provides to some extent new insights for AL community towards the data-agnostic and model-agnostic AL. 
Experiments on ImageNet, CIFAR-10, CIFAR-100, StanfordCars, Food101, and SVHN are conducted. These are commonly used active learning datasets, so the classification-based AL experiments are generally adequate. However, only the image classification task is evaluated. Showing results on segmentation/detection tasks would be more convincing. Theoretical Claims: I checked Theorem 3.1, and it looks correct. Experimental Designs Or Analyses: The paper only compares with one AL method, DQ (coreset-based), in Fig. 6. More AL methods could be compared, from the early entropy-based methods to recent AL methods. I understand the proposed method is both data-agnostic and model-agnostic, but it'd be better to include more AL methods to comprehensively evaluate its effectiveness. More experiments on different semantic understanding tasks could be considered. Supplementary Material: I checked the proof and it should be correct. Relation To Broader Scientific Literature: The proposed method would benefit ML training by reducing the annotation and training costs on large datasets. Essential References Not Discussed: No Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer i2qw for the suggestions as well as the potential improvements. We make responses as follows.

**Q1: The paper only compares with one AL method, DQ (coreset-based), in Fig. 6. More AL methods could be compared, from the early entropy-based methods to recent AL methods. I understand the proposed method is both data-agnostic and model-agnostic, but it'd be better to include more AL methods to comprehensively evaluate its effectiveness. More experiments on different semantic understanding tasks could be considered.**

Thanks for the suggestions. We attach more baselines here and extend our algorithm to the semantic segmentation task.

### **More baselines**

ImageNet-1k with ViT supervised training from 1% data to 10% data:

|Method|step|Acc|Cost|
|---|---|---|---|
|MASE|9%|80.0|1 training + selection|
|MASE|10k|80.2|12 training + selection|
|BASE|9%|79.7|1 training + selection|
|BASE|10k|80.2|12 training + selection|
|Partial BADGE|9%|78.9|1 training + selection|
|Partial BADGE|10k|80.4|12 training + selection|
|Info-Coevolution|9%|**80.2%**|**1 training** + selection|
|Info-Coevolution|~2.25%|**80.5%**|**4 training** + selection|

**Analysis:**
- Previous active learning methods require training for each step. At the same number of steps, Info-Coevolution has the best performance with a better time cost. Given the full budget, the other baseline methods have much higher costs ($O(n^2)$ for these step-wise methods) and still perform worse than Info-Coevolution.
- Moreover, these methods are hard to scale further (due to $O(n^2)$ training complexity), while Info-Coevolution can directly scale from 120k samples to 870k samples without an intermediate training step.
- Additionally, Info-Coevolution doesn't introduce any data distribution problems, so we are able to do continual training. We have additionally verified that continual training from previous checkpoints incurs no performance loss.
(1% data for 50 epochs, then continual training on the selected 10% data for 45 epochs, then continual training on 68% data for 40 epochs.) The training cost of our algorithm can be O(n) in total for the selected data.

**Conclusion:**
- Info-Coevolution has better performance and much better scalability than previous baselines.

### **Semantic segmentation**

We here extend our algorithm to semantic segmentation and attach results on ADE20k with UperNet (BEiT-v2 backbone). The UperNet has a backbone (BEiT-v2-large) and a UperHead. We use the backbone feature from the last layer (dim 1024\*16\*16) and mean-pool it to dim 1024. We adapt Info-Coevolution accordingly, using the average pixel-wise confidence as the model confidence, the class-wise similarity-weighted confidence as the kNN confidence, and $gain=1-avg(confidence_{model},confidence_{knn})$. The result of using the model trained on 1% of the data to select 10% of the data is as follows:

**ADE20K 10% random**

|aAcc|mIoU|mAcc|
|---|---|---|
|82.19|46.89|58.72|

**ADE20K 10% our selection (using BEiT feature)**

|aAcc|mIoU|mAcc|
|---|---|---|
|82.84 **$\uparrow$0.65**|48.39 **$\uparrow$1.50**|60.81 **$\uparrow$2.09**|

**Analysis:**
- It can be seen that **aAcc, mIoU, and mAcc are improved by 0.65, 1.50, and 2.09 respectively**, which is significant at this data amount. (For larger data ratios we are still running experiments and will update the results in later updates. We will add thorough experiments in the revision. The code for this part will also be published.)

**Conclusion:**
- Info-Coevolution is generalizable to more tasks, as analyzed.
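To make the adapted gain concrete, here is a minimal Python sketch of the per-sample score $gain = 1 - avg(confidence_{model}, confidence_{knn})$. This is our own illustration, not the authors' released code; in particular, the exact similarity weighting inside the kNN confidence is an assumption.

```python
import numpy as np

def knn_confidence(labels_nn, sims_nn, query_label):
    """Similarity-weighted kNN confidence: the fraction of neighbour
    similarity mass that agrees with the queried class.
    labels_nn: (k,) neighbour labels; sims_nn: (k,) cosine similarities."""
    w = np.clip(sims_nn, 0.0, None)
    if w.sum() == 0:
        return 0.0
    return float(w[labels_nn == query_label].sum() / w.sum())

def annotation_gain(model_probs, labels_nn, sims_nn):
    """gain = 1 - avg(conf_model, conf_knn), following the rebuttal.
    model_probs: (C,) softmax output for one sample (for segmentation,
    the pixel-wise averaged softmax, as described above)."""
    pred = int(np.argmax(model_probs))
    conf_model = float(model_probs[pred])
    conf_knn = knn_confidence(labels_nn, sims_nn, pred)
    return 1.0 - 0.5 * (conf_model + conf_knn)
```

A confidently predicted sample whose neighbours agree with the prediction gets a low gain and is deprioritized for annotation.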
Summary: This paper points out the current issues of active learning. Traditional active learning methods select informative data points for annotation but suffer from high computational costs, frequent model retraining, and bias in uncertainty-based selection. Considering these, the authors propose a novel framework called Info-Coevolution, a model-data fusion coevolution model that integrates: (1) Bayesian information gain estimation, to evaluate how much information a sample contributes to model improvement; (2) kNN approximation with HNSW, to measure entropy and confidence without model retraining; and (3) Bayesian fusion, to combine model confidence and data-driven uncertainty for more robust sample selection. The authors then test their model on CIFAR-10, CIFAR-100, and ImageNet-1K, where Info-Coevolution maintains full model performance at 68% of the annotation cost, and at 50% with semi-supervised learning. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: For the Bayesian information gain estimation, no proof shows how kNN-based entropy estimation approximates the true situations or the model-based entropy estimates (approximation error). Experimental Designs Or Analyses: The baseline selection is too limited, including only standard baselines like random and coreset. Stronger baselines like BADGE (which incorporates uncertainty- and diversity-based measures and is comparable) could be considered. Supplementary Material: I didn't check it. Relation To Broader Scientific Literature: There is no discussion of Bayesian fusion vs. standard active learning work that combines model- and data-driven approaches, such as BADGE and Batch Active Learning at Scale. Essential References Not Discussed: See previous sections. [r1] Ash, Jordan T., et al. "Deep batch active learning by diverse, uncertain gradient lower bounds." arXiv preprint arXiv:1906.03671 (2019). [r2] Citovsky, Gui, et al. "Batch active learning at scale."
Advances in Neural Information Processing Systems 34 (2021): 11933-11944. Other Strengths And Weaknesses: Strengths: 1. It reduces the re-training cost; 2. It reduces annotation costs by 30-50% while maintaining performance. Weaknesses: 1. Should add stronger baselines; 2. No ablation study to show the effectiveness of (1) Bayesian fusion, e.g., Bayesian fusion vs. direct calibration (e.g., temperature scaling), Bayesian fusion vs. Monte Carlo Dropout-based uncertainty estimation, or removing this component; (2) kNN approximation vs. true model entropy estimation (which can be computed via full retraining). Other Comments Or Suggestions: Should use more space to describe Bayesian fusion. Line 025-028: "For real-world datasets like ImageNet-1K, Info-Coevolution reduces annotation and training costs by 32% without performance." is not a complete sentence. Questions For Authors: See Section "Other Strengths And Weaknesses". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer 9pQu for the suggestions as well as the potential improvements. We make responses as follows.

**Q1: Adding references and baselines**

**A1**: Thanks for the suggestions. We attach more baselines here and will add the corresponding references.

ImageNet-1k with ViT supervised training from 1% data to 10% data:

|Method|step|Acc|Cost|
|---|---|---|---|
|MASE|9%|80.0|1 training + selection|
|MASE|10k|80.2|12 training + selection|
|BASE|9%|79.7|1 training + selection|
|BASE|10k|80.2|12 training + selection|
|Partial BADGE|9%|78.9|1 training + selection|
|Partial BADGE|10k|80.4|12 training + selection|
|Info-Coevolution|9%|**80.2%**|**1 training** + selection|
|Info-Coevolution|~2.25%|**80.5%**|**4 training** + selection|

**Analysis:**
- Previous active learning methods require training for each step. At the same number of steps, Info-Coevolution has the best performance with a better time cost. Given the full budget, the other baseline methods have much higher costs ($O(n^2)$ for these step-wise methods) and still perform worse than Info-Coevolution.
- Moreover, these methods are hard to scale further (due to $O(n^2)$ training complexity), while Info-Coevolution can directly scale from 120k samples to 870k samples without an intermediate training step.
- Additionally, Info-Coevolution doesn't introduce any data distribution problems, so we are able to do continual training. We have additionally verified that continual training from previous checkpoints incurs no performance loss. (1% data for 50 epochs, then continual training on the selected 10% data for 45 epochs, then continual training on 68% data for 40 epochs.) The training cost of our algorithm can be O(n) in total for the selected data.

**Conclusion:**
- Info-Coevolution has better performance and much better scalability than previous baselines.

**Q2: Ablation study to show the effectiveness of (1) Bayesian fusion, (2) kNN approximation**

**A2**: Thanks for the question.
We make a clarification here and ask the reviewer to clarify some points so that we can discuss them further in the next response. We have conducted part of this ablation in Sec. 4.3, including kNN-only, model-only, fused, and their combinations with dynamic rechecking. The Data column refers to kNN-only. It can be seen that dynamic rechecking is the component that guarantees an unbiased distribution when incorporating model-based gain estimation (model estimation without dynamic rechecking lowers performance), while kNN-only without dynamic rechecking already improves performance. For the calibration, which calibration method (e.g., temperature scaling) is being referred to? Could you give a reference so we can be clearer and add a comparison? Monte Carlo Dropout-based uncertainty estimation introduces random noise in the feature space, which is somewhat similar to our aggregating of information from nearby samples. But the differences are: 1. our method does not have to run inference multiple times; 2. as a model-based method, Monte Carlo Dropout-based uncertainty estimation estimates the model optimization space. It can still suffer from distribution problems, as the dropout does not consider the data distribution (it is more about the local curvature of the model). For kNN approximation vs. true model entropy estimation, our weighted kNN-based method does not introduce much distribution bias. Incorporating model-based estimation usually does (as observed by lower performance than randomly selected data), and this needs to be fixed by dynamic rechecking or other distribution trimming methods.
**Q3: For the Bayesian information gain estimation, no proof shows how kNN-based entropy estimation approximates the true situations or the model-based entropy estimates (approximation error).**

**A3**: We would first like to clarify the point: is the reviewer referring to the problem that kNN-based or model-based entropy estimation could differ from the real entropy? We add some preliminary discussion here and will update it in the next reply based on the reviewer's feedback. Empirically, the model confidence (defined in Section 3) is linearly correlated (correlation above 0.95) with sample prediction accuracy on both the training and validation data. We use this probability proxy instead of the real entropy (as mentioned in Sec. 3) because it preserves this observed linearity instead of introducing a log function with an unstable bound; thus, when the linearity holds, the kNN prediction can help to decide the sample priority. We also added some discussion in A1 to Reviewer aJ9B about our Lipschitz constant assumption, which is somewhat related. We are open to further discussions.

---

Rebuttal Comment 1.1: Comment: I mean, your kNN-based entropy estimation implicitly assumes that local neighborhoods in the feature space reflect the model's predictive behavior; therefore, aggregating confidence from nearby samples can replace retraining-based entropy or Monte Carlo-based uncertainty estimation, right? So, considering it as a confidence estimator for unlabeled samples, the authors should (1) compare kNN-based entropy vs. true model entropy (e.g., computed by retraining the model and measuring entropy) and (2) compare with entropy from multiple dropout inferences (MC Dropout), Deep Ensembles, Expected Calibration Error (ECE), and Trust Score (To Trust Or Not To Trust A Classifier, Jiang et al. 2018).

---

Reply to Comment 1.1.1: Comment: Thanks for clarifying the points. We are now clearer on the questions and respond as follows.
First, we would like to clarify that Info-Coevolution directly estimates the gain of labeling a sample in the confidence space without calculating entropy explicitly (it takes advantage of the linearity discussed in Section 3, while the entropy can still be approximately calculated according to Section 3.4). Using the confidence as a proxy only requires keeping the most confident prediction and the corresponding confidence. Second, as stated in Sec. 3, Info-Coevolution is designed to conduct sample selection and predict the information gain of labeling/learning a sample, rather than to be a confidence estimator for a single sample. The information gain of a sample (not the "true entropy") is estimated with respect to a model and the data distribution, which is quite different from confidence estimators defined on a model and a single sample, so the two are not fully comparable. However, we found that Bayesian Fusion can actually act as a model calibration method orthogonal to model-based ones like temperature scaling.

**How kNN-based dynamic-rechecking prediction approximates the retrained model's prediction**

We evaluate the effectiveness of dynamic rechecking (with kNN-based prediction + Bayesian Fusion) in approximating model retraining with the following setting: we start from a ViT model trained with our 5% ImageNet-1K data at validation accuracy 75.8% and extend the data to 7%. For samples updated by dynamic rechecking with an updated gain larger than 0.1, the **cosine similarity** with the retrained model's gain estimation ($1-confidence$) is 0.82479. This is a fairly high correlation, as we use k=8 (so the kNN prediction has a granularity of about 0.125). A higher estimated gain also shows a higher cosine similarity (0.88 for gain > 0.5). This suggests that dynamic rechecking can efficiently and effectively approximate model retraining for predicting high-gain samples.
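The agreement check described above can be sketched as follows; this is our illustration of the evaluation, where `gain_recheck`, `gain_retrain`, and the 0.1 threshold follow the setting in the text.

```python
import numpy as np

def gain_agreement(gain_recheck, gain_retrain, thresh=0.1):
    """Cosine similarity between dynamic-rechecking gain estimates and
    the gains of an actually retrained model (1 - confidence),
    restricted to samples whose rechecked gain exceeds `thresh`."""
    g1 = np.asarray(gain_recheck, dtype=float)
    g2 = np.asarray(gain_retrain, dtype=float)
    mask = g1 > thresh
    a, b = g1[mask], g2[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```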
**Discussion on Model Calibration Works**

As stated in the second point above, the gain estimated by Info-Coevolution's Bayesian Fusion differs from model calibration (confidence gain with respect to a model and a dataset vs. confidence of a model on a single sample). These model calibration methods can potentially be applied to enhance the model-confidence-estimation part of Info-Coevolution, at a corresponding cost.

Table of Comparison

|Method|Purpose|Cost|Range of Confidence|
|---|---|---|---|
|MC Dropout|Model Calibration|Inference*M|[0,1] with std|
|Deep Ensemble|Model Calibration|Training*M|[0,1]|
|Temperature Scaling|Model Calibration|Inference*1 + one-parameter training on validation set|[0,1]|
|Trust Score|Model Calibration|Training data inference*1 + kNN|[0,$\infty$)|
|Info-Coevolution|Estimate Sample Annotation Gain|Training data inference*1 + kNN|[0,1]|

In terms of cost and confidence ranges, MC Dropout and Deep Ensemble introduce a much larger cost than our original algorithm, and Trust Score's confidence range is incompatible. Due to these factors and the time limit, we first test whether Temperature Scaling helps to give a better initial confidence, and whether Bayesian Fusion remains effective with it.

**Effectiveness of Bayesian Fusion on Confidence Estimation**

**Setting:** For a ViT model trained with our 5% ImageNet-1K data, the Expected Calibration Error (ECE) of the model prediction and of Bayesian Fusion is (lower is better):

| |Model|Bayesian Fusion|Temperature Scaling|Bayesian Fusion with Temperature Scaling|
|---|---|---|---|---|
|ECE|0.310|**0.275**|0.181|**0.171**|

**Analysis:** Bayesian Fusion better estimates the real model confidence, as it has a lower Expected Calibration Error. Both with the original model confidence and with temperature scaling, Bayesian Fusion improved the Expected Calibration Error.
**Conclusion:** Though Bayesian Fusion is designed for gain estimation, it can also improve model confidence estimation as a method orthogonal to model calibration methods.

**Update:** We further investigate the effect of incorporating a model calibration method into the Info-Coevolution framework. When selecting an additional **2%** of ImageNet data using the ViT model trained with our 5% ImageNet-1K data, the calibrated model (by temperature scaling) can further improve our performance:

|Data Amount|Ours (previous)|Ours + Temperature Scaling|
|---|---|---|
|5%->7%|78.0|78.9 ($\uparrow$0.9)|
|7%->10%|80.5|80.5|
|10%->50%|85.1|85.0|

**Analysis:**
- The calibrated model can provide a better initial confidence estimation **when the model is not yet good enough**. It improves the linearity of our framework's gain/confidence prediction and can benefit sample selection. When the model is good, temperature scaling has a negligible benefit on model confidence estimation and sample selection.
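For reference, the Expected Calibration Error reported in the tables above is the standard binned estimate; a minimal sketch follows (the bin count of 15 is our assumption, as it is not stated in the rebuttal).

```python
import numpy as np

def expected_calibration_error(confs, correct, n_bins=15):
    """Binned ECE: the bin-size-weighted average of
    |accuracy - mean confidence| over equal-width confidence bins."""
    confs = np.asarray(confs, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confs > lo) & (confs <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - confs[in_bin].mean())
    return ece
```

Here `confs` holds the per-sample confidence (from the model, temperature-scaled model, or Bayesian Fusion) and `correct` holds 0/1 prediction correctness.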
Summary: The paper introduces Info-Coevolution, a framework for selective data collection which aims to improve data annotation efficiency. It proposes strategies to estimate information gain by leveraging Bayesian principles, and also uses ANN structures to help achieve efficient data selection with minimal computational overhead, which reduces the need for frequent model updates over the selection process. The framework was benchmarked against conventional active learning and coreset selection methods, and was shown to reduce annotation costs while maintaining model performance. Claims And Evidence: The claim that Info-Coevolution can reduce annotation and training costs without compromising model performance was also support by benchmarks on various datasets, where it was compared with other baseline methods. The ablation experiments also further validated that each component of the methodology contributes to the performance of the framework. Methods And Evaluation Criteria: Yes, the methodology introduced tackles the task of improving annotation efficiency while maintaining the model performance. Benchmarks were performed on well known datasets such as ImageNet-1k, CIFAR-10/100 etc to showcase the method’s effectiveness compared to other baseline methods. The authors also evaluated both the supervised and semi-supervised learning settings, covering different training paradigms. Theoretical Claims: Yes, the proof for Theorem 3.1 is correct, however it should be noted that it is made under the assumption that the function g is Lipschitz continuous. While most common used neural networks exhibit Lipschitz continuity, the Lipschitz constant may be difficult to compute rigorously and large in practice. Hence the bound may not be tight in real world settings, a limitation that should be addressed. 
Experimental Designs Or Analyses: The experimental designs and analyses were sound; they were performed on well-known benchmark datasets and compared with established baselines under supervised and semi-supervised settings, with clearly defined annotation metrics such as accuracy improvements across different annotation ratios and annotation efficiency. The ablation experiments also served to demonstrate that each component of the framework contributes to the performance improvements. However, some limitations would include that the benchmarks only contained image/computer vision datasets and the models used were mostly limited to ViTs and ResNets. The set of experiments does not demonstrate the generalizability of the method to other data modalities or models, and expanding that evaluation would make the analysis stronger. Supplementary Material: Yes, the appendix provided a proof of Theorem 3.1 (which was theoretically correct). It also provided more information about the experiment setup and data selection methodology. Relation To Broader Scientific Literature: The contributions are related to several areas in ML, notably active learning and coreset selection. Traditional active learning methods focus on selecting the most informative samples for annotation, usually based on model uncertainty. Info-Coevolution additionally integrates distribution awareness of the data for selection, which aids in addressing common issues in active learning such as bias and distribution shifts. It also builds upon coreset selection ideas where representative subsets are chosen to improve training. But unlike traditional coreset methods, which are often static, Info-Coevolution filters data dynamically and in an online manner, making the process more efficient.
Lastly, the methodology extends ideas from information theory, using information gain based selection, but incorporates additional information such as data similarity and using a bayesian prediction function to estimate the information gain for each datapoint more accurately. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The methodology is novel by integrating several ideas in a unified framework - The benchmarks have demonstrated the effectiveness of the method and the savings it can achieve without compromising model performance, which is key for real world applications - The experiments were clear and ablation study helped to break down the components of the approach Weaknesses: - The authors mention in the limitations section that their experiments mainly considers cases where the training and target distributions are the same, the method claims to be able to address data distribution shifts however it was not shown in the benchmarks - Other concerns/weaknesses have been pointed out in the comments above Other Comments Or Suggestions: There is a typo in the abstract where a sentence is incomplete: > … Info-Coevolution reduces annotation and training costs by 32% without performance. Questions For Authors: The paper does not provide a detailed analysis of the hyperparameters used in the framework (such as the distance and similarity thresholds). Could the authors give more insight into finding the optimal values, and how adjusting them might affect the method’s performance. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer aJ9B for the recognition and appreciation of our work, and for the valuable questions as well as comments. For the comments and questions, here are our responses:

**Q1: The Lipschitz constant may be difficult to compute rigorously and large in practice.**

**A1**: Thanks for the good question. Previously we used Theorem 3.1 to support our introduction of locality, but it is true that the Lipschitz constant is not tight enough to be used in many real settings. We here discuss a tighter bound and the overall reasonableness. For a single linear layer ($g$) with weight $W \in \mathbb{R}^{d\times C}$ and bias $b$, its Lipschitz constant is bounded by the spectral norm of $W$, which is $\lVert W \rVert_2$. However, for Gaussian weights drawn from $N(0,\sigma^2)$, this can still be of order $O(\sigma(\sqrt d + \sqrt C))$. In practice, this is still quite large, and it only serves as a worst-case bound. There is a tighter bound using the local gradient, assuming smoothness (which holds for most types of $g$):
$$\lVert g(z_1)-g(z_2)\rVert_2 \leq \sup_{\lambda \in [0,1]} \lVert\nabla g(z_\lambda)\rVert_2 \cdot \lVert z_1-z_2\rVert_2 \leq \sup_{\lambda \in [0,1]} \lVert \nabla g(z_\lambda)\rVert_2 \cdot\epsilon,$$
where $z_\lambda = (1-\lambda)z_1 + \lambda z_2$ and $\epsilon \geq \lVert z_1-z_2\rVert_2$. Empirically, the linearity mainly takes effect when the predictions are reasonably good, because:
- On one hand, when the prediction is poor, the neighbours can be more random, and the kNN prediction will itself give high entropy, as will the model; this does not contradict the purpose of our algorithm (the corresponding sample would very likely have high entropy and gain, and the theoretical guarantee of the kNN prediction comes from the value of k);
- On the other hand, when the prediction is good (e.g., p>0.7 or higher) in the region, the local gradient is smaller, and the softmax backward propagation further suppresses the bound, since $\lVert p-y_t\rVert \leq 1$.
So in cases where the local gradient might fail to give a reasonable bound, the sample itself would very likely have high entropy and be correctly estimated as high-entropy by both the model and kNN, so the kNN estimation error does not matter much. The algorithm's actual reliance on this linearity is thus relaxed in the bad cases by design, and for regions with good predictions we can empirically evaluate the value.

**Q2: The method claims to be able to address data distribution shifts, however it was not shown in the benchmarks**

**A2**: The data distribution shifts in the article refer to the problem of model-based active learning: it emphasizes only samples learned poorly by the model, and this selection causes a distribution problem and worse performance. This is shown in the ablation, as dynamic rechecking and kNN-only are both free of this performance drop. We will revise the corresponding parts to make this clear.

**Q3: The set of experiments does not demonstrate the generalizability of the method to other data modalities or models, and expanding that evaluation would make the analysis stronger.**

**A3**: Thanks for the advice. We further add a semantic segmentation experiment on ADE20k with UperNet (BEiT-v2 backbone); see A2 to Reviewer X845. For modalities other than vision, if there is a sample-level feature, it is also possible to extend the framework to them. We will add more discussion of future work in the revision.

**Q4: Typo in the abstract**

**A4**: Thanks, we have fixed it in the local revision. It should be "… Info-Coevolution reduces annotation and training costs by 32% without performance loss".

**Q5: Hyperparameters used in the framework (such as the distance and similarity thresholds).
Could the authors give more insight into finding the optimal values, and how adjusting them might affect the method’s performance.** **A5**: The hyperparameters should depend on the feature space itself (how dense the data is in that space) and can be set empirically (retrieve some samples to estimate the kNN distance distribution and the kNN-prediction correlation, and choose an adequate $k$). Generally, $k = 8$ with cosine similarity 0.9 is good enough; cosine similarity 0.85 also works in this setting, while 0.95 is too strict, since very few near-neighbour pairs remain. Too small a distance threshold (too large a similarity threshold) may fail to reduce cost and degenerate into a model-based method (for dynamic rechecking; in fact, using 0.9 for kNN and 0.85 for dynamic rechecking is also a good choice), while too large a distance threshold may hurt the kNN prediction of sparse-region samples (their k nearest neighbours would be farther away and less linearly correlated, and such samples should be kept for the sake of generalization). In our actual use cases, the threshold value is not sensitive: cosine similarity 0.9 works directly without tuning, and cosine similarity 0.85 does not make a statistically significant difference.
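A hypothetical pure-Python sketch (the data and names are illustrative, not from the paper) of the empirical threshold check described above: sample feature pairs and see what fraction of pairs each cosine-similarity cutoff would keep as near neighbours:

```python
import math
import random

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def neighbour_pair_fraction(feats, threshold):
    """Fraction of sample pairs above the similarity threshold -- a cheap
    proxy for how many near-neighbour pairs a given cutoff would keep."""
    n, hits, total = len(feats), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            if cosine(feats[i], feats[j]) > threshold:
                hits += 1
    return hits / total

random.seed(0)
# Toy features: two tight clusters, so within-cluster pairs are very
# similar while cross-cluster pairs are nearly orthogonal.
base = {0: [1.0] * 16, 1: [1.0] * 8 + [-1.0] * 8}
feats = [[x + random.gauss(0, 0.1) for x in base[i % 2]] for i in range(20)]
for t in (0.85, 0.90, 0.95):
    print(t, neighbour_pair_fraction(feats, t))
```

A stricter threshold can only keep fewer pairs, which mirrors the trade-off described in A5.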
Summary: The paper presents **Info-Coevolution**, a framework aimed at enhancing the co-evolution of data and models through **online selective annotation**. The primary goal is to minimize annotation costs while preserving model performance by utilizing **Bayesian Prediction Fusion** and **data locality analysis** to assess the information gain of samples. This approach selectively annotates data, facilitating efficient dataset construction and model training. Key findings include: - **Reduced Annotation Costs**: Info-Coevolution achieves **lossless performance** on ImageNet-1K with only **68% of the annotation cost**, further reducing to **50%** with semi-supervised learning. - **Efficiency**: The framework incurs minimal computational overhead, completing the selection process in **1 minute** for million-scale datasets. - **Compatibility**: It is compatible with both supervised and semi-supervised learning, avoiding distribution shifts during continual training. The paper also investigates **retrieval-based dataset enhancement** using unlabeled open-source data, showing improved performance with additional unlabeled data. Claims And Evidence: The claims are generally well-supported: - **Reduced Annotation Costs**: Supported by experiments on ImageNet-1K, CIFAR-10/100, and other datasets, demonstrating comparable or superior performance with fewer annotations. - **Efficiency**: Backed by computational overhead analysis, indicating logarithmic scaling and rapid completion for large datasets. - **Compatibility with Semi-Supervised Learning**: Validated through experiments with Semi-ViT and Fixmatch, showing enhanced data efficiency. However, the claim of **generalizability to other tasks** is not fully substantiated, as experiments are confined to classification tasks. 
Methods And Evaluation Criteria: The methods and evaluation criteria are suitable for the problem: - **Bayesian Prediction Fusion** and **data locality analysis** are well-justified for estimating information gain and enhancing sample selection efficiency. - The use of **Approximate Nearest Neighbor (ANN)** structures like HNSW ensures scalability for large datasets. - Evaluation criteria (e.g., accuracy on ImageNet-1K, CIFAR-10/100) are standard benchmarks, facilitating comparison with prior work. The evaluation could be improved by including **additional tasks** and **real-world datasets** to demonstrate broader applicability. Theoretical Claims: The paper presents several theoretical claims: - **Theorem 3.1**: Asserts that predictions for nearby samples in feature space are similar under certain distance thresholds. The proof, provided in the appendix, appears correct but relies on assumptions (e.g., Lipschitz continuity) that warrant further discussion. - **Information Gain Estimation**: The extension to broader tasks is theoretically sound, but the derivation could benefit from more rigorous analysis, especially in multi-class settings. In general, while the theoretical claims are credible, they also appear somewhat trivial. Experimental Designs Or Analyses: The experimental designs are robust and validate key claims: - **ImageNet-1K Experiments**: Demonstrate lossless performance with 68% annotation cost, compatible with semi-supervised learning. - **Comparison with Coreset Selection**: Shows competitive performance with state-of-the-art methods. - **Generalization Across Datasets**: Consistent improvements in annotation efficiency on CIFAR-10/100, StanfordCars, and other datasets. However, experiments are limited to **image classification tasks**, lacking validation on **real-world industrial data** or **larger datasets**. The computational overhead, while low, remains significant for resource-limited settings. 
Supplementary Material: There is no provided Supplementary Material. Relation To Broader Scientific Literature: The paper builds on and extends prior work in: - **Active Learning**: Addresses limitations of traditional methods by integrating model-specific estimation with distribution awareness. - **Coreset Selection**: Improves upon existing methods by leveraging model-specific information without requiring fully annotated data. - **Semi-Supervised Learning**: Bridges the gap between fully supervised and weakly supervised approaches. The paper contributes to the literature by proposing a **more efficient and scalable approach** to data annotation and model training, with potential applications in resource-constrained settings. Essential References Not Discussed: The paper does not discuss **dataset distillation**, a highly relevant field that shares similarities with the goals of Info-Coevolution. Dataset distillation focuses on synthesizing a small, informative dataset that can be used to train models with performance comparable to training on the full dataset. This is conceptually aligned with Info-Coevolution's goal of reducing annotation costs while maintaining model performance. A key work in this area is: - **Wang et al. (2018)**: "Dataset Distillation" (arXiv:1811.10959). This paper introduces the concept of dataset distillation, where a small synthetic dataset is created to mimic the performance of a larger dataset. The methods and insights from this work could provide valuable context for Info-Coevolution, particularly in terms of reducing data requirements while preserving model performance. The omission of this reference is notable, as dataset distillation represents a complementary approach to the problem of efficient dataset construction and could enrich the discussion of related work in the paper. Including this reference would help situate Info-Coevolution within the broader landscape of data-efficient machine learning techniques. 
Other Strengths And Weaknesses: **Strengths:** - **Originality**: The integration of Bayesian Prediction Fusion, data locality analysis, and online selective annotation presents a novel and creative approach. - **Significance**: This framework addresses the critical challenge of reducing annotation costs in machine learning while maintaining performance. - **Clarity**: The paper is well-written with clear explanations of the methodology and results. **Weaknesses:** - **Limited Task Scope**: The experiments focus solely on image classification, and the method's potential application to other tasks, such as object detection and segmentation, is not investigated. - **Theoretical Depth**: The theoretical derivations require further expansion, particularly regarding limitations and assumptions. - **Omission of Dataset Distillation**: There is no discussion of dataset distillation, which is closely related to the goals of Info-Coevolution, aiming to synthesize a small, informative dataset that achieves performance close to training on the full dataset. This aligns with the goal of reducing annotation costs while maintaining model performance in Info-Coevolution. Other Comments Or Suggestions: - **Experiments**: Expand to encompass other tasks, such as object detection and segmentation, and incorporate real-world datasets. - **Limitations and Future Work**: Include a broader discussion on limitations and future directions, specifically regarding applicability to weakly supervised and non-classification tasks. - **Dataset Distillation**: Discuss its relationship with Info-Coevolution to strengthen contextual grounding and demonstrate awareness of related approaches. Questions For Authors: 1. **Generalizability**: Can Info-Coevolution be extended to tasks beyond image classification, such as object detection or segmentation? 2. **Theoretical Limitations**: While the theoretical claims are plausible, they seem somewhat elementary. 
Could the authors provide a more comprehensive theoretical analysis concerning deep neural networks, including aspects of optimization and generalization theory? 3. **Real-World Validation**: Has Info-Coevolution been evaluated on real-world industrial datasets or in online annotation environments? 4. **Dataset Distillation**: How does Info-Coevolution compare to dataset distillation methods, such as those proposed by Wang et al. (2018) and subsequent works? [1] Wang et al. (2018). "Dataset Distillation" (arXiv:1811.10959). Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer X845 for pointing out the missing references as well as the potential improvements. Our responses follow. **Q1: Dataset Distillation references not discussed.** **A1**: Thanks for the advice. In general, dataset distillation and active learning share one main overlapping research area, coreset selection (if full annotation is not required in advance, coreset selection can also serve as active learning), which is introduced in the related works. Dataset distillation also includes synthetic-data methods, which form another setting not comparable in this work. We will add a discussion of dataset distillation to the related works to make this clearer. **Q2: Task Scope beyond image classification such as object detection and segmentation, is not investigated.** **A2**: Thanks. We here extend our algorithm to semantic segmentation and attach results on ADE20K with UperNet (BEiT-v2 backbone). The UperNet has a backbone (BEiT-v2-large) and a UperHead. We use the backbone feature from the last layer (dim 1024\*16\*16) and mean-pool it to dim 1024. We adapt Info-Coevolution accordingly, using the pixel-wise average confidence as the model confidence, the class-wise similarity-weighted confidence as the kNN confidence, and $gain=1-avg(confidence_{model},confidence_{knn})$. The result of using a model trained on 1% of the data to select 10% of the data is as follows:

**ADE20K 10% random**

|aAcc|mIoU|mAcc|
|---|---|---|
| 82.19 | 46.89 | 58.72 |

**ADE20K 10% our selection (using BEiT feature)**

|aAcc|mIoU|mAcc|
|---|---|---|
| 82.84 **$\uparrow$0.65** | 48.39 **$\uparrow$1.50** |60.81 **$\uparrow$2.09**|

**Analysis:**
- It can be seen that **aAcc, mIoU, and mAcc are improved by 0.65, 1.50, and 2.09 respectively**, which is significant at this data amount. (For larger data ratios we are still running experiments and will update the results in later updates. We will add thorough experiments in the revision.
The code for this part will also be published.) **Conclusion:** - Info-Coevolution is generalizable to more tasks such as segmentation, as analyzed. **Q3: Theoretical Depth: requires further expansion, particularly regarding limitations and assumptions.** **A3**: Thanks for the advice. The current theoretical part supports our introduction of locality into the algorithm, which can greatly improve the efficiency of distribution-based entropy estimation methods and benefit efficient data re-balancing (dynamic data rechecking). We agree that the Lipschitz-continuity assumption deserves expansion; please refer to the discussion in reply A1 to reviewer aJ9B's Q1 (due to the character limit this year). We are open to further discussion. On the other hand, the current assumption is that the target distribution lies between IID and uniform, so any sample predicted as redundant by the framework does not affect the final classification result. We will attach a theoretical discussion of this argument in the next update. A limitation is therefore that the framework currently does not handle a target distribution outside the IID-to-uniform range; a solution could be adjusting the sampling frequency based on the target distribution. All the discussed parts will be added to the revision, and we are **open to further discussion** of the theoretical part. **Q4: Include a broader discussion on Limitations and Future Work (applicability to weakly supervised and non-classification tasks)** **A4**: Thanks. The semantic segmentation experiment is now added in A2. We will add a more detailed discussion in the revision. In general, the framework can be applied to any model with an $f$ that produces a feature for each sample (and could be used for a retrieval task). If mutual information can be approximated as in classification, that is a good usage case; even if not via a direct approximation, the ANN can still provide a distribution-based estimation (with a weighted mean of uncertainty).
Classification can be interpreted as a kind of coarse-grained retrieval. As for weakly supervised training, CLIP/BLIP-style schemes are retrieval tasks that directly train $f$ contrastively. Locality also takes effect there, and mutual information can be estimated in the feature space. So theoretically our framework is also applicable to weakly supervised training, and the re-annotation gain can be estimated for human/ChatGPT annotation to improve data quality. **Q5: Real-World Validation** **A5**: We have evaluated the method with an in-house data pipeline, and it achieves promising results.
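For concreteness, the gain rule used in the segmentation adaptation (A2) can be sketched in a few lines; the confidence values below are hypothetical, chosen only to illustrate the behaviour:

```python
def info_gain(conf_model, conf_knn):
    """Gain rule from the segmentation adaptation in A2: a sample is most
    informative when both the model and its kNN neighbourhood are unsure."""
    return 1.0 - (conf_model + conf_knn) / 2.0

# A confidently predicted sample yields low gain...
print(round(info_gain(0.95, 0.90), 3))  # -> 0.075
# ...while an uncertain one is prioritised for annotation.
print(round(info_gain(0.40, 0.30), 3))  # -> 0.65
```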
Towards Foundational Models for Dynamical System Reconstruction: Hierarchical Meta-Learning via Mixture of Experts
Reject
Summary: The authors introduce a novel Mixture of Experts (MoE) architecture for Dynamical Systems Reconstruction (DSR): Mixture of Expert Reconstructors (MixER). MixER employs a top-1 MoE strategy with a custom routing mechanism that enables unsupervised clustering and meta-learning across different DS. The manuscript explores the benefits and limitations of the approach by applying the method to various DSR meta-learning backbones across a plethora of different datasets. Claims And Evidence: “We propose an effective unsupervised routing mechanism for MoEs to collectively learn dynamical systems with various degrees of relatedness.” (p. 2): In my opinion, this claim lacks convincing evidence. The proposed method rarely outperforms baseline setups, i.e. MixER-$1$ and MixER-$N^{\dagger}$ with naive routing mechanism. Also, MixER seems to *prefer* highly heterogeneous data, i.e. where families of environments are clearly different, and struggles in cases of high relatedness (as pointed out by the authors). Methods And Evaluation Criteria: Methods and evaluation criteria seem mostly appropriate. However, including a benchmark dataset that encompasses chaotic dynamical systems (e.g. [1]) would further substantiate the method. Of course I acknowledge the real-world data (Epilepsy-2), which is a challenging dataset with signatures of chaotic dynamics. However, I think it would make sense to first validate the method on clear benchmarks, which do not necessarily face the problem of high ambiguity in the data. Theoretical Claims: There are no theoretical claims and therefore no proofs. Experimental Designs Or Analyses: All experiments and analyses seem valid. Supplementary Material: I glanced at the Appendix, but did not review the material in detail.
Relation To Broader Scientific Literature: The paper uses the concept of Mixture of Experts (MoE) to introduce a novel routing method, MixER, to learn across diverse families of dynamical systems, an essential ingredient for foundational models in this field. Current research mainly deals with meta-learning across different environments, which generally span different parameter settings (possibly across bifurcations) of a single dynamical system, but is generally unable to learn across (potentially) unrelated systems. Essential References Not Discussed: I am not aware of any essential references that need mentioning. Other Strengths And Weaknesses: **Strengths**: The paper tackles an unaddressed problem: extracting DSR models from heterogeneous data using a potentially highly interpretable approach, and it’s great that the method is tested on multiple benchmarks, real-world data, and different meta-learning backbones. **Weaknesses**: It is not clear under which circumstances (e.g. dataset + backbone combination) MixER with M > 1 experts consistently improves overall performance and in which cases performance is degraded, irrespective of the actual underlying number of families in the data at hand. Other Comments Or Suggestions: **Other Comments**: - Figure 5 axis labels and ticks are quite small and should be increased in size. I also don’t think Figure 5 bottom is strictly necessary; it does not really provide more information than the top row in my opinion. It would be more interesting to see heatmaps of MixER-20 (see also questions). - Figure 6 legend and axis labels are extremely small and hard to parse. **References** [1] Gilpin, W. (2023). Model scale versus domain knowledge in statistical forecasting of chaotic systems. Physical Review Research, 5(4), 043252. [2] Hess, F. et al. (2023). Generalized Teacher Forcing for Learning Chaotic Dynamics. In International Conference on Machine Learning (pp. 13017-13049). PMLR. Questions For Authors: 1.
How are experts selected if $M > F$? I.e. how interpretable is the architecture when the number of experts exceeds the number of families? Does MixER naturally converge to only use $\approx F$ experts, or does it in a sense “overfit” and potentially split environments from one family into two separate families? In that context, it would be nice to see the heatmap in Fig. 5 for MixER-20, too, at least for GEPS. 2. Do the authors have more insight into why MixER seems to prefer GEPS over NCF and CoDA as a backbone? Why does MixER lead to more specialization in the case of GEPS (Tab. 3, Fig. 5 top)? 3. Can the authors comment on the scalability of MixER in terms of the number of environments and experts? Computing the losses of all experts on all environments seems extremely demanding, especially if the dynamics under consideration become high-dimensional and the dynamics backbones become more computationally demanding themselves. I think this point is especially important in the context of foundation models. 4. ODEBench-X mainly encompasses fixed-point or cyclic dynamics. How does the algorithm (with an appropriate backbone) fare on a dataset of families of chaotic dynamics, e.g. the dataset used in [1]? Mere MSE losses might not be a suitable measure for expert routing, based on the fact that MSE is not necessarily a sufficient measure of DSR in those settings [1,2]. 5. “Clustering and routing analysis (Figure 9) shows that MixER logically partitions datasets into three subsets, but this partitioning limits each expert meta-learner’s exposure to the full dataset, potentially explaining the performance degradation despite clear cross-environment commonalities.” → But does this not show a fundamental problem with the approach? If this is the case, then what is the benefit of MixER in this setting? If we know the number of families, we’d seem to be better off training meta-learners on those families separately?
I think what MixER misses is some evidence that, if the number of experts is larger than the number of families in the dataset, MixER converges to use only the minimum necessary number of experts to explain the data. This way we would learn something fundamental about the data (if we do not know the families a priori, which is the general setting considered in the paper). 6. Validation loss dynamics of MixER seem quite irregular and "spiky" (Fig. 2 bottom). Can the authors comment on this? *I am happy to increase my score to accept if the authors can address my concerns*. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thorough and insightful evaluation. Your recognition of MixER as a novel approach is encouraging. We are grateful for the acknowledgment that MixER addresses an important and previously unaddressed problem in DSR and that our experiments are well-executed across multiple benchmarks and real-world datasets. We also appreciate your comments on the relevance of our work to foundational models and your endorsement of our experimental design and evaluation criteria. --- ### Q1. Expert Selection and Interpretability When $M>F$, MixER does not impose strict constraints on the number of experts used. While this may lead to splitting environments from the same "family", it may reflect nuanced differences rather than overfitting. As suspected by the reviewer, on ODEBench-10B, MixER splits the 10 families into subfamilies so that none of the 20 experts are unused (see https://anonymous.4open.science/r/MixER/assets/odebench_heatmaps_20M.png). A similar pattern is visible in Figure 9. Our new experiment on chaotic systems (see Q4) reveals that up to three experts might be necessary, even when only $F=2$ families were envisioned. With $M=4$, the model converges to using only three experts. Through Algorithm 2, MixER allows for empty K-means clusters, effectively sidelining unused experts (https://anonymous.4open.science/r/MixER/assets/lorentz_heatmaps.png). In summary, while MixER may split our tentative families, it captures the underlying structure in the data. While we cannot guarantee when the system converges to fewer experts, we suspect this behavior depends on the dataset and the backbone. Future Work on MixER will investigate theoretical guarantees, and devise synthetic datasets that expose this behavior. --- ### Q2. Preference for GEPS as a Backbone We think GEPS is preferred over NCF and CoDA due to its greater flexibility and expressiveness in capturing shared knowledge. 
Further analysis of each backbone's properties could provide deeper insight, and we will include this in the Future Work section. --- ### Q3. Scalability Considerations Efficient approximations and batching strategies (batch size $B\ll E$) mitigate scalability challenges. Moreover, the gate is updated only via **forward** passes, and no backpropagation is performed. This leads to very fast updates. In practice, MixER incurs roughly a 20% slowdown over naive MoE (ODEBench-2):

||CoDA|GEPS|
|---|---|---|
|MixER|933 s|449 s|
|Naive MoE|790 s|375 s|

We acknowledge that computational costs can increase when dynamics are demanding. We will highlight this as a limitation in Section 6. --- ### Q4. Performance on Chaotic Dynamics We appreciate and will include the references [1] and [2]. For evaluation on chaotic dynamics, we merged the noisy Lorenz63 and noisy Lorenz96 datasets from [3]. Lorenz63 trajectories were padded with 0 to match Lorenz96’s dimensionality (10). The optimization loss was the Gaussian negative log-likelihood, used for expert routing. In addition to the $L_2$ evaluation loss, we considered the Hellinger distance $D_H$ for long-term dynamics from [3]. Although we expected two hier-shPLRNN [3] experts to be ideal, we tested MixER with up to four experts. Looking at $D_H$, three experts proved optimal, with the fourth unused in MixER-4 (see link in Q1). Two experts handled most environments, while the third handled outlier cyclic trajectories.

||MixER-1|MixER-2|MixER-3|MixER-4|
|---|---|---|---|---|
|$L_2$|4.557e-03|1.793e-03|1.442e-03|1.038e-03|
|$D_H$|5.610e-01|5.555e-01|5.543e-01|5.544e-01|

[3] Brenner et al., ICLR 2025, Learning Interpretable Hierarchical Dynamical Systems Models from Time Series Data --- ### Q5. Fundamental Limitations and Benefits of MixER We agree that MixER does not offer much benefit on classical DSR datasets, especially if SoTA performance is the primary goal.
(In fact, we included these datasets specifically to highlight this limitation of MoE approaches.) That said, as noted in Q1 and Q4, MixER can uncover fundamental information in the data. We recommend using a small number of experts to minimize computational costs. --- ### Q6. Validation Loss Dynamics In Figure 2 (bottom), this is due to abrupt shifts in gating function assignments: environments being wrongly assigned, then corrected in the next step. This is partly due to adding noise (Algorithm 1, line 23) before performing least squares regression, done to prevent the system from getting stuck in suboptimal configurations. --- ### Q7. Weakness: Under which Circumstances does MixER Improve Performance? Figure 2, Table 2, and Table 3 show diminishing returns on ODEBench-2, ODEBench-10A, and ODEBench-10B, respectively. MixER performs best in low-data regimes and when families are highly distinct, which we believe will be the dominant setting for foundational DSR models. Additionally, we appreciate the reviewer’s comments on Figures 5 and 6 and will update them accordingly if accepted for publication. --- Rebuttal Comment 1.1: Comment: I thank the authors for the additional experiments and clarification during the rebuttal phase. **Q1 + Q4** Thanks. It makes total sense that MixER separates different dynamics, rather than separating the environments based on the ground-truth underlying system, i.e. it makes much more sense for the algorithm to route cyclic dynamics to a separate expert, even though this does not reflect that the origin of the data is the same set of ODEs. That MixER-4 only uses 3 experts is nice to see. It is also great to see that the authors considered long-term dynamics measures such as $D_H$. However, the results puzzle me: did the authors correctly apply the measure? In the original publication [1], the results on the Lorenz datasets in terms of $D_H$ are an entire order of magnitude smaller.
It also features no variance across different MixER settings, which suggests that all combinations produce similarly bad (good) results? What is the standard deviation across trajectories? **Q2** I see, thanks. To me it seems quite important that the backbone maximally captures shared knowledge, as this would most benefit MixER by minimizing the number of required experts. That said, I do think backbone performance is a separate topic and does not influence the significance of the manuscript. **Q3** I see; however, the approach still needs $M_B \times E_B$ forward-pass evaluations for each batch, correct? Paired with the premise that the approach aims at foundational models, this scaling might be prohibitive since, as of today, foundational models are trained on vast amounts of data and hence generally need larger batch sizes for optimal convergence. I think the manuscript would hence benefit from a more rigorous runtime complexity analysis (at least in the appendix). **Conclusion** After also reading the other reviews and rebuttal answers, my conclusion is that the paper isn’t quite ready yet. Specifically, if the authors aim to keep the “foundational models” aspect, I suggest a more rigorous runtime and scaling analysis. Also, the method only seems to increase performance over baselines in very specific settings (highly heterogeneous data in terms of families, little data in terms of trajectories/environments). That MixER converges to a reasonable number of experts in the rebuttal experiment with the Lorenz datasets is a nice result, but it does not convince me to lean towards accept. I will hence keep my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for acknowledging our rebuttal efforts and taking the time to provide additional comments. **Q1+Q4**. We have leveraged the code from [3] to compute the Hellinger distance $D_H$, and we have no doubt their implementation is accurate.
Given your statement that you would maintain your score regardless, despite previously implying openness to change contingent on a thorough rebuttal of the concerns you **previously** raised, we see limited value in performing additional experiments to include standard deviations at this time. That said, thank you for pointing that out. **Q3**. Thank you for this helpful suggestion. We agree that examining the influence of the batch size $B$ on both performance and runtime would strengthen the paper. We will include this analysis in the appendix.
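For reference, the Hellinger distance $D_H$ discussed in this thread compares two (binned) state distributions as $D_H(p, q) = \frac{1}{\sqrt{2}}\lVert\sqrt{p}-\sqrt{q}\rVert_2$. A minimal sketch on hypothetical histograms follows; this is the textbook definition, not the exact evaluation pipeline of [3]:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions, e.g. binned
    state-occupation histograms of reconstructed vs. ground-truth dynamics.
    Ranges from 0 (identical) to 1 (disjoint support)."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q))) / math.sqrt(2.0)

print(hellinger([0.5, 0.5], [0.5, 0.5]))  # identical histograms -> 0.0
print(hellinger([1.0, 0.0], [0.0, 1.0]))  # disjoint histograms  -> 1.0
```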
Summary: Authors consider the following system reconstruction formulation: given D-dimensional time series, or measurements, of length T, the goal is to reconstruct the process generating this data. The series are grouped into E distinct environments, which are grouped into F families. It is assumed that the measurements within the same environment are closely related, environments within the same family may have higher-order relationships, and the relationship between families can be arbitrary. Authors highlight two families of DSR models for solving this problem: flat (seq2seq and IVP) and hierarchical, or contextual (hypernet- and concatenation-based). The paper focuses on the second family and studies the question of optimally clustering environments (into families) in order to maximise the performance of the underlying contextual models. To do this, the paper proposes MixER: Mixture of Expert Reconstructors, a sparse top-1 Mixture-of-Experts (MoE) which comprises M contextual DSR models. MixER’s main innovation lies in the fact that it uses E environment-conditioned context vectors for gating, and in its custom gating update algorithm, which uses K-means clustering and least-squares optimization instead of gradient descent. The router is a linear mapping from context vectors into M logits (the argmax of which selects the expert), and its parameter update operates in 4 stages: 1. K-means clustering of context vectors, K := M; 2. Calculating the median loss for each of the M^2 expert-cluster pairs; 3. Greedy pairing of experts and clusters according to the median losses; 4. Least-squares optimisation with the one-hot targets derived at step 3. Authors conduct a set of experiments to showcase the strengths and limitations of MixER. They extract three subsets of the ODEBench dataset, each of which has a varying number of environments and families.
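Stage 3 of the gating update summarized above (greedy pairing of experts and clusters by median loss) admits a short sketch. This is one plausible reading of the procedure with a hypothetical loss matrix, not the authors' exact code:

```python
def greedy_pairing(median_loss):
    """Greedily match clusters to experts: repeatedly pick the globally
    smallest remaining (expert, cluster) median loss and fix that pair.
    median_loss[m][k] is expert m's median loss on cluster k."""
    m = len(median_loss)
    free_experts, free_clusters = set(range(m)), set(range(m))
    assignment = {}  # cluster index -> expert index
    while free_clusters:
        e, k = min(((e, k) for e in free_experts for k in free_clusters),
                   key=lambda pair: median_loss[pair[0]][pair[1]])
        assignment[k] = e
        free_experts.remove(e)
        free_clusters.remove(k)
    return assignment

# Hypothetical 2-expert example: expert 0 fits cluster 1 best,
# so expert 1 is left with cluster 0.
losses = [[0.9, 0.1],
          [0.2, 0.8]]
print(greedy_pairing(losses))  # -> {1: 0, 0: 1}
```

The resulting one-to-one assignment is what stage 4 would turn into one-hot targets for the least-squares fit of the router.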
Firstly, they demonstrate the limitations of existing DSR models, namely NCF, CoDA, and GEPS, on the smallest of the three tasks, ODEBench-2 (F=2, E=5), which requires reconstructing two 2-dimensional ODEs. Additionally, they show that naive MoEs also fail to solve the problem, as opposed to the MixER approach, which is able to generalise to the validation dataset. Secondly, they report the performance of MixER-M models with M experts during training and adaptation, claiming that one of two modifications of MixER-10 outperforms MixER-1 on ODEBench-10A (F=10, E=5), but underperforms compared to the baselines on ODEBench-10B (F=10, E=16). Finally, the paper investigates feature interpretability and downstream clustering on two time-series datasets, Epilepsy2 and SCCTS, showing that the clustering labels derived by MixER are highly correlated with the ground-truth labels for SCCTS, however not so much for Epilepsy2. Additionally, in spite of having extra model capacity, MixER underperforms on both of these datasets compared to the baselines. Authors hypothesize that the reason for that may be the dataset’s inherent noise and the close relationship between the presented environments. Claims And Evidence: Authors make several claims in the paper; however, in my opinion, they fail to provide clear evidence for most of them. I will comment on the main three claims made in the paper. 1. L78-82, “We claim that strategic combination of contextual meta-learners enables simultaneous reconstruction across all families while preserving rapid adaptation capabilities, obviating the need for manual dataset partitioning prior to meta-learning on each subset” – while this may generally be true, the paper doesn’t provide strong evidence in support of this. Moreover, the proposed method uses several manually designed heuristics in its gating mechanism (k-means, greedy pairing, argmax for top-1), whose efficiency is unclear compared with any manual/fully heuristic dataset partitioning.
Also, the results from section 4.1 show that MixER struggles with adaptation, hence the claim remains unproven. 2. L100-104, “We identify a fundamental limitation of gradient descent when routing contextual information to DSR models, which slows down expert specialization when training MoEs” – this claim is purely based on the experimental results in section 2.2, which, firstly, used a small dataset and, secondly, if my understanding is correct, an invalid experimental protocol (described below). I also suspect that the baselines were not properly tuned, e.g., the authors claim that the naive MoE sends all inputs to the same expert, which can be addressed by tuning the load balancing loss. 3. L184-185, “... our framework eliminates the need for importance or load balancing terms in the loss function.” – this claim remains unaddressed in the text. From the algorithm description it follows that the derived clusters can be of arbitrary size (as also demonstrated in Figure 4), and, although Figure 8 suggests that the clusters can be equally sized, there are no guarantees for this, and I’d expect this not to hold in practice in general. Methods And Evaluation Criteria: The benchmarks are suitable to investigate the problem at hand. Theoretical Claims: No theoretical claims are made in the paper. Experimental Designs Or Analyses: My main concern is that the authors don’t mention anywhere that all compared models use the same number of parameters, which, apparently, means that they don’t? If that's the case, then all the experiments in the paper are invalid due to the incorrect comparisons with an a-priori weaker baseline. Section 2.2 with the motivational example doesn’t state how many experts are employed in the MoE models. Since the dataset is very small, I wonder how the authors chose this number and how they ensured no overfitting? I suspect the latter happens for the MixER model and doesn’t happen for the naive MoE because of its suboptimal hyperparameters. 
Further evidence of MixER overfitting is the poor results for the adaptation mode in section 4.1. Why do the authors report train-set MSEs in Tables 2 and 3? Is it a typo? If not, what are the numbers on the corresponding test sets? Supplementary Material: The authors included a link to their code. I looked into it; however, unfortunately, it neither added clarity to the algorithmic details, nor helped to better understand the experimental setup, due to the quite poor structure and quality of the code. I also reviewed the supplementary material. It contains valid descriptions of the used datasets and classic algorithms, visualisations for the experimental results, as well as some MixER implementation details. I didn’t check the correctness of the latter because of the poorly structured code. Relation To Broader Scientific Literature: The paper combines the ideas of hierarchical meta-learning and the mixture-of-experts technique. It is an attempt to combine the MoE with K-means clustering in order to train domain-specific experts, where domains form the highest level (family) in a 3-level hierarchy (measurement, environment, family). Essential References Not Discussed: I am not aware of any. Other Strengths And Weaknesses: Overall, I am not convinced that the experimental results are strong enough to claim that the proposed method is stronger than the baselines. Most of the reported results (except for the motivational example) contain quite mixed numbers, and MixER is subpar to the baseline in most comparisons. The authors include a good analysis of the limitations; however, no significant strengths are successfully demonstrated. Other Comments Or Suggestions: Figure 4 shows two clusters of different sizes (apparently, 3 and 5); however, the derived Y labels have a 4:4 split. Is it a typo, or is there some additional procedure that ensures an equal split between labels? I haven’t found it in the paper. 
Some references are incomplete or incorrect, e.g., the sequence-to-sequence references do not include [1], and the MLP references don’t mention [2]. [1] Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." Advances in neural information processing systems 27 (2014). [2] Rosenblatt, Frank (1958). "The Perceptron: A Probabilistic Model For Information Storage And Organization in the Brain". Psychological Review. 65 (6): 386–408. Questions For Authors: Please consider addressing my comments from above, which are essentially questions about the paper. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for reviewing our work and for providing thorough comments. We find your summary of our work especially pleasing to read. Thank you for noting the combination of the ideas of hierarchical meta-learning and the mixture-of-experts technique that our method leverages. We are happy our attempt to combine the MoE with K-means clustering in order to train domain-specific experts was worth mentioning. Furthermore, thank you for noting the relevance of our supplementary material, which contains valid descriptions of the used datasets and classic algorithms, visualizations for the experimental results, as well as some MixER implementation details. --- ### Q1. Claims And Evidence (L78-82) Our overall objective is to have a fully unsupervised and automated pipeline for designing foundational models for DSR. The several heuristic methods we propose in our gating mechanism are designed to be quick and interpretable. We do note, however, that other heuristics could be considered, and we plan to do so in future work (see Q3 to Reviewer rCHt). We also note that manual partitioning can be detrimental. For instance, Figure 6 shows how vanilla K-Means (first column) provides a less clear distinction between the classes compared to MixER with 3 experts. Finally, adaptation with MixER in Section 4.1 reveals similarity to meta-training. While we did not include timings, adaptation with the backbone meta-learners is relatively quick compared to the meta-training stage. --- ### Q2. Claims And Evidence (L100-104) Indeed, our claim from L100-104 is mostly based on Section 2.2, and we apologize for not explicitly stating the number of experts used (which is $M=2$ as seen in Figure 8). That said, we respectfully disagree with the reviewer's comment invalidating our experimental protocol. Since the hierarchy in the data is unknown, load balancing terms are not suitable (see Q3). 
The hyperparameters (mentioned in Appendix C.3) are the same for naive MoE and MixER (see Q5 to Reviewer rCHt). We believe this makes the comparison in Figure 2 fair and supportive of our claim. Finally, we note that the limitations of gradient descent as a routing mechanism persist when the number of experts is scaled up (see W2 to Reviewer Cntw). --- ### Q3. Claims And Evidence (L184-185) The clusters can be of unequal size. We do not enforce an equal split of environments, nor do we necessarily want such a split. Indeed, if the families fundamentally possess different numbers of environments, we want the model to automatically pick up on this, and specialize appropriately. In summary, although Figure 8 suggests that the clusters are equally sized (as they should be as per Table 1 with ODEBench), we recommend using Figure 4, which provides a more holistic overview of our approach. --- ### Q4. Experimental Design: Parameter Count Issues? In section 4.2, we mention that the compared methods have the same **active** number of parameters (see Table 4, repeated in L294-295). In other sections, we retain the same active parameter count. We will add this information in the Introduction to Section 4. --- ### Q5. Experimental Design: Overfitting? We ensure fair comparison by picking the best model based on the validation loss (Figure 2, bottom). The reason why naive MoE and task-specific meta-learning fail is very clearly due to inconsistent routing, as we show with Figure 8. If indeed, overfitting was occurring, we wouldn't observe the excellent Relative L2 error on the left-out test set in Figure 2 (top). --- ### Q6. Experimental Design: Train-Set MSE ? We agree that "Train MSE" is not the most intuitive term for this column. This is indeed not the error on the train set. Instead, it is the error on the meta-training's query set. Similarly, the "Adapt" column indicates the error on the adaptation's query set. These splits are explained in Table 5. --- ### Q7. 
Quality of the Code We have updated our code. Hopefully, this improved codebase can prompt the reviewer to comment on the correctness of our implementation. --- ### Q8. Other Strengths And Weaknesses We respectfully disagree with the reviewer that no significant strengths are successfully demonstrated. As per our reply in Q2+Q5 above, we believe Figure 2 vividly demonstrates the value of MixER. Furthermore, Figure 6 and Q4 to Reviewer 6AnX strengthen our claim. The weaknesses of our approach are analyzed and fully documented, and we believe knowledge of those limitations is valuable for the community moving forward. --- ### Q9. Figure 4 Equal Split Typo? There is no typo. Figure 4 shows two clusters of different sizes (indeed, 3 and 5), and the derived $Y$ labels have a corresponding 3:5 split. As per our response in Q3, there is no procedure to ensure an equal split (which isn't necessarily desirable). --- ### Q10. Additional References We are grateful to the reviewer for these references, and we will make sure to include them in the final version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for taking time to respond to my review. While the clarifications are appreciated, I remain concerned about the experimental protocol's rigour. Matching the active number of parameters doesn't ensure a fair comparison, as the proposed MoE models have M times more total weights. This not only gives them greater representational power compared with non-MoE models, but also makes them more prone to overfitting. Consequently, a proper evaluation requires comparison against (at least) carefully tuned MoE baselines. Taking this into account, as well as other reviews, I think the promising results shown in Figure 2 are not sufficient on their own to convincingly demonstrate the proposed model's viability and generality, so I keep my current score. 
--- Reply to Comment 1.1.1: Comment: Dear reviewer, thank you for acknowledging our rebuttal and for providing these additional comments.
Summary: This paper addresses the challenge of dynamical system reconstruction (DSR) in data-scarce environments, where traditional meta-learning approaches struggle to generalize across loosely related system hierarchies. To overcome these limitations, the authors propose MixER (Mixture of Expert Reconstructors), a sparse Mixture of Experts (MoE) framework that replaces conventional gradient-based gating with a K-means clustering and least squares optimization mechanism. Such a routing strategy enables more efficient expert specialization and adaptation, facilitating hierarchical meta-learning across diverse dynamical systems. Extensive experiments demonstrate that MixER achieves superior efficiency and scalability in low-data, heterogeneous settings, successfully reconstructing systems governed by up to ten parametric ordinary differential equations (ODEs). However, its performance deteriorates in high-data regimes, particularly when experts are constrained to process highly correlated datasets, limiting their ability to generate meaningful contextual representations. Claims And Evidence: The following claims are well-supported. - The experimental comparisons convincingly demonstrate the limitations of existing methods in handling hierarchical DSR. - The experiments clearly illustrate how the new routing mechanism improves expert allocation. However, I think the performance of MixER is highly unstable. For example, in Table 2, it cannot outperform the naïve MoE w.r.t. train MSE with the GEPS backbone, or adapt MSE with the CoDA and GEPS backbones. Moreover, it cannot outperform MixER with only 1 expert, as shown in Tables 2 and 3. Why could that happen? Also, while the experiments confirm improved generalization across loosely related environments, the drop in performance on highly related datasets contradicts the claim of universal generalization. 
Methods And Evaluation Criteria: The proposed MixER method is a sparse top-1 MoE layer with a K-means and least-squares-based gating mechanism. It seems to be suited for hierarchical dynamical system reconstruction in sparse, multi-family datasets. It has the potential to address limitations of traditional meta-learning by enabling simultaneous learning across loosely related systems while retaining adaptability, making it apt for scientific applications like neuroscience. However, its top-1 architecture may limit performance in high-data regimes. Evaluation criteria, including relative L2 error, MSE, and classification accuracy, appropriately assess reconstruction and generalization. Benchmarks like ODEBench, classical DSR datasets (e.g., Lotka-Volterra), and Epilepsy2 EEG data cover diverse scenarios, aligning with the problem’s scope. Theoretical Claims: N/A. There are no proofs in this manuscript. Experimental Designs Or Analyses: Also see Claims and Evidence for more information. 1. Few-Shot Learning on ODEBench-10A and ODEBench-10B: a. The paper does not provide any other baseline results. b. The adaptation phase uses only one environment for training (Table 5), which may not reflect real-world few-shot scenarios requiring multiple support samples. c. The lack of discussion on hyperparameter tuning (e.g., K-means iterations, learning rates) raises questions about reproducibility and sensitivity. 2. Time Series Classification on Epilepsy2 a. Without full results, it’s unclear how MixER handles the test set imbalance; accuracy alone may be misleading without precision/recall. b. The single-channel EEG focus limits complexity compared to multi-channel data, potentially overstating performance. c. The batch size (571) during adaptation (Appendix C.3) seems unusually large for few-shot learning, requiring clarification. Supplementary Material: I have reviewed all parts of the supplementary material. 
Relation To Broader Scientific Literature: The key contributions of "Towards Foundational Models for Dynamical System Reconstruction: Hierarchical Meta-Learning via Mixture of Experts" align with and extend several threads in the broader scientific literature on dynamical system reconstruction (DSR) and meta-learning. 1. Hierarchical Meta-Learning for DSR 2. Mixture of Experts (MoE) in Scientific Modeling 3. Foundational Models in Science Essential References Not Discussed: There is no essential reference which is not discussed. Other Strengths And Weaknesses: Strengths 1. The paper proposes a new gating mechanism which can potentially benefit the research of MoE models. 2. The paper is relatively well-written and easy to understand. 3. The focus on low-data regimes aligns with real-world scientific challenges (e.g., neuroscience, clinical data), making it a valuable contribution to foundational models for scientific discovery. Weaknesses 1. Unclear Motivation for MoE: The rationale for choosing MoE over other architectures (e.g., single large networks, transformers) is not well-articulated. While inspired by large language models, the paper lacks a deeper justification linking MoE’s sparsity to DSR’s unique demands. 2. Insufficient Baselines: The baseline selection is incomplete. It omits comparisons with non-meta-learning methods or other scalable architectures (e.g., Mamba), limiting the scope of validation. 3. Neglect of Alternative Routing Methods: The paper focuses solely on K-means and least squares routing without exploring other strategies. This narrows the methodological exploration and weakens claims of optimality. 4. Limited Theoretical Rigor: Theoretical claims (e.g., gating superiority, optimization convergence) lack formal proofs, relying heavily on empirical results, which reduces their universality and robustness. 5. 
Reproducibility Concerns: Key hyperparameters (e.g., learning rates, K-means initialization details) are underspecified, potentially hindering replication and sensitivity analysis. Other Comments Or Suggestions: N/A Questions For Authors: My questions are in line with the concerns, issues, and weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for reading and commenting on our work. We are happy you praised the novelty of our gating mechanism from which MoE research stands to benefit. Your appreciation of how well our paper was written and how our exposition made it easy to understand is encouraging. The fact that you found our contribution valuable to foundational models for scientific discovery is also encouraging. Indeed, our end goal is to solve real-world scientific challenges, especially in the low-data regime in which foundational models tend to be fine-tuned (e.g., neuroscience, clinical data). --- ### Q1. Unclear Motivation for MoE DSR is unique in that its datasets are very different from one source to another, contrary to language, which inherently contains lots of structure. While large monolithic models are powerful, they do not explicitly provide specialization across distinct behaviors. Another argument for using MoEs for DSRs is that they are computationally efficient since only a small subset of experts is active, not to mention increased interpretability. The limitations of single monolithic models have been explored in [1,2]. As for transformers meta-learning DSRs, we can refer to [3]. Finally, as our proposal is simply an augmentation of existing meta-learners, it does not prohibit the use of underlying transformer backbones. While we do not pursue this avenue, we believe the community will find it helpful. To clarify this motivation, we will expand our introductory paragraph in the paper to indicate that our choice of MoE is driven by more than the success of recent large language models. #### References: [1] Yin et al. LEADS: Learning dynamical systems that generalize across environments, NeurIPS, 2021. [2] Nzoyem et al., Neural Context Flows for Meta-Learning of Dynamical Systems, ICLR 2025 [3] Serrano et al., Zebra: In-Context and Generative Pretraining for Solving Parametric PDEs, arXiv, 2024. --- ### Q2. 
Insufficient Baselines Our objective with this work is to show that preexisting **meta-learners** can be augmented with MixER. As such, we believe it is important to show results with and without MixER, as we've done repeatedly. As for non-meta-learning (which isn't the focus of our work), please see Q1. Concerning the use of other scalable architectures like Mamba, we'd be happy to incorporate any work that specializes it for non-linear dynamical systems across parameter changes into our MixER formulation. --- ### Q3. Neglect of Alternative Routing Methods We selected K-means and least squares routing as they provide a **simple** yet effective unsupervised clustering mechanism for expert assignment. However, we acknowledge that other routing strategies (e.g., soft gating, reinforcement learning-based routing) could further improve performance. Our goal was to keep the routing method **interpretable** and computationally **efficient**. That said, we deeply appreciate the suggestion to explore alternative routing mechanisms and will investigate this in future work. --- ### Q4. Limited Theoretical Rigor Indeed, our approach primarily relies on empirical validation as MixER routing for DSR is inherently data-dependent and difficult to formalize in a universal manner. It would take significant efforts to provide such guarantees, perhaps even a full paper. We will add this theoretical limitation in Section 6. --- ### Q5. Reproducibility Concerns While we included major hyperparameters in Appendix C.3, we recognize that details such as K-means initialization (see Algorithm 2, line 4) could be more thoroughly documented. Our code partly addresses this problem, with all hyperparameters clearly stated at the start of each `main_mixer.py` script in the `./examples` folder. Due to rebuttal limitations, we cannot write down more details here. --- ### Q6. Claims And Evidence: Why could that happen? 
The one powerful expert meta-learner is exposed to more shared knowledge than any of the 10 or 20 experts in Tables 2 and 3. Please see Q7+Q5 from Reviewer 6AnX and Q4+W1 from Reviewer Cntw for more on this. Keeping this limitation in mind, we do not claim universal generalization. --- ### Q7. Experimental Design Or Analyses - 1b) We agree that this may not reflect real-world scenarios, but this is the meta-learning for DSR setting inherited from [1,2]. We had little control over this. - 2c) The adaptation batch size of 571 (Appendix C.3) was chosen for computational efficiency. With 11,420 adaptation environments, it would have been intractable to adapt one at a time. We split them into 20 batches, resulting in a batch size of 571. --- We could not respond to all the reviewer's concerns due to character limitations (especially in Q7), and we are happy to answer any additional concern. We thank you once more for your comments, as they've already helped improve our paper.
Summary: The authors propose MixER, a Mixture-of-Experts routing mechanism for learning diverse dynamical systems while preserving adaptability within individual environments. It incorporates an environment-specific context vector for better gating decisions, and integrates k-means clustering for hierarchical learning. While well-motivated, the method lacks strong experimental evidence of its effectiveness, and some details need further clarification. Claims And Evidence: Some of the claims are well-supported; others need further clarification. For instance, “We identify a fundamental limitation of gradient descent when routing contextual information to DSR models, which slows down expert specialization when training MoEs.” --> Other than the example in Section 2.2, is there other evidence for this claim? Methods And Evaluation Criteria: Yes, the methods and evaluation criteria are mostly valid and appropriate. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental designs are mostly standard, but I would appreciate more meaningful model comparisons and thorough ablation studies. Supplementary Material: I reviewed most of the supplementary material, including datasets, additional results, etc. Relation To Broader Scientific Literature: This article sits in between a few research directions including Neural ODE, scientific foundation models, multitask learning, meta-learning, etc. It redefines a research topic of learning across diverse dynamical systems into meta-learning and proposes a specific methodology for this formulation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: Using a context vector as a gating input is a reasonable and interesting approach. The authors also conducted extensive experiments across multiple datasets. Weaknesses: The empirical results are not compelling and do not support the claim well. 
Other Comments Or Suggestions: Figure 4 is difficult to understand; please improve its clarity and captioning. Questions For Authors: Can this model be extended into a foundation model for ODEs? What potential aspects can be further explored? Can this cross-family learning simply be cast into a multi-task learning problem? The results in Fig. 2 do not appear to align with the trends observed in Tables 2 and 3, where MixER-10 does not clearly outperform other baselines. Could you clarify the underlying reason? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read and provide comments on our paper. We are happy you found the approach well-motivated and interesting while acknowledging how we redefined a research topic of learning across diverse dynamical systems into meta-learning. We are pleased you found our results extensive across multiple datasets. --- ### Q1. Can this model be extended into a foundation model for ODEs? Our model can indeed be extended into a foundational model for ODEs, especially since ODEs come in various types (bifurcation, time scales, etc.). It is difficult to imagine a foundational model for DSR without involving a mixture of specialized experts. We view hierarchical meta-learning as a possible route toward cheap foundational models for ODEs. We envision several issues along the way, the main one being the diminished representational power when each expert is exposed to only a fraction of a very abundant dataset (see Q4 from Reviewer 6AnX). We point this out in our limitations, and it constitutes a solid **warning** to use varied datasets for training foundational DSR models. --- ### Q2. What potential aspects can be further explored? Our results highlight the important promises of this approach. As mentioned above, future work will explore ways to retain the capabilities of task-specific meta-learners within MixER. Specifically, we will investigate when MixER converges to only using the minimum number of required experts to fully represent the data, for optimal performance. Other important areas of future work were pointed out in our responses to Reviewer 6AnX. --- ### Q3. Can this cross-family learning simply be cast into a multi-task learning problem? We agree that this cross-family learning can be cast into a multi-task learning problem. 
However, to maintain the ability of these models to quickly adapt—to be **easily** fine-tunable for downstream tasks (like we believe foundational models should be)—we choose to incorporate meta-learning, thus building on rapidly growing research in this area. --- ### Q4. Could you clarify the underlying reason why MixER doesn't consistently outperform the baselines? This is because the dataset used in Figure 2 is much sparser than the ones for Table 2 and Table 3 (as we discuss in Table 1 and the Appendix). In Figure 2, the families are clearly different, and there is very little room for ambiguity in the routing; while in Figure 5 (related to Table 3), we have 10 families, and trajectories in some families (in chunks of 16) are similar to trajectories in other families. When compared to Figure 8 (related to Figure 2), we see that routing is not clean anymore. This routing uncertainty is at the source of the performance drop. All in all, with abundant data and little family discrepancy, MixER struggles to outperform the task-specific meta-learner, which can more effectively extract shared knowledge from the whole dataset. --- ### W1. Weak experimental evidence We acknowledge that our presentation of experimental evidence in the main Introduction was lacking and does not promote our approach very well. We believe, however, that a better presentation as follows is possible: - Exceptional results on ODEBench-2 (starkly different datasets) and on SCCTS - Underperformance on ODEBench-10A and on Epilepsy2 The common denominator is that MixER performs well when little data is available for very different environments. --- ### W2. Additional evidence of the weakness of GD ? No, we do not include any other additional evidence for this. In the manuscript, Section 2.2 is the only experiment specifically designed to illustrate the limitations of gradient descent. We note, however, that the poor routing was observed throughout all our experiments (e.g. 
https://anonymous.4open.science/r/MixER/assets/gate_values_geps_gd_10M.pdf). We will include similar plots in the final manuscript to indicate that the competitive performance of Naive MoE on some problems is not due to clever utilization of its experts, but to strong generalizability of one or a few of its base meta-learners. --- ### W3. More results We appreciate your request for more meaningful results and ablation studies. But seeing as our paper is essentially a giant ablation study of gradient descent as the routing mechanism, we do not see which other ablation can be performed. Perhaps the reviewer could suggest a few datasets and specific experiments? --- ### W4. Figure 4 captioning Thank you for this suggestion: we propose to add to the arrows the label numbers (1), (2), (3), or (4), and change the caption to "_Illustration of the main stages of our context-based gating update algorithm: (1) K-means clustering; (2) per-expert per-environment loss computation; (3) expert-cluster pairings; and (4) least-squares optimization. All tensors are in compatible shapes._" --- Once again, thank you for your positive and valuable feedback. Your insights have helped us improve the clarity of our paper. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. >Response to Q4 There should be a remedy for this performance drop when families are similar. If adding more data causes problems in model performance, I do not see adequate scaling ability of the model and efficacy of the methodology. And in principle, when families are similar, I would expect the model to learn even better. This would be a critical point to resolve in order to make this article convincing and significant enough to be accepted.
The Four Color Theorem for Cell Instance Segmentation
Accept (poster)
Summary: The paper presents a novel cell instance segmentation method inspired by the four-color theorem, which simplifies the segmentation task by transforming it into a four-class semantic segmentation problem. The key contributions include: Four-Color Encoding Scheme: Cells are treated as "countries" and tissues as "oceans," allowing the use of a four-color encoding to ensure adjacent cells receive distinct labels. Asymptotic Training Strategy: This step-by-step approach first distinguishes foreground and background, then predicts categories within the foreground, addressing class imbalance. Encoding Transformation Method: This method maps the predicted output to a consistent four-color representation, ensuring training stability. Theoretical Analysis: The authors prove the global optimality of the greedy algorithm for cell coloring under specific conditions. Empirical Validation: The method achieves state-of-the-art performance on three datasets (BBBC006v1, DSB2018, PanNuke), demonstrating robustness and reduced computational complexity. The proposed method effectively balances performance and efficiency, offering a promising solution for biomedical image segmentation. ## update after rebuttal The responses are satisfactory, and we acknowledge the improvements made in both theoretical and empirical aspects. Claims And Evidence: The claims made in the submission are largely supported by clear and convincing evidence. The authors present a novel approach to cell instance segmentation using the four-color theorem, which is well-explained and validated through extensive experiments. The four-color encoding scheme is effectively demonstrated to transform the segmentation task into a simpler semantic segmentation problem, and the asymptotic training strategy is shown to address class imbalance issues. The encoding transformation method is theoretically justified and empirically proven to enhance model convergence and performance. 
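As a toy illustration of the four-color encoding idea summarized above (my own sketch, not the paper's code; the cell IDs, adjacency dictionary, and visit order are assumptions): each cell is greedily assigned the smallest of four colors not already used by a colored neighbour, with label 0 reserved for the background/tissue "ocean".

```python
def four_color(adjacency):
    """Greedy coloring: smallest color in {1..4} unused by colored neighbours.

    Note: a greedy pass is not guaranteed to stay within four colors for an
    arbitrary planar graph and ordering; the paper proves global optimality
    only under its specific assumptions about cell distribution.
    """
    colors = {}
    for cell in sorted(adjacency):  # fixed, assumed visit order
        taken = {colors[n] for n in adjacency[cell] if n in colors}
        colors[cell] = min(c for c in (1, 2, 3, 4) if c not in taken)
    return colors

# Hypothetical adjacency graph: cells 0-2 mutually touch, 3 touches only 2,
# and 4 is isolated
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}, 4: set()}
coloring = four_color(adj)   # adjacent cells receive distinct labels
```

The point of the encoding is that this discrete coloring turns instance separation into a four-class (plus background) semantic segmentation target.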
The state-of-the-art performance on three datasets (BBBC006v1, DSB2018, and PanNuke) further substantiates the effectiveness of the proposed method. However, there are certain aspects that could benefit from additional clarification. The generalizability of the method across diverse cell types and imaging modalities is not fully explored, and further discussion or experiments would be valuable to demonstrate its broader applicability. Additionally, while the impact of the non-uniqueness of encoding is addressed through the encoding transformation method, a more detailed analysis of its effect on model performance would strengthen the paper. The computational efficiency claims, though supported by a comparison of computational complexity, could be further bolstered by detailed benchmarks on standard hardware configurations. Lastly, the theoretical analysis of the global optimality of the greedy algorithm, while convincing, relies on specific assumptions about cell distribution and structure, which could be more thoroughly discussed in the context of real-world biomedical images. Overall, the submission presents a robust and innovative approach, with minor areas for enhancement to fully address potential limitations. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for cell instance segmentation in biomedical images. The four-color encoding scheme simplifies segmentation by transforming it into a four-class semantic task, while the asymptotic training strategy addresses class imbalance. The encoding transformation method ensures training stability by handling non-uniqueness in encoding. The negative sampling constraint enhances the model's ability to distinguish adjacent cells. The evaluation on three diverse datasets (BBBC006v1, DSB2018, PanNuke) using standard metrics (DICE, AJI, DQ, SQ, PQ) validates the method's robustness and generalizability. 
Overall, the approach effectively addresses key challenges in cell segmentation with strong empirical support. Theoretical Claims: As a whole, there are no logical errors in the algorithm presented in the article, but I am concerned about the following: This strategy relies on accurate initial segmentation. If there are errors in the initial segmentation (such as adhering cells not being separated correctly), the four-color strategy may not be able to effectively distinguish instances, which would affect the final result. On large-scale high-resolution pathological images, the computational complexity of the four-color strategy is high, especially when constructing the adjacency graph and optimizing the coloring, which may affect inference efficiency. Experimental Designs Or Analyses: The experimental designs and analyses in the paper are sound and appropriate for evaluating the proposed cell instance segmentation method. The authors used three diverse datasets and standard metrics to comprehensively assess performance, demonstrating robustness and superiority over state-of-the-art methods. Ablation studies and convergence analysis further validate the effectiveness of the proposed techniques. Supplementary Material: I focused on the four-color coding analysis section and the visualization results section of the supplementary material. The four-color coding part shows the underlying mathematical principles of the algorithm, which effectively improves the credibility and reproducibility of the paper. The ample visual results effectively verify the effectiveness of the method. Relation To Broader Scientific Literature: The key contributions of the paper are well-aligned with the broader scientific literature, particularly in biomedical image segmentation and graph theory. 
The use of the four-color theorem is a novel adaptation of a classic graph theory result to address the challenge of distinguishing adjacent cells in segmentation tasks. This approach simplifies the problem by converting it into a four-class semantic segmentation task, which is a significant departure from traditional methods that often struggle with computational complexity and accuracy in high-density cell environments. The asymptotic training strategy and encoding transformation method build on existing deep learning techniques to address training instability and non-uniqueness of encoding, which are common issues in semantic segmentation. These strategies are innovative solutions that enhance model robustness and performance, aligning with ongoing research efforts to improve segmentation accuracy and efficiency. The theoretical analysis provided in the paper, which proves the global optimality of the greedy algorithm under specific conditions, contributes to the understanding of graph coloring in biomedical images. This analysis is grounded in established principles of graph theory and combinatorial optimization, providing a rigorous foundation for the proposed method. Overall, the paper integrates well with the broader literature by offering a new perspective on a longstanding problem in biomedical imaging, leveraging both theoretical insights from graph theory and practical advancements in deep learning. Essential References Not Discussed: In my opinion, there are no essential related works missing from the paper that would significantly alter the understanding of its key contributions. The authors have adequately covered the relevant literature in the fields of biomedical image segmentation, graph theory, and deep learning. The application of the four-color theorem is a unique approach that addresses specific challenges in cell segmentation, and the paper provides sufficient context for its contributions. 
Other Strengths And Weaknesses: The paper presents an innovative application of the four-color theorem to cell instance segmentation, offering a novel perspective on a challenging problem in biomedical imaging. The transformation of the segmentation task into a four-class semantic segmentation problem is a significant contribution, simplifying the process and reducing computational complexity. The theoretical analysis and empirical validation on multiple datasets further strengthen the paper's credibility. However, the generalizability of the method to diverse imaging modalities and complex cell structures remains unexplored. The reliance on accurate initial segmentation and the need for post-processing to correct errors also pose limitations. Addressing these issues could enhance the robustness and applicability of the proposed approach. Overall, the paper's originality and potential impact are noteworthy, but further work is needed to address its limitations. Other Comments Or Suggestions: In my opinion, the conclusion section of the paper is relatively brief; the paper's innovations, experiments, strengths, and weaknesses should be summarized there. However, there does not seem to be enough room for this, so I suggest placing the pseudocode in the supplementary material in order to make room for an expanded conclusion. Questions For Authors: I have no other questions here. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough evaluation of our manuscript and the many constructive comments provided. We are also profoundly grateful for your recognition of our proposed method's innovation, theoretical contributions, and experimental design. Regarding the several minor concerns you mentioned, **we have summarized these into the following five points and made one-by-one responses below**. We hope our responses further reinforce your confidence in our approach. For your convenience, we have prepared supplementary materials to assist with your review: [Supp](https://drive.google.com/file/d/1dERoIrcnDklZhwcJDs5zXBBAFn7-I5u3/view?usp=sharing). **1. Generalizability to Diverse Imaging Modalities** Thank you for your valuable suggestions to improve our experimental comparisons; accordingly, we conducted additional experiments on three challenging cell segmentation datasets to demonstrate broader applicability, covering **fluorescence**, **bright-field**, and **phase-contrast** images, plus one **natural-scene dataset (PerSense)**. These four datasets contain many complex, low-contrast, and densely packed objects, as shown in **Supp.RG-Fig.1**. At the same time, we added extensive comparison experiments, and the quantitative results are shown in **Supp.RG-Tab.1**. From the table, we can see that our method shows competitive results across these diverse scenarios. Notably, it achieves **AJI scores of 0.834 and 0.822** on the **Yeaz-BF** and **Yeaz-PC** datasets. On the PerSense dataset, our method outperforms the second-best method by more than **2%**, demonstrating excellent generalization beyond biomedical domains and reinforcing robustness and scalability. Besides, the visualization results in **Supp.RG-Fig.2–6** further indicate that our FCIS can tackle cell segmentation in complex scenarios and segmentation of densely distributed objects. **2. 
Effect of Non-Uniqueness in Encoding** As detailed in the main paper and further illustrated in **Supp.R1-Fig. 6 and Fig.7**, inconsistent color assignments across training batches can cause unstable optimization and fragmented predictions. To resolve this, we introduce an encoding transformation mechanism that includes a "buffer region" in the prediction space. This mechanism maps all variant encodings to a canonical and stable representation. As shown in **Supp.R1-Fig.7(a)**, this transformation significantly improves training convergence. To further support this claim, we provide visual comparisons in **Supp.R1-Fig.7(b)**. The segmentation results clearly demonstrate that the encoding transformation mechanism eliminates fragmented predictions, which confirms its effectiveness in stabilizing training and enhancing model robustness. **3. Computational Efficiency and Hardware Benchmarks** Thank you for your valuable suggestion regarding hardware dependency in our computational complexity analysis. In the initial submission, we used an **NVIDIA A100 GPU** to measure parameter counts and FLOPs. We will clearly state this hardware configuration in the revised manuscript. **4. Theoretical Assumptions and Real-World Applicability** As you rightly pointed out, our method is based on certain assumptions about cell distribution characteristics, such as local clustering and global dispersion, which imply a planar distribution and non-crossing instances. These assumptions hold in our current setting and ensure the feasibility of constructing a valid four-color adjacency graph. However, as you suggested, extending our method to more complex scenarios that do not meet these assumptions would require further theoretical analysis and algorithmic adaptation. We will continue to explore and refine this direction in our future work. Thank you again for your insightful and constructive suggestion. **5. 
Summary and Pseudocode Placement** Thank you for noting the brevity of the summary section. However, since updating the main text is currently not allowed, we promise to expand the conclusion to better highlight the key contributions, main results, and limitations in a future version. Meanwhile, we also appreciate your suggestion to move the pseudocode to the supplementary material to improve the structure and readability of the main paper, and we will revise it accordingly; this will help improve the overall presentation of the paper. Once again, we sincerely thank you for your thoughtful feedback and positive evaluation. We hope the additional experiments, theoretical clarifications, and manuscript revisions have addressed your concerns. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed and thoughtful rebuttal. The responses are satisfactory, and we acknowledge the improvements made in both theoretical and empirical aspects. --- Reply to Comment 1.1.1: Comment: Thank you for your continued guidance and for recognizing our responses during the rebuttal phase. We are truly encouraged by your acknowledgment of the improvements we have made. We will continue to do our best to advance the field of instance segmentation. Once again, thank you for your support and encouragement.
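As a side illustration of the encoding transformation discussed in point 2 of the rebuttal above, one simple way to map permutation-variant four-colorings to a single canonical form is to rename colors in order of first appearance during a raster scan. The following Python sketch is illustrative only; the function name and the renaming rule are our assumptions, not necessarily the paper's exact mechanism:

```python
# Hedged sketch of a canonicalization step (assumption, not the paper's
# exact transformation): rename color labels in the order they are first
# encountered in raster order, so that any two encodings differing only
# by a permutation of the four colors map to the same target.

def canonicalize(color_map):
    """color_map: 2D list of ints, 0 = background, 1..4 = colors."""
    rename, next_id = {}, 1
    out = []
    for row in color_map:
        new_row = []
        for c in row:
            if c == 0:
                new_row.append(0)  # background is left unchanged
            else:
                if c not in rename:
                    rename[c] = next_id  # first appearance fixes the name
                    next_id += 1
                new_row.append(rename[c])
        out.append(new_row)
    return out

# Two permuted encodings of the same segmentation collapse to one form.
enc_a = [[2, 2, 4], [0, 0, 4]]
enc_b = [[3, 3, 1], [0, 0, 1]]
assert canonicalize(enc_a) == canonicalize(enc_b) == [[1, 1, 2], [0, 0, 2]]
```

Collapsing permutation-equivalent encodings to one target is the stability property the rebuttal describes: the training signal no longer changes when the coloring of the same image varies across batches.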
Summary: This paper introduces a new approach to cell instance segmentation based on the Four-Color Theorem from graph theory. The authors propose reformulating instance segmentation as a constrained semantic segmentation problem using only four classes. This method simplifies the task by ensuring that adjacent cells receive different labels, allowing instance differentiation without explicit instance segmentation. The method is evaluated on three biomedical image datasets. ## Update after rebuttal I have adjusted my score to reflect the improvements. Please see my reply to the rebuttal below. Claims And Evidence: - Instead of detecting individual cell instances explicitly, the method treats segmentation as a semantic task where each foreground cell is assigned one of four colors to ensure instance differentiation. The authors present a greedy four-color encoding algorithm (Algorithm 1) that assigns colors while ensuring that adjacent cells have distinct labels. They demonstrate that this reduces model complexity compared to detection- and contour-based approaches. - Directly training on four-color encoding leads to instability due to class imbalance and non-unique encoding. Authors propose binary foreground-background segmentation and four-class label assignment within the foreground. An ablation study shows that using four-color encoding directly degrades performance. The asymptotic training method significantly improves performance metrics. Methods And Evaluation Criteria: The proposed method and evaluation criteria generally make sense for the problem of cell instance segmentation in biomedical imaging. However, there are some important considerations regarding their real-world applicability. The method assumes that cells are adjacent but not significantly overlapping. 
In cases where cells heavily overlap (e.g., thick tissue slices, some fluorescence images) or foreground/background distinction is not clear (TEM images, brightfield microscopy, etc.), adjacency might be unclear, leading to errors in differentiation. Competing models like Cellpose can explicitly model overlapping cells using shape priors, which this method does not. The greedy four-color algorithm assumes that cells behave like planar regions. In reality, cells can have irregular or elongated shapes, making adjacency more complex. Some image applications require segmenting cell bodies and extensions (e.g., hepatocytes in cytotoxicity assays). The method assumes compact cell regions. Theoretical Claims: - Any planar graph can be colored with at most four colors such that no two adjacent regions share the same color. The authors provide a proof sketch that shows how the cell segmentation map can be transformed into a planar graph, where each cell is a node and each cell boundary defines an edge. - The four-color encoding is non-unique. This ambiguity can cause inconsistencies during training. A detailed proof is provided in the supplementary material (not checked) which shows that a mapping function transforming the predicted results to a four-color encoding does exist. Experimental Designs Or Analyses: - The experimental design is structured to compare the four-color method against existing cell segmentation approaches. However, some choices may bias the results or leave important questions unanswered. The datasets do not include highly overlapping cell populations. So, the method's generalizability is in question. - Cellpose, a leading generalist model for cell segmentation, is missing in the comparison. This is a major gap since Cellpose is robust across various imaging conditions. - Ablation studies support the paper's claims and are a strong point of the experimental design. Supplementary Material: Supplementary material is useful. 
It provides additional results and analysis. Relation To Broader Scientific Literature: The paper falls into the broader category of cell instance segmentation. The paper challenges the need for explicit instance detection by showing that cells can be differentiated purely via semantic segmentation with a constrained label space (four colors). No prior work is cited on using four-color encoding for instance segmentation, leaving its connection to past literature vague. Similar theorems were previously used in image processing, graph coloring, etc. Essential References Not Discussed: The authors compare their model mostly against contour-based, detection-based, and distance-based methods (HoverNet, DoNet, CPP-Net, etc.). Although Cellpose may not fall into these categories, it is widely regarded as one of the most robust and generalist cell segmentation models, capable of handling touching and overlapping cells across diverse datasets. The fact that this paper only briefly mentions it and does not include it in the benchmarks raises serious concerns about the completeness of the experimental validation. It is very fast and works for any type of image (H&E, cryo-EM, TEM, confocal, bright field, etc.). It is only cited once in this sentence: "The advancement of deep learning revolutionized automated cell segmentation ...". There are already three versions of Cellpose, the most recent of which was published recently. The version cited is from 2021. Other Strengths And Weaknesses: Strength: By proving that any cell segmentation map can be represented as a planar graph where adjacent cells can be assigned different colors using at most four labels, the authors introduce a greedy four-coloring algorithm that enforces instance separation without requiring explicit instance segmentation masks. Weakness: The approach makes instance differentiation emerge from the color constraints, but it’s not entirely clear how well this works for very crowded or irregularly shaped cells. 
Since the method has no way of incorporating expected cell size, highly overlapping populations of irregularly shaped cells generate spurious cell instances (see Figure 9 in the supplementary material). Other Comments Or Suggestions: n/a Questions For Authors: Algorithm 1 starts with a cell graph where V is the set of cells. This algorithm does not discuss how V is obtained from the foreground. Does the algorithm assume that each connected component in the foreground corresponds to a cell instance? If that is the case, why is cell coloring so important, since identifying adjacent cells would then be less relevant? Code Of Conduct: Affirmed. Overall Recommendation: 3
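For concreteness regarding the greedy encoding this review refers to (Algorithm 1 of the paper), a minimal sketch follows. The graph representation, visiting order, and function name are assumptions for illustration, not the authors' implementation; the paper's optimality analysis is what argues four colors suffice under its stated conditions:

```python
# Illustrative greedy four-coloring (hypothetical reimplementation, not the
# authors' Algorithm 1): visit cells in a fixed order and assign each the
# smallest of four color labels not already used by a colored neighbor.
# In general graph theory a greedy pass is not guaranteed to stay within
# four colors, so this sketch raises an error if it fails.

def greedy_four_coloring(adjacency):
    """adjacency: dict mapping cell id -> set of neighboring cell ids."""
    colors = {}
    for cell in sorted(adjacency):
        used = {colors[n] for n in adjacency[cell] if n in colors}
        free = [c for c in range(1, 5) if c not in used]
        if not free:
            raise ValueError(f"greedy pass exceeded four colors at cell {cell}")
        colors[cell] = free[0]  # smallest available color
    return colors

# Toy example: four mutually adjacent cells require all four colors.
adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}
coloring = greedy_four_coloring(adj)
assert all(coloring[a] != coloring[b] for a in adj for b in adj[a])
```

The resulting color map is what turns the instance-segmentation target into a four-class semantic target: adjacent instances are guaranteed distinct labels, so connected regions of one color can later be split back into instances.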
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thoughtful and thorough evaluation of our work and your recognition of its methodological and theoretical contributions. To address your concerns regarding real-world applicability, baseline completeness, and so on, **we summarize these issues into five points and provide detailed responses as follows**. For ease of reference, we kindly refer you to our response file: [Supp](https://drive.google.com/file/d/1dERoIrcnDklZhwcJDs5zXBBAFn7-I5u3/view?usp=sharing). **1. Generalization in Complex Scenarios** You are right that cell segmentation models should adapt to real-world challenges. To validate the generalization of FCIS, we conducted additional evaluations on complex cell segmentation datasets (recommended in CellPose 3.0) and natural scenes as shown in **Supp.RG-Tab.1**, which covers: **(1) Irregularly shaped cells** (MP6843); **(2) Low contrast, densely packed cells** (Yeaz-BF, Yeaz-PC), and **(3) Natural scenes with dense objects** (PerSense). The quantitative results in **Supp.RG-Tab.1** and qualitative visualizations in **Fig.2–6** show that our method performs robustly across these datasets, maintaining competitive AJI and DQ scores. For instance, on the Yeaz-BF and Yeaz-PC datasets, we achieve AJI scores of 0.834 and 0.822, respectively, highlighting the model's superior ability to handle complex scenarios. **2. Motivation and Practical Relevance of FCIS** The initial motivation behind our model may differ somewhat from that of CellPose. **As a general-purpose cell segmentation model, CellPose has demonstrated remarkable segmentation performance and proven its strong generalization capability.** However, when initially applying cell segmentation specifically to pathology, we found some new challenges that needed to be solved. - **Inference efficiency is critical**: As shown in **Supp.R3-Fig.8**, WSIs comprise tens of thousands of patches. 
A classical pathology segmentation model, HoVerNet, heavily relies on post-processing and is computationally expensive (**4.59s** per patch). In contrast, our method achieves a significantly faster inference speed (**0.29s per patch**) than HoVerNet. - **Annotation scalability**: Expert cell annotations are costly to scale, hence **semi-supervised (SS)** or **domain-adaptive (DA)** training strategies are necessary to reduce manual annotation. However, existing SS [MMT-PSM, MICCAI] and DA [PDAM, CVPR] frameworks are very complicated, as shown in **Fig.9-10**, due to their multi-branch or detection-based designs. Therefore, our method reformulates instance segmentation as a semantic task, which provides a new training paradigm and makes instance segmentation as elegant as semantic segmentation. **3. Inclusion of CellPose as a Baseline** We sincerely thank you for pointing out the omission of CellPose. In response, - We have included **CellPose** in our experiments (**Supp.R3-Tab.3**). Results show that CellPose performs well across many scenarios, significantly outperforming earlier cell segmentation models such as DCAN and NucleiSegNet. - In the revised manuscript, we will explicitly cite the **latest version of CellPose (2025, Nature Methods)** and expand the Introduction to include a more detailed discussion of its contributions. **4. Clarification on Handling Overlaps and Graph Construction** - **Overlapping Instances**: In datasets with single-instance-per-pixel annotations (e.g., most cell datasets), our four-color encoding is directly applicable. For datasets where pixels belong to multiple instances (e.g., cervical smear images), a multi-channel prediction mechanism may be more appropriate. This is a promising direction for future work. - **Node Definition in Algorithm 1**: In our method, **the cell graph is built from ground truth instance annotations**, where each connected region (cell) is assigned a unique ID. 
The vertex set \( V \) consists of these labeled cells, and adjacency is determined via edge contact. In scenarios where only binary masks are available, morphological processing (e.g., erosion) can be applied to approximate instance separation before applying our encoding scheme. **5. Literature Citation** To our knowledge, **our work is the first to apply four-color encoding to cell instance segmentation in deep learning**. While previous image processing works may use coloring heuristics, our method introduces: - A greedy coloring algorithm with theoretical guarantees; - An encoding transformation strategy that addresses non-uniqueness; - A training pipeline that converts instance segmentation into semantic prediction. Hence, we will enrich the Related Work section to include broader references on graph theory for segmentation. Once again, we thank you for your valuable feedback and for encouraging us to improve the completeness and clarity of our work. We hope these clarifications and additional results have addressed your concerns, and we welcome any further questions. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. I appreciate the additional experiments on challenging datasets, the inclusion of Cellpose in the benchmarks, and the clarification on graph construction and inference speed advantages. These additions strengthen the paper and address some of my initial concerns. Some limitations remain, particularly regarding overlapping cells and reliance on connected components or preprocessing for vertex construction, which may affect applicability in certain settings. Nonetheless, the core idea is novel, the formulation is well-motivated, and the empirical validation is now more complete. I will adjust my score to reflect these improvements. --- Reply to Comment 1.1.1: Comment: Thank you for the time and effort you dedicated to improving our work during the review process. 
We also sincerely appreciate your recognition of our responses during the rebuttal stage. Once again, thank you for your continued support and valuable contributions!
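The graph construction described in point 4 of the rebuttal above (vertices from labeled cells, adjacency via edge contact) can be sketched as follows; this is an assumed, illustrative reimplementation using 4-connectivity on a toy label map, not the authors' pipeline:

```python
# Illustrative sketch (assumption, not the authors' code) of building the
# cell adjacency graph from a ground-truth instance map: each labeled cell
# is a vertex, and two cells are adjacent when their pixels touch under
# 4-connectivity (a simple reading of "edge contact").

def build_cell_graph(label_map):
    """label_map: 2D list of ints, 0 = background, >0 = cell id."""
    h, w = len(label_map), len(label_map[0])
    adjacency = {}
    for y in range(h):
        for x in range(w):
            a = label_map[y][x]
            if a == 0:
                continue
            adjacency.setdefault(a, set())
            for dy, dx in ((0, 1), (1, 0)):  # check right and down neighbors
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    b = label_map[ny][nx]
                    if b != 0 and b != a:
                        adjacency.setdefault(b, set())
                        adjacency[a].add(b)
                        adjacency[b].add(a)
    return adjacency

# Two touching cells (ids 1 and 2) and one isolated cell (id 3).
mask = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [0, 0, 0, 0],
    [0, 3, 3, 0],
]
graph = build_cell_graph(mask)
assert graph == {1: {2}, 2: {1}, 3: set()}
```

The resulting adjacency dictionary is exactly the input a greedy four-coloring pass would consume; for binary masks without instance IDs, the rebuttal notes that morphological processing (e.g., erosion) would first be needed to approximate the labeled regions.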
Summary: This paper proposes an asymptotic training architecture for cell instance segmentation based on the four-color theorem. Unlike traditional multi-class, multi-channel segmentation methods, the proposed approach follows a step-by-step process: first distinguishing foreground from background, then classifying instances within the foreground region. To enhance training stability, the method imposes orthogonal constraints on adjacent cells to address class imbalance and introduces an encoding transformation technique that maps outputs to a minimum color representation, ensuring consistent segmentation. Claims And Evidence: None Methods And Evaluation Criteria: Sounds good Theoretical Claims: I did not find theoretical claims or analysis in this paper. Experimental Designs Or Analyses: It sounds reasonable, but I hope there is an additional comparison, as described in the second weakness in the weaknesses section. Supplementary Material: None Relation To Broader Scientific Literature: This work can have a broader effect on the medical domain, but I am not sure about its effect in the machine learning field. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: The proposed method effectively applies the four-color theorem to cell instance segmentation, introducing a novel approach to the problem. The authors provide an extensive ablation study across multiple datasets, demonstrating the robustness of their method. The methodology is straightforward and easy to follow, making it accessible to readers. Weaknesses & Areas for Improvement: 1. Introduction & Justification of Claims The paper argues that distance mapping models significantly increase computational overhead and model complexity. However, there exist methods with fewer parameters or requiring fewer FLOPs. The authors should provide a clearer comparison to substantiate this claim. 2. Comparison with Competitive Methods The authors should compare their approach with the recent methods in [1]. 
Since the proposed method based on the four-color theorem is not inherently limited to cell instance segmentation, it would be beneficial to demonstrate its effectiveness on additional segmentation datasets. If the proposed method cannot be validated on diverse datasets, this work may be more suitable for a specialized medical imaging conference such as MICCAI. 3. Presentation of Complexity and Performance Metrics Currently, the model complexity details are presented separately in Table 1, while performance metrics are distributed across Tables 2–4. Combining these into a single table would facilitate an easier comparison of performance and computational efficiency. 4. Equation Clarity and Notation Issues - Equations (5) & (6): Does the index 𝑖 represent the image index {1,…,𝑁}? - Equation (5): It appears that 𝑏 represents the background and 𝑓 represents the foreground. However, the notation for 𝑓 is reused in Equations (9) and (10), leading to potential confusion. - Equation (5): Is all 𝑌^𝑖[:,0] assigned to 𝑌^𝑏? Please clarify. - Equation (7): How does 𝑖 appear in 𝑌^{𝑏,𝑖} when there is no 𝑖 in the input? - Equations (9) & (10): There is no explanation of how the feature 𝑓 is obtained. This should be explicitly described. - Line 143: Given that 𝑖=1,…,𝑁, does this imply that 𝑖 and 𝑗 both range over {1,…,𝑁}? If so, how is 𝛽 again of size 𝑁? Additionally, the range of index 𝑗 should be explicitly defined. - Line 306: The function 𝑓 mapping 𝑃 to 𝐶 is introduced, but it shares the same notation as a previously defined feature. Consider renaming to avoid ambiguity. 5. Minor Issues & Typographical Errors - Line 321: "Where" → "where" - Line 300: "n represents" → "where 𝑛 represents" - Equation (15): "L_cls." → "L_cls" **Overall Assessment** The proposed method presents a novel adaptation of the four-color theorem for cell instance segmentation and achieves significant performance improvements. 
However, there are multiple issues related to equation clarity, notation consistency, and typographical errors. Furthermore, the lack of validation on diverse datasets and absence of comparisons with recent works such as [1] raise concerns about the generalizability and competitiveness of the approach. Given these limitations, I am inclined to take a negative stance on this submission. Reference: [1] Chen, Zhen, et al. "Un-SAM: Universal Prompt-Free Segmentation for Generalized Nuclei Images." arXiv preprint arXiv:2402.16663 (2024). Other Comments Or Suggestions: See the strength and weakness part. Questions For Authors: See the strength and weakness part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable feedback and the time dedicated to reviewing our manuscript. To better convey our motivations and address your concerns, **we have organized our response into five key areas:** (1) theoretical contributions, (2) experimental analysis, (3) comparison with recent literature, (4) presentation and minor issues, and (5) other clarifications. For ease of reference, we kindly refer you to our response file: [Supp](https://drive.google.com/file/d/1dERoIrcnDklZhwcJDs5zXBBAFn7-I5u3/view?usp=sharing). **1. Theoretical Contributions** Our paper introduces a novel instance segmentation framework based on the four-color theorem, which reformulates the traditionally complex instance segmentation problem as a four-class semantic segmentation task. This simplification leads to more efficient optimization and significantly faster inference than conventional instance-aware methods. In the submitted manuscript, we present two key theoretical contributions: - **Global Optimality Theory (Line 210)**: We demonstrate that the greedy algorithm for four-color encoding yields globally optimal results under the constraint of instance non-adjacency. - **Compatibility Theory (Line 291)**: We prove that any valid four-color encoding can be transformed into a canonical form, ensuring training stability through an encoding transformation mechanism. Meanwhile, the full theoretical derivations and proofs are also included in the supplementary material. In addition, **we believe our approach can potentially reshape traditional thinking in instance segmentation**. Drawing a parallel to the U-Net architecture, which was initially proposed for cell tracking and later widely adopted in general computer vision, we envision that our FCIS framework, backed by solid theoretical grounding and practical efficiency, can also generalize to a wide range of high-density instance segmentation tasks beyond the biomedical domain. **2. 
Experimental Validation and Broader Applicability** We fully agree with your suggestion to demonstrate the broader applicability of our method. To that end, we conducted additional experiments on **three challenging cell segmentation datasets** and one **natural scene dataset**. These datasets include various challenging scenarios (irregular shapes, low contrast, and densely packed objects), as illustrated in **Supp.RG-Fig.1**. From the quantitative results in **Supp.RG-Tab.1** and visualization results in **Supp.RG-Figs.2–6**, we can see that our FCIS consistently delivers strong and competitive performance across all scenarios. For instance, on the **Yeaz-BF** and **Yeaz-PC** datasets, we achieve AJI scores of **0.834** and **0.822**, respectively, highlighting the model's superior ability to differentiate instances. Moreover, on the **PerSense** dataset, our method outperforms the second-best approach by more than **2%**, clearly demonstrating its robustness and generalization capabilities beyond the medical imaging domain. **3. Comparison with Recent Methods (Un-SAM)** Thank you for highlighting the recent work **Un-SAM**. Following your suggestion, we **included Un-SAM as a baseline** in our supplementary experiments, as shown in **Supp.R2-Tab.2**. The results show that Un-SAM performs competitively across several datasets, significantly surpassing DCAN and DoNet by at least 2\% in the Dice metric. Besides, we have also added Un-SAM to our manuscript with proper citations acknowledging it as a significant contribution. **4. Presentation and Minor Issues** We appreciate your thorough reading regarding typographical errors. In response, **we carefully revised the manuscript** as follows: - Equation Clarity: All relevant equations have been reviewed and revised for clarity. - Index Definitions: The roles and ranges of index variables such as \(i\) and \(j\) have been clearly defined to remove ambiguity. 
- Grammar and Style: We corrected all minor grammatical and formatting issues to improve overall readability. Additionally, we have restructured the performance and complexity tables of the manuscript to enhance readability and facilitate comparisons, as can also be seen in **Supp.R2-Tab.2**. **5. Clarification on FLOPs vs. FLOPS** To clarify our use of FLOPs, we explain as follows: - **FLOPs** (floating-point operations) measure the total number of computations required by a model and are used to evaluate algorithmic complexity. - **FLOPS** (floating-point operations per second) refer to the computational throughput of the hardware. In our paper, we specifically use **FLOPs** as a metric for comparing model inference efficiency. Hence, **a lower FLOPs value indicates reduced computational cost and improved efficiency**. Once again, we sincerely appreciate your detailed feedback. We hope the revised results can thoroughly address your concerns about our FCIS. If you have any further questions, we would be more than happy to provide additional clarifications. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I carefully read the author’s rebuttal. I realized that I missed the results reported in the supplementary material. I increased my initial rating to weak accept. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your efforts in helping improve our work during the review process, as well as your recognition of our responses during the rebuttal stage. Thank you again for your kind support and valuable contribution!
Summary: This paper proposes an innovative cell instance segmentation method based on the four-color theorem, aiming to simplify the instance differentiation process. By conceptualizing cells as "countries" and tissues as "oceans," the authors introduce a four-color encoding scheme to ensure adjacent cell instances receive distinct labels, reformulating instance segmentation as a four-class semantic segmentation problem. To address training instability caused by the non-uniqueness of the four-color encoding, the authors design an asymptotic training strategy and encoding transformation method. Experimental results on multiple datasets demonstrate the method's advantages in segmentation performance and computational complexity. Claims And Evidence: The claims made in the paper are supported by clear experimental evidence, particularly from the results on various benchmark datasets. However, the issue of non-uniqueness in four-color encoding leads to instability during training, and the paper does not thoroughly explore how this issue impacts the model's performance. While the method shows great promise on several datasets, more experimental evidence is needed to prove its robustness under more complex and diverse cellular distributions. Methods And Evaluation Criteria: The proposed four-color encoding method and evaluation criteria are reasonable and appropriate for the cell instance segmentation task. Through tests on multiple datasets, the paper demonstrates its method’s advantages in both performance and computational complexity. However, the experiments lack a thorough analysis of how the non-uniqueness of the four-color encoding impacts model training in more complex cellular distributions, which should be further explored. Theoretical Claims: The application of the four-color theorem in cell instance segmentation is innovative. However, the discussion on the non-uniqueness of the four-color encoding and its impact on training stability is insufficient. 
While the asymptotic training strategy is reasonable, more in-depth theoretical explanations are needed, especially on how the strategy performs on complex datasets. Experimental Designs Or Analyses: The experimental design is solid, covering multiple datasets and using ablation studies to validate the components of the method. However, the paper does not sufficiently investigate how the non-uniqueness of the encoding affects model training in more complex or densely packed cell distributions. The authors could also further demonstrate the generalization of the method to other types of cellular datasets. Supplementary Material: The supplementary material reviews related works and provides implementation details, helping to understand the novelty of the approach. However, the discussion on the non-uniqueness issue in encoding is not sufficiently addressed, potentially raising concerns about its applicability. Relation To Broader Scientific Literature: The paper clearly compares its approach with existing methods in cell instance segmentation, including detection-based, contour-based, and distance mapping approaches. While these comparisons are adequate, there is room for further discussion on newer related works, particularly Transformer-based segmentation methods, which might perform better on complex datasets. Essential References Not Discussed: The paper overlooks some essential recent works in the field of cell instance segmentation, especially some Transformer-based methods. Discussing these approaches would provide a more comprehensive understanding of the context for the proposed method and suggest possible directions for future improvements. Other Strengths And Weaknesses: The paper’s innovation is strong, and the proposed four-color theorem-based method offers practical value, especially in reducing computational complexity. The results show that the method can maintain good segmentation performance while reducing model complexity. 
However, the non-uniqueness of the four-color encoding remains a challenge, particularly in terms of training stability, which needs to be addressed in future work. Other Comments Or Suggestions: 1) The authors could further optimize the training process, especially addressing the instability caused by the non-uniqueness of four-color encoding. More experiments on its robustness in complex cell images would be valuable. 2) In future work, it would be beneficial for the authors to apply the method to more complex datasets and real-world biomedical application scenarios to further validate its broad applicability. Questions For Authors: 1) Regarding the non-uniqueness of four-color encoding: The paper mentions that the non-uniqueness of four-color encoding may cause training instability. Could you provide more detailed experimental results on how this issue affects model convergence, especially in complex or dense cell distributions? 2) Adaptability to complex cell distributions: Given the strong performance on simpler datasets, could you discuss how the method performs on datasets with more complex or highly overlapping cell distributions? Are there any improvements or adaptations to handle such scenarios? 3) Future improvements to the method: Are there any planned future improvements to handle the limitations of four-color encoding, particularly in cases where cell boundaries are irregular or cells are densely packed? How do you plan to address such edge cases? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We sincerely appreciate your thoughtful feedback, recognition of the novelty of our proposed FCIS based on the four-color theorem, and positive feedback regarding our experimental design and theoretical contributions. Regarding the several issues you mentioned in the *Weaknesses*, *Comments*, and *Questions* sections, **we summarize these into the following four points and provide detailed responses one by one**. For ease of reference, we kindly refer you to our response file: [Supp](https://drive.google.com/file/d/1dERoIrcnDklZhwcJDs5zXBBAFn7-I5u3/view?usp=sharing).

**1. How the Non-Uniqueness of the Encoding Affects Training Convergence**

In medical images, cells often form similar spatial patterns, such as chain-like structures. However, due to the non-uniqueness of the four-color (FC) encoding, structurally similar regions may be assigned inconsistent color configurations across different training batches. As illustrated in **Supp.R1 Fig.6**, a specific arrangement of adjacent cells may be encoded as “red-green” in batch *n*, whereas in batch *n+1* it may appear as “red-blue.” This variation causes the model to receive conflicting signals during training: it may first learn that red borders green, then later that red borders blue. These inconsistencies disrupt the learning process, leading to unstable optimization and even fragmented segmentations during inference.

To resolve this, we introduce an encoding transformation mechanism that includes a "buffer region" in the prediction space. This mechanism maps all variant encodings to a canonical and stable representation, i.e., the greedy FC encoding. As shown in **Supp.R1 Fig.7(a)**, this transformation significantly improves training convergence. Additionally, we provide theoretical support in the main text that the FC encoding is compatible with any other valid encoding strategy, ensuring the stability and correctness of this transformation process.
To further support this claim, we provide visual comparisons in **Supp.R1 Fig.7(b)**. The segmentation results clearly demonstrate that the encoding transformation mechanism eliminates fragmented predictions, which confirms its effectiveness in enhancing model robustness.

**2. Extending the Method to More Complex Scenarios**

We sincerely appreciate your suggestion to evaluate our method under more complex scenarios. In response, we conduct extensive experiments on four additional datasets that cover challenging conditions: **irregular shapes, low-contrast boundaries, and densely packed regions**, along with a **natural scene dataset, PerSense**, containing many tightly clustered objects, as shown in **Supp.RG-Fig.1**. The quantitative comparisons are shown in **Supp.RG-Tab.1**, and the corresponding visualization results are presented in **Supp.RG-Figs.2–6**. From the table, our FCIS achieves the best performance in the DQ and PQ metrics across nearly all datasets. This demonstrates that our method **generalizes well to diverse and challenging scenarios**, maintaining high segmentation accuracy. In addition, the strong segmentation performance in the visualization results further supports this conclusion.

**3. Future Directions for Improvement**

As you emphasized, evaluating robustness under more complex scenarios is crucial. In response, our new experiments on four diverse datasets further demonstrate the reliability of our approach. Beyond performance, our work is also driven by a practical motivation: reducing **inference time complexity**. As discussed in our main text and **Supp.R3-Fig.8**, while distance-based methods often perform well, they require complex post-processing, making them computationally expensive. This becomes a bottleneck for applications such as whole-slide image (WSI)-level cell segmentation, where a single WSI may contain tens of thousands of patches.
Our method eliminates this bottleneck, achieving inference speeds comparable to semantic segmentation: **0.29s per patch**, versus **0.94s for CellPose** and **4.59s for HoverNet**. Therefore, in the future, we will extend the method to the pathology field to improve analysis efficiency. Besides, we also note that high-density instance segmentation is a common challenge in domains beyond biomedicine, e.g., natural scene parsing and remote sensing. However, existing methods in these fields often rely on heavy Mask R-CNN architectures. In contrast, our work offers a **lightweight and scalable alternative**, which is a direction we plan to explore.

**4. Other Minor Questions**

We thank you for highlighting recent Transformer-based segmentation methods. We will incorporate these works into the related work section of our revised manuscript. Additionally, we will explore the integration of Transformer modules into our framework to further enhance model performance.

We hope that our responses and the newly added experiments have addressed your concerns. Once again, we sincerely thank you for your thoughtful feedback.

---

Rebuttal Comment 1.1:

Comment: I have read the author response and agree to increase my score.

---

Reply to Comment 1.1.1:

Comment: Thank you for acknowledging our rebuttal and for your positive assessment of our manuscript. We sincerely appreciate your careful review of our work and the constructive comments you provided.
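The canonical "greedy FC encoding" mentioned in the rebuttal above can be sketched as ordinary greedy graph coloring over the cell-adjacency graph. This is only an illustrative sketch, not the authors' pixel-level implementation; the node names and adjacency below are hypothetical:

```python
from itertools import count


def greedy_coloring(adjacency):
    """Assign each node the smallest color not used by its already-colored
    neighbors. For planar cell-adjacency graphs the four-color theorem
    guarantees that a 4-coloring exists, although plain greedy coloring is
    not itself guaranteed to stay within four colors for every ordering."""
    colors = {}
    for node in adjacency:
        used = {colors[n] for n in adjacency[node] if n in colors}
        colors[node] = next(c for c in count() if c not in used)
    return colors


# Hypothetical adjacency of touching cells, for illustration only.
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"], "d": ["a"]}
cols = greedy_coloring(adj)
```

Because the output depends only on a fixed node ordering, mapping every variant encoding to this canonical one removes the batch-to-batch inconsistency ("red-green" vs. "red-blue") that the rebuttal identifies as the source of training instability.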
MuseControlLite: Multifunctional Music Generation with Lightweight Conditioners
Accept (poster)
Summary: MuseControlLite is an efficient adapter-based, controllable text-to-music model built on Stable Audio Open, adopting decoupled cross-attention (IP-Adapter). The key finding is that for time-varying control signals, integrating a suitable positional encoding (i.e., RoPE) into the adapter itself is crucial to achieve good results with a relatively small number of adapter parameters. Claims And Evidence: The necessity of positional embeddings for time-varying conditioning signals is intuitive, and this work constitutes a timely execution of devising a successful recipe for efficient control over several musical attributes (dynamics, melody, and rhythm), along with the claimed RoPE application to the IP-Adapter. Methods And Evaluation Criteria: The method is a combination of existing techniques (pre-trained Stable Audio Open backbone, IP-Adapter, and RoPE) which is technically correct. Evaluation metrics employ commonly found ones (FD, KL, CLAP), along with melody accuracy to measure correctness of the chromagram-based melody condition. Theoretical Claims: This paper is mostly empirical, and I find no standout theoretical claims to evaluate. Multi-attribute classifier-free guidance (Appendix A) has been investigated in existing literature. Experimental Designs Or Analyses: The work considered several music generation models with controllability (MusicGen-Melody and a reproduced Stable Audio Open ControlNet). While I acknowledge that this work aims at efficient adaptation, the objective metrics do not seem to be a clear win over Stable Audio Open ControlNet. Given that objective metrics often do not correlate with human perception, I believe conducting subjective evaluation over the attributes in this work (melody, rhythm, dynamics) through mean opinion scores (MOS) would strengthen the merits of this work. Supplementary Material: I have reviewed the demo page.
Relation To Broader Scientific Literature: A timely contribution towards controllable music generation, which is a prominent area of research after the success of recent text-conditional music generative models. Essential References Not Discussed: None. Other Strengths And Weaknesses: While enabling fine-grained control over musical attributes (dynamics, melody, and rhythm) is a welcome addition, it also potentially adds complexity for users, depending on how hard it is for non-experts to obtain and manipulate the attributes. I acknowledge that these attributes are also used in previous studies, so the choice of attributes itself is not a drawback of this work. Other Comments Or Suggestions: None. Questions For Authors: In the author's opinion, which time-varying musical attributes provide the best perceptual control? The ablation study is discussed in Sec 5.2 along with Table 4 and numerical results, but how would the differences translate to human perception of the correctness of the control? I would like to support this important direction (controllable music generation), but the experimental rigor has room for improvement to warrant acceptance. Can the authors consider subjective evaluation, or at the very least a qualitative study by adding baseline models to the demo, so that readers can form their own opinion? I believe having nuanced findings regarding the relation of controlled attributes and human perception would add significant academic merit, especially for this work. Code Of Conduct: Affirmed.
Rebuttal 1:

Rebuttal: We sincerely appreciate your supportive feedback and hope that our responses below address and alleviate your major concerns.

#### On Melody Representation:

Thanks to your valuable comment, we have identified an oversight that caused the perceived inferiority of our model’s output on the original demo website. Specifically, our initial model (denoted as `v1`) adopted the melody representation from MusicGen-Melody, which offers lower pitch resolution than that employed by Stable-Audio ControlNet. To ensure a fair comparison with Stable-Audio ControlNet, we have retrained our model and developed a new version (`v2`), which now aligns with Stable-Audio ControlNet’s melody representation. This adjustment significantly enhances the perceptual quality of our generated samples.

- v1: This version employs a one-hot 12-pitch-class chromagram as the melody condition, the same as MusicGen-Melody. However, this melody representation lacks octave specificity, causing the model to misinterpret pitch information.
- v2: This version adopts a top-4 128-pitch-class CQT to represent the melody condition, as proposed by Stable-Audio ControlNet. To ensure a fair comparison with Stable-Audio ControlNet, we modified only the conditioning input, leaving the remainder of the pipeline unchanged.

As shown in Table B below, MuseControlLite v2 outperforms v1 in FD and KL. While v2 exhibits a lower Mel Acc. than v1, we recognize that this metric does not fully capture perceptual melody alignment. Thanks to the review comments, we conducted a listening test, detailed below, which confirms that v2 surpasses v1 in melody control performance.
#### Table B

|Model|Train Params|Total Params|FD|KL|CLAP|Mel Acc.|
|-----|-----------|-----------|--|--|----|--------|
|MusicGen-stereo-melody-large|3.3B|3.3B|187.0|0.47|0.36|43.7%|
|Stable-audio ControlNet|572M|1.9B|97.7|0.27|0.40|56.6%|
|v1|**85M**|**1.4B**|135.5|0.38|**0.40**|**70.9%**|
|v2|**85M**|**1.4B**|**82.2**|**0.25**|0.38|61.4%|

#### On Missing Listening Test:

Initially, we excluded a subjective evaluation due to the unavailability of the code and weights for Stable-Audio ControlNet. However, in response to multiple reviewers’ requests, we have conducted a listening test using examples from the Stable-Audio ControlNet project website (https://stable-audio-control.github.io/web/), even though these samples may have been cherry-picked. For this evaluation, we recruited 34 participants and utilized the same text and melody conditions as those demonstrated on their website. We generated music using both our model and MusicGen-Melody, then compared these outputs with the samples retrieved from their demo page.

As demonstrated in the following Table C (mean opinion scores ∈ [1, 5]), our v2 model compares favorably with Stable-Audio ControlNet, despite requiring only about 1/6 of the trainable parameters. Moreover, we note that we used only the MTG-Jamendo dataset for training, while Stable-Audio ControlNet used four training datasets (MTG-Jamendo, FMA, MTT, Wikimute).

#### Table C

|Model|Text adherence|Melody similarity|Overall preference|
|-----|--------------|-----------------|------------------|
|MusicGen-stereo-melody-large|3.12±0.25|2.67±0.23|3.06±0.23|
|Stable-audio ControlNet|**3.69**±0.28|4.17±0.23|**3.65**±0.25|
|Ours v1|3.34±0.27|3.62±0.27|2.93±0.25|
|Ours v2|3.58±0.20|**4.21**±0.20|3.63±0.22|

We provide samples in the “Updated Melody-conditioned Comparison” section of the demo page to showcase the audio generated with the new melody condition.
These samples are the same ones used in our subjective evaluation, with no cherry-picking at all.

#### On Usability:

We agree that creating complex melody or rhythm conditions can sometimes be challenging. Our solution is to provide a reference audio sample that contains the desired condition. For example, a user could record themselves humming or clapping, and we can post-process that audio to extract both the melody and rhythm conditions. This approach should also work for dynamics; alternatively, users can simply draw a dynamics curve, which our model will accept.

#### On Questions for Authors:

Regarding which time-varying musical attributes provide the best perceptual control:

- *Melody condition*: Ours (v2) offers the best perceptual control in our opinion, since it provides more comprehensive conditional information. The misalignment between the melody conditions in v1 and v2 is due to the v1 version being limited to only one octave of information.
- *Rhythm and dynamics*: On the other hand, we consider the objective metrics for rhythm and dynamics to be well aligned with human perception.

---

Rebuttal Comment 1.1:

Comment: Thank you for the rebuttal. I think the added experiments along with the subjective evaluation make this work more convincing. I also appreciate the v2 addition that fixes the melody representation with further improvements. With that said, I would also like to point out that v2 has been a late addition, which makes consistent evaluation of the work a bit difficult for the reviewers. Having seen the matching/improved results now with v2, can the authors discuss a bit more the motivation of being "lite" in attaching the adapters? While it is obvious that a smaller adapter would be preferred in general, readers may also wonder about the scalability of the method. For example, can it (especially now with v2) beat the baseline further if the user scales the size to a similar regime (e.g. 500M)?
Or is the improvement capped at the presented size (85M)? I acknowledge the timeframe is limited for preparing the full result, so sharing the authors' preliminary observations would still be valuable at this point.

---

Reply to Comment 1.1.1:

Comment: We sincerely thank the reviewer for the detailed feedback. Our motivation for making the model "lite" is to increase accessibility for users with limited computational resources for training and inference. Our approach is significantly more lightweight than ControlNet, while still offering similar fine-grained control capabilities. Although we have already integrated decoupled cross-attention layers into every transformer block, there remains room to increase the number of trainable parameters by employing deeper neural architectures, rather than relying solely on single linear layers for the key and value projections.

Due to time constraints, we were unable to retrain the model with the scaled-up adapters before the April 8 deadline. However, we did evaluate the inference speed of our model:

- Original Stable Audio: 4.92 iterations/second
- Ours with 85M trainable parameters: 4.88 iterations/second
- Ours with 500M trainable parameters: 3.95 iterations/second

In this test, we naively scaled up the key and value projections in the decoupled cross-attention layers using multiple linear layers and activation functions. All models were evaluated using fp32 precision during inference. We appreciate the reviewer’s suggestion and see this as a promising direction for future work. We plan to explore it further prior to the open-source release, and we will include training results with the scaled adapters in the camera-ready version.
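The mean opinion scores with ± intervals reported in Table C above can be computed along these lines. This is a sketch that assumes the ± values are normal-approximation 95% confidence intervals of the mean (the exact convention is not stated in the rebuttal), and the listener ratings below are hypothetical:

```python
import math
import statistics


def mos_with_ci(ratings, z=1.96):
    """Mean opinion score and a normal-approximation 95% CI half-width,
    i.e. z * s / sqrt(n) with the sample standard deviation s."""
    mean = statistics.mean(ratings)
    half_width = z * statistics.stdev(ratings) / math.sqrt(len(ratings))
    return mean, half_width


# Hypothetical 1-5 ratings from eight listeners, for illustration only.
mos, half = mos_with_ci([4, 3, 5, 4, 4, 3, 5, 4])
```

With 34 participants, as in the listening test above, the half-width shrinks roughly with the square root of the panel size, which is why the reported intervals sit in the 0.2-0.3 range.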
Summary: Within the domain of raw audio music generation, motivated by the need for (1) lighter alternatives for fine-tuning, together with the need for (2) better control accuracy (i.e. for the user), the authors propose MuseControlLite, a system for time-varying condition control for music generation. MuseControlLite reduces parameter count relative to, e.g., Music ControlNet (Wu et al 2024), by using decoupled cross-attention layers (Ye et al 2023) in a diffusion transformer, and then to get this to work well they incorporate a modification of rotary positional embedding (Su et al 2024). The resulting model supports joint attribute and audio control, and they train a separate set of adapters for audio conditioning to allow inpainting and outpainting. Results are presented on the public Song Describer benchmark (Manco et al 2023) and they report a 14% improvement in melody accuracy. Example generated audio files are provided on a demonstration website. Claims And Evidence: Yes, the claims are overall well supported. One problematic claim is the somewhat implicit claim that the model handles multiple conditioning signals well (e.g. “These results suggest that the model effectively learns to respond to multiple controls simultaneously, despite the added complexity.” in Section 5.2; and “a lightweight training method that [..] enables precise control of music generation under specified musical attribute conditions” in Section 6, etc). However, listening to the demonstration audio files on the provided website (https://musecontrollite.github.io/web/) indicates that while some certainly do sound good (impressive!), there are quite a few that do not sound good, and/or do not effectively achieve what seems to be the intention. To list a few examples:

* In “Dynamics Control”, the “recording of a melodic piano solo” neither sounds like a piano solo (it has other instruments) nor is it melodic; it’s almost purely textural.
* Also in “Dynamics Control”, the jazz band has the right instrumentation, but is musically incoherent (and I appreciate free jazz, but that is not the issue in this case :)
* In “Melody Control”, the jazz band version of the Chopin Eb nocturne (Op 9 No 2) does not reflect the melody other than a few seconds here and there; even an experienced musician who knows that Nocturne would likely be unable to guess that that is where the melody is coming from. (See also my comment in the next section on metrics)
* In “Rhythm Control”, for the Mozart (“Eine Kleine…”) / cello quartet combination, the simple solution would have been to simply return the same basic piece but with a more legato sound, as requested in the text—it is practically already almost in a harmonized string quartet arrangement, whereas the generated example doesn’t sound “harmonized” as requested, and sounds more just like repeated notes (which is fine but not really addressing the text prompt).
* These are just a few examples; there are others.

I still think that this is an impressive system, and the quality of the audio output, overall, is good! So I believe that the above issue could easily be addressed by (a) distinguishing between quantitative results (which seem to be relatively good) and perceptual quality (which seems to be variable), (b) adjusting the language/tone in a few places to be more aligned with/reflective of the actual audio outputs, and (c) providing some qualitative discussion about all this, with pointers to some of the examples.

Methods And Evaluation Criteria: Relative to the conventions within the music generation community, yes, the evaluation criteria make sense. The metrics include Melody Accuracy, Dynamics Correlation (i.e. the correlation between the dynamics curve of the generated audio and that of the ground truth), Rhythm F1 (a fairly standard, if somewhat problematic, way to evaluate beat alignment), and self-similarity-matrix-based Novelty Value (Muller 2015).
All of these are reasonable choices. However, my comments in the “claims/evidence” section above point to an example (the Chopin Nocturne (Op9 No2) / Jazz band in “Melody Control”) where the pitch chroma might be somewhat matched (i.e. perhaps decent “melodic accuracy”), but perceptually speaking, the generated melody is effectively unrecognizable. Evaluating generative models is difficult and there are no great solutions at the moment. So, while these metrics are reasonable in context of prior work and available tools, they are very limited. These limitations—both of the system and also of the evaluation metrics—simply need to be acknowledged. (It is possible that there is something I am fundamentally misunderstanding about the control process, such that my expectations are incorrect; if so I would be glad to be corrected. I think this is slightly unlikely, because there are other examples that do sound good and “as I expected”.) In terms of baseline comparisons, there appears to be no other system that accepts the combination of controls that MuseControlLite accepts, and therefore there would be no direct comparison in any case, and some other related systems are infeasible for comparisons for yet other reasons, so in that context, the baselines chosen (i.e. MusicGen, Stable Audio Open ControlNet, and a simple baseline implemented by the authors using Stable Audio Open) also do make sense. Critically, the authors also provide a fairly extensive set of examples to listen to. (It would be nice if they were marked as “cherry picked” vs “random”, e.g. as done for the ControlNet paper). This is a valuable and important part of a paper on generative methods for audio. Theoretical Claims: N/A Experimental Designs Or Analyses: I read the experimental descriptions and analyses and overall did not notice any issues. I did notice that perhaps some of the audio examples that “didn’t sound good” were cases where it was just a particularly challenging task, e.g. 
the Chopin Nocturne / jazz band is not a simple request. Again, it is not clear how to analyze/evaluate/quantify this, but it’s possible that if one were able to explore some of this systematically, then ultimately it might work “in favour” of the proposed system, e.g. the jingle bells-xylophone pairing (in the “Rhythm Control” section) is inherently easier and indeed works reasonably well; perhaps the system does better on the easier ones, which would be entirely fair. Supplementary Material: I did not read the appendix carefully (providing details on the separated guidance scale formulation). I did listen to many/most of the accompanying demonstration audio clips on the provided website. Relation To Broader Scientific Literature: I believe the authors do a good job of relating the paper’s contributions to the broader literature, e.g. the need for more/better/local/time-varying control, and the relationship to a variety of other audio music generation models, as well as to a few relevant diffusion models more generally. Essential References Not Discussed: N/A Other Strengths And Weaknesses: This paper demonstrates an impressive level of engineering combined with some creative innovation and insight in how (and why) to put together many complicated moving parts. In particular, the use of rotary positional embeddings for getting the decoupled cross-attention to work well was very nice. Other Comments Or Suggestions: N/A Questions For Authors: If a misunderstanding is leading to my expectations being unreasonable for the quality of audio samples (see my comment above in Section on "Claims & Evidence'), then I would be glad to try to identify the source of the misunderstanding so that it may be corrected. Otherwise, I would like to see a discussion about the generated audio, as I explained earlier. This is an impressive system, and my score is a placeholder; if the above issue can be resolved, I will raise it. [EDIT APRIL 7 -- Raised Score] Code Of Conduct: Affirmed. 
Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for the insightful and valuable feedback, which has inspired us to make several significant updates to our work, as detailed below. We hope the reviewer will agree that these revisions greatly enhance the scientific quality of the paper.

#### On Melody Representation:

Thanks to your valuable comment, we have identified an oversight that caused the perceived inferiority of our model’s output on the original demo website. Specifically, our initial model (denoted as `v1`) adopted the melody representation from MusicGen-Melody, which offers lower pitch resolution than that employed by Stable-Audio ControlNet. To ensure a fair comparison with Stable-Audio ControlNet, we have retrained our model and developed a new version (`v2`), which now aligns with Stable-Audio ControlNet’s melody representation. This adjustment significantly enhances the perceptual quality of our generated samples.

- v1: This version employs a one-hot 12-pitch-class chromagram as the melody condition, the same as MusicGen-Melody. However, this melody representation lacks octave specificity, causing the model to misinterpret pitch information.
- v2: This version adopts a top-4 128-pitch-class CQT to represent the melody condition, as proposed by Stable-Audio ControlNet. To ensure a fair comparison with Stable-Audio ControlNet, we modified only the conditioning input, leaving the remainder of the pipeline unchanged.

As shown in Table B below, MuseControlLite v2 outperforms v1 in FD and KL. While v2 exhibits a lower Mel Acc. than v1, we recognize that this metric does not fully capture perceptual melody alignment. Thanks to the review comments, we conducted a listening test, detailed below, which confirms that v2 surpasses v1 in melody control performance.
#### Table B

|Model|Train Params|Total Params|FD|KL|CLAP|Mel Acc.|
|-----|-----------|-----------|--|--|----|--------|
|MusicGen-stereo-melody-large|3.3B|3.3B|187.0|0.47|0.36|43.7%|
|Stable-audio ControlNet|572M|1.9B|97.7|0.27|0.40|56.6%|
|v1|**85M**|**1.4B**|135.5|0.38|**0.40**|**70.9%**|
|v2|**85M**|**1.4B**|**82.2**|**0.25**|0.38|61.4%|

#### On Missing Listening Test:

Initially, we excluded a subjective evaluation due to the unavailability of the code and weights for Stable-Audio ControlNet. However, in response to multiple reviewers’ requests, we have conducted a listening test using examples from the Stable-Audio ControlNet project website (https://stable-audio-control.github.io/web/), even though these samples may have been cherry-picked. For this evaluation, we recruited 34 participants and utilized the same text and melody conditions as those demonstrated on their website. We generated music using both our model and MusicGen-Melody, then compared these outputs with the samples retrieved from their demo page.

As demonstrated in the following Table C (mean opinion scores ∈ [1, 5]), our v2 model compares favorably with Stable-Audio ControlNet, despite requiring only about 1/6 of the trainable parameters. Moreover, we note that we used only the MTG-Jamendo dataset for training, while Stable-Audio ControlNet used four training datasets (MTG-Jamendo, FMA, MTT, Wikimute).

#### Table C

|Model|Text adherence|Melody similarity|Overall preference|
|-----|--------------|-----------------|------------------|
|MusicGen-stereo-melody-large|3.12±0.25|2.67±0.23|3.06±0.23|
|Stable-audio ControlNet|**3.69**±0.28|4.17±0.23|**3.65**±0.25|
|Ours v1|3.34±0.27|3.62±0.27|2.93±0.25|
|Ours v2|3.58±0.20|**4.21**±0.20|3.63±0.22|

We provide samples in the “Updated Melody-conditioned Comparison” section of the demo page to showcase the audio generated with the new melody condition.
These samples are the same ones used in our subjective evaluation, with no cherry-picking at all.

#### On Variable Quality of the Examples on the Demo Page:

The audio samples on our initial demo page were selected at random, resulting in variable quality. Regarding the dynamics and rhythm control samples specifically noted by the reviewer, we wish to clarify that the observed text adherence challenges stem primarily from the characteristics of the training data utilized during both the pre-training and fine-tuning phases.

- Fine-Tuning Dataset: The Jamendo dataset, employed for MuseControlLite, includes limited representation of classical instruments, which constrains the model’s ability to capture such timbres effectively.
- Pretraining Dataset Text Descriptions: The pretrained Stable Audio model was trained on data with limited musical specificity. Consequently, it struggles to interpret nuanced musical terms such as "melodic," "legato," "harmonized," and other concepts rooted in jazz theory.

We have updated our demo website in the “Highlighted Audio” section to better showcase the inherent limits of the pretrained model (Stable Audio) in terms of text adherence.

---

Rebuttal Comment 1.1:

Comment: **I thank the authors for their detailed rebuttals, and for the update to (v2) and associated explanations, and experiments!** (Indeed it sounded like chroma, but not necessarily octaves, were previously being matched, so this all made sense.) v2 clearly matches melodies better than v1! Nice! I do have a few notes and questions:

* in the "Updated Melody-conditioned Comparison", example 3 (starts with a low solo plucked banjo-like sound with a bluegrassy band coming in at ~0:09.5), (v2) is better than (v1), but it still completely fails to get the "piano" sound. I understand the issue with musical-text descriptions that the authors mention in their rebuttals, but it's interesting that "solo piano" is so hard.
It's also interesting that all the models break down on this one in one way or another. This is not a critical issue, I just think it's good to highlight where things don't currently work well.

* in the same section, example 2 (piano solo in minor key), one prompt says "tabla used for percussion in the middle". I hear almost no percussion at all (or am I missing something?), and definitely no clear tabla anywhere. For the same example, another prompt says "string ensemble", but the percussive onset of the piano is still very clearly there: the model knows what a "string ensemble" is (the sustain sounds like strings), but it has issues removing the piano attack. Again, all this is not surprising, but I think it's important to highlight where things don't work well.
* For the same section, example 5 (starts with a solo viola-ish sound with more layers added after a couple of seconds), one prompt says "cheerful piano performance", and the piano sound is kind of there, but the string sound is also still kind of there, and at around 0:13-0:17 the model struggles with the string dynamics in a way that reminds me a bit of the struggles it has with modifying the beethoven symphony in the "Melody, Rhythm & Dynamics Control" section. Again, it's OK but just acknowledge where things don't work well.
* I could list many more examples that I hear in the audio, but at some point I believe it is the authors' responsibility and role to acknowledge and discuss such hard-to-measure but perceptually salient observations. I really appreciate that the authors provided extensive demo materials to allow the readers this kind of observation in the first place!

My original review comment still holds: *"I still think that this is an impressive system, and the quality of the audio output, overall, is good!
So I believe that the above issue could easily be addressed by (a) distinguishing between quantitative results (which seem to be relatively good) and perceptual quality (which seems to be variable [though now Improved with v2!!]), and (b) adjusting the language/tone in a few places to be more aligned with/reflective of the actual audio outputs, and (c) providing some qualitative discussion about all this, with pointers to some of the examples."* (unless the authors addressed this somewhere and I missed it? if so, I apologize, and please point me to it.)

* To double check: (v2)-generated samples were added to the demo page only in that first section where they are explicitly labelled as such, and previous samples were created with (v1) and left as is, is that correct? I ask because I am still curious whether using (v2) would help with the chopin nocturne example (Ex 3 in "Melody Control") which I think is still the previous version? and/or any of the other examples that I listed in my review? Or is the issue something else? (Or is there some reason I missed that v2 cannot be used in this context?)

I am still inclined to raise my score, because I think this is good work; at this point I would simply need to see these framing issues addressed.

---

Reply to Comment 1.1.1:

Comment: We sincerely thank the reviewer for the detailed feedback. In the camera-ready version, we will include a qualitative discussion and elaborate on the common failure modes of our model, as outlined below:

### Text Adherence and Training Data Alignment

- We have found that using text prompts closely aligned with the training data (including both the Stable-Audio pretraining and our fine-tuning data) improves the text adherence of the generated music.
- However, please note that the samples in the **Updated Melody-conditioned Comparison** section on the demo page originate from the Stable-Audio ControlNet demo; their text prompts may have been generated with an LLM and can therefore differ in style from our pretraining and fine-tuning text.

### Instrumental Residuals in Melody Conditioning

- We observed that, although not frequently, certain instruments exhibit distinct patterns in the melody condition that the model recognizes. As a result, even if an instrument is not explicitly mentioned, the model may still render its sound.
- For example:
  - **Example 3 (Pop solo piano…)**: The string contour was not completely eliminated.
  - **Example 2 (A string ensemble…)**: The piano attack remains.
- These observations suggest that the melody condition may sometimes include timbre information.

### Potential Remedies

We believe that this issue could be mitigated by:
1. **Increasing the guidance for text**, or decreasing the guidance for musical attributes.
2. **Reducing the percentage of dropped text conditions** during fine-tuning. Originally, we dropped the text condition **50%** of the time; increasing the fraction of training steps that see the text condition should strengthen text adherence.

### Hard-to-Measure but Perceptually Salient Observations

We have noticed that the instrumentation in the generated audio does not always precisely align with the text prompt. It appears that the CLAP score may have limitations in distinguishing between multiple instruments. Moreover, if the melody condition retains timbre information from the reference audio, the final output can sometimes reflect a fusion of timbres from both the text prompt and the reference audio.

### Clarifications on demo page examples

- All melody-conditioned samples were generated using version (v1) when not explicitly labeled.
- The Chopin nocturne example has been updated in the *Highlighted Audio* section.

---

We hope these responses address the reviewer’s comments effectively.
Summary: The paper introduces MuseControlLite, a lightweight fine-tuning mechanism for text-to-music generation that extends previous control work. Its main contributions include a new adapter design using decoupled cross-attention with positional embeddings for time-varying musical attributes. The model claims to control melody, rhythm, and dynamics—and supports both inpainting and outpainting tasks—with significantly fewer trainable parameters compared to some existing methods.

Claims And Evidence: I personally like this work, which shows great results, but I need to point out that the claim needs more evidence and a wider, fairer comparison. While the paper presents experimental results that show improvements in control accuracy—particularly in melody control—the evidence for some claims is not entirely convincing. For instance, the parameter efficiency claim is undermined by an unfair comparison: one baseline (coco-mulla) reportedly uses only 4% of parameters (about 60M), yet is dismissed by the authors as unsuitable for comparison. This raises questions about whether the reported gains in control precision are solely attributable to the proposed design.

Methods And Evaluation Criteria: The proposed method of integrating positional embeddings into decoupled cross-attention layers appears reasonable for managing time-varying conditions in music generation. The evaluation criteria (including melody accuracy, rhythm F1 score, and audio realism metrics) are standard and appropriate for this domain. However, the method largely extends existing approaches rather than introducing fundamentally new ideas, which somewhat limits its novelty.

Theoretical Claims: The paper does not provide deep theoretical proofs or rigorous analyses to substantiate its claims. While the discussion on the importance of positional embeddings is interesting, the theoretical foundation remains somewhat informal.
No detailed proof is provided for the improvements claimed, so the correctness of any theoretical claims is not thoroughly validated.

Experimental Designs Or Analyses: The experiments are comprehensive, addressing multiple control aspects (melody, rhythm, dynamics) and tasks (inpainting and outpainting). However, the experimental design could benefit from a more balanced comparison against baselines—especially regarding the trainable parameter counts. Additionally, the paper does not explicitly discuss its limitations, which makes it difficult to assess the potential trade-offs and areas where the method may fall short.

Supplementary Material: N/A

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: Overall, while the work is interesting and provides a useful baseline—especially if the code is open-sourced—the contribution in terms of theoretical innovation is limited. The paper might be better suited for an applications-focused venue (e.g., ISMIR) rather than a flagship conference like ICML.

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: We sincerely appreciate your supportive feedback and hope that our responses below address and alleviate your major concerns.

#### On the Need of More Empirical Evidence:

As elaborated in our response to Reviewer c3ce, we conducted a user study to bolster the empirical rigor of our paper. Initially, we excluded a subjective evaluation due to the unavailability of the code and weights for the key baseline, Stable-Audio ControlNet. However, in response to multiple reviewers’ requests, we have conducted a listening test by incorporating examples from the Stable-Audio ControlNet project website (https://stable-audio-control.github.io/web/), despite the possibility that these samples may have been selectively chosen by the authors. For this evaluation, we recruited 34 participants and utilized the same text and melody conditions as those demonstrated on their website. We generated music using both our model and MusicGen-Melody, then compared these outputs with the samples retrieved from their demo page. We put the detailed results of this listening test in our response to Reviewer 5zZP as Table C. The results demonstrate that, when equipped with the same melody representation as Stable-Audio ControlNet, our model performs favorably, despite requiring significantly fewer trainable parameters. We believe this strengthens the robustness of our findings.

#### On Missing Comparison with Coco-Mulla:

Coco Mulla employs parameter-efficient fine-tuning (PEFT) to enhance the pretrained MusicGen model, enabling control over chord, rhythm, and piano roll features. It would have served as a great baseline for our study, facilitating comparisons with both larger adapters (e.g., ControlNet) and models with varying numbers of trainable parameters (e.g., Coco Mulla). However, a direct comparison is confounded by differences in rhythm representation between our approach and Coco Mulla’s, as well as by the distinct pretrained backbone models utilized.
Additionally, Coco Mulla’s prefix-based conditioning strategy is optimized for language models (LMs) rather than diffusion models, rendering it incompatible with the diffusion-based architecture of Stable Audio Open. Consequently, we have excluded this empirical comparison from our paper. Nevertheless, we concur with the reviewer’s observation that Coco Mulla’s parameter-efficient fine-tuning approach for text-to-music generation merits recognition. We will revise the final version of the paper to acknowledge this contribution appropriately.

#### On Missing Discussion on Limitations:

We concur that such discussions are essential for evaluating potential trade-offs and identifying limitations in our approach. Accordingly, we intend to incorporate the following discussions on the weaknesses of our model into the final version of the paper:

- The fine-tuning approach, which employs decoupled cross-attention along with rotary positional embedding and zero convolution, becomes unnecessary when training from scratch is feasible.
- The generated distribution of our model is largely influenced by the training dataset used for the pretrained backbone.
- Using multiple classifier-free guidance requires passing multiple batches during inference, which slightly reduces inference speed.

---

Rebuttal Comment 1.1:

Comment: Thank you for your response. While I appreciate your efforts to address my concerns, I must admit that I remain unconvinced on several key points. Regarding the subjective evaluation, acknowledging the limitations of using potentially biased samples from the Stable-Audio ControlNet website does not fully alleviate my concern about the lack of consistent improvement shown by your model. The fact that your model doesn't consistently outperform baselines across all metrics raises questions about the actual effectiveness of your proposed method.
Simply stating the inherent challenges of subjective evaluation doesn't negate the data presented, which suggests the improvements are not as clear-cut as initially claimed.

On the matter of comparing with Coco-Mulla/AIR-Gen, while I understand the technical differences you highlight, your initial dismissal of this comparison felt inadequate. The argument about architectural incompatibility with this model feels somewhat weak, especially since you did compare with the original MusicGen, which also has its own architectural nuances. Coco-Mulla (or AIR-Gen) represents a readily accessible and relevant baseline for parameter-efficient conditional music generation, and I am okay if you can find some alternative baselines for a more thorough comparison. While a qualitative discussion is better than nothing, it doesn't provide the necessary quantitative grounding to truly assess the parameter efficiency claim in a fair context. Your work shares a similar goal, and therefore, a more direct comparison, even with its challenges, would have been significantly more informative.

Therefore, while I acknowledge your willingness to include a more detailed discussion in the revised paper, my core concerns about the lack of robust empirical evidence and fair comparisons, particularly concerning parameter efficiency, are not fully addressed by this rebuttal. My initial recommendation of weak reject still stands.

---

Reply to Comment 1.1.1:

Comment: Thank you for your response.

### Regarding the subjective evaluation:

- The **subjective evaluation** for Stable-audio was conducted using samples from their official demo website. These may be **cherry-picked**, which raises concerns about their generalizability. In contrast, our evaluations, as well as those from other models, are conducted **in the wild**, ensuring a fairer and more realistic performance assessment.

<!-- - The Melody Accuracy (Mel Acc.)
metric was computed using a one-hot 12-pitch-class chromagram, with the evaluation code provided by the Stable-audio ControlNet authors. However, this metric only accounts for pitch classes within a single octave, which is misaligned with human auditory perception. As a result, although Ours (v1) achieves higher objective melody accuracy, it appears inferior in subjective perception, highlighting a gap between perceptual and numerical evaluation. -->

- Our model is trained using significantly fewer resources:
  - We use **<1/6 of the trainable parameters** compared to Stable-audio ControlNet.
  - We rely solely on the MTG-Jamendo dataset, whereas Stable-audio ControlNet uses a combination of MTG-Jamendo, FMA, MTT, and Wikimute.

Despite these constraints, our model achieves comparable performance in both subjective and objective metrics. This demonstrates that our method is highly **effective**, especially considering the resource disadvantage. We do not claim to outperform all baselines on every metric, but our results clearly indicate strong performance under fair and realistic conditions.

### Regarding Coco-Mulla

**Coco-mulla** differs from our method, **MuseControlLite**, in several key aspects:

#### **Conditioning Method**

- Coco-mulla uses a **quantized codec representation from drum tracks** (separated by Demucs), which limits its applicability to audio that doesn't include drums.
- In contrast, MuseControlLite extracts **rhythmic features directly from the audio**, making it more flexible and broadly applicable.

#### **Architecture**

- We employ **decoupled cross-attention** across all transformer blocks in our adaptation of Stable-audio, which amounts to 6% of the pretrained backbone. It is possible to employ decoupled cross-attention in only a few blocks, as done in Coco-mulla, which would achieve fewer trainable parameters, but due to time constraints, we were not able to explore this configuration in our current experiments.
#### **Parameter & Inference Efficiency**

- Although Coco-mulla uses less than 4% of trainable parameters, its inference speed is significantly slower due to its auto-regressive architecture and **prefix conditioning method**. Using the official implementation and MusicGen-Large as the backbone, we benchmarked 20-second audio generation on a single RTX 3090:
  - **MusicGen-Large**: 53.95 seconds
  - **Coco-mulla**: 101.03 seconds

  ➝ *Coco-mulla is about **87% slower** than its own backbone.*

  In comparison, **MuseControlLite introduces minimal slowdown**:
  - **Original Stable-audio**: 4.92 steps/sec
  - **MuseControlLite (85M trainable params)**: 4.88 steps/sec

  ➝ *Only **1% slower**.*

### Additional Evaluation on Song Describer Clips

To ensure a fair evaluation, we manually selected **30 clips** from the **Song Describer dataset**, ensuring each clip contained drums (as coco-mulla requires). Evaluation results are shown below:

| Model | Train Params | Total Params | FD | KL | CLAP | Rhythm F1 |
|------------------|-------------|-------------|---------|---------|-------|------------|
| coco-mulla | *132M | 3.3B | 217.94 | **0.47**| 0.36 | 0.63 |
| MuseControlLite | 85M | **1.4B** | **216.27** | 0.48 | **0.39** | **0.87** |

*Note: While the reviewer mentioned that coco-mulla only uses 60M trainable parameters, based on their official code and model specs (including a hidden size of 2048), we estimate the correct figure to be around 132M, consistent with MusicGen-Large.

FD, KL, and CLAP are computed using the Stable-audio evaluation metrics. Rhythm F1 is computed using madmom, consistent with both Coco-mulla and our evaluation. Our model outperforms Coco-mulla on FD, CLAP, and Rhythm F1, while achieving comparable KL divergence, showcasing strong and balanced performance. We believe our work demonstrates robustness, efficiency, and flexibility under realistic constraints.
We have been mindful to conduct comprehensive and fair experiments, and we appreciate the opportunity to present our findings.
Summary: The paper introduces MuseControlLite, a parameter-efficient methodology for aligning a pre-trained, DiT-based text-to-music model to both symbolic and audio controls. The authors demonstrate the capability of MuseControlLite to extend the controllability of a pre-trained StableAudio-Open model from text prompts to conditioning on melodies, dynamics, rhythm, and audio excerpts for inpainting and outpainting, with light-weight zero-convolution additive adapters operating on decoupled cross-attention layers.

Claims And Evidence: The claims made in the submission are mostly clear and supported by evidence. However, I have the following concerns:

1. The authors present the incorporation of both symbolic and audio controls in music generation as a core contribution of the paper. This claim isn’t aligned with prior work. JASCO, e.g., a prior work cited by the authors, is a trainable text-to-music model that combines text, symbolic, and audio controls.
2. The experimental section lacks a subjective evaluation measuring the performance of MuseControlLite in terms of control adherence, and more importantly, in terms of the perceptual quality compared to the baselines. Listening to samples from the demo page, it is apparent that MuseControlLite is likely inferior in terms of audio quality and musicality compared to the baselines. This might be misaligned with the trend implied by the objective evaluation, in which MuseControlLite is on par with the baselines in terms of quality and musicality.
3. In Section 5.2 - “Ablation Study for Musical Attribute Conditions” - the authors attribute the reduction in FD, KL, and CLAP obtained by adding musical controls as evidence for improved audio quality and semantic alignment with the reference dataset. While this argument is partially sound, the authors didn’t address the uncertainty of a potential information leakage when conditioning on musical attributes that are a function of the ground-truth audio.
Specifically, the significant reduction in FD obtained by introducing melody conditioning might instead be attributed to additional information on the GT samples rather than quality improvement. To make the argument more valid, the authors should experiment with style-transferred samples, e.g., the original melody with an out-of-genre text prompt or vice versa, and validate the trends with a human study.

Methods And Evaluation Criteria: In general, the methodology and the evaluation criteria make sense for the problem and for supporting the proposed approach. However, the lack of subjective evaluation significantly reduces my confidence in the effectiveness of the proposed technique.

Theoretical Claims: I briefly checked the derivation of the multi-source classifier-free guidance in Appendix A, and I didn’t find any issues.

Experimental Designs Or Analyses: I checked the design of the following experiments:

1. Baseline comparison in terms of quality and melody adherence.
2. Ablation on the influence of the different musical attributes on MuseControlLite performance.
3. Comparison to baselines in terms of inpainting and outpainting.

I didn’t find major issues beyond what was previously mentioned in the “Claims and Evidence” section.

Supplementary Material: I thoroughly reviewed the audio samples provided by the authors on the paper demo page.

Relation To Broader Scientific Literature: The paper expands prior work on temporally controlled music generation by introducing an adapter-based method that is significantly more efficient in terms of having a light-weight set of trained parameters, compared to prior ControlNet-based approaches such as MusicControlNet. As opposed to MusicControlNet’s control adaptation technique, which requires a set of new weights on the order of magnitude of the original model, here the adapter consists of a parameter set that is an order of magnitude lighter than the baseline StableAudio-Open model.
In terms of the set of musical controls, the paper provides a set of controls that is more diverse than MusicControlNet’s, but very similar to the control set proposed by DITTO.

Essential References Not Discussed: n/a

Other Strengths And Weaknesses:

Weaknesses:
* Fixed mask positioning, at least on the demo page, limits the applicability of the system.
* Qualitative inpainting results expose a significant degradation of the audio quality in the in-painted areas. The transition between original and in-painted areas sounds unsmooth.

Strengths:
* The ablation on the effect of RoPE introduction to decoupled cross-attention adapters demonstrates a clear and convincing trend.
* Planned open-sourcing of code and model checkpoints.

Other Comments Or Suggestions:
* Typo in Figure 1: “farword” instead of “forward”.
* Line 257 - “Since the segments controlled by c_audio are more rigid, we propose to use musical attribute conditions to flexibly control the masked audio segments.” - this is unclear.
* Line 295 - “as we found that latent space length has only a minor influence on audio quality.” - Please add an explanation in addition to the empirical observations.
* Baselines - Section 4.4 - “or not generating a relatively short audio” should be “or generating a relatively short audio”.

Questions For Authors: n/a

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for the insightful and valuable feedback. In response to your comments, we have planned several updates to our submission, as outlined below.

#### On Claims & Evidence:

*Firstly*, you are right that JASCO has effectively integrated symbolic and audio controls. We will revise the paper to reflect this accurately. For example, in the second contribution outlined at the end of Section 1, we will update the phrasing from "first trainable model" to "first trainable lightweight adapter". For the reviewer's information: JASCO differs from ours in that it is trained from scratch and that it generates 10s audio. Moreover, JASCO's audio condition is quantized to facilitate style transfer, while we use full-resolution audio for in/out-painting.

*Secondly*, we initially omitted a subjective evaluation due to the unavailability of the code and weights for the key baseline, Stable-Audio ControlNet. However, in response to multiple reviewers’ requests, we have conducted a **listening test** by incorporating examples from the Stable-Audio ControlNet project website, despite the possibility that these samples may have been cherry-picked. We recruited 34 participants and used the same text and melody conditions as those demonstrated on their website. We generated music using both our model and MusicGen-Melody, then compared these outputs with the samples retrieved from their demo page. Due to the space limit, we present the detailed results in our response to Reviewer 5zZP as Table C. The results show that, when equipped with the same melody representation as Stable-Audio ControlNet, our model performs favorably, despite requiring significantly fewer trainable parameters. We appreciate the reviewer’s suggestion, which prompted this valuable addition.
We wish to highlight that the perceived inferiority of our model’s output on our original demo site stemmed from our use of a different melody representation than that of Stable-Audio ControlNet. This oversight was identified thanks to a comment from Reviewer 5zZP. Specifically, our initial model (denoted as `v1`) adopted the melody representation from MusicGen-Melody, which offers lower pitch resolution compared to that employed by Stable-Audio ControlNet. For a fair comparison, we have retrained our model and developed a new version (`v2`), which now aligns with Stable-Audio ControlNet’s melody representation. This change significantly enhances the perceptual quality of our generated samples. We have updated our demo website accordingly, and we invite the reviewer to explore the improved results.

*Finally*, we share the reviewer’s intrigue regarding potential information leakage and have addressed this concern by implementing the suggested **style transfer** experiment. Using the Song Describer Dataset, we divided it into two disjoint subsets. We generated samples by pairing text from the first subset with attributes extracted from the second subset, ensuring that the musical attributes are independent of the ground-truth audio. The generated samples were then evaluated against the first subset as the reference set. As presented in Table A below, the results align with our prior findings: using more conditions enhances FD and KL scores. We will replace Table 4 of our paper with this new Table A. Besides, since the text and melody conditions in the aforementioned user study were from distinct music clips, we have also extended this "style transfer" approach to a human evaluation.
#### Table A

|Grp|Melody|Rhythm|Dynamics|FD|KL|CLAP|Mel Acc.|Rhythm F1|Dyn cor.|
|---|---|---|---|---|---|---|---|---|---|
|None|–|–|–|185.48|0.67|0.36|0.10|0.22|0.08|
|Single|✓|–|–|139.89|0.49|0.37|0.69|0.43|0.17|
|Single|–|✓|–|158.64|0.62|0.34|0.10|0.85|0.47|
|Single|–|–|✓|189.05|0.64|0.34|0.10|0.49|0.93|
|Double|✓|✓|–|**124.38**|0.49|0.34|**0.70**|0.87|0.52|
|Double|✓|–|✓|145.00|**0.42**|**0.38**|0.69|0.67|0.94|
|Double|–|✓|✓|173.93|0.56|0.32|0.10|**0.88**|**0.95**|
|All|✓|✓|✓|138.18|0.47|0.35|**0.70**|0.86|**0.95**|

#### On Other Weaknesses:

In our demo page, we used fixed mask positioning mainly for simplicity. However, during training, we applied random masking ranging from 10% to 90% of the audio condition. This enables arbitrary mask sizes at inference time, ensuring applicability.

The suboptimal audio quality observed in the inpainted examples likely stems from an overly challenging configuration in our initial setup. Specifically, we masked the middle 20s---2/3 of the total audio duration---while providing only the first 5s and last 5s as context. In contrast, existing models (e.g., DITTO) typically adopt a less demanding setting, such as masking the middle 1/3 of the audio. We acknowledge this oversight and plan to rerun the experiment using this more standard config to see whether it gives improved audio quality.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for the clarifications and for the additional experiments, adequately answering the main concerns reflected in my review. I raised the score to 3.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer, I appreciate your recognition of the clarifications and additional experiments we provided. We are grateful for the time and consideration you devoted to reviewing our work. Sincerely,
Online Linear Classification with Massart Noise
Accept (poster)
Summary: The paper studies online linear classification under Massart noise, where noisy labels might be flipped with a rate of $\eta$, in the case where the dataset is separable with a margin of $\gamma$. The paper uses a scaled LeakyReLU function as the surrogate loss and guarantees a mistake bound of $\eta T + O(T^{3/4} / \gamma)$. The binary classification task can be extended to the $k$-armed bandit setting under the assumption of rewards being monotonic and separated by $\Delta$, and an expected reward of $(1 - 1/k) \Delta T - o(T)$ is recovered. The latter result is a generalization of the first when $k = 2$, with a slightly worse dependence on the $o(T)$ term.

Claims And Evidence: The paper claims to focus on a computationally efficient algorithm with provable guarantees, in contrast to a previously established algorithm that scales with $O(\sqrt{T})$ but is not computationally tractable. The claim is supported by the main theorems.

Methods And Evaluation Criteria: The paper used classical and well-acknowledged metrics in the field, mistake bounds and reward bounds, for evaluation.

Theoretical Claims: All checked.

Experimental Designs Or Analyses: Not applicable.

Supplementary Material: All checked.

Relation To Broader Scientific Literature: The paper contributes a new computationally efficient online algorithm with theoretical guarantees for linear classification with labels subject to Massart noise. The bound matches the previous offline result.

Essential References Not Discussed: Well referenced.

Other Strengths And Weaknesses: The paper is well written and easy to follow. The intuition for selecting LeakyReLU with an appropriate parameter is clearly demonstrated.

Other Comments Or Suggestions:
- Typo, line 234 left: '+' instead of '-' followed by $|t|$.
- Typo, equation starting from line 312 left: indexing with $t$ instead of $i$.
- I did not follow, at line 310 right panel, the proof of Theorem 1.9. In particular, by the definition of $R(T)$ in line 308 right and the definition of $\bar R(T)$ in line 268 left.
One seems to conclude $E[R(T)] \le O(GD \sqrt{T}) = \bar R(T)$ with appropriate $G, D$ substituted. What choices allow the claimed bound $\bar R(T) = O(\sqrt{T} / \tau)$? - A related confusion may arise because $\eta$ has clashing interpretations: step size in Fact 2.4 and noise level in Definition 1.1; then $G$ is related to $\eta$ (in the noise sense)... Questions For Authors: The required assumption in Definition 1.5 seems very restrictive if we are considering the $k$-armed bandit, as it requires pairwise $\Delta$-separability. $\Delta$-separability between every arm and the best arm is the common assumption. I wonder whether Definition 1.5 can be relaxed to the standard assumption. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer qMz2 for the positive feedback on the paper's writing and clarity of intuition. We will fix the typos pointed out by the reviewer. **Clarification on $\bar{R}(T)$ Derivation:** We apologize for the confusion regarding the derivation involving $\bar{R}(T)$ around Line 310. The bound indeed stems from applying Fact 2.4 (Online Gradient Descent regret bound). In our setting, the parameter values are as follows: the domain diameter $D$ is equal to $1$; the gradient norm $G$ is related to the Lipschitz constant of our reweighted loss, which is $O(1/\tau)$. Substituting these into Fact 2.4 gives the bound $O(\sqrt{T}/\tau)$. We acknowledge the inconsistent reuse of the symbol $\eta$ for both the Massart noise level and the step size (Fact 2.4). We will change the notation for the step size throughout the paper. **Restrictiveness of Assumption (Definition 1.5):** The reviewer correctly notes that our assumption of pairwise separability by $\Delta$ for all pairs of arms in the bandit setting is stronger than the more standard assumption requiring only separability between the best arm and all other arms. Relaxing our assumption to the standard "best vs. others" gap introduces significant technical challenges in adapting our Massart noise learner. While this is a very interesting and important direction, we leave it open for future work.
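[Editor's note: spelling out the substitution the rebuttal describes, as a sketch with constants suppressed, assuming Fact 2.4 is the standard online gradient descent regret bound $\bar{R}(T) \le O(GD\sqrt{T})$:]

$$\bar{R}(T) \;\le\; O\big(GD\sqrt{T}\big) \;=\; O\Big(\tfrac{1}{\tau}\cdot 1\cdot\sqrt{T}\Big) \;=\; O\Big(\frac{\sqrt{T}}{\tau}\Big), \qquad \text{with } D = 1,\; G = O(1/\tau).$$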
Summary: They present a computationally efficient algorithm that achieves mistake bound $\eta T + o(T)$, where $\eta$ is the probability of flipping the ground-truth label in the Massart noise model. Their algorithm is based on performing online gradient descent on a sequence of reweighted LeakyReLU loss functions. Next, they consider a semi-random $k$-armed contextual bandit problem in which the rewards are consistent, in expectation, with some halfspace $w^*$. That is, given a list of contexts $x_1,\cdots, x_k$ with $w^* x_j > w^* x_i$, the expected reward of action $j$ is larger than that of action $i$ by at least $\Delta$. They use their online Massart learner to obtain an efficient bandit algorithm that obtains roughly $(1-1/k)\Delta T$ more reward than playing at random at every round. Question: what is a lower bound here, i.e., what is the maximum reward possible? Claims And Evidence: Their results are definitely novel and interesting, although I did not check all the proofs. Methods And Evaluation Criteria: Theoretical paper, and the claims seem reasonable to me. Theoretical Claims: I did not check all the proofs but the claims seem reasonable to me. Experimental Designs Or Analyses: Theoretical paper Supplementary Material: I did not check the appendix. Relation To Broader Scientific Literature: For the class of $\gamma$-margin linear classifiers, they present the first computationally efficient algorithm that achieves mistake bound $\eta T + o(T)$. Essential References Not Discussed: I cannot think of anything that is missing. Other Strengths And Weaknesses: Their results are definitely novel and interesting, although I did not check all the proofs. In terms of the writing of the paper, in some places it was very hard to understand what is going on; here are some comments: Lemma 2.2: you did not define $\bar{R}$. Lines 245-260: what is $u$? It is not defined. Algorithm 2: what is the function $G$?
Although the results are novel and interesting, I think the write-up can be improved a lot. Other Comments Or Suggestions: Same as above. Questions For Authors: Same as above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer WzcY for the positive feedback on the paper's novelty. We respond to the reviewer’s questions below. **$\bar{R}$ in Lemma 2.2**: Thank you for pointing this out. $\bar{R}$ refers to the regret bound used in the online gradient descent analysis. **Vector $u$ in Lines 245-260**: The vector $u$ in Lines 245-260 was meant to represent an auxiliary vector, which is independent of the current vector $w$ in computing the gradients. The value of the vector $u$ is chosen to be the vector $w$, but we define it in this way so that the reweighting does not change the gradient with respect to $w$. **Function $G$ in Algorithm 2**: In Algorithm 2, $G$ represents the loss function being minimized at each step of Algorithm 2. Its specific form, incorporating the reweighted LeakyReLU loss and bandit feedback, is constructed in steps 2 and 3b of Algorithm 2. We use $\ell(w)$ as a shorthand for the instance of this loss function $G$ at a particular step, simplifying the presentation of the regret analysis. **Lower Bound Question:** The theoretical maximum expected reward achievable by any algorithm (even computationally inefficient ones or those with oracle access to the true separator $w^*$) corresponds to always selecting the optimal arm based on the current context $x^t$. If we assume, without loss of generality, that arm 1 is the optimal arm in expectation for every round $t$, the maximum possible expected reward is $E[\sum_{t=1}^T r_1^t]$. Establishing tight bounds on the maximum achievable reward specifically for computationally efficient algorithms in this precise semi-random setting is challenging, and related questions remain open even in offline contexts. For example, in the special case where $k=2$ and $r_i^t\in\{\pm 1\}$, the expected reward gain of $(1/2)\Delta T$ achieved by our algorithm is indeed a meaningful quantity.
As discussed in Remarks 1.6 and 1.10, this relates directly to the mistake bound achievable in the underlying binary classification problem. Given the known computational hardness results (as stated in the aforementioned remarks in the submission), the maximum expected reward that can be achieved by computationally efficient algorithms is at least $\Delta T$.
Summary: This paper considers an online learning setting where context-label pairs are generated with Massart noise. More specifically, while in standard online classification, both the context and label are generated adversarially, the authors consider a setting where the context is generated adversarially, but the label is determined based on the context at that time and is subject to Massart noise. They first derive a mistake bound for this online classification setting that matches a computational lower bound in the offline setting. Technically, they achieve this by considering a new loss function for the online learning algorithm, in which a LeakyReLU function is weighted by the margin of the sample at each time step, which allows them to establish the above mistake bound. By leveraging this technique, they further derive a desirable mistake bound for the $k$-armed contextual bandit under assumptions weaker than realizability for the reward function. When the number of arms is 2, this mistake bound matches existing results. Claims And Evidence: Yes, all propositions and claims in the paper are accompanied by proofs or corresponding references. Methods And Evaluation Criteria: Yes, the proposed algorithms are variants of existing algorithms in online classification and are thus valid. Moreover, the evaluation criteria (mistake bound and expected reward) are standard in the literature. Theoretical Claims: Yes, I have reviewed the proofs in the main body and confirmed their validity with no issues. Experimental Designs Or Analyses: NA Supplementary Material: The reviewer did not review the supplementary material. Relation To Broader Scientific Literature: In the traditional online (linear) classification setup, the setting where both the context and label are generated adversarially has been extensively studied. However, the traditional assumption that the label is generated completely independently of the context is overly pessimistic. 
To address this issue, the authors introduce the Massart noise assumption, which has been well-studied in the offline setting, and consider a scenario where there is some dependence between the context and the label, and this is an interesting aspect of this paper. Furthermore, instead of directly using the LeakyReLU function, which is known to be effective in existing studies, they consider a variant weighted by the margin of the sample at each time step. By doing so, they establish a mistake upper bound that matches a computational lower bound, and this is of interest to the community. Essential References Not Discussed: no Other Strengths And Weaknesses: This paper is overall well written. In particular, the problem setting, algorithms, definitions, and main results are presented in a highly clear manner. Other strengths are discussed in Relation To Broader Scientific Literature. A minor weakness is the presence of numerous apparent typos. For example, - In Line 72, the phrase "chooses an action $\alpha = 1, \dots, k$" is unclear. - In Line 80, $\mathbb{E}[r_i | x^{(t)}]$ should be $\mathbb{E}[r_i | x_i^{(t)}]$. Additionally, some notations are quite confusing. For example, - In Fact 2.1, the expression $1/2 ((1-2\lambda)|t| - t)$ is difficult to interpret. - In Fact 2.4, the regret upper bound $GD/3 \sqrt{T}$ is presented in a fraction format that makes it unclear. It would be desirable to revise these points for clarity. Other Comments Or Suggestions: no Questions For Authors: no Code Of Conduct: Affirmed. Overall Recommendation: 3 Ethics Expertise Needed: ['Other expertise'] Ethical Review Concerns: a
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. We will fix the typos and address the reviewer’s suggestions to improve clarity.
Summary: The paper studies online linear classification with Massart noise and designs computationally efficient algorithms for this problem. The paper also extends this model to the $k$-armed contextual bandit setting. ## update after rebuttal After rebuttal, I maintain my score. Claims And Evidence: In general, the claims are supported by sufficient evidence. The theoretical results seem sound. Methods And Evaluation Criteria: The performance measure, the mistake bound, makes sense as it is the measure studied in the most similar works. Theoretical Claims: I checked the general soundness and read all the proofs. Experimental Designs Or Analyses: The paper does not include experiments. Supplementary Material: I reviewed all supplementary materials. Relation To Broader Scientific Literature: The contributions of this paper fall within the broader field of online linear classification under Massart noise. The most relevant prior work is by Ben-David et al. (2009), who proposed inefficient algorithms. By contrast, this paper presents efficient algorithms. Essential References Not Discussed: I don’t know any specific part of the literature that is not addressed. Other Strengths And Weaknesses: 1. The paper is well written. 2. This paper gives the first efficient algorithm achieving a mistake bound of $\eta T+o(T)$ in online linear classification with Massart noise. Other Comments Or Suggestions: There is a typo in line 234: $|t| - t$ should be $|t| + t$. Questions For Authors: 1. Could you explain why you define a new loss function in step (d) of Algorithm 1? What is the problem with using LeakyReLU directly? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer WUUa for the positive feedback on the clarity and soundness of our theoretical results. We will fix the typos pointed out. In response to the reviewer’s question about the use of the LeakyReLU: the standard LeakyReLU loss penalizes points that are further away from the decision boundary more than points that are close. By reweighting with the margin, we equalize the contribution of all points to the objective, leading to minimization of the misclassification error. Making the reweighting depend on the current vector $w$ would result in a non-convex objective, but treating it as a constant at each step yields an online sequence of convex objectives, and we prove that our method converges to the desired solution. This is explained in lines 181-191 (left column) of the submission.
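[Editor's note: to illustrate the mechanism the rebuttal describes, here is a minimal, hypothetical sketch — not the authors' code. The parameter names `lam`, `tau`, `eta`, the clipping of the margin away from zero by `tau`, and the projection onto the unit ball are our assumptions for a self-contained example of online gradient descent on a margin-reweighted LeakyReLU loss.]

```python
import numpy as np

def reweighted_leaky_relu_grad(w, x, y, lam, tau):
    # Margin-based reweighting: the weight 1/m comes from a frozen copy of w
    # and is treated as a constant, so each per-round objective is convex in w.
    m = max(abs(np.dot(w, x)), tau)    # margin, clipped away from zero (assumption)
    z = -y * np.dot(w, x)              # LeakyReLU argument: positive iff misclassified
    slope = 1.0 if z >= 0 else lam     # subgradient of LeakyReLU_lam at z
    return (slope / m) * (-y) * x

def online_massart_learner(stream, lam=0.05, tau=0.1, eta=0.1):
    w, mistakes = None, 0
    for x, y in stream:                # y in {-1, +1}
        if w is None:
            w = np.zeros_like(x, dtype=float)
        pred = 1 if np.dot(w, x) >= 0 else -1
        mistakes += int(pred != y)
        w = w - eta * reweighted_leaky_relu_grad(w, x, y, lam, tau)
        nrm = np.linalg.norm(w)
        if nrm > 1.0:                  # project back onto the unit ball
            w /= nrm
    return w, mistakes
```

The key point mirrors the rebuttal: because the reweighting factor is computed from a frozen copy of $w$, the online learner faces a sequence of convex losses and standard online gradient descent guarantees apply.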
Computing Voting Rules with Improvement Feedback
Accept (poster)
Summary: This paper investigates the feasibility of computing voting rules using improvement feedback, a type of iterative preference refinement distinct from pairwise comparisons. It characterizes the positional scoring rules that can be learned under improvement feedback, demonstrating that while plurality can be determined, many other rules face strong impossibility results. The study also establishes that Condorcet-consistent rules cannot be computed using improvement feedback. Theoretical findings are supported by experimental analyses, showing practical implications of the proposed method. Claims And Evidence: - The authors claim that improvement feedback enables learning the plurality rule but fails for many other scoring rules. - The paper provides theoretical proofs demonstrating that improvement feedback does not suffice to compute Condorcet-consistent rules. - Empirical simulations support the theoretical findings, comparing improvement feedback and pairwise comparisons in approximating voting rule outcomes. Methods And Evaluation Criteria: - Theoretical analysis using formal models of voting and preference aggregation. - Characterization of learnability for different voting rules using improvement feedback. - Empirical evaluation through simulations on different preference distributions, including Impartial Culture (IC), Mallows, and Plackett-Luce models. - Approximation ratios are used to measure the effectiveness of improvement feedback compared to pairwise comparison feedback. Theoretical Claims: - Improvement feedback can determine the plurality winner but fails for most other positional scoring rules. - No algorithm, deterministic or randomized, can identify a Condorcet winner using improvement feedback with a probability greater than $\frac{1}{m}$. - The impossibility results hold under general improvement feedback distributions, except for a specific case where $P^t_i/P^t_{i+1}$ follows a defined ratio. 
Experimental Designs Or Analyses: - Simulations are conducted using synthetic ranking data generated from IC, Mallows, and Plackett-Luce models. - Different t-improvement feedback distributions (uniform, linear decay, and exponential decay) are tested. - The performance of improvement feedback is compared against pairwise comparison feedback for Borda and Copeland rules. - Results show that under certain preference distributions, improvement feedback performs comparably or even better than pairwise feedback in approximating voting rule outcomes. Supplementary Material: . Relation To Broader Scientific Literature: - Extends prior work on voting rule computation under limited feedback (e.g., Halpern et al. 2024). - Builds on research in coactive learning and interactive preference learning by examining feedback mechanisms for collective decision-making. - Contributes to computational social choice by analyzing preference aggregation under practical constraints. Essential References Not Discussed: . Other Strengths And Weaknesses: ### Strengths - Provides a rigorous theoretical foundation for analyzing improvement feedback in voting. - Well-structured proof techniques and clear characterization of learnable voting rules. - Complementary empirical analysis enhances practical relevance. ### Weaknesses - Limited discussion of how improvement feedback can be practically elicited in real-world applications. - The focus on worst-case impossibility results may not fully capture practical performance in realistic scenarios. - The paper assumes idealized access to feedback distributions, which may not hold in practical settings. Other Comments Or Suggestions: - The authors could explore hybrid feedback mechanisms combining improvement feedback with pairwise comparisons to mitigate the limitations identified. - More discussion on the implications of these results for participatory decision-making systems could enhance the broader impact of the work. 
- It would be helpful to consider how strategic behavior by voters might impact the effectiveness of improvement feedback. Questions For Authors: The authors consider a multiple-answer setting, and in that case the Condorcet winner cannot be determined. However, getting multiple preference answers is costly and difficult, and can be decomposed into getting pairwise comparisons $n$ times. Why should we solve a problem that arises only under multiple answers? Code Of Conduct: Affirmed. Overall Recommendation: 3
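[Editor's note: for concreteness, here is an illustrative sketch — not the paper's code — of how the positional scoring rules discussed in this review are computed when full rankings are available, i.e., without the improvement-feedback restriction that is the paper's subject.]

```python
from collections import defaultdict

def scoring_rule_winner(profile, s):
    # profile: list of rankings (best candidate first); s: scoring vector of length m.
    # Each voter awards s[pos] points to the candidate in position pos.
    scores = defaultdict(float)
    for ranking in profile:
        for pos, cand in enumerate(ranking):
            scores[cand] += s[pos]
    return max(scores, key=scores.get)  # ties broken arbitrarily

profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
plurality_winner = scoring_rule_winner(profile, [1, 0, 0])  # "a" (two first-place votes)
borda_winner = scoring_rule_winner(profile, [2, 1, 0])      # "a" (Borda score 5 vs. 3 and 1)
```

The paper's question is which such scoring vectors $s$ remain computable when the algorithm sees only improvement feedback rather than the full profile.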
Rebuttal 1: Rebuttal: We thank the reviewer. >Authors consider multiple answer setting, and in that case Condorset winner cannot be determined. However, getting multiple preference answers is costly and difficult and can be decomposed as getting pairwise comparisons n times. Why should we solve the problem which happens only under multiple answers? We would like to clarify that in the $t$-improvement model, each query returns only a single candidate—namely, a candidate from the $t$-above neighborhood of the queried option. The model does not assume multiple feedback points per query (lines 60-62). The impossibility results we derive are not due to assuming "multiple answers" but rather due to the restricted nature of the available feedback. While it's true that repeated pairwise comparisons could reconstruct full rankings, our focus is precisely on the types of settings where such fine-grained elicitation is infeasible (lines 38-40). >Limited discussion of how improvement feedback can be practically elicited in real-world applications. Improvement feedback models scenarios where users iteratively refine suggestions—e.g., editing LLM-generated text or adjusting robotic actions—by providing local, incremental improvements. This form of interaction is common in coactive learning, preference-based reinforcement learning, and broader human-in-the-loop decision-making systems. We will clarify this motivation even more in the introduction. >The focus on worst-case impossibility results may not fully capture practical performance in realistic scenarios. As is typical in theoretical work, our analysis focuses on worst-case guarantees to characterize the fundamental limitations of improvement feedback. However, we agree that worst-case behavior may not reflect typical performance in realistic settings. That is why we conducted experiments on more realistic distributions—including IC, Mallows, and Plackett-Luce—which go beyond the worst-case. 
These experiments show that improvement feedback can perform competitively with, and in some cases even outperform, pairwise comparisons in approximating voting rule outcomes. This contrast highlights the practical value of improvement feedback despite its theoretical limitations. >The paper assumes idealized access to feedback distributions, which may not hold in practical settings. Indeed, access to the exact feedback distribution is an idealized assumption, which **only strengthens our negative results**: if impossibility holds under full information, it certainly holds in more realistic settings with partial or noisy access (lines 80-84, 189-192). However, in practice, these distributions can be approximated via sufficiently many user queries, and one can formally justify this using standard concentration techniques. >The authors could explore hybrid feedback mechanisms combining improvement feedback with pairwise comparisons to mitigate the limitations identified. We agree this is a compelling direction, as noted in the third sentence of our Discussion section. It is a question we have also given considerable thought. The main challenge is that the indistinguishable profiles constructed by Halpern et al. for pairwise comparisons are fundamentally different from those we construct for improvement feedback. Their profiles (respectively, ours) lead to impossibility under pairwise comparisons (respectively, improvement feedback), but do not remain indistinguishable when accessed via the other feedback model. As a result, making progress in this setting would require either (i) extending the impossibility results to hold even when both types of feedback are available, which would require new indistinguishable constructions, or (ii) showing that access to both feedback models enables algorithms to recover the correct winner, by designing techniques that exploit their complementary strengths. We leave this question to future work. 
>More discussion on the implications of these results for participatory decision-making systems could enhance the broader impact of the work. Our work highlights fundamental limitations in inferring collective decisions from restricted forms of feedback, which are common in real-world systems. Understanding which social choice rules are robust under such bounded input is crucial for designing systems that are both practical and representative. We will expand the discussion in the final version to better highlight these connections. >It would be helpful to consider how strategic behavior by voters might impact the effectiveness of improvement feedback. Our negative results already hold under the assumption that voters report their preferences truthfully. Introducing strategic behavior would **weaken** these impossibility results, as it introduces an additional layer of uncertainty and misalignment between observed feedback and true preferences. However, we agree that understanding strategic behavior in this setting is an interesting direction for future work.
Summary: This paper discusses the potential for any learning algorithm to identify a social choice winner f when sampling preferences from a distribution D. It specifically talks about t-improvement feedback, where D is hidden but can be sampled such that -- given a candidate $a$ in position $i$ -- one of the t candidates ranked higher than $i$ can be elicited at a time. Their main theorems identify certain sufficient conditions under which an algorithm A cannot determine the winner f(.) when it has access to t-improvement feedback, in the worst case, under positional scoring rules and Condorcet-consistent rules. *NOTE: Score changed 2->3 after reading authors' rebuttal.* Claims And Evidence: It could be made clearer how the failure of t-improvement feedback entails their claim that no algorithm can learn the correct output f(.) with high enough probability. The paper is confusing about what information algorithms A have access to, and specifically how they conduct t-improvement feedback. I would like to see this spelled out more explicitly. Methods And Evaluation Criteria: Yes. Theoretical Claims: - In Lemma 3.1, why does the requirement on p need to hold? Don't "j-i > t" and "m-l > t" suggest that querying either a or b will (necessarily) never yield the other, since it is outside the scope of t items? - Please clarify the second-to-last sentence in the proof of Lemma 4.2. This doesn't seem to follow from prior deductions. The second section of this proof appears quite roundabout and I'm not sure why it's there. - In the proof of Thm 4.4, why can we apply Lemma 3.2? Doesn't that require m distinct distributions, whereas the construction from Lemma 4.1 only guarantees some family of distributions? Also -- in the second-to-last sentence, why is the scoring rule in span(*,*)? Isn't this part discussing when the scoring rule isn't in span(*,*)? Experimental Designs Or Analyses: Experiments look good.
They support and provide complementary insight into the paper's main claims. Supplementary Material: No. Relation To Broader Scientific Literature: For the most part, yes. The authors discuss relevant literature in partial-order preference elicitation and query complexity. They don't go into much depth in the literature on pairwise elicitation, such as papers discussing elicitation in the Bradley-Terry model. Essential References Not Discussed: In Related Work (Sec 1.2), discuss the relationship of your work to the possible and necessary winner problems, as introduced by [1,2] (see also Sec 10.3.1 of (Brandt et al., 2016)). I would like to see how your results compare to the impossibility results presented by this prior line of research. Also, how do your query complexity findings compare to other preference learning research, such as with pairwise comparisons in the Bradley-Terry model? [1] Kathrin Konczak and Jérôme Lang. 2005. Voting procedures with incomplete preferences. [2] Lirong Xia and Vincent Conitzer. 2011. Determining Possible and Necessary Winners under Common Voting Rules Given Partial Orders. Other Strengths And Weaknesses: **Overall review:** The paper does a good job posing an interesting question -- the learnability of outputs f given t-improvement feedback -- and taking steps to solve it under various social choice functions f. It is well put into the context of other work on learning social choice functions. It checks off the boxes I'd like to see from this type of paper: pose an interesting question, frame the model, non-trivial theoretical findings, experiments, and discussion.
However, there are certain questions I still have: - Certain aspects of the proofs are not clear - How the inability to distinguish between two distributions leads to the claim that algorithms cannot learn f better than random - What information algorithms A have access to when conducting their queries - How these results compare to the theoretical query complexity of t-improvement feedback or pairwise feedback - How these results compare to the NP-hardness of the possible/necessary winner problems Also: - The paper seems to significantly build on (Halpern et al., 2024), though doesn't go into too much detail on the work. - What motivates the "t-improvement" concept? In (p2 LHS) you say t reflects the "limited cognitive and practical effort" of users, though this seems like a limited jumping-off point. Therefore I would recommend the paper gets edited and resubmitted. If the authors can point to the aspects of this paper that address these issues, or answer them simply, I'd be willing to reconsider. Other Comments Or Suggestions: Minor: - (Typo) In Model (Sec 2), you use $\mathcal{L}$ and $L$ interchangeably. - In Model (Sec 2), you refer to "pi of D" as a sample from the distribution D subject to the permutation pi. Is this supposed to be a sample, or a modified distribution itself? - (p4 LHS) Please define $q^t_\sigma$. - Format of Lemma 3.1: take the definition of $D_{i,j,l}$ out of the lemma statement. - (Typo) In Lemma 3.2: "...suppose there are is a family..." - (Typo) In Lemma 4.1, you say "indistinguishable" instead of "t-indistinguishable." - In main body, tell the reader that Lemma 4.1 is proved in Appendix B - Proof of Lemma 4.2: is $\lambda$ defined as $s_i / P^t_i$ here? How is this related to $\vec{s}^*_t$? Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and constructive feedback. >The paper is confusing about what information algorithms A have access to, and specifically how they conduct t-improvement feedback. As noted in your summary and in lines 60–62, under the $t$-improvement feedback model, when a candidate $a$ is queried, the response is a random candidate drawn from the $t$-above neighborhood of $a$ in the voter's ranking. This candidate is sampled according to a fixed $t$-improvement feedback distribution, which defines the probabilities over the $t$-above candidates (lines 63–64). Given a preference profile and a $t$-improvement feedback distribution, this sampling procedure induces a marginal probability of observing $b$ as feedback when querying $a$. While in practice such marginals can be estimated from sufficiently many samples, here we make the strong assumption of full information: the algorithm has exact access to the probabilities of seeing $b$ when querying $a$, for every pair $a,b\in M$ (lines 74–81). This idealized access **strictly strengthens our impossibility results**--if no algorithm can succeed in finding the winner with full information, it cannot succeed with less (see lines 82–84, 189-192). >How these results compare to the theoretical query complexity of t-improvement feedback or pairwise feedback. Our results do not address query complexity (lines 104–108 rhs). As noted above, we assume idealized access to the exact marginal probabilities of receiving $b$ when querying $a$, thereby abstracting away sampling concerns. As such, our results are orthogonal to work in preference learning that focuses on sample efficiency (e.g., under the Bradley–Terry model). >How the inability to distinguish between two distributions leads to the claim that algorithms cannot learn f better than random. proof of Thm 4.4, why can we apply Lemma 3.2?
Indeed, Lemma 3.2 requires the existence of $m$ preference profiles that are all $t$-indistinguishable from each other and each have a unique winner. Lemma 4.1 constructs exactly such a set of profiles, as stated explicitly. We suspect the confusion may stem from our use of the term “family of profiles”, which we use in line with standard terminology (see Lemmas 4.1–4.3 in Halpern et al.). >In the second-to-last sentence, why is the scoring rule in span(,)? This is a typo. The sentence should refer to scoring vectors **not in** the span. We apologize for it. >In Lemma 3.1 why does the requirement on p need to hold? Doesn't "j-i > t" and "m-l > t" suggest that querying either a or b will (necessarily) never yield the other, since it's outside the scope of t items? Indeed, these conditions indicate that querying either $a$ or $b$ will never yield the other, so indeed by swapping them the probability of asking one and seeing the other remains the same (equal to $0$). However, to ensure that the two profiles are indistinguishable, we must guarantee that for **every** $x,y \in M$ the probability of querying $x$ and observing $y$ is the same under both profiles (see definition in lines 195-200). The value of $p$ is chosen to maintain this property (see lines 550–582). >Please clarify the second-to-last sentence in the proof of Lemma 4.2. First part of the proof concerns positions $2$ to $ m-t-1$ (line 263). Second part of the proof concerns positions $m-t$ to $m-1$ (line 234 rhs). We relate position $2$ to positions $m-t$ to $m-1$ in order to argue that the ratio $s_i/ P_i$ is the same (and equal to $\lambda$) for all positions from $2$ to $m-1$. >How these results compare to the NP-hardness of possible/necessary winner problems. That line of work focuses on computational complexity of determining whether a candidate is a possible or necessary winner given partial orders. 
In contrast, we assume access to full statistical feedback and study whether any algorithm—regardless of computational complexity—can recover the winner. Our impossibility results are information-theoretic, not complexity-theoretic (see lines 104-106). >What motivates the “t-improvement” concept? The notion of $t$-improvement captures interactive settings where users refine suggestions through small, local adjustments—e.g., modifying LLM outputs or adjusting robot behavior—rather than providing full rankings. This interaction model appears in frameworks such as coactive learning and interactive preference learning (see lines 25–35 rhs and 110–114). The parameter $t$ captures the granularity of improvement. >The paper seems to significantly build on Halpern et al. (2024), though doesn't go into much detail on the work. While our work is conceptually related to Halpern et al., the technical contributions are distinct. The preference profiles we construct differ significantly and are tailored to the $t$-improvement feedback model rather than pairwise comparisons. Where we do draw on Halpern et al.—e.g., in Lemma 3.2—we reprove the results in our setting for completeness. --- Rebuttal Comment 1.1: Comment: Thank you for the comments. I have improved my score accordingly. I still would like to see this paper edited by incorporating reviewers' comments prior to publication. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for taking the time to carefully read our rebuttal and revise their score accordingly. We promise to revise the paper by incorporating all reviewers' very valuable and constructive feedback.
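[Editor's note: to make the query model concrete, here is a hypothetical simulation of a single $t$-improvement query. The renormalization of the feedback distribution when fewer than $t$ candidates sit above the queried one, and returning the queried candidate when it is already top-ranked, are our assumptions for the sketch, not details taken from the paper.]

```python
import random

def t_improvement_feedback(ranking, a, t, P, rng=random):
    # ranking: candidates from best to worst; a: the queried candidate.
    # P: feedback distribution over the t positions directly above a
    #    (P[-1] weights the position immediately above, P[0] the furthest).
    i = ranking.index(a)
    if i == 0:
        return a                          # already top-ranked (assumption)
    above = ranking[max(0, i - t):i]      # the t-above neighborhood of a
    weights = P[-len(above):]             # fewer than t above near the top (assumption)
    return rng.choices(above, weights=weights)[0]
```

Repeating such queries across voters yields empirical estimates of the marginal probability of seeing $b$ when querying $a$; the rebuttal's full-information assumption corresponds to knowing those marginals exactly.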
Summary: The paper studies the concept of "improvement feedback" within computational social choice. Improvement feedback is a response given to a query of a specific voter asking a question along the lines of "which candidate do you prefer over x?" The main question of the paper is whether queries in the improvement feedback setting provide enough information to compute the winner of various voting rules. The paper provides a few possibility results: It is possible to compute the plurality winner and any positional scoring rule between plurality and a specific scoring rule determined by the setting itself. The paper shows that every other positional scoring rule cannot be exactly computed using this framework. Similarly, it is shown to be impossible to compute the Condorcet winner. Experiments show that while it is impossible to compute the winner of many voting rules in theory, it is often possible to do so in practice. Using improvement feedback to estimate the winner of the Borda and Copeland rules shows that across three different preference profiles, it is frequently possible to achieve high accuracy at estimating these rules. Improvement feedback is compared with the accuracy of estimating these rules using pairwise feedback and shown to be more accurate in some settings and less accurate in others. Claims And Evidence: The claims made in the paper are supported fairly well. Methods And Evaluation Criteria: The paper makes theoretical claims and supports them appropriately. Experiments add a very interesting new depth to the theoretical results. Theoretical Claims: I lightly reviewed the proofs included in the main text of the paper. Experimental Designs Or Analyses: The experiments are quite reasonable and appropriate for the question at hand. Slightly more explanation would be useful. Specifically, an explanation of the approximation ratio -- I think I know how this is defined here but I can imagine a few semi-reasonable definitions. 
Supplementary Material: I briefly looked at the additional experiment results. Relation To Broader Scientific Literature: The paper builds upon one recent work studying comparison feedback and, less directly, several other papers considering partial information settings and t-wise queries. This is a very natural extension of existing research and could conceivably relate to some applications, such as in RLHF. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Overall I find the main idea of the paper to be quite reasonable and generally like the paper. This fits well into existing and ongoing work into learning about partial preferences. There is some room for improvement in presentation which I would encourage should the opportunity arise. In particular, I would suggest ensuring that the introduction is primarily focused on building intuition around the paper. This is largely the case but the section could be further simplified. A stronger connection to possible applications would also be useful, if you do see such a connection. Including slightly more natural language/intuition-building throughout the paper would also improve the readability of the work. I very much like that you have included some experiments. That it is usually possible to effectively learn Borda/Copeland, despite there being no guarantee, highlights for me the importance of doing experiments to actually understand the behaviour of a setting. Additional details on the setup are important, however. Including them in the appendix would be fine. Specifically, I would hope for more detail on the distributions (is Mallow's using Boehmer's re-parameterization?) and the winner determination. I am interested in winner determination because I find the poor accuracy at learning the Copeland winner to be quite surprising. 
Is the algorithm learning a single weighted majority graph then using it to compute both the Borda and Copeland winners (also, is weighted majority graph defined in the paper?)? Then what is actually being learned from the impartial culture profile when Copeland is so poorly estimated? Perhaps I am missing it but some discussion about the accuracy of Copeland under IC seems quite important. Additionally, if you are learning a weighted majority graph using the pref-voting package it should be quite trivial to compute the winner of all C1/C2 rules in the package. While not the main focus of the paper, doing this could be quite interesting (especially given that you already know there is at least one rule, Copeland, which exhibits surprising behaviour). Other Comments Or Suggestions: I noticed several minor typos. I would encourage re-reading to catch any that I did not notice. Line 128, Column 1: You use $\mathcal{L}$ here and $L$ elsewhere in the paper. L194C2: "there are is a family..." L205C2: "any of this preference profiles" L309C2 (in the equation): Should there be a $D$ subscript for the probabilities? L334/335C2: The sentence "For both rules suffices..." needs rearranging. L351C2: "is assigned a r weight" Questions For Authors: Do you have any explanation for the relatively poor behaviour of Copeland in the experiments? I found this quite surprising -- is it also surprising to you? (A response is unlikely to change my evaluation but I am quite curious about this.) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and constructive feedback. > Slightly more explanation would be useful. Specifically, an explanation of the approximation ratio -- I think I know how this is defined here but I can imagine a few semi-reasonable definitions. For a fixed number of agents (n) and a single trial (out of 500), the approximation ratio is defined as follows: we identify the winner using the partial feedback and compute their true score under the full preference profile. This is then divided by the true score of the actual winner, i.e., the candidate who would have been selected using full information. We average this approximation ratio over all 500 trials and report both the mean and standard deviation. We will clarify this point in the revised version. >Additional details on the setup are important, however. Including them in the appendix would be fine. Specifically, I would hope for more detail on the distributions (is Mallow's using Boehmer's re-parameterization?) and the winner determination. We do not use Boehmer’s re-parameterization, as the number of candidates remains fixed throughout our experiments. We use the classical definition of the Mallows model, where, given a central ranking $\sigma^*$ and a parameter $\phi \in (0, 1]$, the probability of observing a ranking $\sigma$ is defined as $\frac{\phi^{d(\sigma, \sigma^*)}}{\sum_{\sigma' \in \mathcal{L}(A)} \phi^{d(\sigma', \sigma^*)}}$, where $d(\sigma, \sigma^*)$ denotes the Kendall tau distance between $\sigma$ and $\sigma^*$. For winner determination, we construct a weighted majority graph based on the available feedback. Specifically, under pairwise feedback, each voter is asked to compare a randomly selected pair of candidates and under $t$-improvement feedback, each voter is queried with a randomly selected candidate and returns one candidate from the $t$-above neighborhood of the queried candidate, according to the $t$-improvement feedback distribution. 
These responses collectively define a weighted majority graph, which is then used to determine the winner under each feedback type. We will make sure to include more detailed descriptions of the setup, including formally defining the weighted majority graph. > Is the algorithm learning a single weighted majority graph then using it to compute both the Borda and Copeland winners (also, is weighted majority graph defined in the paper?)? Then what is actually being learned from the impartial culture profile when Copeland is so poorly estimated? Yes, the same graphs are used for both Borda and Copeland. We also found this behavior surprising, but our conclusion is that it stems from the statistical nature of pairwise comparisons in random preference profiles. Under the IC model, most pairwise majority margins are extremely narrow—typically close to 50-50. When we sample comparisons uniformly at random, there is a significant chance that sampling noise flips the majority outcome of a pair. For example, if candidate $a$ defeats $b$ with 51% of the vote, the number of observed comparisons may still favor $b$ in the sample. In such cases, $a$ is assigned $0$ points instead of $1$ in the Copeland score. This effect cannot be avoided even when the number of queries grows. Borda, on the other hand, is more robust in this setting. Even if the sampled pairwise data slightly misrepresents the majority—for example, favoring $b$ over $a$ 51% to 49%—the inferred Borda score for $a$ still reflects this proportionally. Since the Borda score can be interpreted as the sum of the estimated probabilities of a candidate defeating each other candidate (see line 1230), $a$ would still receive 0.49 points rather than being penalized with a full loss. This smoothness makes Borda less sensitive to small fluctuations. >Additionally, if you are learning a weighted majority graph using the pref-voting package it should be quite trivial to compute the winner of all C1/C2 rules in the package. 
We considered computing additional C1/C2 rules beyond Borda and Copeland. However, we ran into limitations stemming from the nature of the rules themselves. In particular, to make our definition of approximation ratio meaningful, we must restrict attention to rules that assign scores to candidates—otherwise, it's unclear how to compare the true and estimated winners. This requirement excludes many rules in the pref-voting library, such as Kemeny, Slater, and the Uncovered Set. We also considered the minimax rule, which does assign scores. However, we encountered technical issues that made it difficult to include. In particular, the true winner often has a minimax score of zero, resulting in undefined 0/0 approximation ratios. Additionally, minimax scores are typically nonpositive, with the winner being the candidate with the least negative value—this makes interpreting approximation ratios less meaningful. For these reasons, we ultimately decided not to include minimax in our evaluation.
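For concreteness, the classical Mallows probability quoted in the rebuttal can be computed directly for a small number of candidates. This is an illustrative sketch, not the authors' experimental code; the function names are hypothetical.

```python
import itertools

def kendall_tau(sigma, ref):
    """Count discordant pairs between ranking sigma and reference ranking ref."""
    pos = {c: i for i, c in enumerate(ref)}
    s = [pos[c] for c in sigma]
    return sum(1 for i in range(len(s))
               for j in range(i + 1, len(s)) if s[i] > s[j])

def mallows_probability(sigma, ref, phi):
    """P(sigma) under the classical Mallows model with dispersion phi in (0, 1]."""
    z = sum(phi ** kendall_tau(list(perm), ref)
            for perm in itertools.permutations(ref))
    return phi ** kendall_tau(sigma, ref) / z
```

With phi = 1 every ranking is equally likely (impartial culture); as phi decreases, probability mass concentrates on the central ranking $\sigma^*$.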
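The rebuttal's explanation of Copeland's fragility under impartial culture — narrow majorities flipped by sampling noise — can be illustrated with a toy simulation. This is a sketch under illustrative assumptions (a 51% true margin, 100 sampled comparisons per pair, 2000 trials), not the paper's actual experimental setup.

```python
import random

def sampled_copeland_point(p_true, n_samples, rng):
    """Return the Copeland point a earns against b (1 or 0) when the
    majority is estimated from n_samples random pairwise comparisons."""
    wins = sum(rng.random() < p_true for _ in range(n_samples))
    return 1 if wins > n_samples / 2 else 0

rng = random.Random(0)
p_true = 0.51   # a truly beats b, but only narrowly (typical under IC)
trials = 2000

flips = sum(1 - sampled_copeland_point(p_true, 100, rng) for _ in range(trials))
flip_rate = flips / trials  # fraction of trials where noise hands the point to b

# Borda-style credit degrades smoothly instead: a's estimated per-pair
# contribution concentrates around p_true = 0.51 rather than jumping
# between 0 and 1, matching the rebuttal's argument.
```

With a 51% margin and 100 samples per pair, the estimated majority flips in a large fraction of trials, so Copeland's 0/1 scoring amplifies sampling noise while Borda's additive scoring does not.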
Multidimensional Adaptive Coefficient for Inference Trajectory Optimization in Flow and Diffusion
Accept (poster)
Summary: This paper proposes to optimize model parameters and forward process interpolation coefficients with respect to a simulation-based adversarial loss. The training process consists of two stages. In the first stage, the flow model is trained w.r.t. randomly sampled multi-dimensional interpolation schemes via flow matching. To constrain the hypothesis space of interpolation schemes, interpolation coefficients are parameterized as a sum of weighted sinusoidals. In the second stage, the flow model and interpolation coefficients are optimized to minimize a simulation-based adversarial loss. Here, the weights for sinusoidals are parameterized as neural net outputs conditioned on $x_T$, providing an additional level of adaptivity. The authors demonstrate that the proposed method achieves competitive FID scores on CIFAR-10, ImageNet $32 \times 32$, FFHQ $64 \times 64$, and AFHQ $64 \times 64$. Claims And Evidence: - **(Left column, lines 260-262) "Given that $H_\theta(t, x(t)) \approx x_0$, $H_\theta$ can be adversarially refined by optimizing it with the discriminator $D_\psi$."** This claim holds only when generative paths are sufficiently straight. However, as cited below, the authors also claim that the benefit of using MAC arises from its ability to discover better non-linear trajectories. > (Left column, lines 300-305) "This suggests that a straight trajectory is not always optimal, even in OT-trained models, and MAC can adaptively discover better trajectories to correct errors that arise during transportation. Figure 3 further illustrates how MAC adjusts the trajectory direction to optimize transportation, resulting in a path that is not straight." Hence, the authors' assumption $H_\theta(t, x(t)) \approx x_0$ that justifies the usage of adversarial learning on velocity $H_\theta$ output instead of generator $G_{\theta,\phi}$ output seems to be at odds with the benefits of using MAC. 
Methods And Evaluation Criteria: CIFAR-10, FFHQ-64, AFHQ-64 are standard datasets / FID and IS are standard metrics for evaluating the performance of generative models. However, based on previous works such as [1,2,3], I believe results on at least ImageNet $64 \times 64$ are necessary to demonstrate the practicality and scalability of the proposed method. [1] Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion, ICLR, 2024 [2] Constant Acceleration Flow, NeurIPS, 2024 [3] Simple ReFlow: Improved Techniques for Fast Flow Models, ICLR, 2025 Theoretical Claims: There are no new theoretical claims. Experimental Designs Or Analyses: I did not find any issues with experimental designs or analysis. Supplementary Material: I reviewed the supplementary material for experimental details. Relation To Broader Scientific Literature: This paper proposes to adversarially optimize forward process interpolation coefficients for better generative modeling. Essential References Not Discussed: This paper is missing a discussion of [1,2,3]. [1] also optimizes the forward process to learn faster flows. [2,3] also incorporates adversarial learning into flow matching for improved generative modeling. [1] Minimizing Trajectory Curvature of ODE-based Generative Models, ICML, 2023 [2] Constant Acceleration Flow, NeurIPS, 2024 [3] Simple ReFlow: Improved Techniques for Fast Flow Models, ICLR, 2025 Other Strengths And Weaknesses: - **originality and significance** : while the usage of adversarial learning to improve the perceptual quality of neural ODEs have been already introduced in works such as [1] or [2], the idea of optimizing forward process parameters via simulation-based training is novel and interesting. Still, I find the experimental setup rather limited, and as discussed in **Methods And Evaluation Criteria**, additional results on higher resolution or more complex datasets will significantly strengthen this submission. 
[1] Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion, ICLR, 2024 [2] Constant Acceleration Flow, NeurIPS, 2024 Other Comments Or Suggestions: - Right column, lines 43-48 : $x\_{est,\theta,\phi}$ is undefined. - Right column, Eq. (2) : $\hat{x}\_{0,\theta}$ and $\hat{x}\_{1,\theta}$ are undefined. - Table 4 : please clarify the difference between $z \sim \rho_T$ and $x_T \sim \rho_T$. Questions For Authors: I am willing to raise the score to **Weak Accept** if the authors can address the following concerns. - [Q1] Can the authors provide any clarifications regarding the comment in **Claims And Evidence**? - [Q2] Can the authors provide additional experiments on ImageNet $64 \times 64$ as a demonstration of scalability? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: URL for Additional Figures: https://imgur.com/a/3UiYDVF $\textbf{[A1] $H_\theta$ estimates the vector field and $\gamma_\phi$ performs inference-time planning}$ We clarify the misunderstanding here. As shown in Additional Figure 2, the roles of $\theta$ and $\phi$ differ clearly. $H_{\theta}$ estimates the vector field by predicting the endpoint $x_0$ regardless of trajectory curvature. Adaptive coefficients ($\gamma_{\phi}$) determine nonlinear trajectory planning by adjusting velocity directions and step sizes based on predictions from $H_{\theta}$. Thus, the capability for nonlinear trajectory planning depends entirely on $\gamma_{\phi}$, which adaptively guides the inference until endpoints are reached. For clarity, we added Additional Figure 1 to illustrate MAC enabling distinct, adaptively curved trajectories for inference planning. Our core contribution, MAC, provides inference-time planning, meaning optimal inference plans are computed and optimized offline via simulation and then directly deployed during inference without additional computations. $\textbf{[A2] Scalability Experiments on ImageNet}$ We agree with the reviewer that experiments on ImageNet are essential for demonstrating scalability and practical value. However, due to substantial resource requirements (32 A100 GPUs for 2 weeks needed for pre-training $EDM_\gamma$ on ImageNet-64), conducting large-scale experiments within the rebuttal period is currently impractical. Nevertheless, we emphasize our method’s scalability, supported by the following points: As in current SoTA methods on ImageNet-256 ([5], [6], [7], [8]), diffusion models’ scalability primarily relies on latent diffusion paradigms [4]. Our ongoing research explicitly integrates MAC with LDM, discrete diffusion frameworks [7], and IMM [8]. These frameworks substantially reduce computational demands, making ImageNet-scale experiments feasible by leveraging available pre-trained models. 
Specifically, given that discrete diffusion frameworks naturally use highly multidimensional interpolated values for pre-training, optimizing MAC on discrete diffusion frameworks enables large-scale experiments using existing pre-trained models. Additionally, combining MAC with advanced models like IMM [8] can achieve high performance on large-scale datasets by using existing pre-trained models (explained in our response to Reviewer TFVT [A3]). MAC can be readily integrated with these frameworks, and we are currently preparing dedicated large-scale follow-up studies. For revision, we propose including experiments on ImageNet-64 using our existing setup with available $EDM_\alpha$ models. We believe this sufficiently demonstrates our method’s scalability within the scope of the original submission. $\textbf{[A3] Essential References Discussion}$ Thank you for identifying the missing literature ([1], [2], [3]). Briefly, the major conceptual difference is that prior works ([1], [2], [3]) optimize vector fields by predefined trajectory properties (e.g., straightness or minimal curvature) guided by optimal transport theory to reduce numerical errors. In contrast, our method does not enforce predefined inference trajectory optimality; instead, we adaptively discover optimal nonlinear trajectories by final transportation quality through simulation-based inference-time planning. This allows greater flexibility and potentially superior performance. $\textbf{[A4] Notation Clarifications}$ $G_{\theta, \phi}(\tau, x_T) = x_{\text{est}, \theta, \phi}$ (Details in Appendix C). $\hat{x}$ values are predictions from $H_{\theta}(t, x(t))$ for terminal points $x_0 \sim \rho_0$ and $x_1 \sim \rho_1$. $x_T$ denotes the actual initial sample point, while $z$ is sampled from the same distribution but unused during inference. We sincerely appreciate your thoughtful consideration and hope these clarifications effectively address your concerns. 
[1] Minimizing Trajectory Curvature of ODE-based Generative Models, ICML, 2023 [2] Constant Acceleration Flow, NeurIPS, 2024 [3] Simple ReFlow: Improved Techniques for Fast Flow Models, ICLR, 2025 [4] High-resolution Image Synthesis with Latent Diffusion Models, CVPR, 2021 [5] Scalable Diffusion Models with Transformers, ICCV, 2023 [6] SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers, ECCV, 2024 [7] [Mask] is All You Need, arXiv, 2025 [8] Inductive Moment Matching, arXiv, 2025 --- Rebuttal Comment 1.1: Comment: Thank you for the reply. Given the authors rebuttal and additional experiments on ImageNet64, I have raised the score from **Weak Reject** to **Weak Accept**.
Summary: This paper introduces Multidimensional Adaptive Coefficients (MAC) for flow and diffusion models, allowing coefficients to vary across dimensions and adapt to different starting points. The two-stage approach combines pre-training with multidimensional coefficients and adversarial refinement. Experiments across multiple frameworks and datasets show improved performance, including state-of-the-art results on CIFAR-10 conditional generation. ## update after rebuttal The author resolved my confusion. However, since reviewer MR8r's review was not addressed, I'm unsure what happened. Considering the contribution of the current work, my current rating remains unchanged. Claims And Evidence: The performance improvement claims are generally supported by comprehensive experiments across multiple frameworks and datasets. However, the "training efficiency" claim is problematic since the method requires training models from scratch rather than fine-tuning existing pre-trained models, which is a significant practical limitation. Methods And Evaluation Criteria: The evaluation metrics and benchmark datasets are appropriate for the task. However, there is a concern with the methodology. The hypothesis space design for MAC introduces many additional hyperparameters (M, s, q, LPF configuration). The paper lacks analysis of how baseline methods might perform if augmented with similar parameterization advantages. For example, it would be valuable to see how CTM would perform if pre-trained with comparable additional parameters before distillation. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: The experimental design is overall sound, with comprehensive testing across multiple frameworks and datasets. The ablation study in Table 6 shows that adversarial training provides most of the performance gain. 
Since adversarial training is not a novel contribution of this paper, this suggests that MAC's specific contribution to performance improvement is relatively limited. Supplementary Material: I reviewed all the supplementary materials. Relation To Broader Scientific Literature: The paper positions itself as novel by optimizing trajectory quality through simulation, but this approach is conceptually similar to: 1. Kim et al. (2024) (CTM) which also uses adversarial refinement on diffusion trajectories 2. Lu et al. (2024) (CD + GAN) which likewise employs adversarial refinement The primary difference, multidimensional coefficients, appears to provide only incremental benefits over standard adversarial refinement of model parameters. Essential References Not Discussed: I find that the existing citations sufficiently cover the related literature, though the novelty of the proposed approach remains limited. Other Strengths And Weaknesses: **Strengths**: 1. Comprehensive experiments across multiple frameworks demonstrate the method's versatility 2. Good visualizations of learned trajectories help explain the method's operation 3. The approach achieves state-of-the-art results on CIFAR-10 conditional generation **Weaknesses**: 1. The requirement to retrain models from scratch makes the method impractical for large-scale applications 2. The performance gains attributable specifically to MAC (versus general adversarial refinement) appear modest 3. The method introduces significant additional complexity through MAC hyperparameters Other Comments Or Suggestions: The paper would benefit from providing a more detailed analysis of computational overhead. Additionally, testing on higher-resolution images would help demonstrate the method's scalability to more complex generation tasks. Questions For Authors: When considering the total computation (pre-training + adversarial finetuning), how does your method compare to these alternatives? Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: URL for Additional Figures: https://imgur.com/a/3UiYDVF $\textbf{[A1] MAC’s core value lies in inference-time planning, orthogonal to vector field tuning methods like CTM and CD+GAN}$ CTM and CD+GAN optimize trajectories by adjusting the vector field parameter $\theta$. In contrast, our method (MAC) performs inference-time planning, meaning the optimal inference plans are computed and optimized offline via simulation and then directly deployed during inference without additional computations. As shown in Additional Figures 1 and 2, without MAC, trajectory planning’s search scope is limited to linear paths, restricting performance gains. MAC enables adaptive, nonlinear, dimension-wise trajectory planning, significantly expanding optimization flexibility. CTM and CD+GAN do not employ actual inference-time simulation feedback. Our method dynamically optimizes trajectories and timestep plans using simulations identical to inference, making it uniquely dynamic. MAC acts as a final-stage inference planning strategy, enhancing performance beyond what is achievable by optimizing the vector field alone. This means that MAC can be integrated with existing frameworks such as CTM and other diffusion or distillation methods, as their methodologies attempt to solve different problems. $\textbf{[A2] MAC Reduces Hyperparameter Engineering Costs}$ The modest gains observed with EDM result from EDM’s already highly optimized configuration, leaving limited margin for further improvement. (This is connected to the reason why we use $EDM_\gamma$ and coefficient labeling, explained below in [A3].) However, Sections 4.1 and 4.2 demonstrate significant improvements (~10x NFE efficiency) in less optimized frameworks (DDPM, FM, SI) by using MAC. MAC alleviates the extensive manual tuning burden associated with framework-specific hyperparameters, like in EDM. 
Although MAC requires tuning of hypothesis space parameters ($M, s, q$, LPF configs), these parameters are framework-independent. Hence, once identified, these parameters generalize well across various flow and diffusion models, providing considerable performance benefits without framework-specific engineering, as shown in our experiments. Regarding baseline comparison with similar parameterization: As demonstrated in Table 6, directly incorporating pre-training parameterization into EDM ($EDM_\gamma$) negatively impacts performance. Using this as a teacher network will degrade CTM performance unless combined with MAC for inference-time planning. MAC provides clear advantages specifically when used for adaptive inference-time planning, not as a general pre-training strategy. $\textbf{[A3] Practicality via Existing Pre-trained Models}$ Our method indeed requires higher computational costs compared to CTM (approximately 201 Mimg vs. 25 Mimg) when using the specific $EDM_\gamma$ pre-training setup, chosen deliberately to push the limits and achieve SoTA performance on CIFAR-10. However, we emphasize that using $EDM_\gamma$ and coefficient labeling is optional. MAC can practically and effectively leverage existing pre-trained models (such as $EDM_\alpha$), significantly reducing computational overhead without substantially compromising performance. Using $\gamma$ increases the probability of multidimensional interpolated values $x(t)$, but even using $\alpha$ inherently yields multidimensional interpolations, as Gaussian noise is independently sampled across dimensions. Specifically, given $x_1 \sim \rho_1 = \mathcal{N}(0, I_d) \in \mathbb{R}^d$, each dimension $x_{1,i}$ is independently drawn from $\mathcal{N}(0,1)$, and thus even a linear interpolation $x(t) = t \cdot x_1 + (1-t) \cdot x_0$ naturally results in dimension-wise distinct noise contributions. Empirical evidence from our experiments supports this claim. 
Section 4.1 demonstrates effective nonlinear trajectory planning in $OT-SI_{MAC}$ using models pre-trained with $\alpha$. Additionally, Figure 6 shows the trained $\phi$‘s distribution significantly differs from the pre-training distribution, indicating MAC’s effectiveness is not critically dependent on pre-training specifics. Thus, for large-scale practical scenarios where training from scratch is costly, MAC can effectively utilize existing models (e.g., $EDM_\alpha$), ensuring practicality and computational efficiency. $\textbf{[A4] Scalability (ImageNet)}$ Please refer to our response to Reviewer jRK3’s comment [A2]. We sincerely appreciate your thoughtful consideration and hope these clarifications effectively address your concerns. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' responses. My final concern lies in complete experimental comparisons, just like all the other reviewers, i.e., experiments on ImageNet64. While the authors note that extensive GPU resources would be needed for these experiments, there are strategies available to reduce computational demands, such as gradient accumulation. Although a complete comparison would be time-intensive, the preliminary results produced by a not fully converged model, could provide meaningful insights into the method's scalability. Given this, I maintain my current score. --- Reply to Comment 1.1.1: Comment: $\textbf{[A] ImageNet-64 Result Demonstrating Scalability and Practicality Using Only Existing Pre-trained Models}$ We would like to inform the reviewers that we have conducted an additional experiment on ImageNet-64, as requested. We achieved an FID of $\textbf{1.47}$ with 5(+) NFE by applying inference trajectory optimization $\textbf{using an existing pre-trained}$ $EDM_\alpha$ model, with only $\textbf{30k}$ training images for MAC. This result outperforms CTM with NFE = 2 (FID = 1.73), which requires $\textbf{61.4M}$ training images. 
Except for the batch size, which was set to 32 for $\gamma_\phi$, and 512 for $D_\psi$, all other configurations—including the model size for $w_\phi$—were kept identical to those used in the CIFAR-10, FFHQ, and AFHQ experiments. This demonstrates a substantial gain in training efficiency and scalability—achieving better performance with $\textbf{2048}\times$ fewer training samples than CTM. Furthermore, this result empirically supports our earlier rebuttal point [A3] to Reviewer TFVT, as it was obtained $\textbf{without any pre-training stage}$, directly proceeding to the adversarial trajectory optimization stage using an existing pre-trained model without any modification. We note that further training may improve FID scores, but we report the result as early as possible within the rebuttal period to provide a timely response. Additional ablation studies on ImageNet-64 will be included in the revision. We hope this new result effectively addresses the concerns regarding the practicality and scalability of our method.
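The dimension-wise interpolation at the heart of MAC, as discussed in the reviews and rebuttals above, can be sketched in a few lines. The coefficient values below are illustrative constants standing in for the learned $\gamma_\phi(t, x_T)$; this is not the paper's actual parameterization.

```python
import random

random.seed(0)
d = 4
x0 = [random.gauss(0, 1) for _ in range(d)]  # data-side endpoint
x1 = [random.gauss(0, 1) for _ in range(d)]  # noise endpoint; dimensions drawn independently

# Standard scalar interpolation: one coefficient t shared by all dimensions.
t = 0.3
x_scalar = [t * b + (1 - t) * a for a, b in zip(x0, x1)]

# Multidimensional adaptive coefficient: one mixing weight per dimension
# (illustrative constants in place of the learned gamma_phi).
gamma = [0.1, 0.3, 0.5, 0.9]
x_mac = [g * b + (1 - g) * a for g, a, b in zip(gamma, x0, x1)]
```

As the rebuttal's [A3] points out, even the scalar case already mixes dimension-wise distinct noise values (since $x_1$ is i.i.d. Gaussian); MAC additionally lets the mixing weight itself vary per dimension and per starting point.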
Summary: This paper introduces a new way to handle the interpolation between noise and data in diffusion and flow-based generative models. Unlike standard approaches that use an interpolation with a uniform scale across the entire image (like Rectified Flows, DDPM, and IDDPM), the authors propose extending this interpolation to a 2D space, matching the dimensions of the image and noise. This 2D interpolation is learned using a coefficient model, offering potentially more flexibility in the interpolation process. The training procedure involves three main components: a standard diffusion model trained with a 2D sampling of the time variable, a discriminator, and the coefficient model, all optimized jointly in the second stage to find better interpolation weights. ## update after rebuttal As I didn't receive any rebuttal from the author, I'll keep my current rating unchanged. After reviewing the author's rebuttal to other reviewers, I believe most of my concerns remain unaddressed, and the paper, especially the experiment section, still appears incomplete and not yet ready from my perspective. Claims And Evidence: The central claim is that the proposed 2D learned interpolation offers more flexibility and can lead to improved generative models. The paper presents experimental results on CIFAR-10 to support this claim. However, the evidence might be considered somewhat limited due to the restriction to a single, relatively low-resolution dataset. The lack of thorough ablation studies on the design choices of the coefficient parameterization also weakens the evidence for the specific design being optimal. Methods And Evaluation Criteria: The proposed method, involving the joint training of a diffusion/flow model, a discriminator, and a coefficient model with a 2D time sampling, presents an interesting alternative to standard interpolation techniques. The evaluation seems to primarily rely on FID scores obtained on the CIFAR-10 dataset. 
While FID is a common metric for generative model evaluation, the limited scope of the experiments raises questions about the generalizability of the findings. Theoretical Claims: This paper doesn't include new proofs or theoretical claims. Experimental Designs Or Analyses: The experimental design and analyses raise several concerns: - Limited Dataset: The evaluation is limited to CIFAR-10 (32x32 images), which is insufficient to demonstrate the effectiveness of the proposed method on more complex and higher-resolution datasets. I suspect that the increased dimensionality of the 2D time sampling might lead to optimization difficulties as the image size grows. - Lack of Ablation: The absence of ablation studies on the design choices for the coefficient parameterization is a significant oversight. The specific parameterization in equation 8 is quite hand-crafted, and it is unclear whether other designs might yield better results. - Inconsistent Results: There are discrepancies in the reported performance across different tables (e.g., Table 2, 3, and 4 compared to Table 5), and it's unclear if different models or settings were used. The comparison in Tables 2-4 is limited to NFE=10, which is unusually low for standard diffusion and flow models to achieve good performance. - Missing Baselines: Table 5 lacks many baseline results, making it difficult to properly assess the improvement offered by the proposed method. Furthermore, the comparison only includes distillation methods and inexplicably omits 1-Rectified Flow (Flow Matching), which seems like a relevant baseline. Supplementary Material: Yes, the supplementary mainly includes the experiment details and visualizations. Relation To Broader Scientific Literature: Previous work like rectified flow (flow matching), DDPM, or other variants mainly focuses on using a predefined and fixed interpolation.
This work extends it using a learned interpolation and expands it to 2D. Essential References Not Discussed: The paper includes the related references. Other Strengths And Weaknesses: Strengths: The paper proposes a novel idea of using a learned 2D interpolation for diffusion and flow models, which could potentially offer more flexibility than standard interpolation. Weaknesses: Please see the questions below. Other Comments Or Suggestions: Please see the questions part. Questions For Authors: 1. In the introduction, you mention a connection to Neural ODEs. Could you please elaborate on the nature of this relationship and how your work builds upon or relates to concepts from Neural ODEs? 2. Why was the specific parameterization for the coefficient model chosen as described in equation 8? Were other parameterizations explored, and if so, what were the results? An ablation study on this design choice would be beneficial. 3. The experimental evaluation is currently limited to CIFAR-10. Do you have plans to extend the evaluation to higher-resolution datasets like Imagenet 64x64 or include more comprehensive comparisons on datasets like FFHQ or AFHQ? Given the 2D time sampling, are there potential scalability challenges with increasing image size? 4. In equation 5, x_T is used but not defined. Similarly, in Algorithm 1, ρ_T is used without definition. What are these variables, and what is the difference between ρ_1 and ρ_T? 5. From Section 3.2.2 it seems that the method is currently applied to diffusion models predicting noise. Could you clarify if this approach can be extended to other types of diffusion and flow models, such as those predicting velocities? 6. Tables 2, 3, and 4 present results with NFE=10, which is unusually low for standard DDPM and Flow Matching methods. The performance also differs from Table 5. Were different models or training settings used in these tables? Please clarify these discrepancies. 7.
Table 5 is missing many baseline results, including 1-Rectified Flow (Flow Matching). Could you provide these missing results for a more comprehensive comparison? 8. What is the primary motivation for using adversarial training in your method? Could you compare the performance of your method with and without the adversarial loss to isolate the impact of the proposed interpolation technique? It's unclear whether the performance gains are solely due to the proposed interpolation method or if the adversarial training plays a significant role, especially in terms of FID improvement. 9. In Table 6, the FID scores for the first two rows (presumably vanilla EDM) are significantly higher than the EDM results in Table 5. Could you explain this difference? I assume the first row is vanilla EDM so it should match table 5 row 3 in diffusion models? Code Of Conduct: Affirmed. Overall Recommendation: 2
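The 2D (per-pixel) interpolation this review describes can be illustrated with a minimal sketch. This is not the paper's implementation: `interp_2d` and the constant `coeff` map below are hypothetical stand-ins for the learned coefficient model, shown only to contrast a per-pixel blend with the usual scalar schedule.

```python
def interp_2d(x0, noise, coeff):
    """Blend image and noise with a per-pixel coefficient map.

    x_t[i][j] = (1 - c[i][j]) * x0[i][j] + c[i][j] * noise[i][j]

    Standard schedules (DDPM, Rectified Flow) use one scalar c per
    timestep; the reviewed method learns a full 2D map instead.
    `coeff` here is a hypothetical stand-in for that learned map.
    """
    return [[(1 - c) * a + c * b for a, b, c in zip(r0, rn, rc)]
            for r0, rn, rc in zip(x0, noise, coeff)]

# Each pixel gets its own blending weight; a constant map would
# recover the usual scalar interpolation.
x0 = [[1.0, 1.0], [1.0, 1.0]]
eps = [[0.0, 0.0], [0.0, 0.0]]
xt = interp_2d(x0, eps, [[0.0, 0.5], [1.0, 0.25]])
```

With the map above, the four pixels of `xt` interpolate by different amounts, which is exactly the extra flexibility (and the extra optimization burden) the review's questions probe.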
Improved Learning via k-DTW: A Novel Dissimilarity Measure for Curves
Accept (poster)
Summary: The paper proposes k-DTW (k-Dynamic Time Warping) as a dissimilarity measure between polygonal curves. The motivation is that many diverse datasets can be thought of as curves and measuring appropriately the distance between them is a fundamental problem. Usually researchers have studied Fréchet distances and DTW for this purpose, and typically some transformation of one curve into another is involved. However, both aforementioned measures have their disadvantages: Fréchet distance is very sensitive to outliers, whereas DTW is not a metric since it does not satisfy the triangle inequality. To alleviate these issues with existing distances, the paper proposes k-DTW, which combines advantages of standard measures like Fréchet distance and of the standard DTW (essentially interpolating between the two), without their disadvantages. At a high level, the parameter k controls the degree of accuracy for the transformation performed between the two curves. Oftentimes, carrying out the whole transformation may be expensive and may also overfit to noise, whereas k-DTW only cares about the most important parts of the transformation, as measured by a small subset of size k, and ignores the rest. Although this may seem lossy, it offers some advantages. The main contributions of the paper are: 1) The first point the authors make is that k-DTW satisfies a strengthened triangle inequality compared to DTW and is thus closer to a proper metric, while retaining some robustness of DTW. 2) The second point is that there is an exact algorithm, as well as a (1 + eps)-approximation for k-DTW using a parametric search for the k-th largest matched distance with standard DTW on modified distance matrices as a subroutine. 3) Next, they show the first dimension-free learning bounds for clustering under k-DTW and a separation result showing that k-DTW has strictly smaller Rademacher and Gaussian complexity than DTW for clustering curves.
4) Finally, the authors provide experiments showing the benefits of k-DTW over the other two measures in the setting of clustering and nearest neighbor classification, on synthetic and real-world data. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes, except the learning section 4. Experimental Designs Or Analyses: yes. Supplementary Material: no Relation To Broader Scientific Literature: Important relation, since k-DTW could improve efficiency for comparing curves. Essential References Not Discussed: no Other Strengths And Weaknesses: The conceptual message of the paper is that if we don't care about the whole computation of DTW or Fréchet and we rather focus on the most important k parts of the computation, then we can have good algorithms. The idea is that curves can be approximated by polygonal curves if we select some vertices on them and connect them with affine segments, essentially interpolating between the points on the curves. The main definition introduced in the paper is Definition 2.2 with a parameter k: as k increases we care about more and more pairs of large distances between the two curves; specifically, for k = 1 we recover the Fréchet distance, and for k large enough we recover the DTW distance. I find the definition here very natural, and it comes as no surprise that once we focus on k terms in the summation, the triangle inequality will be respected up to a multiplicative factor of k (Lemma 2.3). After introducing their k-DTW distance, the authors rule out some perhaps straightforward approaches. One thing that is notable is that the k-DTW distance between two curves is NOT equal to just taking the largest k distances from the sum that yields their DTW distance. Their first main result is to give an exact algorithm with runtime proportional to the description of the curves and the number of distinct pairwise distances.
The analysis is non-trivial but it follows from the top-k framework of (Bertsimas & Sim, 2003). The second main result is a (1+eps)-approximation with a better runtime. The authors essentially shave off the number of distinct pairwise distances and replace it with 1/eps times the logarithm of k (for a fixed small accuracy eps). The third main result is what happens when we want to learn the median of curves sampled from a distribution over curves with vertices in the unit Euclidean ball of dimension d. The authors calculate the Rademacher and Gaussian complexities both for DTW and for k-DTW and they show how they can replace a strong dependence on the size of the polygonal curve with their parameter k. Importantly, because k can be much smaller than dimension d, this provides a dimension-independent bound for learning. No major weaknesses could be found. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 4
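The interpolation property discussed in this review (Definition 2.2: minimize, over monotone alignments, the sum of the k largest matched distances) can be checked on tiny examples with a brute-force sketch. This enumerates all alignment paths and runs in exponential time, so it illustrates only the definition as summarized in the review, not the paper's polynomial-time algorithm:

```python
import math

def k_dtw(P, Q, k):
    """Brute-force k-DTW: over all monotone alignment paths between
    curves P and Q, minimize the sum of the k largest matched
    distances.  k=1 recovers discrete Fréchet (min over paths of the
    max distance); k >= path length recovers DTW (min over paths of
    the total distance).  Exponential time -- tiny curves only."""
    n, m = len(P), len(Q)
    best = math.inf

    def walk(i, j, matched):
        nonlocal best
        matched = matched + [math.dist(P[i], Q[j])]
        if i == n - 1 and j == m - 1:
            best = min(best, sum(sorted(matched, reverse=True)[:k]))
            return
        if i + 1 < n:
            walk(i + 1, j, matched)
        if j + 1 < m:
            walk(i, j + 1, matched)
        if i + 1 < n and j + 1 < m:
            walk(i + 1, j + 1, matched)

    walk(0, 0, [])
    return best

# Two parallel 3-vertex segments at distance 1:
# k=1 gives the Fréchet value 1, large k gives the DTW value 3.
P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
```

Note the minimization happens jointly with the top-k selection, matching the review's remark that k-DTW is not simply the k largest terms of the optimal DTW alignment.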
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback.
Summary: The paper proposes a new distance between curves, the k-DTW distance, and makes several compelling arguments for its use. These include better near-metric properties than DTW, and better learning complexity than Fréchet distance - k-DTW generalizes both. The paper provides an efficient approximation algorithm (with clear provable guarantees), heuristics for improving runtime in practice while maintaining guarantees, and learning complexity results for learning a median curve from a set. Each step involves non-trivial insights. Moreover, the paper shows empirical improvement on synthetic (for clustering) and real data sets (for classification) that demonstrate clear improvement over the standard DTW and Fréchet distances. Data analysis on curve data is an active area, and this paper provides an inventive and comprehensively presented new method in this area. Moreover, I found it surprising that the new method works so well - it considers the longest k distances in a matched monotonic sequence (DTW uses all, and Fréchet uses the 1 longest), and this has very different properties and seems to be more robust than either. Moreover, I found the paper clearly written. It combines a variety of complicated theoretical and practical perspectives. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes (for the most part), the properties proposed to justify the metric are clearly explained and stated formally. The main proofs are proven or sketched in the main paper, and the ones deferred to the appendix make sense, and I did not find any concerns. Experimental Designs Or Analyses: They seem reasonable. There are three experiments: clustering (synthetic -- it illustrates the advantages of k-DTW well), classification on learning analytics (theirs does best among DTW, Fréchet), and classification letters/trajectories (theirs often does best).
I would have liked to have seen a more explicit parameter search over k on the training data, with the chosen k then used once on evaluation data. Supplementary Material: I quickly reviewed proofs, and looked at experiments. Relation To Broader Scientific Literature: I think it is fine. It covers important aspects of curve distances, although it may be useful to compare to things like Edit Distance with Real Penalties: Chen and Ng. "On the marriage of lp-norms and edit distance" in VLDB 2004. Essential References Not Discussed: No Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
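The protocol this reviewer asks for (search over k on training data only, then apply the chosen k once to evaluation data) could be sketched as a small cross-validation loop. `accuracy_fn` is a hypothetical callable, a placeholder for something like l-NN accuracy under k-DTW on one held-out fold; it is not from the paper:

```python
import random

def cv_select_k(n_items, candidates, accuracy_fn, folds=5, seed=0):
    """Pick the candidate k with the best mean score across folds.

    accuracy_fn(k, test_indices) -> score on that held-out fold.
    The winning k would then be fixed and applied once to the
    final evaluation set, avoiding selection on test data.
    """
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)
    # Interleaved assignment gives `folds` disjoint held-out chunks.
    chunks = [idx[f::folds] for f in range(folds)]

    def mean_score(k):
        return sum(accuracy_fn(k, test) for test in chunks) / folds

    return max(candidates, key=mean_score)
```

A dummy scorer that peaks at k=4, e.g. `lambda k, test: 1 - abs(k - 4) / 20`, makes `cv_select_k(100, [1, 2, 4, 8, 16], ...)` return 4, which is the shape of the selection step being requested.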
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. **Question 1:** Edit Distance with Real Penalties, as in: Chen and Ng, VLDB 2004. **Answer 1:** We will include some references, discussions and baseline experiments for your suggestion along with another suggestion of reviewer "MvAs". Please see our reply to "MvAs" for details on the other distance measures. The following table shows the results of our experiments, extended to weak Fréchet distance, partial DTW, and Edit Distance with Real Penalties, accompanied with the best performing $k$-DTW (cf. Table 4 in our submission). The best performing measure for each case is highlighted in $\textcolor{blue}{\text{blue}}$, and the worst in $\textcolor{red}{\text{red}}$. One can see that in the majority of cases, $k$-DTW either still performs $\textcolor{blue}{\text{best}}$, or $\textcolor{orange}{\text{close to the best}}$. The only exception is "Char0uw" where ERP excels and beats *all* others by a large margin. $k$-DTW is still second best in these cases. Notably, *all* competitors have some worst cases, while $k$-DTW is *never* worst.

| Dataset | Distance | AUC (std.err.) | Accuracy (std.err.) | $F_1$-Score (std.err.) |
|------------|-----------------|---------------------------------------|---------------------------------------|---------------------------------------|
| Cars+Bus | PartialDTW | $\textcolor{blue}{0.58362 (0.00383)}$ | $\textcolor{blue}{0.60033 (0.00314)}$ | $\textcolor{blue}{0.71338 (0.00258)}$ |
| | WeakFréchet | $\textcolor{red}{0.51607 (0.00396)}$ | $\textcolor{red}{0.53208 (0.00346)}$ | $0.64876 (0.00298)$ |
| | EditRealPenalty | $0.56580 (0.00324)$ | $0.54900 (0.00258)$ | $\textcolor{red}{0.62282 (0.00269)}$ |
| | $m/4$-DTW | $\textcolor{orange}{0.57633 (0.00329)}$ | $\textcolor{orange}{0.56250 (0.00266)}$ | $0.64711 (0.00264)$ |
| Sim.C+B | PartialDTW | $\textcolor{red}{0.83221 (0.00068)}$ | $\textcolor{red}{0.63397 (0.00092)}$ | $\textcolor{red}{0.46484 (0.00178)}$ |
| | WeakFréchet | $0.87942 (0.00047)$ | $0.77073 (0.00071)$ | $0.75483 (0.00089)$ |
| | EditRealPenalty | $0.90888 (0.00041)$ | $0.79709 (0.00059)$ | $0.76120 (0.00088)$ |
| | $m/4$-DTW | $\textcolor{blue}{0.91712 (0.00041)}$ | $\textcolor{blue}{0.79836 (0.00052)}$ | $\textcolor{blue}{0.76566 (0.00076)}$ |
| Char0uw | PartialDTW | $0.98622 (0.00027)$ | $\textcolor{red}{0.91445 (0.00059)}$ | $\textcolor{red}{0.91930 (0.00059)}$ |
| | WeakFréchet | $\textcolor{red}{0.97652 (0.00033)}$ | $0.92185 (0.00072)$ | $0.92257 (0.00075)$ |
| | EditRealPenalty | $\textcolor{blue}{0.99819 (0.00009)}$ | $\textcolor{blue}{0.97906 (0.00025)}$ | $\textcolor{blue}{0.97930 (0.00029)}$ |
| | $\ln(m)$-DTW | $\textcolor{orange}{0.98374 (0.00025)}$ | $0.93341 (0.00061)$ | $0.93553 (0.00061)$ |
| Char1nw | PartialDTW | $0.94604 (0.00051)$ | $0.87546 (0.00109)$ | $0.87802 (0.00107)$ |
| | WeakFréchet | $\textcolor{red}{0.91767 (0.00083)}$ | $\textcolor{red}{0.83866 (0.00139)}$ | $\textcolor{red}{0.82806 (0.00164)}$ |
| | EditRealPenalty | $\textcolor{blue}{0.95442 (0.00048)}$ | $0.86093 (0.00100)$ | $0.85042 (0.00121)$ |
| | $m/4$-DTW | $\textcolor{orange}{0.95289 (0.00045)}$ | $\textcolor{blue}{0.88475 (0.00111)}$ | $\textcolor{blue}{0.88545 (0.00113)}$ |
| Char2nu | PartialDTW | $0.99355 (0.00019)$ | $0.95519 (0.00045)$ | $0.95334 (0.00049)$ |
| | WeakFréchet | $\textcolor{red}{0.98281 (0.00025)}$ | $\textcolor{red}{0.93364 (0.00066)}$ | $\textcolor{red}{0.92836 (0.00075)}$ |
| | EditRealPenalty | $\textcolor{blue}{0.99830 (0.00007)}$ | $\textcolor{blue}{0.96695 (0.00030)}$ | $\textcolor{blue}{0.96573 (0.00035)}$ |
| | $m/4$-DTW | $\textcolor{orange}{0.98949 (0.00016)}$ | $\textcolor{orange}{0.94757 (0.00048)}$ | $\textcolor{orange}{0.94475 (0.00057)}$ |
| TwoPersons | PartialDTW | $0.94616 (0.00052)$ | $0.91961 (0.00026)$ | $0.92637 (0.00037)$ |
| | WeakFréchet | $0.95376 (0.00056)$ | $0.94832 (0.00002)$ | $0.95324 (0.00021)$ |
| | EditRealPenalty | $\textcolor{red}{0.85989 (0.00115)}$ | $\textcolor{red}{0.77931 (0.00004)}$ | $\textcolor{red}{0.76642 (0.00051)}$ |
| | $m/10$-DTW | $\textcolor{blue}{0.96191 (0.00053)}$ | $\textcolor{blue}{0.94832 (0.00002)}$ | $\textcolor{blue}{0.95324 (0.00021)}$ |

**Question 2:** More explicit parameter search over $k$. **Answer 2:** We refer to Table 2 in the appendix of our submission for a more extensive parameter search over $k$, cross-validated and independently repeated $100$ times. We may add the suggested evaluation in the next revision. --- Rebuttal Comment 1.1: Comment: thanks for updated experiments with EditRealPenalties. I retain my score of 4: accept.
Summary: This paper proposes a new distance measure called $k$-DTW, which is positioned as an interpolation between the classical DTW distance and the Fréchet distance. The technical novelty is to consider only the top $k$ matched distances in the alignment path, rather than summing all distances (as in DTW) or taking the maximum distance (as in Fréchet). The authors prove that $k$-DTW interpolates these two classical measures for $k = 1$ and $k$ sufficiently large. Empirical results show that $k$-DTW can yield stronger triangle-like properties than DTW, while still being more robust to outliers than Fréchet. A parametric search algorithm is introduced for computing $k$-DTW exactly (as well as a $(1+\varepsilon)$-approximation). The paper demonstrates applications in clustering (via hierarchical agglomerative clustering) and in nearest neighbor classification on synthetic and real-world datasets. The main contribution is that $k$-DTW addresses key shortcomings of Fréchet (outlier-sensitivity) and DTW (lack of metric-like properties) and, from experiments, appears beneficial in tasks like clustering and classification. Claims And Evidence: Claim 1: k-DTW is more robust to outliers than Fréchet distance. Evidence: The synthetic-clustering experiment shows how spikes in type-A curves heavily penalize Fréchet but do less harm under k-DTW. However, the paper does not provide a standalone "robustness theorem." The authors rely primarily on intuitive arguments and empirical demonstrations. Claim 2: k-DTW has stronger metric-like properties than DTW. Evidence: They prove a relaxed triangle inequality with a factor of k, whereas DTW's violation can be worse by factors depending on curve length. The paper contains theoretical lemmas (especially Lemma 2.3) and the associated proofs. Empirically, the authors illustrate fewer pathological merges in clustering under k-DTW compared to DTW. Claim 3: The proposed k-DTW algorithm is feasible for practical use.
Evidence: An exact algorithm and a (1+ε)-approximate algorithm are provided, though in worst cases it can be more expensive than classical DTW or Fréchet. The experiments show that it is slower than DTW and discrete Fréchet in practice. Methods And Evaluation Criteria: The authors' methods include: - Defining k-DTW and demonstrating how to compute it via parametric search. - Evaluating clustering via HAC with single-linkage and complete-linkage. - Conducting l-nearest neighbor classification on real-world data. These choices are appropriate to showcase the value of k-DTW in both supervised and unsupervised tasks. The design of synthetic data for highlighting robustness to peaks/outliers is quite clear. However, adding additional baseline distances like partial DTW or weak Fréchet could further solidify the empirical evaluations. Theoretical Claims: I checked the theoretical claims as laid out in Sections 2 and 3 of the paper: - The proof of the relaxed triangle inequality (Lemma 2.3) is good. - One point that is less formally addressed is the outlier-robustness: the paper does not contain a specialized "robustness theorem." Instead, it uses examples and partial discussions to argue that k-DTW dilutes spikes in the alignment cost. Experimental Designs Or Analyses: - The authors constructed three curve types (A, B, C) to highlight the different shortcomings of Fréchet and DTW. This approach is sound: it clearly demonstrates how one or two spikes can dominate Fréchet, or how DTW can produce surprising alignments. - They then used hierarchical clustering (single-linkage and complete-linkage) and visually showed the intra-/inter-cluster distances. The analysis effectively highlights differences among the measures. - The real-world classification experiment (l-NN) is straightforward but adequate to underscore classification improvements. However, the paper could be enriched by including additional distance measures (partial DTW, weak Fréchet) for completeness. 
Supplementary Material: I checked references to additional proofs and extended experiments in the appendix, but did not observe a separate outlier-robustness theorem in the supplementary. Relation To Broader Scientific Literature: The paper explicitly compares $k$-DTW to the two classic distances: DTW and Fréchet. That is a direct relationship to established literature in computational geometry and time series analysis. We note that variants like weak Fréchet distance or partial DTW also exist, but they are not tested. The top-$k$ approach has precedents in top-$k$ optimization and the Ky-Fan norm (summing the $k$ largest singular values). I recommend referencing additional distances like partial DTW and weak Fréchet, which might be beneficial to help the robustness comparison. Essential References Not Discussed: I recommend referencing additional distances like partial DTW and weak Fréchet. Other Strengths And Weaknesses: Strengths: - Proposes a simple and new bridging distance between two classic curve distances. - Demonstrates some theoretical foundations like the relaxed triangle inequality and dimension-free learning bounds, and provides practical benefits. - Proposed an algorithm to compute k-DTW, plus the (1+ε)-approximation. Weaknesses: - No explicit "robustness theorem" about outliers, only partial arguments via examples. - Time complexity is worse than classical DTW or discrete Fréchet distance, both in theory and experimental results. - The experimental comparisons could be further supported by including partial DTW, weak Fréchet, or other established distances for completeness. Other Comments Or Suggestions: na Questions For Authors: You primarily demonstrate robustness to outliers through experiments. Could you formalize the observed outlier-robustness into a dedicated theorem or formal analysis? Are there specific heuristic strategies or data-structural optimizations you foresee to further enhance the runtime efficiency of the k-DTW algorithm?
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. **Question 1:** Formalize robustness. **Answer 1:** The concept of robustness for curve distance measures could be formalized as follows: given two curves whose matched vertices are at constant distance, say $1$, if we move one vertex away to increase the distance by a large value $\Delta$, then the average distance contributed per vertex increases through this modification by $\Delta/\Theta(k)$ for $k$-DTW, $k\in\{1,\ldots, m\}$. This means that Fréchet is largely dominated by the single outlier, while for $k$-DTW the increase averages out, so that up to a factor $(1\pm\varepsilon)$, one single large perturbation of order $\Delta \approx \varepsilon k$ is indistinguishable from tiny $\varepsilon$ perturbations of (all) single vertices. This supports and quantifies the intuition and may be included as a lemma in our next revision. To make the intuition more rigorous, we will examine the statistical breakdown point for $k$-DTW, which is defined to be the smallest fraction of vertices to be perturbed to corrupt the distance. A larger breakdown point indicates more robustness to arbitrary perturbations. As all $k$-DTW variants are closely related to the geometric median (minimizing a sum of distances), we will adapt the breakdown point analysis of the median [1] to our setting. It seems to be provable that the breakdown point of $k$-DTW is $\lfloor \frac{k+1}{2}\rfloor/m$. This would imply that as long as $k\in \omega(1)$, the breakdown point is asymptotically larger than for Fréchet and thus considerably more robust. However, to reach a constant breakdown point that provides robustness comparable to DTW for arbitrarily long curves, $k$ would still need to be of order $\Omega(m)$. 
[1] Lopuhaä, Rousseeuw - Breakdown Points of Affine Equivariant Estimators of Multivariate Location and Covariance Matrices, The Annals of Statistics, 1991 **Question 2:** Heuristic strategies or data-structural optimizations to enhance the runtime. **Answer 2:** To enhance the runtime of our $k$-DTW algorithm *without* losing theoretical guarantees, we currently see two options: 1) The current DTW subroutine is the plain DP algorithm without any tweaks. It could potentially be enhanced using heuristic DTW speed-ups such as [2] as a black box. 2) Our first heuristic (line 206, right) could be enhanced by computing first only a subset of the DTW paths in a narrow band along the main diagonal. This works in linear $O(m)$ time and may be promising for reducing the variable 'mincost' early on so as to cut off many unnecessary iterations. We will also add the challenging open problem of improving the top-$k$ optimization framework that our algorithms build upon. This will hopefully yield provably faster exact algorithms for other top-$k$ problems as well. [2] Silva, Batista - Speeding up all-pairwise dynamic time warping matrix calculation, ICDM 2016. **Question 3**: Additional baselines like partial DTW and weak Fréchet. **Answer 3**: Our claim is that $k$-DTW interpolates between the two extreme cases Fréchet and DTW. Comparisons to these two are thus most natural for supporting our claims. Please note that weak Fréchet has the same sensitivity to outlier vertices as standard Fréchet. Similarly, partial DTW admits only a factor $m$ triangle inequality. Hence, they do not contribute towards the goals of our paper. However, we agree on adding a few more baselines for completeness. We will thus include references, discussions and baseline experiments for partial DTW or weak Fréchet distance in addition to the suggestion of reviewer "uWfM". Please see our reply to "uWfM" for details and preliminary experiments.
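The second speed-up in Answer 2 (run the DTW DP only in a narrow band along the main diagonal to get a cheap upper bound that seeds 'mincost') resembles the classical Sakoe-Chiba band. A minimal sketch of that idea, not the authors' implementation, assuming equal-length curves so the band is feasible:

```python
import math

def band_dtw(P, Q, band):
    """DTW cost restricted to cells with |i - j| <= band.

    Any banded alignment path is also a valid unrestricted path, so
    the returned value upper-bounds full DTW and can initialize a
    branch-cutting threshold such as the rebuttal's 'mincost'.
    Assumes len(P) == len(Q); runs in O(m * band) time.
    """
    n, m = len(P), len(Q)
    INF = math.inf
    D = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(max(0, i - band), min(m, i + band + 1)):
            d = math.dist(P[i], Q[j])
            if i == 0 and j == 0:
                D[i][j] = d
            else:
                # Cells outside the band stay INF, pruning those paths.
                prev = min(D[i - 1][j] if i > 0 else INF,
                           D[i][j - 1] if j > 0 else INF,
                           D[i - 1][j - 1] if i > 0 and j > 0 else INF)
                D[i][j] = d + prev
    return D[n - 1][m - 1]
```

With `band=0` only the diagonal is explored (linear time, as in the rebuttal's suggestion); widening the band can only lower the cost, so the band-0 value is a valid starting upper bound.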
Permutation-based Rank Test in the Presence of Discretization and Application in Causal Discovery with Mixed Data
Accept (poster)
Summary: This paper introduces the Mixed data Permutation-based Rank Test (MPRT) for testing the rank of cross-covariance matrices in the presence of discretized variables. The authors address a critical gap in existing rank tests, which assume continuous measurements and fail when variables are discretized. MPRT leverages permutation-based resampling and maximum likelihood correlation estimation to handle mixed data. Theoretical guarantees on Type I error control and empirical validation on synthetic and real-world datasets demonstrate its effectiveness. The method is also applied to causal discovery, showing improved performance over traditional conditional independence tests in discretized settings. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: Yes, I checked the experiments in Section 4. Supplementary Material: Yes, I reviewed Appendix C in the supplementary material. Relation To Broader Scientific Literature: This paper utilizes the permutation strategy to extend classical rank tests to discretized data. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: This paper proposes a valid rank test for discretized data, addressing limitations of classical CI tests that fail under discretization. Weakness: The theoretical guarantees rely on joint Gaussianity. While Appendix C.2 briefly discusses non-Gaussian cases, robustness to severe non-Gaussianity or nonlinear relationships is unclear. Other Comments Or Suggestions: None. Questions For Authors: 1. Can discretization create or break CI relationships, especially when the data is non-Gaussian or exhibits nonlinear dependencies? Specifically, are there cases where CI holds before discretization but not after, or vice versa? 2. It seems that the threshold estimation in Equation (14) assumes the underlying variables follow a standard Gaussian distribution. 
How might deviations from this assumption affect the validity of correlation estimates or rank test results? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the time dedicated to reviewing our paper, the insightful comments, and valuable feedback. Please see our point-by-point responses below. **Q1:** While Appendix C.2 briefly discusses non-Gaussian cases, robustness to severe non-Gaussianity or nonlinear relationships is unclear. **A1:** To the best of our knowledge, under the linear Gaussian assumption, none of the existing rank tests can properly handle the issue of discretization and thus we believe our method is already a non-trivial contribution to this field. At the same time, we totally agree that non-Gaussian or nonlinear cases can also be highly relevant in practical applications. Extending our framework to accommodate non-Gaussian cases is certainly feasible, as discussed in Appendix C.2. Addressing nonlinear cases is certainly more challenging because, even without discretization, the rank constraints (i.e., vanishing determinant constraints implied by trek separation) rely on linearity; extending this to arbitrary nonlinear relationships remains an open problem in the field of causal discovery. We thank the reviewer for the insightful comments and we plan to leave it for future exploration. **Q2:** Can discretization create or break CI relationships? **A2:** Yes, discretization generally breaks CI relations regardless of the existence of non-Gaussianity or non-linearity (as long as we are not referring to the trivial case where variables are discretized into just one value, resulting in complete loss of information). For example, consider continuous variables $\mathsf{X},\mathsf{Y},\mathsf{Z}$ following a causal graph $\mathsf{X}\leftarrow\mathsf{Z}\rightarrow\mathsf{Y}$. By d-separation we know that $\mathsf{X} \perp \mathsf{Y} | \mathsf{Z}$. However, if we can only observe a discretized version of $\mathsf{Z}$, say $\mathsf{Z}'$, the CI relation $\mathsf{X} \perp \mathsf{Y} | \mathsf{Z}'$ generally does not hold.
The reason is that discretization from $\mathsf{Z}$ to $\mathsf{Z}'$ introduces information loss and thus $\mathsf{Z}'$ no longer retains the complete information to explain the dependence between $\mathsf{X}$ and $\mathsf{Y}$. The same reasoning also applies to the rank test. Roughly speaking, a lower rank of the cross-covariance indicates greater independence between two sets of variables. The presence of discretization generally induces a higher rank than the correct one, leading to a false indication of dependence and thus breaking the independence. For example, as shown in Figure 1 of our submission, panel (a) shows the population cross-covariance without discretization, where the rank is 1, while panel (b) shows the cross-covariance computed from discretized observations, resulting in a rank of 3. **Q3:** It seems that Equation 14 assumes the underlying variables follow a standard Gaussian distribution. How might deviations from this assumption affect the validity of correlation estimates or rank test results? **A3:** Violation of this assumption, e.g., shifting and rescaling, does not affect the validity of the whole method, as long as all the variables are still jointly Gaussian. This is because we care about the rank of the cross-covariance matrix, which is equal to the rank of the cross-correlation matrix; the latter is clearly invariant to shift or rescaling of either some or all variables (also mentioned in line 229). Thus, we can simply assume that all variables are standardized for Equation 14. We genuinely appreciate the reviewer's effort and hope that your concerns/questions are addressed.
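The rebuttal's example (X ← Z → Y, with only a discretized Z' observed) can be checked numerically with a small simulation: the partial correlation of X and Y given the continuous Z is near zero, while given the binarized Z' it is clearly nonzero, so conditioning on the discretized variable no longer removes the dependence. The helper names below are illustrative, not from the paper:

```python
import random
from math import sqrt

def pearson(a, b):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / sqrt(va * vb)

def partial_corr(x, y, z):
    """First-order partial correlation rho(X, Y | Z)."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz**2) * (1 - ryz**2))

rng = random.Random(0)
z = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
x = [zi + rng.gauss(0.0, 1.0) for zi in z]       # X <- Z
y = [zi + rng.gauss(0.0, 1.0) for zi in z]       # Y <- Z
z_disc = [1.0 if zi > 0 else 0.0 for zi in z]    # discretized Z'

p_cont = partial_corr(x, y, z)       # near zero: CI holds given Z
p_disc = partial_corr(x, y, z_disc)  # clearly nonzero given Z'
```

Under this linear Gaussian setup the population partial correlation given Z is exactly 0, while given the sign-binarized Z' it works out to roughly 0.27, matching the rebuttal's point that discretization of the conditioning variable breaks the CI relation.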
Summary: The authors consider the task of finding the rank of cross-covariance matrices when some variables have been discretised. The authors consider a permutation-based rank test that is able to handle both discrete and continuous variables. They show that their method is able to perform well with both continuous and mixed data. ## update after rebuttal I thank the authors for their response. I will keep my positive score. Claims And Evidence: The claims made in the paper are supported. Methods And Evaluation Criteria: I can't find any information about the data generation for the experiments. It would be useful to see the performance under varying data-generating assumptions and varying degrees of discretisation. Theoretical Claims: Theoretical claims seem correct. Experimental Designs Or Analyses: - The PC algorithm does not output a single DAG but its Markov equivalence class (the CPDAG). As such, the SHD and F1 scores might not make sense. The SHD between CPDAGs might be a better metric. If this is the actual metric that is calculated, it should be made clear. Supplementary Material: Proofs. Relation To Broader Scientific Literature: The main contribution is a permutation-based rank test that can also handle discrete data. It is not clear which of the other parts are contributions of the paper. Is the estimation of the correlation with discretisation novel (section 3.3)? It seems to follow known results. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: Strengths: - The paper tackles an important problem of discretisation. Weaknesses: - Lack of information on the experiments makes it hard to judge whether they were comprehensive. - Although they tackle the problem of discretisation, they only do this for rank tests. Causal discovery has frameworks that can handle both mixed and continuous data. For example, [1] is tested on both continuous and discrete data. [1] Lopez-Paz, David, Philipp Hennig, and Bernhard Schölkopf. 
"The randomized dependence coefficient." Advances in neural information processing systems 26 (2013). Other Comments Or Suggestions: N/A Questions For Authors: - I don't understand the argument in L241 LHS, why can Lemma 4 not be used for discrete values? - Section 4.2: Just to be clear, CCART-C when applied on the mixed data, is not actually fed mixed data, but works on the original continuous values? If so, it might also be interesting to see what happens when mixed data is provided to CCART-C. - What is the ground truth for section 4.4? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the time dedicated to reviewing our paper, the insightful comments, and valuable feedback. Please see our point-by-point responses below. **Q1:** About data generation for experiments? **A1:** We assume that data is generated following a linear structural causal model $\mathsf{V}\_i=\sum \nolimits\_{\mathsf{V}\_j \in \text{Pa}(\mathsf{V}\_i)} f\_{j,i} \mathsf{V}\_j + \epsilon\_{\mathsf{V}\_i}$, where each edge coefficient $f_{j,i}$ is uniformly sampled from $[-2, 2]$ and the noise terms are Gaussian with variance uniformly sampled from $[1,5]$. Within this model, the population covariance can be directly specified once $f\_{j,i}$ and the variance of $\epsilon\_{\mathsf{V}\_i}$ are specified. Information about the process of discretization can be found in lines 369-373. We thank the reviewer and have revised our paper to include the above description. **Q2:** The metric used in Table 1? **A2:** As mentioned in line 425, the metrics used in Table 1 are SHD and F1 scores for the skeleton, which are also commonly used in the literature. In light of your suggestion, we conducted additional experiments to compare the CPDAGs in terms of SHD and F1, and our proposed MPRT still consistently outperforms other methods across different sample sizes. We thank the reviewer for the valuable comment and have revised our paper to include this additional result. **Q3:** Although they tackle the problem of discretisation, they only do this for rank tests. Causal discovery has frameworks that can handle both mixed and continuous data. For example [1] is tested on both continuous and discrete data? **A3:** Thank you for mentioning this interesting work and we have revised our manuscript to add a related discussion. 
From one perspective, since the conditional independence constraint (in the linear case) is a special case of the rank constraint of covariance matrices [3], the proposed method can also be used to test CI in the presence of discretization, as empirically validated by the results in Table 1. From another perspective, as you mentioned, there are some existing works that can handle mixed data for CI, e.g., RDC [1] and KCI [2]. In light of your suggestion, we conducted additional experiments to compare with KCI in Table 1 (it is hard to compare to RDC as it can only measure independence rather than conditional independence). We observe that the proposed MPRT outperforms KCI by a clear margin, e.g., when $N=2000$, MPRT achieves an F1 score of 0.96 while KCI achieves only 0.86. This outcome is expected, as KCI (like other CI tests designed for mixed data) directly tests the relations between observed variables, while the real objective is to identify the relations between the underlying continuous variables. Please kindly refer to our response to Q2 for reviewer QSUp for a related discussion. **Q4:** Regarding the argument in L241 LHS, why can Theorem 4 not be used for discrete values? **A4:** There are two reasons. (i) Directly using the ordinal values cannot produce the correct linear transformation $A$ (it does not converge to the true one). (ii) Even if we could produce the correct $A$, when some columns of data are just ordinal values, after the linear transformation the entries of $\mathbf{C_X}_{k:}$ would be linear combinations of some continuous and some ordinal values, which can be arbitrary (as the choice of ordinal values can take either $\{1,2,3\}$ or $\{1,2,10^{10}\}$) and does not have physical meaning. **Q5:** Regarding Section 4.2, just to be clear, CCART-C when applied on the mixed data, is not actually fed mixed data, but works on the original continuous values? If so, it might also be interesting to see what happens when mixed data is provided to CCART-C. 
**A5:** Yes, CCART-C takes the original continuous values as input, and thus in our experiments its performance serves as the upper bound. When the CCA-based rank test takes the mixed data as input, it is named CCART-D in our experiments (as mentioned in section 4.1). As expected, CCART-D cannot properly control the type-I error and it does not benefit from the increase of the sample size, as shown in Figure 2. **Q6:** The ground truth for section 4.4? **A6:** The structure that underlies human personality remains an open research problem, and thus there is no established ground truth for it yet. We genuinely appreciate the reviewer's effort and hope that your concerns/questions are addressed. [1] Lopez-Paz, The randomized dependence coefficient. 2013. [2] Zhang, Kernel-based conditional independence test and application in causal discovery. 2012. [3] Sullivant, Trek separation for Gaussian graphical models. 2010.
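The linear-SCM generation scheme described in A1 above can be sketched as follows (the function name and the toy DAG are ours, for illustration only; the edge-coefficient and noise-variance ranges match the description in A1):

```python
import numpy as np

def sample_linear_scm(adj, n, rng):
    """Ancestral sampling from a linear Gaussian SCM.

    adj[j, i] = 1 marks an edge V_j -> V_i (nodes assumed in topological
    order). Edge coefficients are drawn uniformly from [-2, 2] and noise
    variances uniformly from [1, 5], as described in A1 above.
    """
    d = adj.shape[0]
    coef = adj * rng.uniform(-2.0, 2.0, size=(d, d))
    noise_std = np.sqrt(rng.uniform(1.0, 5.0, size=d))
    V = np.zeros((n, d))
    for i in range(d):
        V[:, i] = V @ coef[:, i] + noise_std[i] * rng.normal(size=n)
    return V

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]])  # hypothetical DAG: V0 -> V1, V0 -> V2, V1 -> V2
data = sample_linear_scm(adj, 5000, rng)
```

Note that coefficients sampled near zero can make the generated graph nearly unfaithful; in practice one may wish to resample coefficients with small magnitude.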
Summary: This paper introduces the Mixed Data Permutation-Based Rank Test (MPRT), an approach designed to address the challenge of discretization in rank tests. The proposed MPRT estimates the asymptotic null distribution by leveraging data permutation, which is grounded in the exchangeability condition of the linear projection of the relevant variables. The authors demonstrate the efficacy of the MPRT through comprehensive synthetic and real-world experiments. Claims And Evidence: No. The correctness of Thm. 4, which is the key result that the proposed test relies on, is problematic. First, there is no clear definition of "asymptotic independence". There are various definitions in the literature [1,2], but it is unclear which definition the authors adopt. From the proof, I presume that the authors define two sequences of random vectors $X_n$ and $Y_n$ to be asymptotically independent if there exist $X, Y$ such that $X \perp Y$ and $X_n \to X, Y_n \to Y$ in distribution [1]. But this needs clarification. Second, the asymptotic independence between $C_{X k:}$ and $C_{Y k:}$ is not discussed in a rigorous way. The authors should follow the definition and show the convergence of $C_{X k:}, C_{Y k:}$ to some $C^\prime_{X}, C^\prime_{Y}$ such that $C^\prime_{X} \perp C^\prime_{Y}$. From the proof, I conjecture that the authors may want to show this by first showing the convergence of $\hat{\Sigma}_X, \hat{\Sigma}_Y$ to $\Sigma_X, \Sigma_Y$, and then using the continuous mapping theorem to show the convergence of $C_{X k:}, C_{Y k:}$, which are computed from $\hat{\Sigma}_X, \hat{\Sigma}_Y$ with SVD decomposition. But this can be highly problematic: 1. The convergence of $\hat{\Sigma}_X, \hat{\Sigma}_Y$ obtained from pseudo-maximum likelihood is unknown. I do not find results in the paper cited by the authors (Besag, 1974). 2. The SVD is not unique and thus cannot be considered as a continuous function (see [3]). So the continuous mapping theorem may not apply here. 
Can the authors provide further clarifications to my questions? I ask for particular rigor because the proposed test highly relies on this independence result. I am happy to increase my score if the authors can address these questions properly. [1] https://math.stackexchange.com/questions/1272661/is-there-a-concept-of-asymptotically-independent-random-variables [2] https://arxiv.org/pdf/1910.04243 [3] https://math.stackexchange.com/questions/3389899/continuity-of-singular-value-decomposition Methods And Evaluation Criteria: The permutation-based test relies on the exchangeability result (Thm. 4), which can be problematic (see Claims and Evidence). Therefore, the proposed test may not be valid. Theoretical Claims: Yes. The proof of Thm. 4 is problematic. See Claims and Evidence. Experimental Designs Or Analyses: Yes. The experiment is comprehensive. Supplementary Material: Yes. Appx. A mainly. Relation To Broader Scientific Literature: This paper extends the classical rank test (Jordan, 1875; Hotelling, 1992) to cases with discretization. Essential References Not Discussed: Yes. Other Strengths And Weaknesses: 1. The problem of discretization in rank-based tests is important. 2. The writing is clear. Other Comments Or Suggestions: See Claims and Evidence. Questions For Authors: See Claims and Evidence. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful comments, which have greatly helped us refine the quality of our paper. We first provide responses to your concerns and then give an updated sketch of the proof for Thm 4. **Q1:** The convergence of $\hat{\Sigma}\_\mathbf{X}$ by pseudo-maximum likelihood. **A1:** Yes, $\hat{\Sigma}\_\mathbf{X}$ in our method converges in probability to the population one. [1] is one of the earliest works on pseudo-likelihood, and its consistency was later discussed in [2]. As a member of the family of M-estimators, the consistency can also be derived from the general theory for M-estimators [3]. Specifically, in the presence of discretization, [6] derived the consistency of estimating the precision matrix under the mixed nonparanormal model; similar arguments can be applied for the consistency of $\hat{\Sigma}\_\mathbf{X}$ by pseudo-maximum likelihood in our paper. **Q2:** Continuity and uniqueness of SVD. **A2:** SVD is not continuous only when the input matrix has repeated singular values. Specifically, if a matrix $A$ has distinct singular values, then SVD is continuous in the neighborhood of $A$, and unique only up to sign flips (chapter 2, section 5.3 of [4]). Thus, to make use of the continuous mapping theorem, we assume that $\Sigma\_\mathbf{X}^{-\frac{1}{2}}\Sigma\_{\mathbf{X},\mathbf{Y}}\Sigma\_\mathbf{Y}^{-\frac{1}{2}}$ does not have repeated singular values (the justification of which is discussed in Q3). To further eliminate the sign indeterminacy, we can follow scikit-learn to ensure that the largest coefficient in each column, in terms of absolute value, is positive (`svd_flip` in scikit-learn). **Q3:** The justification of assuming non-repeated singular values. **A3:** Faithfulness is one of the most important assumptions in causal discovery and is typically justified by the fact that the set of parameters violating it is of Lebesgue measure zero [7]. 
Similarly, it has been shown that the set of matrices with repeated singular values has Lebesgue measure zero (Lemma 1.4.2 in [8], also in [5]). An updated proof sketch for Thm 4 follows. > Sketch of proof \ Given $(\hat{\Sigma}\_\mathbf{X},\hat{\Sigma}\_\mathbf{Y},\hat{\Sigma}\_{\mathbf{X},\mathbf{Y}})\overset{p}{\to}(\Sigma\_\mathbf{X},\Sigma\_\mathbf{Y},\Sigma\_{\mathbf{X},\mathbf{Y}})$, we aim to show the desired asymptotic independence. Specifically, we want to show (i) $\mathbf{C_X}\_{k:} \overset{p}{\to} \mathbf{C_X}\_{k:}^\*$ and $\mathbf{C_Y}\_{k:} \overset{p}{\to} \mathbf{C_Y}\_{k:}^\*$, and (ii) $\mathbf{C_X}\_{k:}^\*,\mathbf{C_Y}\_{k:}^\*$ are independent under the null hypothesis. \ Here $\mathbf{C_X}=A^T\mathbf{X},\mathbf{C_Y}=B^T\mathbf{Y}$, $\mathbf{C_X}^\*={A^\*}^T\mathbf{X}$, and $\mathbf{C_Y}^\*={{B^\*}^T}\mathbf{Y}$, where $(A,B)$ and $(A^\*,B^\*)$ are produced by SVD using the estimated covariance and the population one, respectively, as follows. $$USV=\hat{\Sigma}\_\mathbf{X}^{-\frac{1}{2}}\hat{\Sigma}\_{\mathbf{X},\mathbf{Y}}\hat{\Sigma}\_\mathbf{Y}^{-\frac{1}{2}},A=\hat{\Sigma}\_\mathbf{X}^{-\frac{1}{2}T}U,B=\hat{\Sigma}\_\mathbf{Y}^{-\frac{1}{2}T}V^T,~~~U^\*S^\*V^\* =\Sigma\_\mathbf{X}^{-\frac{1}{2}}\Sigma\_{\mathbf{X},\mathbf{Y}} \Sigma\_\mathbf{Y}^{-\frac{1}{2}},A^\*=\Sigma\_\mathbf{X}^{-\frac{1}{2}T}U^\*,B^\*=\Sigma\_\mathbf{Y}^{-\frac{1}{2}T}{V^\*}^{T}.$$ For (i): By the continuous mapping theorem, under the assumption of no repeated singular values, we have $U \overset{p}{\to} U^\*$. As $\Sigma\_\mathbf{X}$ is positive definite, the matrix inverse square root is continuous and thus $\hat{\Sigma}\_\mathbf{X}^{-\frac{1}{2}T}\overset{p}{\to}\Sigma\_\mathbf{X}^{-\frac{1}{2}T}$. Given $(U,\hat{\Sigma}\_\mathbf{X}^{-\frac{1}{2}T})\overset{p}{\to}(U^\*,\Sigma\_\mathbf{X}^{-\frac{1}{2}T})$, we have $\hat{\Sigma}\_\mathbf{X}^{-\frac{1}{2}T}U=A\overset{p}{\to}A^\*=\Sigma\_\mathbf{X}^{-\frac{1}{2}T}U^\*$. Similarly, we have $B\overset{p}{\to}B^\*$. 
\ Thus $$((A^T-{A^\*}^T)\mathbf{X},(B^T-{B^\*}^T)\mathbf{Y})\overset{p}{\to}0\Rightarrow (((A^T-{A^\*}^T)\mathbf{X})\_{k:},((B^T-{B^\*}^T)\mathbf{Y})\_{k:})\overset{p}{\to}0\Rightarrow(\mathbf{C_X}\_{k:},\mathbf{C_Y}\_{k:})\overset{p}{\to}(\mathbf{C_X}\_{k:}^\*,\mathbf{C_Y}\_{k:}^\*).$$ For (ii): Under the null hypothesis, the cross-covariances between $\mathbf{C_X}\_{k:}^\*$ and $\mathbf{C_Y}\_{k:}^\*$ are all zeros. As $\mathbf{C_X}\_{k:}^\*,\mathbf{C_Y}\_{k:}^\*$ are jointly Gaussian (linear mixings of $\mathbf{X,Y}$), zero cross-covariance implies independence. Please feel free to let us know if any part remains unclear or if you have further questions. Thank you again for your valuable feedback! [1] Besag, Spatial interaction ... system. 1974. [2] Gourieroux, Pseudo maximum likelihood methods: Theory. 1984. [3] Gourieroux, Consistent pseudo ... estimators. 2017. [4] Kato, Perturbation theory for linear operators. 2013. [5] Bochnak, Real algebraic geometry. 2013. [6] Fan, High dimensional ... mixed data. 2017. [7] Spirtes, Causation, prediction, and search. 2000. [8] Kunisky, Lecture Notes on Random Matrix Theory. 2024.
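The sign-fixing convention mentioned in A2 above can be sketched as follows (a minimal illustration mirroring scikit-learn's `svd_flip` convention; the helper name is ours). Flipping a column of $U$ together with the corresponding row of $V^T$ leaves the reconstruction $U S V^T$ unchanged, which is exactly why the sign indeterminacy is harmless once a convention is fixed:

```python
import numpy as np

def svd_flip_signs(U, Vt):
    # resolve the SVD sign indeterminacy: force the largest-magnitude
    # entry of each column of U to be positive, and flip the matching
    # row of Vt so that U @ diag(S) @ Vt is unchanged
    idx = np.argmax(np.abs(U), axis=0)
    signs = np.sign(U[idx, np.arange(U.shape[1])])
    return U * signs, signs[:, None] * Vt

M = np.array([[3.0, 1.0],
              [1.0, 2.0],
              [0.5, -1.0]])
U, S, Vt = np.linalg.svd(M, full_matrices=False)
U2, Vt2 = svd_flip_signs(U, Vt)
assert np.allclose((U2 * S) @ Vt2, M)  # reconstruction is unchanged
```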
Finite-Time Analysis of Discrete-Time Stochastic Interpolants
Accept (poster)
Summary: This work provides a theoretical analysis of time-discretized stochastic interpolant models. Specifically, they address the problem of convergence with respect to the number of steps in the time discretization, characterizing the error in the modeled distribution with respect to the discretization scheme, the model approximation error, and the prior approximation error. This extends prior analysis focusing solely on diffusion models while remaining consistent, and addresses the effect of time discretization in particular. Motivated by their theoretical results, the authors outline a new time discretization strategy which can theoretically improve convergence of discrete-time solvers using stochastic interpolants, which they demonstrate empirically. ## update after rebuttal: I believe my original score holds, as this remains a strong paper and contribution. Claims And Evidence: The primary claim made by the authors concerns the convergence of discrete-time solvers (namely, Euler-Maruyama) for stochastic interpolant models. They provide a bound on the KL divergence between the true and model target distributions, which is a function of the model approximation error, time discretization error, and prior approximation “distance”. The proof of this method is explained intuitively, with a detailed proof provided in the Appendix. This bound is then used to design an efficient time discretization schedule to ensure more rapid convergence with respect to the number of time steps. They show theoretically and empirically that their new strategy yields more efficient sampling. Methods And Evaluation Criteria: While the primary focus of this work is a theoretical characterization of practical methods for computing stochastic interpolants, the authors also provide limited experimental validation of their claims. I find this evaluation very convincing, as they demonstrate the enhanced performance of their scheduling scheme as predicted by their theory. 
While this is demonstrated on toy datasets, the point is made very clearly. Theoretical Claims: The authors provide intuitive interpretations and walkthroughs of their theorems and proofs. I found these descriptions intuitive and easy to follow, and they are convincingly backed by rigorous proof. Experimental Designs Or Analyses: As mentioned above, I believe their limited experiments provide sufficient empirical evidence that their theory is sound. While the level of experimental evaluation is not the same as a typical practical generative modeling paper, I believe their experiments do well to demonstrate the validity of their theoretical results. Supplementary Material: The supplementary material provides additional background, details regarding the proofs for sections 4 and 5, and additional experiment details. I believe the main text does well to explain the intuition behind the proofs, and therefore I mainly used the main text descriptions to develop an understanding of the theoretical results. However, the details in the appendix provide a rigorous outline of the theoretical results. Relation To Broader Scientific Literature: The authors position their work within two main areas in the literature. First, their theoretical analysis focuses on stochastic interpolants related to continuous-time normalizing flows and diffusion. Such approaches have seen widespread application in high-dimensional, structured generative modeling settings (e.g., image generation). Second, they provide an analogous study of stochastic interpolants to those conducted for the convergence and error analysis of diffusion. Essential References Not Discussed: I do not believe there is any missing relevant literature. Other Strengths And Weaknesses: Overall, I believe this would be a very valuable contribution to the community. 
The authors do well to highlight the importance of considering time discretization in addition to model approximation error in modern generative modeling convergence/error analysis. Moreover, their theoretical insights provide immediate practical considerations that can be employed in stochastic interpolant samplers, as they demonstrate both theoretically and empirically that their proposed sampling scheme can yield better convergence than a simple scheme used in practice, such as uniform discretization. They provide a theoretical basis for designing SDE-based sampling schemes targeting specific error bounds. Other Comments Or Suggestions: Right Column Line 170: “initial distribution mismatch (i.e., KL( ρ(t0)||ρ^(tN) ))” it should be KL( ρ(t0)||ρ^(t0) )? Right Column Lines 361-365: There seems to be a grammatical error here, part of the sentence is repeated. Right Column Line 373: aspeccts - > aspects Questions For Authors: Is there some way to select or design \gamma to yield efficient sampling? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our paper! We are grateful for your constructive suggestions, which have significantly guided our improvements. Please find our responses to your comments below. ### Other Comments Or Suggestions: > Right Column Line 170: “initial distribution mismatch (i.e., $\text{KL}( \rho(t_0)\Vert\hat{\rho}(t_N))$)” it should be $\text{KL}( \rho(t_0)\Vert\hat{\rho}(t_0))$?\ > Right Column Lines 361-365: There seems to be a grammatical error here, part of the sentence is repeated.\ > Right Column Line 373: aspeccts - > aspects A: Thanks for your suggestion! We will update them in the final version. ### Questions For Authors: > Is there some way to selected or design $\gamma$ to yield efficient sampling? A: Thanks for your insightful question! Our current analysis regarding schedule design and its associated sample complexity is predicated on a fixed latent scale, $\gamma$. In Section 5, we adopt $\gamma(t)=\sqrt{at(1-t)}$ due to its prevalence and natural appeal, stemming from its connection to the Brownian bridge process. However, the systematic design of $\gamma$ to optimize sampling efficiency represents a compelling avenue for future research, potentially necessitating a synergistic approach encompassing both theoretical analysis and practical experimentation. --- We hope our response addresses your concerns. If so, we wonder if you could kindly consider raising your score? We will also be happy to answer any further questions you may have. Thank you very much! --- Rebuttal Comment 1.1: Comment: Thank you for addressing my limited concerns/questions. I believe my original score holds, and that this work represents a useful contribution.
Summary: This paper presents a discrete-time analysis of the stochastic interpolant framework, also known as flow models; based on the theoretical results, the paper designs schedules for convergence acceleration. ## update after rebuttal My view has not changed, so I maintain my original score. Claims And Evidence: I think the discrete-time analysis is reasonable. Methods And Evaluation Criteria: 1. I believe the authors should explain the novelty of the Exponentially Decaying Time Schedule. Given that flow models and diffusion models are not substantially different, it appears that the authors may have merely transferred a time schedule from diffusion to flow models. 2. If that is the case, the main contribution seems to be theoretically verifying that the Exponentially Decaying Time Schedule offers certain advantages over a Uniform Schedule in flow models. Theoretical Claims: The theoretical claim is valid. Experimental Designs Or Analyses: 1. Although the authors provide theoretical justification for their method’s superiority, the experiments themselves appear overly simple. I recommend conducting toy experiments on low-resolution datasets such as CIFAR-10 or ImageNet32, since training flow models on these datasets is relatively fast and computationally manageable. 2. Experiments conducted solely on Gaussian data only confirm the theoretical foundations. Given that the authors’ theory largely transfers discrete-time analysis to flow models, it would be more convincing to present additional experiments that demonstrate the claimed ability to “design efficient schedules for convergence acceleration.” Supplementary Material: See Theoretical Claims. Relation To Broader Scientific Literature: This paper relates to many applications of diffusion models. 
Essential References Not Discussed: Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow (ICLR 2023); Flow Matching for Generative Modeling (ICLR 2023). Other Strengths And Weaknesses: No Other Comments Or Suggestions: I believe the authors’ statement that the paper focuses on theory is valid. However, if they aim to provide an improved method, they should at least include standard experiments on datasets like CIFAR-10 to support their claims. Relying solely on simple two-dimensional datasets is not sufficient. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our paper! We are grateful for your constructive suggestions, which have significantly guided our improvements. Please find our responses to your comments below. ### Methods And Evaluation Criteria > I believe the authors should explain the novelty of the Exponentially Decaying Time Schedule. Given that flow models and diffusion models are not substantially different, it appears that the authors may have merely transferred a time schedule from diffusion to flow models. A: We first would like to emphasize that our work introduces the first discrete-time convergence bound for SDE-based generative models within the stochastic interpolants framework. Theorem 4.3 provides an explicit upper bound on the estimation error, expressed in terms of step sizes $h_k$, dimension $d$, latent scale $\gamma$, and the distance between two distributions measured by $\mathbb{E}\Vert x_0-x_1\Vert^p$. This theorem establishes an error bound applicable to arbitrary time schedules, thereby offering insights into the design of schedules that minimize sample complexity. The schedule presented in Section 5 is designed based on this theorem, wherein we stipulate that $h_k$ should be proportional to $\bar{\gamma}_k^2$ to ensure a well-balanced discretization error. Specifically, for the case where $\gamma(t)=\sqrt{at(1-t)}$, the derived schedule manifests as an exponentially decaying schedule, characterized by a reduction in step size on both ends of the interval $[0,1]$. While this schedule shares similarities with the exponentially decaying schedule employed in diffusion models, a key distinction lies in the fact that diffusion models typically decay the step size on only one side of the interval. Furthermore, it is noteworthy that for alternative choices of $\gamma$, the optimal schedule may deviate from the above schedule. 
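As a toy illustration of the schedule shape discussed above for $\gamma(t)=\sqrt{at(1-t)}$, the following sketch builds a time grid whose steps decay on both ends of $[0,1]$, with $h_k$ roughly proportional to $t(1-t)$. This is our own parameterization for illustration (spacing points uniformly in $\mathrm{logit}(t)$, which inverts the cumulative weight of $1/(t(1-t))$), not the paper's exact formula:

```python
import numpy as np

def two_sided_decay_schedule(N, delta=1e-3):
    """Time grid on [delta, 1-delta] with N steps shrinking toward both
    endpoints, approximating h_k proportional to t(1-t).

    Since dt/dk ∝ t(1-t) means k ∝ logit(t), placing points uniformly
    in logit space and mapping back through the sigmoid yields the
    desired two-sided decay. A sketch, not the paper's schedule.
    """
    u = np.linspace(np.log(delta / (1 - delta)),
                    np.log((1 - delta) / delta), N + 1)
    return 1.0 / (1.0 + np.exp(-u))  # sigmoid = inverse logit

t = two_sided_decay_schedule(32)
h = np.diff(t)  # step sizes: small near t=0 and t=1, largest in the middle
```

The truncation `delta` plays the role of the early-stopping margin familiar from diffusion analyses, keeping the grid away from the singular endpoints.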
### Experimental Designs Or Analyses > Although the authors provide theoretical justification for their method’s superiority, the experiments themselves appear overly simple. I recommend conducting toy experiments on low-resolution datasets such as CIFAR-10 or ImageNet32 since training flow models on these datasets are relatively fast and computationally manageable. A: Primarily, this work constitutes a theoretical investigation, wherein we establish the first discrete-time complexity analysis for the stochastic interpolants framework. Our contributions include the derivation of an explicit error bound for the discrete-time sampler, as presented in Theorem 4.3, and the subsequent exploration of schedule design strategies aimed at minimizing sample complexity. The objective of our numerical experiments is to validate our theoretical findings. While we acknowledge the value of demonstrating our method's efficacy on more complex datasets, such as CIFAR-10 or ImageNet32, we must emphasize that the application of our framework to these datasets introduces a multitude of practical challenges. These challenges encompass the design of effective network architectures, the meticulous tuning of hyperparameters, and the optimization of learning algorithms. Addressing these practical considerations falls outside the primary scope of this theoretical study. However, we recognize the importance of empirical validation on standard datasets and consider it an interesting direction for future research. > Given that the authors’ theory largely transfers discrete-time analysis to flow models, it would be more convincing to present additional experiments that demonstrate the claimed ability to “design efficient schedules for convergence acceleration.” A: A core contribution of our work lies in the derivation of the inaugural theoretical error bound for distribution estimation within the stochastic interpolants framework, explicitly expressed in terms of the step sizes. 
Leveraging this result, we theoretically demonstrate the feasibility of designing schedules that accelerate convergence by optimizing the balance between error terms. To empirically validate the efficacy of our schedule design methodology, we have conducted numerical experiments for both $\gamma=\sqrt{(1-t)t}$ (Section 6) and $\gamma=\sqrt{(1-t)^2t}$ (Appendix D.1), comparing the performance of our specifically designed schedules against that of the standard uniform schedule. While we acknowledge the potential for further experimentation to strengthen our claims, we consider the current numerical results as a solid validation. Furthermore, the exploration of optimal choices for $\gamma$ in practical applications represents an intriguing avenue for future research. ### Essential References Not Discussed A: Thank you for pointing them out! We will discuss them in the revision. --- We hope our response addresses your concerns. If so, we wonder if you could kindly consider raising your score? We will also be happy to answer any further questions you may have. Thank you very much! --- Rebuttal Comment 1.1: Comment: Thank you for clarifying your concerns. While I acknowledge the paper's theoretical contributions and the exploration of a flow-based model, I remain unconvinced that the proposed “stochastic interpolants” framework significantly differs from standard SDE-based diffusion models. In addition, the experimental results appear limited. Given the paper’s theoretical focus and the solid proofs provided, I recognize the authors’ contributions. However, these do not fully address my reservations regarding novelty and empirical support. Therefore, I maintain my initial score of 3.
Summary: This paper proposes a finite-time analysis of the discretization of stochastic interpolants. The paper presents assumptions on the initial and final distributions and the score estimators, then provides a complexity bound in KL divergence. Claims And Evidence: The paper claims to analyse discrete-time stochastic interpolants, and does provide clear assumptions and results to support the claim. There are also numerics that demonstrate the convergence rate. No claims seem problematic. Methods And Evaluation Criteria: Not applicable as this is mainly a theoretical paper. Theoretical Claims: While I did give a careful look at statements and proof sketches, I did not check every single line of the proofs. Below are my high-level questions about the theoretical claims. Experimental Designs Or Analyses: The experimental design is very limited, restricted to 2D examples. Given the explicit dimension dependence in the bound, I suggest the authors provide experiments with variable $d$ (maybe the Gaussian case or something similar) and let $d$ grow, to obtain a scaling result and see whether their bound is predictive of what will happen with increasing $d$. Supplementary Material: Briefly; not all parts and not all proofs are checked. Relation To Broader Scientific Literature: The paper extends the work on stochastic interpolants, particularly Albergo et al. (2023), and provides a discrete-time analysis. However, my general opinion is that, given the assumption of an $\varepsilon$-accurate drift for the SDE in Albergo et al. (2023), the contribution is a bit incremental - this is a discretisation analysis in essence, which is usually standard, adapting the usual analysis methods as the authors did. Essential References Not Discussed: None. Other Strengths And Weaknesses: The analysis seems to have a very favourable dependence on dimension and $\varepsilon$-accuracy. 
Other Comments Or Suggestions: 1) line 149 lhs, "Both $b(t,x)$ and $b_F(t,x)$ can be expressed as linear combinations", clarify for the reader's convenience. 2) Line 383, $N$ is the number of iterations, while in line 375 $N(0, I_d)$ is a Gaussian; use $\mathcal{N}$ for the density. Questions For Authors: 1) The authors assume in Assumption 4.2 that the drift is estimated with high accuracy. However, can the authors clarify how this can be relevant to practice? We know that $b_F$ can be decomposed as the score + velocity term. For the score term, very good estimators exist in the diffusion models literature. Can the same be said for the second term? Is this a practical setting? Please mention some practical estimators for the second term after this assumption, their behaviour, and whether it is easy to ensure good training of it, as in the score case. 2) While the bound in Theorem 4.3 has a favourable dependence on dimension, I feel like some of this is hidden in the term $\mathbb{E}\|x_0 - x_1\|^8$. Please exemplify a bound with two multivariate Gaussians - provide this expression and its dependence on $d$. A similar question extends to Corollary 5.2. 3) What is the reason that in Assumption 4.1 you have an 8th moment assumption, vs. 4th moments in the literature? Please give some intuition following line 190. 4) Figure 1: is there any issue with $t = 0.001$? Is the estimator $\hat{b}_F$ stable around this region? Please provide the plot around this time for the SDE. 5) I suggest the authors not assume uniformly that $b_F$ is estimated $\varepsilon$-accurately. Perhaps, given the score estimators, you can provide an $\varepsilon_{score}$-accurate score and an $\varepsilon_{vel}$-accurate velocity term estimate - and thus see the interplay between these two error sources and how they play out in your final bound. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our paper! We are grateful for your constructive suggestions, which have significantly guided our improvements. Please find our responses to your comments below. ### Experimental Designs Or Analyses A: Thanks for the suggestion. We tested the samplers on $d$-dimensional Gaussian mixtures ($\rho_0$ and $\rho_1$) and compared the distribution estimation error for different $d$. The real drift terms used in the sampler are analytically computed [1]. You can check [link1](https://drive.google.com/file/d/1PuZy-aR9OYwBtB9xdnPaxn5qAP2G3-3O/view) or [link2](https://drive.google.com/file/d/1wkOsPsqq0wzCCd8paGZL8qIpK2aYMD52/view) for results. The first figure compares the error for different $d$ with fixed iterations, and the second shows convergence for higher dimensions. Both results align with our theory and validate the $d$ dependence. ### Other Comments Or Suggestions A: Thank you for your suggestions! We will update them in the revision. ### Questions For Authors A1: We first emphasize that our work primarily focuses on the theoretical analysis of the general stochastic interpolant method. We provide the first discrete-time analysis and propose an upper bound for the distribution estimation error. Additionally, we develop a discretization time schedule that optimizes this bound for a specified $\gamma$. Theorem 4.3 offers a general upper bound, controlling the error using $\varepsilon_{b_F}^2$, the distance between distributions, data dimension $d$, and step sizes $h_k$. Assumption 4.2 quantifies how close $\hat b_F(t,x)$ is to $b_F(t,x)$ and holds for some $\varepsilon_{b_F}^2$ if $\hat b(t,x_t)$ has finite second moments. Theorem 4.3 illustrates how this impacts the final distribution error. Our numerical experiments validate these findings. 
We use the loss function $\mathcal{L}[\hat{v}]=\frac{1}{2}\int_{t_0}^{t_N}\mathbb{E}[|\hat{v}|^2-2\partial_tI\cdot\hat{v}]\text{d}t$ to train the estimator $\hat{v}(t,x)$ for $v(t,x)$. This choice proved effective and is commonly used (e.g., [1], [2], [3]). However, creating robust estimators for general applications is challenging, requiring careful designs for network architectures, learning algorithms, and learning rate schedules, which is a direction for future research. A2: Regarding dimension dependence in Theorem 4.3, the key terms contributing to the discretization error are: $$\varepsilon_{\text{dis}}\lesssim\sum_{k=0}^{N-1}h_k^3(d^3\bar \gamma_k^{-6}+d\sqrt{\mathbb{E}|x_0-x_1|^8}\bar\gamma_k^{-2})+\sum_{k=0}^{N-1}h_k^2(d^2\bar\gamma_k^{-4}+d\sqrt{\mathbb{E}|x_0-x_1|^4}\bar\gamma_k^{-2}).$$ For a multivariate Gaussian $z\sim\mathcal{N}(0,I_d)$, $\mathbb{E}|z|^{2p}\le C(p)d^p$. If $x_0$ and $x_1$ are multivariate Gaussians, $\mathbb{E}|x_0-x_1|^8=O(d^4)$. Substituting this shows the dependence on $d$ is $O(d^3)$ in the $h_k^3$ term and $O(d^2)$ in the $h_k^2$ term. The dimensional dependence is also consistent in Corollary 5.2. A3:Our work is focused on discrete-time analysis, which is new for the stochastic interpolant framework. Unlike continuous-time error bounds that use the 4th moment assumption (see [1]), our discrete-time analysis requires the 8th moment assumption. In Theorem 4.2, we control $\mathbb{E}|\nabla b_F(t,x_t)\cdot b_F(t,x_t)|^2$, requiring $\mathbb{E}|x_0-x_1|^8<\infty$. This term appears as $h_k^3\bar{\gamma}_k^{-2}d\sqrt{\mathbb{E}|x_0-x_1|^8}$ in Theorem 4.3's error bound. However, this term doesn't dominate when the step size is small, with the error bound being dominated by $h_k^2\bar{\gamma}_k^{-2}d\sqrt{\mathbb{E}|x_0-x_1|^4}$. Relaxing the 8th moment assumption is a future research direction. 
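The Gaussian moment scaling invoked in A2 ($\mathbb{E}|z|^{2p}\le C(p)d^p$ for $z\sim\mathcal{N}(0,I_d)$) can be sanity-checked numerically; a minimal sketch using the exact chi-square moment formula (illustrative only, not part of the rebuttal's experiments):

```python
import numpy as np

def exact_norm8_moment(d):
    """E||z||^8 for z ~ N(0, I_d): ||z||^2 is chi-square with d dof,
    and E[(chi2_d)^4] = d (d+2) (d+4) (d+6) = d^4 (1 + O(1/d))."""
    return d * (d + 2) * (d + 4) * (d + 6)

# Monte Carlo cross-check of the formula for a small dimension
rng = np.random.default_rng(0)
z = rng.standard_normal((500_000, 4))
mc = float(np.mean(np.sum(z**2, axis=1) ** 4))
```

The ratio `exact_norm8_moment(d) / d**4` tends to a constant as `d` grows, which is the $O(d^4)$ scaling of $\mathbb{E}|x_0-x_1|^8$ used in the rebuttal's Gaussian example.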
A4: It is true that the drift term $b_F(t,x)$ has larger variation near the boundaries, motivating our exponentially decaying schedule in both theory and experiments. Our estimator approximates $b_F(t,x)$ well for $t_0=0.001$, so this choice does not pose a problem. We will add more details and visualizations in the revision. A5: While analyzing the estimation errors of the score and velocity terms separately is a reasonable suggestion, we find it unnecessary for our analysis. Partitioning $b_F(t,x)=v(t,x)+(\epsilon-\dot{\gamma}\gamma)s(t,x)$ results in $\varepsilon_{b_F}^2\lesssim\varepsilon_v^2+\sup(\epsilon-\dot{\gamma}\gamma)^2\cdot\varepsilon_s^2$, which is similar to [1] for continuous-time stochastic interpolants. ### References [1] Albergo, M. S., Boffi, N. M., and Vanden-Eijnden, E. Stochastic interpolants: A unifying framework for flows and diffusions, 2023. [2] Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling, 2023. [3] Albergo, M. S. and Vanden-Eijnden, E. Building normalizing flows with stochastic interpolants, 2023. --- We hope our response addresses your concerns. If so, we wonder if you could kindly consider raising your score? We will also be happy to answer any further questions you may have. Thank you very much! --- Rebuttal Comment 1.1: Comment: _I posted this comment yesterday as an 'official comment' without realizing that this is apparently not visible to authors. Please read below my response:_ Thank you for the plots. Can you elaborate how one would interpret your linear growth plot in link1 in the context of Theorem 4.3? Does it connect with the theoretical result? Also, if you were to fix some $\varepsilon$ and run numerics with increasing $d$, scaling the number of steps necessary for this, would you get the same scaling as in theory? I am happy to raise my score if these experiments were conducted with good care.
To be honest, I don't expect the behaviour in these experiments to perfectly match the theory, as the theory is just an upper bound. But this would give quite a good picture of whether there is more work to be done, or whether the bounds derived here are tight. --- Reply to Comment 1.1.1: Comment: Thank you for your comment! We are glad to further discuss these questions with you. Please see our response below. In our experiments, the KL error scales almost linearly w.r.t. $d$. The $O(d^2)$ KL error bound provided by our theorem holds for this case, and the linear growth of the KL error in the experiments could have several causes. For example, there might be special properties of Gaussian mixtures that do not hold for general distributions, or there may simply exist a sharper bound that hasn't been found yet. Trying to improve the bound would be an interesting direction for future work. In addition, we have scaled the number of iterations ($N$) in our experiments on Gaussian mixture data, which makes the convergence clearer; see [link3](https://drive.google.com/file/d/1YIIf0KlcrXrw1u1ybS-KDya_-twRKle0/view?usp=drive_link) and [link4](https://drive.google.com/file/d/1_7aaaSrkrgYOKl6KM1plM1scW3S0nfzm/view). Moreover, compared to [link1](https://drive.google.com/file/d/1PuZy-aR9OYwBtB9xdnPaxn5qAP2G3-3O/view), we draw multiple curves in [link4](https://drive.google.com/file/d/1_7aaaSrkrgYOKl6KM1plM1scW3S0nfzm/view) to show the scaling of the KL error w.r.t. the dimension $d$ for a fixed number of iterations. --- We hope our response addresses your concerns. If so, we wonder if you could kindly consider raising your score? We will also be happy to answer any further questions you may have. Thank you very much!
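For reference, the $d$-dependence of the KL error is easy to make explicit in the purely Gaussian case via the closed-form KL divergence between multivariate Gaussians (a generic helper, not the rebuttal's mixture experiment): a fixed per-coordinate mean shift gives KL growing linearly in $d$.

```python
import numpy as np

def gaussian_kl(mu1, cov1, mu2, cov2):
    """KL( N(mu1, cov1) || N(mu2, cov2) ) in closed form."""
    d = len(mu1)
    inv2 = np.linalg.inv(cov2)
    diff = mu2 - mu1
    tr = np.trace(inv2 @ cov1)
    quad = diff @ inv2 @ diff
    logdet = np.linalg.slogdet(cov2)[1] - np.linalg.slogdet(cov1)[1]
    return 0.5 * (tr + quad - d + logdet)

# e.g. KL( N(c * ones_d, I) || N(0, I) ) = d * c^2 / 2: linear in d
```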
Summary: This paper derives a convergence bound for the stochastic interpolants framework. The discretized SDE analyzed in this work is more general than the SDE analyzed in existing diffusion model convergence bounds. As in the diffusion setting, the derived bound suggests that an exponentially decaying discretization schedule is a better choice. The authors show empirically that this is indeed the case. ## update after rebuttal I thank the authors for their clarifications. The more general interpolant analysis remains quite close to existing work, but I believe it could be interesting to have the result out there; I therefore increase my score. Claims And Evidence: The central claim in the paper is the convergence bound provided in Theorem 4.3. The proof adapts the result of Chen et al [A] on the discretization of SDEs to the stochastic interpolant framework. The differences introduced by the slightly more general interpolant I(x_0, x_1, t) are dealt with in Lemmas B.3 and C.1, which are the main technical contribution of the work (Could the authors confirm?). A secondary claim is derived from the theorem, which shows that an exponentially decaying schedule cancels out terms appropriately and leads to a simplified bound in Proposition 5.1. This claim is confirmed experimentally in Figure 3. --- [A] Chen, Sitan, et al. "Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions." Methods And Evaluation Criteria: . Theoretical Claims: The analysis is sound. The only unfortunate element is the need for $t_0 > 0$ and the initialization error resulting from this. It creates a discrepancy between the bounds in the paper and bounds for diffusion models, whose initialization error can be made exponentially small by increasing the simulation interval T. Experimental Designs Or Analyses: . Supplementary Material: .
Relation To Broader Scientific Literature: There are some papers already providing analysis of the flow matching framework and the authors clearly compare their techniques with prior work. Essential References Not Discussed: . Other Strengths And Weaknesses: The strengths: The paper is clearly written and easy to follow. It adapts prior work and adds to the literature on discretized SDEs used for sampling. The derived bound gives indication on a good choice of step-size schedule and the authors confirm this experimentally. A minor weakness: The technical extension over prior work might be slightly limited but it can still be worthwhile to have the bound in the literature. Other Comments Or Suggestions: . Questions For Authors: Could an additional assumption on the target distribution remove the unfortunate initial KL term in the bound? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our paper! We are grateful for your constructive suggestions, which have significantly guided our improvements. Please find our responses to your comments below. ### Claims And Evidence / Weakness > The differences introduced by the slightly more general interpolant $I(x_0, x_1, t)$ are dealt with in Lemmas B.3 and C.1, which are the main technical contribution of the work (Could the authors confirm?). > A minor weakness: The technical extension over prior work might be slightly limited but it can still be worthwhile to have the bound in the literature A: The main contributions of our work include the following. First, we propose the first discrete-time analysis for the stochastic interpolant framework, where we propose a new discrete-time sampler and provide the rigorous distribution estimation error bound for the sampler (Theorem 4.3). Second, based on the error bound, we further develop a time schedule for the discretization to achieve lower sample complexity (Corollary 5.2), when the latent scale is defined by $\gamma(t)=\sqrt{at(1-t)}$. Lastly, we validate our theoretical results with numerical experiments. Among our technical results, Lemma B.3, Lemma C.1 and Appendix B.2 address the discretization error of the drift term $b_F(t,x)$. In this part, the main difference between our result and previous results is that we utilize a more general interpolant $I(t,x_0,x_1)+\gamma(t)z$ between two general distributions. This introduces a novel velocity term $v(t,x)$ in the process, and demands new analysis for the estimation error compared to existing results on diffusion models. ### Questions > Could an additional assumption on the target distribution remove the unfortunate initial KL term in the bound? A: Firstly, the initial KL term quantifies the discrepancy between the true initial distribution $\rho(t_0)$ and the estimated distribution $\hat{\rho}(t_0)$, where $\hat X_{t_0}$ is sampled from. 
We incorporate this term into our theorem to comprehensively cover the scenario where $\hat{\rho}(t_0)$ differs from $\rho(t_0)$, and provide a corresponding error bound in Theorem 4.3. Next, we look at two examples of how this term affects our error bound. If we define $I$ such that $I(t,x_0,x_1)=x_0$ within the interval $t\in[0,t_0]$, the initial error becomes exactly $0$ because we can directly sample from $\rho(t_0)$ (note that during generation, data from $\rho_0$ is accessible). Furthermore, if $\gamma^2(t)=\Theta(t)$ near $t=0$, we can achieve an $O(t_0)$ initial KL error if we choose $\hat{\rho}(t_0)$ such that $\hat{X}_{t_0}=x_0+\gamma(t_0)z$ for $x_0\sim\rho_0$ and $z\sim\mathcal{N}(0,I_d)$. In this case, the initial KL error matches that of diffusion models with $T=\Theta(\log(1/t_0))$ [1]. ### References [1] Benton, J., Bortoli, V. D., Doucet, A., and Deligiannidis, G. Nearly d-linear convergence bounds for diffusion models via stochastic localization. --- We hope our response addresses your concerns. If so, we wonder if you could kindly consider raising your score? We will also be happy to answer any further questions you may have. Thank you very much!
Sub-Sequential Physics-Informed Learning with State Space Model
Accept (poster)
Summary: This paper addresses two fundamental challenges in training PINNs: the continuous-discrete mismatch and simplicity bias. The proposed PINNMamba employs 1) a State Space Model (SSM) to effectively capture continuous information in discrete temporal sequences and 2) sub-sequence contrastive alignment to keep the model from being trapped in an over-smooth local optimum (a simple but incorrect solution). Claims And Evidence: I have concerns about the novelty of the problem statement. While I agree that the continuous-discrete mismatch and simplicity bias are important to enhancing the learnability of PINNs, these two challenges are not novel problems. As we know, there is a large body of literature on modeling continuous time series from discrete ones, shifting the focus of spatio-temporal modeling methods from conventional RNNs to neural ODEs and PDEs. Ultimately, they try to model continuous time series from discrete sampling. Also, I cannot directly see how this would block the propagation of the "initial condition" (L57), as there could be other causes blocking its propagation. Simplicity bias is also an empirically well-known problem when comparing model and data complexity. Hence, the authors' main problem statement "How can we effectively introduce sequentiality to PINNs?" with a toy example in Figure 1 does not sound like a challenging problem. Methods And Evaluation Criteria: While the motivation to deploy an SSM in a PINN model makes sense, its technical novelty is limited. It simply adds an SSM module between MLP modules, which could be considered a simple variation of common PINN designs. Sub-sequence contrastive alignment is straightforward and also makes sense. However, again, I doubt that it significantly differs from well-known temporal contrastive learning. Learning from sub-sequences is widely used to efficiently learn long sequences with better stability.
Overall, I understand the authors' motivation to use an SSM and sub-sequence contrastive alignment, yet I find their technical novelty limited. Theoretical Claims: I appreciate that the authors formulate Theorem 3.1 and prove it in Appendix A. However, it is not clearly connected to the proposed claim (i.e., continuous-discrete mismatch). There can of course be many solutions that satisfy the discrete samples, as they are partial observations of the original continuous source. However, I think this does not directly indicate that the continuous-discrete mismatch is the key challenge in propagating the initial condition when training a PINN. Experimental Designs Or Analyses: More detailed experiments are required to validate that the proposed methods are effective at addressing the continuous-discrete mismatch and simplicity bias. For example, in Section 6.4 (Ablation Study), PINNMamba without Sub-Sequence Align (in Table 3) shows a higher loss than the original PINNMamba. This number does not directly indicate that the simplicity bias is relieved, and related discussion is missing in the section. At the same time, I think a "relieved continuous-discrete mismatch and simplicity bias" is challenging to validate in general with a few empirical results, and hence it would be nice to present relevant analytical studies. Supplementary Material: Yes, the proof of Theorem 3.1 and training details. Relation To Broader Scientific Literature: Physics-informed neural networks (PINNs) have a wide range of applications, as PDEs are a fundamental component of scientific research and applications. This paper focuses on enhancing the learnability of PINNs, particularly when the discrete training data degrades the training process and the model easily gets trapped in a naive local minimum. Such scenarios can often happen in practical setups. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the constructive comments from reviewer PWyp and the time spent on reviewing this paper. We address the questions and clarify the issues accordingly as described below. >**[C&E:]** I have concerns on the novelty .... with a toy example in Figure 1 does not sound a challenging problem. **[Re:]** To understand our work's contribution, one must first understand the current status of research on **PINN failure modes**, which is the core problem we are addressing. The PINN failure mode is not a regular learnability problem. It is a problem unique to PINNs and was first identified by Krishnapriyan et al. at NeurIPS 2021. PINN failure modes refer to a phenomenon where the residual loss of a PINN is extremely low, but its average error (even on training collection points) is extremely high. This phenomenon is distinct from concepts like local minima (in a failure mode, the loss can be as small as that of the ground truth), overfitting, and gradient vanishing, where the core problem is optimization or generalization. The essential difficulty of this problem is that the ground truth is unknown, and the only available signal, the fully optimized PDE-based governing loss, is unreliable. In real-world applications, this can result in serious misestimation of the system. The central contribution of our paper is a unique understanding of this unique phenomenon. The fact that the continuous-discrete mismatch and simplicity bias are well known does not mean that their explanatory role for PINN failure modes is well established. The ideas in traditional spatiotemporal modeling do not necessarily apply to PINNs, since training a PINN is a data-free and ground-truth-unknown problem. A great deal of discussion has arisen in the research community regarding PINN failure modes, but to the best of our knowledge, existing work, including PINNsFormer, which introduced the Transformer to PINNs, has failed to intrinsically understand and address them.
The distortions in the propagation of the initial conditions that we describe are empirically observable, as shown in Fig. 1, 2, 6, 7, and 8. We propose that continuous-discrete mismatch and simplicity bias are the core causes of these propagation failures, which is a completely novel understanding of the subject. We also need to point out that the convection problem shown in Fig. 1 is a well-known challenging failure-mode case, proposed by [1] and used by several related works. >**[M&EC:]** technical novelty is limited. **[Re:]** Our contribution lies not only in this simple and effective SSM-based approach but also in the deep understanding of PINN failure modes. Our proposed SSM-based approach is an embodiment of this understanding. Without this understanding, an approach that can address failure modes in such a wide range of equations would never have been produced. In general, we think the machine learning community appreciates and prefers simple and effective methods. Sub-sequence contrastive alignment differs from temporal contrastive learning in that, instead of a model learning contrastively over multiple similar data/frame features, it aims to allow different stages of the time-varying SSM to form a consensus on the prediction of spatiotemporal collection points. Sub-sequence contrastive alignment emphasizes the ability of the model to inherit information at subsequent times and to form an SSM-step-width agreement, while temporal contrastive learning focuses on representation learning of similar data features, which are not available in PINNs. >**[TC:]** Theorem 3.1 ... does not directly indicate that continuous-discrete mismatch is the key challenge in propagating initial condition in training PINN. **[Re:]** What we need to show through Theorem 3.1 is that continuous-discrete mismatch may lead to disconnections in the pattern propagation from the initial condition if only the losses at discrete collection points are optimized.
This is because a pattern defined by a point may only act in its small neighborhood, and this neighborhood may not contain any other collection points, which can therefore cause the failure of propagation. Also, a concurrent work, ProPINN [2] observed a lower gradient correlation phenomenon, which is empirical evidence for our proposal that continuous-discrete mismatch is the cause of propagation failure. >**[ED & A:]** Detailed Ablation and Discussion. **[Re:]** We add some ablation studies to discuss the combinatorial effect. See response to reviewer yZBS. Due to the rebuttal limit, we can't include a more detailed analytical discussion of these ablations but will do so in the next release. Given these explanations, we sincerely invite you to reconsider the rating of our work. [1] Krishnapriyan, et al. "Characterizing possible failure modes in physics-informed neural networks." NeurIPS 2021 [2] Wu, et al. "ProPINN: Demystifying Propagation Failures in Physics-Informed Neural Networks." arXiv:2502.00803
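To make the sub-sequence alignment idea discussed above concrete, here is a minimal, model-agnostic sketch (all names hypothetical; the actual PINNMamba loss may differ): overlapping sub-sequences of collocation times are passed through the same predictor, and predictions at shared time points are penalized for disagreeing.

```python
import numpy as np

def subsequence_align_loss(predict, times, sub_len, stride=1):
    """Mean squared disagreement between overlapping sub-sequence
    predictions at shared time indices.

    predict: maps an array of times (one sub-sequence) to an array
    of predictions, one per time point in the sub-sequence.
    """
    n = len(times)
    preds = {}  # time index -> predictions from different sub-sequences
    for start in range(0, n - sub_len + 1, stride):
        out = predict(times[start:start + sub_len])
        for i, y in zip(range(start, start + sub_len), out):
            preds.setdefault(i, []).append(y)
    losses = []
    for ys in preds.values():
        ys = np.asarray(ys)
        # variance of the predictions made for the same time point
        losses.append(np.mean((ys - ys.mean(axis=0)) ** 2))
    return float(np.mean(losses))
```

A predictor whose output at a time point is independent of which window the point falls in incurs zero loss; any window-dependent disagreement is penalized, which is the "agreement across sub-sequences" intuition.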
Summary: The authors identify the cause of PINN failure modes as the mismatch between the continuous nature of PDEs and the discrete nature of sampled observations, and argue that the simplicity bias of PINNs prevents this gap from being fixed. To address this, they propose to use Mamba's sequence modeling ability and enhance it with alignment. ### **After rebuttal** Thanks to the authors for their feedback. I think my concerns have been addressed, and I would like to keep my score 3 unchanged. Claims And Evidence: Yes. The experimental results demonstrate the claims of good performance. Methods And Evaluation Criteria: Yes. Theoretical Claims: I checked the proof. Experimental Designs Or Analyses: Yes. The experimental designs and analyses are sound. Supplementary Material: Yes. I checked all the supplementary material. Relation To Broader Scientific Literature: The paper is based on related work on PINN failure modes (Krishnapriyan et al., 2021), state-space models (Gu & Dao, 2023), and sequence modeling for PDEs (Zhao et al., 2024). However, it lacks thorough comparisons with adaptive sampling (Wu et al., 2023; Gao et al., 2023) and optimization strategies like RoPINN (Wu et al., 2024) and NTK-based methods (Wang et al., 2022), which also address PINN failure modes. Essential References Not Discussed: Based on my limited knowledge, I am unable to identify any missing essential related works. Other Strengths And Weaknesses: Strengths: 1. Proposes PINNMamba, integrating a Selective SSM (Mamba) with PINNs to improve temporal information propagation and mitigate failure modes. 2. Demonstrates significant performance gains on benchmark PDEs, reducing errors compared to previous PINN architectures. 3. This paper is very clearly written. Weaknesses: 1. Claims to resolve the continuous-discrete mismatch but still relies on a discretized version of Mamba, which remains dependent on time-step selection.
Therefore, the continuous-discrete mismatch still persists, making this an ad-hoc rather than a fundamental solution. 2. Although the authors mentioned various methods for addressing failure modes in the related works, including optimization techniques (Wu et al., 2024; Wang et al., 2022a), adaptive sampling strategies (Gao et al., 2023; Wu et al., 2023), model architectures (Zhao et al., 2024; Cho et al., 2024; Nguyen et al., 2024b), and transfer learning approaches (Xu et al., 2023; Cho et al., 2024), they did not provide a thorough comparison with these methods in their analysis or experiments. Other Comments Or Suggestions: 1. The theoretical result, Theorem 3.1, is interesting. It formalizes the well-known issue that optimizing PDE constraints at discrete points does not ensure global correctness. Though interesting, it is not novel. There is also no theory justifying why PINNMamba outperforms PINN. 2. The proposal cannot fundamentally resolve simplicity bias. While sub-sequence contrastive alignment mitigates the issue by enforcing consistency across overlapping sub-sequences, it does not eliminate the tendency of neural networks to favor simpler solutions. Questions For Authors: 1. Novelty: How is this work fundamentally different from PINNsFormer (Zhao et al., 2024), given that it seemingly replaces Transformers with Mamba, a common modification in recent models? 2. Writing: Is there a typo in Line 326 "$(x,k+\Delta t)$"? 3. On Page 17, the authors say "because solving the remaining problems with a sequence-based PINN model will cause an out-of-memory issue". Does this mean PINNMamba needs more memory than traditional solutions? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the constructive comments from reviewer Ce7f and the time spent on reviewing this paper. We address the questions and clarify the issues accordingly as described below.

>**[W1]**: Claims to resolve the continuous-discrete mismatch but still relies on a discretized version of Mamba...

**[Response to W1]**: We need to clarify that we do not claim to solve the continuous-discrete mismatch. In fact, we think that a computer's finite precision inherently makes it impossible to describe continuous systems (including continuous-time SSMs). Instead, we can only try to mitigate the effect of this mismatch. A discretized SSM is a description of the behavior of a continuous SSM. The point here is that even a discrete-time SSM **describes the dynamics in a continuous/differential manner**, since there is a set of rules for a direct mapping between a discretized SSM and a continuous SSM. Although it is actually an approximation of continuous dynamics, this ability to directly approximate/describe continuous dynamics is not available with traditional neural network models such as MLPs and Transformers. Indeed, SSM may not be the ultimate solution to this problem, but we have still taken a large step towards addressing the continuous-discrete mismatch. We hope that our work will inspire more researchers to achieve a better solution to this problem.

>**[W2]**: Comparison with other methods.

**[Response to W2]**: Since our approach focuses on model architecture, we have mainly listed multiple model architectures as baselines. But we do value your opinion, and we will include the following results in the next release. Since the experimental settings of some of the papers differ from ours and the source code is not published, we report those works that we successfully reproduced.

|Model|Convection rMAE|Convection rRMSE|Reaction rMAE|Reaction rRMSE|Wave rMAE|Wave rRMSE|
|-|-|-|-|-|-|-|
|RoPINN (Wu. 2024 NeurIPS) (Optimization)|0.635|0.720|0.056|0.095|0.063|0.064|
|P2INN (Cho. 2024 ICML) (Model Archi)|0.1023|0.1035|0.0098|0.0227|0.2134|0.2157|
|DCGD (Hwang. 2024 NeurIPS) (Optimization)|0.0232|0.0246|0.9780|0.9800|OOM|OOM|
|R3 (Daw. 2023 ICML) (Sampling)|0.0267|0.0277| | | | |
|PINNMamba|0.0188|0.0201|0.0094|0.0217|0.0197|0.0199|

>**[C&S1]**: Theorem is not novel. There is also no theory justifying why PINNMamba outperforms PINN.

**[Response to C&S1]**: Theorem 3.1 is very intuitive, so we also tried to find a direct proof of Theorem 3.1 when preparing the manuscript, but failed. We would welcome some relevant references from the reviewers. As for a theory justifying why PINNMamba outperforms PINN: we agree that an in-depth theoretical analysis of the proposed method is helpful to better understand our model. But to be honest, we currently could not find a good way to rigorously prove such mechanisms in deep networks, since theoretically analyzing a complex system like PINNMamba is very difficult. However, we still have some intuitive and empirical analysis to show the methodology and philosophy behind our continuous-discrete mismatch and simplicity bias perspective. We hope such analysis can help readers better understand our motivation and provide some guidance for designing better models in future research.

>**[C&S2]**: The proposal cannot fundamentally resolve simplicity bias.

**[Response to C&S2]**: As previous work [1,2,3] has pointed out, simplicity bias is an intrinsic problem for neural networks. Currently, there is no single way to completely solve this problem, only to evade or mitigate it.

[1] H. Shah. The pitfalls.. NeurIPS 2020
[2] D. Teney. Evading the simplicity bias.. CVPR 2022
[3] R. Tiwari. Overcoming simplicity bias.. ICML 2023

>**[Q1]**: Novelty: How is this work fundamentally different from PINNsFormer, given that it seemingly replaces Transformers with Mamba...

**[Response to Q1]**: Our work is not a simple block-wise replacement.
Although both works are based on model architecture, they are quite different in the following points. 1. Macro Architecture: PINNsFormer employs an Encoder-Decoder architecture, which we found unnecessary for PINNs. So in PINNMamba, we use an Encoder-Only architecture for better performance and efficiency. 2. Sequence Modeling: As shown in Fig. 5, PINNsFormer uses a pseudo-sequence, which is not a real sequence of collection points but a regional extrapolation, which cannot propagate information. We build collection-point sub-sequences and an alignment to realize information propagation across time. 3. Understanding: Our work provides a deeper understanding of PINN failure modes, namely that they are caused by the continuous-discrete mismatch and simplicity bias. We believe it can inspire more valuable work in the future. >**[Q2]**: typo **[Response to Q2]**: It should be $(x,t+k\Delta t)$. We will fix it. >**[Q3]**: Memory Usage **[Response to Q3]**: We have greatly optimized memory usage in a follow-up. The OOM is no longer an issue when using a reduced model size. See response to reviewer yZBS.
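The "direct mapping between a discretized SSM and a continuous SSM" mentioned in the response to W1 above can be illustrated, for a plain linear SSM (not Mamba's selective variant), by zero-order-hold discretization; a minimal sketch with a scalar/diagonal state (names hypothetical):

```python
import numpy as np

def zoh_discretize(a, b, delta):
    """Zero-order-hold discretization of the scalar (or diagonal) SSM
    h'(t) = a h(t) + b x(t), a != 0:
        h_{k+1} = abar h_k + bbar x_k,
    with abar = exp(delta a), bbar = (exp(delta a) - 1) / a * b."""
    abar = np.exp(delta * a)
    bbar = (abar - 1.0) / a * b
    return abar, bbar

def simulate(a, b, x_seq, delta, h0=0.0):
    """Run the discrete recurrence over a piecewise-constant input."""
    abar, bbar = zoh_discretize(a, b, delta)
    h = h0
    for x in x_seq:
        h = abar * h + bbar * x
    return h
```

The design point behind the W1 argument: `(abar, bbar)` are exact functions of the continuous parameters `(a, b, delta)`, so for piecewise-constant inputs the discrete recurrence reproduces the continuous-time solution at the sample points exactly, for any step size, rather than merely approximating it.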
Summary: This paper proposes PINNMamba, enabling PINNs with the state-space model's continuous-discrete capability to address the limitations (simplicity bias and continuous-discrete mismatch) of existing PINNs.

Claims And Evidence: 1. Mainstream PINNs predominantly use MLPs and suffer from the inability to accurately propagate physical patterns informed by initial conditions. Evidenced by a toy example in Fig. 1. 2. PINNs have issues of continuous-discrete mismatch and simplicity bias. Evidenced by Section 3.

Methods And Evaluation Criteria: The authors propose to use SSMs to address the continuous-discrete mismatch and sub-sequence alignment to mitigate the simplicity bias limitation.

Theoretical Claims: Theorem 3.1 is proposed to show the continuous-discrete mismatch failure mode of PINNs by demonstrating that there exist infinitely many functions consistent with a given discrete collection of points.

Experimental Designs Or Analyses: The authors evaluated the approach on three public benchmarks (convection, wave, and reaction equations) and compared it with other approaches such as PINN, QRes, PINNsFormer, and KAN.

Supplementary Material: Code is provided as part of the supplementary material. I have also read the appendix section in the supplementary material.

Relation To Broader Scientific Literature: Addressing the failure modes in PINNs can potentially be applied to a wide range of scientific and engineering disciplines, such as computational fluid dynamics.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and easy to follow. 2. From the results presented by the authors, the proposed approach consistently outperforms other approaches in the three tested benchmarks. Weaknesses: 1. The proposed approach is computationally expensive, as shown in Table 5 of the supplementary material. The memory overhead is the second largest, and the training time is seven times slower than that of PINN.
Other Comments Or Suggestions: See the questions section.

Questions For Authors:
1. From the sensitivity analysis, the proposed approach is very sensitive to the sub-sequence length, especially comparing lengths 3 and 5 (about 70 times smaller). Are there any explanations for such a big gap?
2. The current ablation study only removes one component at a time, making it difficult to see the effect of other combinations of the proposed components.
3. Given that the computational overhead of the proposed PINNMamba encoder is significantly higher than PINN's, it would be interesting to see the performance comparison using a similarly sized model to PINN (i.e., with a smaller embedding size).

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely appreciate the constructive comments from reviewer yZBS and the time spent on reviewing this paper. We address the questions and clarify the issues accordingly as described below.

>**[W1]**: The proposed approach is computationally expensive, as shown in Table 5 of the supplementary material. The memory overhead is the second largest, and the training time is seven times slower than that of PINN.

>**[Q1]**: From the sensitivity analysis, the proposed approach is very sensitive to the sub-sequence length, especially comparing the length of 3 and 5 (about 70 times smaller). Are there any explanations for such a big gap?

**[Re W&Q1]**: We answer these two questions together. We start by fixing a bug in Table 6. In our follow-up experiments, we found an error in the experimental data for sequence lengths 3 and 5 in Table 6. The reason for this error is that we accidentally reduced the numerical precision of the calculations when performing these two sets of experiments, which led to an unexpected performance degradation of PINNMamba under these two parameter settings. So the conclusion of the original submission about the sensitivity to sequence length needs to be corrected, and we will fix this in the next release. Our latest experimental data are shown in the tables below.

Convection:

|Length|MLP width|rMAE|rRMSE|Mem.|Time/iter|
|---|---|---|---|---|---|
|3|512|0.0102|0.0126|4042MB|1.10s|
|5|512|0.0059|0.0068|6020MB|1.59s|
|7|512|0.0188|0.0201|7899MB|1.99s|

Reaction:

|Length|MLP width|rMAE|rRMSE|Mem.|Time/iter|
|---|---|---|---|---|---|
|3|512|0.0164|0.0352|4042MB|0.90s|
|5|512|0.0109|0.0244|6020MB|1.25s|
|7|512|0.0094|0.0217|7899MB|1.56s|

This group of experiments shows that our model can reduce computational and memory overhead by reducing the sequence length. There is a slight accuracy degradation on the reaction problem, but the performance is even better on the convection problem.
Nevertheless, they successfully address the failure modes in all cases. On the other hand, we found that setting the MLP layer's width to 512 is unnecessary. Reducing the MLP width from 512 to 32 yields a model that still resolves the failure modes.

Convection:

|Length|MLP width|rMAE|rRMSE|Mem.|Time/iter|
|---|---|---|---|---|---|
|3|32|0.0140|0.0167|1900MB|0.79s|
|5|32|0.0284|0.0321|2586MB|1.16s|
|7|32|0.0240|0.0269|3310MB|1.35s|

Reaction:

|Length|MLP width|rMAE|rRMSE|Mem.|Time/iter|
|---|---|---|---|---|---|
|3|32|0.0042|0.0085|1900MB|0.62s|
|5|32|0.0032|0.0069|2586MB|0.82s|
|7|32|0.0060|0.0126|3310MB|1.01s|

If we set the sequence length to 3 and the MLP width to 32, the memory usage is 1900MB, only slightly larger than the 1605MB of vanilla PINN. Given that the failure modes are addressed and the rMAE is only 1/60th of vanilla PINN's, we consider this slight increase in memory and computational overhead to be insignificant.

Our model is robust w.r.t. sequence length and MLP width: models with a smaller sequence length and MLP width can also eliminate the failure modes. This further enhances the generalizability of our approach. The gap between sequence lengths 3 and 5 is in fact not large, but when the length is 1, the model degrades to PINN and the failure modes reappear. We sincerely apologize for this confusion.

>**[Q2]**: The current ablation study only removes one component at a time, making it difficult to see the effect of other combinations of proposed components.

**[Re Q2]**: We added the following experiments.
|Model|Convection rMAE|Convection rRMSE|Reaction rMAE|Reaction rRMSE|Wave rMAE|Wave rRMSE|
|-|-|-|-|-|-|-|
|PINNMamba|0.0188|0.0201|0.0094|0.0217|0.0197|0.0199|
|-Sub Seq Align & Time Varying SSM|0.1534|0.1572|0.0333|0.0351|0.0701|0.0702|
|-Sub Seq Align & SSM|0.9833|0.9836|0.9801|0.9821|0.4211|0.4431|
|-Sub Seq Align & Wavelet|0.5000|0.5021|0.0345|0.0399|0.3519|0.3573|
|-Time Varying SSM & Wavelet|0.3921|0.3971|0.0287|0.0287|0.2919|0.2873|
|-SSM & Wavelet|1.0263|1.0348|0.9987|0.9988|0.5137|0.5222|
|-Sub Seq Align & Time Varying SSM & Wavelet|0.3472|0.3521|0.0487|0.0534|0.3437|0.3492|
|-Sub Seq Align & SSM & Wavelet|1.2263|1.2748|0.9821|0.9833|0.5421|0.5453|

These experimental data show that eliminating the continuous-discrete mismatch with the SSM is the most important factor, illustrating the significance of the PINNMamba model. Sub-Sequence Alignment and the Time-Varying SSM also play an important role in eliminating simplicity bias, and the wavelet activation function leads to better numerical precision.

>**[Q3]**: ... it would be interesting to see the performance comparison using a similarly sized model to PINN (i.e., with a smaller embedding size).

**[Re Q3]**: For the convection problem, we further reduced the width of the MLP and Mamba blocks to 8. This reduces the memory overhead of the model to 1300MB and the average optimization time per iteration to 0.23s (less than PINN). The rMAE is 0.0262 and the rRMSE is 0.0346. It addresses the failure modes and outperforms all baselines.
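Since the discussion in this thread repeatedly leans on how SSMs bridge a continuous ODE and a discrete sequence, here is a minimal sketch of a scalar state-space model with zero-order-hold (ZOH) discretization, the mechanism that Mamba-style blocks use in vectorized, input-dependent form. This is an editorial illustration, not the authors' implementation; all names are ours.

```python
import math

def discretize_zoh(a, b, dt):
    """Zero-order-hold discretization of the scalar continuous SSM
    x'(t) = a*x(t) + b*u(t): returns (a_bar, b_bar) such that
    x[k+1] = a_bar * x[k] + b_bar * u[k] (exact for a != 0)."""
    a_bar = math.exp(a * dt)
    b_bar = (a_bar - 1.0) / a * b
    return a_bar, b_bar

def run_ssm(a, b, c, dt, inputs, x0=0.0):
    """Roll the discretized SSM over an input sequence, emitting y[k] = c*x[k]."""
    a_bar, b_bar = discretize_zoh(a, b, dt)
    x, outputs = x0, []
    for u in inputs:
        x = a_bar * x + b_bar * u
        outputs.append(c * x)
    return outputs
```

Because `a_bar = exp(a*dt)`, the discrete recurrence reproduces the continuous dynamics exactly at the grid points for piecewise-constant inputs; this built-in consistency between the continuous system and its discrete samples is the "articulation" property that a plain MLP over discrete collocation points lacks.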
Summary: The paper introduces PINNMamba, a framework that integrates state space models (SSMs) into physics-informed neural networks (PINNs) to address failure modes in solving partial differential equations (PDEs). It identifies two key issues: the continuous-discrete mismatch, which disrupts initial condition propagation, and simplicity bias, which leads to over-smoothed solutions. PINNMamba mitigates these by using SSMs for continuous-discrete articulation and sub-sequence modeling to enhance pattern propagation. Experimental results show that PINNMamba outperforms existing PINN architectures in solving PDEs with improved accuracy and generalization.

Claims And Evidence: The paper's claims are supported by theoretical analysis, empirical experiments, and comparative evaluations. PINNMamba's ability to mitigate continuous-discrete mismatch and simplicity bias is justified through theory and demonstrated by improved accuracy over baselines. The effectiveness of sub-sequence modeling and state space models (SSMs) is validated through ablation studies and performance metrics.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem and align with prior works. The use of state space models (SSMs) and sub-sequence modeling effectively addresses known limitations in PINNs, and the evaluation on standard PDE benchmarks ensures comparability with existing approaches. The inclusion of ablation studies and comparisons with multiple baselines further supports the validity of the methodology.

Theoretical Claims: I do not specialize in PINNs, so I can only perform a general check of the theoretical claims. The proofs appear logically structured and follow standard mathematical reasoning, but I rely on other reviewers with more expertise in this area to verify their correctness in detail.

Experimental Designs Or Analyses: The experimental designs and analyses appear sound.
The paper evaluates PINNMamba on standard PDE benchmarks, compares it against multiple baselines, and includes ablation studies to validate key design choices. The metrics used, such as relative MAE and RMSE, are appropriate for assessing model accuracy. I did not identify any major issues.

Supplementary Material: I reviewed the PDE setups, training details, and additional results on the Navier-Stokes equation and the PINNacle benchmark. These sections provide further context on the experimental setup and support the main paper's claims.

Relation To Broader Scientific Literature: The paper builds on prior work in PINNs and addresses known failure modes by incorporating state space models (SSMs) and sub-sequence modeling. It aligns with existing research on improving PINN stability and accuracy, particularly studies that explore sequential modeling and optimization strategies to mitigate simplicity bias and continuous-discrete mismatch.

Essential References Not Discussed: The paper provides a strong review of PINNs but could benefit from discussing other machine learning approaches for solving PDEs, particularly graph neural network (GNN)-based methods like MeshGraphNet. Including a discussion of such methods would provide a more comprehensive view of the broader landscape of neural-network-based PDE solvers.

Other Strengths And Weaknesses: The authors placed related works in the appendix, which I find unusual and not ideal. Integrating it into the main text would provide better context for their contributions. They could reduce the length of the methods and introduction sections to make space. Additionally, some results, such as Table 2 (which studies the effect of training strategies), could be moved to the appendix to streamline the main presentation.

Other Comments Or Suggestions:
- Reorganize Related Works: Move the related works section from the appendix into the main text for better context on contributions.
- Condense Writing in Methods/Intro: The methods and introduction sections could be more concise to improve readability. Some results, such as Table 2, could be moved to the appendix.

Questions For Authors: The paper has a major limitation in memory usage, which the authors acknowledge. However, a deeper analysis of the computational bottleneck is needed to determine which operations cause the issue. Additionally, would reducing the sequence length or modifying memory-intensive operations help mitigate these issues? Given that PINNsFormer also encounters OOM errors, is there a fundamental limitation in sequence-based PINNs, and would this prevent the method from scaling up to large-scale simulations?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely appreciate the constructive comments from reviewer PwV2 and the time spent on reviewing this paper. We address the questions and clarify the issues accordingly as described below.

>**[W1]**: The authors placed related works in the appendix, which I find unusual and not ideal. Integrating it into the main text would provide better context for their contributions.

**[Response to W1]**: We fully agree. We will place the related works section in the main text in the next release, since an additional page will be allowed in the camera-ready version according to the ICML author guidance.

>**[Un RW]**: ... Including a discussion of such methods (NN/graph-based PDE solvers) would provide a more comprehensive view of the broader landscape of neural-network-based PDE solvers.

**[Response]**: We will add the following paragraph to the related works section.

**Learning-Based PDE Solvers.** In addition to Physics-Informed Neural Networks (PINNs), other neural-network-based approaches have emerged for solving PDEs, each offering unique advantages. Graph Neural Networks (GNNs) like MeshGraphNet [1] excel at handling irregular domains by treating computational meshes as graphs, making them particularly effective for complex geometries. Neural operators [2], including Fourier Neural Operators (FNOs) [3] and Graph Neural Operators (GNOs) [4], learn mappings between function spaces, enabling generalization across different PDE parameters without retraining. Hybrid approaches combine neural networks with traditional numerical methods, such as neural finite element techniques [5], to enhance solver efficiency. While PINNs uniquely enable data-free solutions through direct enforcement of physics constraints, GNNs and neural operators provide complementary capabilities: GNNs for mesh-based problems and neural operators for parametric systems.
These diverse approaches collectively demonstrate the expanding toolkit of neural-network-based PDE solvers across scientific computing applications.

[1] Pfaff, Tobias, et al. "Learning mesh-based simulation with graph networks." In ICLR, 2021.
[2] Kovachki, Nikola, et al. "Neural operator: Learning maps between function spaces with applications to PDEs." In JMLR, 2023.
[3] Li, Zongyi, et al. "Fourier Neural Operator for Parametric Partial Differential Equations." In ICLR, 2021.
[4] Li, et al. "Multipole graph neural operator for parametric partial differential equations." In NeurIPS, 2020.
[5] Hennigh, Oliver, et al. "NVIDIA SimNet™: An AI-accelerated multi-physics simulation framework." International Conference on Computational Science, 2021.

>**[W2]**: Reduce the length of the methods and introduction, and move some results to the appendix.

**[Response to W2]**: We will streamline these sections in the next release depending on space.

>**[Q]**: About the memory usage.

**[Response to Q]**: The main reason for the large memory usage is that we initially set the sequence length to 7 and the MLP width to 512 in our main experiment, in order to make a fair comparison with the then-SOTA model PINNsFormer. High memory usage is one of the main drawbacks of sequence modeling for PINNs, as gradient information needs to be preserved for every point in the sequence. However, we found in a follow-up that our model is robust w.r.t. sequence length and MLP width: a model with a smaller sequence length and MLP width can also eliminate the failure modes. This further enhances the generalizability of our approach. To scale to more complex problems, we suggest reducing the sequence length or the MLP width. Our follow-up experiments show that the model is not very sensitive to sequence length and that a sequence length of 3 makes PINNMamba very effective in combating the failure modes.
(This is contrary to our results in Table 6 because we found that we had incorrectly set the computational precision in the length-3 and length-5 settings of our original sensitivity analysis, resulting in severe performance degradation. We will fix this in the next release.) Adjusting the sequence length to 3 reduces the memory overhead of the model by about 57%. In addition, we found that a large MLP width (512) is not necessary: setting the width to 32 is sufficient to make PINNMamba effective, saving up to 53% of memory usage. The combination of the two adjustments reduces memory consumption from 7899MB to 1900MB on the convection problem. We added the following experiments. It is worth noting that the model successfully eliminates the failure modes under all these settings. When the length is set to 1, PINNMamba degrades to PINN and the failure modes reappear.

|Length|MLP width|rMAE|rRMSE|Memory|Time/iter|
|---|---|---|---|---|---|
|3|32|0.0140|0.0167|1900MB|0.79s|
|5|32|0.0284|0.0321|2586MB|1.16s|
|7|32|0.0240|0.0269|5932MB|1.35s|
|3|512|0.0102|0.0126|4042MB|1.10s|
|5|512|0.0059|0.0068|6020MB|1.59s|
|7|512|0.0188|0.0201|7899MB|1.99s|
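For reference when reading the rMAE/rRMSE tables in this thread: these are the usual relative error metrics. The sketch below is an editorial illustration under the common convention (absolute or L2 error normalized by the corresponding magnitude of the ground truth); the authors' exact normalization is defined in their paper, not here.

```python
import math

def rmae(pred, true):
    """Relative MAE: total absolute error normalized by the total
    magnitude of the ground-truth solution."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / sum(abs(t) for t in true)

def rrmse(pred, true):
    """Relative RMSE: L2 norm of the error normalized by the L2 norm
    of the ground-truth solution."""
    err = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)))
    ref = math.sqrt(sum(t ** 2 for t in true))
    return err / ref
```

Under this convention, an rMAE of 0.0188 means the accumulated absolute error is roughly 1.9% of the solution's total magnitude, while values near 1.0 (as in several ablated variants) indicate a prediction no better than outputting zero.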
UP-VLA: A Unified Understanding and Prediction Model for Embodied Agent
Accept (poster)
Summary: This paper presents a new training paradigm for Vision-Language-Action (VLA) models by training with both multi-modal understanding and future prediction objectives. Multi-modal understanding takes the form of question answering given a paired image and text, and future visual prediction aims to predict the target image given the current image and instruction pair. Prior research usually tackles one of these two, which the authors frame as "high-level semantic content" and "low-level feature". Experiments are conducted on the CALVIN simulation benchmark and a real-world manipulation environment to verify the effectiveness of the UP-VLA framework.

## post rebuttal

I appreciate the authors' rebuttal, especially about pretraining and the extra experiment results. After reading opinions from other reviewers, I think the paper presents clear evidence that pretraining with mixed multi-modal understanding and future prediction objectives helps VLA models, and I will keep my original recommendation of accept.

Claims And Evidence: I think the major claim of the paper, "combining both vision-language understanding and future prediction objectives, enabling the capture of both high-level semantic and low-level visual patterns essential for embodied agents", is verified by the experimental results compared with the baselines on the CALVIN benchmark and real-world manipulation tasks. It is further verified by the ablative study, where a significant difference is observed when dropping either of them.

Methods And Evaluation Criteria: While visual prediction and language prediction are quite mature in model architecture on their own, and there are existing models that aim to make multi-modal predictions, I believe the proposed method, especially the unified prompting and attention mechanism, is reasonable for verifying the effectiveness of different pre-training objectives.

Theoretical Claims: No major theoretical claims are made in the paper.
Experimental Designs Or Analyses: The experiment results on the CALVIN benchmark demonstrate that the proposed method outperforms the listed baselines on the zero-shot evaluation. However, this cannot fully verify the proposed dual-objective pretraining, as the pretraining data and model architecture vary across different methods. This is similar in real-world manipulation tasks. The difference in results looks statistically significant in both settings. The ablation results more evidently demonstrate the effectiveness of the proposed pretraining paradigm.

Supplementary Material: Yes, the code is presented in the supplementary material. However, I cannot verify that it leads to the results in the paper.

Relation To Broader Scientific Literature: I think improving effectiveness and generalizability is a major focus of VLA models in the era of embodied AI, and I think the major problem the paper aims to tackle is relevant and important.

Essential References Not Discussed: Not that I'm aware of.

Other Strengths And Weaknesses:

[+] I find the motivation of this paper to introduce both multi-modal understanding and visual prediction as pretraining valid, the architecture design is reasonable, and the experiments verify its effectiveness.

[-] I'm curious about the quantitative performance of multi-modal understanding and visual prediction after the pretraining stage, on top of the action performance currently presented in the paper. Could the model address some multi-modal tasks directly? How is the performance of next-frame prediction, working as a world model? I think providing such evaluations will further help people understand how the model performs before and after the fine-tuning stage.

[-] Another thing I find interesting, but currently missing in the paper, is potential insights from the pretraining. Do the two objectives conflict with each other? Or does the pretraining simply just work?

Other Comments Or Suggestions: Missing punctuation (periods or commas) in the equations.
Questions For Authors: See the strengths and weaknesses section.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We sincerely appreciate your time and efforts in reviewing our paper! Based on your review, we have added a detailed discussion.

---

**Q1: I'm curious about the quantitative performance of multi-modal and visual prediction after the pretraining stage, on top of the action performance currently presented in the paper.**

ANS: We would like to clarify that our primary focus is on robot decision-making. Due to limited computational resources, our pretraining stage uses the LLaVA instruction-tuning dataset for multi-modal understanding (MMU) training and the Bridge robot dataset for future visual prediction training. Our pretraining process adapts the general-purpose Show-o model to robotic domain-specific tasks, which may come at the cost of reduced performance in general settings, such as VQA, OCR, and text-to-image generation. In our approach, we expect the VLM to learn visual predictions that are consistent with language. Although this may sacrifice some ability on general tasks, it is more conducive to visual prediction and low-level action learning. Results in both simulation and the real world show that this adaptation significantly benefits robot control tasks.

---

**Q2: Another thing I find interesting, but currently missing in the paper, is potential insights from the pretraining. Do the two objectives conflict with each other? Or is the pretraining simply just working?**

ANS: Thank you for your insightful comments! Pretraining models for both multi-modal understanding and generation is becoming increasingly popular. Many works in the generative modeling field have explored mixed-pretraining approaches, such as Transfusion [1] and Show-o [2]. Their original objective functions focus on multi-modal understanding and text-to-image generation. Inspired by the success of these models, we designed an objective function tailored for embodied agents, combining MMU tasks with future image prediction.
Since our model builds upon the Show-o framework, we observed that pretraining proceeds without conflicts, and the training remains stable without collapse.

[1] Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model. https://arxiv.org/abs/2408.11039
[2] Show-o: One Single Transformer to Unify Multimodal Understanding and Generation. https://arxiv.org/abs/2408.12528

---

Thank you again for your time and effort in reviewing our work! We hope this clarification resolves your concerns!
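Schematically, a mixed pretraining objective of this kind is a weighted sum of a next-token multi-modal-understanding loss and a future-image prediction loss. The notation and weights below are illustrative only, not the paper's exact formulation:

```latex
% \lambda_{\text{mmu}}, \lambda_{\text{pred}} are illustrative mixing weights;
% an action loss would enter only in the downstream policy-learning stage.
\mathcal{L}_{\text{pretrain}}
  = \lambda_{\text{mmu}} \, \mathcal{L}_{\text{MMU}}
  + \lambda_{\text{pred}} \, \mathcal{L}_{\text{future-img}} .
```

Stability under such a sum is what the response above reports empirically: neither term dominates or destabilizes the other during training.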
Summary: The paper presents UP-VLA, a unified Vision-Language-Action (VLA) model designed for embodied agents. The model aims to enhance both high-level semantic comprehension and low-level spatial understanding by integrating multi-modal understanding and future prediction objectives, whereas current VLMs focus on high-level semantics while neglecting fine-grained spatial and dynamic features.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Please refer to 'Other Strengths and Weaknesses.'

Theoretical Claims: Application paper without theoretical claims.

Experimental Designs Or Analyses: Please refer to 'Other Strengths and Weaknesses.'

Supplementary Material: I reviewed the code in the supplementary material, but the absence of a README file makes it difficult to execute.

Relation To Broader Scientific Literature: Effectively constructing a VLA model for robotics.

Essential References Not Discussed: This paper lacks a comparison with the latest VLA models.

Other Strengths And Weaknesses:

Strengths:
1. The paper proposes a unified training approach that merges semantic understanding and future prediction, incorporating three types of datasets: vision-language datasets for high-level semantic understanding, internet video datasets for low-level visual dynamics, and robotic action datasets for embodied control. Through the constructed dataset and training strategy, UP-VLA explicitly enhances physical spatial comprehension.
2. UP-VLA achieved promising results on the CALVIN simulation benchmark, demonstrating strong long-horizon manipulation capabilities.

Weaknesses:
1. The primary concern is the model's efficiency. While incorporating future image prediction before action prediction can significantly enhance performance, it raises the question of whether this autoregressive next-token prediction paradigm could lead to a substantial decrease in inference speed.
The author's exploration is interesting, but inference speed plays a critical role in determining the robot's control frequency, which is essential in robotics.
2. When pretraining on an internet video dataset, if robotic videos are used for future prediction, why not pretrain action prediction simultaneously? For example, by leveraging large-scale simulator data or real-world robotic data (e.g., Open X-Embodiment or DROID).
3. The paper's writing is incomplete and lacks many details. For example: Which internet videos were used for pretraining? Why are human hands present in Figure 5? Why is there no execution video demonstration?
4. Lacks comparison with SOTA autoregressive-based VLA methods, such as OpenVLA.

Other Comments Or Suggestions: Please refer to 'Other Strengths and Weaknesses.'

Questions For Authors: Please refer to 'Other Strengths and Weaknesses.'

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely appreciate your time and efforts in reviewing our paper! Based on your review, we have added a detailed discussion and additional experiments.

---

**Q1: About model efficiency**

ANS: We would like to clarify that UP-VLA operates at **almost the same control frequency as previous VLA methods**. We do not use autoregressive image generation; we **jointly predict future images and actions** with a single forward LLM pass during policy execution, resulting in a computational cost similar to prior VLA models. The table below presents runtime efficiency based on the same backbone (Show-o), comparing models with and without image prediction (measured as the reciprocal of the average inference time over 100 iterations on the same hardware):

| Method | Inference Speed ↑ |
|---------------|-------------------|
| UP-VLA-RT-2* | 13.2Hz |
| UP-VLA | 13.0Hz |

Additionally, our method employs an action chunk size of 10. We observed that action chunking further enhances performance and produces smoother trajectories.

---

**Q2: When pretraining on an internet video dataset, if robotic videos are used for future prediction, why not pretrain action prediction simultaneously?**

ANS: Thank you for your question. We did not include action prediction in the pretraining stage for two reasons: (1) Different robot datasets have varying action spaces with distinct meanings, requiring additional design effort to standardize them under a unified output format. (2) We believe that robot trajectory videos provide more consistent representations across datasets and already contain valuable dynamics information.

Following your suggestion, we conducted an additional experiment incorporating robot action learning during pretraining while keeping the architecture unchanged (by padding actions to the same dimensions). The results indicate that this modification provides no benefit.
We hypothesize that this is because RGB images already contain sufficient prior information, and the differences in action spaces across robots make direct action prediction less effective.

| CALVIN ABC-D | Avg Len ↑ |
|---------------|-------------------|
| UP-VLA | 4.08 |
| UP-VLA with Action Pretraining | 3.98 |

---

**Q3: About details: Which internet videos were used for pretraining? Why are human hands present in Figure 5? Why is there no execution video demonstration?**

ANS: We use the Bridge dataset for image-prediction pretraining, as mentioned in Section 5.2. The human hands in Figure 5 serve as disturbances during policy rollout. For your convenience, we have included additional execution details at the following anonymous video link: https://sites.google.com/view/upvla-rebuttal.

---

**Q4: About SOTA autoregressive-based VLA methods**

ANS: Following your suggestions, we conducted experiments on the influential models pi0, OpenVLA, and Octo. Since these methods do not have official test results, we used their open-source code to fine-tune and test them on the CALVIN data (with settings consistent with UP-VLA). The results on the ABC-D task are shown in the table below, where an asterisk indicates results we reproduced using the open-source code.

| Method | Type | Avg. Len ↑ |
| :------: | :------------: | :--------: |
| OpenVLA* | VLA | 1.60 |
| Octo* | VLA | 0.59 |
| pi0* | VLA | 3.63 |
| UP-VLA | Prediction&VLA | 4.08 |

---

Thank you again for your time and effort in reviewing our work! We hope our clarifications resolve your concerns and demonstrate the improved quality of our paper!

---

Rebuttal Comment 1.1:

Comment: Thank you for the rebuttal. I still have two questions I would like to discuss with the authors:
1. Is the inference speed provided by the authors the same as the model's inference speed, or is it the model's inference speed multiplied by the action chunk size?
2.
Can a detailed manipulation success rate be provided for comparison with other SOTA autoregressive-based VLA methods?

-------

Thank you for the authors' response (shown below). The responses address all of my previous concerns, and I will raise my rating to "weakly accept." Finally, I hope the authors can perform further real-world validation on a broader range of long-horizon and non-pick-and-place tasks.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer dQTi:

We are delighted to receive your response! We are glad to provide further details below:

---

Q1: Is the inference speed provided by the authors the same as the model's inference speed, or is it the model's inference speed multiplied by the action chunk size?

ANS: **The inference speed is the same as the model's forward inference speed.** A forward pass of the 1.3B VLM costs about 80ms, which results in a 13Hz control frequency. If we multiply by the action chunk size (which is set to 10 in our configuration), the final frequency is approximately 130Hz. More detailed information is provided below:

| | Inference Speed (without action chunk) | Action Frequency (with action chunk) |
| ------------------ | -------------------------------------- | ------------------------------------------ |
| OpenVLA (7B) | ~4Hz | ~80Hz (action chunk of 25 in **OpenVLA-OFT**) |
| UP-VLA-RT-2 (1.5B) | ~13Hz | / |
| UP-VLA (1.5B) | ~13Hz | ~130Hz (action chunk of 10) |

Since we **jointly predict the image and actions within a single LLM forward pass**, the model's inference speed is nearly the same as that of previous VLA methods.

---

Q2: Can a detailed manipulation success rate be provided for comparison with other SOTA autoregressive-based VLA methods?

ANS: Yes! We are happy to provide a more detailed success rate comparison on the standard simulated CALVIN ABC-D benchmark.
The CALVIN ABC-D benchmark consists of 34 different types of manipulation tasks, where agents are required to complete five randomly selected tasks sequentially, following a given instruction. Here is the detailed manipulation success rate.

| **Success Rates** | Type | 1st task | 2nd task | 3rd task | 4th task | 5th task | Num Task Success ↑ |
| :------: | :------------: | :---: | ----- | ----- | ----- | ----- | ---------- |
| Openvla* | VLA | 73.1\% | 42.4\% | 24.0\% | 12.9\% | 7.5\% | 1.60 |
| Octo* | VLA | 46.6\% | 11.1\% | 1.6\% | 0.1\% | 0.0\% | 0.59 |
| pi0* | VLA | 91.6\% | 82.1\% | 71.7\% | 64.1\% | 53.8\% | 3.63 |
| UP-VLA | Prediction&VLA | 92.8\% | 86.5\% | 81.5\% | 76.9\% | 69.9\% | 4.08 |

---

Thank you again for your time and effort in reviewing our paper! We hope our explanations resolve your concerns and demonstrate the improved quality of our paper!

Best Regards,
The Authors
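The inference-speed arithmetic from Q1 above can be sketched as a quick back-of-envelope check (the ~80 ms latency and the chunk size of 10 are the rebuttal's rounded figures, not measured here):

```python
# Back-of-envelope control-frequency check for the Q1 numbers above.
# Latency and chunk size are the rebuttal's rounded figures.

def control_frequency(forward_latency_s: float, action_chunk: int = 1) -> float:
    """Actions emitted per second when one forward pass yields `action_chunk` actions."""
    return action_chunk / forward_latency_s

model_hz = control_frequency(0.080)        # ~12.5 Hz, quoted as ~13 Hz
chunked_hz = control_frequency(0.080, 10)  # ~125 Hz, quoted as ~130 Hz
```

This matches the claim that chunking multiplies the effective action frequency without changing the model's forward-pass speed.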
Summary: This paper introduces UP-VLA, a vision-language-action model that can understand, generate predicted future images, and plan actions in embodied environments. It devises a novel VLA training paradigm that unifies policy learning with visual prediction and multi-modal understanding. The results show that the use of future image prediction can significantly improve the precision and visual generalization of the policy.

## Update after rebuttal:
After reading the results from reviewers dmnE and FLHB, I was convinced and recognize the contribution "pretraining with mixed multi-modal understanding and future prediction objectives helps the VLA models". So I will change the score to "Weakly accept".

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: Yes, the experiments are well designed with good analyses.
Supplementary Material: Yes. All parts.
Relation To Broader Scientific Literature: This paper contributes to the direction of VLA foundation model training, which is a fundamental and important area. The proposed training strategy and model design are valuable for that community.
Essential References Not Discussed: All essential references are discussed.
Other Strengths And Weaknesses: VLA is a big and important community, and the solution of this paper is reasonable. Experiments are sufficient and solid.
Other Comments Or Suggestions: I think this paper lies in a competitive area that cares highly about metric numbers. I think the baselines could be better and stronger, e.g.
openvla/octo/openvla-oft

pi0 series
- https://www.physicalintelligence.company/blog/pi0
- https://www.physicalintelligence.company/research/fast

VLA + reasoning
- https://embodied-cot.github.io/

I suggest the authors dig into some extra benefits of their investigation; otherwise they have to compare with the SOTA results. For example, the predicted images are pretty good and could support some downstream tasks?

Besides that, I think the framework in this paper is hard to train. The authors may have to carefully design the ablation study, which is not sufficient in this paper.

Questions For Authors: Refer to the above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We sincerely appreciate your time and efforts in reviewing our paper! Based on your review, we added a detailed discussion and additional experiments.

---

**Q1: About stronger baselines.**

ANS: Thank you for your suggestions! We conducted more experiments on the pi0, OpenVLA, and Octo models. Since these methods only conduct experiments in the real world, which are hard to replicate, we utilized their open-source code to fine-tune and test them on CALVIN ABC-D (with settings consistent with UP-VLA). The results on the ABC-D task are shown in the table below, where an asterisk (*) indicates the results we reproduced using the open-source code.

| Method | Type | Avg. Len ↑ |
| :------: | :------------: | :--------: |
| Openvla* | VLA | 1.60 |
| Octo* | VLA | 0.59 |
| pi0* | VLA | 3.63 |
| UP-VLA | Prediction&VLA | 4.08 |

For the pi0-fast method, it primarily improves training efficiency by preprocessing actions. In terms of performance, it is almost identical to pi0. Therefore, we only present the reproduced results of pi0 here.

---

**Q2: I suggest the author to dig some extra benefits of your investigate, otherwise you have to compare with the sota results. For example, the predicted image is pretty good and could enforce some downstream tasks?**

ANS: Thanks for your insightful suggestions. We provide discussions from the following aspects:

(1) Performance: As you noted, OpenVLA and pi0-series models leverage powerful VLMs to enhance semantic understanding, similar to our UP-VLA, which employs Show-o, a unified VLM. However, we observed that these methods are good at semantic understanding but lack action precision. The results in Table [Q1] highlight their limitations on the CALVIN benchmark. We notice that the primary failure mode of these methods in CALVIN stems from imprecise actions (e.g., the robot arm correctly approaches the target object based on the command but fails to grasp it).
In contrast, our approach mitigates these shortcomings by integrating visual prediction into the VLA model, leading to more precise actions and superior performance, as shown in Table 1 and Figure 6 of our paper.

(2) Additional benefits of our proposed unified VLA training paradigm: Previous VLA models did not fully exploit the temporal and visual information encoded in video datasets. Our approach introduces a scalable training paradigm that integrates MMU datasets, video datasets, and robot datasets. This allows the robot to benefit from the language comprehension capabilities of VLMs while also excelling in fine-grained manipulation tasks with complex dynamics. As you mentioned, we indeed found that **precise visual prediction can lead to precise actions**, which is a key factor in our method's effectiveness. Some of these precise predictions are visualized in Figure 7. As a result, UP-VLA demonstrates strong performance on both the CALVIN benchmark and real-world scenarios.

---

**Q3: I think the framework in this paper is hard to train.**

ANS: We completely agree that training with multiple objectives across multiple datasets can be more challenging than training with a single objective. Fortunately, several works in the generative modeling field have unified understanding and generation within a single model, such as Transfusion, Show-o, and others. These works provide a valuable foundation for our approach. In this paper, we build UP-VLA upon Show-o, and we have included all the source code in the supplementary material. Feel free to check it for more details on the training process!

---

**Q4: May have to carefully design the ablation study**

ANS: Yes! We have conducted comprehensive ablation studies by systematically removing different components from our original framework to verify the effectiveness of each training objective.
As reported in Table 3 of the original paper, eliminating multi-modal understanding pretraining, Bridge image-prediction pretraining, or image-prediction fine-tuning all result in **a clear performance drop**. Also, following Reviewer dmnE, we conducted an extra ablation study on multi-step prediction. Specifically, we vary the length of the action chunk with and without visual prediction to validate the effectiveness of our proposed visual prediction objective.

| Ablating Effectiveness of Visual Prediction | 1 | 4 | 7 | 10 |
| :-----------------------------------------: | :---: | :---: | :---: | :--: |
| w/o predict future image | 1.44 | 1.94 | 2.17 | 2.25 |
| w/ predict future image | 2.42 | 3.72 | 4.00 | 4.08 |

---

Thank you again for your time and effort in reviewing our work! We hope our clarification can solve all your concerns, and we are always ready to answer any further questions!

---

Rebuttal Comment 1.1:
Comment: I have read the authors' rebuttal and will keep my score.

---

Reply to Comment 1.1.1:
Comment: Dear Reviewer oZvA:

We sincerely appreciate the time and effort you have taken to review our rebuttal. In response to your insightful feedback, we have **carefully incorporated all the requested baseline** comparisons and **highlighted the novel training paradigm** of our approach. Could we politely ask whether any further concerns exist? We are always willing to address any of your further questions. If there are no additional concerns, we sincerely hope you might reconsider your score.

Thank you once again for your valuable time!

Best Regards,
The Authors
Summary: This paper presents UP-VLA, a unified vision-language-action (VLA) model trained with dual objectives: multimodal understanding and future prediction. Building on the foundation of the Show-o framework, UP-VLA significantly improves both in-domain performance and generalization to unseen scenarios across real-world and simulation tasks.

Claims And Evidence: The claim regarding the unified VLA paradigm is clear. However, I personally disagree with the authors' argument about "limiting their ability to capture detailed spatial information and understand physical dynamics." Firstly, many existing works have already explored similar ideas, including GR-1, which is cited by the authors. Secondly, there is substantial evidence [1] showing that even predicting videos does not necessarily lead to a comprehensive understanding of physical dynamics. Finally, in my opinion, the performance improvements are more likely attributable to forcing the model to predict future outcomes, which provides a prior for planning actions before execution. A more detailed discussion on this point is provided below.

[1] How Far is Video Generation from World Model: A Physical Law Perspective

Methods And Evaluation Criteria: The proposed method makes sense. It is an attempt to adapt a unified MLLM to VLA, and it also yields some interesting conclusions.

Theoretical Claims: I have checked the method, but since there is hardly any theoretical proof, I won't discuss it further here.

Experimental Designs Or Analyses: I think the experimental section is relatively comprehensive, but there are still some issues with missing comparisons.

1. The authors reproduced UP-VLA-RT-2* and phi-w/o-mmu*, but one is of the VLA type and the other of the Prediction type. Therefore, we should first determine whether the performance differences are related to the VLA backbone. To address this, results for both methods on the other types should be supplemented.

2.
In [1] and [2], they both reproduced VLA-RT-2 and found that simply allowing the model to predict multiple future steps could significantly improve the performance of the backbone model. So, is it possible to conclude that as long as the model is tasked with predicting future steps, its performance can skyrocket? A similar conclusion comes from OpenVLA-OFT [3], where simple multi-step reasoning alone brought significant performance improvements. Additionally, the experiments on "w/o prediction" in the authors' Table 3 also demonstrate that predicting the future has a substantial impact on performance improvement. Based on the above, is the core reason for the performance improvement of UP-VLA the ability of the VLA model to predict the future, regardless of the specific prediction method? (This point relates to the Claims and Evidence section.)

3. 3D-VLA is not a good baseline. First, its performance is not very strong to begin with. Additionally, its control mode requires reasoning over many steps at once, which prevents the model from dynamically adjusting its actions based on current observations. Furthermore, prediction errors from earlier steps in the point cloud can accumulate significantly. As a result, it is not an appropriate baseline to demonstrate the advantages of the proposed method.

4. My suggestion is for the authors to compare their method with Seer [4], VPP [5], and RoboVLMs [6]. Both [4] and [5] also focus on Prediction and achieve excellent performance, while [6] falls under the VLA category and also performs very well. From this perspective, I do not yet see a clear necessity for using UP-VLA, which is crucial for me to understand the contribution of this work.
[1] VLAS: Vision-Language-Action Model With Speech Instructions For Customized Robot Manipulation, ICLR 2025
[2] Accelerating Vision-Language-Action Model Integrated with Action Chunking via Parallel Decoding
[3] Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success
[4] Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation, ICLR 2025
[5] Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations
[6] Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models

Supplementary Material: The author has provided the code, and I just took a quick glance at it.

Relation To Broader Scientific Literature: I think combining VLA with a world model has potential, but based on the author's current explanation, I don't see it yet.

Essential References Not Discussed:
[0] How Far is Video Generation from World Model: A Physical Law Perspective
[1] VLAS: Vision-Language-Action Model With Speech Instructions For Customized Robot Manipulation, ICLR 2025
[2] Accelerating Vision-Language-Action Model Integrated with Action Chunking via Parallel Decoding
[3] Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success
[4] Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation, ICLR 2025
[5] Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations
[6] Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models

Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: Please refer to the experiments.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: We sincerely appreciate your time and efforts in reviewing our paper! Based on your review, we added a detailed discussion and additional experiments.

---

**Q1: Predicting videos does not necessarily lead to a comprehensive understanding of physical dynamics.**

ANS: Whether video models can truly understand physical dynamics remains an open research question, with arguments on both sides [1]. In our experiments, we found that training with a video prediction objective enables UP-VLA models to generate physically consistent future frames. Moreover, we think that better understanding can be reflected in better decision-making and control capabilities. As shown in Table [Q2], our ablation studies demonstrate that the video prediction objective consistently improves robot control performance in all settings. We hope this clarification can solve your concern!

[1] https://openai.com/index/video-generation-models-as-world-simulators

---

**Q2: Maybe performance improvements are more likely attributed to forcing the model to predict multiple future outcomes (future steps), as shown in VLAS, OpenVLA-OFT.**

ANS: Thank you for your insightful question! We agree that multi-step prediction can benefit action learning, which we adopted as the default in our experiments. However, beyond multi-step prediction, our primary goal was to show that **predicting future images** (RGB modality) also improves action learning by facilitating information transfer from video datasets (image modality) to action learning (action modality). To further illustrate the impact of multi-step action prediction and RGB image prediction, we conducted a detailed ablation study along these two axes. Our results show that increasing the number of predicted action steps and incorporating future image predictions both contribute to performance improvements. We hope this ablation further clarifies the effectiveness of UP-VLA, which integrates image-prediction objectives into the VLA model.
| Length of predicted actions | 1 | 4 | 7 | 10 |
| :-----------------------------------------: | :---: | :---: | :---: | :--: |
| w/o predict future image | 1.44 | 1.94 | 2.17 | 2.25 |
| w/ predict future image | 2.42 | 3.72 | 4.00 | 4.08 |

---

**Q3: The authors reproduced UP-VLA-RT-2 and phi-w/o-mmu, but one type is VLA and the other type is Prediction. Therefore, we should first determine whether the performance differences are related to the VLA backbone. To address this, results for both methods on the other types should be supplemented.**

ANS: Thank you for your comments. The Show-o model is fine-tuned from the Phi LLM on multimodal understanding datasets. Since we aimed to fully remove the influence of multimodal understanding tasks, we chose to start from the original Phi model in the phi-w/o-mmu ablation. Following your suggestion, we conducted a comprehensive comparison to better highlight the advantages of UP-VLA across different backbones. These experiments further demonstrate that both video prediction and multimodal understanding contribute to improved action learning.

| Method | Backbone | Avg. Len ↑ |
| :--------: | :------: | :--------: |
| VLA | Phi1.5 | 0.79 |
| Prediction | Phi1.5 | 3.13 |
| VLA | Show-o | 1.44 |
| Prediction | Show-o | 3.99 |

---

**Q4: Compare with Seer, VPP, and RoboVLMs. Both Seer and VPP also focus on Prediction and achieve excellent performance. RoboVLMs falls under the VLA category and also performs very well.**

ANS: Thank you for your insightful comments! We would like to point out that UP-VLA differs significantly in learning paradigm from these previous works. Seer and VPP leverage internet video datasets to aid robot learning, while RoboVLMs utilizes a pretrained VLM. Each of these methods relies on a single type of dataset, either video datasets or multimodal understanding datasets.
To the best of our knowledge, UP-VLA is the first model to leverage both image-prediction (video) and multimodal understanding (MMU) datasets for embodied decision-making. Furthermore, our ablation studies confirm that both types of data significantly contribute to the final performance.

We also note that Seer, VPP, RoboVLMs, and UP-VLA all achieve an average task completion length exceeding 4.0 on the CALVIN benchmark. However, these three works adopt distinct architectures from UP-VLA and are concurrent with UP-VLA. To the best of our knowledge, UP-VLA achieves the best performance on the CALVIN ABC benchmark among non-concurrent works.

---

We hope our clarifications address your concerns and demonstrate the improved quality of our paper! Please feel free to reach out with any further questions. Thank you again for your valuable time!

---

Rebuttal Comment 1.1:
Comment: I appreciate the additional experiments conducted by the authors, as training these models within a short period is indeed not an easy task. I believe the authors have understood my comments on the paper, and the added experiments align with my expectations: the method of predicting future goals essentially forces the VLA to engage in planning, thereby improving the success rate of task execution. I think the contribution of this paper lies in the novel application of a unified MLLM architecture to robotic tasks, which is undoubtedly significant. Since the authors have addressed my concerns, I am inclined to recommend acceptance of this paper, as it could provide valuable insights for the design of future VLA models. Of course, regarding the concerns about writing details, I hope the authors can address them before the final version is submitted.

---

Reply to Comment 1.1.1:
Comment: Dear Reviewer dmnE:

We are thrilled to receive your feedback! Your support for our work is truly appreciated.
We will incorporate all the new experiments into the final version and continue refining it to meet the highest standards!

Best Regards,
The Authors
Reaction Graph: Towards Reaction-Level Modeling for Chemical Reactions with 3D Structures
Accept (poster)
Summary: Main Contributions:

- Reaction Graph (RG) Representation: The authors introduce Reaction Graph (RG), a novel unified graph representation for chemical reactions that integrates both reactants and products into a cohesive framework. This representation incorporates 3D molecular structures, which are crucial for accurately modeling chemical reactions.
- Incorporation of Reaction Edges: RG includes reaction edges that connect atoms in reactants to their corresponding atoms in products based on atomic mapping. This allows the model to capture changes in molecular structures during reactions.
- 3D Structure Integration: The authors propose a method to incorporate 3D information into RG using bond lengths and angular edges, which implicitly convey bond angles. This approach is rotationally and translationally invariant.
- Empirical Results: The authors demonstrate the effectiveness of RG through extensive experiments on various tasks, including reaction condition prediction, yield prediction, and reaction classification. RG achieves state-of-the-art accuracy across multiple datasets.

Main Findings:

- The proposed Reaction Graph representation significantly outperforms existing methods in modeling chemical reactions.
- Incorporating 3D structural information and reaction edges enhances the model's ability to understand and predict reaction outcomes.
- The model achieves high accuracy in predicting reaction conditions, yields, and types, showcasing its potential for accelerating drug design and material science.

Main Algorithmic/Conceptual Ideas:

- Unified Graph Representation: RG integrates molecular graphs of reactants and products, capturing interatomic relationships pertinent to the reaction process.
- Reaction Edges: These edges enable the model to exchange information between reactants and products during the message-passing phase of GNNs.
- 3D Information: The use of bond lengths and angular edges provides a simple yet effective way to incorporate 3D molecular structures into the graph representation.
- Attention-based Aggregation: The model uses an attention mechanism combined with an LSTM to aggregate node features into a unified reaction feature vector.

Claims And Evidence: Claims:

- Effectiveness of Reaction Graph: The authors claim that Reaction Graph (RG) is more effective than existing methods in modeling chemical reactions.
- Incorporation of 3D Information: The authors assert that incorporating 3D structural information improves the model's performance.
- Reaction Edges: The introduction of reaction edges helps the model capture changes in molecular structures during reactions.

Evidence:

- The authors provide extensive experimental results on various tasks (reaction condition prediction, yield prediction, and reaction classification) across multiple datasets (USPTO-Condition, Pistachio-Condition, USPTO-Yield, USPTO-TPL, and Pistachio-Type).
- The results show that RG outperforms existing methods, achieving higher accuracy in predicting reaction conditions, yields, and types.
- Ablation studies demonstrate the effectiveness of incorporating 3D information and reaction edges.

Problems:

- The claims are generally well-supported by the experimental results. However, the authors could provide more detailed analysis of the impact of different components (e.g., reaction edges, 3D information) on specific types of reactions or datasets.

Methods And Evaluation Criteria: The evaluation criteria are standard and widely accepted in the field, making the results comparable to other studies. The datasets used are large-scale and diverse, providing a comprehensive assessment of the model's performance.

Theoretical Claims: The paper does not present any theoretical proofs.
The contributions are primarily algorithmic and empirical, focusing on the development and evaluation of the Reaction Graph representation.

Experimental Designs Or Analyses: Validity:

- The experimental designs are sound and follow standard practices in the field.
- The authors provide detailed descriptions of their methods and datasets, allowing for reproducibility.
- The results are consistent across different tasks and datasets, supporting the robustness of the proposed method.

Issues:

- The authors could provide more detailed error analysis to understand the failure cases and potential limitations of the model.
- The impact of different hyperparameters and training strategies could be further explored to provide a more comprehensive understanding of the model's behavior.

Supplementary Material: No, I didn't.

Relation To Broader Scientific Literature: I think that important and relevant chemical reaction prediction works have been discussed.

Essential References Not Discussed: To my knowledge, yes, they are.

Other Strengths And Weaknesses:

### Strengths
- **Originality:** The paper introduces Reaction Graph (RG), a novel unified graph representation for chemical reactions that integrates reactants, products, and reaction edges, along with 3D molecular structures.
- **Effectiveness:** RG achieves state-of-the-art performance across multiple tasks and datasets, demonstrating its superior ability to model chemical reactions.
- **Clarity:** The paper is well-structured, with clear methodology, extensive experiments, and detailed results, making it easy to follow and reproduce.

### Weaknesses
- **Data Quality:** The accuracy of 3D coordinates and reaction labels in the datasets may limit the model's performance and generalization.
- **Error Analysis:** A more detailed analysis of failure cases and error patterns could provide deeper insights into the model's limitations.
- **Computational Efficiency:** The model's inference time and computational requirements could be optimized for real-time applications and larger datasets.

Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3
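The reaction-edge idea summarized in this review can be illustrated with a small sketch (a hypothetical helper, not the paper's code): atom-map numbers shared between reactant and product atoms define the edges that connect the two molecular graphs.

```python
def reaction_edges(reactant_nodes, product_nodes):
    """Connect each mapped reactant atom to its counterpart product atom.

    reactant_nodes / product_nodes: dicts mapping an atom-map number
    (as produced by atom-atom mapping tools) to a node index in a
    hypothetical unified reaction graph. Returns (reactant, product)
    node-index pairs, one per atom present on both sides.
    """
    return [(r_idx, product_nodes[m])
            for m, r_idx in reactant_nodes.items()
            if m in product_nodes]

# Toy example: three mapped reactant atoms, two of which persist in the product.
edges = reaction_edges({1: 0, 2: 1, 3: 2}, {1: 3, 2: 4})
```

During message passing, features would then flow along these edges in addition to the ordinary bond edges, letting the model compare an atom's environment before and after the reaction.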
Rebuttal 1:
Rebuttal: Thank you for acknowledging that our method is **novel**, **crucial**, and **effective**. We also appreciate your valuable suggestions for improving this work.

## Ⅰ. Impact of Reac and 3D Info in RG

### 1. Impact on Reaction Types

**Tab 1: Impact of Reac and 3D info on 12 reaction types.**

|Reac|3D|0|1|2|3|4|5|6|7|8|9|10|11|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
| | |0.292|0.292|0.317|0.323|0.396|0.197|0.347|0.244|0.372|0.312|0.259|0.299|
|√| |0.292|0.306|0.322|0.33|0.403|0.216|0.359|0.246|0.391|0.327|0.277|0.319|
|√|√|0.309|0.318|0.327|0.333|0.424|0.229|0.368|0.264|0.391|0.326|0.298|0.338|

3D is more critical for Unknown reactions (type 0), as their mechanisms are diverse and reaction patterns are difficult to capture. For C–C bond formation (type 3) and Oxidation (type 8), the contribution of Reac info is greater.

### 2. Impact on Specific Datasets

**Tab 2: Impact of Reac and 3D info on different datasets.**

|Reac|3D|U-C|P-C|U-T|P-T|
|-|-|-|-|-|-|
| | |0.305|0.381|0.992|0.966|
|√| |0.313|0.385|0.998|0.986|
|√|√|0.325|0.392|0.999|0.987|

Both 3D and Reac info contribute to condition prediction. For reaction classification, which is directly related to the reaction change, Reac info is more effective.

## Ⅱ. Error Analysis

### 1. Condition

The overall error pattern is in Appendix Fig 18. **A**: US20110105766A1 and **B**: US20040204386A1 are representative failure cases.

- **Case A.** Correct: PdCl₂ (0.05%); Pred: Pd(PPh₃)₄ (1.82%). Conditions with low frequencies (0.05%) may be misclassified, but the prediction still exhibits similarity to the ground truth.
- **Case B.** Correct: CuBr₂ (<0.01%); Pred: None (86.81%). For extremely rare conditions (<0.01%), the model struggles and classifies them as None. Data augmentation or pretraining are promising solutions for future exploration.

### 2. Yield

Apart from low label quality, the non-smooth relation between structure and yield is the bottleneck of yield prediction. We find the top-5 closest reactions of **C** using RXNFP.
They have similar structures, yet the yield variance is large (0.5–0.94). This distinction hinders the model from correctly predicting **D** and **F**.

C: CCC(C)(C)N.CCN(C)C.Cl.[Cl-]>>CCC(C)(C)NCCN(C)C

**Tab 3: Correct and Pred yields of 5 similar reactions.**

|Reaction|Correct|Pred|
|-|-|-|
|C|0.60|0.50|
|D|0.50|0.68|
|E|0.64|0.58|
|F|0.94|0.64|
|G|0.91|0.83|

### 3. Reaction Classification

The overall error pattern is in Appendix Fig 21. **H** and **I** are representative failure cases.

- **Case H.** Correct: FGI; Pred: Unrecognized. C1=C[C@H]2[C@H]3C=C[C@H](C3)[C@H]2C1>>C1C=CCC=1. For rare types of reaction changes, the model may misclassify them as Unknown.
- **Case I.** Correct: Oxidations; Pred: FGI. Cl.N#CC1CCCC=1N.NO>>N#CC1CCCC=1NO. In this case, the annotated and predicted types are both acceptable, but the prediction seems more reasonable.

### 4. Other Tasks

Detailed analysis is in Sec H1 and H2.

## Ⅲ. Hyperparameter Selection and Training Strategy

### Hyperparameter Selection

Results show that the current setting is optimal, while the performance may scale with the hidden dim.

**Tab 4: Impact of hyperparameters.**

|Hid Dim|MPNN Iter|Pool Iter|T1|T15|
|-|-|-|-|-|
|200|3|2|0.325|0.518|
|50|3|2|0.307|0.505|
|100|3|2|0.315|0.512|
|200|2|2|0.320|0.520|
|200|4|2|0.317|0.514|
|200|3|1|0.319|0.514|
|200|3|3|0.318|0.514|

### Training Strategy

According to **Tab 5** and Appendix Fig. 19, our proposed two-stage training strategy effectively addresses the negative transfer problem in multi-task learning such as condition prediction.

**Tab 5: Impact of two-stage training strategy.**

|Method|T1|T3|T5|T10|T15|
|-|-|-|-|-|-|
|One-Stage|0.30|0.41|0.45|0.48|0.50|
|Two-Stage|0.33|0.43|0.47|0.51|0.52|

## Ⅳ. Data Quality

### Accuracy of 3D Coords

We control the 3D accuracy by adding Gaussian noise to the bond lengths. Results in **Tab 6** show that when the noise level is within a certain margin (5%), the performance decrease is minor (0.003).
**Tab 6: Impact of 3D accuracy on 1/8 USPTO-Condition.**

|Noise|0.0|0.05|0.1|0.2|0.4|
|-|-|-|-|-|-|
|Accuracy|0.177|0.174|0.170|0.164|0.152|

### Label Quality

Label density is an important indicator of label quality. We investigate its impact on Pistachio-Condition, where the model suffers from sparse condition annotations. Results in **Tab 7** show that label quality is more critical to model performance than 3D accuracy, and is the primary bottleneck.

**Tab 7: Impact of label density on Pistachio-Condition.**

|Label density|1/1|1/2|1/4|1/8|1/16|1/32|
|-|-|-|-|-|-|-|
|Accuracy|0.39|0.36|0.32|0.28|0.22|0.19|

## Ⅴ. Computational Time Analysis

As shown in Appendix F2, for most reactions, the construction and inference time of RG are **<50ms**, meeting the real-time requirement. The inference time on a large dataset (USPTO test set) is **<3min**, demonstrating the efficiency of RG. We are optimistic that, with the advancement of conformer prediction, the speed of RG construction will further improve.
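The noise ablation in Section Ⅳ can be sketched roughly as follows (an illustrative reconstruction, not the paper's code; multiplicative Gaussian noise on bond lengths is assumed):

```python
import random

def perturb_bond_lengths(bond_lengths, noise_level, rng=random):
    """Apply multiplicative Gaussian noise to a list of bond lengths.

    noise_level is the relative standard deviation, e.g. 0.05 perturbs
    each bond length by ~5% (one standard deviation).
    """
    return [b * (1.0 + rng.gauss(0.0, noise_level)) for b in bond_lengths]

random.seed(0)
noisy = perturb_bond_lengths([1.54] * 1000, 0.05)  # 1000 C-C-like bonds at 5% noise
```

Sweeping `noise_level` over the values in Tab 6 and re-evaluating the model at each level would reproduce the intended degradation curve.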
Summary: The paper introduces a new representation learning framework for chemical reactions. Specifically, the authors propose a graph neural network architecture that takes into account a) explicit inter-reactant/product interactions, and b) the three-dimensional structures of reactants and products. The framework is applied to several reaction-related tasks including reaction classification, yield prediction, and condition recommendation, and is shown to outperform SOTA methods on several benchmarks.

Claims And Evidence: The main claim of this work is that previous reaction representations suffer from two significant limitations: a) they do not adequately capture inter-reactant/product interactions, since they treat all molecules separately and then concatenate their features; b) they ignore three-dimensional structure information. These issues are addressed by the development of a 3D-aware reaction graph model. Experimental evidence (performance metrics on several benchmark tasks) clearly supports the claim of improved predictivity from including interaction and 3D information.

Methods And Evaluation Criteria: From my perspective, a number of model design choices remain unclear:

* Why does the model employ separate nodes for the atoms in the reactants and products instead of using a unified node, as would be expected given that these atoms are chemically identical? (similar to the idea of condensed reaction graphs)
* The treatment of non-reactant species (e.g. reagents, catalysts, or solvents) is not explained. Are they included in the graph, which would raise concerns about data leakage in the condition prediction task? Are they excluded from the graph, potentially omitting important information for reaction yield prediction?
* 3D structures of molecules are represented using internal coordinates (i.e. bond/edge lengths and bond angles) rather than the Cartesian coordinates commonly used in other works.
However, to fully represent the 3D structure of a molecule, bond length and bond angle information is insufficient; dihedral/torsional angles would also be required. While this is mentioned in the Supplementary Materials, it is unclear why these features were omitted from the model architecture.

Theoretical Claims: not applicable

Experimental Designs Or Analyses: Experimental evaluations are done on four main tasks: leaving group identification, reaction classification, reaction condition prediction, and yield prediction. All tasks are well-established in the literature (including datasets and train–test splits). The authors provide systematic benchmarks against SOTA methods for each of these tasks. The experimental design, including the evaluation metrics, is sound. For each task, the authors additionally conduct an ablation study to dissect the contributions of inter-reactant/product interactions and 3D information, respectively.

Supplementary Material: The Supplementary Materials are very well prepared. They include an extensive discussion of methodology, details on code and model checkpoints, and further elaboration on the techniques employed.

Relation To Broader Scientific Literature: As highlighted in the introduction, compared to molecule representations, reaction representations are somewhat underdeveloped. Therefore, the introduction of the 3D-aware model represents a relevant advancement. As discussed under `Methods and Evaluation Criteria`, some design choices remain unclear, but the improvements in predictivity are significant and noteworthy.

Essential References Not Discussed: The introduction of the paper presents GNNs for molecular property prediction as a relatively recent development. This is misleading, and some foundational works (Scarselli et al. 2009, Duvenaud et al. 2015) are omitted.

Other Strengths And Weaknesses: While the technical discussion appears sound, the discussion of chemical concepts could be improved notably.
Examples include:

* While the proposed architecture requires molecular 3D structures, the challenges associated with obtaining such structures are largely ignored. In reality, molecules do not have a single 3D structure, but a distribution of 3D structures which is dynamic and depends on the environment. This is inherently hard to capture – and the authors use only equilibrium structures for their model. These structures are generated using a cheap force field method rather than a more accurate, but computationally expensive, quantum chemical approach. A discussion of these simplifications and tradeoffs would be valuable.
* The introduction on the applications of AI in chemistry could be notably improved (for a starting point see e.g. a recent review by Cheng et al., Faraday Discuss. 2024). As an example, "analysis of retrosynthesis" and "streamlines synthetic pathways" effectively describe the same problem.
* The datasets used, particularly for reaction yield prediction, could be better described to emphasize the specific challenges associated with them. For example, the B–H and S–M datasets originate from focused combinatorial experimentation efforts (meaning high quality, low quantity, low diversity), whereas the USPTO datasets are derived from the patent literature (meaning lower quality, higher quantity, higher diversity).

Along the same lines, some statements are too simplistic in my opinion, and should be more nuanced:

* "study of interactions between molecules, particularly chemical reactions, has been overlooked"
* "The advantage of RG on LvG identification demonstrates its ability to understand the reaction mechanism." From a chemical perspective, identifying leaving groups based on the reaction equation is a relatively simple pattern recognition task, and should not be confounded with mechanistic understanding.
Other Comments Or Suggestions: Organization and clarity of the manuscript could be improved at certain points:

* The current presentation of experiments in Section 3 is somewhat confusing, as the discussion shuffles between focusing on model capabilities and specific tasks. As an example, Section 3.2 studies the effects of different 3D featurization techniques using the condition prediction task – which is then re-introduced in Section 3.3. A more logical organization would enhance readability of this section.
* Figure 1 would benefit from a clearer structure with better defined divisions and sub-headings.
* The labeling of functional groups in Figure 3 does not conform to standard chemical conventions. The term "carboxyl group" refers to the full COOH unit, and the labels "carboxyl-hydroxyl" and "carboxyl" are nonsensical from a chemical standpoint.
* In Figure 4 and the accompanying discussion, it is necessary to specify what type of timings are discussed (training time? inference time? time to generate a 3D structure in the first place?)

Questions For Authors:

* Given that GNNs are usually considered rather "data-hungry", what are the data requirements for training a predictive 2D / 3D model? Is it possible to train the model from scratch, even on the small (S–M, B–H) datasets?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for acknowledging that our method "**outperforms SOTA methods**" and that the "**results clearly support the claim**". We also appreciate your insightful suggestions for improving this paper.

## Ⅰ. Separate vs. Unified Node

RG separates the atoms in reactant and product into two nodes, which **preserves the uniqueness of molecular structures** and facilitates structural modeling. CGR unifies atoms into a single node, causing their structural features to become entangled during message passing. This may hinder structural understanding. Results in **Tab 1** show that RG outperforms CGR in condition and yield prediction, which rely on structural modeling.

**Tab 1: Comparison of RG (separate nodes) and CGR (unified nodes).**

Method|UC|PC|BH|SM|Test|Gram|SubGram|UT|PT
|-|-|-|-|-|-|-|-|-|-
CGR|0.20|0.26|0.94|0.85|0.74|0.13|0.20|0.99|0.98
RG|0.33|0.39|0.97|0.89|0.78|0.13|0.22|0.99|0.99

## Ⅱ. Non-reactant Species

For condition prediction, we exclude non-reactant species from the graph and take them as the prediction target, avoiding data leakage. For yield prediction, we keep them in the graph, as they have a significant impact on the yield.

## Ⅲ. Why not Cartesian Coords or Torsional Angles

Our work aims to predict reaction properties (e.g. conditions), which are **invariant to the specific conformations** of molecules. RG uses bond lengths and bond angles, which vary only slightly across different conformations and thus suit the task. Cartesian coords and torsion angles vary significantly between conformations, introducing redundant information and hindering model learning. Results in **Tab 2** support this point.

**Tab 2: Comparison of 3D structure representations.**

Method|T1|T5|T15
|-|-|-|-
Cartesian Coords|0.312|0.463|0.511
Torsion Angles|0.302|0.447|0.494
Ours|0.325|0.472|0.518

## Ⅳ. Tradeoffs in 3D Structure Calculation

The reason for using equilibrium structures is the same as in **Ⅲ**.
ETKDG+MMFF provides controllable errors (avg. <5% vs. DFT) and high efficiency (<20ms for 100 atoms). DFT is unaffordable (>10min for 6 atoms, with polynomial growth), as USPTO contains >680k samples and reactions with >300 atoms. Results in **Tab 3** show that controllable errors (5%) in 3D coords have a minor impact (0.003).

**Tab 3: Impact of 3D error on 1/8 USPTO-Condition.**

Error %|0|5|10|20|40
|-|-|-|-|-|-
Accuracy|0.177|0.174|0.170|0.164|0.152

## Ⅴ. Challenges of Datasets

- The key bottlenecks of **USPTO-Yield** lie in the low label quality and missing condition annotations [1]. Although large in scale, the distribution remains sparse due to the large variety of reaction types, making it hard to learn the non-smooth relation between reaction and yield.
- **B–H/S–M** come from HTE, which provides dense, high-quality labels but limited data (<10K samples, <50 molecules). Test sets have additives held out from the train set, which places high demands on the model's generalization at small data scale.

More stats and discussions can be found in Appendix Sec G.

## Ⅵ. Writing, Content Arrangement, and Figures

Your suggestions are very helpful for improving the manuscript's quality.

### Introduction Revision

- **AI4Chem:** In the field of chemistry [2], AI enables precise spectral analysis [...] and quantum chemical simulation [...], improves inverse design of molecular structure [...] and retrosynthesis planning [...].
- **GNN:** Among various representation methods, molecular graphs [3,4] have proven inherently advantageous for various chemical tasks [...].

### Figure Revision

https://huggingface.co/reactiongraph/Revision.

### Section Arrangement

Sec 3.2 focuses on exploring suitable 3D features, whereas Sec 3.3 focuses on specific tasks. But you made a well-reasoned point that these two sections are not logically parallel.
Therefore we rearrange them as follows:

- 3.1 The Roles of Reaction and 3D Information
  - 3.1.1 The Effect of Reaction Information
  - 3.1.2 The Effect of 3D Information
- 3.2 Reaction-related Tasks
  - 3.2.1 Reaction Condition Prediction
  - 3.2.2 Reaction Yield Prediction
  - 3.2.3 Reaction Classification

## Ⅶ. Data Requirement

Reaction info is effective across different data scales; 3D info requires larger amounts of data. Models trained from scratch on B–H and S–M show the advantage of reaction info (Sec 3.4), and we also conduct experiments on 1/8 of the USPTO dataset (see **Tab 4**).

**Tab 4: Results on 1/8 data scale.**

Method|U-C|U-T
|-|-|-|
MG|0.13|0.92
RG|0.17|0.97

We use different scaffolds on Pistachio to further explore the data requirements of 3D info. **Tab 5** shows that limited data scale hinders the model from learning 3D priors. Results in [5] lead to a similar conclusion.

**Tab 5: Results at different data scales.**

Scale|w 3D|w/o 3D
|-|-|-
1|0.392|0.385
1/2|0.355|0.350
1/8|0.277|0.272
1/32|0.186|0.191

[1] Prediction of chemical reaction yields using deep learning
[2-4] correspond to the papers in your comment.
[5] Uni-Mol: A Universal 3D Molecular Representation Learning Framework

---

Rebuttal Comment 1.1:

Comment: I thank the authors for their responses to my questions and concerns. I would like to clarify one aspect of miscommunication between my review and the authors' rebuttal: a complete 3D molecular conformation is defined either by a full set of Cartesian coordinates or by *internal coordinates* (i.e., the complete set of bond lengths, bond angles, and dihedral angles). I agree with the authors that, in general, variability across conformers increases in the following order: bond lengths < bond angles < dihedral angles. Given this reasoning, the authors' choice to apply a "cutoff" by excluding dihedral angle information from their model appears reasonable. I recommend explicitly including this justification in the manuscript.
Overall, I remain convinced by the paper's central idea and continue to lean towards acceptance. --- Reply to Comment 1.1.1: Comment: Dear Reviewer vEQY, Thank you for your time and effort in reviewing our work. We appreciate your thoughtful clarification regarding the use of internal coordinates. We are glad that our explanation helped clarify the rationale behind excluding dihedral angle information. The manuscript has been revised accordingly to explicitly justify this cutoff. Thank you once again for your insightful comments. Best regards.
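The internal-coordinate discussion above (bond lengths and bond angles retained as features, dihedral angles cut off) can be illustrated with a small plain-Python sketch. The atom positions below are invented for illustration and are not taken from the paper:

```python
import math

def bond_length(p_i, p_j):
    """Distance between two bonded atoms."""
    return math.dist(p_i, p_j)

def bond_angle(p_i, p_j, p_k):
    """Angle (degrees) at atom j between the bonds j-i and j-k."""
    u = [a - b for a, b in zip(p_i, p_j)]
    v = [a - b for a, b in zip(p_k, p_j)]
    cos_t = sum(a * b for a, b in zip(u, v)) / (math.hypot(*u) * math.hypot(*v))
    # Clamp for numerical safety before taking arccos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Illustrative water-like geometry (assumed coordinates, in Å).
o = (0.0, 0.0, 0.0)
h1 = (0.96, 0.0, 0.0)
h2 = (-0.24, 0.93, 0.0)

print(bond_length(o, h1))     # O-H bond length
print(bond_angle(h1, o, h2))  # H-O-H bond angle
```

A full internal-coordinate description would add dihedral angles on top of these two quantities, which is exactly the "cutoff" the reviewer refers to.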
Summary: This paper proposes a new graph representation for reaction-related tasks, named Reaction Graph (RG). Compared to the traditional molecule graph representation, RG introduces a new edge type, i.e., the reaction edge, which indicates the edges that have been changed during the reaction process. The experimental results show that the proposed RG is effective in various reaction tasks.

Claims And Evidence:

1. Unclear claim: "However, this method still separates reactions, which also causes loss of reaction information." How RXN Hypergraph separates reactions and what information is lost remains unclear.
2. In the experiments section, it is ambiguous which prediction model is used to compare the MG and RG representations. Evaluating a framework specifically designed for RG on both MG and RG representations may introduce significant bias, as the results could be influenced by the framework's unsuitability for the MG representation. For a fair comparison, the authors should evaluate a typically used framework on the two representations.
3. Similar to 2, how the 3D information is excluded for comparison is not stated. Will the results without 3D information introduce any bias due to the change of framework?

Methods And Evaluation Criteria: Yes.

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses: How the ablation experiments are conducted is unclear, which may cause some biased results. Does the framework stay consistent across different settings? Will the framework introduce bias to specific settings? Please also check the Claims And Evidence part.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: The proposed Reaction Graph representation can benefit a broad area of chemical reaction related tasks.

Essential References Not Discussed: All essential references are covered.

Other Strengths And Weaknesses:

### Strengths:

1. The proposed reaction graph is intuitive, and can naturally benefit chemical reaction related tasks.
2.
The authors conduct a variety of experiments to show the effectiveness of the proposed framework.
3. The experimental results demonstrate the superiority of the proposed framework.

### Weaknesses:

1. Some implementation details about the ablations are not clear. Please check Claims And Evidence.
2. The experiments mainly focus on demonstrating the effectiveness of RG, but the modules in the framework lack discussion, e.g., attention-based aggregation, edge embedding, and vertex-edge embedding.
3. The main focus of this paper is not clear. The authors propose RG and a framework, but how they benefit each other, why a framework was specifically designed for RG, and how effective each module is, are not clear. The authors should discuss each of them, rather than only demonstrating the effectiveness of RG via experiments with the overall framework.
4. Some contributions lack discussion, e.g., the framework design.

Other Comments Or Suggestions:

* A typo at Line 3330.
* Details about how $l_{ij}$ is calculated.
* Details about how 3D information is included.

Questions For Authors: Please see above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for acknowledging that this paper proposes a "**new graph representation**" and is "**effective in various reaction tasks**." We also appreciate your suggestions.

## Ⅰ. Clarification on Reaction Separation and Information Loss in RXN Hypergraph

**Separate Reactions**: In the RXN Hypergraph, reactants and products are represented as separate graphs, rather than integrating the entire reaction into a single connected graph.

**Information Loss**: This representation lacks atom mapping between reactants and products, leading to the loss of key information on bond breaking, bond formation, and atomic reorganization.

**Experimental Verification**: We convert the standard non-connected RXN Hypergraph (w/o Reac) into a connected graph (w/ Reac). The improved performance confirms our claim (see **Tab 1**).

**Tab 1: Impact of reaction connectivity.**

| Method | U-T | P-T |
|-|-|-|
| w/o Reac | 0.954 | 0.911 |
| w/ Reac | 0.984 | 0.936 |

## Ⅱ. Results of MG and RG in a Typical Framework

We use the typically used RGCN implementation from the DGL official repo. As shown in **Tab 2**, RG outperforms MG on all the tasks.

**Tab 2: Results of RG and MG on RGCN.**

Method|1/8 U-C|BH 1-4|1/8 U-T
|-|-|-|-|
MG|0.13|0.75|0.92
RG|0.17|0.78|0.97

We also test MG on its own SOTA framework, UniMol. As shown in **Tab 3**, our method shows an advantage on reaction tasks.

**Tab 3: Results of RG and MG on their own SOTA frameworks.**

|Method|U-C|BH 1-4|U-T|
|-|-|-|-|
|UniMol|0.30|0.62|0.98|
|RG|0.33|0.78|0.99|

## Ⅲ. Details about the Ablation

**Method Overview:** The main contribution of this work is proposing Reaction Graph (RG), a new representation for chemical reactions, characterized by two key features:

- **Reaction info** is used to model reaction change, and is incorporated through reaction edges;
- **3D structures** are incorporated through bond lengths and angular edges.

**Ablation Settings:** All ablations are conducted on the **same framework**, with the **same hyperparameter settings**.
We remove reaction edges for the reaction-info ablation, and set all edge lengths to 0 for the 3D ablation.

## Ⅳ. Design and Effect of Each Module in the Framework

Our main contribution is **proposing a novel reaction graph representation**. Still, we have carefully considered the framework and conducted extensive experiments. For more details, please refer to Appendix Sec. H.7.

### 1. Edge Embedding

We employ a typical embedding layer for edge type, and RBF for edge length. RBF lifts edge lengths into high-dim vectors via smooth mappings, which helps capture variations in continuous data. It focuses more on local structural patterns, which is suitable for tasks involving local continuous spatial relationships (e.g. molecular structures). As shown in **Tab 4**, RBF outperforms linear projection and discretization.

**Tab 4: Comparison of different edge embedding methods.**

| Method | T1 | T5 | T15 |
|-|-|-|-|
| Linear Projection | 0.317 | 0.468 | 0.516 |
| Discretization | 0.310 | 0.457 | 0.505 |
| RBF | 0.325 | 0.472 | 0.518 |

### 2. Vertex-Edge Integration

We map each edge type to a unique learnable message function $f_e(v,e,l_e)$, which computes messages for node feature updates. Through this approach, the network can handle bond, reaction, and angular edges in a targeted manner, as their semantics differ. Compared with other vertex-edge integration methods, our method achieves better accuracy (see **Tab 5**).

**Tab 5: Comparison of different vertex-edge integration methods.**

| Method | T1 | T15 |
|-|-|-|
| PaiNN | 0.290 | 0.488 |
| DimeNet | 0.318 | 0.514 |
| Graph Transformer | 0.300 | 0.477 |
| EGAT | 0.304 | 0.502 |
| GINE | 0.299 | 0.487 |
| Ours | 0.325 | 0.518 |

### 3. Aggregation

To capture RG's global representation, we use an LSTM with attention aggregation. It helps the network focus on the sites related to the reaction change. Ablation results validate the effectiveness of each component (see **Tab 6**).
**Tab 6: Ablation results of the aggregation module.**

|Attention|LSTM|T1|T5|T15|
|-|-|-|-|-|
| | |0.3156|0.4642|0.5110|
|√| |0.3187|0.4670|0.5136|
|√|√|0.3246|0.4715|0.5181|

## Ⅴ. Calculation of Edge Length $l_{ij}$

First, we input each molecule into RDKit and calculate 3D atomic coords using ETKDG + MMFF. The 3D coords of atoms $i$ and $j$ are $p_i$ and $p_j$, respectively. The length of edge $e_{ij}$ is calculated as:

$$
l_{ij} = \begin{cases} 0, & \text{$e_{ij}$ is a reaction edge} \\ ||p_i - p_j||_2, & \text{otherwise} \end{cases}
$$

## Ⅵ. How 3D Info is Included

1. We calculate the edge length $l_{ij}$ (refer to Ⅴ).
2. We input $l_{ij}$ to an RBF kernel network to get the edge length embedding $\boldsymbol{l}_{ij}$.
3. We concat $\boldsymbol{l}_{ij}$ with the edge type embedding $\boldsymbol{e}_{ij}$.
4. Edge length and type features are combined with vertex features through vertex-edge integration, thereby incorporating 3D info into RG.

## Ⅶ. Typo Correction

The typo in line 3330 has been corrected, and the text has been re-proofread. Thank you for your reminder.

---

Rebuttal Comment 1.1:

Comment: Thanks for the detailed reply and comprehensive experiments. Most of my concerns are addressed. I hope the authors can revise the main text accordingly. I will increase my score to weak accept.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer S7nT,

Thank you for acknowledging the comprehensiveness of our experiments. We appreciate your feedback and will ensure that the revisions to the main text address your concerns thoroughly. We are grateful for your decision to increase your score to a weak accept.

Best regards.
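The piecewise edge-length rule (Ⅴ) and the RBF lifting of edge lengths (Ⅵ) described in the rebuttal above can be sketched in plain Python. This is an illustrative reconstruction, not the authors' implementation: the coordinates, RBF centers, and `gamma` below are assumptions.

```python
import math

def edge_length(p_i, p_j, is_reaction_edge):
    """Piecewise edge length: 0 for reaction edges, else Euclidean distance."""
    if is_reaction_edge:
        return 0.0
    return math.dist(p_i, p_j)

def rbf_embed(length, centers, gamma=10.0):
    """Lift a scalar edge length into a smooth high-dim vector via Gaussian RBFs."""
    return [math.exp(-gamma * (length - c) ** 2) for c in centers]

# Illustrative 3D coordinates (assumed, not from the paper): a C-O bond of ~1.43 Å.
p_c = (0.0, 0.0, 0.0)
p_o = (1.43, 0.0, 0.0)

l_bond = edge_length(p_c, p_o, is_reaction_edge=False)  # Euclidean distance
l_reac = edge_length(p_c, p_o, is_reaction_edge=True)   # reaction edge -> 0.0

# A coarse grid of RBF centers over typical bond-length ranges (assumption).
centers = [0.5 + 0.25 * k for k in range(12)]           # 0.5 Å ... 3.25 Å
emb = rbf_embed(l_bond, centers)                        # 12-dim edge-length feature
```

In the rebuttal's pipeline, `emb` would then be concatenated with a learned edge-type embedding before vertex-edge integration; in the paper itself, the coordinates come from RDKit's ETKDG + MMFF rather than being hand-specified.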
Summary: This paper introduces the Reaction Graph to model a chemical reaction as a graph and capture the molecular transformations during the reaction. The reaction edge connects nodes representing the same atom in the reactants and products based on atomic mapping relationships.

Main Results:

1. A condition prediction model on USPTO to show the advantages of the Reaction Graph (RG), demonstrating that their model can focus on different parts of molecules, especially reaction centers.
2. The proposed model shows superiority on the Leaving Group (LvG) identification, Reaction Condition Prediction, and Reaction Yield Prediction tasks.

Main algorithmic ideas:

1. The paper proposes a Reaction Graph based on atomic mapping relationships, whose reaction edges connect nodes representing the same atom in the reactants and products.
2. Integrating 3D molecular information into reaction modeling.

Claims And Evidence:

1. The experimental results lack comparison with the results of newer papers, which makes it difficult to evaluate the contribution.

[1] Teasing out missing reactions in genome-scale metabolic networks through hypergraph learning.
[2] Self-supervised contrastive molecular representation learning with a chemical synthesis knowledge graph.
[3] Bridging the gap between chemical reaction pretraining and conditional molecule generation with a unified model

Methods And Evaluation Criteria:

1. The proposed method, which models relationships between reactants and products, is pertinent to chemical reaction characterization. It achieves significant results across six datasets, including tasks like reaction center prediction. However, the evaluation lacks comparison with recent studies.
2. It mainly focuses on enhancing the learning of edge information in 3D graph representations. However, it does not deeply consider molecular equivariance, which is popular in 3D molecular learning. While this approach works, it is quite common in graph network learning.
There is limited innovation in addressing more complex properties of molecular structures.

Theoretical Claims: The theoretical claims presented in the article are straightforward and do not present any issues.

Experimental Designs Or Analyses:

1. The method's evaluation lacks comparison with recent research, making it difficult to assess its advantages.
2. It remains unclear whether the entire reaction graph is used as input for calculating 3D coordinates.
3. While the model outperforms baseline methods significantly (0.30 vs 0.21) even without reaction edges and 3D structures, there is insufficient ablation analysis to explain which components contribute to this performance. More detailed ablation studies are needed.
4. The impact of using 3D information in other frameworks, such as D-MPNN on MG graphs, has not been explored. Investigating this could provide additional insights into the utility of 3D information across different models.

Supplementary Material: I have reviewed all the content in the supplementary material.

Relation To Broader Scientific Literature:

1. This paper innovates by constructing reaction graphs to model chemical reactions, diverging from traditional hypergraph-based approaches, validated in extensive experiments.
2. Some studies enhance chemical representations through knowledge graphs, integrating broad chemical information for improved performance, unlike the proposed reaction graph method.
3. Incorporating 3D information into the reaction graph framework is uncommon yet adds potential accuracy; however, it raises questions about necessity and cost-effectiveness due to high generation costs.

Essential References Not Discussed: The experimental results lack comparison with the results of newer papers, which makes it difficult to evaluate the contribution.

[1] Teasing out missing reactions in genome-scale metabolic networks through hypergraph learning.
[2] Self-supervised contrastive molecular representation learning with a chemical synthesis knowledge graph.
[3] Bridging the gap between chemical reaction pretraining and conditional molecule generation with a unified model

A paper that considers 3D transition state structures in chemical reactions is also worth including in the discussion:

[4] Accurate transition state generation with an object-aware equivariant elementary reaction diffusion model

Other Strengths And Weaknesses: None

Other Comments Or Suggestions: None

Questions For Authors:

1. It mainly focuses on enhancing the learning of edge information in 3D graph representations. However, it does not deeply consider molecular equivariance, which is popular in 3D molecular learning. While this approach works, it is quite common in graph network learning. There is limited innovation in addressing more complex properties of molecular structures.
2. The method's evaluation lacks comparison with recent research, making it difficult to assess its advantages.
3. It remains unclear whether the entire reaction graph is used as input for calculating 3D coordinates.
4. While the model outperforms baseline methods significantly (0.30 vs 0.21) even without reaction edges and 3D structures, there is insufficient ablation analysis to explain which components contribute to this performance. More detailed ablation studies are needed.
5. The impact of using 3D information in other frameworks, such as D-MPNN on MG graphs, has not been explored. Investigating this could provide additional insights into the utility of 3D information across different models.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for acknowledging that our method "**shows superiority on various tasks**." We also appreciate your suggestions.

## Ⅰ. Comparison with Recent Studies

Our method **outperforms** the recent methods ReaKE [2] and UniRXN [3] (see **Tab 1**).

**Tab 1: Comparison with recent baselines. [\*] are reported in [2-3].**

|Method|U-C|B-H|S-M|Test|U-T|
|-|-|-|-|-|-|
|ReaKE [2]|0.23|0.89|0.76|0.76*|0.95|
|UniRXN [3]|0.20|0.94|0.85|0.58|0.92*|
|Ours|0.33|0.97|0.89|0.78|0.99|

CHESHIRE [1] may **NOT** be directly related to our paper. It focuses on filling missing edges in a genome-scale metabolic graph, while we focus on predicting chemical reaction properties.

## Ⅱ. Innovation in Using 3D

Previous methods typically use bond lengths and bond angles. However, their differing physical meanings and numerical distributions may cause feature mismatches. To address this, we innovatively introduce angular edges, which use edge lengths to implicitly represent angles, achieving a unified representation.

## Ⅲ. Why not Equivariant NNs?

**Equivariant Networks (EqvNs)** are suitable for tasks that depend on specific conformations, such as energy prediction. **Invariant Networks (InvNs)** are suitable for tasks that rely solely on the type of molecule rather than specific conformations, such as predicting reaction conditions and yields. This paper focuses on predicting reaction properties, which are invariant to specific conformations, requiring an InvN. Results in **Tab 2** validate the advantage of InvNs in reaction modeling. Hence, our RG uses invariant features.

**Tab 2: Comparison of EqvN and InvN.**

|Method|T1|T5|T15|
|-|-|-|-|
|EqvN|0.29|0.44|0.49|
|InvN|0.33|0.47|0.52|

## Ⅳ. Calculation of 3D Coords

The entire RG is **NOT** directly used as input for calculating 3D coords. Coords of each molecule are calculated individually.
Specifically, we input each molecule in the reaction into RDKit, and then use ETKDG + MMFF to calculate 3D atomic coords.

## Ⅴ. Further Ablation Analysis

We conduct additional ablations on model components and training strategies. Our backbone, equipped with **CRM-H** and trained with a **two-stage strategy**, achieves a top-1 accuracy of 0.3, surpassing the baseline's 0.2. Incorporating RG further boosts the accuracy to 0.33. Details are in Appendix Sec. H.4.3.

**Tab 3: Ablation results.**

|CRM-H|2 Stage|RG|T1|T5|T10|
|-|-|-|-|-|-|
| | | |0.25|0.32|0.39|
|√| | |0.30|0.44|0.47|
|√|√| |0.31|0.45|0.49|
|√|√|√|0.33|0.47|0.51|

## Ⅵ. Using 3D Info in Other Frameworks

The results in **Tab 4** show that introducing 3D info into other frameworks leads to accuracy improvements.

**Tab 4: Using 3D info in other frameworks.**

|Method|3D|T1|T5|T15|
|-|-|-|-|-|
|D-MPNN| |0.1977|0.3341|0.3924|
| |√|0.2030|0.3410|0.3971|
|RHG| |0.2127|0.3447|0.3927|
| |√|0.2149|0.3464|0.3949|

## Ⅶ. The Necessity of 3D Info

Several methods have explored incorporating 3D info into molecular modeling. Uni-Mol [5] uses large-scale 3D positions, GraphMVP [6] uses bond lengths, and GEM [7] uses bond angles. Inspired by these works, we explore 3D in reaction modeling, which improves accuracy (see **Tab 5**). Although generating 3D conformations is currently costly, many studies [8–9] are actively addressing this issue. We are optimistic about the potential of incorporating 3D into the RG framework.

**Tab 5: The impact of 3D info.**

|3D|T1|T3|T5|T10|T15|
|-|-|-|-|-|-|
| |0.313|0.425|0.461|0.496|0.509|
|√|0.325|0.434|0.472|0.506|0.518|

## Ⅷ. Discussion of Transition State (TS) Structures

1. TS methods are inspiring to us. However, as discussed in Ⅲ, TS structures are tied to specific 3D conformations, whereas reaction properties like conditions are independent of specific conformations. Therefore, their focus in 3D modeling differs.
2.
We explore TS generation backbones for predicting reaction properties. Our RG achieves better performance (see **Tab 6**).
3. Methods for calculating TS, such as DFT, are very time-consuming (~10min for 10 atoms [10], with polynomial growth) and unaffordable on datasets like USPTO, which includes 680k reactions with up to 347 atoms. Hence, we didn't use TS structures as inputs for the network.

**Tab 6: Comparison with TS generation backbones.**

|Method|T1|T5|T15|
|-|-|-|-|
|OARD [4]|0.13|0.20|0.31|
|EquiReact [11]|0.12|0.24|0.30|
|Ours|0.18|0.31|0.37|

[1]-[4] correspond to those in the comments.
[5] Uni-Mol: A Universal 3D Molecular Representation Learning Framework
[6] Pre-training Molecular Graph Representation with 3D Geometry
[7] Geometry-enhanced molecular representation learning for property prediction
[8] Predicting molecular conformation via dynamic graph score matching
[9] Torsional diffusion for molecular conformer generation
[10] Fast and Automatic Estimation of Transition State Structures Using Tight Binding Quantum Chemical Calculations
[11] 3DReact: Geometric deep learning for chemical reactions

---

Rebuttal Comment 1.1:

Comment: Thanks for your detailed response and thorough experiments. In light of these results, I've updated my score to 3. I believe incorporating some of the above comments into the revised manuscript will also improve the paper, and I hope you do so.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer DHMj,

Thank you very much for your time and effort in reviewing our submission. The suggested revisions have been incorporated into the main text to further improve the quality of the paper. We sincerely appreciate your feedback and your decision to raise the score.

Best regards.
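The invariance argument in Ⅲ above (features like pairwise distances, unlike Cartesian coordinates, do not change when a conformation is rigidly rotated or translated) can be checked numerically with a minimal plain-Python sketch. The three atom positions are illustrative assumptions, not data from the paper:

```python
import math

def rotate_z(p, theta):
    """Rotate a 3D point about the z-axis by angle theta (radians)."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

def pairwise_distances(pts):
    """All pairwise distances: the rotation/translation-invariant features."""
    return [math.dist(pts[i], pts[j])
            for i in range(len(pts)) for j in range(i + 1, len(pts))]

# Illustrative conformation: three atoms of a bent molecule (assumed coords).
atoms = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.5, 0.9, 0.0)]

# The same conformation after an arbitrary rigid rotation + translation.
moved = [tuple(v + t for v, t in zip(rotate_z(p, 1.1), (2.0, -3.0, 0.5)))
         for p in atoms]

orig_l, moved_l = pairwise_distances(atoms), pairwise_distances(moved)

# Cartesian coordinates changed, but every pairwise distance is preserved.
assert all(abs(a - b) < 1e-9 for a, b in zip(orig_l, moved_l))
```

This is why a network fed only edge lengths is invariant by construction, whereas one fed raw Cartesian coordinates must learn (or be architecturally constrained to) that invariance.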
Regress, Don't Guess: A Regression-like Loss on Number Tokens for Language Models
Accept (poster)
Summary: The authors start with a clear objective in mind: rather than relying on strategies to patch number-token behaviour, it is better to treat numbers as a special class and impose a new loss that strongly penalizes the model when its prediction is numerically far off. The authors therefore introduce a new loss, called NTL, to improve model performance on arithmetic tasks. NTL is designed to be model-agnostic and can be added without overhead.

Claims And Evidence: The claims are pretty rigorous. The authors provide extensive experiments across different datasets and show that NTL outperforms the standard CE loss and additional baselines on numerical tasks. Another critical point shown by the authors is that NTL does not degrade performance on text-only tasks, which is a strong point in favor of the authors' claim that it is easy to plug and play.

Methods And Evaluation Criteria: The evaluation criteria (accuracy, MAE, R²) are appropriate for the tasks being evaluated, and the authors provide a thorough comparison with baseline methods.

Theoretical Claims: No theoretical claims have been made by the authors; for future work I strongly suggest investigating whether theoretical guarantees can be established for this new loss.

Experimental Designs Or Analyses: The experimental design makes sense, with a good ablation study to validate the proposed approach. The authors compare NTL with multiple baselines on different tasks and model sizes.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The paper fits within the literature on improving the numerical abilities of language models. The authors cite relevant prior work, like Regression Transformer, CoT, and verifiers. I suggest also adding symbolic reasoning methods or program-aided language models, so that NTL can be compared with these techniques as well.

Essential References Not Discussed: Although this comes from a different area, https://arxiv.org/pdf/2402.01082 provides a new loss to treat numerical tokens.
Similarly, https://arxiv.org/abs/2410.03569 shows how to use this loss on very hard mathematical problems. Other Strengths And Weaknesses: The paper is particularly original and significant. Before integrating into any large-scale pretraining, it would be beneficial if the authors could train a larger model (say 7B or 20B params) to be more conclusive on the pretraining part. Other Comments Or Suggestions: N/A Questions For Authors: How does NTL perform on even larger models (say 7B or 20B)? It is relatively unclear how the model performs on floating-point numbers. Can you add a breakdown of where the model fails (maybe by numerical range or specific types of arithmetic operations) to give more intuition about where NTL can be improved in the future? Code Of Conduct: Affirmed. Overall Recommendation: 4
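For orientation, the loss idea summarized in this review can be sketched as follows. This is an illustrative toy implementation assuming digit-level tokenization (number tokens 0–9): the MSE-style variant penalizes the squared distance between the label value and the expectation of the predicted digit distribution, and the Wasserstein-style variant uses the Wasserstein-1 distance between the predicted distribution and the one-hot target. Function names and shapes are assumptions, not the authors' exact code.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def ntl_mse(digit_logits, label_digit):
    """MSE-style variant: squared error between the label value and the
    expected value of the predicted distribution over digits 0-9."""
    p = softmax(digit_logits)
    expected = np.dot(p, np.arange(10))
    return (expected - label_digit) ** 2

def ntl_was(digit_logits, label_digit):
    """Wasserstein-style variant: W1 distance between the predicted digit
    distribution and the one-hot target; for a sorted 1-D support this
    equals the sum of absolute CDF differences."""
    p = softmax(digit_logits)
    target = np.zeros(10)
    target[label_digit] = 1.0
    return np.abs(np.cumsum(p) - np.cumsum(target)).sum()

# Unlike cross entropy, both variants penalize a far-off prediction
# (mass on 0, label 9) more than a near miss (mass on 8, label 9):
near = np.full(10, -10.0)
near[8] = 10.0
far = np.full(10, -10.0)
far[0] = 10.0
assert ntl_mse(near, 9) < ntl_mse(far, 9)
assert ntl_was(near, 9) < ntl_was(far, 9)
```

Under standard CE, both wrong predictions would incur (almost) the same loss; the distance-aware penalty is exactly the property this review highlights.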
Rebuttal 1: Rebuttal: We appreciate your constructive and positive review of our work! (1) **Larger models**: We completely agree that evaluating NTL on larger models beyond 3B would provide further evidence regarding its scalability. We aim to do this in future work; however, at present, our computational resources are limited to a standard academic setting, and running experiments on models that don't fit on single GPUs is beyond our capacity for this rebuttal. However, please be aware of the additional experiments on integer multiplication with multiple decoder-only architectures, namely GPT2 (up to 1.5B) and Granite (up to 3B), that confirm the previous results from our paper (see response to R1 aka p8jQ for details). (2) **Floats**: Regarding your comment on unclear performance on floating-point numbers: First, note our existing experiment on a proper regression task (rJokes dataset, Table 4 and also Table 5) where labels are in $[0, 11]$. In this experiment NTL **matches** the performance of a regression head, whereas CE performs substantially worse. Secondly, we added an experiment on a real-world task from chemistry, where the labels are floats with $\sim$ 2 digits precision, and NTL outperformed CE (see response to R3 / bnfs). Third, note that DeepMind's Mathematics dataset (see Table 1) contains almost 5M samples (18.7% of all 25M training samples) where labels are floats and not integers. This is particularly pronounced in the extrapolation test (28%), where we see a substantial improvement with NTL over CE. Together, these findings make us confident that the benefits of NTL extend well to floats. (3) **Error analysis:** One practical issue for models that tokenize numbers in multiple digits is that some tokens have large numerical values which disproportionately affect the loss (even if the logit is low), especially if the number tokens are not regularly spaced. We recommend enforcing digit-level tokenization, as this ensures NTL is well-behaved. 
As a more conceptual mitigation strategy, we note that NTL is not limited to using Euclidean distance between numbers. We will update Eq (4) in the paper to reflect this and emphasize that distances can be defined in a fully flexible manner. For example, one can squash the distances so that, for a label 0, predicting 9 is not exactly 9x worse than predicting 1 but only 2x. We have done an experiment confirming that this still works better than CE, though slightly worse than vanilla NTL, on multiplying integers with up to 6 digits with GPT2-Large, as measured by Mean Absolute Percentage Error (MAPE):

| Loss | MAPE |
| --- | --- |
| CE | 0.502% |
| NTL-Squash-2 | 0.491% |
| NTL | 0.485% |

For further details on this experiment see our response "decoder-only" to R1/p8jQ. Moreover, note that this transformation does not need to adhere to the mathematical definition of a distance; the user could provide any pairwise distance matrix between number tokens. This allows handling even exotic cases like modular arithmetic (for details see comment from R2 aka fuzE). We will clarify this in the final manuscript. Additionally, we conducted a detailed error analysis on the GSM8K dataset to examine predictions for numbers ending with specific digits (0–9), comparing CE and NTL. The error histograms (see [last_digit_vs_distance_histogram.png](https://anonymous.4open.science/r/number-token-loss-5137/resources/last_digit_vs_distance_histogram.png)) reveal a consistent pattern across all digit groups: NTL error distributions are narrower and concentrated around zero, confirming improved numerical reasoning and lower systematic biases compared to CE. We further investigated errors specifically at digit boundaries (e.g., numbers ending in 0 or 9) on the GSM8K dataset. The table below breaks down model errors at specific digit boundaries and highlights how often predictions are overestimations, underestimations, and exact matches. 
| **Sample Type** | **Metric** | **CE** | **NTL** |
| --- | --- | --- | --- |
| **Ends with 0** | Overestimation Rate | 28.4% | 29.4% |
| | Underestimation Rate | 56.5% | 51.1% |
| | Exact Match Rate | 15% | 19.4% |
| **Power of 10** | Overestimation Rate | 54.4% | 49.1% |
| | Underestimation Rate | 24.6% | 24.6% |
| | Exact Match Rate | 21.1% | 26.3% |
| **Ends with 9** | Overestimation Rate | 33.3% | 38.8% |
| | Underestimation Rate | 46.7% | 32.6% |
| | Exact Match Rate | 20% | 28.6% |

The results show that NTL achieves a more balanced error distribution. The exact match accuracy is consistently higher, particularly so for samples ending with the 9 token, implying that NTL leads to better handling of those digit boundaries. (4) **Literature:** Thanks for sharing the literature on modular arithmetic and cryptography. We will highlight these works in the final version of the paper. --- Rebuttal Comment 1.1: Comment: Thanks for these answers and for the MAPE; this is exactly what I had in mind. I will keep the score!
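The flexible-distance idea from the rebuttal above (squashing so that, for label 0, predicting 9 is only 2x worse than predicting 1) can be sketched as a pairwise cost matrix. The power-law exponent used here is one illustrative choice with that 2x property; it is not necessarily the authors' NTL-Squash-2 definition.

```python
import numpy as np

# Squashed pairwise cost between digit tokens 0-9, chosen so that
# d(0, 9) = 2 * d(0, 1): solving 9**alpha == 2 gives the exponent.
alpha = np.log(2) / np.log(9)
digits = np.arange(10)
cost = np.abs(digits[:, None] - digits[None, :]) ** alpha

assert np.isclose(cost[0, 1], 1.0)       # near miss still costs 1
assert np.isclose(cost[0, 9], 2.0)       # far miss squashed from 9 to 2
assert np.all(np.diag(cost) == 0.0)      # exact match is free
```

Any such matrix (not necessarily a metric) can serve as the pairwise cost between number tokens in the generalized loss formulation the rebuttal describes.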
Summary: The paper introduces a Number Token Loss (NTL), a regression-like loss function designed to improve the numerical reasoning capabilities of Language Models (LMs). The core contribution is twofold: NTL-MSE: A loss function that computes the Mean Squared Error (MSE) between the numerical value of the label and the expected value of the predicted probability distribution over number tokens. NTL-WAS: A loss function based on the Wasserstein-1 distance, measuring the difference between predicted and true distributions of number tokens. The authors claim that NTL improves arithmetic tasks, can match regression heads, and scales well to large models while adding minimal computational overhead. Empirical results on mathematical reasoning tasks, regression benchmarks, and large-scale models (T5-3B on GSM8K) support these claims. Claims And Evidence: NTL improves arithmetic performance compared to standard CE loss. The results on the DeepMind Mathematics Dataset (Table 1) show that both NTL variants improve accuracy and reduce MAE. The improvement is most pronounced in interpolation tasks, while extrapolation benefits are more limited. On the rJokes dataset, NTL matches a regression head in RMSE and Pearson correlation (Table 4). NTL does not degrade performance on non-numerical tasks. The MultiRC dataset results (Table 7) confirm that adding NTL does not harm text generation. NTL scales well to large models. The authors apply NTL to a 3B parameter T5 model on GSM8K, improving accuracy from 13.5% to 17.7% (Table 8). The computational overhead is minimal (Figure 4). Methods And Evaluation Criteria: The experimental design is mostly rigorous but has a few gaps: Appropriate Choice of Tasks: The use of arithmetic benchmarks (DeepMind Mathematics, GSM8K) is well-justified. However, evaluating real-world numerical tasks (e.g., time series, finance, physics) would strengthen applicability. 
Evaluation Metrics: Accuracy, MAE, and R² are appropriate but could be complemented by a finer analysis of failure cases, such as systematic bias towards certain number magnitudes. Theoretical Claims: The theoretical justification for NTL is mostly sound: Cross-entropy's failure in numerical tasks is well-motivated. The issue that CE treats numbers as categorical rather than ordinal is a widely recognized problem. NTL-MSE's non-uniqueness issue (where a sum of probabilities can approximate the correct number without a peaked distribution) is correctly identified, and NTL-WAS addresses this. Wasserstein-1 distance as a better loss function is conceptually strong. However, the claim that NTL-WAS is always preferable to NTL-MSE is not fully substantiated, as it depends on task properties. Experimental Designs Or Analyses: Tokenization Choices: While single-digit tokenization improves performance, it is unclear whether models trained with multi-digit tokenization + NTL can reach comparable results. Supplementary Material: I reviewed the supplementary materials, including: Algorithmic details for NTL: Pseudo-code for NTL-MSE and NTL-WAS. Additional ablation studies: Testing different λ values and combining NTL with Gaussian Cross Entropy. Implementation details: Training settings, tokenization choices. The supplementary material is well-structured and clarifies implementation details. Relation To Broader Scientific Literature: The paper positions itself well within the literature on numeracy in LMs. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: Novel and simple loss function that is easy to integrate. Comprehensive experiments covering multiple architectures and tasks. Solid theoretical motivation for why CE is suboptimal for numerical tasks. Other Comments Or Suggestions: n/a Questions For Authors: For NTL-MSE, does the model tend to predict the highest-probability digit slightly below the true number? 
If the true number is 9, does the model predict 8 with a probability mass summing to 9? How does NTL-WAS affect model confidence in its numerical predictions? Does the Wasserstein loss lead to over-smoothing, making models less confident? In practice you need to extract all numbers in the training data and add your NTL; what happens to numbers like phone numbers or years? If you directly project the last hidden state to a number using a linear layer, will your method be better? This should be a baseline. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the constructive feedback. Below are detailed responses & new analyses: 1. **Real-world task from physics:** To demonstrate applicability to scientific data, we evaluate NTL on estimating molecular solubility — as studied by the Regression Transformer (RT). Each molecule is a SMILES string & the goal is to predict its solubility as a float. Following the RT paper we report performance over 3 random splits. Our results show, again, that training with NTL significantly improves performance over standard CE loss, reducing RMSE and increasing $R^2$:

| Loss | RMSE | $R^2$ |
|:--- |:--- |:--- |
| CE | $1.08 \pm 0.16$ | $0.72 \pm 0.07$ |
| NTL | $0.91 \pm 0.07$ | $0.80 \pm 0.03$ |

Our approach outperforms the baselines from the RT paper, including an RF (RMSE: 1.16) and XGBoost (RMSE: 1.05). Regarding time series: this can be modeled with NTL in principle, but time series transformers do not typically employ token-based generation like language models do. Since NTL is an improvement to cross entropy, we believe that studying its utility for time series forecasting goes beyond the scope of this paper. 2. "**The claim that NTL-WAS is always preferable to NTL-MSE is not fully substantiated, as it depends on task properties":** Indeed, from a theoretical perspective, NTL-WAS is always preferable since it has a unique minimum, as shown in Figure 3. However, there is one advantage of NTL-MSE: it explicitly computes a numerical value from the logits (via dot product) during loss calculation. This float can be combined with arbitrary loss functions (MSE, MAE, etc.), or even be used at inference time for decoding numbers as a dot product over all logits (rather than via beam search). In the future we aim to test an expression loss where all parts of a mathematical expression are transformed to numbers in this way & the loss penalizes inconsistencies in the equation. 
Thus, in some use cases NTL-MSE can be advantageous, although its theoretical properties are weaker. We will clarify this in the manuscript. 3. **Extrapolation benefits are more limited:** Note that the extrapolation performance with NTL **doubles** (from 5% to 10%) on the arithmetics dataset. In relative terms, the extrapolation benefits are stronger than the interpolation benefits (Table 2). To strengthen the point, we ran new experiments with 2 decoder-only models (GPT-2, IBM Granite) on a multiplication dataset. NTL again outperforms CE in both interpolation & extrapolation, but particularly on extrapolation tasks. For details, see response "Decoder-only" to R1/p8jQ. 4. **Finer analysis of error cases:** Please see the error analysis for R4/4obq. 5. **Effect of NTL-WAS on confidence**: We analyzed the logit distributions over all number tokens for simple arithmetic tasks throughout training: NTL increases the model's confidence in its numerical prediction, particularly in early training. **NTL-WAS** yields logits that are more sharply centered around the correct number compared to CE. See plot at: http://bit.ly/445tlAD 6. **Tendency to predict the highest-probability digit slightly below the true number for NTL-MSE?** Since NTL-MSE computes the dot product, or *weighted* sum, there is no reason why it should systematically underestimate the value. We also confirmed this empirically; see the analysis right above: logits are more centered for NTL, plot at: http://bit.ly/445tlAD 7. **Extraction of numbers in training data & handling of phone numbers:** In practice, we don't extract numbers explicitly but use an indicator vector from the tokenizer to index the logits of the number tokens to compute NTL. NTL is not suitable in cases where numerical proximity is irrelevant (e.g., phone numbers). However, such tokens could either be excluded from NTL or a squashing transformation could be applied (for details see "Error Analysis" to R4/4obq). 
Such cases are exceptions rather than the norm. Also note that we still minimize CE for all tokens; NTL is just an extra loss term. For many common number types—such as years, quantities, and measurements—numerical proximity matters and NTL is more meaningful. 8. **Unclear results for NTL+Multi-digit tokenization (MDT):** We already ran this experiment on a regression task (Table 5). Single-digit tokenization (SDT) and NTL yield complementary benefits: NTL+MDT is better than just MDT, but SDT+NTL is the best. 9. **Projection of last hidden state to linear layer:** We already ran this experiment, see Section 4.3 ("NTL can match regression models"). We even used a more complex regression head than just a simple linear layer (i.e., 2 linear layers with dropout). Nevertheless, NTL performs on par with the model trained with this regression head (see Table 4). This shows its competitiveness, considering that it is an LM that can still be used on non-numeric tasks. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response. I have a follow-up question: How practical is it to apply NTL in large-scale LLM pretraining, given that it requires knowing which tokens represent numbers? Specifically, how do you obtain the indicator vector over a large, noisy corpus? And if NTL still needs such a mechanism, isn't that functionally equivalent to selectively activating a regression head — meaning NTL isn't truly general-purpose without explicit number detection? In the molecular solubility task, why not use a dedicated regression head instead of NTL? --- Reply to Comment 1.1.1: Comment: Thank you for your follow-up questions! We appreciate the opportunity to clarify further: **(1) Practicality of NTL in large-scale pretraining:** Applying NTL is extremely practical for large-scale LLM pretraining. Access to the data corpus is not needed; we only need to run a single pass over the tokenizer instance before training. 
This offline step identifies which tokens have numerical values by trying to convert each token string to a float. The result is the indicator vector you mention. This step is basically instant; it takes 80ms for Llama3 or DeepSeekV3, which have vocabularies of ~128K tokens. Also, it only needs 10 lines of code (see anonymous code: https://bit.ly/4lkvpee). During training, no scanning is needed; we just index the logits at the number token positions. We also already showed in Figure 4 that there is no overall runtime overhead. **(2) NTL vs. selective regression head:** "Selectively activating a regression head" would require model modifications (extra parameters) and a custom decoding strategy (which head to use when?) and is thus not general-purpose. Instead, NTL has no notion of heads. It is fully general-purpose, e.g., it can be added as a plug-and-play loss function to any LM without requiring any model modifications (for a minimalistic training example with Llama see anonymous code: https://bit.ly/4iXLtBp). Please re-read the end of our introduction for details (page 2 bottom, line 089 following). Thus, calculating a regression-like loss directly on a token head is probably the most general-purpose approach, exactly because it requires zero modifications to model and tokenizer. The loss class only needs access to the tokenizer once — to extract the tokens corresponding to numbers. So, NTL is fully practical for large-scale LLM pretraining, whereas carrying an extra regression head is impractical. Note that our approach is entirely novel. Despite its simplicity, nobody has yet demonstrated how a regression-like loss can be calculated directly on the logits produced by a standard LM head. **(3) Molecular solubility regression example:** We did not include a regression head in this rebuttal experiment because we already show an extensive comparison of NTL against an explicit regression head in the main body of our paper (see Table 4 and Table 5). 
Those experiments already demonstrate that the performance of a regression head can be matched by NTL. Beyond this, as we said above, please note that regression heads are generally impractical in LMs and not commonly used. Taken together, our main competing method should be the standard cross entropy loss, because this is the loss used to train general-purpose LMs in practice. We would appreciate if you would reconsider your score in light of these clarifications.
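The offline pass over the tokenizer described in the reply above (try to parse each vocabulary entry as a float, record the indicator vector and values) can be sketched as below. This is a simplified illustration assuming a Hugging Face-style `get_vocab()` mapping of token strings to ids, not the authors' exact code; real vocabularies would also need handling of subword prefix markers such as the space glyph.

```python
def build_number_token_mask(vocab):
    """One offline pass over the vocabulary: a token participates in
    NTL iff its string parses as a float. Returns (mask, values), where
    mask[i] is True for number tokens and values[i] holds the numeric
    value (0.0 placeholder for non-number tokens, never indexed)."""
    size = max(vocab.values()) + 1
    mask = [False] * size
    values = [0.0] * size
    for token, idx in vocab.items():
        try:
            values[idx] = float(token.strip())
            mask[idx] = True
        except ValueError:
            pass
    return mask, values

# Toy vocabulary: digit/number tokens get flagged, word tokens do not.
vocab = {"the": 0, "7": 1, "cat": 2, "42": 3, "3.5": 4}
mask, values = build_number_token_mask(vocab)
assert mask == [False, True, False, True, True]
assert values[3] == 42.0
```

During training, the loss then only needs to index the logits wherever the mask is True, matching the "no scanning during training" claim.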
Summary: This study proposes Number Token Loss (NTL), a new regression-like loss to better handle numbers in texts. With the NTL loss, the predicted value for a number token is the average of the numbers weighted by their softmax probabilities from the logits, and the NTL loss measures the MSE between this weighted average and the ground truth. This approach is advantageous over prior methods in that a) it is model-agnostic with minimal assumptions about the vocabulary and b) it comes with minor computational overhead. The experiments demonstrate its superiority in accuracy over the standard cross-entropy based approach and several prior methods including Gaussian Cross Entropy, Regression Transformer, and xVal, particularly in classification tasks. **update after rebuttal.** I appreciate the authors' full elaborations and answers to my concerns, which greatly deepened my understanding of their work. The explanations and additional results address my concerns, and I believe that this work is worth being presented in the main conference. Claims And Evidence: The claims are generally well tested by experiments. The experiments cover classification and regression tasks, and they examine the variants of the NTL loss, the scaling characteristics with model size, and training speed. Methods And Evaluation Criteria: The NTL loss naturally implements the idea that incorrect predictions on the number tokens should be evaluated with some distance metric, so that the model can know whether its predictions were close or not. While this idea itself has been tested by several methods including Gaussian Cross Entropy and xVal, NTL is more advantageous in that it can work within the framework of classification. This allows users to introduce minimal modifications to their code. The experiments show comparisons of these methods with standard metrics including prediction accuracy, MAE, and R² scores. 
A caveat of the proposed method is that it assumes the Euclidean topology or its variants. Namely, it assumes that 2 and 3 are closer than 2 and 16. However, this is not always the case, and this should be carefully discussed. For example, when one performs modular arithmetic, or some operations that involve periodicity, these numbers are not equipped with any such distance (it violates the triangle inequality). Thus, in such a case, the injection of the Euclidean topology should be harmful to the learning, and the cross-entropy loss should perform better. I agree that NTL is useful, but the authors should make readers aware of this. Theoretical Claims: This study does not contain any theoretical claims. Experimental Designs Or Analyses: I checked the setup of experiments. Overall, the experiments investigate the proposed method and its variants well. There are several unclear points. > [l.190, right] this scheme (of xVal) is incompatible with T5. Appendix A.5 explains that xVal is not compatible with the T5 model, but then the authors should test it on other reasonable models. The experiments may underestimate the performance of xVal since they do not use xVal as it is. > [l.212, left] The effective number range of xVal is limited to [-5, 5] due to the combination of its scaling of the number token embedding and the layer-norm in the backbone. Across the experiments, the performance of xVal is very poor. This part needs more explanation. Further, Tables 1(a) and 1(b) do not show the accuracy of xVal. While xVal does not directly predict discrete tokens, the accuracy can be computed simply by nearest neighbor. > To take this into account, we scale our dataset for xVal with log(1+x) Do the results in Table 1 use this logarithm map, or not? I assume not, but then where can I find the results? Supplementary Material: I read part of it (e.g., Appendix A.5) to find if it answers my concerns and questions. 
Relation To Broader Scientific Literature: The key contributions of this study relate to the studies that address number tokens. This study addresses general documents and basic math tasks, but it also relates to the literature on hard mathematical problems as given in the next cell. Essential References Not Discussed: Relevant to the earlier comment on modular arithmetic, I encourage the authors to include the literature on modular arithmetic, or more broadly arithmetic/symbolic computation tasks. 1. Learning the greatest common divisor: explaining transformer predictions, Francois Charton, ICLR'24 2. Impact of Pretraining Term Frequencies on Few-Shot Numerical Reasoning, Yasaman Razeghi, Robert L. Logan IV, Matt Gardner, Sameer Singh, ACL'22 3. Learning to compute Gröbner bases, Hiroshi Kera, Yuki Ishihara, Yuta Kambe, Tristan Vaccon, Kazuhiro Yokoyama, NeurIPS'24 4. Learning Label Encodings for Deep Regression, Deval Shah & Tor M. Aamodt, ICLR'23 The first one discusses the dependency of successful GCD calculation on the base used to represent numbers. The second one provides insight into number embeddings from the frequency perspective. The third introduces a regression head to regress the coefficients in polynomials (similar to xVal) and observes that it performs particularly poorly on finite-field coefficients (i.e., the case involving modular arithmetic). The last one is not about Transformers, but regression by classification has been widely studied, so covering this literature makes this study richer. Other Strengths And Weaknesses: Most of the major strengths have been mentioned above, but I appreciate the proposed method for its compatibility with the standard classification-based pipeline. It is natural to introduce regression to provide richer supervisory signals for incorrect number token predictions, but simply introducing a regression head requires additional code modifications, such as in auto-regressive generation. 
The proposed method does not require this. Other Comments Or Suggestions: Nothing. The paper is well-written and easy to follow. Questions For Authors: Please refer to the other cells, but my major concerns are: a) Are baseline methods (e.g., xVal) implemented and compared in a fair manner? b) The applicability of the proposed method to modular arithmetic and other hard math tasks is not tested or discussed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable feedback! We appreciate your recognition of NTL's effectiveness and practicality, considering its compatibility with standard pipelines. To address your questions: **(1) Non-Euclidean topology:** Thanks a lot for raising this interesting point regarding transferability beyond Euclidean topology. We agree that our specific loss (Equation 4) makes this assumption, but note that the general NTL-WAS formulation (Equation 2) uses an arbitrary cost function $c$ defining the pairwise cost between tokens. This allows covering more general relationships between the numbers. We confirmed this experimentally. Instead of using Euclidean distances, we squashed the distances so that for a label 0, predicting 9 is not exactly 9x worse than predicting 1 but only 2x. Results showed performance better than CE but slightly worse than vanilla NTL for multiplying integers with up to 6 digits (GPT2, measured by MAPE; details in response **Error Analysis** for R4/4obq). Moreover, regarding modular arithmetic: Consider the modular addition task, where $y(n,m) = (n+m) \bmod p$ as described in "Grokking modular arithmetic" (Gromov, 2023). Here, a reasonable cost function could be $c(y_1, y_2) = \min(|y_1-y_2|,\ p-|y_1-y_2|)$, which captures the "wrapping" property / periodicity of numbers in modular arithmetic. **(2) Extended explanation for xVal:** We have carefully revisited the corresponding section and would like to clarify several points. 1. **xVal Incompatibility with T5** As noted in Appendix A.5 of our paper, xVal multiplies the [NUM] token embedding $X$ by the number value $a$. In T5, however, a per-sample pre-layer normalization is applied immediately after the embedding, which effectively removes the scaling by $a$. 
Specifically: $$ \frac{aX-E[aX]}{\sigma(aX)}=\frac{aX-aE(X)}{\sqrt{a^2E(X^2)-a^2E(X)^2}}=\frac{X-E(X)}{\sigma(X)} $$ Hence, under T5's architecture, **all numbers collapse to the same embedding**, making xVal incompatible with T5. Consequently, we do **not** use xVal with T5 in our experiments. Instead, we follow the **original** xVal encoder architecture (as in the xVal paper). 2. **Limited Dynamic Range of xVal & Log-scaling** Even in the original xVal architecture, the range of values xVal can process meaningfully is limited by the layer normalization that follows the positional embedding step. For further information on this, please see [xVal](https://openreview.net/pdf?id=KHDMZtoF4i), section 2, "Implicit normalization via layer-norm". Therefore, xVal normalizes each value to [-5,5] prior to training to mitigate this issue. We argue that this approach cannot be applied in practice, since in real texts the range of numbers is not known in advance, and thus a simple min-max normalisation to [-5,5] prior to training or inference is not really practical. Therefore we opted for a **simpler** approach in our experiments: applying a signed log(1+x) transformation to all numeric inputs. This avoids the overhead of parsing and re-scaling each number to [-5,5] prior to training, but it also has the drawback that **large numbers** are squashed in the embedding space, making fine-grained distinctions difficult for the model. Thus: Yes, the results in Table 1 **do** use this logarithmic mapping. We will clarify this explicitly in the table caption and experimental setup. 3. **Experimental results on dataset from xVal paper** For a direct comparison without any modifications to the xVal processing, we repeated the 3-digit multiplication experiment from the xVal paper. Again, our model beats xVal (see response **Simpler baseline** for R1/p8jq). 4. 
**Accuracy of xVal Predictions** We appreciate your request for explicit accuracy metrics and your idea of using the nearest neighbor for xVal predictions. When rounding xVal predictions to match the decimal places of our dataset, the resulting accuracies are quite low: - Interpolate: 0.052 - Extrapolate: 0.018 If we reduce precision by rounding to only two decimals, the accuracy improves somewhat, but remains modest: - Interpolate: 0.096 - Extrapolate: 0.058 These findings underscore that xVal struggles with larger numbers in particular. (3) **Hard math tasks:** While the paper indeed only covers tasks related to Euclidean topology, we respectfully disagree that it does not include "hard math tasks". E.g., the extrapolation task of DeepMind's math dataset is very difficult (→ maximal accuracy of 10%, see Table 3). It includes a large variety of tasks, covering not only arithmetic but also algebra, number conversions, and polynomials. Furthermore, the GSM8k dataset (see Table 8) is an accepted benchmark for reasoning, and even commercial LLMs cannot solve it perfectly so far. (4) **Literature:** Thanks for sharing additional references about arithmetic & number representation in LMs. We will add those to the final version of the manuscript. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' answers to the details. The explanations and additional results address my concerns, so I'll keep my positive score. --- Reply to Comment 1.1.1: Comment: Thanks for the response. We're glad that all your concerns have been addressed and look forward to updating the final paper, if it gets accepted. Since our paper has received very tight scores overall, we would truly appreciate it if your score fully reflected your support of our paper.
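The cyclic cost function the rebuttal proposes for modular arithmetic, $c(y_1, y_2) = \min(|y_1-y_2|,\ p-|y_1-y_2|)$, can be sketched as a pairwise cost matrix pluggable into the generalized loss formulation (an illustrative sketch, not the authors' code):

```python
import numpy as np

def cyclic_cost_matrix(p):
    """Pairwise cost c(y1, y2) = min(|y1 - y2|, p - |y1 - y2|) for
    residues mod p, so that 0 and p-1 count as neighbors on the ring,
    as suggested in the rebuttal for modular-arithmetic tasks."""
    y = np.arange(p)
    d = np.abs(y[:, None] - y[None, :])
    return np.minimum(d, p - d)

c = cyclic_cost_matrix(10)
assert c[0, 9] == 1          # 0 and 9 are adjacent mod 10
assert c[0, 5] == 5          # maximal distance mod 10
assert (c == c.T).all()      # symmetric, zero on the diagonal
```

Supplying such a matrix as the cost $c$ replaces the default Euclidean assumption while leaving the rest of the loss unchanged.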
Summary: This paper introduces Number Token Loss (NTL), a loss function designed to improve numerical reasoning in language models (LMs). The core idea is that standard cross-entropy (CE) loss treats numbers as categorical variables, disregarding numerical proximity. NTL aims to address this by incorporating numerical relationships into training, proposing two variants: 1) NTL-MSE: Uses Mean Squared Error (MSE) between predicted number token distributions and the ground truth. 2) NTL-WAS: Uses the Wasserstein-1 distance to align predicted and target number distributions. The authors argue that NTL is model-agnostic, computationally efficient, and improves numerical reasoning while maintaining performance on text-based tasks. Experiments on mathematical tasks and a real-world regression dataset suggest that LMs trained with NTL perform comparably to those with dedicated regression heads. ======= *update after rebuttal.* I appreciate the authors' additional experiments on a simple arithmetic baseline and on decoder-only models. Although I am not fully convinced by the results on the effectiveness of the claimed methods, I would like to increase my score to 3. Claims And Evidence: Their claim that NTL improves performance in arithmetic is clear and convincing. 1. NTL improves performance on arithmetic tasks - Supported by empirical results from the DeepMind Mathematics dataset, showing improved accuracy and lower error rates compared to CE. - NTL shows consistent improvements in both interpolation and extrapolation tests. - However, a straightforward baseline using simple arithmetic tasks is missing. 2. NTL does not degrade text generation performance - MultiRC dataset experiments confirm that NTL does not negatively impact text-based tasks. Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper does not present formal theoretical claims. Experimental Designs Or Analyses: Yes, I assessed the validity of the experiment designs. 
The experiments are well-designed, with strengths including:
- Model scale comparisons: Results on both small-scale (T5-small) and larger models (T5-3B) suggest scalability.
- Computational efficiency analysis: Confirms that NTL adds minimal overhead (<2%).

However, some gaps remain:
- Limited alternative baselines: While xVal and Regression Transformer are tested, other numeracy-enhancing strategies (e.g., GCE, continuous number embeddings) are not considered.
- No decoder-only architecture evaluation: The method is only tested on an encoder-decoder (T5) model. It is unclear whether NTL applies effectively to decoder-only models like GPT-style transformers.

Supplementary Material: Only roughly checked the experiment setting and their pseudo-code algorithm.

Relation To Broader Scientific Literature: The paper situates itself within work on:
- Numeric representations in LMs (Geva et al., 2020; Golkar et al., 2023)
- Mathematical reasoning and arithmetic in LMs (Cobbe et al., 2021; Dziri et al., 2024)
- Tokenization strategies for numbers (Born & Manica, 2023)

Essential References Not Discussed: Some relevant references are not discussed:

Number embeddings:
- Do NLP Models Know Numbers? Probing Numeracy in Embeddings (Wallace et al., 2019)
- Tokenization counts: the impact of tokenization on arithmetic in frontier LLMs (Singh and Strouse, 2024)

Sequential Predictions, Training Objective:
- Teaching Arithmetic to Small Transformers (Lee et al., 2023)
- The Pitfalls of Next-Token Prediction (Bachmann and Nagarajan, 2024)
- Length Generalization in Arithmetic Transformers (Jelassi et al., 2023)

Other Strengths And Weaknesses:

### Strengths
- The method is straightforward and easy to implement.
- The writing is clear and well-organized.
- The empirical results show consistent improvements across tasks.

### Weaknesses
- The method is only evaluated on an encoder-decoder model (T5). It is unclear if NTL applies to decoder-only architectures.
- The connection between NTL and learned numerical representations is not analyzed.
- The paper lacks a direct comparison with Gaussian label smoothing, which is another method for handling numeric proximity.

Other Comments Or Suggestions:
- Synthetic arithmetic tasks: Simple arithmetic tasks, such as those from Length Generalization in Arithmetic Transformers (Jelassi et al., 2023), could serve as a baseline. This would allow comparison of NTL's performance, sample efficiency, and training efficiency.
- Applicability to decoder-only models: The paper should test whether NTL works for autoregressive models like GPT.

Questions For Authors:
1. For digit-level tokenization, how does NTL handle edge cases where predictions differ by a single digit? For example, if the ground truth is "20" but the model predicts "19", does NTL penalize this more than it should because "9" and "0" are numerically distant tokens? Can the authors analyze error cases to determine whether this issue occurs frequently?
2. How does NTL affect learned number embeddings? A visualization of how number tokens cluster in embedding space (e.g., using PCA) would help illustrate whether NTL improves numerical representation learning. For example, it is known that training with CE loss leads the embeddings of number tokens to form a circular shape under PCA.
3. Can NTL be used in decoder-only architectures? Since the method is only tested on T5, it is unclear if it generalizes to decoder-only transformers like GPT.
4. Does NTL improve sample efficiency for learning arithmetic tasks? Does it allow models to learn numerical relationships with fewer training samples compared to CE?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
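To make the two NTL variants from this review's summary concrete, here is a minimal NumPy sketch of one plausible reading of them for a single digit position. The function names and the reduction of Wasserstein-1 to a point-mass distance are my assumptions, not the paper's code:

```python
import numpy as np

def ntl_mse(probs, target_digit):
    # NTL-MSE reading: squared error between the expected numeric
    # value of the predicted distribution over digit tokens 0..9
    # and the ground-truth digit at this position.
    values = np.arange(10)
    expected = float(np.dot(probs, values))
    return (expected - target_digit) ** 2

def ntl_was(probs, target_digit):
    # NTL-WAS reading: Wasserstein-1 distance between the predicted
    # distribution and a point mass on the target digit, which
    # reduces to the probability-weighted absolute distance.
    values = np.arange(10)
    return float(np.dot(probs, np.abs(values - target_digit)))

# A prediction whose mass sits near the target digit is penalized
# less than one with the same mass far away, unlike plain CE.
near = np.array([0, 0, 0.1, 0.8, 0.1, 0, 0, 0, 0, 0])
far = np.array([0.1, 0, 0, 0.8, 0, 0, 0, 0, 0, 0.1])
assert ntl_was(near, 3) < ntl_was(far, 3)
```

This is only meant to illustrate why such a loss respects numerical proximity; the paper's actual formulation may differ in normalization and in how number tokens are selected from the vocabulary.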
Rebuttal 1: Rebuttal: Thanks for the constructive feedback. We are delighted that you agree that NTL shows consistent improvements. To clarify your questions, we ran additional experiments with decoder-only models on "simpler arithmetic tasks" (multiplication):

(1) **Decoder-only**: NTL is simply a loss function and can be used with any model, also beyond Transformers. To prove our point, we constructed an arithmetic task, much akin to the length generalization task by Jelassi et al. (2023): multiplications of 2 numbers with $k$ and $l$ digits, with $k, l \in [1..5]$ in training and $k, l \in [1..6]$ in evaluation. The model was trained to answer the question: "What is the result of multiplying x with y?". We tested **2 different decoder-only models (GPT2 and IBM's Granite)**, both at varying sizes from 125M to 2B parameters. We report the mean absolute percentage error (MAPE) separately for unseen interpolation (up to $5 \times 5$ digits) and extrapolation ($m \times 6$ digits) samples:

| Model | Size | Interpolation CE | Interpolation NTL | Extrapolation CE | Extrapolation NTL |
| --- | --- | --- | --- | --- | --- |
| GPT2 Small | 125M | 0.55% | **0.49%** | 1.11% | **1.00%** |
| GPT2 Medium | 350M | 0.43% | **0.42%** | 0.82% | 0.82% |
| GPT2 Large | 774M | 0.39% | **0.37%** | 0.76% | **0.75%** |
| GPT2 XL | 1.5B | 0.43% | **0.40%** | 0.83% | **0.82%** |
| Granite 3.2 | 2B | 0.35% | **0.21%** | 0.60% | **0.42%** |
| Granite 3.1 | 1B | 0.28% | **0.15%** | 0.68% | **0.23%** |

NTL works consistently better for 2 different decoder-only architectures across all model sizes. Length generalization is also improved for NTL; see detailed results (digit by digit and extra metrics): http://bit.ly/41Sx8jp

We hope that this comprehensive experiment with 6 models & 2 backbones rules out your concern regarding generalizability to decoder-only models.
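For context on the MAPE numbers in the table above, the metric is presumably computed along these lines after parsing the model's generated digits back into integers (a sketch; the rebuttal does not specify the exact aggregation):

```python
def mape(preds, targets):
    # Mean absolute percentage error over parsed numeric outputs.
    # Assumes nonzero targets, which holds for products of
    # positive multi-digit numbers.
    assert len(preds) == len(targets) and targets
    return sum(abs(p - t) / abs(t) for p, t in zip(preds, targets)) / len(preds)

# Hypothetical example: two parsed predictions vs. true products.
print(f"{mape([98, 1210], [100, 1200]):.2%}")
```

Lower is better; a 0.5% MAPE on a 6-digit product corresponds to being off by a few thousand on average.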
(2) **Simpler baseline task & limited baselines:** We had already done experiments on the (simple) 3-digit multiplication problem from xVal. But since this dataset is too easy, we didn't show it in the paper, in favor of more challenging experiments. This comparison also covers number-encoding strategies other than xVal. We will add this table to the final Appendix.

| Encoding | R² Value |
| --- | --- |
| P10 | 0.9989 |
| P1000 | 0.9997 |
| B1999 | 0.9998 |
| FP15 | 0.7119 |
| xVal | 0.9986 |
| T5 CE | 0.999934 |
| T5 NTL-WAS | **0.999997** |
| T5 regression head | 0.999891 |

(3) **Comparison with Gaussian label smoothing:** Note that we already reported results for Gaussian label smoothing (GCE) in the submitted version (Table 3). We add a more direct comparison below. The results on the arithmetic dataset show that GCE combined with NTL yields the best performance, showing that both methods are complementary.

Interpolation test set
| GCE | σ | NTL | Accuracy | MAE | R² |
|------|----|------|----------|------|------|
| X | - | X | 0.34 | 2.15 | 0.95 |
| X | - | O | 0.43 | 0.91 | 0.99 |
| O | 0.5| X | 0.42 | 0.95 | **0.99** |
| O | 0.5| O | **0.48** | **0.76** | **0.99** |

Extrapolation test set
| GCE | σ | NTL | Accuracy | MAE | R² |
|------|----|------|----------|------|------|
| X | - | X | 0.05 | 61.92 | 0.61 |
| X | - | O | **0.10** | **58.18** | **0.68** |
| O | 0.5| X | **0.10** | 58.55 | 0.65 |
| O | 0.5| O | **0.10** | 66.97 | 0.59 |

(4) **Additional questions**
1. **Single-digit edge cases (19 vs 20 vs 21)**: Yes, with single-digit tokenization, NTL would give a higher loss for 19 than for 21, since 9 is far from the ground truth. However, cross-entropy has the same problem: the loss for 19 is higher than for 21, since two tokens are incorrect. However, NTL also works with multi-digit tokens, where such a case would not occur. Nevertheless, we looked into the frequency of such cases; please see the response to R4 aka 4obq for details.
2.
**Number embedding:** A PCA of number token embeddings of CE and NTL did not show significant differences; embeddings were indeed roughly circular in both cases. We will add the plots to the final appendix.
3. **Continuous number embeddings:** You mention that we lack such a comparison; in fact, xVal does use continuous number embeddings (see R2/fuzE for details).
4. **Sample efficiency:** As suggested, we analyze the sample efficiency for the decoder-only digit multiplication task described above. We show the evolution of the MAPE during training. As expected, the errors decrease much faster for NTL than for CE loss.

| MAPE @Epoch | 1 | 5 | 10 | 20 | 40 | 100 |
| --- | --- | --- | --- | --- | --- | --- |
| CE | 3.1% | 2.0% | 1.6% | 0.6% | 0.7% | 0.4% |
| NTL | 2.7% | 1.3% | 1.1% | 0.9% | 0.5% | 0.3% |

This corresponds to $3.43$ epochs on average to achieve a MAPE $<0.5\%$ with CE loss, compared to $2.55$ epochs with NTL. This difference is especially pronounced for more difficult multiplications; if interested, see the figure at: [bit.ly/4lc6wkY](https://bit.ly/4lc6wkY)
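The "3.43 vs. 2.55 epochs on average" figures above suggest a first-crossing computation roughly like the following sketch. How runs that never cross the threshold were handled is not stated in the rebuttal, so this treatment (simply skipping them) is an assumption:

```python
def epochs_to_threshold(curves, thr=0.005):
    # Average over runs of the first (1-based) epoch at which
    # the run's MAPE curve drops below `thr`. Runs that never
    # cross the threshold are ignored here (an assumption).
    firsts = []
    for curve in curves:
        for epoch, m in enumerate(curve, start=1):
            if m < thr:
                firsts.append(epoch)
                break
    return sum(firsts) / len(firsts)

# Two hypothetical runs: one crosses at epoch 3, one at epoch 4.
assert epochs_to_threshold([[0.02, 0.01, 0.004],
                            [0.03, 0.01, 0.006, 0.002]]) == 3.5
```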
Aligning with Logic: Measuring, Evaluating and Improving Logical Preference Consistency in Large Language Models
Accept (spotlight poster)
Summary: The paper introduces a novel framework for enhancing the logical consistency of large language models (LLMs). The authors propose a universal evaluation framework based on three key properties: transitivity, commutativity, and negation invariance. They also introduce REPAIR, a data refinement and augmentation method that improves logical consistency while maintaining alignment with human preferences. The study demonstrates significant improvements in model performance across various tasks, including abstractive summarization evaluation, document reranking, and temporal event ordering. The main findings include substantial enhancements in logical consistency metrics and better performance in logic-dependent algorithms.

Claims And Evidence: The claims made in the submission are well-supported by clear and convincing evidence. The authors provide extensive experimental results across multiple datasets (SummEval, NovelEval, CaTeRS) to demonstrate the effectiveness of their proposed framework. The metrics used (transitivity, commutativity, negation invariance) are clearly defined and appropriately applied to evaluate logical consistency. The results show significant improvements in logical consistency and performance, which validate the claims made in the paper.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem at hand. The authors use a combination of ranking estimation and data augmentation (REPAIR) to enhance logical consistency. The evaluation criteria, including transitivity, commutativity, and negation invariance, are relevant and effectively measure the logical coherence of LLMs. The use of benchmark datasets (SummEval, NovelEval, CaTeRS) makes sense for evaluating the proposed methods.

Theoretical Claims: The paper does not present any formal theoretical proofs. However, the conceptual framework and the proposed methods are logically sound.
The authors provide a clear rationale for the choice of logical consistency properties (transitivity, commutativity, negation invariance) and their application to LLMs. No specific theoretical claims require further validation.

Experimental Designs Or Analyses: The experimental designs and analyses are valid. The authors conduct extensive experiments across multiple datasets to evaluate the effectiveness of REPAIR. The results are presented in a clear and structured manner, with detailed comparisons to baseline methods. The experimental setup is appropriate for the tasks considered, and the results support the claims made in the paper.

Supplementary Material: There is no supplementary material except the appendix. I have read the appendix.

Relation To Broader Scientific Literature: The key contributions of the paper are well-aligned with the broader scientific literature on improving logical consistency in neural models. The paper builds on prior work in natural language processing (NLP) and machine learning (ML) that focuses on enhancing model reliability and coherence. Specifically, the paper relates to works such as:
- Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge (Minervini & Riedel, 2018): This work investigates the problem of generating adversarial examples to improve logical consistency in NLI models.
- Logic-Guided Data Augmentation and Regularization for Consistent Question Answering (Asai & Hajishirzi, 2020): This study integrates logical rules with neural models to improve consistency in QA tasks.

Essential References Not Discussed: To my knowledge, there is no other related work that is essential to understanding the key contributions of the paper but not currently cited/discussed in it.

Other Strengths And Weaknesses:

Strengths: The paper presents a novel and comprehensive framework for evaluating and improving logical consistency in LLMs.
The proposed REPAIR method is innovative and shows significant improvements in logical consistency and performance. The experimental results are robust and validate the effectiveness of the proposed methods.

Weaknesses: The paper could be strengthened by exploring more potential limitations of the proposed methods, such as their applicability to other types of models or tasks.

Other Comments Or Suggestions: The paper could benefit from additional visualizations or examples to illustrate the effectiveness of the REPAIR method. The authors should ensure that the code and refined datasets are publicly available to facilitate reproducibility.

Questions For Authors:
- Limitations and Future Work: What are the potential limitations of the proposed methods, and what are the key areas for future research? How do the authors plan to address these limitations?
- Comparison to Adversarial Training: How does REPAIR compare to adversarial training methods (e.g., Minervini & Riedel, 2018) in terms of improving logical consistency? Are there any specific advantages or disadvantages?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
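For concreteness, the transitivity property that this review's metrics quantify can be checked on a complete pairwise judgment matrix with a small sketch. This illustrates the property itself, not the paper's sampled-subgraph metric, and the matrix encoding is my own:

```python
from itertools import permutations

def transitivity_violations(pref):
    # pref[i][j] == 1 means the model judged item i preferred over
    # item j; a complete antisymmetric comparison matrix is assumed.
    # Counts ordered triples (i, j, k) with i > j and j > k but not
    # i > k, i.e. violations of transitivity.
    n = len(pref)
    bad = 0
    for i, j, k in permutations(range(n), 3):
        if pref[i][j] and pref[j][k] and not pref[i][k]:
            bad += 1
    return bad

# A 3-cycle A > B, B > C, C > A is the canonical violation.
cyc = [[0, 1, 0],
       [0, 0, 1],
       [1, 0, 0]]
assert transitivity_violations(cyc) == 3
```

Commutativity and negation invariance would instead be tested by re-querying the model with swapped item order or negated phrasing and comparing the answers, which cannot be read off a single matrix.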
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thoughtful feedback and acknowledgment. Below, we provide our clarifications and explanations.

---

- Potential Limitations of the proposed methods: In the appendix, we analyze the limitations of different rank estimation methods. Another potential limitation is that the proposed REPAIR method may not be effectively applied to preference data when there are only two responses per query (i.e., when the data is too sparse). In such cases, the rank estimation method cannot reliably distinguish between true preference and noisy preference.

---

- Future direction: A promising future direction is to investigate whether the improved logical consistency achieved by REPAIR can be generalized. For example, if we augment a general-purpose query-response preference dataset using REPAIR, would training on this augmented dataset improve logical consistency in other domains? To what extent? While we believe these are valuable research directions, they require more exhaustive investigations, which are beyond the scope of this work.

---

- Additional visualizations: We will make every effort to improve the clarity of the REPAIR-related figures to enhance their readability and comprehension.

---

- Code and dataset release: We confirm that we will release all code and datasets to facilitate reproducibility and further research.

---

- Comparison to adversarial training (Minervini & Riedel, 2018): We thank the reviewer for pointing out this relevant work. The primary differences between Minervini & Riedel (2018) and our REPAIR method are as follows:
- The previous work focuses solely on the Natural Language Inference (NLI) task, where a well-defined logical relationship of entailment exists. In contrast, our framework is more general and applicable to any rankable items, including entailment relations, causal relations, temporal orders, and preference orders.
- The prior work mitigates inconsistency by adding regularization to the loss function, whereas REPAIR focuses on a data augmentation approach to address inconsistency. As discussed in the Related Work section and Appendix D, other data-centric methods exist, but they are often restricted to NLI relations or factual knowledge consistency, which limits their applicability to broader domains.

---

We will revise the manuscript according to the reviewer's suggestions and hope we have addressed all the concerns.
Summary: This paper investigates whether large language models (LLMs) make pairwise preference judgments without logical contradictions, focusing on three consistency properties: transitivity (A > B > C implies A > C), commutativity (choices remain the same regardless of order/phrasing), and negation invariance (consistent answers when preferences are reversed or negated). The authors propose metrics for each property and show that state-of-the-art LLMs often violate them, despite aligning well with human judgments. To address this, they introduce REPAIR, which aggregates noisy preference data into a self-consistent global ranking and then adds logically implied or negated comparisons for fine-tuning. REPAIR raises consistency scores without jeopardising human alignment and boosts performance in downstream LLM-as-judge tasks.

Claims And Evidence: The paper makes several claims supported by empirical evidence to a certain extent:
- Logical preference consistency (as defined by transitivity, commutativity, and negation invariance) is crucial for reliable LLM decision-making, yet current models often exhibit inconsistencies.
- The proposed consistency metrics serve as indicators of model robustness and alignment, showing strong correlations with human preference agreement rates.
- The REPAIR method effectively improves an LLM's logical consistency without compromising alignment with human preferences.
- Models with higher logical consistency perform better in downstream tasks that require iterative judgments, such as ranking algorithms.

However, the paper overstates its contributions by describing the evaluation framework as "universal". In reality, the study is specific to pairwise preference-ranking scenarios, and its applicability to broader LLM tasks remains unverified. Also, the experiments do not sufficiently support the claims (see one critical aspect in Experimental Designs Or Analyses).
Methods And Evaluation Criteria: The methodology introduced in this paper is mostly appropriate for the stated problem. The authors identify transitivity, commutativity, and negation invariance as key measurable aspects of logical consistency in pairwise judgments, and they formalise metrics for each. These choices are sensible: transitivity directly addresses the avoidance of preference cycles, commutativity tests that the order or wording of input does not flip the outcome, and negation invariance ensures a model handles logically equivalent inquiries consistently. However, one point that remains arguable is the use of the win-loss rate in its REPAIR algorithm:
- Win-loss treats all preferences equally, whereas in real-world data, some preferences may be stronger or more certain than others. Methods that incorporate confidence scores or margins of preference may produce more reliable rankings.
- If a model produces inconsistent rankings (e.g., due to randomness or bias), win-loss does not provide an effective way to smooth or account for contradictions.
- Win-loss does not estimate a likelihood of preference correctness but rather a raw win ratio, which lacks formal statistical grounding.

Theoretical Claims: This work is largely empirical. One nuanced theoretical claim is its choice of sampling sub-graphs for the transitivity check.

Experimental Designs Or Analyses: A critical issue in the experimental design is that the authors conduct their logical consistency evaluation on three datasets (SummEval, NovelEval, CaTeRS) but then demonstrate the effectiveness of their REPAIR method on a different dataset (Summarize From Feedback). This raises several concerns and questions:
- Did the fine-tuned model show improved logical consistency when re-evaluated on SummEval, NovelEval, and CaTeRS?
- If so, why were those results not reported?
- If not, does this suggest that REPAIR's improvements are dataset-specific or do not generalise across tasks?
Supplementary Material: Not provided.

Relation To Broader Scientific Literature: As illustrated, this is largely within the scope of preference ranking in LLMs.

Essential References Not Discussed: This submission largely builds upon [1], which is cited. However, it does not sufficiently discuss how it improves over that prior work, leaving it unclear whether its contributions are meaningful advancements or merely incremental refinements.

[1] Liu, et al. "Aligning with human judgement: The role of pairwise preference in large language model evaluators." COLM (2024).

Other Strengths And Weaknesses: n/a

Other Comments Or Suggestions:
- There appears to be a contradiction in Figure 6's right-most subfigure, where the preference matrix entry $(X_2, X_4)$ is labeled "A" (indicating $X_2 > X_4$), but the side prompt states $X_2 < X_4$ (indicating the opposite relationship).

Questions For Authors: My main questions and concerns have been discussed above.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thoughtful feedback. Below, we provide our clarifications and explanations.

---

- Overstatement on "universal framework": We agree that our work focuses on the logical preference consistency of pairwise preference-ranking scenarios. However, we would like to clarify that **by "universal framework" we mean that our proposed method applies across all domains with multiple rankable items**, unlike previous works that primarily focus on tasks such as NLI and factual knowledge consistency. If the reviewer still finds "universal framework" to be an overstatement, we are happy to revise or remove the term accordingly.

---

- Concerns about the win-loss rate rank estimation method: We acknowledge the potential limitations of using the win-loss rate as a rank estimation method. However, we would like to refer to our **ablation study in Appendix N** (referenced in the main paper, Line 326), where we provide a **detailed analysis of the advantages and disadvantages of two alternative rank estimation methods, Elo rating and the Bradley-Terry model**. Additionally, we compare REPAIR's performance across different rank estimation methods. We believe this ablation study sufficiently addresses the concerns raised. However, if further clarification is needed, we would be happy to expand on this discussion.

---

- Concerns about experimental design: The use of different datasets for the REPAIR method (Summary from Feedback and MS MARCO, Appendix L) and for the quantification method (SummEval, NovelEval, and CaTeRS) is intentional. The latter datasets are designed as evaluation benchmarks and are too small to be split for training. For details on dataset sizes, please refer to Appendix B.

---

- Concerns about Generalization: We believe the reviewer is questioning whether the learned consistency can generalize across different domains.
We would like to clarify that **we do not claim that the REPAIR method improves consistency across domains**. While we agree that cross-domain generalization is an important future direction, it is beyond the scope of this paper. Systematically justifying such a claim is challenging due to the multiple factors influencing generalization performance, such as the specific preference learning method (e.g., DPO or PPO) and the domain gap between training and target tasks. For example, training on summaries with feedback may improve SummEval but not necessarily document reranking tasks. If the reviewer is concerned about potential overstatement, we are open to revising our claim to specify "improving in-domain logical consistency."

---

- Comparison to the previous work PairS [1]: We would like to clarify that **our work and PairS address completely different research problems and are not directly comparable**. PairS focuses on ranking estimation via search-based methods, whereas our paper aims to **quantify and improve the logical preference consistency of LLMs**. The only connection to PairS appears in Section 5, where we use it to illustrate **how logical preference consistency impacts real-world logic-dependent algorithms**.

[1] Liu, et al. "Aligning with human judgement: The role of pairwise preference in large language model evaluators." COLM (2024).

---

- Error in Figure 6: We thank the reviewer for spotting this typo. We will revise it accordingly.

---

We appreciate the reviewer's feedback and hope our clarifications address their concerns. We are open to revising our claims and further refining our discussion if needed.
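For readers comparing the rank-estimation options debated in this exchange, the criticized win-loss-rate estimator amounts to something like the following sketch (my own illustration of the idea, not the paper's implementation):

```python
from collections import defaultdict

def win_loss_ranking(comparisons):
    # `comparisons` is a list of (winner, loser) pairwise judgments;
    # items are ranked by raw win ratio. Unlike Elo or Bradley-Terry,
    # this ignores judgment strength and opponent difficulty.
    wins, games = defaultdict(int), defaultdict(int)
    for w, l in comparisons:
        wins[w] += 1
        games[w] += 1
        games[l] += 1
    rate = {x: wins[x] / games[x] for x in games}
    return sorted(rate, key=rate.get, reverse=True)

# Noisy data: A beats B twice, B beats C twice, C beats A once.
judgments = [("A", "B"), ("A", "B"), ("B", "C"), ("B", "C"), ("C", "A")]
assert win_loss_ranking(judgments) == ["A", "B", "C"]
```

Note that the cyclic judgment C > A is simply outvoted here; a Bradley-Terry fit would instead down-weight it probabilistically, which is the statistical grounding the reviewer asks about.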
Summary: This paper introduces a universal framework for evaluating logical preference consistency in LLMs, focusing on three properties: transitivity, commutativity, and negation invariance. The authors propose quantitative metrics for these properties, conduct comprehensive empirical analyses across state-of-the-art LLMs, and introduce REPAIR, a method for refining and augmenting data to improve logical consistency. Experimental results indicate that enhanced consistency correlates with better model reliability and improved performance in logic-dependent tasks.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: N/A

Experimental Designs Or Analyses: Yes

Supplementary Material: No

Relation To Broader Scientific Literature: The paper improves reliability of LLMs by addressing logical consistency.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

Strengths:
1. Clearly defined formulations and rigorous metrics for evaluating logical consistency.
2. Extensive experimentation across diverse LLMs and multiple tasks validates the utility of proposed metrics and methods.
3. REPAIR method effectively improves consistency without sacrificing alignment with human preferences.
4. Solid empirical analysis showing strong correlations between proposed consistency metrics and model reliability.

Weaknesses:
1. Evaluation is limited to datasets with relatively well-structured logical tasks; extending evaluation to more complex or noisy real-world applications could strengthen claims of universality.
2. The observed negative effects of Chain-of-Thought prompting on logical consistency are intriguing but insufficiently explored; deeper analysis or theoretical insight would be valuable.

Other Comments Or Suggestions:
1. The paper lacks a detailed ablation study on components of the REPAIR method.
2. Clarify more explicitly how human judgments were utilized in the data augmentation step.
3.
It would be useful to clarify the computational overhead introduced by the REPAIR method.

Questions For Authors:
1. Have you explored the effectiveness of the proposed method on larger-scale and less structured datasets (e.g., open-ended reasoning tasks)?
2. How sensitive is your consistency metric to variations in annotation quality?
3. Could you discuss potential trade-offs in consistency versus expressivity or creativity of the model outputs?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thoughtful feedback and acknowledgment. Below, we provide our clarifications and explanations.

- Extending the Experiment and Method to Less Structured Datasets: We focus on analyzing logical preference consistency, and by '**universal** framework' we mean that our proposed method applies across **all domains with multiple rankable items**. As stated in Line 218, our dataset selection is guided by **subjectiveness** considerations: SummEval (evaluating summaries) represents a more subjective task, whereas CaTeRS (comparing temporal and causal order) is a more objective one. We believe covering this spectrum of subjectivity ensures broad applicability to real-world tasks. However, if the term "universal framework" seems overstated, we are open to refining it to "improve in-domain consistency."

---

- CoT Analysis: Thank you for raising this point. Our findings show that evaluation with **CoT reasoning enhances accuracy but does not necessarily improve consistency**. While we recognize that systematically analyzing CoT's impact on LLM judgments is a valuable research direction, conducting an exhaustive set of experiments is beyond the scope of this paper. Our primary focus is on **quantifying consistency, assessing its impact, and proposing methods for improvement**.

---

- REPAIR Method: Ablation Study on Components: We appreciate the reviewer's suggestion. The REPAIR method consists of two key steps:
1) **Rank estimation** from noisy and sparse human annotations.
2) **Preference augmentation** based on the estimated rankings.

In Appendices M and N, we conduct two ablation studies:
- **Evaluating different rank estimation methods.**
- **Assessing the performance impact of using the same amount of augmented data.**

To the best of our knowledge, these cover all potential ablation studies relevant to the REPAIR method.

---

- REPAIR Method: Use of Human Judgments in Data Augmentation and Computational Overhead.
As illustrated in Figure 6, **human judgment annotations are solely used in the first step of REPAIR, where a ranking is estimated from the noisy human annotations**. We will improve the visualization and explanation to make this clearer. Regarding computational overhead, since REPAIR is a data augmentation method, its **additional computational cost is directly proportional to the size of the augmented dataset**. Table 2 (# data, Avg Comp/Inst) details the amount of data generated by REPAIR and its computational implications.

---

- Sensitivity of Consistency Metric to Annotation Quality: Our consistency metrics (transitivity, commutativity, and negation invariance) **do not rely on human annotations**. Instead, they are derived solely from the model's own judgments, making them independent of annotation quality.

---

- Trade-offs Between Consistency and Creativity in Model Outputs: This is an excellent question. We believe that consistency and creativity are distinct dimensions of LLM behavior, each desirable in different applications. For instance, when simulating human behavior, perfect logical consistency may not be necessary for all queries. Therefore, **we do not advocate for universal consistency improvements across all domains. Instead, we propose that when logical consistency is a critical factor for an LLM's intended functionality**, e.g., when it is used as a logical operator in a logic-dependent high-level algorithm, our method provides a systematic way to quantify and enhance this characteristic.

---
Summary: The main topics of this paper are (1) introducing consistency metrics for several logic/preference orders of LLMs, (2) evaluation of a large number of models and datasets, and (3) impact on downstream applications. Specifically, the metrics measure transitivity, commutativity, and negation invariance in LLMs based on in-context learning, as shown in Figures in the Appendix, such as Figure 7. The proposed data augmentation scheme, REPAIR, also enhances the consistency behavior.

**Post rebuttal** I checked the paper again and read reviews from other reviewers. One reviewer had a different evaluation (1) than others (4/3/3). In short, two reasons for rejection were (1) related to the citation of one paper and (2) essentially, asking for more experiments. I wanted to clarify how serious these issues are, but I couldn't find substantial evidence supporting such arguments. Therefore, I will maintain my original score.

Claims And Evidence: The main claim is that given LLM preference orders under some relations, the proposed method measures the transitivity, commutativity, and negation consistency over the pairs of the statement. The formulation is convincing. The evaluation of LLMs shows the scores from various models over different datasets. The outcome in Table 1 is also convincing. Some interesting hypotheses in Section 3.3, such as transitivity serving as a proxy for self-agreement and commutativity correlating with human preferences, are also provided with the evidence in Figures 4 and 5.

Methods And Evaluation Criteria: Yes, the definition of metrics, selection of the models, and datasets make sense.

Theoretical Claims: I believe there is no theoretical claim in this paper.

Experimental Designs Or Analyses: Overall, the experiment is well designed, and the appendix supports its validity and soundness.

Supplementary Material: Yes, I went over the overall supplementary material and it helps to better understand the paper.
Relation To Broader Scientific Literature: This empirical result might influence research on the consistency behavior of LLMs. The observations made in this paper are consistent with earlier findings in smaller LMs and match a new trend over very large LMs. Essential References Not Discussed: NA Other Strengths And Weaknesses: The main strength is the empirical work that supports various claims and reproducible details in the paper. Other Comments Or Suggestions: NA Questions For Authors: I don't have any further questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's feedback and acknowledgment. We are grateful for the time and effort dedicated to reviewing our work. Best, Authors
Score as Action: Fine Tuning Diffusion Generative Models by Continuous-time Reinforcement Learning
Accept (poster)
Summary: The paper proposes a continuous-time reinforcement learning framework to fine-tune diffusion models by treating score functions as actions in a stochastic control problem. The authors derive a policy gradient theorem for continuous-time RL and connect KL regularization to a tractable running reward. The authors also develop a policy optimization theory by adapting ideas from TRPO/PPO to the continuous setting. ## update after rebuttal Thanks for the response. I will keep my score. Claims And Evidence: The authors claim that the proposed continuous-time RL framework, specialized value function design and policy optimization algorithm can improve the quality of diffusion generative model’s output. Empirical results on both small-scale (CIFAR-10) and large-scale (Stable Diffusion v1.5) settings provide convincing evidence of the method’s effectiveness. Methods And Evaluation Criteria: YES Theoretical Claims: The authors claim that their continuous-time policy optimization leads to a tighter bound and a closed-form advantage-rate function compared with conventional continuous-time PPO approaches. There is no obvious error in the proofs. Experimental Designs Or Analyses: The experiment design is mostly reasonable, but it would be better to elaborate more on the advantages of their policy optimization algorithm. Supplementary Material: The supplementary materials are reviewed. Relation To Broader Scientific Literature: Most of the prior works leveraging RL to fine-tune diffusion generative models use a discrete-time formulation. The paper’s key idea of connecting diffusion generative models to continuous-time reinforcement learning is novel. Essential References Not Discussed: There is no essential reference not mentioned. Other Strengths And Weaknesses: Strengths: 1. The idea of treating the score function as an action and aligning diffusion model fine-tuning with continuous-time RL is novel. 2. 
The theoretical derivations provide a robust foundation for the proposed method and help to connect with the continuous framework. Weaknesses: 1. There is limited discussion on the additional computational cost that might arise from adopting a continuous-time formulation compared to conventional discrete-time methods. Other Comments Or Suggestions: Nil Questions For Authors: 1. Could you please provide some intuitive explanations for the value function design in Equation (20)? 2. The authors claim that they provide a tighter bound and closed-form advantage-rate function compared with conventional continuous-time PPO approaches. How does this help to generate images with higher quality? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for providing thoughtful suggestions to improve our paper. Please find our responses to your questions below: ***Q1: Intuitive explanations for the value function design in Equation (20)*** ***A1***: The design of our value function in Equation (20) is motivated by the structural properties of score-based diffusion models (explained in (a) below) and by the boundary condition of the value function (explained in (b) below). - **(a)** Recall that the value function $V(t,x)$ is defined as the expected reward of the final generated image (if the regularization penalty parameter $\beta$ is 0) conditional on the current time $t$ and state/latent $x$, so a natural and naive approximation of it is the reward of the denoised image $\hat{x}_\theta(t,x)$ of the current state $x$, as the denoised image can be understood as "a prediction of the image at the end of the SDE trajectory". This motivates our first term, the reward mean predictor; of course, such an approximation of the value function is not exact, so we use a second term with the same network architecture to approximate the residual, which improves performance by reducing the mean squared error. - **(b)** The value function has a boundary condition at the final time $T$, namely $V(T,x)=R(x)$, so we choose the two coefficient functions $c_{\text{skip}}(t)$ and $c_{\text{out}}(t)$ to satisfy it: we let $c_{\text{skip}}(T)=1$ and $c_{\text{out}}(T)=0$, so the boundary condition is naturally satisfied. We hope that this clarifies our design space, and we are happy to take further questions. ***Q2: How does the closed-form advantage-rate function help generate images with higher quality?*** ***A2***: We illustrate this mainly through the RL perspective, as images with higher quality are reflected as higher average reward for the RL agent. 
As far as we know, the most relevant work on conventional continuous-time PPO approaches is [1], which estimates the advantage-rate function in a model-free manner, but requires sampling many more examples to make the estimation accurate and the algorithm stable. In our method, thanks to the closed-form advantage-rate function and the subtraction of the second-order term (which is hard to compute) as the baseline function (please find a more detailed discussion in Lines 309-322 after Theorem 1), we get a much simpler way to compute the advantage function which is also much more sample efficient, since we do not need to query the generative model for the next state. We believe leveraging this structural property is one of the key reasons we get quite stable training reward curves, as illustrated by the orange curve in Figure 9. [1] Zhao et al. 2023. Policy optimization for continuous reinforcement learning. NeurIPS 2023
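The boundary-condition argument in A1 above can be sketched schematically; the coefficient choices and function stand-ins below are illustrative assumptions, not the paper's exact parameterization. The point is that any $c_{\text{skip}}, c_{\text{out}}$ with $c_{\text{skip}}(T)=1$ and $c_{\text{out}}(T)=0$ make the residual term vanish at the terminal time:

```python
T = 1.0  # terminal time of the SDE trajectory (illustrative)

def c_skip(t):
    # Any smooth choice with c_skip(T) = 1 works; a linear ramp for illustration.
    return t / T

def c_out(t):
    # Any smooth choice with c_out(T) = 0 works.
    return 1.0 - t / T

def value(t, x, reward, denoise, residual):
    """V(t, x) = c_skip(t) * R(x_hat(t, x)) + c_out(t) * residual(t, x)."""
    return c_skip(t) * reward(denoise(t, x)) + c_out(t) * residual(t, x)
```

At $t = T$ the estimate collapses to the reward of the denoised prediction, so if the denoiser is the identity at the terminal time, $V(T,x) = R(x)$ holds regardless of how the residual network is trained; earlier in the trajectory the residual term corrects the naive reward-of-denoised-mean guess.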
Summary: The paper proposes a continuous-time RL approach for fine-tuning diffusion generative models. It develops a policy optimization framework tailored for continuous-time RL, and empirically validates it on the Stable Diffusion v1.5 model. Claims And Evidence: 1. Most importantly, it is still not clear to me why continuous RL is a preferred framework. Fine-tuning diffusion models in continuous SDEs has been widely studied in the literature. But regarding continuous RL, it is still not made entirely clear why it is needed. In Figure 1, it is claimed that "discretizing into 50 steps has more overfitting than discretized with 25 or 100 steps". This is confusing. If I understand correctly, overfitting refers to 'mode collapse'. However, such a claim is not evident. How many runs have you conducted to draw this plot? Is it just for one run? Besides, the reward differences are not significant. Finally, even if this plot is properly drawn, what is the goal of this plot anyway? As far as I am concerned, in practice no matter what continuous formulation is used in modeling, in the final inference stage you would need to choose a discretization number of steps to denoise samples. In Figure 7, it seems the claim is that the model fine-tuned by continuous RL does not exhibit performance differences across discretization steps. But it is not entirely convincing to me, as the experiments can have randomness. Besides, in practice, if one discretization step count works well, it is already sufficient; what is the actual motivation for having several discretization step counts with similar performance? 2. The implications and interpretations of Figure 2 are confusing. The authors compare several value network architectures. Firstly, the training of value networks with MC across time steps is standard in RL fine-tuning. The novelty seems to be showing the representative power of introducing sin functions. 
However, I don't think this representativeness can be definitively generalized to all domains of diffusion models. For example, if the goal is to train a value function for a sequence diffusion model, there are some embedding forms that might outperform sin and cos. Such randomness and specificity make me doubt the meaning of showing Figure 2 and Table 1, which are not generalizable. Methods And Evaluation Criteria: The methods and evaluation criteria generally make sense Theoretical Claims: Claims are mostly sound. Experimental Designs Or Analyses: A major concern is about the baseline selection. If the continuous-time RL method proposed in this work intends to demonstrate superiority over existing fine-tuning methods, only comparing against DDPO appears insufficiently justified, as DDPO is relatively slow and computationally heavy (besides, Figure 9 does not seem to indicate a significant difference). Why not include comparisons against common RL fine-tuning baselines such as DRaFT and AlignProp, which are already effective on simple "discrete RL"? Even if the original reward is non-differentiable, the parametrized value functions would be differentiable, so implementation would not be a problem. It is advised to test if continuous RL can further boost a reward direct-propagation-based method. Additionally, the authors completely omit comparisons to a line of inference-time techniques, i.e., Sequential Monte Carlo (SMC)-based methods [1-2]. These inference-time techniques are critical baselines since they do not even incur the computational overhead associated with fine-tuning and can still have good performance. At the cost of fine-tuning, it is necessary to show better performance over the fine-tuning-free ones. At minimum, clarifying clear superiority over these other potential baselines would substantially strengthen the importance of using 'continuous RL' versus ordinary "discrete RL". 
[1] https://arxiv.org/pdf/2402.06320 [2] https://arxiv.org/pdf/2408.08252 Supplementary Material: The supplementary material was reviewed partially, specifically the appendices related to theoretical derivations. Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: Strength: 1. I appreciate the theoretically sound framework of policy optimization in the continuous RL formulation presented in Section 4. The contents are self-contained and instructive for potential future works studying continuous RL settings. Other Comments Or Suggestions: 1. Eqs. 10-12 establish that finding the optimal score becomes a policy optimization problem in RL. However, this is known in the literature; see examples [1-2]. References are lacking. Additionally, how are these results different from the known results? Similarly, Thm 3.1 is also a known result. These take up much space in the main text. Can the authors comment on why they are novel and what the underlying difficulty of proving them is? 2. It is recommended to enhance the interpretation of results (make the limitations of figures and tables as clear as possible to avoid overstatements that might not hold in other scenarios), and add essential experimental baselines (or at least explain why some important ones are missing). [1] https://arxiv.org/pdf/2402.16359 [2] https://arxiv.org/abs/2403.06279 Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for providing detailed and thoughtful feedback to improve our paper. Please find our responses to your questions below: ***Q1: Clarification on the motivation in Figure 1 and the continuous-RL framework*** ***A1***: We are sorry if our wording has caused confusion, but we did not make such a claim about "mode collapse". What we claimed, or observed, is that the discrete-RL-trained policy, as a diffusion model, "overfits" to using 50 time steps as inference steps, as illustrated in Figure 1. Diffusion models have a nice natural tradeoff between generation quality and speed, as more inference steps typically lead to more satisfactory images while costing extra inference time. The most advanced models typically still have such tradeoffs. We are motivated to let the fine-tuned models keep this property (which is a degree of freedom for users with different purposes), as it is unfortunate that the performance improvement brought by RL fine-tuning methods like DDPO is discounted when inferencing the fine-tuned models at other time-discretization steps like 25 and 100, compared to the improvement typically reported only at a fixed step count like 50. Comparing our framework to [1] and [2], we agree that the continuous-time control formulation already exists and is similar to our learning objective, but 1) we adopt different parameterizations that treat the score function as a first-class citizen and as the primary action space, which motivates us to incorporate the structural property in the value function design, and 2) the methods in these two works are pure stochastic optimal control methods, which are fundamentally different from our RL-based approaches. 
The effectiveness and scalability of RL methods have been largely investigated in LLM fine-tuning, which makes them quite popular, and we believe that our work can help advance the study of RL-based fine-tuning methods for diffusion models through the continuous-time framework, with better leverage of structural properties and thus a scalable, more efficient and more stable algorithm. ***Q2: The implications and interpretations of Figure 2*** ***A2***: To summarize, Figure 2 and Table 1 mainly ablate how to smartly incorporate the diffusion model policy network into the value function design space, and the partial effects of the link function. As we already theoretically explained our motivations after Equation (20), we conduct experiments in Figure 2 and Table 1 to empirically showcase how things differ when putting the denoised mean $\hat{x}_{\theta}(t,x)$ in the first reward mean predictor term or the second residual corrector term. Our experiments indeed showcase the promise of incorporating such structural property in our design space, as shown by the reduced MSE of the green line compared to the orange line. Incorporating the denoised mean in the second part barely improves performance while introducing extra computational burden. The performance of the green line and purple line also shows that the choice of link function matters. We agree that there could be other link functions that might outperform sin and cos; however, it is impossible for us to exhaust all of them. We would like to pursue possible theoretical guidance for choosing link functions as future work. ***Q3: Comparisons against common RL fine-tuning baselines such as DRaFT and AlignProp.*** ***A3***: We would like to remark that DRaFT and AlignProp are algorithms for fine-tuning diffusion models, but they are not RL fine-tuning methods, which partially deviates from our primary study on RL-based methods. 
Nevertheless, we agree that they are baselines for fine-tuning diffusion models, and we conduct additional experiments comparing our method, DDPO, DRaFT and AlignProp. Since the DRaFT code is not released, we implemented it by revising the codebase of AlignProp. Please find the reward curves comparisons [in the link here](https://ibb.co/KxX6SxhF). Our experimental results show that DRaFT and AlignProp perform similarly (the time costs are also similar), but they both underperform our CTRL approach, or even DDPO, when using ImageReward as both the reward signal and the evaluation metric. This demonstrates the advantage of our proposed method. ***Q4: Whether continuous RL can further boost a reward direct-propagation-based method*** ***A4***: Thank you for raising a direction that could be insightful. Since DRaFT and AlignProp are not RL-based algorithms, we agree and believe that such methods could have continuous-time extensions. This is related at a high level to our paper's use of continuous-time RL over discrete RL, but we believe that the overall framework would be quite different and would not be RL-related anymore. We will add this point as a possible direction that can further showcase the insights brought by our framework; we would like to pursue this as future work. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications. Most of my concerns have been addressed. I have raised the score. I would suggest adding necessary references, interpretations of figures, and comparisons with RL-based finetuning techniques in the next version for completeness and clarity.
Summary: The paper reformulates fine-tuning diffusion generative models as a continuous-time reinforcement learning problem by treating the score function as a control action in the backward SDE. This continuous-time framework enables direct computation of policy gradients and leads to novel continuous-time analogs of popular RL algorithms (like TRPO/PPO), which are shown to yield more robust and efficient fine-tuning. Additionally, the paper proposes a value network design that leverages the structural properties of diffusion models, and experiments on CIFAR-10 and Stable Diffusion v1.5 demonstrate improved convergence and generation quality over traditional discrete-time methods. ## update after rebuttal I confirm my score. Authors addressed comments and added clarity and results to the original submission. Claims And Evidence: In Section 5.2, the paper mentioned that “Continuous-time RL outperforms Discrete-time RL baseline methods in both efficiency and stability because CTRL only require time discretization for estimating the policy gradient.”, for which it should provide a detailed description and more evidence. Methods And Evaluation Criteria: The methods and evaluation criteria are well-suited to the problem. The continuous-time RL formulation aligns naturally with the intrinsic continuous nature of diffusion processes, and evaluating on benchmarks like CIFAR-10 and Stable Diffusion v1.5 is appropriate for demonstrating improvements in fine-tuning generative models Theoretical Claims: There are no apparent flaws in the theoretical arguments as presented. Experimental Designs Or Analyses: **Section 5.1** Compare DxMI with CTRL: DxMI is an inverse reinforcement learning based method, which is different from the direct use of reinforcement learning fine-tuning. Supplementary Material: Supplementary material provides code for fine-tuning the EBM and SD v1.5. 
Relation To Broader Scientific Literature: The paper relates to the fields of alignment, diffusion models and reinforcement learning. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths:** - A novel continuous-time RL fine-tuning framework **Weaknesses:** - Clarity of the paper: add pseudocode and a figure of the method. - Small experiments: only compared with 2 models; more settings of the number of discretization timesteps should be compared against DDPO. Other Comments Or Suggestions: Add more experiments to show CTRL is practical. Questions For Authors: How long does it take to fine-tune SD v1.5 with CTRL? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for providing thoughtful suggestions to improve our paper. Please find our responses to your questions below: ***Q1: Add pseudocode and a figure of the method*** ***A1***: Please find the pseudocode of our algorithm [here](https://ibb.co/KjhQ4s3m). We will add this part to the revised version. ***Q2: Provide a detailed description and more evidence of "Continuous-time RL outperforms Discrete-time RL baseline methods in both efficiency and stability because CTRL only require time discretization for estimating the policy gradient."*** ***A2***: As illustrated in the earlier Section 4, we are essentially motivated to approximate the policy gradient formula in Equation (23); PPO can be understood as a more efficient implementation of this policy gradient, introducing a surrogate loss that shares the same gradient, together with a clipping mechanism. This is shown in the definition of the loss objective in the [pseudocode](https://ibb.co/KjhQ4s3m). We also showed in the experiments of fine-tuning Stable Diffusion v1.5 (the second environment) that our method yields higher rewards when training on the same amount of samples compared to DDPO, and the variance of the reward curves also appears to be much smaller. That is why we claim "efficiency and stability", and we will add this heuristic explanation in the revised version. ***Q3: Only compared with 2 models, and more settings of the number of discretization timesteps compared with DDPO.*** ***A3***: We conduct additional experiments comparing our method, DDPO, DRaFT [1] and AlignProp [2]. Since the DRaFT code is not released, we implemented it by revising the codebase of AlignProp. Please find the reward curves comparisons [in the link here](https://ibb.co/KxX6SxhF). Our experimental results show that DRaFT and AlignProp perform similarly, but they both underperform our CTRL approach, or even DDPO, when using ImageReward as both the reward signal and the evaluation metric. 
Due to time constraints, we haven't tested additional settings of the number of discretization timesteps of our method against DDPO, since they would require more hyperparameter tuning for a fair comparison, but we will add the corresponding results and comparisons in a later version. ***Q4: How long does it take to fine-tune SD v1.5 with CTRL?*** ***A4***: Our current implementation of CTRL takes twice as much time as DDPO, since it requires value-network updates and inference, similar to PPO vs. REINFORCE methods in LLMs. ***References:*** [1] DRaFT: Directly fine-tuning diffusion models on differentiable rewards [2] AlignProp: Aligning text-to-image diffusion models with reward backpropagation --- Rebuttal Comment 1.1: Comment: Thank you for addressing my and other reviewers' comments.
Summary: This paper proposes a novel continuous-time reinforcement learning (RL) framework for fine-tuning diffusion models, reframing the learned “score” function as the policy’s action. Unlike discrete-time RL methods for text-to-image (T2I) or other diffusion settings, the paper leverages the underlying continuous-time SDE nature of diffusion models to avoid artifacts from fixed time-discretization and to better exploit structural properties. Claims And Evidence: Yes, the claims proposed are supported by experiments shown in Section 5.2. Methods And Evaluation Criteria: Yes. The proposed methods make sense for the problem and the benchmark models are also reasonable (DxMI and SD v1.5). Theoretical Claims: I briefly checked the proofs and they look sound. Experimental Designs Or Analyses: The experimental design is generally strong: - With T=10 steps, they show that continuous-time RL plus carefully chosen sampling steps can drastically improve generation quality (lower FID) compared to a baseline IRL approach. - Stable Diffusion Fine-Tuning: They measure the ImageReward score across training and across different sampling step counts (25, 50, 100) to confirm time-discretization invariance and consistent improvement brought by the continuous-time RL. Supplementary Material: I went over the proofs in the supplementary material. Relation To Broader Scientific Literature: The paper belongs to topics related to RL for diffusion models. The paper references existing results on continuous-time RL. The authors refine these ideas for the specific scenario of KL-regularized reward and time-homogeneous diffusion coefficients. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - By treating the score as an action in a stochastic control problem, the approach neatly sidesteps time-discretization artifacts. 
- The paper presents a clear policy-gradient theorem (Theorem 4.1) and a TRPO/PPO-inspired surrogate bound (Theorem 4.4) specifically for continuous-time, state-independent diffusion coefficients. Weaknesses: - The paper only compares its method with DDPO while missing comparisons to other important baselines, such as RAFT and DRaFT, which makes the results less convincing. - The paper uses only SDv1.5 as the base model; including more models for comparison would improve the evaluation. Other Comments Or Suggestions: N/A Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for providing thoughtful suggestions to improve our paper. Please find our responses to your questions below: ***Q: Comparisons to other important baselines, such as RAFT and DRaFT*** ***A***: We conduct additional experiments comparing our method, DDPO, DRaFT [1] and AlignProp [2]. Since the DRaFT code is not released, we implemented it by revising the codebase of AlignProp. Please find the reward curves comparisons [in the link here](https://ibb.co/KxX6SxhF). Our experimental results show that DRaFT and AlignProp perform similarly, but they underperform our CTRL approach, or even DDPO, when using ImageReward as both the reward signal and the evaluation metric. We would like to comment that we agree these are important diffusion-model fine-tuning baseline methods, but our paper primarily focuses on RL-based methods, so we did not pursue these baselines in our submitted version. Nevertheless, we agree that these comparisons could strengthen our results, and we will add the experiments with these non-RL baseline methods in the revised version. Due to time constraints, we haven't tested RAFT [3] based on ImageReward, but we will also add the corresponding results and comparisons in a later version. ***References:*** [1] DRaFT: Directly fine-tuning diffusion models on differentiable rewards [2] AlignProp: Aligning text-to-image diffusion models with reward backpropagation [3] RAFT: Reward rAnked Fine-Tuning Algorithm
Beyond The Rainbow: High Performance Deep Reinforcement Learning on a Desktop PC
Accept (poster)
Summary: Similar to Rainbow DQN, this paper integrates several improvements from existing RL literature to improve the performance of reinforcement learning agents. The authors show that their ensemble method BTR achieves state-of-the-art performance on Atari tasks using 200M frames on a desktop PC. The authors also show performance on other complex 3D games such as Super Mario Galaxy, Mario Kart, and Mortal Kombat. The ablation experiments in the paper show the impact of each improvement proposed. ## Update after rebuttal During the rebuttal session, since the authors have added more recent baseline results that greatly improve the persuasiveness of the paper, I now tend to weak accept this paper. Good work! Claims And Evidence: As the title and experiments show, this paper mainly proposes an integrated method that integrates multiple existing reinforcement learning technologies, and proves through experiments that it has achieved SOTA performance on 200M Atari (mainly) and can run on personal PCs. However, the methods mainly compared in the paper, such as Rainbow, DQN, etc., may be far from the current SOTA algorithms. Even if BTR can surpass them, it is difficult to prove its SOTA performance, and thus it is difficult to prove the main contribution of this paper (i.e. the advantages of integrating multiple RL technologies). If the author can give a comparison with more recent papers, it may greatly enhance the persuasiveness of the paper, such as MEME [1], EfficientZero [2], etc. [1] Human-level Atari 200x faster [2] Mastering Atari Games with Limited Data Methods And Evaluation Criteria: The proposed method is an integration of several existing improvements (at the algorithm level). The main experiments were conducted on the 200M Atari task. The evaluation metrics used in the experiment, such as IQM, are reasonable. 
However, it is worth noting that the baseline methods compared in this paper may be far from representing the current SOTA on the 200M Atari task. As mentioned in (Claims And Evidence), there are at least more recent baselines like MEME and EfficientZero. If the author can add corresponding comparisons, it may be more convincing. Furthermore, the ability to run RL algorithms on personal PCs is certainly exciting, but its practical implications may still require extensive discussion and clarification. Theoretical Claims: This paper is mainly an experimental paper, that is, to verify the effectiveness of the proposed algorithm, and there are no outstanding theoretical results in the main body of the paper. Experimental Designs Or Analyses: The experiment in this paper mainly focuses on using RL algorithms to train multiple video game agents, and mainly compares evaluation metrics such as IQM, Mean HNS, and Median HNS. However, the main problem may be the lack of more recent and more convincing baselines. In addition, the method in the paper seems to use life info from Atari, so what is the performance without using life info? Supplementary Material: I mainly looked at the experimental settings and supplementary experimental figures and tables in the appendix. Relation To Broader Scientific Literature: Since the paper mainly conducts experiments on Atari tasks, it mainly compares a series of other existing RL algorithms, such as Rainbow, Impala, DQN, etc. However, there is still a lack of more recent paper comparisons, such as MEME, EfficientZero, etc. Essential References Not Discussed: Since the main contribution of the paper is to propose a set of practical and efficient RL algorithms, this paper lacks comparison with some recent existing efficient RL algorithms, which at least include MEME, EfficientZero, etc. 
Other Strengths And Weaknesses: The main advantage of this paper is that it proposes a set of efficient RL algorithms that can run on high-performance personal PCs, but this paper lacks comparison with the key latest related literature, and also lacks more convincing applications of the BTR method on desktop PCs. Other Comments Or Suggestions: If the author can compare it with some more recent RL algorithms, it will greatly enhance the persuasiveness of the paper. Questions For Authors: 1. How does BTR compare with newer RL algorithms on the 200M Atari task? For example, MEME, EfficientZero, etc. 2. In addition to video games, can the author provide other more convincing application scenarios for BTR? 3. On the Atari task, how does BTR perform without using life info? 4. In addition to the integration of multiple methods, can the author demonstrate its original contribution? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their concise review. **Life Information** - We do not use life information in any main paper results, as we specify in Section 4.1. Furthermore, we provide a detailed comparison of BTR with and without life information in Appendix I. If your low score was due to doubts about our empirical performance stemming from this matter, we hope you are willing to reconsider now that this has been clarified. **Comparison against baselines** - We already compare against SOTA algorithms for Atari 200M in Table 3. We did not include such baselines in Figures 1 and 2, as we wanted to limit this comparison to those algorithms which can be run with widely accessible compute power. However, we are willing to replace Figure 1 with a walltime-efficiency curve against Dreamer v3, as we feel this better conveys the message of the paper (MEME did not release this data). We can also add Dreamer v3 and MEME to Figure 2, with a note indicating their walltime to allow for a fairer comparison, as we believe that with the appropriate context this will improve the paper. We do, however, argue against the inclusion of sample-efficient algorithms (Atari-100K), as these algorithms use drastically different resources (100K frames vs 200M), and critically are benchmarked on a 26-game subset, rather than the full suite, making for an unfair comparison. If you believe this to be pivotal, we can include a figure in the appendix for the 26-game subset. 
Below is a comparison against SoTa algorithms on the mentioned 26-game subset: | Algorithm | Frames | A100 Walltime | IQM (26-Game Subset) | |--------------------------------------|---------------|----------------|----------------------| | MEME | 200M | Not Reported* | 18.491 | | Dreamer v3 | 200M | 7.7 Days | 14.305 | | Beyond The Rainbow (BTR) | 200M | 22 Hours | 11.202 | | PQN [1] | 400M | 1 Hour | 5.014** | | EfficientZero v2 | 100K | 2.7 Hours*** | 1.305 | | BBF | 100K | 7.8 Hours | 1.045 | | Dreamer v3 | 100K | 2.4 Hours | 0.543 | *MEME used a shared server with a TPUv4. **PQN used life information, making its results appear significantly higher. ***EfficientZero was tested on a server with 8 RTX 3090s, not an A100. **Practical Implications of RL on Desktop PCs** - Could you further clarify this point? We are unsure exactly what you are asking. [1] Gallici, Matteo, et al. "Simplifying deep temporal difference learning." arXiv preprint arXiv:2407.04811 (2024). --- Rebuttal Comment 1.1: Comment: Thank you for the author's detailed response, which addresses my concerns. Adding more recent baseline results can indeed greatly improve the persuasiveness of the paper, so I am happy to update my evaluation to weak accept. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for the insightful discussion and raising of their score. Additionally, we would like to clearly set out our key contributions and applications as per your request. Aside from integrating, testing, tuning and ablating a wide variety of different components, we believe the final algorithm is a novel contribution as it provides a practical algorithm for our targeted audience of academics, hobbyists and students. Beyond this, our analysis section provides new insights into many different components which were not explored in the respective papers. 
Furthermore, our appendices provide important information to the community for applying RL algorithms, such as the impact of life information (and why using it invalidates results), and how we designed MDPs for the Wii games. As for the practical applications of BTR beyond games, in its current state the algorithm can be used for any image-based, discrete action environment. This has numerous potential applications such as robotics, healthcare and UAVs. BTR also presents future work to be combined with other areas of RL such as value-based methods for complex action spaces [1, 2], Sim-to-real [3, 4] and offline-RL [5, 6], making for a more widely applicable and usable algorithm. [1] Tavakoli, Arash, Fabio Pardo, and Petar Kormushev. "Action branching architectures for deep reinforcement learning." Proceedings of the aaai conference on artificial intelligence. Vol. 32. No. 1. 2018. [2] Tavakoli, Arash, Sina Ghiassian, and Nemanja Rakićević. "Learning in complex action spaces without policy gradients." arXiv preprint arXiv:2410.06317 (2024). [3] Wagenmaker, Andrew, et al. "Overcoming the Sim-to-Real Gap: Leveraging Simulation to Learn to Explore for Real-World RL." Advances in Neural Information Processing Systems 37 (2024): 78715-78765. [4] Zhao, Wenshuai, Jorge Peña Queralta, and Tomi Westerlund. "Sim-to-real transfer in deep reinforcement learning for robotics: a survey." 2020 IEEE symposium series on computational intelligence (SSCI). IEEE, 2020. [5] Prudencio, Rafael Figueiredo, Marcos ROA Maximo, and Esther Luna Colombini. "A survey on offline reinforcement learning: Taxonomy, review, and open problems." IEEE Transactions on Neural Networks and Learning Systems (2023). [6] Ada, Suzan Ece, Erhan Oztop, and Emre Ugur. "Diffusion policies for out-of-distribution generalization in offline reinforcement learning." IEEE Robotics and Automation Letters 9.4 (2024): 3116-3123.
Summary: This paper proposes Beyond the Rainbow (BTR), an algorithmic successor to Rainbow DQN that improves asymptotic performance, data-efficiency, and wall-time efficiency through a series of algorithmic and architectural modifications informed by existing but recent literature. Key changes include using an Impala backbone rather than the traditional 3-layer Nature CNN, adaptive maxpooling, spectralnorm, IQN value estimation instead of C51, Munchausen RL for TD-learning instead of Double DQN, and finally use of vectorized environments and hyperparameter-tuning for improved wall-time efficiency. Experiments are conducted on Atari as well as a number of 3D video game environments, and the authors demonstrate that BTR performs significantly better than Rainbow DQN and a number of other model-free baselines. ## Post-rebuttal assessment As mentioned in my rebuttal reply, I will maintain my score of weak accept but appreciate the authors' response to my comments as well as those of my fellow reviewers. Claims And Evidence: The main claim of the paper is that the algorithmic and architectural improvements upon Rainbow DQN lead to a new state-of-the-art for model-free (non-recurrent) RL on the chosen domains. The proposed changes are well motivated and validated through a series of ablation studies that provide empirical evidence for the claims. Experiments establish that BTR outperforms Rainbow DQN by a large margin on each of the chosen domains. I appreciate the inclusion of a number of unconventional domains for evaluation (the 3D environments); they all look fairly challenging and represent real use cases for RL in the context of video games. My main issue with the experimental results is that all of the baselines are considered rather old and outdated. For example, the authors mainly compare against Rainbow DQN (2018), as well as methods for which numbers are reported in RLiable (2021). 
There has been substantial progress in the area of RL in the past 4 years, which is largely ignored in the experimental evaluations. This makes the claim that BTR achieves a new state-of-the-art somewhat shaky. Given that most RL algorithms for discrete action spaces report benchmark numbers for Atari (either 100k or 200M) I see little reason not to include more recent results, such as DreamerV3 (2023) as opposed to DreamerV2 (2020). If I read the results correctly, it appears that DreamerV3 achieves 8.30 human-normalized median score on Atari 60 vs. 4.69 for the proposed method, and 6.93 for MuZero [1]. While I undoubtedly find the proposed method appealing due to its wall-time efficiency, I feel that not positioning the results wrt any RL results more recent than 2021 is highly problematic. I include two model-based RL results here to prove my point, but I'm sure that there are more recent model-free results readily available for Atari 200M as well. [1] Hafner et al., DreamerV3: https://arxiv.org/abs/2301.04104 (2023) Methods And Evaluation Criteria: Yes, I believe that the chosen domains are appropriate for benchmarking. I have concerns about the choice of baselines, as discussed above. Theoretical Claims: Not relevant. Paper is empirical in nature. Experimental Designs Or Analyses: Experimental design appears solid. I have concerns regarding baselines, as previously discussed. Supplementary Material: I briefly skimmed through the code. I appreciate that the authors include code and model checkpoints in their submission. Relation To Broader Scientific Literature: The contributions are well motivated and positioned wrt. previous work. There is a serious lack of discussion and comparison to more recent literature; the chosen baselines are 3-12 years old. 
Essential References Not Discussed: I am not very familiar with recent literature on model-free RL algorithms for Atari 200M, but there has been substantial work on model-based algorithms for Atari 200M (MuZero and DreamerV3 come to mind [1]), as well as both model-free (DrQ, SPR, BBF) [2] and model-based (EfficientZero, EfficientZero-V2) [3] algorithms for Atari 100k. The paper would benefit from discussion and empirical comparison to more recent literature. [1] Hafner et al., DreamerV3: https://arxiv.org/abs/2301.04104 (2023) [2] Schwarzer et al., BBF: https://arxiv.org/abs/2305.19452 (2023) [3] Wang et al., EfficientZero-V2: https://arxiv.org/abs/2403.00564 (2024) Other Strengths And Weaknesses: The paper is generally well written, the proposed algorithmic and architectural changes are well motivated, and ablations provide insights into the relative importance of each design choice. I appreciate the use of RLiable metrics. Other Comments Or Suggestions: The paper has a few minor typos but they do not detract from my understanding in any meaningful way. Questions For Authors: I would like the authors to provide justification for their choice of baselines (in particular the wrt omission of more recent literature) as well as a general lack of discussion of newer literature. I have provided some references but hope that the authors can conduct a more thorough literature study as well given their focus on model-free methods for Atari 200M in particular. It would be helpful to provide additional model-free baselines for the 3D video game domains but I understand that this may not be possible within the strict time frame of a rebuttal. I highly recommend comparison to a more recent algorithm for a future revision. Ethical Review Concerns: No notable concerns Code Of Conduct: Affirmed. Overall Recommendation: 3
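As background for the human-normalized scores discussed in this review (e.g. DreamerV3's reported 8.30 median), a minimal sketch of the standard Atari normalization. The per-game random and human reference scores below are illustrative placeholders, not values from the paper or any official table.

```python
import numpy as np

def human_normalized(score, random_score, human_score):
    """Standard human-normalized score: 0 = random play, 1 = human play."""
    return (score - random_score) / (human_score - random_score)

# Illustrative placeholder reference scores (random, human) for two hypothetical games.
refs = {"game_a": (100.0, 1100.0), "game_b": (0.0, 50.0)}
agent = {"game_a": 2100.0, "game_b": 25.0}

normalized = [human_normalized(agent[g], *refs[g]) for g in refs]
median_score = float(np.median(normalized))  # 1.25 with these placeholder numbers
```

Aggregates such as the median or IQM are then taken over these per-game normalized scores.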
Rebuttal 1: Rebuttal: Thank you for your detailed and constructive review, and for your appreciation of the evaluation on unique environments. **Comparison against baselines** - We already provided a discussion on SoTa algorithms for Atari 200M in Table 3, and two of the listed algorithms (BBF and EfficientZero V2) are sample-efficient algorithms which only used 100K frames rather than 200M. Furthermore, they were never benchmarked on the full Atari set, but rather the reduced 26-game sample-efficient benchmark. Therefore, we cannot include these in Figures 1 and 2. We did not include MEME and Dreamer v3 in Figures 1 and 2 as we did not want to present an ‘apples to oranges’ comparison, since these SoTa algorithms are inaccessible to our targeted users due to heavy compute requirements. However, to avoid any confusion around BTR’s performance, we are willing to replace Figure 1 with a walltime efficiency curve that includes Dreamer v3, as we feel this better conveys the message of the paper. We can also add Dreamer v3 and MEME to Figure 2, with a note indicating their walltime to allow for a fairer comparison as we believe that with the appropriate context this will improve the paper. If this is your main critique of the paper, given that this is simple to remedy, we hope it may be grounds for improving your score. | Algorithm | Frames | A100 Walltime | IQM (26-Game Subset) | |--------------------------------------|---------------|----------------|----------------------| | MEME | 200M | Not Reported* | 18.491 | | Dreamer v3 | 200M | 7.7 Days | 14.305 | | Beyond The Rainbow (BTR) | 200M | 22 Hours | 11.202 | | PQN [1] | 400M | 1 Hour | 5.014** | | EfficientZero v2 | 100K | 2.7 Hours*** | 1.305 | | BBF | 100K | 7.8 Hours | 1.045 | | Dreamer v3 | 100K | 2.4 Hours | 0.543 | *MEME used a shared server with a TPUv4. **PQN used life information, making its results appear significantly higher. ***EfficientZero was tested on a server with 8 RTX 3090s, not an A100. 
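For reference, the IQM (interquartile mean) reported in the table above is, as popularized by the RLiable library, the mean of the middle 50% of scores across runs and games. A minimal sketch, assuming one common trimming convention (drop the lowest and highest 25%, rounding the cut down); exact tie-handling may differ from any particular library implementation.

```python
import numpy as np

def iqm(scores):
    """Interquartile mean: discard the lowest and highest 25% of scores,
    then average what remains."""
    s = np.sort(np.asarray(scores, dtype=float).ravel())
    cut = int(np.floor(0.25 * s.size))  # number trimmed from each tail
    return float(s[cut:s.size - cut].mean())
```

Unlike the mean, the IQM is robust to a few outlier games with enormous scores, which is why it is preferred for aggregate Atari comparisons like the one above.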
**More Baselines for Wii Games** - As you mention, unfortunately this may not be possible during the rebuttal period. Out of interest, what algorithm would you like to see? We found other SoTa algorithms such as Dreamer v3 to be too computationally expensive to run on more demanding environments. [1] Gallici, Matteo, et al. "Simplifying deep temporal difference learning." arXiv preprint arXiv:2407.04811 (2024).
Summary: Rainbow has for a few years now been a SoTA DQN-based RL method. In this paper, the authors revisit the basic idea behind Rainbow, collect a new set of tips and tricks that have appeared since Rainbow, and fold them into Rainbow, thus obtaining BTR, Beyond-the-Rainbow. They mostly evaluate the performance of the complete system on the ALE, but also run some experiments with three Wii games and some Procgen experiments. ## update after rebuttal I appreciate the authors' rebuttal, but taking into account the rebuttal and other reviewers' opinions I will keep my score. Claims And Evidence: Claims are convincingly supported by evidence. Methods And Evaluation Criteria: Mostly makes sense, but the focus on wallclock time instead of environment steps can be misleading. Wallclock-time measurement favors the use of vectorization, which is useful if the goal is to engineer a system that gets more out of the hardware, but I would argue that most RL researchers are interested in sample efficiency, and there wallclock time is misleading, as is the number of gradient steps. This point obviously does not discount the usefulness of vectorization that is amply demonstrated by this paper. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes, I did check and found no issues. Supplementary Material: Not a thorough review. Relation To Broader Scientific Literature: - Essential References Not Discussed: - Other Strengths And Weaknesses: It is clear that this paper is very useful for RL researchers and practitioners. It contains ample evidence on which tricks are the most useful together. I am sure the empirical results in this paper will be closely studied and scrutinized. The authors have taken their time to look at the results from many different points of view; I find Table 2 to be very instructive in terms of really evaluating the effectiveness of the respective tricks. 
This paper is styled from the benchmarking point of view, where the idea seems to be to get the most out of the ALE. This in itself is not necessarily a bad thing, as it is useful to know what works and does not work when experimenting with the ALE. But the ALE is known to have serious issues, such as non-stochasticity, which is fixed in Procgen. Yet in this paper the authors run only cursory experiments with Procgen; I would have liked to see the roles reversed, where Procgen is the focus and the ALE is shown for benchmarking reasons. Finally, the three Wii games, run via the Dolphin emulator, are just a curiosity. So, to strengthen the scientific message of this paper, I would suggest refocusing it on Procgen. Continuing from the previous point, sparse-reward scenarios would benefit from better sample efficiency, so environments that emphasize this would then be most useful in this respect. One option is to use some scenarios from MiniWorld. Other Comments Or Suggestions: - Questions For Authors: - I am curious about Fig 1: why is there a bump in the BTR curve? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their review and their appreciation of how our work will be useful for both researchers and practitioners. **Steps vs Walltime** – We feel the reviewer has somewhat misunderstood the main point of this paper. As stated in the title, abstract, and contributions, the purpose of our work is to create a high-performance algorithm that is computationally accessible—something that is poorly measured by environment steps. For example, while BTR takes approximately 12 hours to process 200 million frames, Dreamer v3 takes almost 8 days to do the same and requires considerably more RAM and GPU memory, illustrating that environment steps are a poor metric for computational accessibility. **Use of the ALE** – The ALE remains the most widely used RL benchmark for good reason, and we feel it is being unfairly dismissed here. The standard evaluation protocol for the ALE [2] (which we follow) includes both no-op starts and sticky actions, which are explicitly intended to prevent agents from exploiting determinism in the environment. Moreover, the ALE gained its popularity due to its extremely diverse set of tasks, a strength we believe is being overlooked. Other SoTa algorithms, such as MEME [3], benchmark exclusively on Atari for this very reason. The ALE also contains numerous sparse-reward environments, which can be found in our Atari-60 results. **Bump in Figure 1** – As discussed in Appendix F, this bump is due to epsilon-greedy exploration being disabled partway through training. While we found exploration beneficial early in training, disabling it halfway through led to improved performance. [1] Schwarzer, Max, et al. "Bigger, better, faster: Human-level atari with human-level efficiency." International Conference on Machine Learning. PMLR, 2023. [2] Machado, Marlos C., et al. "Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents." 
Journal of Artificial Intelligence Research 61 (2018): 523-562. [3] Kapturowski, Steven, et al. "Human-level atari 200x faster." arXiv preprint arXiv:2209.07550 (2022).
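The "bump in Figure 1" explanation above (epsilon-greedy exploration disabled partway through training) can be sketched as a schedule. The decay and cutoff fractions below are illustrative assumptions, not the paper's exact hyperparameters.

```python
def epsilon(step, total_steps, eps_start=1.0, eps_end=0.01,
            decay_frac=0.1, disable_frac=0.5):
    """Linearly anneal epsilon over the first `decay_frac` of training,
    then hard-disable exploration after `disable_frac` of total steps."""
    if step >= disable_frac * total_steps:
        return 0.0  # exploration switched off; act greedily from here on
    decay_steps = decay_frac * total_steps
    frac = min(step / decay_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)
```

Under such a schedule, the evaluation curve can show a visible jump at the cutoff point, since the agent stops taking random exploratory actions mid-training.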
Summary: The paper presents BTR, which integrates several well-established techniques to achieve strong performance on Atari and Procgen with limited computational resources. Through detailed ablation studies, the authors demonstrate how each component contributes to their method, showcasing the trade-offs between computational efficiency and RL performance. While falling short of DreamerV3 and MEME, BTR significantly surpasses Rainbow DQN on the Atari-60 benchmark in under 12 hours of training for 200 million frames on a Desktop PC. Claims And Evidence: 1. The authors state that BTR outperforms the state-of-the-art performance for non-recurrent RL. However, the distinction of *non-recurrent* alone is not particularly meaningful or impressive, as employing RNN encoders is typically a deliberate design choice rather than a limitation. Although, in principle, this is a valid claim, due to the stronger baselines employing RNNs by default, I don’t think it is very noteworthy. 2. The authors claim that BTR can solve modern games by demonstrating performance on three Wii games. However, these games do not present significantly greater challenges than typical Atari or ProcGen tasks. They feature limited discrete action spaces, dense rewards, and lack the complexity, variety, and stochasticity characteristic of modern games: - *Mortal Kombat: Armageddon*, despite its 3D physics engine, barely utilizes the 3rd dimension and effectively functions as a 2D side-scroller. - *Mario Kart Wii*, with only 4 discrete actions and repetitive lap-based gameplay, offers limited complexity and variety. - *Super Mario Galaxy* is fully deterministic, allowing an agent to directly overfit to a single level. To better substantiate the claim that BTR can handle modern games, the authors should evaluate it on more open-ended, truly 3D environments [1, 2]. [1] Raad, Maria Abi, et al. "Scaling instructable agents across many simulated worlds." *arXiv preprint arXiv:2404.10179* (2024). 
[2] Fan, Linxi, et al. "Minedojo: Building open-ended embodied agents with internet-scale knowledge." *Advances in Neural Information Processing Systems* 35 (2022): 18343-18362. Methods And Evaluation Criteria: Although the authors did not introduce novel algorithmic enhancements, their research is valuable as it effectively combines existing RL components into a cohesive and efficient algorithm. By integrating several independently validated improvements, the authors successfully demonstrate a method that achieves high performance while remaining computationally accessible, targeting smaller research labs and hobbyists with limited computing resources. The authors attempt to bridge the gap between low-cost simple algorithms and resource-intensive, cutting-edge RL methods, thoroughly justifying their design choices for the trade-offs in computational efficiency versus performance. Theoretical Claims: This paper does not contain theoretical claims or formal proofs; its primary contributions are experimental and methodological. Experimental Designs Or Analyses: 1. The component impact analysis is well executed, clearly showing how each part of BTR affects performance and training hours on Atari. The authors convincingly demonstrate how each design choice contributes not only to BTR's performance improvements but also to other metrics such as the percentage of dormant neurons, network weights, SRank, action gaps, and policy churn. Detailing the network architecture choices, loss components, and hyperparameters is particularly helpful. Explicitly discussing which attempts did not work, further adds transparency and makes the results more credible. 2. The authors selectively omit evaluations of other baselines from key results without justification, artificially inflating the apparent superiority of BTR. **DreamerV3** and **MEME** achieve higher IQM scores on Atari-60 (9.6 vs. BTR’s 7.4, Table 3), yet they are excluded from Figures 1 and 2, Tables A1 and A2. 
Meanwhile, **PQN**, which runs ~20x faster but performs worse (Table A3), is omitted from Table 3, where BTR's lower walltime and network size are emphasized. This cherry-picking distorts the comparison and inflates BTR’s perceived advantage. 3. While demonstrating BTR on three Wii games demonstrates the algorithm's versatility, these games are not established benchmarks, making it difficult to determine how notable BTR’s performance is relative to other methods. Without baseline comparisons or prior evaluations, it is unclear whether BTR’s results represent a unique achievement or if comparable performance could be attained by existing methods. To strengthen the evaluation, the authors should include comparisons with other RL baselines. Notably, basic RL algorithms have managed to beat similar games such as *Super Mario Kart*, *Mortal Kombat 3*, and *Super Mario 64*, although in informal, non-academic contexts, such as blog posts and YouTube videos. Supplementary Material: I reviewed the full appendix and watched the gameplay videos. Relation To Broader Scientific Literature: The contributions of this paper clearly build on existing RL literature. The authors integrate 6 previously established methods and algorithmic tricks to work with Rainbow DQN to achieve strong performance with limited computational resources. Essential References Not Discussed: Prior works have achieved higher performance on ProcGen with similar or lower data budgets [1, 2, 3]. [1] Jesson, Andrew, and Yiding Jiang. "Improving Generalization on the ProcGen Benchmark with Simple Architectural Changes and Scale." *arXiv preprint arXiv:2410.10905* (2024). [2] Cobbe, Karl W., et al. "Phasic policy gradient." *International Conference on Machine Learning*. PMLR, 2021. [3] Hafner, Danijar, et al. "Mastering diverse domains through world models." *arXiv preprint arXiv:2301.04104* (2023). Other Strengths And Weaknesses: I’ve outlined the strengths and weaknesses above. 
Other Comments Or Suggestions: 1. Section E.3 consists only of Tables E7 and E8, which are placed on the following page, making it difficult to follow. Explicit in-text references to these tables would improve clarity and readability. 2. For the camera-ready version, the authors could update the BTR results with LayerNorm, as Appendix H suggests it further improves performance. 3. Figures 2 and B2 include results for the baseline *REM*, but this method is never introduced or referenced in the paper. This baseline likely refers to Random Ensemble Mixture [1]. 4. The authors could evaluate BTR on well-established image-based 3D benchmarks with discrete action spaces like DMLab [2] and ViZDoom [3]. 5. Typos 1. Lines 131-134 missing punctuation: *the convolutional layers,* ***which*** 2. Line 180 *Boo**t**strapping* 3. Line 190 *with ~~with~~* 4. Line 320 *we find **that** maxpooling* 5. Line 1245 *many other ~~the other~~ techniques* 6. Line 1376 *we use a forked repository ~~of~~ to allow* 7. Mortal Combat —> Mortal **K**ombat [1] Agarwal, Rishabh, Dale Schuurmans, and Mohammad Norouzi. "An optimistic perspective on offline reinforcement learning." *International conference on machine learning*. PMLR, 2020. [2] Beattie, Charles, et al. "Deepmind lab." *arXiv preprint arXiv:1612.03801* (2016). [3] Kempka, Michał, et al. "Vizdoom: A doom-based ai research platform for visual reinforcement learning." *2016 IEEE conference on computational intelligence and games (CIG)*. IEEE, 2016. Questions For Authors: 1. How is *consistent completion* established in the Wii games (Figure 3)? 2. Why aren’t the Atari results of MEME and DreamerV3 included in Figure 1 and Figure 2? 3. How do other baselines perform on the Wii games? 4. What is noteworthy about non-recurrent RL? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their very thorough and constructive review, which clearly took a great deal of time and effort. We appreciate that the reviewer clearly understands the value of producing an accessible and high-performance algorithm. **Complexity of Wii Games** - We would like to argue that in particular, Mario Kart Wii is a very challenging environment, more so than it has been given credit for. This environment contains 11 other racers and the use of randomized items, introducing a very high degree of stochasticity into the environment, far beyond the stochasticity introduced in Atari. Furthermore, we tested the agent on the track “Rainbow Road”, commonly known as the game’s hardest track, taking over a minute even for a single lap, and featuring a very complex and noisy observation space (fully 3D, with multicoloured roads and constantly changing 3D backgrounds that resemble the track). Although Super Mario Galaxy uses a single fixed level, we would argue this is unique and more challenging than typical benchmarks due to complex movement mechanics and high-resolution images. **Claim of SoTa for non-recurrent RL** - While non-recurrent RL may not be a category of interest, we chose this phrasing as it is indicative of walltime efficiency, since off-policy recurrent algorithms (particularly image-based ones) are very expensive [1]. We thought this better than stating “SoTa for walltime efficiency”, or “SoTa for Desktop PCs”, as these are somewhat tricky to formally verify (though we believe it is important to the field). As you mention, strong baselines heavily rely on expensive recurrent models. **Dreamer v3 and MEME baselines** - As you mention, we already included these baselines in Table 3; we did not include these results in Figures 1 and 2 as it would be misleading to compare BTR against algorithms which are inaccessible to our targeted use case (academics, hobbyists, etc). 
However, to be more transparent about BTR’s place among these algorithms, we would be happy to add Dreamer v3 and MEME to Figure 2 (the box plot), while also stating the walltime required for these algorithms to emphasize the difference. As for Figure 1, we can replace this with a walltime efficiency curve that includes Dreamer v3 as we feel this better aligns with the paper’s contributions (MEME, however, did not release sample efficiency curves to our knowledge). We didn’t include PQN in any main paper figures/tables as they used Life Information, which we discuss at length in Appendix I, and leads to completely non-comparable results and is not standard (Table 3 also uses Atari-60, not Atari-5, and we didn’t perform experiments on Atari-60 using life information). **Wii Game Results** – As you mention, the setting is somewhat informal, and thus we find it difficult to make direct comparisons to other benchmarks. We did not intend to use these environments as formal benchmarks (we already rely on Atari and Procgen for that purpose), but rather as a demonstration of what BTR is capable of. Furthermore, we have been unable to find any other work on Super Mario 64 that uses a reinforcement learning approach, and we would be interested if you could reference such work. As for Super Mario Kart, we would like to highlight the difference in complexity between it and Mario Kart Wii, as the latter includes far more complex graphics, tracks, and driving mechanics (e.g., mini-turbos, wheelies, tricks, etc.). **Missing References** – We are happy to add the references you suggested to our section on Procgen to provide a more comprehensive discussion. We also found some of the work you referenced [2] very interesting, and it may help improve BTR’s performance on the Procgen benchmark. **Comments and Suggestions** – We are happy to address the minor adjustments you suggested in points 1, 3, and 5 of this section. 
As for point 2, we also considered adding LayerNorm, though it may introduce inconsistencies across the ablations and other figures, and could complicate the narrative of the paper. Regarding point 4, we are currently running additional experiments for the VizDoom environment, which we will add to the appendix. **Questions** – We define consistent completion as achieving over a 90% success rate, and we will add this definition to the caption of Figure 3. We believe your remaining questions have been addressed throughout our response. [1] Kapturowski, Steven, et al. "Recurrent experience replay in distributed reinforcement learning." International conference on learning representations. 2018. [2] Jesson, Andrew, and Yiding Jiang. "Improving Generalization on the ProcGen Benchmark with Simple Architectural Changes and Scale." arXiv preprint arXiv:2410.10905 (2024). --- Rebuttal Comment 1.1: Comment: 1. **Wii Games**. I agree that the Wii games are more complex than most atari or procgen tasks, however, I am not sure to what extent. The environments’ more advanced graphics and physics certainly contribute to that, however RL agents tend to struggle more with things like sparse and long-horizon rewards. In Mario Kart, the agent can neglect 1) obtaining or using items, 2) other racer types, and 3) avoiding items being used on itself by others and still do a reasonable job beating its opponents. I doubt that the original high-resolution rendering in Super Mario Galaxy complicates the task. The agent is likely to learn an equally good policy from down-scaled inputs. It only needs to detect itself in relation to the platforms, walls, and obstacles. Nevertheless, I don't think this is a notable weakness of the paper, and the extra environments highlight BTR as a generally capable algorithm. If the authors wish to better demonstrate the complexity of these environments, I suggest evaluating other baselines on them. 2. **SoTa RL**. Thanks for clarifying this. 
I understand it is difficult to find or justify a sweet spot in the trade-off between performance and computational efficiency (as with any multi-objective problem with a trade-off). Although the RSSM in Dreamer and MEME adds a lot of overhead because it cannot be parallelized, I don’t think it’s the main or only reason slowing down these models. It’s rather a combination of large, overparameterized networks, BPTT over full sequences, and running thousands of imagined rollouts in the latent space per update. Nevertheless, I cannot suggest a better aspect to distinguish BTR from the rest. 3. **Stronger Baselines**. Figures 1 and 2 would certainly benefit from more recent baselines. The authors can themselves determine what is best to represent the results comparison, as long as it is reflective of stronger baselines, while highlighting the strengths of BTR in good nature. Regarding PQN, since adding life information to BTR seemed like an easy adaptation, and since PQN runs very fast, why didn’t the authors run PQN themselves without the life information and on 200M (adapted to their setting), instead of reporting the results from the paper? Due to a convincing rebuttal to the other reviews and mine, I have decided to increase the score. I believe BTR is a valuable contribution for low-budget setups, and that wallclock time is an important measure to enable a tight feedback loop for rapid experimentation with reasonable performance from a generally capable algorithm. The core weakness of the paper still, as also pointed out by other reviewers, is the omission of strong baselines from 1) recent literature targeting sample-efficiency (SR-SPR, EfficientZero, and BBF), and 2) Figures 1 and 2 without explanation. I suggest the authors incorporate these points in their final revision. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for their insightful discussion and raising of their score. 
We appreciate your suggestions and will consider these in future work.
MODULI: Unlocking Preference Generalization via Diffusion Models for Offline Multi-Objective Reinforcement Learning
Accept (poster)
Summary: This paper introduces MODULI (Multi-Objective Diffusion planner with sliding guidance), a novel algorithm for offline multi-objective reinforcement learning (MORL). MODULI employs a conditional diffusion model for representing the agent's policy. Besides MODULI, the paper also introduces two techniques for normalizing multi-objective returns that are used to condition the policy, and a "slider adapter" neural network used to make the diffusion model generalize to unseen OOD preferences.

---

### Post Rebuttal

I thank the authors once again for providing detailed answers to my questions. I increased my score since the authors addressed my raised concerns.

Claims And Evidence: The claims are evaluated empirically by testing the method on offline datasets with missing examples for some regions of preferences, or with a narrow distribution of solutions in the Pareto frontier. The empirical results seem to be in accordance with the claims. That is, 1) the proposed method achieves generally better metrics than the baselines, 2) the addition of the slider adapter network seems to indeed improve the generalization of the method, and 3) the normalization techniques introduced improve the method when compared to the global normalization baseline. However, in Figure 3 (bottom), the authors claim that "MODULI can significantly extend the Pareto front to both sides". Inspecting the plot, however, shows this is not the case: MODULI can reproduce the solutions already in the dataset on the sides, but it does not generalize to more extreme points. Although the results seem to validate the claim, the performance improvements compared to the MORvS(P) baseline are not that significant.

Methods And Evaluation Criteria: The method is compared with other recent offline algorithms in the D4MORL benchmark, a recent MuJoCo benchmark for offline MORL. They constitute appropriate baselines and benchmarks for evaluating the method.
Theoretical Claims: The paper does not introduce any theorems or theoretical results.

Experimental Designs Or Analyses: I checked the details of the experimental designs. One issue with the paper is that, for each experiment, the authors used only 3 random seeds to report the mean and standard deviation. The authors should justify why only 3 random seeds are enough.

Supplementary Material: The submission does not have supplementary material. I have checked the pseudocode and additional experiments in the Appendix.

Relation To Broader Scientific Literature: The paper tries to improve upon previous offline MORL approaches by employing different techniques that are related to diffusion models for sequential decision-making. Prior works have employed other supervised learning techniques to tackle this problem, and the paper tries to tackle the problem with a different set of techniques.

Essential References Not Discussed: The authors discussed all previous relevant papers in the field of offline MORL.

Other Strengths And Weaknesses: Below, I point out a few other weaknesses in the paper:

* The Introduction has a few technical terms that are not explained, making it confusing to understand the contributions. For instance, what is a “slider adapter”, “latent directions of preference”, or “refine guidance”? Please explain such terms in more detail.
* The paper has a few grammatical issues and sentences that are not flowing well. I suggest the authors review the text for such issues.
* Section 4.3 is currently very difficult to follow. I strongly suggest the authors provide intuition for Equation 11 and the associated variables. For instance, in the sentence “we can derive a direct fine-tuning scheme that is equivalent to modifying the noise prediction model.”, do you mean modifying the noise prediction model to achieve what goal? Also, the variable $\eta$ was not defined.
* The paper does not discuss its limitations in sufficient detail.
For instance, in the conclusion the authors could better state the necessary assumptions and when they are not valid.

Other Comments Or Suggestions:

* A few references are citing the ArXiv version of papers that have been published in peer-reviewed venues, e.g., “Alegre et al. Sample-efficient multi-objective learning via generalized policy improvement prioritization. arXiv preprint arXiv:2301.07784, 2023” has been published at AAMAS 2023.
* ”As a result, direct guidance using the target conditions of $y = [\omega_{target}, 1^n]$ may be unachievable for some preferences, leading to a performance drop.” Since this is a multi-objective setting, the target $1^n$ will always be unachievable. The vector $1^n$ is only achievable if all objectives are aligned/not conflicting.
* “This enables controlled continuous concept during generation” It is not clear what a “controlled continuous concept” is.
* Typo: ”and P demotes Pareto front” -> denotes

Questions For Authors:

* “MODULI also employs a loss-weight trick, setting a higher weight for the next state s1 of the trajectory $x0(\tau)$ to encourage more focus on it, closely related to action execution.” This sentence is not clear, please elaborate on how this is implemented.
* It is not clear how Equation 7 is implemented. In particular, how is each $(\omega, g)$ pair constructed? Just before the Equation, $D_P$ is defined as a set of trajectories, but then the method is sampling pairs of weights and returns. How do you ensure that each $\hat{g}$ in $D_P$ is actually the maximum return for each $\omega$?
* It is not clear how the Return Deviation (RD) metric is employed for the baseline algorithms. A different generalized return predictor is learned for each algorithm? When algorithm A has a better RD metric than an algorithm B, why does it imply better generalization?
* “However, there are limitations, such as experiments conducted in structured state space and continuous actions.” What are structured state space and continuous actions?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:

## Q1: Comparison with the performance of MORvS

MODULI consistently outperforms the strong MORvS baseline, leading in 20/24 metrics across 12 Complete datasets (Tab. 4) and surpassing the baseline in 59/72 metrics on Shattered & Narrow datasets (Tab. 5 & 6).

## Q2: Why use 3 seeds?

We collect 501 sample points for each seed to calculate the metrics, fully covering the preference space, which results in small variances. At the same time, we aligned with the setting of the D4MORL benchmark, which also uses 3 seeds. We conducted a large number of performance and ablation experiments (2 levels * 3 types * 6 tasks), demonstrating the reliability of MODULI.

## Q3: Significantly extend the Pareto front

Due to task constraints, the Pareto front in Fig. 3 can only expand by a small margin. Compared to other methods, MODULI demonstrates stronger generalization and an "expansion" phenomenon. To avoid misunderstanding, we will replace "significantly" with "slightly" in the revision. Additionally, extensive experiments and visualization (Fig. 7) prove that MODULI can enhance generalization.

## Q4: Confusing expressions, incomplete issues and typos

We apologise for the unclear descriptions in the Introduction. We will briefly introduce these concepts in the Introduction and explain them in detail in the Method section. We have also carefully reviewed your suggestions. Thank you for pointing them out! We will thoroughly review the paper, check for grammatical issues, and verify the references in the revision.

## Q5: Intuition for Equation 11

We trained a diffusion model $p$ to generate trajectories corresponding to ID preferences. Then, we attempt to learn the pattern of change (for example, when the preference shifts from energy efficiency to high speed, the amplitude of the Swimmer's movements gradually increases). Therefore, our goal is to learn an adapter $p^*$, which applies this pattern to OOD preferences to achieve better generalization. In Equ.
11, the exponential term represents increasing the likelihood of preference $c+$ and decreasing the likelihood of preference $c-$. $\eta$ represents the guidance strength in classifier-free guidance for the diffusion model. Based on Equ. 11, we can use an adapter-style method to adjust the original noise prediction model ($p$). Now, the noise prediction combines the noise from the original model and the adapter. We apologize again for the confusing expressions and undefined symbols. We will provide definitions and include thorough intuitive explanations.

## Q6: Detailed limitations

We assume a linear preference space, such as $w_1 + w_2 = 1$, which is the standard practice in most MORL papers. When the linear assumption does not hold, our core method remains applicable, but additional representations are required. We are very sorry for the confusion. We will change “structured state space” to “low-dimensional state space” (distinguished from image input). “Continuous action” should be “continuous action space”, distinguished from a discrete action space. We believe that MODULI can support tasks with these new modalities with minimal additional adaptation (e.g., an encoder). We leave this for future work.

## Q7: Loss-weight trick

Sorry for the confusion; we will include detailed implementations in the appendix. The diffusion planner generates a state sequence $x^0(\tau) = \left[ s_0, \cdots, s_{H-1} \right]$. Since closed-loop control is used, only $s_0$ and $s_1$ are most relevant to the current decision. When calculating the loss, the weight of $s_1$ is increased from 1 to 10, as it is considered more important.

## Q8: Compute Equation 7

In the D4MORL dataset, each trajectory contains a target preference $\omega$, and $g$ is the return of the trajectory. Since $D_p$ consists of trajectories corresponding to Pareto front points, it represents the optimal policy level in the dataset.
Therefore, for each $(g, \omega)$ sampled from $D_p$, $g$ is the maximum return corresponding to $\omega$. Then, a predictor $R(\omega)$ for the maximum possible return can be obtained with Equ. 7. It can be seen as a fit to the Pareto front in the dataset.

## Q9: RD metric

**Please note that we trained a predictor for each task (e.g., ant-expert) to evaluate RD, rather than one for each algorithm**. The detailed process: we fixed the Ant-**expert-complete** dataset to train $R_\psi$. Since expert and complete trajectories were used, we can estimate the optimal $R$ for each sampled preference $\omega$; this ground-truth information is only used for evaluation. Then, for **Ant-expert-{complete/narrow/shattered}**, for the trajectories sampled from OOD preference points, RD is defined as the difference between the ground-truth return from $R_\psi$ and the actually obtained return. The results are averaged over all OOD preference points. A smaller RD indicates that the algorithm can better align with the target preference when the target preference is OOD, thus demonstrating better generalization ability.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for their response. The point regarding Q7: Loss-weight trick is still unclear to me. Can the authors mathematically define such loss and its weighting terms? Moreover, the authors should discuss the limitations regarding the fact that their method employs closed-loop control, in contrast to standard TD-learning RL methods.

---

Reply to Comment 1.1.1:

Comment: Thank you for your fast responses and efforts to improve the paper! Defining the current state as $s_t$: when calculating the loss function, the loss-weight trick assigns a higher weight to the state prediction at the next step $s_{t+1}$. This is because only the predicted next state $s_{t+1}$ and the current state $s_t$ directly participate in the inverse dynamics model $a_t=h(s_t, s_{t+1})$.
For clarity, we provide pseudocode below for the loss calculation of the diffusion model incorporating the loss-weight trick and explain it in detail. Thank you for pointing this out; we will include a detailed explanation in the appendix of the revised version.

```python
...

# Create a loss_weight and assign a higher loss weight to the next observation (o_1).
next_obs_loss_weight = 10
loss_weight = torch.ones((batch_size, horizon, obs_dim))
loss_weight[:, 1, :] = next_obs_loss_weight

def diffusion_loss_pseudocode(x0, condition):
    """
    Pseudocode function to demonstrate the loss function in diffusion models

    Variable descriptions:
        x0: Original noise-free data - [batch_size, horizon, obs_dim]
        condition: Conditional information - [batch_size, condition_dim]
        xt: Noisy data after noise addition - same shape as x0
        t: Time points in the diffusion process step - [batch_size]
        eps: Noise added to x0 - same shape as x0
        loss_weight: Loss weight coefficient - scalar or tensor matching x0's shape
    """
    # Step 1: Add noise to the original data
    xt, t, eps = add_noise(x0)  # Generate noisy xt from x0

    # Step 2: Process conditional information
    condition = condition_encoder(condition)  # Encode the original condition information

    # Step 3: Predict noise through the diffusion model
    predicted_eps = diffusion_model(xt, t, condition)  # Model predicts the added noise

    # Step 4: Calculate mean squared error
    loss = (predicted_eps - eps) ** 2  # Mean squared error between predicted noise and actual noise

    # Step 5: Apply loss weights
    # We assign a higher weight to the prediction of s_1
    loss = loss * loss_weight

    # Step 6: Calculate average loss
    final_loss = loss.mean()  # Calculate the average loss

    return final_loss
```

Regarding the comparison with TD-learning: after a careful review, **we regret to point out that a clerical error occurred in our response to Q7: Loss-weight trick.** We mistakenly used the term **"closed-loop control"** to describe the decision process of MODULI.
Below, we describe our decision-making process in detail and compare it with TD-learning:

**Ours**: Given the current state $s_t$, we use a diffusion model to generate the trajectory $x^0(\tau) = \left[ s_t, \cdots, s_{t+H-1} \right]$. Using $s_t$ and $s_{t+1}$, we calculate the action $a_t=h(s_t, s_{t+1})$ that needs to be executed. After executing $a_t$, the next state $s_{t+1}$ is obtained, and the above process is repeated. The diffusion model does not adjust based on environmental feedback when generating trajectories, so it operates as **an open-loop control system**.

**TD-Learning Policy**: Given the current state $s_t$, the policy $\pi(a_t|s_t)$ directly outputs the action $a_t$, which is then executed to obtain $s_{t+1}$, and the process is repeated.

Therefore, **the environmental information used during the decision-making phase is completely consistent** between the two approaches. However, from the perspective of sequence modeling, planning by generating trajectories and then making decisions results in a lower decision frequency compared to a TD-learning policy. We consider this a limitation and will incorporate this discussion into the revised version. Thank you for your suggestion! In the MODULI implementation, we made a simple attempt at this: we used a smaller model size and fewer sampling steps, achieving a decision frequency of 10-20 Hz while maintaining high performance. Further exploration of decision frequency is left for future work.

---

We hope our replies have addressed your concerns. We are always willing to answer any of your questions about our work, and we look forward to more inspiring discussions.
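As an illustrative summary of the open-loop, replan-every-step decision process described in the reply above, the loop can be sketched in a few lines of Python. All function names here (`plan`, `inverse_dynamics`, `env_step`) are placeholder stand-ins for illustration, not MODULI's actual API:

```python
def open_loop_control(s0, plan, inverse_dynamics, env_step, n_steps):
    """Receding-horizon decision loop used by diffusion planners.

    At every step: generate a state sequence [s_t, ..., s_{t+H-1}] from
    the current state, recover the action from the first predicted
    transition via the inverse dynamics model a_t = h(s_t, s_{t+1}),
    execute it, observe the true next state, and replan.
    """
    s, actions = s0, []
    for _ in range(n_steps):
        traj = plan(s)                    # diffusion model: [s_t, ..., s_{t+H-1}]
        a = inverse_dynamics(s, traj[1])  # only s_t and s_{t+1} drive the action
        s = env_step(s, a)                # environment feedback is used only for replanning
        actions.append(a)
    return actions, s
```

With toy stand-ins such as `plan = lambda s: [s + i for i in range(4)]`, `inverse_dynamics = lambda s, s1: s1 - s`, and `env_step = lambda s, a: s + a`, three steps from `s0 = 0` produce the action sequence `[1, 1, 1]`.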
Summary: This work proposes MODULI (Multi Objective DiffUsion planner with sLIding guidance), which employs a preference-conditioned diffusion model as a planner to generate trajectories that align with various preferences and derive actions for decision making. MODULI also introduces two techniques, 1) a new return normalization method and 2) a slider adapter, to achieve better generation for both ID and OOD preferences. Extensive experiments on the D4MORL benchmark demonstrate the superiority of MODULI.

## Update after rebuttal

The authors' responses addressed most of my concerns, so I've raised my score from 3 to 4.

Claims And Evidence: Yes. The article clearly reflects the main arguments and the experiments can support the authors' claims.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: There are no theoretical claims in this paper.

Experimental Designs Or Analyses: Yes. The experimental designs are reasonable and can support the authors' claims.

Supplementary Material: Yes. All the parts.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

#### Strengths

1. The article is well-written, with clear expression and logic, effectively reflecting the main argument.
2. The experiments are very comprehensive. The authors validate the effectiveness of their method across a series of tasks, including 'complete' datasets for expressiveness evaluation and 'shattered' and 'narrow' datasets for generalization evaluation.

#### Weaknesses

1. I think the author should clearly specify the parameter sizes of this algorithm and the other baseline algorithms for comparison, and try to keep the number of parameters consistent across different algorithms as much as possible. This is because diffusion-based methods may benefit from a larger number of parameters, and doing so will make the results more convincing.
2.
I think the innovation of this work may be somewhat insufficient, because the main diffusion-based planner is basically equivalent to the Decision Diffuser, and the proposed sliding guidance also draws on the work of predecessors. However, considering that these components are quite practical and have effectively improved the performance of the algorithm, I think it still meets the threshold for acceptance.

Other Comments Or Suggestions: No.

Questions For Authors: No.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and useful feedback; please see the following for our response.

### Q1: Comparison of Parameters for Different Baseline Algorithms

Thank you for your suggestion! We would like to clarify that in our experiments, we standardized the parameter size of all baseline algorithms to the same magnitude as much as possible. A complete list is provided in the table below:

| **Algo.** | **Model Size** |
| --- | --- |
| MORvS | 8.55M |
| MOBC | 8.83M |
| MODT | 9.75M |
| MODULI Diffusion Model | 11.14M |
| MODULI Inverse Dynamic Model | 1.11M |

We agree that this comparison is valuable, and we will include it in the revised version.

### Q2: Novelty of MODULI

We would like to clarify that MODULI is not merely an application of diffusion models in offline RL. The novelty of MODULI lies in the following three aspects:

- **A new research problem**: When only incomplete offline data (narrow/shattered) is available, generalization to OOD preferences is crucial for multi-objective RL. For the first time, **we model offline MORL from the perspective of generative models** and introduce diffusion models into offline MORL. We found that their expressive and generalization capabilities are highly effective for MORL problems. Relying solely on the Decision Diffuser [1] cannot achieve the performance of MODULI; the following two techniques are important.
- **Improved return normalization methods**: Traditional return normalization, guided only by the maximum reward, has critical flaws in scenarios with conflicting multiple objectives. We designed two **return normalization methods (NPN/PPN) tailored for multi-objective problems**, which are crucial for diffusion models in MORL.
- **Better OOD Preference Generalization:** Through the study of multi-objective task datasets, we found that as preferences change, policies in multi-objective tasks show certain changing trends, suggesting a potential direction for addressing the challenge of policy generalization in multi-objective optimization tasks. Inspired by prior work, we introduced a sliding adapter to enhance generalization capability, achieving significant results, although many implementation details differ from [2], including the fine-tuning method, the calculation of unit strength, and the final integration of the outputs from the two models. We focus on stronger generalization ability rather than precise conceptual control over new attributes.

[1] Ajay A et al. Is conditional generative modeling all you need for decision-making? ICLR 2023.

[2] Gandikota et al. Concept Sliders: LoRA Adaptors for Precise Control in Diffusion Models. ECCV 2024.
Summary: The paper studies the problem of offline multi-objective RL with preferences over the objectives. The key contribution of the paper is to introduce a new preference-conditioned diffusion model to generate trajectories aligned with specific preferences and derive actions accordingly. The generation process is enhanced with return normalization (across multiple objectives with different value scales) and slide guidance (to learn preference directions and handle OOD preferences). Experiments are conducted using the D4MORL benchmark, showing the proposed method outperforms baselines such as BC, MORvS, MODT, and MOCQL.

Claims And Evidence: The advantages of the proposed method are well supported by a comprehensive experimental analysis.

Methods And Evaluation Criteria:

--- The D4MORL dataset used in the paper is a benchmark dataset for multi-objective RL, which contains different types of data quality, including expert and amateur datasets.
--- The evaluation focuses on various metrics including hypervolume, sparsity, and return deviation, which are commonly used metrics in MORL.
--- The idea of applying diffusion models with two enhanced techniques (return normalization and slide guidance) is well justified.

Theoretical Claims: The proof for sliding guidance is provided in the appendix and looks correct.

Experimental Designs Or Analyses: The experimental design of the paper is quite standard in MORL, with commonly used evaluation criteria, benchmark datasets, and state-of-the-art baselines.

Supplementary Material: I only read the proof of the sliding guidance.

Relation To Broader Scientific Literature: Findings of the paper are related to the RL research community.

Essential References Not Discussed: I'm not aware of any missing essential references.

Other Strengths And Weaknesses: Diffusion models have been used extensively in offline reinforcement learning, which is not something new.
However, the strength of the paper lies in the performance enhancement obtained through return normalization and slide guidance, which enable the generation process to align with preferences over multiple objectives.

Other Comments Or Suggestions: I don't have other comments or suggestions.

Questions For Authors:

--- How does the level of conflict between multiple objectives impact the performance of your proposed method?
--- Most of the evaluations are for two-objective RL tasks. Can you comment on how your method would perform when the number of objectives increases?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and useful feedback; please see the following for our response.

### Q1: Novelty of MODULI

We would like to clarify that MODULI is not merely an application of diffusion models in offline RL. The novelty of MODULI lies in the following three aspects:

- **A new research problem**: When only incomplete offline data (narrow/shattered) is available, generalization to OOD preferences is crucial for multi-objective RL. For the first time, **we model offline MORL from the perspective of generative models** and introduce diffusion models into offline MORL. We found that their expressive and generalization capabilities are highly effective for MORL problems.
- **Improved return normalization methods**: Traditional return normalization, guided only by the maximum reward, has critical flaws in scenarios with conflicting multiple objectives. We designed two **return normalization methods (NPN/PPN) tailored for multi-objective problems**, which are crucial for diffusion models in MORL.
- **Better OOD Preference Generalization:** Through the study of multi-objective task datasets, we found that as preferences change, policies in multi-objective tasks show certain changing trends, suggesting a potential direction for addressing the challenge of policy generalization in multi-objective optimization tasks. Inspired by prior work, we introduced a sliding adapter to enhance generalization capability, achieving significant results, although many implementation details differ from [1], including the fine-tuning method, the calculation of unit strength, and the final integration of the outputs from the two models. We focus on stronger generalization ability rather than precise conceptual control over new attributes.
### Q2: The impact of the degree of conflict between multiple objectives on MODULI's performance

We conducted a comprehensive evaluation on datasets of **varying quality (expert, amateur)** and datasets with **different levels of OOD (complete, narrow, shattered)**, all of which have varying degrees of conflicting objectives, such as speed and energy. As shown in Fig. 3 (page 7), we were surprised to find that when the dataset is incomplete, baseline algorithms like BC fail completely under OOD preferences due to conflicts, and RvS tends to fit a single solution regardless of different preferences. In contrast, MODULI adapts to varying levels of conflict and dataset quality, demonstrating excellent performance.

### Q3: Can you comment on how your method would perform when the number of objectives increases?

- Our experiments include the **hopper-3obj** dataset, which involves 3 conflicting objectives. It is worth noting that as the number of objectives increases, the policy for a given preference becomes more complex. MODULI demonstrates outstanding performance in the three-objective task, achieving over a **20% improvement** compared to the strongest baseline MORvS **(Table 1)**.
- From the perspective of method design, MODULI does not assume any specific number of objectives. The proposed new normalization methods and sliding adapter can handle scenarios with more objectives.

[1] Gandikota et al. Concept Sliders: LoRA Adaptors for Precise Control in Diffusion Models. ECCV 2024.
BSO: Binary Spiking Online Optimization Algorithm
Accept (poster)
Summary: The paper introduces the Binary Spiking Online Optimization (BSO) algorithm, an approach to training BSNNs that reduces training memory overhead while maintaining computational efficiency. The basic BSO eliminates latent weight storage and uses momentum-based gradient accumulation to generate weight-flipping signals, while the temporal-aware T-BSO variant captures gradient information across time steps with an adaptive threshold mechanism. Extensive experiments validate the effectiveness of BSO and T-BSO.

Claims And Evidence: Yes, most of the claims are supported by appropriate evidence.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the BSO paper are generally well-aligned with the problem they aim to address.

Theoretical Claims: I examined the theoretical claims and proofs presented in the paper, focusing on the convergence guarantees for both the BSO and T-BSO algorithms. The theoretical analysis is mathematically sound.

Experimental Designs Or Analyses: I examined the experimental designs and analyses presented in the paper, focusing on the validation of the BSO and T-BSO algorithms across different datasets and comparisons with baseline methods. (1) The authors compare BSO and T-BSO against both online training methods (OTTT, NDOT, SLTT) and BSNN methods (Q-SNN, CBP). (2) The training paradigms are analyzed using the FF ratio and C2I ratio metrics to investigate optimization stability.

Supplementary Material: Yes, I reviewed the supplementary material included in the paper. This material provides essential context for understanding the theoretical guarantees and for reproducibility of the experiments.

Relation To Broader Scientific Literature: The BSO paper makes contributions to several domains, such as SNNs, online learning, and efficient computing. By integrating these research domains, BSO addresses the contradiction between BSNNs' inherent efficiency and their memory-intensive training processes.
Essential References Not Discussed: No.

Other Strengths And Weaknesses:

Strengths:
1. By eliminating latent weights and making memory requirements independent of time steps, the BSO and T-BSO provide a meaningful solution for resource-constrained applications.
2. The theoretical analysis is comprehensive. The formal regret bounds provide mathematical guarantees for convergence.
3. The ablation studies illustrate the effectiveness of the methods, particularly the role of momentum in stabilizing training for binary networks.

Weaknesses & Questions:
1. The paper lacks analysis of how the proposed methods would perform with more complex network architectures beyond standard VGG and ResNet models, such as transformer-based architectures or recurrent networks.
2. Do BSO and T-BSO lead to reductions in operations (OPs/SOPs) and energy consumption? These metrics are crucial for edge deployment scenarios.
3. The authors only validated the proposed methods on simple image classification tasks. Can they be extended to other applications?
4. There's no analysis of BSO's convergence behavior under noisy data, which would be valuable for real-world applications where data quality varies.
5. There is significant room for improvement in the writing of the paper.

Other Comments Or Suggestions: No.

Questions For Authors: See Weaknesses & Questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:

**Response to Q1: "The paper lacks analysis of how the proposed methods would perform with more complex network architectures beyond standard VGG and ResNet models, such as transformer-based architectures or recurrent networks."**

We agree that evaluating BSO and T-BSO on more advanced architectures would strengthen our claims. While we focus on VGG and ResNet for their interpretability and popularity in the BSNN literature, our method is **model-agnostic**. This makes it naturally extendable to hierarchical and attention-based architectures such as spiking Transformers or recurrent SNNs. We have discussed the Spike-Former results in our response to Reviewer Tbrf (Response to W2) and kindly refer you to that section for transformer-based results. In a similar vein, we explored the FeedBack (recurrent) SNN [1] and employed VGG11 with T-BSO, training for 300 epochs on CIFAR-10, achieving an accuracy of 92.43%. This demonstrates the versatility of our method and its ability to extend naturally to **different network architectures**.

**Response to Q2: "Do BSO and T-BSO lead to reductions in operations (OPs/SOPs) and energy consumption?"**

As mentioned in our response to Reviewer Q2xs (Response to W4), we have explained that BSO and T-BSO result in reduced operations and energy consumption due to their lower firing rates. We kindly refer you to that section for a detailed explanation.

**Response to Q3: "The authors only validated the proposed methods on simple image classification tasks. Can they be extended to other applications?"**

Our initial experiments focused on classification as a standard benchmark for spiking models. However, the proposed method is task-agnostic and can be integrated into other applications. As stated in our response to Reviewer Q2xs (Response to W2), we conducted experiments on the GSC dataset, which demonstrates the effectiveness of our method across different tasks.
**Response to Q4:"There's no analysis of BSO's convergence behavior under noisy data, which would be valuable for real-world applications where data quality varies."** This is an important consideration for real-world deployment. The update mechanism of BSO, which relies on the momentum of the gradient exceeding a threshold to induce a sign flip, inherently offers robustness against noise in the data. The thresholding operation acts as a filtering mechanism, allowing only **significant gradient information** to influence the weight updates. As a result, smaller fluctuations in the gradient caused by noise are less likely to trigger updates, making the algorithm less sensitive to noisy data. The training methods of BSO and T-BSO are resilient not only to gradient noise fluctuations but also to noise in the input data. To validate this, we conduct experiments with the VGG-11 architecture on CIFAR-10 and CIFAR-100, where three different types of Gaussian noise are added to the input images during testing. The performance of BSO and T-BSO is then compared in terms of accuracy. The experimental results demonstrate that both BSO and T-BSO exhibit a certain level of noise robustness, with T-BSO outperforming BSO under identical noise conditions.

| Dataset | Algorithm | Base | $(\mu=0,\sigma=0.1)$ | $(\mu=0,\sigma=0.4)$ | $(\mu=0.3,\sigma=0.1)$ |
|----------|-------|-------|----------------------|----------------------|------------------------|
| **CIFAR-100** | BSO | 68.73 | 65.82 (-2.91) | 46.42 (-22.31) | 64.95 (-3.78) |
| | T-BSO | 74.17 | 70.34 (-3.83) | 51.21 (-22.96) | 69.45 (-4.72) |
| **CIFAR-10** | BSO | 93.30 | 91.02 (-2.28) | 80.61 (-12.69) | 92.75 (-0.55) |
| | T-BSO | 94.32 | 93.01 (-1.31) | 85.47 (-8.85) | 93.12 (-1.20) |

**Response to Q5:"There is significant room for improvement in the writing of the paper."** Thank you for your candid feedback. We sincerely apologize for our poor writing.
We will carefully revise the manuscript to improve clarity, coherence, and presentation. This includes refining the descriptions of our methods and improving figure quality throughout the paper. [1] Xiao, Mingqing, et al. "Training feedback spiking neural networks by implicit differentiation on the equilibrium state." Advances in neural information processing systems 34 (2021): 14516-14528.
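The test-time corruption protocol behind the robustness table can be reproduced in a few lines; a minimal sketch, assuming images normalized to [0, 1] (the clipping range, seed, and function name are illustrative, not stated in the rebuttal):

```python
import numpy as np

def add_gaussian_noise(x, mu, sigma, seed=0):
    # Corrupt test inputs with Gaussian noise N(mu, sigma), matching the
    # (mu, sigma) settings of the robustness table. Clipping to [0, 1]
    # is an assumption about the image normalization.
    rng = np.random.default_rng(seed)
    return np.clip(x + rng.normal(mu, sigma, x.shape), 0.0, 1.0)
```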
Summary: This paper introduces the Binary Spiking Online Optimization (BSO) algorithm, designed to reduce memory overhead in training Binary Spiking Neural Networks (BSNNs) by eliminating latent weight storage and making memory requirements time-independent. It also presents T-BSO, a temporal-aware variant that adjusts thresholds dynamically using gradient information across time steps for improved optimization. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, I have checked the proofs and theorems. Experimental Designs Or Analyses: Yes Supplementary Material: Yes; the supplementary material introduces indicators such as the FF ratio and the C2I ratio, along with proofs. Relation To Broader Scientific Literature: This article proposes the BSO algorithm and its variant T-BSO, further promoting edge intelligent computing. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: BSO reduces memory overhead by eliminating latent weight storage, making it ideal for resource-constrained environments; experimental results show better accuracy and efficiency than existing methods. Weaknesses: T-BSO introduces more computational overhead due to additional gradient computations. The algorithms require careful tuning of hyperparameters, which can be time-consuming. Other Comments Or Suggestions: No Questions For Authors: 1. The author mentioned that BSO is based on a direct learning method. Why not consider the conversion-based method, but choose the direct training method? 2. The author points out in the introduction, 'leverage the substantial efficiency advantages in BSNN training'. Specifically, how does the BSO algorithm demonstrate its advantages in BSNN? 3. Does the introduction of adaptive thresholds in the T-BSO version of BSO affect the neural dynamics of the online learning framework? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Response to W1:"T-BSO introduces more computational overhead due to additional gradient computations."** As noted in our Response to Reviewer Q2xs (Response to W1), we have discussed that T-BSO incurs additional computational overhead due to the necessity for extra gradient computations. We kindly direct you to that section for a comprehensive explanation. **Response to W2:"The algorithms require careful tuning of hyperparameters, which can be time-consuming."** Regarding hyperparameter tuning, we agree that the method introduces several new parameters (e.g., momentum decay factor, threshold). That said, we found the method to be **relatively robust** to a wide range of settings, and we provide the default configurations in the supplementary material to facilitate reproducibility. We provide an ablation experiment on hyperparameters in the table below, showing that the convergence of T-BSO and BSO is not sensitive to hyperparameters. The base setting is $(\gamma = 1 \times 10^{-6}, \beta_1 = 1 \times 10^{-3}, \beta_2 = 1 \times 10^{-5})$. As observed from the table, the final convergence result is not significantly affected by variations in hyperparameters, provided they remain within a certain range.

| Model | Base | $\gamma$ | $\beta_1$ | $\beta_2$ |
|:-----------:|:-------:|:--------------:|:----------------------:|:------------------------:|
| T-BSO | 75.72 | `75.68(5e-7)`, `74.68(2e-6)` | `74.95(5e-4)`, `75.36(2e-3)` | `75.10(5e-6)`, `75.52(2e-5)` |

**Response to Q1:"The author mentioned that BSO is based on a direct learning method. Why not consider the conversion-based method, but choose the direct training method?"** We opt for direct training over ANN-to-SNN conversion primarily due to its superior compatibility with online training and streaming data scenarios.
While conversion-based methods are effective in static image classification tasks, they generally lack the ability to **learn continuously**, rendering them less suitable for online and event-driven environments. In contrast, direct training enables frame-by-frame updates and localized gradient propagation. **Response to Q2:"The author points out in the introduction, 'leverage the substantial efficiency advantages in BSNN training'. Specifically, how does the BSO algorithm demonstrate its advantages in BSNN?"** The BSO algorithm directly associates the update of binary weights with the sign flip induced when the momentum of the gradient exceeds a threshold. This eliminates the need for latent weights, and this flip mechanism is theoretically more energy-efficient. **Response to Q3:"Does the introduction of adaptive thresholds in the T-BSO version of BSO affect the neural dynamics of the online learning framework?"** We designed the adaptive threshold mechanism in T-BSO to be biologically plausible and hardware-friendly, updating thresholds based on the momentum of temporal gradient. While it alters the flip condition of weights, it does not fundamentally change the underlying online training framework or disrupt spiking dynamics. Instead, it allows the network to **dynamically adjust** its sensitivity to incoming signals, enhancing robustness and efficiency. We also observed that adaptive thresholds help stabilize training and improve generalization across varying input distributions. --- Rebuttal Comment 1.1: Comment: Thank you for the response. The authors have addressed my concerns well.
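The flip-based update described in the response to Q2 can be sketched as follows; this is an illustrative guess at the mechanism (the decay rate, threshold value, and post-flip momentum reset are our assumptions, not the paper's exact rule):

```python
import numpy as np

def bso_step(w, grad, m, beta=0.999, thresh=1e-5):
    # w: binary weights in {-1, +1}; m: gradient momentum (same shape).
    # No latent real-valued weights are kept: a weight is flipped only
    # when its accumulated gradient momentum exceeds a threshold AND
    # points against the current sign (i.e. descent would cross zero).
    m = beta * m + (1 - beta) * grad
    flip = (np.abs(m) > thresh) & (np.sign(m) == w)
    w = np.where(flip, -w, w)
    m = np.where(flip, 0.0, m)  # resetting after a flip is an illustrative choice
    return w, m
```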
Summary: The paper introduces Binary Spiking Online Optimization (BSO), a novel training algorithm for Binary Spiking Neural Networks (BSNNs) that significantly reduces memory overhead during training. The key innovations are two-fold: (1) making memory requirements independent of time steps, and (2) eliminating latent weight storage by directly updating binary weights through flip signals triggered when gradient momentum exceeds a threshold. The authors also propose T-BSO, a temporal-aware variant that captures gradient information across time steps for adaptive threshold adjustment. Through theoretical analysis and experiments on datasets like CIFAR-10, CIFAR-100, ImageNet, and DVS-CIFAR10, the authors show that BSO and T-BSO achieve performance comparable to existing methods while substantially reducing training memory costs. Claims And Evidence: The claims are supported by comparative experiments, theoretical convergence proofs, and ablation studies. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are sound, with the authors using various benchmark datasets (CIFAR-10, CIFAR-100, ImageNet, DVS-CIFAR10) that test their BSO and T-BSO algorithms. Theoretical Claims: I carefully reviewed the convergence proofs in Appendix A (pages 11-12). Experimental Designs Or Analyses: I carefully examined the experimental design and found it to be generally sound. Supplementary Material: Yes, I reviewed the supplementary material in the paper. Relation To Broader Scientific Literature: The paper makes contributions to binary spiking neural network research by proposing an online training algorithm that reduces memory overhead. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper presents a novel approach to training BSNNs by integrating online training with a unique gradient momentum-based weight-flipping mechanism, which is a promising solution to existing memory and computational constraints. 2.
The authors provide comprehensive theoretical analysis, including detailed convergence proofs for their proposed BSO and T-BSO algorithms. Weaknesses & Questions: 1. The T-BSO may introduce some additional computational complexity compared to the base BSO method. 2. While promising, the results are primarily demonstrated on image classification tasks, and the method's performance on other domains remains to be explored. 3. The proposed method introduces additional computational complexity through momentum-based weight flipping and adaptive thresholding. 4. As an efficiency-focused work, why did the authors not compare efficiency metrics such as training or inference time, OPs, SOPs, or energy consumption? [1] Bitsnns: Revisiting energy-efficient spiking neural networks [2] Towards energy efficient spiking neural networks: An unstructured pruning framework [3] Towards Accurate Binary Spiking Neural Networks: Learning with Adaptive Gradient Modulation Mechanism Other Comments Or Suggestions: It is recommended that the authors conduct more thorough experiments to verify the effectiveness and efficiency of the method from multiple perspectives. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Response to W1. "The T-BSO may introduce some additional computational complexity compared to the base BSO method."** Thank you for your observation. It is true that T-BSO incorporates a lightweight second-order temporal gradient mechanism on top of the basic BSO method, potentially increasing computational overhead. Nonetheless, since this computation is averaged over the time dimension, it substantially enhances the network's **fitting capability** without significantly affecting the training time and memory usage. During inference, the temporal-aware threshold remains constant, thereby incurring only minimal computational overhead. **Response to W2. "While promising, the results are primarily demonstrated on image classification tasks, and the method's performance on other domains remains to be explored."** In this study, we primarily focus on image classification tasks to offer a clear and controlled evaluation of the proposed method. However, we acknowledge the importance of exploring the applicability of our approach to other domains, such as **speech recognition**. To this end, we train a ResNet19 model with T-BSO on the Google Speech Command (GSC) dataset [1] for speech recognition. The ResNet19 model with T-BSO was trained for 300 epochs on GSC's 35 categories, achieving an accuracy of 96.12%. This result demonstrates the potential of T-BSO in diverse application scenarios. **Response to W3. "The proposed method introduces additional computational complexity through momentum-based weight flipping and adaptive thresholding."** Thank you for pointing this out. In the proposed method, additional components such as momentum-based weight flipping and adaptive thresholds are introduced. Notably, the **latent weight** present in existing BSNN works is eliminated, and the subtraction operation is revised to flip signals, thereby enhancing the method’s lightweight and computationally efficient nature. 
The momentum mechanism is based solely on simple element-wise operations, which impose minimal computational overhead, while the adaptive threshold does not necessitate extra backpropagation. Crucially, the increased complexity yields significant advantages in terms of performance enhancement, resulting in a favorable trade-off. Moreover, these additional computational operations are not executed during inference. **Response to W4. "As an efficiency-focused work, why did the authors not compare efficiency metrics such as training or inference time, OPs, SOPs, or energy consumption?"** Thank you for your insightful comment. We fully agree that evaluating efficiency using metrics such as training/inference time, OPs, SOPs, or energy consumption would provide a more comprehensive understanding of the proposed method's advantages. The table below provides a comparative analysis of the proposed method against Q-SNN in terms of SOPs, OPs, and energy on the CIFAR-10 and CIFAR-100 datasets. We acknowledge that explicitly considering energy consumption and operation count is crucial for efficiency-oriented research, particularly in the context of spiking neural networks. Experimental results demonstrate that our BSO algorithm outperforms Q-SNN in both SOPs and energy, owing to its lower spiking firing rate.
| Dataset | Algorithm | SOP | OP | Energy |
|------------|-------|------------------------------------------|-----------------------------------------|---------------------------------------------|
| CIFAR-10 | BSO | `0.93M(T=2)`, `1.51M(T=4)`, `1.69M(T=6)` | `1.78M` | `9.01uJ(T=2)`, `9.54uJ(T=4)`, `9.71uJ(T=6)` |
| CIFAR-10 | Q-SNN | `1.70M(T=2)`, `2.80M(T=4)`, `3.88M(T=6)` | `1.78M` | `9.72uJ(T=2)`, `10.71uJ(T=4)`, `11.69uJ(T=6)` |
| CIFAR-100 | BSO | `0.96M(T=2)`, `1.78M(T=4)`, `2.19M(T=6)` | `1.87M` | `9.48uJ(T=2)`, `10.21uJ(T=4)`, `10.59uJ(T=6)` |
| CIFAR-100 | Q-SNN | `1.70M(T=2)`, `2.78M(T=4)`, `3.86M(T=6)` | `1.87M` | `10.13uJ(T=2)`, `11.10uJ(T=4)`, `12.08uJ(T=6)` |

[1] Warden, Pete. "Speech commands: A dataset for limited-vocabulary speech recognition." arXiv preprint arXiv:1804.03209 (2018).
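As a sanity check, the energy numbers in the table above are consistent with the 0.9 pJ per accumulate (SOP) and 4.6 pJ per MAC (OP) figures commonly used in the SNN literature; whether the rebuttal used exactly these constants is our assumption:

```python
def snn_energy_uj(sops, ops, e_ac_pj=0.9, e_mac_pj=4.6):
    # Energy in microjoules from raw operation counts, with per-op costs
    # in picojoules. 0.9 pJ/AC and 4.6 pJ/MAC (45nm) is a common
    # convention in SNN papers; the rebuttal does not state its constants.
    return (e_ac_pj * sops + e_mac_pj * ops) * 1e-6

# CIFAR-10, BSO, T=2: 0.93M SOPs and 1.78M OPs give about 9.0 uJ,
# close to the 9.01uJ reported in the table.
```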
Summary: The paper proposes Binary Spiking Online Optimization (BSO) and its temporal variant T-BSO for training Binary Spiking Neural Networks (BSNNs) with reduced memory overhead. The work is well-motivated, technically sound, and demonstrates significant improvements in training efficiency and performance across static and neuromorphic datasets. While the core contributions are compelling, the paper would benefit from expanded discussions on related works and scalability to modern architectures like Transformers. With minor revisions, this work has the potential to advance resource-efficient neuromorphic computing. Claims And Evidence: Yes, the claims are largely supported by evidence: Claim 1: BSO/T-BSO reduce memory overhead. Table 1 and Figure 5 convincingly show time-independent memory costs (e.g., 3GB vs. Q-SNN’s linear scaling). Claim 2: Superior performance. Table 2 validates T-BSO’s accuracy gains (e.g., 94.70% on CIFAR-10 vs. OTTT’s 93.73%), though statistical significance testing (e.g., standard deviations) is missing. Theoretical claims: Regret bounds (Theorem 4.1) are rigorously proven in Appendix A, assuming convexity and bounded gradients. Methods And Evaluation Criteria: Yes, the methods and evaluations are appropriate: Methods: BSO’s flip-signal mechanism (Eq. 12) and T-BSO’s second-order momentum (Eq. 14) are novel and well-justified. The elimination of latent weights (Fig. 2) aligns with BSNN efficiency goals. Evaluation: Benchmarks (CIFAR, ImageNet, DVS-CIFAR10) are standard. Energy metrics (Table 2) and memory analysis (Fig. 5) effectively highlight efficiency gains. Theoretical Claims: Yes, theoretical analysis is correct under stated assumptions: Experimental Designs Or Analyses: Mostly sound: Ablations: Table 3 and Figure 4 effectively validate T-BSO’s temporal adaptation. Statistical rigor: Missing standard deviations in Table 2 and Figure 4 reduce reproducibility. 
Baselines: Comparisons with OTTT, NDOT, and Q-SNN are thorough, but exclude recent BSNN works like [Cite concurrent BSNN methods]. Supplementary Material: Yes. Relation To Broader Scientific Literature: Strong. Essential References Not Discussed: I suggest adding more recent advanced SNN models. Other Strengths And Weaknesses: Strengths: 1. The proposed method is novel. 2. The technique is solid. 3. The experiments are sufficient. Weaknesses: 1. More comparisons could be added with recent BSNN-online hybrid methods or other related advanced SNN models. 2. Discuss applicability to spiking Transformers. Other Comments Or Suggestions: None. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough evaluation and valuable feedback. Our responses are provided below. **Response to Claim 2 "statistical significance testing (e.g., standard deviations) is missing."** We acknowledge that including standard deviations in Table 2 and Figure 4 would enhance the reproducibility of our results. We will revise the table and figure to include the standard deviations alongside the mean values to provide a clearer representation of the variability and ensure the results are more reproducible. **Response to Experimental Designs and W1: "cite concurrent BSNN methods and compare with BSNN-online hybrid methods", "More comparisons could be added with recent BSNN-online hybrid methods or other related advanced SNN models."** Thank you for pointing this out. We appreciate your suggestion to include comparisons with recent BSNN works such as [concurrent BSNN]. We would greatly appreciate it if the reviewer could kindly provide relevant references for these concurrent BSNN methods, which we would be happy to incorporate into our analysis. While our current baseline selection was based on a careful review of related methods at the time of submission, we recognize the importance of including more recent approaches to provide a comprehensive comparison. We also consider extending BSO and T-BSO to more complex network structures, such as transformer-based and recurrent models, to adapt to different application scenarios. **Response to W2 "Discuss applicability to spiking Transformers."** Regarding the applicability to spiking Transformers, we believe our proposed method can be naturally extended to spiking Transformer-based models. In particular, our proposed T-BSO benefits from the adaptive threshold in the time dimension and can provide better convergence while preserving important **temporal information**. We extend our T-BSO to Spikingformer [1] on CIFAR-10 with 4 time steps to demonstrate its applicability.
In particular, the binary Spikingformer with T-BSO achieved an accuracy of 90.15%, indicating that the T-BSO algorithm can still ensure the **convergence** of binary online training in a transformer-like structure. [1] Zhou, Chenlin, et al. "Spikingformer: Spike-driven residual learning for transformer-based spiking neural network." arXiv preprint arXiv:2304.11954 (2023).
Loss Functions and Operators Generated by f-Divergences
Accept (poster)
Summary: The paper proposes using Fenchel-Young losses derived from f-divergences to perform image classification and language model pretraining, fine-tuning, and distillation. The authors present an efficient bisection method for solving for the f-softmax function involved in optimizing the proposed loss. The paper analyzes f-Fenchel-Young losses in the context of image classification and language model pretraining, fine-tuning, and distillation, along with ablations for softmax aggregation of f-trained logits at generation time. Claims And Evidence: The paper clearly steps through the derivation of the proposed losses using the analogy with the well-known softmax and soft-argmax functions. I have minor questions regarding specific claims in *Other Comments* below. Methods And Evaluation Criteria: The methods and evaluation criteria appear sound. It appears that one challenge of comparing training with different $f$ is that the training losses are different and thus validation or test losses cannot be easily compared. Thus, the authors use accuracy for classification or next-token prediction, and downstream task (summarization) scores for finetuning and distillation. Theoretical Claims: I am familiar with Proposition 1 from existing work, and the convergence of the algorithm appears correct. Experimental Designs Or Analyses: Experiments appear to be soundly designed. Language model distillation and fine-tuning are particularly active areas of research. Supplementary Material: I appreciate the care in providing numerically stable implementations of the operations in the Appendix. While I did not investigate, I envision this will be very useful to myself and other researchers! "Differentiating through $f$-softmax" should be detailed in the Appendix for completeness. Relation To Broader Scientific Literature: The representation of the f-softmax in Proposition 1 moves far beyond Wang et al. 2024, Thm 1.
Even Eq 13 needs to be rearranged using $(f_*^\prime)^{-1} = f^\prime$ to recover their result. Since this discussion may not be necessary, the citation could also be dropped. Essential References Not Discussed: One might also wonder how to directly optimize the Fenchel-Young loss in Eq 11. It seems that CvxPy Layers [1] might be used, although this optimization over the vocabulary (rather than bisection over a scalar) seems less efficient and convenient. [1] Agrawal et al. 2019, "Differentiable Convex Optimization Layers" Other Strengths And Weaknesses: The paper is well-written and comprehensive. It would be useful to explicitly emphasize novelty (see below), for the review process at the very least. In any case, the applications to language models are creative and interesting. Other Comments Or Suggestions: *Notation for the Loss Function* I initially thought it might be more insightful to write the loss in Eq. 5 as $\min \limits_{\boldsymbol{\theta}} -\langle \boldsymbol{\theta}, \boldsymbol{y} \rangle + \text{softmax}_{\Omega}(\boldsymbol{\theta})$ (i.e. a conjugate optimization), but I also see why the authors need to include this in Eq. 11. - The main concern is that $\Omega(\boldsymbol{y})$ is a constant in Eq. 5 and the reader has to parse that this term does not contribute to the optimization (amidst use of $\boldsymbol{p}$ above and $\boldsymbol{q}$ later). Perhaps the authors could add a comment here. - However, the payoff is that we can optimize Eq. 11 with respect to q. *Minor questions:* m1) Why is the relation between (5) and (6) an upper bound? Is it tight e.g. for $f$ of Legendre type? m2) Do the authors have any comment on the role of $q=1$ vs. $q=1/K$? The latter seems more principled. Panel 3 in Fig 10 ($q = 1/K$) resembles Panel 5 of Fig 9 ($q=1$) Questions For Authors: 1) What is the novelty compared to the original Fenchel-Young losses? Blondel et al. 2019 "Learning with Fenchel-Young Losses" - a similar bisection scheme appears in their Algorithm 1 - Tsallis 1.5, sparsemax, etc. are discussed. JSD and Hellinger appear to fall under the original framework, even if not stated explicitly or tested empirically - generalization to reference $q$ is a distinction from Blondel et al. 2019 but appears to be known (perhaps up to $0 \in \text{dom}(f^\prime)$) - I invite the authors to spell these out for those of us reviewers and ACs less versed in this line of work. *I will strongly argue for acceptance if this concern is met.* I do value the empirical study, especially in language modeling settings. My remaining questions arise mostly from a genuine interest in the work rather than a critical evaluation informing the review score. 2) Do the authors have any comment on why modifying the FY loss improves performance in general? (It's ok not to, I often dread this question when deriving generalized losses!) - In particular, for SFT/Distillation, one of my hypotheses in reading the paper was that allowing for sparsity might be useful for next-token prediction where greedy/top-k/nucleus procedures perform well. - However, the soft-max decoding from f-softmax logits appears to undermine this hypothesis - Do we expect any interesting interaction between sparse-f finetuning/distillation and decoding procedures beyond standard (temperature=1) decoding? 3) Is the f-soft-argmax invariant to addition by a constant? ($\text{softargmax}_f(\theta + c, q) = \text{softargmax}_f(\theta, q)$) - $\text{softmax}_f(\theta + c, q) = \text{softmax}_f(\theta, q) + c$ is stated in Terjek 2021, and I've proven it myself at some point. This corresponds to shifting $\tau^* \rightarrow \tau^*+c$ and appears to not modify the softargmax. - This would be a useful property to specify in general. - I came to this by thinking whether one might also consider f-softmax aggregation from standard logits (the reverse setting of Lines 353-376R).
Although "open-logits" is an uncommon access model, one could recover softmax logits up to a constant from next-token probabilities and use this for f-softmax decoding (if the above holds). 4) Temperature scaling during training ($\text{softmax}_{\beta f}$) or inference (scaling $\beta$ with given $\theta$) might also be considered (?). Some initial experiments or commentary could serve to highlight this as a direction for exploration for practitioners. Code Of Conduct: Affirmed. Overall Recommendation: 3
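The scalar bisection idea behind the paper's f-softmax solver can be illustrated on the sparsemax special case, where one solves $\sum_i \max(\theta_i - \tau, 0) = 1$ for the unique root $\tau$ on $[\max_i \theta_i - 1, \max_i \theta_i]$; a generic sketch, not the authors' implementation:

```python
import numpy as np

def sparsemax_bisect(theta, tol=1e-10):
    # The map tau -> sum(max(theta - tau, 0)) is continuous and
    # decreasing, at least 1 at max(theta) - 1 and 0 at max(theta),
    # so plain bisection finds the unique root.
    lo, hi = theta.max() - 1.0, theta.max()
    while hi - lo > tol:
        tau = 0.5 * (lo + hi)
        if np.maximum(theta - tau, 0.0).sum() > 1.0:
            lo = tau  # tau too small: probability mass exceeds 1
        else:
            hi = tau
    return np.maximum(theta - 0.5 * (lo + hi), 0.0)
```

Note the output is exactly sparse: coordinates below the threshold are set to 0, unlike softmax.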
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and their interest in this work. > **m1) Why is the relation between (5) and (6) an upper bound? Is it tight e.g. for f of Legendre type?** There was a typo in these equations: the less-than-or-equal sign should be replaced by a greater-than-or-equal sign. This bound is detailed in Proposition 3 of (Blondel et al., 2020). Indeed, it is tight if $\Omega$ is of Legendre type, that is, if $f$ is of Legendre type on $(0, +\infty)$. This is the case for the KL, reverse KL, and Jensen divergences, to cite a few examples, but not for the chi-squared or $\alpha$ divergences, for example. > **m2) Do the authors have any comment on the role of q=1 vs. q=1/K? The latter seems more principled. Panel 3 in Fig 10 (q=1/K) resembles Panel 5 of Fig 9 (q=1)** This is an excellent question. $f$-entropies are recovered with $q=1$, not $q=1/k$. Using $q=1$ or $q=1/k$ can lead to slightly different regularization functions $\Omega$, and losses, due to the fact that $f$ can be non-homogeneous (that is, $f(p/q)q \neq f(p)$). Mathematically, the losses differ since $$ \mathrm{softmax}_f(\theta; \mathbf{1}/k) = \alpha \sup\_{p \in k \triangle^k} \langle p, \theta \rangle - D\_f(p, \mathbf{1}) \neq \alpha \ \mathrm{softmax}\_f(\theta; \mathbf{1}) $$ where $k \triangle^k = \{ k p, p \in \triangle^k\}$ is a scaled simplex. Numerically, we did not observe changes in training curves when using one or the other. > **1) What is the novelty compared to the original Fenchel-Young losses? Blondel et al. 2019 "Learning with Fenchel-Young Losses"** Our paper sets out to study Fenchel-Young losses when the regularization $\Omega$ is set to an $f$-divergence, which to our knowledge hadn't been studied before. In doing so, we draw an interesting parallel between entropies already used in Blondel et al. (Shannon, Gini, Tsallis) and $f$-divergences (KL, chi-square, alpha divergences).
Proposition 1 in our paper can be thought of as a generalization of Proposition 9 in Blondel et al. When $q$ is non-uniform, their proposition does not apply, while our proposition does. When $q$ is uniform, our proposition tackles the case where $f$ is not differentiable on 0. In addition, we prove that there is a *unique* solution to the root problem on the considered interval, which Blondel et al did not prove. Our proof does not go through KKT conditions and rather relies on conjugate calculus. We also provide detailed computations in the appendix that take into account some potential numerical instabilities of naive implementations. On the empirical side, we demonstrate the proposed losses on tasks of different data modalities, including both vision (ImageNet) and text generation tasks, which cover different training strategies (from scratch, finetuning, and distillation). In addition, we obtained a novel empirical insight: using the classical soft-argmax works well even if we trained with our f-divergence based losses. This suggests that the choice of the loss used at training time, not the choice of the $f$-softargmax used at inference time, impacts accuracy the most. We hope that our results give a good glimpse of the losses’ potential as well as their limitations. > **2) Do the authors have any comment on why modifying the FY loss improves performance in general? (It's ok not to, I often dread this question when deriving generalized losses!)** Unfortunately, we do not have a good theoretical answer to this question, see also the answer we provided to reviewers 9x5b, EvEL and aCJJ. Our goal here was first and foremost to provide experimental results on recent tasks (pretraining, fine-tuning, distillation of LLMs) with a methodological approach to build these losses. We hope that such experimental results may help the community understand the relevance of different losses. We thank the reviewer for the numerous avenues they already proposed. 
> **3) Is the f-soft-argmax invariant to addition by a constant?** Yes, this is the case. This comes from the fact that $\langle p, \theta + c \rangle - \Omega(p) = \langle p, \theta \rangle - \Omega(p) + c$ when $p$ belongs to the simplex. > **4) Temperature scaling during training or inference might also be considered (?).** We believe temperature scaling is not useful at training time, as $\beta$ can be absorbed into the logits $\theta$. However, it could indeed be useful at inference time. We will add a remark to make this clarification. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed reply. I maintain my score due to the borderline question of novelty compared to Fenchel-Young Losses. While I believe choosing the special case of $f$-divergences and considering the case of non-uniform $q$ (discarded in emprical study) are relatively minor contributions, I appreciate the language model experiments and technical care to encompass a large class of divergences.
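For the KL case, where the $f$-softmax reduces to logsumexp and the $f$-softargmax to the ordinary softmax, both shift properties discussed in Q3 are easy to check numerically (a sketch; the general $f$ case follows the same $\tau^* \rightarrow \tau^* + c$ argument):

```python
import numpy as np

def logsumexp(theta):
    # KL-case f-softmax: its value shifts by exactly c under theta -> theta + c
    m = theta.max()
    return m + np.log(np.exp(theta - m).sum())

def softargmax(theta):
    # KL-case f-softargmax: invariant to adding a constant to theta
    e = np.exp(theta - theta.max())
    return e / e.sum()
```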
Summary: This paper proposes a generalization of entropy-based loss functions (such as logistic loss and softmax) by incorporating f-divergences. Specifically, the generalization is formulated using Fenchel-Young duality, where the standard Shannon entropy regularization is replaced with f-entropies. The authors demonstrate that several existing loss functions, including sparsemax and entmax, emerge as special cases of this framework under different choices of f-divergences. Furthermore, the paper provides detailed practical considerations on the computation and differentiation of the proposed loss functions, ensuring their feasibility for large-scale learning tasks. The empirical evaluation on ImageNet and language modeling datasets validates the effectiveness of this approach, with the α-divergence (α = 1.5) achieving the best performance. Claims And Evidence: The abstract claims two primary contributions: 1. Generalizing Shannon entropy to f-entropy produces loss functions that are advantageous. This is well supported by theoretical analysis (Fenchel-Young framework) and empirical results (image classification and language modeling). 2. The generalization allows for non-uniform reference measures, which could be useful. This is not well addressed. The authors themselves note in line 343 (lhs) that using a non-uniform reference measure did not lead to performance improvements. Although I recognize the possibility of using a non-uniform reference to incorporate prior knowledge, this paper did not include information on how this incorporation could be beneficial. Given this, I recommend that the authors reconsider making Claim 2 in the abstract or clarify under what conditions non-uniform reference measures might be useful. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally appropriate for the problem. 1.
Theoretical insight is strong: The framework unifies prior approaches like sparsemax and entmax under a broader family of entropy-based loss functions. 2. Practical implementation is well considered: The paper addresses efficient computation of the loss and gradient, making the method feasible for large-scale applications. 3. Empirical evaluation is sufficient: The experiments on image classification (ImageNet) and language modeling show consistent performance improvements, supporting the claims. However, there is a key issue that remains unaddressed: 1. What differentiates members of the f-divergence family? - It is unclear why certain f-divergences (e.g., α-divergence with α = 1.5) improve performance, while others (e.g., Jensen-Shannon divergence) degrade it. - The paper lacks a theoretical explanation for why some choices lead to better optimization or generalization. - A deeper analysis of the effect of different f-divergences on training dynamics or model representations would strengthen the claims. Theoretical Claims: I reviewed the proofs and did not find any major issues. Experimental Designs Or Analyses: The experiments are well designed and support the primary claim that incorporating f-divergence could improve the training process and bring improvements in performance. Supplementary Material: I read the appendix. There is no supplementary material. Relation To Broader Scientific Literature: The topic of this paper has the potential to impact the broader community of machine learning. - It introduces a new family of loss functions under a unified framework based on f-divergences, offering a fresh perspective on entropy-basesd regularization. Nevertheless, the authors should provide insights into selection of specific f-divergences. - The variational problem under f-divergences is also relevant to automatic implicit differentiation techniques, which are increasingly used in optimization and deep learning. This paper provides a concrete example. 
Essential References Not Discussed: I suggest including additional references on variational inference techniques using f-divergences. f-divergences have been extensively studied in variational inference. For instance:
- Nowozin et al., f-GAN: Training generative neural samplers using variational divergence minimization. NeurIPS 2016.
- Li & Turner. Rényi divergence variational inference. NeurIPS 2016.

Other Strengths And Weaknesses:

Strengths:
1. The paper is well-written and easy to follow.
2. The appendix provides substantial details that assist understanding of the paper.

Weaknesses:
1. Limited comparison beyond f-divergence losses. The proposed loss function is only compared within the f-divergence family, including sparsemax and entmax. How does it perform compared to temperature-scaled softmax or label-smoothed softmax? Including these comparisons could provide insights into why certain f-divergences are more beneficial than others and whether the proposed approach offers advantages beyond the existing methods.

Other Comments Or Suggestions: Typos:
- Line 161 and throughout the paper: Jeffrey**s** (instead of Jeffrey) divergence is the correct way of reference.
- Line 617: Propositon --> Proposition
- Line 636: coresponding --> corresponding

Questions For Authors: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 3
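For readers new to the Fenchel-Young construction this review discusses: with the negative Shannon entropy as regularizer $\Omega$, the Fenchel-Young loss $L_\Omega(\theta; e_k) = \Omega^*(\theta) + \Omega(e_k) - \langle \theta, e_k \rangle$ reduces to the familiar cross-entropy, since $\Omega^*(\theta) = \mathrm{logsumexp}(\theta)$ and $\Omega(e_k) = 0$ for a one-hot target. A quick NumPy check of this well-known special case (a sketch, not the paper's code):

```python
import numpy as np

def log_sum_exp(theta):
    m = np.max(theta)
    return m + np.log(np.sum(np.exp(theta - m)))

def fy_loss_shannon(theta, k):
    # Fenchel-Young loss with Omega = negative Shannon entropy:
    # L(theta; e_k) = logsumexp(theta) - theta_k, since Omega(e_k) = 0.
    return log_sum_exp(theta) - theta[k]

def cross_entropy(theta, k):
    p = np.exp(theta - log_sum_exp(theta))  # softmax probabilities
    return -np.log(p[k])

theta = np.array([0.2, 1.5, -0.7, 0.0])
for k in range(len(theta)):
    assert np.isclose(fy_loss_shannon(theta, k), cross_entropy(theta, k))
```

Swapping the Shannon negentropy for other f-entropies changes $\Omega^*$ and its gradient (the f-soft-argmax), which is exactly the family the paper studies.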
Rebuttal 1:

Rebuttal: We thank the reviewer for their constructive feedback.

> **Clarify under what conditions non-uniform reference measures might be useful.**

Thank you for your suggestion. Non-uniform reference measures may be relevant in classification tasks with known imbalanced classes. We directly tested the relevance of the approach on pre-training in the Nanodo codebase and did not find any particular gains in that setting.

> **The paper lacks a theoretical explanation for why some choices lead to better optimization or generalization.**

We agree that such insights (knowing when and why a loss outperforms others on specific tasks) would be beneficial. However, this is very challenging. Indeed, even in the case of the standard cross-entropy loss, its induced loss landscape in model parameters and the ease or difficulty of optimizing over it are generally hard to analyze for practically relevant tasks. We believe that, just as the rates of convergence of optimizers have not always dictated deep learning practices, theoretical properties of f-divergence losses may not be reflected in deep learning experiments. For this reason, we prefer to present a methodological approach that showcases the actual performance of different losses, upon which the community can build. Through these experiments, we observed that the alpha divergence with $\alpha=1.5$ appears to provide a good trade-off between the standard KL divergence and a sparsemax loss.

> **Including additional references in variational inference techniques using f-divergences.**

Thank you for your suggestions. We will include them.

> **How does it perform compared to temperature-scaled softmax or label-smoothed softmax? Including these comparisons could provide insights.**

Regarding temperature, we believe temperature scaling is not useful at training time, as $\beta$ can be absorbed into the logits $\theta$. However, it could indeed be useful at inference time. We will add a remark to make this clarification.
Regarding label-smoothing, we believe it would be interesting to try, though this technique can be applied to any loss and is therefore relatively orthogonal to our work.

In addition to $f$-divergence generated losses, we tried the (multiclass) hinge loss. However, despite trying various learning rates and warm-up schedules, we were unable to make the hinge loss work well on ImageNet. We suspect that the hinge loss, as a non-smooth loss, requires a completely different hyper-parameter tuning. Our observation here aligns with findings from the paper by Zhu et al. (2023) mentioned by Reviewer EvEL, which demonstrates that applying the multiclass hinge loss to Tiny ImageNet results in accuracy only slightly better than random (see their Table 7, where "CS" denotes the Crammer-Singer multiclass hinge loss, which we also used).

[1] Zhu et al., 2023, ICML, Label Distributionally Robust Losses for Multi-class Classification: Consistency, Robustness and Adaptivity.

> **Typos**

Thanks for spotting them. They are now corrected.
Summary: The authors propose a framework including operators (f-softmax, f-softargmax, f-softplus and f-sigmoid) and loss functions generated by f-divergences for multi-class classification. Mathematical derivations and an efficient computation algorithm are provided. The practical performance is demonstrated on ImageNet classification and language model settings.

## update after rebuttal

I thank the authors for their rebuttal, and I'd like to keep my evaluation score as weak accept.

Claims And Evidence: The claims are clear and convincing to me.

Methods And Evaluation Criteria: The methods and evaluation criteria make sense to me.

Theoretical Claims: I didn't check the details of the proofs, but the theoretical claims in the main content make sense to me.

Experimental Designs Or Analyses: The experimental designs and analyses generally make sense to me. However, it would be better to include performance variance or standard deviation for multiple replicates with different random seeds or data splits.

Supplementary Material: I didn't check the supplementary material.

Relation To Broader Scientific Literature: The proposed method could be used for multi-class classification tasks.

Essential References Not Discussed: There is a previous work that shares a partially similar idea of generating loss functions with convex regularization, which can recover the logistic and SVM losses with the KL divergence: Zhu, Dixian, Yiming Ying, and Tianbao Yang. "Label distributionally robust losses for multi-class classification: Consistency, robustness and adaptivity." International Conference on Machine Learning. PMLR, 2023.

Other Strengths And Weaknesses:

Strengths:
1. The paper's writing and presentation are good.
2. The motivation is clear and the flow of mathematical derivations is neat.
3. The authors ensure the efficiency of the proposed method, which only incurs negligible overhead in practice (Appendix A.3).

Weaknesses:
1. There is no theoretical insight into why a certain variant ($\alpha$-divergence with $\alpha=1.5$) performs better than others.
2. The experiments don't include performance variance over multiple replicates (as mentioned before).
3. The proposed method could be made more convincing by comparing against other classification losses in the literature, such as SVM losses and other CE loss variants. The current set of compared baselines is a little limited.

Other Comments Or Suggestions: Please see previous comments.

Questions For Authors: Please see previous comments.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for their constructive feedback. We hope we have addressed your comments.

> **It would be better to include performance variance.**

Thank you for this suggestion. We repeated our experiments with multiple random seeds and reported the standard deviation across independent runs. Specifically, for each $f$-divergence generated loss, we used 5 random seeds, and applied the loss in the ImageNet classification, finetuning, and distillation experiments. The results are below. Overall, the standard deviations of all losses are small, which we believe is because the training datasets are large enough to diminish the effect of different realizations of the random initial parameters.

**ImageNet**:

|divergence|accuracy mean|accuracy std|accuracy min|accuracy max|
|---|---|---|---|---|
|cs|0.7604|0.0015|0.7587|0.7621|
|js|0.7246|0.0014|0.723|0.7266|
|squared_hellinger|0.7281|0.0008|0.7272|0.729|
|alpha divergence ($\alpha=1.5$)|**0.7758**|0.0013|0.7743|0.7776|
|kl|0.7684|0.0007|0.7676|0.7692|

**Supervised finetuning experiment**:

|divergence|ROUGE-2 mean|ROUGE-2 std|ROUGE-2 min|ROUGE-2 max|
|---|---|---|---|---|
|cs|11.15|0.1|11.02|11.31|
|js|7.95|0.08|7.87|8.07|
|rcs|9.55|0.1|9.44|9.7|
|alpha divergence ($\alpha=1.5$)|**14.27**|0.04|14.2|14.32|
|kl|9.77|0.02|9.75|9.8|

**Distillation experiment**:

|divergence|ROUGE-2 mean|ROUGE-2 std|ROUGE-2 min|ROUGE-2 max|
|---|---|---|---|---|
|cs|14.17|0.09|14.01|14.26|
|js|16.51|0.06|16.43|16.6|
|rcs|16.3|0.05|16.25|16.38|
|alpha divergence ($\alpha=1.5$)|**17.43**|0.13|17.19|17.6|
|kl|16.64|0.05|16.57|16.71|

> **There is another previous work that shares a similar idea by Zhu, Dixian, Yiming Ying, and Tianbao Yang. "Label distributionally robust losses for multi-class classification: Consistency, robustness and adaptivity." International Conference on Machine Learning. PMLR, 2023.**

We will add this citation, thank you.
> **There is no theoretical insight why certain variants (alpha-divergence with alpha = 1.5) perform better than others.**

We agree that such insights (knowing when and why a loss outperforms others on specific tasks) would be beneficial. However, this is very challenging. Indeed, even in the case of the standard cross-entropy loss, its induced loss landscape in model parameters and the ease or difficulty of optimizing over it are generally hard to analyze for practically relevant tasks. We believe that, just as the rates of convergence of optimizers have not always dictated deep learning practices, theoretical properties of f-divergence losses may not be reflected in deep learning experiments. For this reason, we prefer to present a methodological approach that showcases the actual performance of different losses, upon which the community can build. Through these experiments, we observed that the alpha divergence with $\alpha=1.5$ appears to provide a good trade-off between the standard KL divergence and a sparsemax loss.

> **Compare to other classification losses in the literature, such as SVM losses and other CE loss variants.**

In addition to our $f$-divergence generated losses, we tried the (multiclass) hinge loss. However, despite trying various learning rates and warm-up schedules, we were unable to make the hinge loss work well on ImageNet. We suspect that the hinge loss, as a non-smooth loss, requires a completely different hyper-parameter tuning. Our observation here aligns with findings from the paper of Zhu et al. (2023) you mention, which demonstrates that applying the multiclass hinge loss to Tiny ImageNet results in accuracy only slightly better than random (see their Table 7, where "CS" denotes the Crammer-Singer multiclass hinge loss, which we also used).

[1] Zhu et al., 2023, ICML, Label Distributionally Robust Losses for Multi-class Classification: Consistency, Robustness and Adaptivity.
Summary: The paper investigates a general framework for generating loss functions using f-divergences, extending the well-known logistic loss (cross-entropy). It introduces a new set of operators, namely f-softmax and f-softargmax, and develops a novel bisection algorithm for computing them. The experimental results focus on evaluating the effectiveness of these loss functions in image classification (ImageNet) and language modeling, including pretraining, supervised fine-tuning (SFT), and distillation.

Claims And Evidence: The paper is very clear in its content, and all claims are clearly articulated. However, not all claims are justified and some are a bit overstated. Generalizing or extending loss functions by replacing the KL divergence with the class of f-divergences is a well-studied topic, with numerous contributions (and techniques), and it is a fairly standard procedure. There is no novelty in that aspect here. Note that the generalization of the logistic loss by replacing the KL divergence with more general f-divergences has been done at least in the Bayesian setting (see work by Andrea Tonello and others). Works like Nguyen et al. (2009), Martins & Astudillo (2016), and Go et al. (2023) already explored similar directions. This paper builds on Fenchel–Young losses rather than fundamentally innovating on them, and the use of Fenchel–Young losses (Blondel et al., 2020) as a general framework was already well-established. The new "entropies" cannot be claimed to be entropies simply because they are generated in the same way as the KL or Tsallis divergence. Do they satisfy the axioms of information measures?

Methods And Evaluation Criteria: The potential benefits or interest of the proposed loss functions are demonstrated on image classification (ImageNet), language model post-training, and distillation. The experimental study is rather limited, failing to test these ideas in other SOTA architectures, so how general can the results and the claims be?
Theoretical Claims: Yes, the theoretical claims have been checked. The study rigorously defines a new class of operators that generalize softmax; this is correctly done, although there is no technical difficulty. The convexity and differentiability analyses are also correct, based on the framework of Fenchel–Young losses.

Experimental Designs Or Analyses: The experiments are valid and correct, but they are limited and not representative, and the gains are not very pronounced or significant (nor are they sufficiently supported by technical explanations of why certain things work and others do not). Stating the accuracy without the variance makes it hard to see whether the proposed loss functions outperform others on ImageNet. Then, while α = 1.5 performs well, fine-tuning across different datasets and architectures might require additional adjustments (the parameter sensitivity needs to be studied further).

Supplementary Material: There is no Supplementary Material per se, but the paper has a long Appendix, containing well-known results on f-divergences as well as proofs of the main theoretical results.

Relation To Broader Scientific Literature: The use of f-divergences in loss functions is already well studied, and extensions beyond KL for cross-entropy have been done before. Many ideas here are reformulations or extensions rather than fundamentally new insights. The theoretical part, in terms of defining f-divergences, the new entropies, etc., is well established in the information-theoretic community, where there are many more general results and alternatives (Sharma–Mittal), etc.

Essential References Not Discussed: Although the literature on this topic is very large, there are no essential references missing, apart from examples where f-divergences have been used in loss functions (specifically in cross-entropy), such as "f-Divergence Based Classification: Beyond the Use of Cross-Entropy" by Novello and Tonello, and https://arxiv.org/pdf/2501.18537v1.
A clearer comparison highlighting the differences between this work and prior approaches should be provided.

Other Strengths And Weaknesses:

Strengths:
- The authors explore the potential gains from extending the KL divergence in cross-entropic loss functions.
- There is an attempt to provide some analytical results (not always new or of technical depth, though).
- The algorithm seems to be relevant in practice, as its implementation cost seems to be low. The main computational contribution is the bisection method for efficiently solving the f-softargmax transformation.

Weaknesses:
- Limited novelty, both in terms of f-divergence results (per se) and in extending loss functions (including cross-entropy) using f-divergences. There is a rich literature that has explored the benefit of replacing the KL divergence with f-divergences.
- The gains are not important, and there is no theoretical/analytical or even experimental justification of why a particular value of the α-divergence (as defined by the Tsallis entropy) provides gains, but not some other widely used, well-known cases of this rich family of divergences.

Overall, a well-structured, incremental improvement on f-divergence-based losses, but not an important (or groundbreaking) contribution in theory (and practice).

Other Comments Or Suggestions: Nothing major to comment.

Questions For Authors:
1) What is the underlying (even fundamental) reason why a particular case of f-divergence provides some gains (not significant ones, though)? Does it have to do with the properties it satisfies (in terms of (a)symmetry, satisfying the partition inequality, etc.)?
2) How interesting are such loss functions in state-of-the-art systems in language processing? Can you report some gains or interesting results on networks based on attention using different (more complex) loss functions?
3) Do the so-called new entropies satisfy the axiomatic formulation of entropy? What are their properties and their interest?

Code Of Conduct: Affirmed.
Overall Recommendation: 3
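As context for the bisection method the review above credits as the main computational contribution: in the sparsemax special case, the f-softargmax reduces to a Euclidean projection onto the simplex, where the threshold can be found by a simple bisection. A self-contained sketch (the paper's algorithm covers general f-divergences; this only illustrates the idea):

```python
import numpy as np

def sparsemax_bisect(theta, n_iter=60):
    # Sparsemax solves: argmax_{p in simplex} <p, theta> - 0.5 * ||p||^2,
    # whose solution is p_i = max(theta_i - tau, 0) with tau chosen so sum(p) = 1.
    # g(tau) = sum_i max(theta_i - tau, 0) is decreasing, so tau is found by bisection.
    lo, hi = np.max(theta) - 1.0, np.max(theta)  # g(lo) >= 1 >= g(hi) = 0
    for _ in range(n_iter):
        tau = 0.5 * (lo + hi)
        if np.maximum(theta - tau, 0.0).sum() > 1.0:
            lo = tau
        else:
            hi = tau
    return np.maximum(theta - 0.5 * (lo + hi), 0.0)

theta = np.array([1.2, 0.9, -2.0, 0.1])
p = sparsemax_bisect(theta)
# p is approximately [0.65, 0.35, 0, 0]: two logits receive exactly zero mass.
assert abs(p.sum() - 1.0) < 1e-9
assert (p >= 0).all()
```

Unlike softmax, the output is sparse, which is the interpolation behavior (softmax to sparsemax) discussed for the α-divergence family.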
Rebuttal 1:

Rebuttal: We thank the reviewer for their feedback.

> **The experimental study is rather limited**

We disagree that our experimental study is limited. We evaluate the proposed losses on tasks of different data modalities, including both vision (ImageNet) and text generation tasks, which cover different training strategies (from scratch, finetuning, and distillation). All other reviewers appreciated the experimental efforts ("experimental designs [...] make sense to me", "experiments are well designed", "Experiments appear to be soundly designed").

> **accuracy without variance**

Thank you for this suggestion. Please see the results in our response to reviewer EvEL, where we report the mean, standard deviation (std), minimum (min), and maximum (max) across 5 independent runs for each divergence in the ImageNet, SFT, and distillation experiments. To summarize, we found that the standard deviation is low.

> **underlying reason why a particular case of f-divergence provides some gains**

We agree that such insights (knowing when and why a loss outperforms others on specific tasks) would be beneficial. However, this is very challenging. Indeed, even in the case of the standard cross-entropy loss, its induced loss landscape in model parameters and the ease or difficulty of optimizing over it are generally hard to analyze for practically relevant tasks. We believe that, just as the convergence rates of optimizers have not always dictated deep learning practices, theoretical properties of f-divergence losses may not be reflected in deep learning experiments. For this reason, we prefer to present a methodological approach that showcases the actual performance of different losses, upon which the community can build. Through these experiments, we observed that the alpha divergence with $\alpha=1.5$ appears to provide a good trade-off between the standard KL divergence and a sparsemax loss.
> **This paper builds on Fenchel–Young losses rather than fundamentally innovating**

At the beginning of Section 3, we clearly state "In this paper, we propose to study Fenchel–Young losses and associated operators when the regularizer is defined as $\Omega_f(p; q) = D_f(p, q)$". We believe this hasn't been done before.

> **The use of f-divergences in loss functions is already well studied**

As we cover in our ample related work section, there are indeed existing works using $f$-divergences to derive loss functions, such as Nguyen et al. (2009), as you mention. However, these works do not use the same mathematical formulation: they are not based on Fenchel-Young losses. We are the first to study $f$-divergences as the regularizer in Fenchel-Young losses.

> **"f-Divergence Based Classification: Beyond the Use of Cross-Entropy" Novello, Tonello**

We will add this citation, thank you.

> **How interesting are such loss functions in state-of-the-art systems in language processing?**

Our pretraining experiments use a model with 1.2 billion parameters. While state-of-the-art models often use a larger number of parameters, we argue that this is a size where models already start to be very useful. In fact, many organizations have a 1 billion-parameter model in their offering (e.g., Gemma, Mistral, etc.). In addition, our training pipeline relies on state-of-the-art components and training recipes. Our pretraining experiments are implemented with nanodo [1]; we use modern decoder-only transformers with rotary embeddings and QK layer normalization, and train them on the large-scale C4 corpus, following the empirical scaling laws found in prior work [2]. This approach allows us to explore the impact of our loss functions within a framework that reflects current practices. To summarize, we argue that our experiments are conducted in a real-world setting and are therefore informative.

[1] Peter Liu, et al.
NanoDO: A minimal Transformer decoder-only language model implementation in JAX. http://github.com/google-deepmind/nanodo, 2024.

[2] Mitchell Wortsman, et al. "Small-scale proxies for large-scale transformer training instabilities." ICLR, 2024.

> **Can you report some gains on networks based on attention using different (more complex) loss functions?**

We are not sure if we understand the question correctly. While it would be possible in principle, our work does not explore using the f-softargmax in attention layers (i.e., intermediate layers). Our work focuses on loss functions (i.e., output layers).

> **Do the so-called new entropies satisfy the axiomatic formulation of entropy? What are their properties and their interest?**

An $f$-divergence gives rise to a well-defined negative $f$-entropy (Cichocki & Amari, 2010) if $f$ is strictly convex (Blondel et al., 2020, Section 4.1). These entropies have different sensitivities to changes in the probability of an event, which we visualize in Figure 1.

---

Rebuttal Comment 1.1:

Comment: Thank you for your response, your constructive approach, and the clarifications provided. The authors consider a specific case of Fenchel-Young losses using f-divergences, and indeed, this particular combination has not been previously studied within this framework, although related work leveraging the same framework does exist (e.g., https://arxiv.org/pdf/1901.02324, https://openreview.net/pdf?id=7Dep87TMJs). It is worth noting that f-divergences themselves have been previously used as loss functions in other contexts. Thus, the paper may be of interest to a community focused on Fenchel-Young losses and their generalizations.

A key limitation, however, lies in the inability to interpret or explain why only a specific form of α-divergence (in addition to KL) appears to provide performance gains, and we appreciate the acknowledgment from the authors. This remains an elusive and unexplored aspect of the work and stands as its primary weakness.
That said, since the paper's main focus appears to be computational, dealing with loss functions that interpolate between softmax and sparsemax, this limitation could be somewhat mitigated.

One possible explanation for the observed behavior may lie in the interesting and relatively unique information-geometric properties of Amari's α-divergence. This divergence, while part of the f-divergence family, sits at the intersection of f- and Bregman divergences and aligns well with the underlying geometry of the statistical manifold. This presents an intriguing direction for future research, and the present work can be viewed as a first, empirical indication of the potential of such divergences in this context.

A small clarification would be helpful in this regard: when referring to "α-divergence," the authors should specify that they are using Amari's formulation (and not even the original α-divergence, in Amari's paper or as introduced previously by Chernoff, which has a different scaling), in order to avoid confusion with other α-type divergences, such as Rényi's, which does not belong to the f-divergence class.

Regarding the comment on the proposed entropies and Figure 1, it is important to acknowledge that there is already a rich body of literature on the topic (e.g., [Ben-Bassat, 1978], https://link.springer.com/chapter/10.1007/978-3-031-68208-7_5, early work by Daróczy in the 1970s, and more recent efforts to define Jensen–Shannon-type entropies and others, https://arxiv.org/abs/2106.14874). While defining entropies based on divergences (as with KL or Rényi) is a common approach, such definitions do not always yield valid uncertainty measures under standard axiomatic frameworks. It would be beneficial for the authors to either acknowledge this limitation or provide a more rigorous justification for their formulation.
Specifically, it would be worth discussing whether the proposed entropies satisfy any of the well-established axioms of uncertainty measures, such as those proposed by Faddeev, Khinchin, or more recent formulations (e.g., https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.111.230401).

This is ultimately a minor issue, as the main focus of the paper lies elsewhere. However, it would strengthen the presentation to either tone down these claims or indicate that a formal axiomatic analysis is left for future work.

Given the clarifications and after a careful reconsideration of the paper's focus and contribution, the recommendation will be reconsidered.
FedSMU: Communication-Efficient and Generalization-Enhanced Federated Learning through Symbolic Model Updates
Accept (poster)
Summary: The paper considers federated learning in the heterogeneous regime, with compression and partial participation. A new algorithm called FedSMU is proposed.

## update after rebuttal

I think the paper deserves to be accepted and I am confident that the authors will make the recommended changes to make the paper even better.

Claims And Evidence: The focus is on the non-convex setting, for deep learning. The functions are assumed smooth and with bounded gradients (Assumption 4.3). The experiments and comparisons are satisfying and show the merits of FedSMU. The improvement is not large, however, and in some cases FedSMU is even worse than competitors; for instance, it is worse than EF21 and FedAMS on CIFAR-10. There is a theoretical analysis in Section 4, which is rare for papers focused on empirical performance in deep learning settings. This deserves to be saluted.

Methods And Evaluation Criteria: The evaluation is good.

Theoretical Claims: I did not check the details of the theoretical results, but the statements make sense and seem correct.

Experimental Designs Or Analyses: The experiments seem valid to me.

Supplementary Material: Yes, I looked in particular at Appendices B, C, F8-11.

Relation To Broader Scientific Literature: The literature is correctly reviewed. I suggest below some references to add.

Essential References Not Discussed: You write that "Current approaches in FL often prioritize either mitigating data heterogeneity to enhance generalization or compressing model updates to alleviate communication, rather than addressing both challenges concurrently." There are, however, works combining local steps, with control variates to mitigate client drift due to heterogeneity, and compression.

* CompressedScaffnew in Condat et al. "Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Compressed Communication," preprint arXiv:2210.13277, 2022.
* TAMUNA, which extends CompressedScaffnew to partial participation, in Condat et al. “TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation,” preprint arXiv:2302.09832, 2023. * LoCoDL, which uses arbitrary unbiased compression, unlike CompressedScaffnew and TAMUNA that use specific sparsification, in Condat et al. “LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression,” ICLR 2025. These methods achieve acceleration in the convex setting. In the non-convex setting, it is less clear how to mitigate client drift. This has been studied in * Yi et al. “FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models,” preprint arXiv:2403.09904, 2024. * Meinhardt et al. “Sparse-ProxSkip: Accelerated Sparse-to-Sparse Training in Federated Learning,” preprint arXiv:2405.20623, 2024. There is the important paper Douillard et al. "DiLoCo: Distributed Low-Communication Training of Language Models," arXiv:2311.08105, 2023. Instead of SGD local steps, it uses AdamW for the inner iterations, with good empirical performance. Other Strengths And Weaknesses: The paper is well written. Other Comments Or Suggestions: No Questions For Authors: I don't have questions. The paper is good overall. Code Of Conduct: Affirmed. Overall Recommendation: 4
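For context on EF21, one of the compression baselines the review compares FedSMU against: each client keeps a running gradient estimate and uploads only a compressed correction to it, which the server averages into the search direction. A toy reproduction on a quadratic objective (an illustrative sketch following the published EF21 recipe with a top-k compressor; all names and constants here are arbitrary, not from the paper):

```python
import numpy as np

def top_k(v, k):
    # Keep the k largest-magnitude entries, zero out the rest (a biased compressor).
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# Toy heterogeneous problem: client i minimizes 0.5 * ||x - b_i||^2,
# so the global optimum is the mean of the client targets b_i.
rng = np.random.default_rng(0)
d, n_clients, k, gamma = 20, 5, 10, 0.1
targets = rng.normal(size=(n_clients, d))
x = np.zeros(d)
g_local = np.zeros((n_clients, d))  # per-client gradient estimates

for _ in range(800):
    for i in range(n_clients):
        grad_i = x - targets[i]              # local gradient
        c_i = top_k(grad_i - g_local[i], k)  # compressed correction: the only upload
        g_local[i] += c_i
    x -= gamma * g_local.mean(axis=0)        # server step with averaged estimates

assert np.linalg.norm(x - targets.mean(axis=0)) < 1e-6
```

The point of the error-feedback estimate `g_local` is that the compression error is corrected over rounds rather than lost, which is what makes biased compressors like top-k safe to use.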
Rebuttal 1:

Rebuttal: We would like to thank the reviewer for the comments. In the following, we have provided our detailed responses to these comments.

> **Experiments on CIFAR-10**

We thank the reviewer for this observation. Our experimental results indicate that FedSMU achieves a notably better performance on more complex datasets, such as CIFAR-100 and Tiny-ImageNet, while the improvement on CIFAR-10 is relatively marginal. We attribute this to the lower data complexity and heterogeneity in CIFAR-10, which limits the potential benefit of FedSMU's core designs, particularly its ability to handle client update imbalance and data heterogeneity. As shown in Table 3 of the original manuscript, FedSMU still outperforms all the baselines on Tiny-ImageNet, highlighting that its advantages become more pronounced in challenging federated scenarios.

> **There are however works combining local steps, with control variates to mitigate client drift due to heterogeneity, and compression.**

We thank the reviewer for the insightful comment and for suggesting the relevant literature. We agree that there are existing works that address both data heterogeneity and communication efficiency, particularly in the convex setting. In response, we will revise our original statement as follows.

"Some existing approaches in FL address either data heterogeneity to improve generalization or communication overhead through update compression. However, it remains a challenge to jointly address both of them, especially under non-convex settings. For example, CompressedScaffnew [D1] and TAMUNA [D2] combine control variates (used to mitigate client drift due to heterogeneity) with model compression. However, these methods rely on permutation-based compression schemes, which are relatively complex and less flexible.
LoCoDL [D3] extends this line of work by supporting a broader class of compressors and demonstrating convergence acceleration in convex problems, but it focuses exclusively on the convex setting. Our work complements these efforts by proposing a unified approach that simultaneously improves generalization and reduces communication overhead in the more challenging non-convex regime.” > In the non-convex setting, it is less clear how to mitigate client drift. In our work, we propose a new mechanism, symbolization of client updates, which normalizes the magnitude of each model parameter before transmission. This design aims to balance the contribution from each client, reducing the influence of extreme updates caused by local data heterogeneity, and thereby mitigating model drift during aggregation. The effectiveness of this approach is empirically demonstrated in Figure 1. Moreover, relevant studies, such as FedComLoc [D4] and Sparse-ProxSkip [D5], attempt to address client drift under non-convex objectives. However, FedComLoc’s performance may degrade under compressed communication due to its reliance on communication variables, while Sparse-ProxSkip assumes full client participation, which may not always be feasible in real-world FL scenarios. > Works on distributed low-communication language models We thank the reviewer for suggesting this work. We will incorporate DiLoCo [D6] into the Related Work section. This method adopts AdamW for local updates and Nesterov momentum globally, demonstrating strong empirical performance. While our current focus is on symbolized updates within a communication-efficient FL framework, integrating advanced optimization techniques, such as those in [D6] (especially adaptive optimizers like AdamW), could further enhance the model performance. We consider this a promising direction for future work. [D1] Condat et al. 
"Provably doubly accelerated federated learning: The first theoretically successful combination of local training and communication compression," arXiv preprint *arXiv:2210.13277* (2022). [D2] Condat et al. "Tamuna: Doubly accelerated federated learning with local training, compression, and partial participation," *International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS*, 2023. [D3] Condat et al. "Locodl: Communication-efficient distributed learning with local training and compression," ICLR 2025. [D4] Yi et al. “FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models,” preprint *arXiv:2403.09904*, 2024. [D5] Meinhardt et al. “Sparse-ProxSkip: Accelerated Sparse-to-Sparse Training in Federated Learning,” preprint *arXiv:2405.20623*, 2024. [D6] Douillard et al. "DiLoCo: Distributed Low-Communication Training of Language Models," *arXiv:2311.08105*, 2023.
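To make the update-symbolization mechanism discussed in this rebuttal concrete, here is a minimal Python sketch (illustrative only, not the authors' FedSMU code; the server learning rate is a hypothetical parameter). Each client keeps only the sign of its update, so every parameter costs 1 bit on the uplink instead of 32 bits for float32, and every client contributes with the same per-parameter magnitude:

```python
def symbolize(update):
    """Compress a client's model update to 1 bit per parameter by
    keeping only the sign and discarding the magnitude."""
    return [1 if u > 0 else -1 if u < 0 else 0 for u in update]

def aggregate(sign_updates, server_lr=0.1):
    """Server averages the sign vectors; since all clients transmit
    unit-magnitude coordinates, no single client can dominate."""
    n = len(sign_updates)
    return [-server_lr * sum(col) / n for col in zip(*sign_updates)]
```

Because the magnitudes are normalized away before aggregation, a client with extreme local updates (e.g., due to heterogeneous data) has exactly the same influence as any other client.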
Summary: This paper proposes FedSMU, a federated learning algorithm that improves communication efficiency and generalization. It symbolizes model updates (using sign-based compression) to reduce communication overhead and mitigate data heterogeneity. Inspired by the Lion optimizer, FedSMU splits local updates and global execution, improving generalization. The performance of FedSMU is validated through both theoretical analysis and empirical experiments. Claims And Evidence: Most claims in the paper are well-supported by theoretical analysis and empirical experiments. Methods And Evaluation Criteria: The proposed method is appropriate for the federated learning setting. Theoretical Claims: I did not verify the correctness of the proofs for the theoretical claims. However, Assumption 4.3 is not a weak assumption and may not always hold in practice. Experimental Designs Or Analyses: The experimental design is mostly sound, with comparisons across multiple FL baselines. However, a potential limitation is that the experiments primarily focus on small- to medium-scale models (LeNet, ResNet18), leaving uncertainty about FedSMU’s performance on larger architectures. Supplementary Material: I reviewed the appendices, except for Appendix D, and found no issues. Relation To Broader Scientific Literature: FedSMU combines communication efficiency and generalization improvements in a unified FL framework, which previous works have addressed separately. Essential References Not Discussed: The paper covers most essential references, but it would be better to include Adaptive Federated Learning in the Related Works section for better context. Other Strengths And Weaknesses: The generalization benefits of FedSMU lack theoretical justification. Other Comments Or Suggestions: 1. In Line 164, there is a typo: "ruducing" should be "reducing". 2. 
Some notations are not explained in the paper, e.g., $\beta_1$ and $\beta_2$ are momentum coefficients, and $\gamma_2$ is the weight decay factor. Questions For Authors: FedSMU splits the standard Lion optimizer into local updates and global execution. How does each component impact FedSMU’s performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments. > Impact of components of FedSMU As described in Appendix F.9, we implemented two variants, Fed-LocalLion and Fed-GlobalLion, to evaluate the isolated impact of Lion on the client and server sides. To further assess the necessity of key components in FedSMU, we conduct a systematic ablation study by removing specific elements and comparing each pruned variant against the full algorithm. In particular, we examine the effect of excluding the following components: 1) server-side weight decay regularization ($\gamma_2$), 2) client-side gradient sliding average ($\beta_1$), 3) client-side gradient symbolization. The experimental results in Table C1 (https://anonymous.4open.science/r/A-7823/F.pdf) demonstrate that the client-side sliding average plays the most crucial role in achieving stable and effective training. Additionally, the model update symbolization mechanism itself proves to be more effective than using full-precision updates. This validates our motivation for proposing FedSMU: symbolization balances the contributions of heterogeneous clients by suppressing extreme update magnitudes, which enhances aggregation stability and leads to better generalization. Moreover, we observe that directly applying the Lion optimizer on either the client or server side yields suboptimal performance, which indicates that our FedSMU effectively harnesses the benefits of Lion, enhancing generalization while compressing the communication load. > Assumption 4.3 We acknowledge that this is a slightly stronger assumption, ensuring that both the compressed targets and momentum terms (the moving averages of gradients) in our theoretical analysis are bounded. Moreover, this assumption has been adopted in other federated optimization and sign-based compression studies [C1, C2]. 
Specifically, in [C1], the authors demonstrate the convergence of distributed SIGNSGD with momentum under their Assumption 4, which is also the bounded gradient assumption. [C1] Tao Sun, et al. ``Momentum ensures convergence of SIGNSGD under weaker assumptions,'' ICML 2023. [C2] Sashank Reddi, et al. ``Adaptive federated optimization,'' ICLR 2020. > Evaluations on larger models We appreciate this suggestion. Following it, we conduct additional experiments using a larger model (Vision Transformer Small (ViT-S) [C3]) on CIFAR-100, with a Dirichlet distribution of 0.25 to simulate non-IID settings. Due to time constraints, we focus on comparison with the four strong baselines: FedAvg, SCAFFOLD, FedEF-TopK, and FedLion, which demonstrated higher accuracies in our original Table 2. As shown in Table A3 (https://anonymous.4open.science/r/A-7823/F.pdf), FedSMU consistently achieves a superior performance with the larger-scale model, further confirming its effectiveness and scalability in more FL scenarios. [C3] Dosovitskiy, et al. ``An image is worth 16x16 words: Transformers for image recognition at scale,’’ *arXiv:2010.11929* (2020). > Adaptive FL as related work We thank the reviewer for this valuable suggestion, and will incorporate a discussion of Adaptive FL in the Related Work section, as follows. “Several adaptive algorithms [C4, C5] dynamically adjust global learning rates based on the divergence between local and global models, thereby enhancing generalization performance in federated settings.” [C4] Reddi, Sashank, et al. "Adaptive federated optimization," ICLR 2020. [C5] Tong, et al. "Effective federated adaptive gradient methods with non-iid decentralized data," *arXiv:2009.06557* (2020). 
> Theoretical justification of generalization We acknowledge that the current manuscript does not include a theoretical generalization analysis, as our primary focus is on the design of a communication-efficient optimization strategy, for which we provided a convergence analysis. That is, under the general non-convex settings, FedSMU achieves a convergence rate of $\mathcal{O}(\frac{1}{\sqrt{T}})$, where $T$ is the total number of communication rounds. This theoretical result matches with the convergence rates of existing FL algorithms. Though a theoretical generalization analysis is not included, FedSMU’s generalization ability is thoroughly validated through extensive experiments. Key designs contributing to this include: 1) the symbolization of local updates, which normalizes client contributions and enhances Magnitude Uniformity (MU), helping alleviate data heterogeneity; and 2) the split-Lion optimizer, which decouples updates into local and global components, combining stability and efficiency to further improve generalization. > Typos Thank you for your correction, and we will correct it to "reducing". > Notations Thank you for your suggestion. We will incorporate the missing symbol definitions into Table 1. Specifically, we will clarify that $\beta_1$ and $\beta_2$ denote the momentum coefficients, while $\gamma_2$ represents the weight decay factor.
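The split-Lion design described in this rebuttal — a client-side sliding average ($\beta_1$, $\beta_2$) whose sign is uploaded, and a server-side update with decoupled weight decay ($\gamma_2$) — could be sketched as follows. This is an illustrative assumption about how the split might look, not the authors' implementation; the hyperparameter values are hypothetical:

```python
def client_step(m, grad, beta1=0.9, beta2=0.99):
    """Client side: interpolate momentum and gradient (Lion-style),
    upload only the sign (1 bit/parameter); momentum stays local."""
    c = [beta1 * mi + (1 - beta1) * gi for mi, gi in zip(m, grad)]
    m_new = [beta2 * mi + (1 - beta2) * gi for mi, gi in zip(m, grad)]
    signs = [1 if ci > 0 else -1 if ci < 0 else 0 for ci in c]
    return signs, m_new

def server_step(w, sign_updates, lr=0.01, gamma2=0.1):
    """Server side: average client signs and apply a Lion-style
    decoupled weight decay (gamma2) to the global model."""
    n = len(sign_updates)
    return [wi - lr * (sum(col) / n + gamma2 * wi)
            for wi, col in zip(w, zip(*sign_updates))]
```

The point of the split is that the stabilizing sliding average runs where the gradients are (on the client), while the regularizing weight decay runs where the global model is (on the server), and only 1-bit signs cross the network.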
Summary: This paper proposes a new federated learning algorithm, FedSMU, designed to reduce communication costs and mitigate data heterogeneity. The key idea is to transmit only the sign of local updates for each parameter. Both theoretical analysis and experimental results are provided. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: No. Supplementary Material: No. Relation To Broader Scientific Literature: This work may contribute to more efficient communication in federated learning. Essential References Not Discussed: No. Other Strengths And Weaknesses: The motivation and core design of the proposed algorithm are clearly articulated. However, it is unclear what trade-offs are made to achieve communication savings. Other Comments Or Suggestions: NA Questions For Authors: 1. A key concern is that quantizing local updates to one bit may cause the averaged gradient direction to deviate from the steepest descent, potentially increasing the number of communication rounds. 2. The paper claims that FedSMU reduces communication costs by transmitting only one bit per local update for each parameter. However, if this leads to more communication rounds, additional overhead—such as more frequent broadcasts—may offset the savings. In Figure 2, does the reported communication cost include both upload and download bits, or only the upload? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments. In the following, we have provided our detailed responses to these comments. > Comment 1: The motivation and core design of the proposed algorithm are clearly articulated. However, it is unclear what trade-offs are made to achieve communication savings. **Response:** We thank the reviewer for the insightful comment. In our FedSMU, the primary trade-off made to achieve communication efficiency is the loss of precision in the model updates incurred by the 1-bit symbolization, which introduces the quantization noise. This may lead to a slight increase in the number of communication rounds required to reach a target accuracy, as shown in Figure 3 in the Appendix. However, we would like to emphasize that each communication round in FedSMU incurs a significantly lower communication cost compared to other methods. As a result, even with more communication rounds, the overall communication overhead remains substantially lower, as will be further discussed in our response to Question 2. We will also clarify this trade-off in the revised manuscript. > Question 1: A key concern is that quantizing local updates to one bit may cause the averaged gradient direction to deviate from the steepest descent, potentially increasing the number of communication rounds. **Response:** We appreciate the reviewer’s concern, and acknowledge that quantizing local updates to 1-bit may introduce deviation from the averaged gradient direction, which in turn may result in a larger number of communication rounds needed for convergence, as shown in Figure 3 in the Appendix. However, we would like to emphasize that our FedSMU significantly reduces the communication cost per communication round, which originates from its highly compressed 1-bit update representation. 
As a result, even if more communication rounds are needed for convergence, the total communication cost remains substantially lower than that of existing methods, as will be further discussed in our response to Question 2. > Question 2: The paper claims that FedSMU reduces communication costs by transmitting only one bit per local update for each parameter. However, if this leads to more communication rounds, additional overhead—such as more frequent broadcasts—may offset the savings. In Figure 2, does the reported communication cost include both upload and download bits, or only the upload? **Response:** We would like to thank the reviewer for pointing out this issue, which can be addressed as follows. First, we would like to clarify that the communication cost reported in Figure 2 includes only the uplink (client-to-server) transmission for all the comparison algorithms, which will be explicitly stated in the revised manuscript for more clarity. Second, the total communication cost per round can be generally estimated by using the following expression, as presented in [B1]: $\text{Total Communication} = \text{Uplink Communication} + \alpha \cdot \text{Downlink Communication},\ \alpha \in [0,1]$. In practice, due to factors such as system asymmetry, caching constraints and protocol limitations, the uplink speed is often significantly lower than the downlink speed, as discussed in [B2]. Consequently, many communication-efficient FL studies [B3, B4] focus on minimizing the uplink cost alone. For a more comprehensive evaluation of FedSMU, we followed the setting in [B2], set $\alpha = 0.1$, and plotted Figure B1 (https://anonymous.4open.science/r/A-7823/F.pdf) to compare the total communication cost including both upload and download bits. It shows that FedSMU remains communication-efficient even when accounting for the downlink overhead, with a significantly lower total cost compared to the other baselines at comparable accuracy levels. 
[B1] Condat, Laurent, Ivan Agarský, and Peter Richtárik. "Provably doubly accelerated federated learning: The first theoretically successful combination of local training and communication compression," *arXiv preprint arXiv:2210.13277* (2022). [B2] Condat, Laurent, et al. "Tamuna: Doubly accelerated federated learning with local training, compression, and partial participation," *International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS,* 2023. [B3] Li, Xiaoyun, and Ping Li. "Analysis of error feedback in federated non-convex optimization with biased compression: Fast convergence and partial participation," *International Conference on Machine Learning.* PMLR, 2023. [B4] Richtárik, Peter, Igor Sokolov, and Ilyas Fatkhullin. "EF21: A new, simpler, theoretically better, and practically faster error feedback," *Advances in Neural Information Processing Systems* 34 (2021): 4384-4396.
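The total-cost expression from [B1] used in this rebuttal can be made concrete with a small sketch. The round counts, model size, and client counts below are hypothetical, chosen only to show why a 1-bit uplink wins even when it needs twice as many rounds as a 32-bit scheme:

```python
def total_communication(rounds, n_params, clients_per_round,
                        uplink_bits, downlink_bits=32, alpha=0.1):
    """Total cost per the formula Total = Uplink + alpha * Downlink,
    alpha in [0, 1], accumulated over all rounds and sampled clients."""
    uplink = rounds * clients_per_round * n_params * uplink_bits
    downlink = rounds * clients_per_round * n_params * downlink_bits
    return uplink + alpha * downlink

# Hypothetical comparison: 1-bit uplink with 2x the rounds vs. a
# full-precision 32-bit uplink with half the rounds.
one_bit = total_communication(rounds=200, n_params=10**6,
                              clients_per_round=10, uplink_bits=1)
full_prec = total_communication(rounds=100, n_params=10**6,
                                clients_per_round=10, uplink_bits=32)
```

With $\alpha = 0.1$ the downlink contributes little, so the 32x per-round uplink saving dominates the extra rounds.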
Summary: In this paper, the authors propose the FedSMU algorithm to address both communication cost and data heterogeneity challenges in federated learning, through sign-based model compression. ## update after rebuttal I continue to favor acceptance and will leave my rating unchanged. Claims And Evidence: 1. The motivation is unclear, particularly the deep rationale connecting the three major challenges. The paper integrates multiple techniques, including model update symbolization/sign operation/1-bit compression, sliding average, MU Index, and model compensation. However, each technique addresses a different challenge, and there is no clear connection between them. 2. The paper attempts to tackle multiple issues simultaneously, making its main contributions unclear. I strongly recommend that the authors refine the focus of the paper and clearly highlight its core contribution. 3. Regarding communication efficiency, few-shot or even one-shot FL methods [1][2] have been proposed. The authors should discuss these works in relation to their approach. 4. Client sampling in FL is another important issue [3][4], but it is not adequately addressed in the paper. From my understanding, partial client participation is a form of $\textbf{biased}$ client sampling. However, the authors only consider smaller sampling rates. A discussion on how biased client selection impacts the proposed method would be beneficial. 5. Several low-bit model quantization methods [5][6] have been proposed. Can sign-based 1-bit quantization be effectively applied to larger models? Is the quantization error within an acceptable range when using LLMs? 6. As shown in Figure 2, FedSMU exhibits large fluctuations in the early stages, which appears to contradict the findings in Figure 1(c). Can the authors elaborate on this discrepancy? 7. The evaluated datasets and models are not representative of real-world applications. 
To fully assess the proposed method, I recommend evaluating it on larger models such as ViT and RoBERTa and larger datasets such as DomainNet and GLUE. 8. The writing needs improvement to meet the standards of a prestigious conference like ICML. The sentence flow is disjointed (e.g., Line 23-24 on the left, Line 25 on the right). Some sentences are overly long (e.g., Line 19-27 on the left). Additionally, there are ambiguous terms, such as “values” in Line 44 on the right—how do smaller values reduce data transmission costs? Do you mean precision? Furthermore, multiple keywords with similar meanings (e.g., symbolic/sign/1-bit compression) should be consolidated for clarity. Sometimes the past tense is used as well. 9. I feel a big gap between the contribution summary and the paragraph from Line 70-90. [1] Zhang, J., Karimireddy, S. P., Veit, A., Kim, S., Reddi, S., Kumar, S., & Sra, S. (2020). Why are adaptive methods good for attention models?. Advances in Neural Information Processing Systems, 33, 15383-15393. [2] Ahn, K., Cheng, X., Song, M., Yun, C., Jadbabaie, A., & Sra, S. (2023). Linear attention is (maybe) all you need (to understand transformer optimization). arXiv preprint arXiv:2310.01082. [3] Chen, W., Horvath, S., & Richtarik, P. (2020). Optimal client sampling for federated learning. arXiv preprint arXiv:2010.13723. [4] Cho, Y. J., Wang, J., & Joshi, G. (2020). Client selection in federated learning: Convergence analysis and power-of-choice selection strategies. arXiv preprint arXiv:2010.01243. [5] Ma, S., Wang, H., Ma, L., Wang, L., Wang, W., Huang, S., ... & Wei, F. (2024). The era of 1-bit llms: All large language models are in 1.58 bits. arXiv preprint arXiv:2402.17764, 1. [6] Malekar, J., Elbtity, M. E., & Zand, R. (2024). Matmul or No Matmul in the Era of 1-bit LLMs. arXiv preprint arXiv:2408.11939. Methods And Evaluation Criteria: See Claims And Evidence. 
Theoretical Claims: The paper assumes that the stochastic gradient is an unbiased estimator of the full gradient and that its variance is bounded. However, these assumptions do not align well with recent findings on minibatch gradient distributions in transformer-based models (as mentioned above, I am concerned that the method may not work well with LLMs). Specifically, prior research ([7-8]) has shown that minibatch gradients in attention-based models follow a heavy-tailed distribution rather than a Gaussian-like distribution with bounded variance. This discrepancy raises concerns about the applicability of the theoretical results, as heavy-tailed gradient noise can significantly impact convergence behavior. A discussion on how the proposed method handles such distributions would strengthen the paper. [7] Zhang, J., Karimireddy, S. P., Veit, A., Kim, S., Reddi, S., Kumar, S., & Sra, S. (2020). Why are adaptive methods good for attention models?. Advances in Neural Information Processing Systems, 33, 15383-15393. [8] Ahn, K., Cheng, X., Song, M., Yun, C., Jadbabaie, A., & Sra, S. (2023). Linear attention is (maybe) all you need (to understand transformer optimization). arXiv preprint arXiv:2310.01082. Experimental Designs Or Analyses: See Claims And Evidence. Supplementary Material: I checked some additional experiments. Relation To Broader Scientific Literature: no Essential References Not Discussed: See Claims And Evidence. Other Strengths And Weaknesses: See Claims And Evidence. Other Comments Or Suggestions: See Claims And Evidence. Questions For Authors: See Claims And Evidence. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We’d like to thank the reviewer for the comments. > Motivation & Contribution FL involves intertwined challenges where improving one may worsen another. For example, communication-efficient methods may reduce generalization, methods addressing heterogeneity often increase communication, and those balancing both may assume full client participation. We aim to design an algorithm addressing both challenges while remaining effective under partial participation. To this end, we introduce the Magnitude Uniformity (MU) index, inspired by Jain’s fairness index, to quantify the consistency of client contributions. We observe that higher heterogeneity reduces MU and degrades generalization. To mitigate this, we propose model update symbolization, which normalizes update magnitudes to enhance MU, thus implicitly reducing heterogeneity and communication. To address potential accuracy loss from compression, we integrate a sliding average mechanism (inspired by Lion) for improved stability. Unlike existing methods that trade off one aspect for another, our core contribution is to jointly improve communication efficiency and generalization, while supporting partial client participation. > Few-shot FL One-Shot FL typically trains a global model with a single communication round by ensembling and distilling client models using public data [A1, A2]. While FedSMU takes a multi-round paradigm, it shares similar goals and can potentially incorporate the One-Shot strategy. To the best of our knowledge, [1, 2] are methodologically different: [1] proposes ACClip for handling heavy-tailed gradients in Transformers, and [2] studies linear Transformers for regression. We show that ACClip is compatible with FedSMU, with performance improved by FedSMU-ACClip in Table A1 (https://anonymous.4open.science/r/A-7823/F.pdf). [A1] M. Hasan, et al. "Calibrated one round federated learning with Bayesian inference in the predictive space," AAAI 2024. [A2] N. Guha, et al. 
"One-shot federated learning," *arXiv:1902.11175* 2019. > Client sampling We assumed uniformly random participation (unbiased sampling), which allows us to isolate and study the effects of data heterogeneity and communication. Though client sampling is not our main focus, FedSMU is still compatible with biased sampling. We combine it with loss-based client selection [3], and show in Table A2 that FedSMU with biased sampling improves performance. > Larger models and larger datasets, e.g., DomainNet and GLUE We conducted additional experiments on CIFAR-100 with a ViT-Small model. Four strong baselines are compared: FedAvg, SCAFFOLD, FedEF-TopK and FedLion. Table A3 shows that FedSMU consistently outperforms them on this larger model, providing strong evidence that our sign-based 1-bit quantization approach generalizes well to more complex models. Due to time constraints, we have not yet evaluated FedSMU with LLMs on RoBERTa or GLUE, but consider this a promising direction for our future work. We will also include a discussion of centralized low-bit quantization methods [5, 6] in Related Work, to better position our contribution within the FL context. > Fluctuations in early stages We clarify this apparent discrepancy from the following perspectives. 1) Figs. 1(c) and 2 present different metrics. Fig. 1(c) shows the MU index over communication rounds, while Fig. 2 plots the test accuracy against cumulative communication cost. As such, the x-axes differ in both scale and meaning, and are not directly comparable. 2) In Fig. 1(c), FedSMU maintains a stable MU from the start due to symbolization, while FedAvg gradually improves. In Fig. 2, the early accuracy fluctuations in FedSMU are due to the initial quantization noise when the model is still far from convergence. These fluctuations diminish as training progresses and the symbolized updates begin to stabilize the training process. 
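The Magnitude Uniformity (MU) index referenced in this rebuttal is described only as "inspired by Jain's fairness"; the sketch below therefore assumes it mirrors Jain's fairness index applied to client update magnitudes (e.g., their norms), which is an illustrative assumption rather than the paper's exact definition:

```python
def mu_index(update_norms):
    """Jain's-fairness-style uniformity index over client update
    magnitudes: 1.0 when all clients contribute equally, approaching
    1/n when a single client dominates the aggregate."""
    n = len(update_norms)
    return sum(update_norms) ** 2 / (n * sum(x * x for x in update_norms))
```

Under symbolization, every client's update has identical per-parameter magnitude, which drives such an index to its maximum of 1 regardless of data heterogeneity.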
> Theoretical assumption We acknowledge that recent studies suggest that mini-batch gradients in attention-based models may follow heavy-tailed distributions, thus challenging the standard assumptions in FL, such as the bounded variance. In this paper, our theoretical analysis is based on the widely adopted assumptions in centralized and federated learning, particularly for CNNs and LSTMs, which follows the assumption in [A3, A4]. Besides, our evaluation on larger models empirically shows that FedSMU still performs well on ViT-S, which might indicate a practical robustness to the non-Gaussian gradients. Finally, we totally agree that better aligning the theory with behaviors of Transformer-based models is an important direction, and will briefly discuss this in the revised theoretical section. [A3] X. Li, et al. "Analysis of error feedback in federated non-convex optimization with biased compression: Fast convergence and partial participation," ICML 2023. [A4] X. Huang, et al. "Stochastic controlled averaging for federated learning with communication compression," *arXiv:2308.08165* 2023. --- Rebuttal Comment 1.1: Comment: Thanks for the additional experiments and clarifications. Most of my concerns have been addressed, and I will increase my score. I strongly recommend that the authors discuss the quantization error in more detail in their revision, especially in the context of LLMs. --- Reply to Comment 1.1.1: Comment: Dear Reviewer ZtNk, We would like to express our gratitude once again for the time and effort you have dedicated in reviewing our paper, as well as in the rebuttal and discussion phases. Your insightful comments are invaluable for enhancing our work. In the final version of the manuscript and supplementary material, we will incorporate these additional experiments and related works, as well as expand our discussion on the quantization error, particularly in the context of LLMs. 
Specifically, prior quantization methods for LLMs [5, 6] primarily focus on compressing the weights and activations, where the quantization errors may significantly impact the MatMul operations (e.g., in the attention heads). In contrast, our work targets gradient quantization in FL, where the use of 1-bit symbolization emphasizes the direction of updates over their precision. Our preliminary results when using the ViT-Small model in Table A3 (https://anonymous.4open.science/r/A-7823/F.pdf) indicate that the induced quantization error has a minimal impact on the performance of a transformer model. We believe that combining our gradient quantization scheme with established low-bit techniques for weights and activations may offer a compelling path towards fully quantized LLMs in federated environments. Exploring this combination is a natural next step and a promising direction for our future work.
Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings
Accept (oral)
Summary: This paper presents a new problem of predicting whether the model's performance on a runtime dataset (called user dataset) has decreased compared to the test dataset. To tackle this problem, they introduce a method called suitability filter, which consists of two main steps: 1. Several sample-level performance scores from the literature are computed and fused together with a learned regression model. The output of the regression model should predict correctness of individual samples. 2. A statistical test is conducted on fused sample signals to predict whether the whole user dataset differs in performance from the test dataset. The authors present several theoretical proofs regarding the efficiency of their approach under restrictive assumptions. They conduct experiments to test their approach on benchmark datasets from WILDS. Claims And Evidence: In view of the experimental results, the method seems effective in practice. My doubt is regarding the applicability/usefulness of the theoretical results. How are these bounds used in practice? Are they actually verified from the experiments? I did not find answers to these questions in the manuscript. Methods And Evaluation Criteria: The proposed evaluation is sufficient and rigorous. They do not compare with existing literature, but this is acceptable as the problem presented is new. However, I would have liked to see an ablation study to see the importance of the two contributions. For instance, I would have liked to see how their signal fusion approach works to detect errors on individual samples. As this is a widely studied problem, this would have allowed for comparison against the literature. Theoretical Claims: I did not check the details of the proofs. Experimental Designs Or Analyses: The experiments are sufficient and well designed. However, I believe that Section 5.1 is really hard to follow. The experimental description should be simplified. 
Why introduce new variables (n, m, k) to present the experimental setup? It would be much clearer to simply replace these values with the true values that were actually used in the experiments. Supplementary Material: I checked the part related to the experimental descriptions and additional experiments, but I just skimmed through the proofs. Relation To Broader Scientific Literature: The literature on runtime monitoring of neural networks is becoming very broad, but the literature review proposed in the paper is sufficient to introduce the new problem and present the novelty of the work. Essential References Not Discussed: Nothing specific Other Strengths And Weaknesses: The ideas presented in this paper are very interesting. Overall the presented framework is generic and modular, which is good. However, I sometimes felt like the notation and concept presentations were overly complex, even for simple ideas, making the paper hard to follow. Regarding the introduction, I think it would have been worth discussing further the relevance and applicability domain of the newly introduced problem. Unlike most existing methods, they propose to not consider data suitability at the sample level, but rather at the dataset level, by aggregating acceptance/rejection signals. This is an interesting take, which suits some applications but not all. For example, the credit risk model presented in the introduction is a good use case where dataset-level suitability is interesting. However, other cases such as autonomous systems might not respond very well to such kinds of safety monitors. It would be good to give more examples and to delineate applicability scenarios in the introduction. Other Comments Or Suggestions: - This work focuses only on classification tasks. This should be reflected in the title: Suitability Filter: A Statistical Framework for Classification Model Evaluation in Real-World Deployment Settings. 
- Definition 3.2: The use of both "if and only if" and "with high probability" in the same sentence seems contradictory. - Why the name Dsf? What does it stand for? - In eq 2 and 3, the same notation (x1) represents different things. - I am not sure I understand the meaningfulness of Theorem 4.2, its practical relevance, and the insights that it brings. Is the assumption of independent and normally distributed samples really relevant? Where is it used later on? If it is only used in the proofs, maybe it should be placed in the appendix. - Typo in caption of table 1: =rate - Two different definitions associated with parameter alpha: in the experiments it is the FP rate whereas in Theorem 4.2 it is different. Questions For Authors: See questions and comments from other sections. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their time spent assessing our paper and for their critical and detailed feedback. We particularly appreciate their positive evaluation of our work’s novelty, the framework’s modularity and our experimental evaluation. We provide our answers to the reviewer’s raised concerns below, are looking forward to their response, and hope for a favorable reassessment of our score. **C1: Practical applicability of theoretical results.** As can be seen in Figure 5, margin adjustments become increasingly important the further away we move from the test set accuracy. On FMoW-WILDS, adjusting a 5% margin with e.g., 500 labeled user samples reduces the false positive rate from unbounded to below 0.03, satisfying a 0.05 significance level. The number of labelled samples available plays a crucial role here: the higher the number of samples, the more accurate the estimates of $\Delta_u$ and thus, the more precise the margin adjustment that leads to theoretical guarantees. **C2: Importance of the prediction correctness estimator for the end-to-end suitability filter performance.** Our prediction correctness estimator leverages multiple suitability signals—several of which have been independently proposed in the selective classification literature (see Section 2)—yielding performance on par with or better than any single signal for misclassification prediction. The contribution of the statistical test to the end-to-end suitability filter performance is orthogonal to this, as it allows for the aggregation of prediction correctness estimates across data samples. **C3: Clarity of Experimental Setup (Section 5.1).** We thank the reviewer for remarking that Section 5.1 is hard to follow and propose to rewrite the section as follows: "We evaluate the suitability filter on FMoW-WILDS, CivilComments-WILDS and RxRx1-WILDS. 
For each dataset, we follow the recommended training paradigm to train a model using empirical risk minimization and the pre-defined $D_{\text{train}}$. We then further split the provided in-distribution (ID) and out-of-distribution (OOD) validation and test splits into folds as detailed in Appendix A.2.2 (16 ID and 30 OOD folds for FMoW-WILDS, 4 ID and 8 OOD folds for RxRx1-WILDS, and 16 ID folds for CivilComments-WILDS). We conduct two types of experiments: first, each ID fold is used as the user dataset ($D_u$​), and the remaining ID data is split into 15 subsets, used as $D_\text{test}$ and $D_\text{sf}$​. This yields 16×15×14 experiments for FMoW-WILDS, 4×15×14 for RxRx1-WILDS, and 16×15×14 for CivilComments-WILDS. Second, each OOD fold is used as $D_u$, and the ID data is split into 15 subsets, used for $D_\text{test}$ and $D_\text{sf}$​​. This yields 30×15×14 experiments for FMoW-WILDS and 8×15×14 for RxRx1-WILDS." **C4: Delineation of applicability scenarios.** We thank the reviewer for highlighting the potential limitations of our suitability monitors in autonomous systems. We agree that for critical applications, focusing on average-case suitability is not sufficient. Instead, ensuring good performance on a per-instance (or worst-case) basis is crucial, a problem setting that is addressed in the selective classification literature. Our work, however, focuses on scenarios where average-case suitability is the primary concern. We will ensure to clarify this in future versions of the paper. **C5: Possibly contradictory statement in Definition 3.2.** We respectfully disagree with the concern that using “if and only if” and “with high probability” in the same sentence is contradictory. As per Definitions 3.1 and 3.2, "if and only if" defines the deterministic rule for outputting “SUITABLE,” while "with high probability" captures the uncertainty in estimating model performance (due to sampling or variance). 
Once the high-probability condition is met—i.e., the model's performance on $D_{u}$ is within a margin $m$ of its performance on $D_\text{test}$—the filter deterministically outputs “SUITABLE.” **C6: Complexity of notations and nomenclature.** We tried to strike a balance between theoretical rigor and avoiding overly complex notations. Thus, in both equations 2 and 3, $x_1$ represents the first sample of the respective dataset. $D_\text{sf}$ stands for the **s**uitability **f**ilter data that is used to learn the prediction correctness estimator. **C7: Meaningfulness of Theorem 4.2.** This theorem, drawn from established results (Lehmann et al., 1986; Wellek, 2002), underpins our theoretical analysis. The assumption of independent, normally distributed samples is standard for many tests, allowing us to use these guarantees. Here, $\alpha$ denotes the significance level, the probability of a false positive. Lastly, we will consider changing the title to reflect that our work focuses on classification only as suggested by the reviewer. We hope that we have addressed the reviewer’s concerns and are looking forward to their response.
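The decision rule discussed in C1 and C5 (aggregate per-sample correctness estimates, then test whether user performance falls more than a margin $m$ below test performance at a given significance level) can be sketched as follows. This is a minimal illustration rather than the paper's implementation; the Welch-style one-sided z-test and all function and variable names here are assumptions:

```python
import math
from statistics import mean, variance

def suitability_test(test_scores, user_scores, margin=0.05, alpha=0.05):
    """Declare SUITABLE iff user performance is, with statistical confidence,
    not more than `margin` below test performance.

    Inputs are per-sample correctness estimates in [0, 1], i.e. the kind of
    fused signals a prediction-correctness estimator would produce.
    """
    n_t, n_u = len(test_scores), len(user_scores)
    mu_t, mu_u = mean(test_scores), mean(user_scores)
    # Welch-style standard error of the difference in means.
    se = math.sqrt(variance(test_scores) / n_t + variance(user_scores) / n_u)
    # One-sided test of H0: mu_u <= mu_t - margin  vs.  H1: mu_u > mu_t - margin.
    z = (mu_u - (mu_t - margin)) / se
    # Normal-approximation p-value for the one-sided alternative.
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return ("SUITABLE" if p_value < alpha else "UNDETERMINED"), p_value
```

With enough samples, a user set whose estimated accuracy matches the test set passes, while one whose accuracy drops well below the margin does not; the significance level bounds the false positive rate, mirroring the role of $\alpha$ in the paper's discussion.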
Summary: This paper proposes a new paradigm for detecting whether the performance on unlabeled data processed during deployment falls by a certain margin below the performance on a held-out dataset sampled from the training distribution but used for evaluation. This is a novel framework for detecting performance deterioration in deployment settings under covariate shift. Claims And Evidence: The results indicate that the proposed approach shows good performance on both in-distribution and out-of-distribution experiments across different tasks and datasets. Methods And Evaluation Criteria: This paper evaluates on different tasks and corresponding benchmark datasets. The datasets for classification span land use classification, text toxicity classification, and genetic perturbation classification. Theoretical Claims: I have looked at the proofs at a high level, but they are beyond my area of expertise. I think that it would be best if someone with more expertise could kindly check the details. Experimental Designs Or Analyses: The paper uses feature groups in the benchmark datasets to understand out-of-distribution performance. This approach is sound as these groups are naturally occurring. The suitability signals seem to cover different metrics informative to the suitability filter. Supplementary Material: The supplementary material contains details on statistical hypothesis testing, performance breakdown over the different feature/covariate groups, description of benchmark datasets used in the study, etc. Relation To Broader Scientific Literature: Given the rise in deployed machine-learning models in safety-critical systems, this work is highly relevant. Essential References Not Discussed: The related works section describes different orthogonal ways to do different parts of their approach, and why and how they differ from the proposed approach. 
Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time spent assessing our paper, and appreciate their positive evaluation of our work’s relevance, novelty, and soundness.
Summary: The authors developed a framework to evaluate how well a trained model will perform when deployed for real-world inference. The framework combines ideas from distribution shift detection, selective inference, and, interestingly, dataset inference. A specific instantiation of the framework is presented and evaluated on datasets from the WILDS benchmark collection. Claims And Evidence: One claim is unwarranted: "we are able to detect performance deterioration of more than 3% with 100% accuracy". Also the claim about being agnostic to the model architecture needs to be substantiated. Otherwise, the claims are reasonable. Methods And Evaluation Criteria: Yes Theoretical Claims: I skimmed over the proofs of Lemma 4.3. and COROLLARY 4.4. There are a few assumptions introduced but they are reasonable. Experimental Designs Or Analyses: Reasonable. WILDS is a great benchmark of in-the-wild distribution shifts. Supplementary Material: I skimmed over all appendices A1-A4. Relation To Broader Scientific Literature: The presented work combines ideas from closely related topics and utilizes them to design a practical framework. Essential References Not Discussed: It would be quite helpful to relate the presented solution to the statistical view of uncertainty by Gruber et al: "Sources of Uncertainty in Supervised Machine Learning – A Statisticians’ View" https://arxiv.org/abs/2305.16703 Other Strengths And Weaknesses: This is a neat work that builds on the progress in distribution shift detection to provide a comprehensive and practical solution for making decisions on the suitability of an ML model at deployment time. Some highlights: + careful usage of statistical measures to instantiate the proposed framework and calibrate it against a target dataset. + statistical guarantees backed by theoretical analysis. + the authors discuss several practical aspects such as continuous monitoring. 
Some recommendations: - Provide more details about the model architectures used in the evaluation. Demonstrate quantitatively that your approach is agnostic to the architecture used. - Provide examples where the framework does not predict well. It is not hard to introduce deliberate perturbations to a dataset that degrade the performance significantly while passing all the checks you introduce. - Consider breaking down uncertainty into aleatoric and epistemic as proposed by Gruber et al (see reference cited above). - Provide figures to illustrate some key concepts such as calibration. Other Comments Or Suggestions: I encountered a few typos: - degredation => degradation - are no limited => not - deplyoment Questions For Authors: Can the developed framework be instantiated for evaluating adversarial robustness? Similarly, I would be curious to see if adversarially-trained models fare better in handling distribution shifts as measured by your framework. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time spent assessing our paper and for their critical and detailed feedback. We particularly appreciate their positive evaluation of our work’s practical applicability, theoretical foundation and experimental design. We provide our answers to the reviewer’s raised concerns below, are looking forward to their response, and hope for a favorable reassessment of our score. **C1: Claim of detecting performance deterioration of more than 3% with 100% accuracy.** We assume that the reviewer refers to key contribution 4 at the end of the introduction section. It should be clarified that this is not a general performance guarantee but an empirical finding that has been validated across 29k experiments on FMoW-WILDS. We have made sure to clearly mention this in an updated revision of the paper and to point interested readers to Figure 4, which visualizes this finding. **C2: Claim of model architecture agnosticism.** The proposed suitability filter framework is inherently compatible with various model architectures because it only requires the model to be a classifier, without making any other assumptions. As mentioned in the introduction, we propose a well-performing default instantiation of the filter. This instantiation uses domain-agnostic suitability signals and can be applied to any classifier that outputs logits, regardless of its underlying architecture. To show this empirically, we evaluate this instantiation on diverse architectures including a DenseNet-121, a ResNet-50 and a DistilBERT-base-uncased model. Additional details about our experiments are provided in Appendix A.2.2. We hope that this addresses the reviewer’s concern and welcome any further questions. **C3: Connection to Gruber et al.** We appreciate the reviewer’s suggestion to consider the decomposition of uncertainty into aleatoric and epistemic factors, as proposed by Gruber et al. 
However, this approach does not directly align with our primary objectives. Our method does not rely on quantifying or separating out specific sources of uncertainty; rather, it compares predictive performance deviations between the user’s dataset and the model provider’s test set. This viewpoint effectively marginalizes over varying error sources that might lead to a decrease in accuracy, including aleatoric and epistemic uncertainty. Nonetheless, we acknowledge that such a decomposition may be valuable as an alternate instantiation of suitability filters beyond accuracy (i.e., determining suitability based on deviations in uncertainty beyond a margin $m$). **C4: Adversarial examples and the limitations of suitability filters.** We thank the reviewer for their insightful comments regarding deliberate perturbations and adversarial robustness. There seem to be different angles to this concern that we would like to address separately: 1. Deliberate perturbation of examples: The reviewer correctly pointed out that a model user could deliberately perturb their data to be misclassified by the model without these perturbations being detected by the suitability filter. One of the key underlying assumptions of our framework is that both the model provider and the model user are non-adversarial and provide representative samples of their datasets. We believe this is realistic because the model user's goal is to identify the most suitable model for their actual task. Deliberately perturbing their own data samples to pass a suitability check would undermine this goal. We will clarify this assumption in the paper. 2. Using suitability filters to evaluate adversarial robustness: While we have not confirmed this experimentally, we do think it should be possible to use suitability filters to evaluate adversarial robustness. We will leave it to future work to explore suitability filters where suitability is defined based on other metrics than accuracy. 3. 
Effects of distribution shift for adversarially-trained models: Our framework could indeed be used to evaluate whether adversarially trained models exhibit greater robustness to distribution shifts. Since the suitability filter is independent of the training algorithm, it could be used to compare the suitability of two models on the same user data, one of which was trained normally and the other with adversarial training. **C5: Figures to illustrate calibration.** We visualize the effect of miscalibration and motivate the margin adjustment under accuracy estimation error in Figure 3. In Figure 5 in the Appendix, we empirically show the effect of miscalibration on the FMoW-WILDS dataset. We would be happy to include additional figures to illustrate calibration if the reviewer has a specific figure in mind. Lastly, we thank the reviewer for pointing out the typos and we will make sure to correct them in future versions of our paper. We hope that we have addressed the reviewer’s concerns and are looking forward to their response. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed response of the authors. I stay with my rating.
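C2 above argues that the default filter instantiation only needs a classifier that outputs logits, regardless of architecture. As an illustration of why that makes it architecture-agnostic, here is a sketch of two domain-agnostic per-sample signals (max-softmax confidence and negated predictive entropy, both standard in the selective classification literature the rebuttal points to). The function names are hypothetical and this is not the paper's exact signal set:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a single example's logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def suitability_signals(logits):
    """Per-sample, architecture-agnostic signals from raw classifier logits.

    Returns max-softmax confidence and negated predictive entropy, two
    signals commonly used for misclassification prediction. Any model that
    emits logits (DenseNet-121, ResNet-50, DistilBERT, ...) can feed this.
    """
    probs = softmax(logits)
    confidence = max(probs)
    neg_entropy = sum(p * math.log(p) for p in probs if p > 0)
    return {"confidence": confidence, "neg_entropy": neg_entropy}
```

A peaked logit vector yields high confidence and near-zero negated entropy, while a flat one yields low confidence; a learned regression model can then fuse such signals into per-sample correctness estimates.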
Risk Quadrangle and Robust Optimization Based on Extended $\varphi$-Divergence
Reject
Summary: This work integrates distributionally robust optimization (DRO) into the Fundamental Risk Quadrangle (FRQ) framework by using $\varphi$-divergence measures. By extending the $\varphi$-divergence measures, they are able to cover a wide range of applications in ML and finance. They derive the primal and dual representations of the risk, deviation, regret, and error, the four components of the extended FRQ, and also give a robust optimization (RO) interpretation using the dual. Finally, they provide examples to illustrate that the extended quadrangle recovers many existing quadrangles. ================================ I would like to also use this space to acknowledge that I am not an expert in FRQ and DRO. Although I work in somewhat adjacent areas, the FRQ framework is new to me. As I have already communicated with the ACs, my review will be of low confidence and my evaluation should be taken lightly. I also apologize in advance for the lack of more valuable feedback from this review. ================================ # Update after rebuttal: My last comment posted on 08 Apr 2025 was not visible to the authors (my bad). I will paste it below as my update after rebuttal: > Thanks for the response. The classification example you just detailed is very helpful, especially the difference between the large margin distribution machine and $\nu$-SVM. I think it's definitely worth including in the main text if space allows. Thanks for fixing the plot as well. > I will increase my score, as I now have a better understanding of the paper, and believe that the authors will edit appropriately to improve the readability and strengthen the practicality argument of the FRQ framework. Claims And Evidence: Yes. Methods And Evaluation Criteria: Not applicable. Theoretical Claims: I did not check the correctness of the proofs. I find the setup too technical for me and confusing in many places. 
Experimental Designs Or Analyses: This paper proposes an extension to an existing theoretical framework, therefore no experiments are needed. The authors provided a visualization of the risk identifier on a few tasks with synthetic datasets. The goal is to illustrate that a larger incurred loss corresponds to a larger weight being assigned to the data point. The figure should be made bigger though, or at least the label fonts should be made bigger. It is very hard to read even when zoomed in, let alone printed on paper. Supplementary Material: No. Relation To Broader Scientific Literature: I think the authors have done an adequate job stating their contributions in relation to existing works. I do not have enough expertise to elaborate more. Essential References Not Discussed: I do not have enough expertise to answer this question. Other Strengths And Weaknesses: I suppose it's nice to have an extended FRQ framework that covers all common applications in RO/DRO. Unfortunately, I fail to appreciate the FRQ framework itself, let alone the extension of it. I think the authors could spend a few more lines to motivate FRQ at the beginning. For instance, why do we need such a verbose framework just to fit in, say, linear regression. And why do we need another quadrangle? Perhaps I'm not understanding the basics here, so any clarifications would be appreciated. I can't tell if the material is too verbose and technical, or if the organization / writing can be improved, but I found myself very lost reading the paper starting from Section 3. I'll elaborate more on clarity later. Other Comments Or Suggestions: - In the first paragraph of Section 4.2, you said "... in the extended $\varphi$-divergence quadrangle corresponds to DRO, whereas the non-extended quadrangle corresponds to robust optimization (RO)". Should this be the other way around? 
From what I understand RO is extended but DRO is non-extended, which you also said in the contribution and which can be inferred from other parts of the paper. - Line 1115: Figure ?? Questions For Authors: - I don't think I fully understand why the non-extended $\varphi$-divergence won't work. For example, what goes wrong if we just plug the non-extended $\chi^2$-divergence into the framework? - Section 3: I don't follow the conclusion "Therefore, for some risk measures, it may be even impossible to find an appropriate space". This section also seems a bit out of place --- to me it's more of a technical issue that can be deferred to the Appendix? - Definition 4.2: it's unclear to me why you need to define the risk in terms of $\mathbb{E}[XQ]$, and why we need to restrict $\mathbb{E}[Q]=1$? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s time and effort in evaluating our work. We understand that not all reviewers will be deeply familiar with both FRQ and DRO, and we sincerely value all feedback. **Experimental designs or analyses** - We will make the fonts and figures bigger. **Addressing weaknesses** - The objective functions of learning tasks are often treated separately in each problem, such as in portfolio optimization, classification and regression. They are often termed risk, uncertainty, utility, error, loss, etc., without a clear definition. FRQ provides a general framework that connects risk, deviation, regret, and error, and bridges statistical estimation with optimization. It is most convenient to use Examples 5.2 and 6.1 to illustrate. The risk measure is used as the objective function in the large margin distribution machine, while the deviation measure is used to quantify uncertainty in portfolio optimization, and the error measure is used for least squares regression. They are connected by the axiomatic relations in Def 2.8. New quadrangles will contain new relations that connect the objective functions in some learning tasks. Given another error measure that the reviewer might think of, we can generate a new risk quadrangle with Def 2.8, e.g., Examples 5.3 and 6.2. **Addressing other comments or suggestions** - It is indeed the other way around. We will correct it. - We will correct the figure reference. **Addressing questions for authors** - The extended $\varphi$-divergence quadrangle includes the (non-extended) $\varphi$-divergence quadrangle as a special case. However, the risk measure in a (non-extended) $\varphi$-divergence quadrangle is coherent. Coherent risk measures are monotone, that is, $\mathcal{R}(X) \leq \mathcal{R}(Y)$ if $X(\omega) \leq Y(\omega)$ almost surely. This property excludes important risk measures, such as the mean-standard deviation risk measure, from consideration. 
It works to plug in the (non-extended) $\chi^2$-divergence, which generates the (coherent) $\chi^2$-divergence risk measure and the associated quadrangle in Example 5.6. The quadrangle elements can be used as objective functions for learning tasks such as classification and regression, and provide them with an interpretation of DRO based on the $\chi^2$-divergence. By extending the $\chi^2$-divergence, we recover the mean quadrangle in Example 5.2. The associated risk measure is the mean-standard deviation risk measure, which is regular but not coherent. The quadrangle elements are important - they are objective functions of Markowitz portfolio optimization, the large margin distribution machine, and least squares regression (see Example 6.1). - We will merge Sec 3 with Sec 2. We will better phrase the claims for clarification. We meant that, by the referred proposition, if a coherent risk measure takes an infinite value at one point in $\mathcal{L}^p$, then it is infinite on a dense subset of $\mathcal{L}^p$. This makes the risk measure less informative in comparing outcomes. Ideally, we want to select a space such that all considered risk measures have good behavior, that is, have finite values. However, some risk measures, such as the worst-case risk $\sup X$, are only finite in $\mathcal{L}^p$ when $p=+\infty$. After reviewing the choices of space in the literature, we settle on $\mathcal{L}^2$ in general and switch to a finite sample space when necessary. - * The restriction $\mathbb{E}[Q] = 1$ is required by the Radon-Nikodym derivative, and ensures that the following two definitions are equivalent for the (non-extended) $\varphi$-divergence: - Def 4.2: $\mathcal{R}\_{\varphi,\beta}(X) = \sup\_{Q\in \mathcal{Q}\_{\varphi,\beta}^1}\mathbb{E}[XQ]$, where $\mathcal{Q}_{\varphi,\beta}^1 = \\{ Q \in \mathcal{L}^2: \mathbb{E}[Q]=1, \mathbb{E}[\varphi(Q)]\leq \beta\\}$. 
- Def 2.3: $\mathcal{R}\_{\varphi, \beta}(X) = \sup\_{P\in\mathcal{P}\_{\varphi,\beta}} \mathbb{E}\_P[X]$ where $\mathcal{P}\_{\varphi,\beta} = \\{ P\in \mathcal{P}(\Sigma) : D\_\varphi(P||P\_0) \leq \beta \\}$. By the Radon–Nikodym Theorem, since $P$ is absolutely continuous w.r.t. $P_0$, there exists a measurable function $Q\geq0$ such that $P(A) = \int_A Q dP_0$. $Q$ is the Radon–Nikodym derivative, denoted by $dP/dP_0$. Note that $\mathbb{E}\_{P_0}[Q]=\int_\Omega QdP_0=P(\Omega)=1$. * The definition in terms of $\mathbb{E}[XQ]$ facilitates connections to other elements: - Letting $\varphi$ be an extended divergence function, we obtain the extended $\varphi$-divergence quadrangle. In Def 4.2, extending $\varphi$ to the negative domain allows $Q$ to be negative. In contrast, it does not change Def 2.3. - Removing $\mathbb{E}[Q]=1$, we obtain the regret measure in Eq (2). We will incorporate the discussion into Def 2.3, 4.2 and Sec 4.2 for clarification. We hope our reply effectively addresses the reviewer's concerns. We are happy to answer any further questions the reviewer may have. --- Rebuttal Comment 1.1: Comment: Thank you for the very detailed explanations. I think I understand the value of FRQ and the technicalities a bit better now. The utilities gained section for reviewer gFjF is also very helpful. I also agree that the mathematics and the framework are both quite elegant. I still think Section 3 should be deferred instead of merged into Section 2, which is already pretty heavy with all the definitions. Instead, I would include perhaps the actual quadrangle figure (Fig 2) and/or an elaboration of why we should care about the FRQ in the first place, using this space. I think this might help the readers transition into Section 4, with an appreciation and solid understanding of FRQ, before entering the extended FRQ. Regarding the motivation: I agree that "risk, loss, uncertainty" are often used interchangeably. 
It's probably a good idea to properly define and connect them. But I'm not entirely sold on how much we can practically benefit from such a verbose framework, just to connect them. You mentioned in your response to reviewer gFjF that one of the benefits of FRQ is that it allows us to solve an equivalent optimization problem when given one. Do you often end up with one that's easier to solve? More minor suggestions regarding Figure 1 - I think you can remove the words "Risk envelope" in the caption of all 3 subfigures and just say it in the main caption - Instead of giving the actual values of the optimal solution (decision line, portfolio weights, etc.) which you already do in the Appendix, it might be helpful to use this space to summarize the relationship between the color intensity and optimality. And instead of just saying darker = higher $Q^*$, it'd be helpful to explain what a higher $Q^*$ means in each of the applications. You did this in Appendix J, but I think it's better to just summarize it in the main text / caption. This would make the readers appreciate the (extended) FRQ more, as normally you'd just solve a classification problem and get a line of separation, but here you also obtain values measuring how "difficult" the examples are via FRQ, in a systematic way. To me, the benefit here is that for classification, the "margin" measures "difficulty", which is probably proportional to the $Q^*$ values you compute for that application. For a different application, one would need a different measure of difficulty. With the help of FRQ, this measure is streamlined into just $Q^*$. Am I understanding this correctly? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for appreciating the elegance of the framework, offering detailed suggestions, and raising points that led us to reflect more carefully on the relations between difficulty, margin, and weight. - We will move Sec 3 to the appendix and bring some discussion on FRQ from the appendix into Sec 2. 
- It is often easier to solve the minimization problem than the minimax problem (Sec 6). The inner maximization in the minimax formulation is over an ambiguity set $\mathcal{Q}$. When the sample size is $n$, $Q \in \mathcal{Q}$ can be represented as an $n$-dimensional variable, where each element is a weight on a sample. A large $n$ may lead to numerical issues in optimization. In contrast, the primal representation involves only two variables, $t$ and $C$, which in some cases can be solved explicitly and eliminated. E.g. robust expected margin maximization (Example 6.1) involves $m+n$ optimization variables, where $m$ is the dimension of $(\boldsymbol{w}, b)$ and $n$ is the sample size. The equivalent large margin distribution machine is a mean–standard deviation minimization, which involves only $m$ variables and is convex with respect to the negated margin $-L$. Standard algorithms can then be applied. - We will move "Risk envelope" from the subfigure captions to the main caption. - We will explain $X$ and $Q$ in each application in the caption. The random variable $X$ (not to be confused with feature $\boldsymbol{X}$) can be interpreted as the difficulty of a sample, while $Q^*$ represents the corresponding weight. $Q^*$ depends on $X$. In different applications, the difficulty is measured differently. $Q^*$ changes accordingly. - We illustrate with classification, where random variable $X$ is the negated margin $-L$. - Following notation in Sec 6, the margin is $L(\boldsymbol{w}, b) = Y(\boldsymbol{w}^\top \boldsymbol{X} - b)$. The margin is proportional to the signed distance from the feature point $\boldsymbol{X}$ to the decision boundary, which has normal vector $\boldsymbol{w}$ and signed distance $b$ from the origin. A classification problem minimizes $\mathcal{R}(-L)$, a risk measure of the negated margin. The decision rule assigns label $1$ if $\boldsymbol{w}^\top \boldsymbol{X}>b$, and $-1$ otherwise. Points with negative margin are misclassified. 
- Consider the minimax formulation $\min_{\boldsymbol{w},b}\max_{Q\in\mathcal{Q}}\mathbb{E}[-QL]$. $\mathbb{E}[-Q L]$ can be viewed as a weighted sum of negated margins. The solution $Q^*$ (risk identifier) represents the worst-case weights on samples from the risk envelope $\mathcal{Q}$, which maximize the weighted sum given $(\boldsymbol{w},b)$. Thus, samples with smaller margins $L$ are assigned larger weights. - Proposition 4.9 derives the expression for the risk identifier associated with a given $\varphi$. In classification, $Q^*$ satisfies $Q^* \in \partial \varphi^*\left( -\frac{1}{t^*}L - C^* \right)$. To obtain $Q^*$, one can solve the easier problem $\mathcal{R}(-L)$ to obtain $(\boldsymbol{w},b)$, then apply Proposition 4.9 to calculate it. - The margin can be interpreted as a measure of difficulty: the smaller the margin, the farther a misclassified point (if it exists) lies from the decision boundary, and the harder it is to classify correctly. With different risk measures, margins are weighted differently. - In the large margin distribution machine, $\partial \varphi^*(z) = \frac{1}{2}z + 1$ (Example 5.2). Thus, $Q^*$ is linear in the margin $L$. - In $\nu$-SVM, $\partial \varphi^*(z) = 0$ if $z < 0$; $[0, (1 - \alpha)^{-1}]$ if $z = 0$; and $(1 - \alpha)^{-1}$ if $z > 0$ (Example 5.3). Roughly, it assigns $(1 - \alpha)^{-1}$ to the samples with the largest $(1-\alpha) \times 100\\%$ negated margins, and zero to the rest. The intuition is that it minimizes the average of the largest negated margins. - In other applications in Sec 6, such as portfolio optimization, a similar interpretation holds. The loss is $\boldsymbol{w}^\top \boldsymbol{L}$, and samples with larger losses are assigned larger weights $Q$. In mean–standard deviation optimization, weights are linear in the loss; in CVaR optimization, the largest-loss samples are assigned weight $(1 - \alpha)^{-1}$, as in $\nu$-SVM. - Anonymous link: https://limewire.com/d/anoi8#5HT5Nevtm8. 
(Captions removed due to policy.) - Fig 1 is updated: we set $\beta = 1$ (vs. varying values in the appendix). Color intensity in Fig. 1(a) was mistakenly reversed; now the decision is $1$ on the left, $-1$ on the right, and misclassified samples have higher weights. - We added visualizations of risk envelopes for Example 6.2 ($\alpha = 0.1$). Fig 2(a)(b)(c) correspond to $\nu$-SVM, CVaR portfolio optimization, and quantile regression, respectively. Fig 1(a) and 2(a) share one dataset; the others use another. Further details will be provided in the appendix. - The missing variable $b$ will be added to Eqs. (13), (16), and (17).
Summary: This paper discusses the extended phi-divergence risk measure Claims And Evidence: It is hard for me to understand the authors' claims, perhaps due to the imprecise writing of this manuscript. The risk measure and the study of phi-divergence have been extensively covered in the literature, such as [1]. The numerical study is also hard to interpret, and we cannot see which framework will work well in practical applications such as regression, classification, and portfolio optimization. [1] Shapiro A, Dentcheva D, Ruszczynski A. Lectures on stochastic programming: modeling and theory. Society for Industrial and Applied Mathematics, 2021. Let me elaborate in the following: 1. It seems Theorem 4.4 is based on Ang et al. (2018); Sun et al. (2020). I hope the authors could elaborate on the impact and novelty here. 2. It seems Theorem 4.6 is an application of the dual representation of DRO. Is that true? 3. It seems Proposition 4.9 is a restatement of the optimality condition for phi-divergence DRO. 4. I hope the authors can elaborate more on their experiment setup and results. Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our manuscript. Below are our point-by-point responses. - **Summary:** The key concept of our paper is the extended $\varphi$-divergence quadrangle. The extended $\varphi$-divergence risk measure is merely one of the four elements in the quadrangle. We kindly refer the reviewer to our reply to Reviewer eEdZ for a brief motivation of FRQ. - **Reference:** The provided reference [1] is already cited in our paper, along with some other insightful papers by the same authors. We would like to point out that [1] only studies coherent risk measures generated by divergences. Our paper extends the notion of $\varphi$-divergence to define a new risk measure, and studies the other elements in the corresponding quadrangle along with the new risk measure, none of which appears in [1]. - **Thm 4.4:** The novelty lies in that we study regular measures instead of coherent measures, and we study four elements of the quadrangle instead of just risk and regret. The proof in Appendix B consists of several parts: (i) Verify that the risk measure in Thm 4.4 satisfies the axioms in Def 2.4, namely, closedness and convexity, constancy, and risk aversity. Ang et al. (2018) proves the aversity result for coherent risk measures. We verify that the proof technique is valid for regular risk measures; (ii) Verify that the regret measure in Thm 4.4 satisfies the axioms in Def 2.6, namely, closedness and convexity, zeroness, and risk aversity; (iii) Verify that the regret and risk satisfy Q2 in Def 2.8. Sun et al. (2020) studies coherent risk and regret measures. We verify that the proof techniques are applicable to regular risk and regret measures; (iv) Derive the deviation and error using Q3 in Def 2.8; (v) Apply Thm A.1 to show that the infimum over the statistics is attainable. - **Thm 4.6:** It is more than the dual representation of DRO. (i) The considered risk measure is regular but not necessarily coherent. 
The proof of the dual representation of coherent risk measures in DRO needs to be verified. (ii) The theorem contains results on four elements of the quadrangle, more than just the risk measure. (iii) The theorem considers the $\mathcal{L}^2$ space with a general definition of the $\varphi$-divergence function. As is discussed in Sec 3, this generality requires extra technical details in the proof. - **Proposition 4.9:** In Appendix G.2, we proved Proposition 4.9 with Lemma G.5 (Bauschke & Combettes (2011)) and Proposition 8.36 in Royset & Wets (2022). The result is for both regular risk measures and regular regret measures. We welcome any reference on the optimality condition statement for $\varphi$-divergence DRO. - **Experiments:** The experiments aim at visualizing the risk envelope (Def 2.11) in classification, portfolio optimization, and regression associated with the extended $\chi^2$-divergence quadrangle (Examples 5.2, 6.1). They are the large margin distribution machine, Markowitz optimization, and least squares regression, respectively. The risk envelope can be viewed as the worst-case weight on samples in a RO/DRO problem. The visualization aids intuitive understanding of the primal problems by showing the implicit weighting on the data. It does not aim at providing a guide on which framework is better. We will add a figure for another divergence to compare with the extended $\chi^2$-divergence. The dual representation (Sec 4) can inspire future work on choosing the learning tasks. Reformulating learning tasks as RO/DRO with different ambiguity sets allows us to interpret choices of objective functions as choices of divergence functions. E.g. SVM vs. Large Margin Distribution Machine can be thought of as indicator divergence vs. extended $\chi^2$-divergence. The details are in Examples 5.3, 5.2, 6.1, 6.2. Research on which statistical divergence better quantifies ambiguity can play an important role in choosing the objective function through our framework. 
* Experiment setup: In each of the three problems (Example 6.1), we follow the same procedure. First, we simulate the data from Gaussian distribution(s). Next, we solve the primal problem with a convex optimizer. Then, we compute the worst-case weight by applying Thm 4.5. The weights, samples, and optimal solution are graphically presented. The detailed setup, including the distribution parameters, is in Appendix J. * Experiment result: The optimal solution obtained in each problem is listed in the caption. The weights and the optimal solutions confirm our intuition that in a RO/DRO problem, larger weights are assigned to samples incurring larger losses. We kindly refer the reviewer to our reply to Reviewer gFjF for a list of practical utility gained from the derived relations. We hope this response effectively addresses the reviewer's concerns. We are happy to answer any further questions the reviewer may have.
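The intuition that larger weights go to larger losses can also be checked in closed form for the mean–standard-deviation case. The sketch below uses the standard risk identifier $Q^* = 1 + \lambda (X - \mathbb{E}[X])/\sigma(X)$ for the measure $\mathbb{E}[X] + \lambda\sigma(X)$; this is an illustrative toy, not the paper's experiment code:

```python
import numpy as np

rng = np.random.default_rng(1)
loss = rng.normal(loc=1.0, scale=2.0, size=500)   # toy per-sample losses
lam = 0.5                                          # risk-aversion level

# Risk identifier for R(X) = E[X] + lam * sigma(X):
# weights are affine (linear) in the loss, with E[Q*] = 1
q = 1.0 + lam * (loss - loss.mean()) / loss.std()

risk_direct = float(loss.mean() + lam * loss.std())
risk_weighted = float(np.mean(loss * q))           # E[X Q*]
```

Note that $Q^*$ can be negative for unusually small losses, which is exactly the "extended" feature of the divergence discussed in the paper.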
Summary: There exists the notion of a Fundamental Risk Quadrangle which discusses the relationships between 4 metrics of uncertainty. Specifically, Risk, Error, Deviation and Regret. In this current paper, the authors focus on the notion of the FRQ in the context of risk measures (along with others) as derived by the standard and the extended $\phi$-divergence functions. They show that elements of the FRQ derived from such functions have equivalent representations as Robust and Distributionally Robust Optimization problems. They then present some practical examples of FRQs generated from $\phi$-divergence functions and how the resulting reformulation matches existing FRQs. They also provide a result which allows one to recover the $\phi$-divergence function from elements of an FRQ. Finally, they visually demonstrate their results on some synthetic problems. ## Update after Rebuttal I have maintained my score after the authors' response. Claims And Evidence: The authors make mainly theoretical claims in the paper where they demonstrate how using the $\phi$-divergence based functions can lead to RO/DRO formulations in the context of FRQ. These claims are backed up by convincing proofs. Methods And Evaluation Criteria: The paper is mainly theoretical. There is only one set of experiments, which is done using synthetic data and which is mainly a demonstration of the fact that the theoretically developed RO/DRO formulations make sense numerically as well. Theoretical Claims: I primarily verified the correctness of the key results such as Theorem 4.4 and Theorem 4.6. Experimental Designs Or Analyses: The paper is primarily theoretical and does not use experiments to illustrate the method beyond some simple analysis. Supplementary Material: I reviewed the proofs of the key results from the supplementary material along with the experimental setup. 
Relation To Broader Scientific Literature: The paper builds upon existing work by Rockafellar & Uryasev (2013) and shows how RO/DRO can be incorporated in the framework of the Fundamental Risk Quadrangle. It also shows how several existing risk quadrangles can be expressed as $\phi$-divergence based risk quadrangles and thus gain new interpretations as robust optimization problems. Essential References Not Discussed: None that I know of. Other Strengths And Weaknesses: Strengths 1. The idea of using $\phi$-divergence based functions to connect the FRQ to RO/DRO is novel while further demonstrating the benefits of robust optimization. 2. The authors demonstrate how the $\phi$-divergence based FRQ covers a variety of existing quadrangles. Weaknesses 1. The paper is primarily theoretical and extends the FRQ framework to RO/DRO. However, there is not much discussion of how this connection can be leveraged for practical applications. Other Comments Or Suggestions: There are several places in the paper where the references have not been compiled correctly; see Appendix G, H and J. Questions For Authors: 1. Could the authors discuss a bit how this connection can be leveraged for improving the use of these risk measures or of DRO in practical applications? 2. Can this FRQ approach also be extended to DRO problems with ambiguity sets constructed from distance metrics such as Wasserstein or Sinkhorn distances? If not, then is there some key property missing which these metrics do not have? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would love to thank the reviewer for checking the claims and the proofs, and for raising insightful questions. Below are our responses. **Practical applications.** We kindly refer the reviewer to the list of utility gained from the relations in our reply to Reviewer gFjF. **Extension.** Yes, it can be extended using results in [1]. - A risk measure can be defined by the Wasserstein/Sinkhorn ambiguity set $\mathcal{R}\_\beta(X) =\sup\_{P \in \mathcal{P}} \mathbb{E}\_P[X]$, where $\mathcal{P} = \\{P\in\mathcal{P}(\Sigma): \mathcal{W}(P||P\_0) \leq \beta \\}$. It satisfies the axioms of regular risk measures in Def 2.4. - A regret measure uniquely determines a risk measure via Q2 in Def 2.8, but the regret measure that generates a given risk measure is not unique. The challenge is to find a regret measure that makes sense, like we did in the extended $\varphi$-divergence quadrangles. Theorem 4.5 in [1] provides a full and clear picture of how to obtain the coherent regret, and Eq 4.22 in [1] provides one convenient way. However, there might not be an intuitive interpretation as many $\varphi$-divergence quadrangles have. - Our study relies on the dual representation of the extended $\varphi$-divergence risk measure $\mathcal{R}\_{\varphi,\beta}(X) = \sup\_{\mathcal{Q}\in \mathcal{Q}\_{\varphi,\beta}^1}\mathbb{E}[XQ]$, where $\mathcal{Q}\_{\varphi,\beta}^1 = \\{ Q \in \mathcal{L}^2: \mathbb{E}[Q]=1, \mathbb{E}[\varphi(Q)]\leq \beta\\}$. We obtain the extended $\varphi$-divergence regret measure by removing $\mathbb{E}[Q]=1$ (Def 4.3, Thm 4.4). Since the dual representation of the risk measure based on the Wasserstein ambiguity set is not in such a form, our technique cannot be applied directly. We hope this response effectively addresses the reviewer's concerns. We are happy to answer any further questions the reviewer may have. [1] Rockafellar, R.T. Distributional robustness, stochastic divergences, and the quadrangle of risk. Comput Manag Sci 21, 34 (2024). 
--- Rebuttal Comment 1.1: Comment: Thank you for your responses and for answering my questions. After considering them, I have decided to maintain my score.
Summary: The authors introduce a new divergence measure that allows for potentially negative weights. They then derive Fundamental Risk Quadrangles based on this divergence measure. It is shown that many known learning tasks fall within this FRQ framework with the proposed extended divergence measure. ## Update after rebuttal I appreciate the authors' response. I will leave my score unchanged. Claims And Evidence: The claims are proved in the supplementary materials. Methods And Evaluation Criteria: The authors show that many learning tasks fall within the same mathematical framework. The presented theory seems adequate for this purpose. They also demonstrate their findings with some experiments on data. Theoretical Claims: I did not thoroughly check the proofs in the appendix, but did verify some of the given example applications of their theorems. Experimental Designs Or Analyses: The design of the (small) experiment in Section 8 appears reasonable. Supplementary Material: I read SM A, and scrolled over the remainder of the supplementary material, but did not thoroughly review it. Relation To Broader Scientific Literature: In prior work, the FRQ framework was established that connects several quantities (risk, regret, deviation and error) using the $\phi$-divergence. This paper extends the definition of $\phi$-divergence and derives the corresponding quadrangle. The authors show that this extended framework contains many learning tasks previously studied in the literature. Essential References Not Discussed: - Other Strengths And Weaknesses: Strengths: * The article derives interesting relations between different criteria * The mathematics is elegant. Weaknesses: * The structure is somewhat confusing. * The paper is hard to read if you're new to DRO or FRQ. * The paper reads like an endless enumeration of theoretical relations between different optimization criteria, without much discussion of what is gained from establishing these relations. 
Other Comments Or Suggestions: It would be helpful to include the discussion of Supplementary Material A to the main text. For readers that are not familiar with FRQ, it is otherwise really difficult to understand the point of the paper. The notation $\mathcal{L}^2$ is not introduced. Are these random variables with bounded second moment? In Definition 2.2, the concept of domination (between distributions) is not introduced. Is this stochastic domination? The Radon-Nikodym derivative is not introduced/defined. Section 2.1 seems like an endless enumeration of definitions. It would be helpful to give some pointers in between about the roles of these different definitions in the framework. In Definition 2.7 (error) it says "$\forall X\neq$ const". Shouldn't this be $X\neq 0$? The placement of Section 3 seems somewhat odd. Shouldn't this just be a subsection of Section 2? It is very short and it feels like the choice for $\mathcal{L}^2$ should be motivated earlier. Perhaps it would improve the structure of the paper to focus on a few results/relations rather than having such a long enumeration of different relations and examples. Questions For Authors: It is nice and elegant that all these different quantities are related in the FRQ framework, but it is somewhat unclear what this brings. Does this allow to transfer results about one optimization problem to the other? What is the utility of these relations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for highlighting areas where additional clarification and context are needed, which greatly helps us improve the presentation of our paper. Below, we address the concerns regarding background introduction, the utility of our derived relations, and specific technical points. **Addressing weaknesses** - Following the reviewer's suggestion, we will incorporate additional explanations in the Preliminaries section for readers new to DRO or FRQ. We will also integrate relevant discussions from SM A. Due to space constraints, we kindly refer the reviewer to our response to Reviewer eEdZ for further background. - The relations studied in this paper are: $(i)$ Def 2.8, 2.9 and Thm 2.10 - projection from error $\mathcal{E}$ to deviation $\mathcal{D}$; $(ii)$ Sec 4.1 and 4.3 - dual and primal representation of a quadrangle; $(iii)$ Sec 4.4 - relation between the (non-extended) $\varphi$-divergence quadrangle and the extended $\varphi$-divergence quadrangle. We will add a paragraph to summarize the **utility gained from the relations:** - The relations $(i)(ii)$ allow one to **transfer results** about one optimization problem to the other. By relation $(i)$, a regression problem (error minimization) is equivalent to a constrained deviation minimization problem. E.g. least squares regression $\min_{f,C}||Y-f(X)-C||_2$ can be solved by minimizing the standard deviation $\min_f\sigma(Y-f(X))$ and computing $C=\mathbb{E}[Y-f(X)]$. Statistical estimation is thereby connected with deviation (and thus risk) in operations research. By relation $(ii)$, many common learning tasks can be reformulated as RO/DRO problems. E.g. Example 5.2 reformulates mean-standard deviation optimization as DRO with an extended $\chi^2$-divergence ambiguity set. Due to the equivalences, we expect future work to transfer results, such as on estimation efficiency and generalization bounds, between the problems. 
- Relation $(ii)$ reformulates minimax problems as minimization problems with an additional variable $t$, facilitating **efficient problem solving** by convex optimization. $t$ can be solved for in some cases, which further simplifies the problem, such as standard deviation minimization. - Relation $(ii)$ provides a new **statistical perspective** on comparing well-known learning tasks. Reformulating learning tasks as RO/DRO with different ambiguity sets allows us to interpret optimization criteria as choices of divergence functions. E.g. SVM vs. Large Margin Distribution Machine can be thought of as indicator divergence vs. extended $\chi^2$-divergence. The details are in Examples 5.3, 5.2, 6.1, 6.2. Research on which statistical divergence better quantifies ambiguity can play an important role in choosing the optimization criterion through our framework. - By relation $(ii)$, the solution of RO/DRO in the dual representation helps **intuitive understanding** of the primal problem, which does not explicitly contain weights. The solution to the dual problem contains the worst-case weights $Q$ on the samples, which is discussed in Sec 4.2. Different divergences lead to different $Q$. Visualizing the weights helps understand how each primal problem implicitly weighs the data. In addition to Fig 1, associated with the extended $\chi^2$-divergence, we will add a figure for another divergence as a comparison. - Relation $(iii)$ **connects RO and DRO** by showing RO is more conservative. The ambiguity set defined by the extended $\varphi$-divergence contains the ambiguity set defined by the (non-extended) $\varphi$-divergence. Thus, the functionals in the extended $\varphi$-divergence quadrangle upper bound their counterparts in the (non-extended) $\varphi$-divergence quadrangle. E.g. Example 4.8 uses the mean-standard deviation risk measure as a conservative version of the $\chi^2$-divergence risk measure. Relation $(i)$ is studied in the referred literature, while $(ii)(iii)$ are novel. 
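The least-squares instance of relation $(i)$ mentioned above can be verified numerically: minimizing the standard deviation of the residuals and then shifting by the mean residual recovers the ordinary least-squares fit. This is our own illustrative sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.0 * x + 3.0 + rng.normal(scale=0.5, size=200)

# Deviation minimization: argmin_w sigma(y - w x) has the closed form
# w* = Cov(x, y) / Var(x); then recover the constant C = E[y - w* x]
w = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
c = float(np.mean(y - w * x))

# Error minimization: ordinary least squares over (w, C) jointly
A = np.column_stack([x, np.ones_like(x)])
w_ols, c_ols = np.linalg.lstsq(A, y, rcond=None)[0]
```

The two routes agree up to floating-point error, illustrating the projection from error to deviation.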
We use the mean quadrangle as an illustrative example. More examples can be found in Sec 5.1, 5.2. **Addressing other comments or suggestions** - Yes, $\mathcal{L}^2$ is the space of RVs with bounded second moment. - A measure $P$ is dominated by a measure $P_0$ if $P_0(A)=0$ implies $P(A)=0$ for all measurable sets $A$. To avoid confusion with stochastic dominance between random variables, we will instead use the term "absolute continuity". We will introduce the Radon-Nikodym derivative in the Preliminaries. - We appreciate the reviewer for spotting the error. It is $X\neq 0$, since $\mathcal{E}$ measures the nonzeroness. - We will merge Section 3 with Section 2. We hope this response effectively addresses the reviewer's concerns. We are excited to share our findings with the community and are happy to answer any further questions the reviewer may have!
Maximizing Intermediate Checkpoint Value in LLM Pretraining with Bayesian Optimization
Accept (poster)
Summary: The paper provides an interesting Bayesian optimization based method for selecting merging weights between the latest checkpoints saved during LLM pre-training. This method showed performance speed-ups in empirical results and theoretical guarantees with reasonable assumptions. The initial pilot experiments were helpful to understand the motivation. Overall, LLM training efficiency is an important area and this is a timely paper. Claims And Evidence: Some of the claims are misplaced; for instance: 1. Contribution a): although this claim has associated results, it is not novel. For instance, [1-2] and specifically [1] already showed that LLM pre-training can be improved through checkpoint averaging. I believe not citing a key paper in one's exact area is not good practice. I fully understand that the key contribution is to find optimal weights for the latest checkpoints. I believe the authors need to re-write or clarify this part. Contributions b) and c) are well placed and have sufficient evidence. The theoretical claims [lines 198-215] about convergence and a tighter bound are all compared with regular training. Some prior works [3] have proved similar bounds and shown that LAWA [1] and SWA [2] based averaging techniques have similar bounds. I could not assess the novelty or relevance of the theoretical analysis given the prior works. [1] Sunny Sanyal, Atula Tejaswi Neerkaje, Jean Kaddour, Abhishek Kumar, and Sujay Sanghavi. 2024. _Early weight averaging meets high learning rates for LLM pre-training._ In _First Conference on Language Modeling._ (https://arxiv.org/abs/2306.03241) [2] Izmailov, P., Podoprikhin, D., Garipov, T., Vetrov, D., and Wilson, A. G. Averaging weights leads to wider optima and better generalization. Mar 2018. [3] Wang, P., Shen, L., Tao, Z., Sun, Y., Zheng, G., and Tao, D. A unified analysis for finite weight averaging. _arXiv preprint arXiv:2411.13169_, 2024b. 
Methods And Evaluation Criteria: The authors have explained the method and evaluation criteria reasonably clearly. Here are some of my concerns: The method and setup seem to be pretty similar to LAWA [1] with some changes. Not mentioning LAWA or using it as a baseline seems inappropriate. The authors have used model souping as a key baseline. In model souping, multiple models are trained simultaneously with different hyperparameters and then souped (let's call it population averaging). I am not sure why this is a good baseline for this paper, which again does tail averaging like LAWA, SWA, or EMA (exponential moving average). The authors need to clarify why their baseline is population averaging when their method is a variant of tail averaging. Theoretical Claims: I have already written about the theory claims above. I have some concerns about novelty compared to [3] (in theory). I have read the claims and skimmed over the proofs; I have not found anything wrong. Experimental Designs Or Analyses: a) The pilot studies look super interesting. b) In Table 1 the baselines are relevant, but LAWA, SWA and EMA would make these results stronger. c) The authors have used checkpoints released by various companies such as DeepSeek, Baichuan, Pythia etc. This way of evaluation is fine, but [1] has some similar evaluation using Pythia. Still, the results are interesting and the evaluation is quite robust. d) The term used in Sec 4.1, "generalization to unseen data", is a bit vague, as the pre-trained models are trained on huge internet-scale data and might have seen many of those tasks in pre-training. I understand that you are using a Chinese validation set for weight selection, but calling it OOD without running a de-contamination analysis seems unprincipled. e) Also, an explanation or analysis of linear mode connectivity among distant checkpoints could be interesting to see. 
Supplementary Material: I have looked into the supplementary results based on the prompts given in the main paper. I have also skimmed through the proofs. Relation To Broader Scientific Literature: Efficient pre-training definitely has impact. I have some concerns about very similar prior works such as [1], [2] and [3] not being discussed or treated as baselines. Essential References Not Discussed: I have some concerns about very similar prior works such as [1], [2] and [3] not being discussed or treated as baselines. Other Strengths And Weaknesses: I have already discussed these previously. Other Comments Or Suggestions: Figure 3 seems to be placed in the wrong position, as there are no references to Figure 3. Questions For Authors: 1) Please list the novel contributions compared to [1] for LLM pre-training. 2) Do you believe the pre-training perplexity would improve with checkpoint averaging? Can you show similar results? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > **Q1: Novel contributions compared to Previous Works.** > **(A) Our Focus: A Search Perspective for Pairwise Merging** - **Key Objective.** We propose to **linearly merge** two consecutive checkpoints $ \widetilde{\Theta} _t = \lambda_t\, \Theta _t \;+\; (1 - \lambda _t)\, \Theta _{t-1}, \quad \lambda_t \in [\alpha, 1], $ at each stage of large-scale pre-training, while **optimally selecting** the weight $\lambda_t$ using a **Bayesian optimization** approach. - **Why Pairwise?** As we show empirically, merging adjacent checkpoints tends to preserve or enhance performance when the models are close in training steps (rather than distant). This simplification (from merging multiple checkpoints at once to only the two most recent) *drastically reduces* the dimensionality of the search to **one** parameter, $\lambda_t$. - **Bayesian Optimization Rationale.** Empirically (**Figures 3--5** in our paper), the function $f(\lambda)$ mapping the merging weight $\lambda$ to downstream performance is **non-monotonic**. Our method: 1. Treats $f(\lambda)$ as a **black-box** function (we do *not* assume it is strictly convex or linear). 2. Uses **Gaussian Process** (GP) modeling with an acquisition function (e.g., EI, UCB) to systematically **explore** $\lambda\in[\alpha,1]$. 3. Finds the near-optimal or optimal $\lambda$ in a small number of evaluations, *even if* the function is multi-modal. **(B) Distinctions from LAWA, SWA, and EMA** 1. **LAWA**, **SWA**, and **EMA**: - Typically rely on an **internal training procedure**—for instance, SWA requires cyclical or stepwise learning rates and continuously accumulates model parameters from multiple epochs. LAWA uses large learning rates and tail-averaging from the final phase of training. EMA uses an exponential decay factor. 2. **Ours**: - **Post-hoc approach**: We do not require mid-training gradient data, specialized cyclical learning rates, or entire logs of model state. We only need: 1. 
Several discrete checkpoints (e.g., from open-source releases or a standard training pipeline) 2. A small **held-out** dataset to measure performance at different $\lambda$. - **Black-box optimization**: Instead of a *fixed* schedule (as in EMA or LAWA) or a *continuous tail average* (SWA), we *actively search* for $\lambda$ that *maximizes* the model’s performance metric. This is especially powerful for large language models whose training logs might not be fully available, and where advanced Hessian-based or continuous averaging is infeasible. Hence, although both lines of work share the notion that “averaging can yield flatter minima,” our **Bayesian search-based viewpoint** and **minimal assumptions** on training steps make our method **(i) more flexible, (ii) directly data-driven for each pairwise merge, and (iii) easy to apply when only discrete checkpoints are at hand**. > **Q2: Empirical Evidence and Comparisons** > **(A) Added Comparisons with LAWA, SWA, EMA** In the revised manuscript, we include additional baselines: | Model | C-Eval (5-shot) | CMMLU (5-shot) | MMLU (5-shot) | | --- | --- | --- | --- | | Baichuan2-2200B | 54.98 | 56.29 | 51.27 | | Baichuan2-2420B | 54.82 | 56.78 | 53.97 | | Uniform Soup | 54.93 | 56.71 | 54.62 | | Greedy Soup | 54.64 | 56.78 | **54.82** | | Fisher Weighted Avg | 54.44 | 56.62 | 54.16 | | RegMean | 54.55 | 56.46 | 54.77 | | **LAWA** | 54.96 | 56.11 | 54.11 | | **SWA** | 56.01 | 56.66 | 54.40 | | **EMA** ($\beta=0.99$) | 55.48 | 56.64 | 54.17 | | **Ours** | **56.23** | **56.97** | 54.56 | Our method either outperforms or closely matches these strategies, reaffirming the practical advantage. **(B) Pre-Training Perplexity** We also conducted additional experiments on **pre-training perplexity**. 
As shown in the table below:

| Model | CMMLU PPL | MMLU PPL |
| --- | --- | --- |
| Baichuan2-2200B | 5.46 | 4.87 |
| Baichuan2-2420B | 5.46 | 4.87 |
| Uniform Soup | 13.96 | 4.87 |
| Greedy Soup | 5.46 | 4.87 |
| Fisher Weighted Averaging | 14.17 | 12.45 |
| RegMean | 5.46 | 14.54 |
| **Ours** | **5.43** | **4.87** |

> Q3: **Generalization to "Unseen" Data:**
> We recognize the reviewer's concern that large LLMs might have encountered segments of certain benchmarks in their pre-training. We specifically mean "unseen" to refer to:
- A dataset or domain *not* used to tune $\lambda$. For instance, we tune on a *Chinese domain (C-Eval)* and evaluate on *English tasks (MMLU, GSM8K)*, ensuring cross-lingual or cross-domain checks.
- Although full de-contamination is challenging, these curated benchmarks (like MMLU, GSM8K) are standard *evaluation sets* widely regarded as indicative of "test" performance beyond the direct training set.

**Ref:** [1] Gaussian process optimization in the bandit setting: No regret and experimental design

--- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal and for running so many experiments. 1. I fully acknowledge the novelty argument, and the method is indeed novel, but it seems less useful compared to simpler baselines such as LAWA. 2. SWA relies on internal training procedures, as it changes the scheduler after 75% of the training. However, LAWA and EMA don't. For instance, Figs. 5-8 in LAWA [1] used Pythia models trained by a company without any high-learning-rate hypothesis. The argument made in that paper is that LAWA works best with a high LR. 3. I concur that this approach is more straightforward and can be applied given any checkpoints, whereas LAWA and SWA need all the checkpoints and information from the training log. 4. I need more context on how, given the limited number of checkpoints, the authors conducted the experiments in Q2. LAWA is uniform averaging of the latest checkpoints. Please provide the experimental details of the SWA, LAWA and EMA baselines. 
5. The pre-training perplexity experiment is highly non-standard; using MMLU or CMMLU to compute PPL is unknown and unseen, at least to me (please cite prior works). One could have used WikiText, C4, or any next-token-prediction style dataset for PPL evaluation. Also, please provide experimental details for this experiment. Please provide the details only in the form of text; no further experiments are needed at this point. The C-Eval and CMMLU results look strong, and I am open to improving my score to 3 if the experimental details follow the standard procedures suggested in the original papers. --- Reply to Comment 1.1.1: Comment: > **Q1: Experimental Details for SWA, LAWA, and EMA Baselines** > 1.SWA - **Rationale & Checkpoint Selection:** In our setup, the objective is to obtain the final optimal model. Thus, our baseline comparisons use the merged models from Baichuan2-2200B and Baichuan2-2420B. Since SWA is designed to average model weights during the low-learning-rate phase to converge toward a flatter region of the loss landscape, and given that we cannot access all the detailed data from the training process, we approximate this environment. Although the original SWA method recommends commencing the averaging process at roughly 75% of the training progress, we initiated the process with the checkpoint at 1760B tokens (approximately 67.7% progress) to facilitate averaging of checkpoints from regions that are expected to be sufficiently flat. Specifically, the checkpoints we averaged are:
- **1760B (≈67.7%)**
- **1980B (≈76.2%)**
- **2200B (≈84.6%)**
- **2420B (≈93.1%)**
- **Learning Rate Dynamics:** Baichuan 2 employs a two-phase approach to adjust its learning rate. Initially, a 2,000-step linear warm-up is performed to reach a peak learning rate of 2×10⁻⁴. Subsequently, a cosine decay schedule is applied [1]. 
Under the cosine decay regime, the learning rates at the selected checkpoints are computed as follows:
  - **67.7% (1760B):** The calculated learning rate falls within the low-LR regime — approximately between 3.85×10⁻⁵ and 4.7×10⁻⁵.
  - **76.2% (1980B):** Approximately between 2.31×10⁻⁵ and 2.7×10⁻⁵.
  - **84.6% (2200B):** Approximately between 1.19×10⁻⁵ and 1.2×10⁻⁵.
  - **93.1% (2420B):** Approximately between 3.77×10⁻⁶ and 2.3×10⁻⁶.
- **Comparison with Pythia:** The Pythia 70M baseline similarly utilizes a cosine decay learning rate schedule. For example, after a warm-up phase (roughly 10% of the total steps), Pythia’s learning rate peaks at 1×10⁻³ and subsequently drops to around 1×10⁻⁴ during the latter stages of training. At corresponding training progress fractions, the learning rates for Pythia are approximately:
  - **67.7%:** ∼3.1×10⁻⁴
  - **76.2%:** ∼2.2×10⁻⁴
  - **84.6%:** ∼1.5×10⁻⁴
  - **93.1%:** ∼1.1×10⁻⁴

---

2. LAWA
- **Methodology:** The original LAWA paper proposes a sliding window average using the most recent *k* checkpoints (with *k*=5) over a window of 1K steps, saving a checkpoint every 200 steps. However, Baichuan 2 only provides checkpoints at 220B-token intervals.
- **Our Adaptation:** To approximate LAWA under these constraints, we uniformly average the last 5 checkpoints available: 1540B, 1760B, 1980B, 2200B, and 2420B in our results.

---

3. EMA
We follow the standard EMA implementation:
- **Decay:** β = 0.99
- **Initialization:** Start from the weight corresponding to the checkpoint at 220B tokens.
- **Update Rule:** For each new checkpoint, the EMA is updated as follows: $\widetilde{\Theta}_{t} = 0.99 \times \widetilde{\Theta}_{t-1} + 0.01 \times \Theta_{t}$. This recurrence is applied sequentially through all checkpoints (e.g., 440B, 660B, …, 2420B).
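The EMA update rule above can be sketched as follows; checkpoints are mocked here as dicts of floats (real checkpoints would be tensor state_dicts loaded from disk):

```python
def ema_merge(checkpoints, beta=0.99):
    """Sequentially apply ema = beta * ema + (1 - beta) * theta.

    `checkpoints` is an ordered list (e.g., 220B, 440B, ..., 2420B);
    the EMA is initialized from the first entry, as described above.
    """
    ema = dict(checkpoints[0])
    for theta in checkpoints[1:]:
        for name, value in theta.items():
            ema[name] = beta * ema[name] + (1.0 - beta) * value
    return ema

# Toy stand-in for the 11 Baichuan 2 checkpoints (values are arbitrary)
ckpts = [{"w": float(i)} for i in range(11)]
merged = ema_merge(ckpts)
```

With β = 0.99 and only 11 checkpoints, the final checkpoint contributes just 1% per update, so the EMA remains dominated by earlier checkpoints — which illustrates how this baseline behaves when checkpoints are this sparse.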
---

> **Q2: Perplexity Details**

> Our approach deliberately diverges from the standard perplexity evaluation—typically based on next-token prediction tasks using datasets like WikiText-103 or C4—in order to target knowledge-intensive scenarios. Instead of calculating perplexity over all tokens in the input, we compute cross-entropy exclusively on the ground-truth answer tokens while masking out any prompt or question text. Here is the implementation code:

```python
import numpy as np

def eval(args, subject, model, tokenizer, dev_df, test_df):
    total_loss = 0
    cors = []  # reserved for the optional accuracy measurement below
    for i in range(test_df.shape[0]):
        prompt_end = format_example(test_df, i, include_answer=False)
        train_prompt = gen_prompt(dev_df, subject, args.ntrain)
        prompt = train_prompt + prompt_end

        # Tokenize prompt
        input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
        labels = input_ids.clone()

        # Only compute loss on the answer region (prompt_end)
        labels[:, :-len(tokenizer(prompt_end).input_ids)] = -100

        # Forward pass
        outputs = model(input_ids=input_ids, labels=labels)
        loss = outputs.loss
        total_loss += loss.item()

        # Optional: measure accuracy by comparing predicted answer to reference
        # ...

    avg_loss = total_loss / len(test_df)
    ppl = np.exp(avg_loss)
    return ppl
```

**Ref:** [1] Yang A, Xiao B, Wang B, et al. *Baichuan 2: Open large-scale language models.* arXiv preprint arXiv:2309.10305, 2023.
Summary: The paper introduces a novel approach to enhancing the pretraining of large language models by leveraging intermediate checkpoint merging. The key idea is to exploit the information stored in intermediate checkpoints along the pretraining trajectory without incurring additional resource costs. The authors propose a method based on Bayesian optimization—specifically using Gaussian Processes—to determine the optimal linear blending weight for merging consecutive checkpoints. The contributions of the paper can be summarized as follows:
- **Checkpoint Merging Strategy:** The paper advocates a pairwise (two-checkpoint) merging method which simplifies the high-dimensional weight search problem to a one-dimensional optimization task.
- **Bayesian Optimization Framework:** The proposed solution utilizes Gaussian Process regression to model the relationship between the merging weight and model performance, combined with acquisition functions (such as Expected Improvement, Probability of Improvement, Upper Confidence Bound, and an adaptive GP-Hedge strategy) to efficiently search the weight space.
- **Theoretical Analysis:** Detailed proofs are provided based on quadratic approximations, smoothness assumptions, and bounded Hessians to derive performance bounds for the merged model. Additionally, a PAC-Bayesian generalization bound is derived, indicating that the merged model can exhibit improved generalization compared to individual checkpoints.
- **Extensive Empirical Evaluation:** Experiments are performed on multiple LLM families (Baichuan2, DeepSeek, and Pythia) across diverse benchmarks (C-Eval, CMMLU, MMLU, GSM8K, PIQA, WinoGrande, SciQ, ARC-Easy). These experiments substantiate the claims that merging adjacent checkpoints enhances accuracy and stabilizes performance across different domains.
Claims And Evidence:
- **Major Claims:**
  - The intermediate checkpoint merging methodology improves model performance and convergence without incurring extra computational overhead.
  - Bayesian optimization, when applied to optimizing the checkpoint merging weight, finds near-optimal solutions more effectively than conventional grid or random search methods.
- **Supporting Evidence:** The authors present comprehensive experimental results demonstrating improved performance over various benchmarks compared to methods such as Uniform Soup, Greedy Soup, Fisher Weighted Averaging, and RegMean. The theoretical contributions are supported by rigorous proofs that derive performance bounds and convergence guarantees based on standard assumptions in optimization theory.

Methods And Evaluation Criteria:
- **Methodological Approach:** The paper proposes merging checkpoints via a linear combination of adjacent checkpoints, thereby reducing complexity to the optimization of a single merging weight. The weight is determined by Bayesian optimization, modeled by a Gaussian Process with carefully selected acquisition functions. This framework is intended to balance exploration of the search space and exploitation of thorough performance estimates.
- **Evaluation Criteria:** The methodology is evaluated using standard performance metrics (accuracy, perplexity, etc.) over a wide range of benchmarks. Although the experimental studies are extensive, the evaluation framework additionally emphasizes theoretical bounds and convergence behavior, thereby linking empirical performance with rigorous theoretical analysis.

Theoretical Claims:
- Could the authors provide a detailed derivation showing how the Lipschitz continuity of the Hessian $\|H(w) - H(v)\| \le L_H \|w - v\|$ for all $w, v$ leads to the above remainder bound? Additionally, please elaborate on the practical implications of this bound if $\|w_t - w_{t-1}\|$ is not small.
- Could the authors derive or estimate $\gamma_T$ explicitly in this one-dimensional context, showing that $\gamma_T = O\big(\log T\big)$? Moreover, how does the dimensionality of the search space affect the overall regret bound and convergence guarantee?

Experimental Designs Or Analyses:
- **Design Soundness:**
  - The experimental design is robust, featuring comparisons across various checkpoint pairs, ablation studies on held-out dataset sizes, and evaluations on models of different scales.
  - Visual aids like Figures 1, 3, 4, and 7 effectively illustrate the trends in performance gains and the impact of key hyperparameters, such as the merging weight search space.
- **Potential Improvements:**
  - A more explicit discussion of the computational cost of the Bayesian optimization process would be valuable.
- **Question:**
  - **Location:** Section 3.2 and Algorithm 1
  - *What is the runtime overhead introduced by the GP-based Bayesian optimization process compared to grid or random search? Could you provide quantitative metrics (such as runtime comparisons in seconds or relative speedups) to help us understand its efficiency in practice?*

Supplementary Material: N/A

Relation To Broader Scientific Literature: The manuscript situates its contributions within the existing body of work on checkpoint averaging and model merging (e.g., Uniform Soup, Greedy Soup). It also connects to a broad literature in Bayesian optimization, Gaussian Process regression, and PAC-Bayesian theory. By integrating these diverse theoretical components, the paper bridges methods in model averaging with rigorous optimization methods, offering a new perspective on efficient utilization of checkpoint information during pretraining.
Essential References Not Discussed: While the authors have discussed many pertinent works, additional recent studies on advanced model fusion techniques—particularly those exploring non-linear combinations or adaptive schemes for merging large-scale transformers—could provide further context. For example, works published at recent conferences on LLM optimization or papers proposing non-convex model fusion strategies might help position the contributions more clearly within evolving trends.

Other Strengths And Weaknesses:
- **Strengths:**
  - The proposed methodology is innovative in exploiting intermediate checkpoints to improve performance with minimal additional cost.
  - The combination of theoretical analysis with extensive empirical validation lends robustness to the claims.
  - The comprehensive treatment of both convergence and generalization under a unified theoretical framework is a significant asset.
- **Weaknesses:**
  - The theoretical derivations often rely on strong assumptions—such as local quadratic behavior, Lipschitz continuity, and bounded Hessians—which may not fully capture the complexity of modern LLM loss landscapes.
  - Some of the derivations and notation could be clarified further to enhance readability and facilitate verification.

Other Comments Or Suggestions: Regarding Figure 3, could the authors provide further clarification on the meaning of the x-axis label "merging weight"?

Questions For Authors:
1. **Questions for Proof:**
   - See the Theoretical Claims section above.
2. **Kernel Choice in Gaussian Process Modeling:**
   - While the paper mentions the use of Gaussian Process regression to model the function \(f(\alpha_t)\), can the authors comment on how different kernel functions might influence the information gain \(\gamma_T\) and the resulting regret bound? Are there situations where a non-standard kernel might lead to better empirical convergence rates than those suggested by the theoretical analysis?

Code Of Conduct: Affirmed.
Overall Recommendation: 5
Rebuttal 1:
Rebuttal:
> **Q1: Bounds on the Quadratic Approximation Error.**

> **Our Response:**
- We appreciate the request for an expanded derivation. Under the assumption that the Hessian $H(\cdot)$ is Lipschitz continuous with constant $L_H$, consider the remainder of the second-order Taylor expansion of $f$ around $w_t$. For $\widehat{w}_t = w_t + \Delta, \quad \text{with} \quad \Delta = \widehat{w}_t - w_t,$ the Taylor remainder $R$ can be written (via the integral form) as $R = \int_0^1 (1-\tau)\, \Delta^\top \big[H(w_t+\tau \Delta) - H(w_t)\big]\, \Delta \, d\tau.$ Taking norms and using the Lipschitz property, we have $\|H(w_t+\tau \Delta) - H(w_t)\| \le L_H\, \tau\,\|\Delta\|.$ Therefore, $|R| \le \int_0^1 (1-\tau)\, L_H\, \tau\, d\tau \,\|\Delta\|^3.$ Evaluating the integral, $\int_0^1 \tau(1-\tau) \, d\tau = \frac{1}{6},$ we obtain the bound $|R| \le \frac{L_H}{6}\|\Delta\|^3.$
- **Practical Implications:** This bound is meaningful when $\|\Delta\| = \|\widehat{w}_t - w_t\|$ is small—typically a valid approximation because our analysis focuses on adjacent checkpoints in the pretraining trajectory. If $\|w_t - w_{t-1}\|$ is large (and hence $\|\Delta\|$ is large, given the convex combination structure), the higher-order terms may no longer be negligible. In such cases, the quadratic approximation will incur a larger error, and our theoretical guarantees become local. Empirically, our pilot experiments demonstrate that merging distant checkpoints (which effectively produces a larger $\|\Delta\|$) degrades performance, confirming that our theory is most valid in the local regime.

---

> **Q2: Estimating the Information Gain in GP-Based Optimization ($\gamma_T$).**

> **Our Response:**
- In one-dimensional Bayesian optimization (with a standard kernel such as the squared exponential or Matérn kernel with smoothness parameter $\nu$), it is well established (e.g., Srinivas et al.
(2010)) that $\gamma_T = O(\log T).$ To be precise, for a kernel $k$ on the interval $[\alpha, 1]$ with bounded RKHS norm, one can show that the maximum information gain $\gamma_T$ grows at most logarithmically in the number of evaluations $T$.

---

> **Q3: Runtime Overhead of GP-Based Bayesian Optimization.**

> **Our Response:**
- Please see our response to **Reviewer Znoq, Q2**.

---

> **Q4: Influence of Kernel Choice in Gaussian Process Modeling on $\gamma_T$ and Regret Bounds.**

> **Our Response:**
- The kernel function in GP regression encapsulates our assumptions about the smoothness and structure of the performance function $f(\lambda_t)$. Standard choices such as the squared exponential or Matérn kernels guarantee that $\gamma_T$ grows at most logarithmically (in one dimension), which underpins our theoretical regret bounds. In settings where $f(\lambda_t)$ exhibits characteristics such as periodicity or non-stationarity, alternative kernels (e.g., periodic kernels or non-stationary kernels) could more accurately model the underlying function. This improved modeling fidelity could, in turn, lead to a smaller empirical $\gamma_T$ and faster convergence in practice. While our theoretical development is stated in a general manner (i.e., assuming a kernel with bounded information gain), we acknowledge that kernel choice is a crucial hyperparameter. Our preliminary experiments with standard kernels (which are widely used and well understood) already yield superior performance. Future work will explore adaptive kernel selection strategies to further optimize convergence rates.
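To make the information-gain quantity concrete, the sketch below evaluates the empirical $\gamma_T = \tfrac{1}{2}\log\det(I + \sigma^{-2}K_T)$ for a squared-exponential kernel on a 1-D interval, showing its slow, sublinear growth in $T$ (the lengthscale and noise values are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(x, lengthscale=0.1):
    """Squared-exponential kernel matrix on 1-D inputs."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def information_gain(T, noise_var=0.01, lo=0.5, hi=1.0):
    """gamma_T = 0.5 * log det(I + K / noise) for T uniform points in [lo, hi]."""
    x = np.linspace(lo, hi, T)
    K = rbf_kernel(x)
    _, logdet = np.linalg.slogdet(np.eye(T) + K / noise_var)
    return 0.5 * logdet

for T in (5, 10, 20, 40, 80):
    print(T, round(information_gain(T), 2))
```

The successive increments shrink as $T$ grows, consistent with the logarithmic-growth regime the response appeals to.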
---

> **Q5: Clarification on the “Merging Weight” Label in Figure 3.**

> **Our Response:**
- The term “merging weight” in **Figure 3** refers to the coefficient $\lambda_t$ in the convex combination $\widetilde{\Theta}_t = \lambda_t\, \Theta_t + (1-\lambda_t)\, \Theta_{t-1}.$ It quantifies the relative contribution of the later checkpoint $\Theta_t$ versus the earlier checkpoint $\Theta_{t-1}$ during the merging process. The x-axis represents the value of $\lambda_t$ uniformly sampled over the interval $[\alpha, 1]$, where **$\alpha$** is a lower bound ensuring that the more recent checkpoint retains a minimum contribution. We will add an explicit explanation in the revision to clarify this point.
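The convex combination above can be sketched directly; parameters are mocked as dicts of floats (real usage would iterate over tensor state_dicts):

```python
def merge_checkpoints(theta_t, theta_prev, lam, alpha=0.5):
    """Merge two checkpoints: lam * theta_t + (1 - lam) * theta_prev.

    lam is restricted to [alpha, 1] so the more recent checkpoint
    theta_t keeps a minimum contribution, as described above.
    """
    assert alpha <= lam <= 1.0, "merging weight outside the search space"
    return {name: lam * theta_t[name] + (1.0 - lam) * theta_prev[name]
            for name in theta_t}

merged = merge_checkpoints({"w": 1.0}, {"w": 0.0}, lam=0.8)
# merged["w"] == 0.8: the later checkpoint contributes 80%
```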
Summary: **Overview**
The paper introduces a novel checkpoint merging strategy aimed at enhancing the efficiency of large language model (LLM) pretraining. The central idea is to exploit intermediate checkpoints by forming linear combinations in the parameter space while optimizing the merging weight via Bayesian optimization. The paper’s key contributions are as follows:
- **Algorithmic Contribution:** It presents an efficient pairwise checkpoint merging protocol (outlined in Algorithm 1) that effectively reduces a high-dimensional weight search problem to a manageable one-dimensional search. The approach utilizes Gaussian Process (GP) regression with acquisition functions such as EI, PI, and UCB along with a GP-hedge strategy.
- **Theoretical Insights:** The work includes thorough theoretical analysis with convergence proofs, performance bounds using quadratic approximations of the loss landscape, and a derivation of PAC–Bayesian generalization bounds that explain how merging helps to reduce model variance.
- **Empirical Findings:** Experiments spanning several models (e.g., Baichuan2, DeepSeek, and Pythia) across datasets like C-Eval, CMMLU, MMLU, and GSM8K demonstrate that merging adjacent checkpoints (but not those that are far apart) leads to performance improvements over baseline methods such as Uniform Soup, Greedy Soup, Fisher Weighted Averaging, and RegMean. Moreover, the proposed method shows enhanced generalization both in-domain and out-of-domain.

Claims And Evidence:
- **Main Claims:**
  - Merging intermediate checkpoints can notably boost LLM pretraining performance without adding much computational cost.
  - Bayesian optimization is a practical and effective tool for determining near-optimal merging weights, outperforming conventional merging strategies.
  - Models produced via this merging process exhibit superior generalization, as supported by PAC–Bayesian generalization bounds.
- **Evidence Supporting the Claims:**
  - **Empirical Evidence:** Extensive tests—including pairwise and multi-checkpoint merging strategies—across various benchmarks back up these claims convincingly.
  - **Theoretical Evidence:** The paper provides detailed proofs based on common assumptions (like Lipschitz continuity, bounded Hessian, and the Polyak–Łojasiewicz condition) that support the theoretical claims on convergence and generalization.
  - **Comparative Analysis:** The systematic comparison against several existing merging techniques highlights consistent performance gains with the proposed approach.
- **Question:**
  - **Location:** Section A, Equation (15) (Quadratic Approximation)
  - *Can the authors provide empirical evidence or extra justification to show that a quadratic approximation is valid around intermediate checkpoints in large-scale LLM pretraining?*

Methods And Evaluation Criteria:
- **Methodology:**
  - The authors smartly reduce the checkpoint merging problem into a one-dimensional search by focusing on pairwise merging.
  - They optimize the merging weight using Bayesian optimization, which is a well-suited choice for tackling expensive black-box problems.
  - The method is evaluated on various LLM architectures and across several tasks and datasets, ensuring a comprehensive analysis.
- **Evaluation Criteria:** The paper uses common metrics (like accuracy on C-Eval and CMMLU) and assesses both in-domain (IND) and out-of-domain (OOD) performance. Additionally, ablation studies (examining the effects of held-out dataset size and merging weight search space) further confirm the method’s robustness.
- **Question:**
  - **Location:** Section 3.2 and Algorithm 1
  - *What is the actual runtime overhead of the GP-based Bayesian optimization compared to simpler methods like grid or random search?
Could the authors include quantitative metrics or runtime comparisons to illustrate how the computational cost scales (for instance, with the number of checkpoints)?*

Theoretical Claims:
- **Proofs and Theoretical Analysis:**
  - The paper delivers rigorous derivations, including performance bounds from merging (see Equations (17)–(25)) and a convergence analysis in the context of gradient descent.
  - The derivation of the PAC–Bayesian generalization bound is well-detailed (Sections D.1–D.4) and adds to the theoretical strength of the paper.
- **Issues and Validation:**
  - Although the proofs are largely logical, some assumptions—particularly the local quadratic approximation and the boundedness of the Hessian—might need further empirical support when applied to large-scale LLMs.
- **Question:**
  - **Location:** Section A, Equation (15) (Quadratic Approximation)
  - *Can the authors provide empirical evidence or extra details to justify that a quadratic approximation holds around intermediate checkpoints in large-scale LLM pretraining? How sensitive are your conclusions if this assumption doesn’t fully hold?*

Experimental Designs Or Analyses:
- **Design Soundness:**
  - The experimental design is robust, featuring comparisons across various checkpoint pairs, ablation studies on held-out dataset sizes, and evaluations on models of different scales.
  - Visual aids like Figures 1, 3, 4, and 7 effectively illustrate the trends in performance gains and the impact of key hyperparameters, such as the merging weight search space.
- **Potential Improvements:**
  - A more explicit discussion on the computational cost of the Bayesian optimization process would be valuable.
- **Question:**
  - **Location:** Section 3.2 and Algorithm 1
  - *What is the runtime overhead introduced by the GP-based Bayesian optimization process compared to grid or random search?
Could you provide quantitative metrics (such as runtime comparisons in seconds or relative speedups) to help us understand how the computational cost scales in practice?*

Supplementary Material: This paper has no supplementary material.

Relation To Broader Scientific Literature:
- **Integration with Previous Work:**
  - The paper builds on established ideas in model merging (e.g., “Model Soup” by Wortsman et al.) and extends them to the pretraining phase.
  - It leverages Bayesian optimization—an approach widely used in Bayesian machine learning and NLP—and relates it to prior hyperparameter tuning research.
  - The contribution fits well with current trends in improving LLM efficiency and robust training strategies.
- **Missing References:**
  - Even though most relevant literature is cited, there might be additional recent work dealing with gradient smoothing and sharpness-aware minimization that could further support the discussion on generalization.
- **Question:**
  - *Have the authors considered citing and contrasting their approach with the latest studies on gradient smoothing and sharpness-aware minimization? How could these ideas enhance or further explain the theoretical underpinnings of checkpoint merging strategies?*

Essential References Not Discussed: Some recent studies on efficient pretraining techniques and checkpoint fusion in very large language models might be missing from the references. Including these could further situate the current contribution within the broader research context.

Other Strengths And Weaknesses:
**Strengths:**
- **Originality:** The paper offers a fresh take by applying Bayesian optimization to checkpoint merging during LLM pretraining—an area not deeply explored before.
- **Comprehensive Evaluation:** The wide-ranging experiments across multiple datasets and architectures convincingly substantiate the method’s effectiveness.
- **Theoretical Rigor:** Detailed derivations provide a solid theoretical justification for improved convergence and generalization.

**Weaknesses:**
- **Computational Overhead:** The paper could delve deeper into understanding the added computational cost and resource usage due to the optimization step.
- **Multi-Checkpoint Merging Discussion:** While merging two checkpoints works well, the benefits of merging more than two snapshots are less clear. A more thorough exploration of this issue would be appreciated.

Other Comments Or Suggestions: See weaknesses above.

Questions For Authors:
1. **Computational Costs of Bayesian Optimization:**
   - **Location:** Section 3.2 and Algorithm 1
   - *What is the runtime overhead of the GP-based Bayesian optimization process compared to grid and random search? Could you include some quantitative metrics (e.g., runtime in seconds or relative speedup) to help us gauge its efficiency in practice?*
2. **Generalization Across Different Architectures:**
   - **Location:** Section 4
   - *Your experiments cover Baichuan2, DeepSeek, and Pythia models. Have you noticed any architecture-specific differences in terms of merging efficiency or optimal merging weights?*
3. **Multi-Checkpoint Merging:**
   - **Location:** Section 5.1 and Figure 7
   - *The results indicate that merging two checkpoints consistently provides improvements, but merging three or four checkpoints doesn’t yield significant additional gains. Can you provide more insight into why this happens, and whether there might be a way to refine the strategy to better harness the information from multiple checkpoints simultaneously?*

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:
Rebuttal:
> **Q1: Quadratic Approximation Validity (Equation (15)).**

> **Our Response:**
- Our derivation starts from the assumption that, for small perturbations, the performance function $f(\Theta)$ can be locally approximated by a quadratic form: $f(\Theta) \approx f(\Theta_t) + \nabla f(\Theta_t)^\top \Delta + \tfrac{1}{2} \Delta^\top H_t \Delta,$ where $\Delta = \widetilde{\Theta}_t - \Theta_t$ and $H_t$ is the Hessian of $f$ at $\Theta_t$. This assumption leverages the $L_g$-Lipschitz continuity (smoothness) of $\nabla f$ and the boundedness of the Hessian (i.e., $\lambda_{\min} I \preceq H_t \preceq \lambda_{\max} I$).
- **Empirical Evidence:** Empirically, **Figures 1 and 3** in our paper illustrate smooth and nearly monotonic trends in performance with respect to the merging weight $\lambda_t$. These trends indicate that, within a sufficiently small region between $\Theta_{t-1}$ and $\Theta_t$, the quadratic approximation is reasonable. In addition, our ablation studies show that—despite the nonconvexity of the loss landscape—the averaged performance follows the bounds predicted by our quadratic model (cf. **Eq. (17)–(25)**).

---

> **Q2: Runtime Overhead of GP-Based Bayesian Optimization.**

> **Our Response:**
- Our GP-based Bayesian optimization is designed to address an expensive, derivative-free objective. In our experiments, for pairwise merging the search space is one-dimensional (i.e., $\lambda_t \in [\alpha, 1]$). In a typical run we perform on the order of 10–15 evaluations. Empirical measurements show that such an optimization loop incurs an overhead of approximately 30–60 minutes on a **single 4090 GPU**. In contrast, grid or random search methods typically require many more evaluations to reach comparable performance and may require **4–10×** more time per experiment.
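A minimal version of this 1-D optimization loop can be sketched as follows. It uses a GP with a fixed squared-exponential kernel and a UCB acquisition on a dense $\lambda$ grid; the objective is a toy stand-in for held-out accuracy of the merged model, and all hyperparameters (lengthscale, noise, UCB coefficient) are illustrative assumptions rather than the paper's actual settings:

```python
import numpy as np

def rbf(a, b, ls=0.15):
    """Squared-exponential kernel matrix between 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-4):
    """Closed-form GP posterior mean/std on x_grid given noisy observations."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_grid, x_obs)
    K_inv = np.linalg.inv(K)
    mean = Ks @ K_inv @ y_obs
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, K_inv, Ks)
    return mean, np.sqrt(np.clip(var, 1e-12, None))

def bayes_opt(f, lo=0.5, hi=1.0, n_iter=12, beta=2.0, seed=0):
    """1-D Bayesian optimization of the merging weight with a UCB rule."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(lo, hi, 200)
    x_obs = rng.uniform(lo, hi, size=3)           # small random initial design
    y_obs = np.array([f(x) for x in x_obs])
    for _ in range(n_iter):
        mean, std = gp_posterior(x_obs, y_obs, grid)
        x_next = grid[np.argmax(mean + beta * std)]   # UCB acquisition
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, f(x_next))
    return x_obs[np.argmax(y_obs)], y_obs.max()

# Toy objective peaking at lambda = 0.82 (stand-in for held-out accuracy)
best_lam, best_val = bayes_opt(lambda lam: -(lam - 0.82) ** 2)
```

In the actual method, each call to `f` would evaluate the $\lambda$-merged checkpoint on a held-out set, so every evaluation is expensive — which is why keeping the loop to roughly 10–15 evaluations matters.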
---

> **Q3: Relation to Gradient Smoothing and Sharpness-Aware Minimization.**

> **Our Response:**
- We acknowledge that recent studies on gradient smoothing and sharpness-aware minimization have focused on explicitly encouraging flat minima during training. Our approach to checkpoint merging can be viewed as complementary. By averaging weights from adjacent checkpoints, we effectively smooth over transient sharp local minima, thereby biasing the final model toward flatter regions of the loss landscape.
- In our convergence and PAC–Bayesian analyses, smoother loss landscapes (i.e., flatter minima) correspond to tighter generalization bounds. Mathematically, if one denotes model sensitivity as $\| \nabla^2 L(\Theta) \|$, then averaging (i.e., merging) reduces the effective curvature: $\widetilde{\Theta}_t = \lambda_t \Theta_t + (1-\lambda_t)\Theta_{t-1} \quad \Rightarrow \quad \| \widetilde{H}_t \| \leq \lambda_t \| H_t \| + (1-\lambda_t)\| H_{t-1} \|,$ thereby promoting flatness. We will revise the manuscript to explicitly contrast our method with these related works and add references to the most recent literature on sharpness-aware minimization.

---

> **Q4: Multi-Checkpoint Merging Versus Pairwise Merging.**

> **Our Response:**
- Our experiments (see **Figure 7**) reveal that while pairwise merging provides substantial improvements, extending the merging to three or four checkpoints often leads to performance dilution. We attribute this to the fact that the most recent checkpoints already capture the latest state of the model, whereas merging with older checkpoints may introduce bias or “stale” information. Furthermore, increasing the number of checkpoints increases the dimensionality of the search space—extending from a one-dimensional segment (two checkpoints) to a two-dimensional triangle (three checkpoints), and ultimately to a closed tetrahedron (four checkpoints)—which exacerbates the computational cost and complexity of the optimization process.
We have conducted additional experiments employing Bayesian optimization for merging more than two adjacent checkpoints. The results are summarized in the table below:

| Number of Checkpoints | C-Eval |
| --- | --- |
| 2 (Our original method) | 56.20 ± 0.52 |
| 3 | 56.27 ± 0.55 ($\uparrow$ 0.07) |
| 4 | 55.34 ± 0.23 ($\downarrow$ 0.86) |

These results justify our decision to adopt pairwise merging, as it achieves a favorable trade-off between performance gains and computational efficiency.

---

> **Q5: Generalization Across Different Architectures.**

> **Our Response:**
- Our empirical results indicate that the proposed merging method consistently improves performance across various architectures (i.e., Baichuan2, DeepSeek, and Pythia models). Although there are minor variations in the optimal merging weights—typically on the order of 0.1–0.7% accuracy—the overall trend of enhanced generalization and reduced variance is maintained.

---

Rebuttal Comment 1.1:
Comment: After reading the rebuttal, I am satisfied with the authors’ response; therefore, I maintain my decision of giving a score of 4.
Summary: The paper demonstrates that averaging adjacent checkpoints leads to better downstream performance compared to using individual checkpoints. To determine the optimal weighting, the paper proposes using Gaussian process-based Bayesian optimization. The proposed approach outperforms existing merging strategies on downstream tasks, including commonsense reasoning tasks and mathematical exam questions.

Claims And Evidence: The central claims of the paper are to a) provide "substantial benefits at minimal cost" and b) achieve robust generalization performance. The results show that the proposed approach does improve upon the baseline and outperforms simply selecting a single checkpoint. However, I would argue that the observed gains are relatively modest, and the claim of "substantial benefits" might be somewhat overstated.

Methods And Evaluation Criteria: Looking at Figure 3, I am left wondering what exactly is being optimized. For the two blue checkpoints, the objective function appears to exhibit high observation noise. By visual inspection, it seems somewhat random how to set \lambda, as choosing a value close to 1 or close to 0 appears equally likely to yield similar results. For the red checkpoints, there is a noticeable trend toward higher \lambda; however, these values are a) not included in the search space \lambda \in [\alpha, 1], where \alpha is set to > 0.5 in the experiments, and b) effectively mean that only one of the checkpoints is selected according to Equation 2. I would like to see a model-fit plot, similar to Figure 1 in Frazier's "A Tutorial on Bayesian Optimization" (Peter I. Frazier), to demonstrate that Bayesian optimization can effectively model the observed data. Additionally, given that this is merely a simple 1-D problem, I do not fully understand why the lower bound \alpha needs to be carefully selected rather than simply set to 0.

Theoretical Claims: I am unsure whether the assumptions in Equation 10 are justified, particularly regarding the diagonal covariance matrix.

Experimental Designs Or Analyses: Overall, the experiments appear well-structured and thorough. However, the results in Table 1 would be more convincing if they included uncertainty bounds.

Supplementary Material: I did not review the supplementary material in detail.

Relation To Broader Scientific Literature: The key contribution of the paper to the broader literature is: a) demonstrating that merging adjacent checkpoints yields better results than merging more distant checkpoints, and b) formulating model merging as an optimization problem that can be addressed using Bayesian optimization.

Essential References Not Discussed: As far as I can tell, the paper addresses all relevant literature.

Other Strengths And Weaknesses: Strengths: The paper provides valuable and easily interpretable visualizations, such as Figure 2, which compares different checkpoints.

Other Comments Or Suggestions:
- Line 141: The reference appears to be incorrect—should this refer to Figure 3?
- Line 340, right column: Incorrect reference—should be Table 4.
- I'd suggest that the authors extend the results in Table 4 to also include plots that show the optimization trajectories of Bayesian optimization compared to, for example, random search, as is common practice in the Bayesian optimization literature. This would also provide a better sense of the convergence speed of the proposed approach.

Questions For Authors:
- What values of \lambda does Bayesian optimization actually select?
- How many iterations are performed with Bayesian optimization?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: > **Q1: Clarification Regarding Figure 3** > **Our Response:** We appreciate this observation. **Figure 3** is not a result of the Bayesian Optimization procedure. Instead, it provides an **empirical exploration** of how the merged model’s performance changes when we *uniformly sample* $\lambda$ in $[0, 1]$. In other words, it is a “landscape” study (a brute-force sweep of $\lambda$) to: 1. **Observe Variability for Large Gaps:** When checkpoints differ substantially in performance, $\lambda \approx 1$ typically yields higher performance, because weighting the stronger checkpoint is beneficial. 2. **Observe Smooth Behavior for Similar Checkpoints:** When checkpoints have closer performance levels, a broad range of $\lambda\in[0,1]$ can give improvements, reflecting that interpolating between two similarly strong models often yields robust results. Because **Figure 3** revealed that performance often worsens if one heavily weights a weaker checkpoint, our practical merging approach **restricts** $\lambda\in[\alpha, 1]$ with $\alpha>0.5$. Empirically, this domain ensures the more recent (and presumably stronger) checkpoint $\Theta_t$ maintains a dominant contribution. Hence, **Figure 3** is purely illustrative—an exploratory “map” of $\lambda$-versus-accuracy—while the actual search for $\lambda$ is done by Bayesian Optimization on the narrower interval $[\alpha,1]$. > **Q2: Justification for Eq. 10.** > **Our Response:** **Eq. 10** appears in our **PAC-Bayesian generalization bound**, where we assume $P = \mathcal{N}(\Theta_{t-1}, \sigma_P^2 I), \quad Q = \mathcal{N}(\widetilde{\Theta}_t, \sigma_Q^2 I),$ and derive $D_{\mathrm{KL}}(Q \,\|\, P)$. While real neural networks often exhibit correlated parameter distributions, we rely on: 1.
**Analytical Tractability:** A diagonal covariance structure (or isotropic $\sigma^2 I$) keeps the KL divergence in closed form: $D_{\mathrm{KL}}(Q \,\|\, P) = \frac{\|\widetilde{\Theta}_t - \Theta_{t-1}\|^2}{2\sigma^2}.$ This simplification is standard in many PAC-Bayes treatments and Bayesian NN approximations (e.g., “mean-field” approximations). 2. **Boundedness:** Even if true covariances are not diagonal, using a diagonal approximation typically *overestimates* correlations (or lumps them into the diagonal variance), making the bound somewhat looser but still valid as an upper bound. 3. **Consistency with Empirical Practice:** Prior works (e.g., [1]) frequently adopt similar diagonal or isotropic assumptions for theoretical clarity and to yield tractable bounds. > **Q3: Uncertainty Bounds in Table 1.** > **Our Response:** We have now run **multiple-seed** experiments (e.g., 5 random seeds) for each method and checkpoint merge. We report both the **mean** accuracy and **95% confidence intervals (CIs)**. Below is an illustrative example for C-Eval (5-shot): | **Model / Method** | **Accuracy** | **95% CI** | | --- | --- | --- | | **Baichuan2-2200B** | 55.65 | [54.66, 55.98] | | **Baichuan2-2420B** | 54.90 | [54.32, 55.41] | | **Uniform Soup** | 55.47 | [54.89, 56.01] | | **Greedy Soup** | 55.58 | [54.94, 56.23] | | **Fisher Weighted Averaging** | 55.77 | [55.14, 56.39] | | **RegMean** | 53.81 | [53.08, 54.53] | | **Ours (BayesOpt)** | **56.20** | [55.61, 56.90] | These intervals show that differences are both *statistically* and *practically* meaningful. We will include such CIs for all key results in the revised manuscript. > **Q4: Miscellaneous Comments and Reference Corrections.** > **Our Response:** Thank you for pointing out these reference issues. We have carefully reviewed and will correct all citation and reference errors.
> **Q5: Details on the Optimization Trajectories in Bayesian Optimization** > **Our Response:** We have **added figures** for the model-fit plot at the anonymous link: https://anonymous.4open.science/r/checkpoint-47F1/bayse.png that illustrate the Bayesian Optimization process: - **Posterior Mean & Confidence Intervals:** After each iteration, we plot the Gaussian Process posterior (mean + confidence interval) over $\lambda$, and show which $\lambda$ was selected next. - **Number of Iterations:** Typically, we only need about **10–15** iterations for a 1D problem to converge near an optimal $\lambda$. Restricting $\lambda\in[\alpha,1]$ with $\alpha>0.5$ is a pragmatic choice. *If* users want more flexibility (e.g., to incorporate knowledge from an older checkpoint more strongly), they can lower $\alpha$. In practice, $\alpha\approx 0.5$ balanced the merging of older vs. newer checkpoints well, **based on pilot experiments** that showed merges with $\lambda<0.5$ rarely helped. **Ref:** [1] Dziugaite G K, Roy D M. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data[J]. arXiv preprint arXiv:1703.11008, 2017.
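The closed-form KL divergence used in the Q2 response above can be sanity-checked numerically: for isotropic Gaussians sharing the same variance, the general multivariate-Gaussian KL reduces to the squared parameter distance over $2\sigma^2$. A small sketch (the dimension and $\sigma$ below are illustrative, not the paper's values):

```python
import numpy as np

def kl_gauss(mu_q, cov_q, mu_p, cov_p):
    # KL( N(mu_q, cov_q) || N(mu_p, cov_p) ) for full-covariance Gaussians.
    d = len(mu_q)
    cov_p_inv = np.linalg.inv(cov_p)
    diff = mu_q - mu_p
    _, logdet_p = np.linalg.slogdet(cov_p)
    _, logdet_q = np.linalg.slogdet(cov_q)
    return 0.5 * (np.trace(cov_p_inv @ cov_q) + diff @ cov_p_inv @ diff
                  - d + logdet_p - logdet_q)

rng = np.random.default_rng(0)
d, sigma = 8, 0.3
theta_prev = rng.standard_normal(d)   # stands in for Theta_{t-1}
theta_t = rng.standard_normal(d)      # stands in for the merged Theta-tilde_t
iso = sigma ** 2 * np.eye(d)

# Closed form from Eq. 10: ||Theta-tilde_t - Theta_{t-1}||^2 / (2 sigma^2).
closed_form = np.sum((theta_t - theta_prev) ** 2) / (2 * sigma ** 2)
general = kl_gauss(theta_t, iso, theta_prev, iso)
```

With equal isotropic covariances, the trace and log-determinant terms cancel, leaving exactly the squared-distance term, which is what makes the diagonal assumption analytically convenient.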
BoA: Attention-aware Post-training Quantization without Backpropagation
Accept (poster)
Summary: The paper proposes a novel Hessian-based backprop-free method for quantizing the attention weights of transformer models. The proposed method utilizes the computation pattern between different model weights within the attention modules to compute the Hessian with respect to the quantization residual. Compared to previous work, which considers each weight independently, the resulting Hessian for each weight matrix incorporates information of other weight matrices based on the attention computation. For example, the Hessian with respect to the quantization residual of the key matrix involves terms from the query matrix due to the multiplication operation between the two matrices for the attention computation. The paper also shows that the computation of the Hessian can be made efficient. The paper conducts extensive experiments on Llama and OPT models with different sizes, and shows that their method improves consistently over other backprop-free baselines and some baselines that involve backprop to compute the gradients. ===Post rebuttal=== Thank the authors for addressing my concerns regarding the compatibility with RoPE. I think this is a very important part of the algorithm, which is missing from the current draft. Please do include these details if the paper is accepted. I still think the paper would benefit from including more comparisons with gradient-based methods, especially the light-weight ones. But I don't think this is a reason to reject since the paper mainly focuses on gradient-free methods. Hence I would recommend the paper for acceptance. Claims And Evidence: The algorithm is motivated by a concrete insight on the computation within the attention module and solid mathematical derivations on the Hessian with respect to the quantization residual. The insights are then backed up by extensive experiments on different models and datasets. The claims are solid to the best of my knowledge.
Methods And Evaluation Criteria: The evaluations are performed on Llama and OPT models with standard benchmarking datasets. The paper considers different quantization settings (different numbers of bits for weights, activations, and KV cache), which are standard practice in the quantization literature. Theoretical Claims: The paper is mainly algorithmic and experimental. The only claim is Proposition 3.1, which seems correct to me. Experimental Designs Or Analyses: The experiments cover a good set of state-of-the-art quantization methods. However, there are still a few missing, especially if the authors want to claim their advantage over methods that involve gradient computation (backprop). See papers in `Essential References Not Discussed`. In particular, a set of the works, e.g., [Tseng et al 2024a] and [Egiazarian et al 2024] perform gradient-based finetuning for PTQ which captures inter-layer interaction to some degree, which is the same intuition used in this paper. While these methods involves gradient computation, they are relatively lightweight, and the quantization time might be comparable. It would be nice if the authors could perform a comparison to some of these more current gradient based methods, in terms of both the performance and efficiency. I want to point out that some of the setups for BoA uses SpinQuant, which involves gradient computation to learn the transformation as well. Supplementary Material: N.A. Relation To Broader Scientific Literature: The paper relates to a broad literature of Hessian-based quantization methods for LLM. The introduced computation of attention-aware Hessian that relates the different weight matrices within an attention module is novel to the best of my knowledge. See `Experimental Designs Or Analyses` for discussion on other methods. 
Essential References Not Discussed: [Chee et al 2023] Jerry Chee, Yaohui Cai, Volodymyr Kuleshov, Christopher De Sa. QuIP: 2-Bit Quantization of Large Language Models With Guarantees. [Tseng et al 2024a] Albert Tseng, Jerry Chee, Qingyao Sun, Volodymyr Kuleshov, Christopher De Sa. QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks. [Tseng et al 2024b] Albert Tseng, Qingyao Sun, David Hou, Christopher De Sa. QTIP: Quantization with Trellises and Incoherence Processing. [Chee et al 2024] Jerry Chee, Arturs Backurs, Rainie Heck, Li Zhang, Janardhan Kulkarni, Thomas Rothvoss, Sivakanth Gopi. DiscQuant: A Quantization Method for Neural Networks Inspired by Discrepancy Theory. [Egiazarian et al 2024] Vage Egiazarian, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, Dan Alistarh. Extreme Compression of Large Language Models via Additive Quantization. Other Strengths And Weaknesses: One question I have is that the Hessian computation is based on the MHA computation between lines 142-144, which doesn't involve RoPE embedding. With RoPE embedding, the Hessian will be different. How is the position embedding handled in the algorithm? Other Comments Or Suggestions: Input to Algorithm 1: make the other weight matrices explicit inputs, since the quantization of W depends on them as well. Questions For Authors: See my questions in previous sections. I am happy to increase my score if the questions could be addressed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. Hessians for models with RoPE** - When RoPE is applied, the proposed objective $|| K \Delta Q ^{T} || _{F}^{2} = || K \Delta W _{Q} X || _{F}^{2}$ used to develop the attention-aware Hessian for the query projection $W _{Q}$ (see line 196) is converted to $|| \tilde{K} \Delta \tilde{Q} ^{T} || _{F}^{2}$, where $\tilde{K} = RoPE(K)$ and $\tilde{Q} = RoPE(Q)$. - Let $R _{l}$ be the rotary matrix for the $l$-th token (see [1, eq.(15)]) and $\tilde{Q} ^{T} = [\tilde{q} _{1} \ldots \tilde{q} _{L}]$, then the objective can be expressed as $$|| \tilde{K} \Delta \tilde{Q} ^{T} || _{F}^{2} = \sum _{l=1}^{L} || \tilde{K} \Delta \tilde{q} _{l} || _{2}^{2} = \sum _{l=1}^{L} || \tilde{K} \Delta (R _{l} W _{Q} x _{l}) || _{2}^{2} = \sum _{l=1}^{L} || \tilde{K} R _{l} \Delta W _{Q} x _{l} || _{2}^{2}.$$ - Recalling that the second-order derivative of $|| M _{1} \Delta W M _{2} || _{F}^{2}$ with respect to $\Delta w$ is $2M _{2} M _{2}^{T} \otimes M _{1}^{T} M _{1}$ (see Footnote 2), the Hessian for $W _{Q}$ is given as $$H ^{(w _{Q})} = \sum _{l=1}^{L} ( 2x _{l} x _{l}^{T} \otimes R _{l}^{T} \tilde{K} ^{T} \tilde{K} R _{l} ).$$ - Finally, we take the factorized approximation for the summation of Kronecker products [2, eq.(20)] (i.e., $\mathbb{E} [M _{1} \otimes M _{2}] \approx \mathbb{E} [M _{1}] \otimes \mathbb{E} [M _{2}]$): $$H ^{(w _{Q})} \approx \sum _{l=1}^{L} 2x _{l} x _{l}^{T} \otimes \frac{1}{L} \sum _{l=1}^{L} R _{l}^{T} \tilde{K} ^{T} \tilde{K} R _{l} = 2X X ^{T} \otimes \frac{1}{L} \sum _{l=1}^{L} R _{l}^{T} \tilde{K} ^{T} \tilde{K} R _{l}.$$ By taking similar steps, the attention-aware Hessian for the key projection $W _{K}$ with RoPE can be established as $$H ^{(w _{K})} = 2X X ^{T} \otimes \frac{1}{L} \sum _{l=1}^{L} R _{l}^{T} \tilde{Q} ^{T} \tilde{Q} R _{l}.$$ - We note that our results for LLaMA models are based on the above Hessians. In the future version, we will clarify the attention-aware Hessians for models exploiting RoPE. [1] J. 
Su et al, "RoFormer: Enhanced Transformer with Rotary Position Embedding," arXiv: 2104.09864. [2] A. Botev et al, "Practical Gauss-Newton Optimisation for Deep Learning," ICML 2017. **2. Comparison with more current gradient-based methods, e.g., QuIP# and AQLM** - We sincerely thank the reviewer for pointing out interesting references. To state the conclusion first: it is not easy to compare the performance of the proposed BoA with the methods that the reviewer mentioned, for the following reasons: - QuIP# and AQLM are vector quantization algorithms. Compared to BoA targeting scalar uniform quantization, those methods need additional memory (bits) for storing codebooks, which are required to perform the dequantization during the inference. In short, it is hard to perform a comparison under the same number of bits. - QuIP# requires additional processing time to undo the incoherent processing during the actual inference [1, Algorithm 2], which hinders us from performing an apples-to-apples comparison. - Reported results for QuIP# and AQLM are based on different numbers of calibration data; QuIP# and AQLM have used 6144 [1, Appendix F.2] and 2048 samples [2, Appendix C], respectively, while we utilized 128 samples. - We believe that the fine-tuning strategy adopted in QuIP# and AQLM is orthogonal to our approach and can be integrated with the proposed BoA in the following way: after quantizing weights inside the Transformer block via BoA, unquantized parameters (e.g., weights of normalization layers) are fine-tuned to preserve the output of the Transformer block, compensating for the quantization error of linear layers. Such integration will lead to performance improvement but may incur a longer quantization processing time. Indeed, it has been reported that AQLM requires 36-48 GPU hours for quantizing and fine-tuning a 7B model [2, Appendix D] while BoA completes the quantization within an hour.
We leave this integration as a future research direction and will mention it in the conclusion section. We will also mention other works that the reviewer noted in the related work section. [1] A. Tseng et al, "QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks," ICML 2024. [2] V. Egiazarian et al, "Extreme Compression of Large Language Models via Additive Quantization," ICML 2024. **3. Better presentation to enhance the clarity** - We appreciate the reviewer's careful reading and constructive suggestions. In the future version, we will add the other weights inside the attention module as inputs for Algorithm 1 to clarify that the proposed BoA uses other weights to consider inter-layer dependencies. We will also clarify that SpinQuant, which has been integrated with BoA for the activation quantization and shows great synergy, uses gradient-based optimization to optimize rotation matrices that can well suppress activation outliers.
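The Kronecker structure invoked in the RoPE derivation above (per the paper's Footnote 2, the Hessian of $\|M_1 \Delta W M_2\|_F^2$ with respect to $\mathrm{vec}(\Delta W)$ is $2 M_2 M_2^T \otimes M_1^T M_1$) can be verified numerically with column-major vectorization. The matrix sizes below are arbitrary toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
M1 = rng.standard_normal((5, 3))   # e.g. the "outer" factor such as K-tilde R_l
M2 = rng.standard_normal((4, 6))   # e.g. the calibration inputs X
dW = rng.standard_normal((3, 4))   # plays the role of the quantization residual

def vec(A):
    return A.flatten(order="F")    # column-major vectorization

# Hessian of f(dW) = ||M1 dW M2||_F^2 w.r.t. vec(dW), per Footnote 2:
H = 2.0 * np.kron(M2 @ M2.T, M1.T @ M1)

# f is a pure quadratic, so f(dW) must equal 0.5 * vec(dW)^T H vec(dW).
f_direct = np.linalg.norm(M1 @ dW @ M2, "fro") ** 2
f_quadratic = 0.5 * vec(dW) @ H @ vec(dW)
```

The check rests on the identity $\mathrm{vec}(AXB) = (B^T \otimes A)\,\mathrm{vec}(X)$, which is also what makes the per-token Hessians in the RoPE derivation sum into a single Kronecker product.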
Summary: This paper proposes a backpropagation-free PTQ algorithm, named BoA, to quantize LLMs. The authors consider inter-layer dependencies to optimize the weights being quantized, and in this case they use attention-aware Hessian matrices. As the computational cost of the Hessian matrices is too expensive, BoA simplifies the Hessian calculation through several approximations. Experiments are benchmarked against existing PTQ approaches on LLMs, showing improved performance on perplexity and zero-shot task accuracy. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods make sense for the problem. The evaluation criteria are commonly used in the field. Theoretical Claims: The proofs of BoA are correct. Experimental Designs Or Analyses: Limited INT3 comparison: The INT3 experiments in Table 3 only compare against GPTQ. A robust evaluation should include comparisons to a wider range of relevant quantization methods, such as AWQ and SmoothQuant. Significant INT2 degradation: While the INT2 experiments include comparisons to several existing methods, the performance drop compared to FP16 is significant. The degradation suggests that the INT2 approach is not practical for real-world deployment. Supplementary Material: - The supplementary mathematical proofs are valuable. - The additional experiments are mostly conducted on INT2/3, and the degradation is relatively significant. Relation To Broader Scientific Literature: The topic of this paper is of significant importance and represents one of the most active and rapidly evolving research areas in the field. Essential References Not Discussed: No missing citations of essential, directly relevant prior work. Other Strengths And Weaknesses: strengths: - The mathematical derivation of the attention-aware Hessian approach provides a strong theoretical foundation for the proposed method.
The steps are clearly explained, and the assumptions are explicitly stated. weaknesses: - The presentation of weight-only quantization results could be improved. Tables 3 and 4 both present results for weight-only quantization. Combining these results into a single table, or at least presenting them contiguously with a clear explanation of their differences, would enhance the clarity and readability of the paper. - A significant portion of the main experimental results (Tables 3, 4, and 6) focuses on INT2 quantization. However, the performance degradation is substantial. This level of degradation raises concerns about the practical applicability of the proposed method. Other Comments Or Suggestions: no Questions For Authors: Table 5 presents processing times for the proposed method (BOA). However, a direct comparison with GPTQ, a closely related method, is missing. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **1. Processing time comparison with GPTQ** - We appreciate the reviewer's constructive suggestion. The main goal of Table 5 is to compare the processing times of PTQ algorithms _**considering inter-layer dependencies**_. Since GPTQ assumes layer-wise independence, we did not include it in Table 5. - As suggested, we have compared the processing times of GPTQ and the proposed BoA (see Table I). We observe that BoA requires a longer processing time than GPTQ. This is because GPTQ quantizes all the rows simultaneously, while BoA sequentially quantizes them (see Fig. 1(b)) to consider the inter-layer dependencies within the attention module, which leads to better performance than GPTQ. Clearly, there is a trade-off between quantization speed and accuracy. In real cases, when one needs to consider inter-layer dependencies to preserve the performance of the original model as much as possible, the proposed BoA would be an intriguing solution when compared to existing gradient-based methods in terms of both quantization speed (see Table 5) and accuracy (see Tables 3, 4, and 6). - When faster quantization is required, BoA can still be used with a slight relaxation. Recalling that a longer processing time of BoA is attributable to the one-by-one quantization of rows, we can accelerate the process by quantizing multiple, say $N$, rows simultaneously. Let $N=2$ and $w_{i}$ be the $i$-th row. In the original BoA, the second row $w_{2}$ is quantized after compensating for the quantization error of the first row $w_{1}$. In the accelerated version, $w_{2}$ does not participate in the error compensation of $w_{1}$ but is quantized together, as if $w_{2}$ is independent of $w_{1}$. Our experimental results demonstrate that a significant reduction in the processing time can be achieved via this multiple-row selection strategy (see Table I(a)), yet this relaxation incurs marginal performance degradation (see Table I(b)). 
- In the future version, we will add a processing time comparison with GPTQ and discuss how to accelerate BoA in time-limited cases. <Table I. Comparison with GPTQ> (a) Processing time (sec) |Method|N|7B|13B|30B| |-|-|-|-|-| |GPTQ||427|731|1560| |BoA|1|3460|5590|11900| |Relaxed BoA|8|1620|3250|8880| ||16|1230|2610|7410| (b) INT2 PPL ($\downarrow$) on Wiki-2 |Method|N|7B|13B|30B| |-|-|-|-|-| |GPTQ||19.6|34.6|9.77| |BoA|1|10.3|8.31|6.67| |Relaxed BoA|8|10.4|8.37|6.78| ||16|10.5|8.39|6.80| **2. Limited INT3 comparison and substantial performance degradation for INT2** - Most of INT3 quantization results have been reported in Appendix C (see Tables 7-11) due to the page limitation. In Table 10, we compare the performance of BoA with various transformation-based methods such as OmniQuant, AffineQuant, QuaRot, and DuQuant. Other transformation-based methods that the reviewer mentioned (AWQ and SmoothQuant) are not included since they perform worse than OmniQuant and AffineQuant [1], [2]. For INT3 quantization, BoA still performs the best and almost preserves the performance of the original full-precision model; the accuracy drop is 2.3%p for LLaMA3-8B and 1.3%p for LLaMA2-13B. - While it may not be easy to use 2-bit quantized models practically, we kindly note that low-bit results are commonly used as benchmarks to identify which algorithm performs the best [1]-[4]. One main reason is that it is often challenging to observe significant performance differences between algorithms at high precision (e.g., 4-bit) because the underlying models are robust enough to be quantized into 4-bit. In fact, even a naive rounding-to-nearest (RTN) quantizer usually does not incur severe performance degradation for INT4 (see Tables 7 and 8). On large-scale models (e.g., LLaMA2-13B), it is difficult to distinguish the performances of different algorithms even for INT3 (see Table 10). 
Thus, we believe that INT2 results could be a good metric to compare the performances of different algorithms. [1] W. Shao et al, "OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models," ICLR 2024. [2] Y. Ma et al, "AffineQuant: Affine Transformation Quantization for Large Language Models," ICLR 2024. [3] Y. Jeon et al, "A Frustratingly Easy Post-Training Quantization Scheme for LLMs," EMNLP 2023. [4] J. Chee et al, "QuIP: 2-Bit Quantization of Large Language Models With Guarantees," NeurIPS 2023. **3. Better presentation of weight-only quantization results** - We appreciate the reviewer's constructive suggestion. While both Tables 3 and 4 present weight-only quantization results, the key difference of Table 4 over Table 3 is that the weight quantization has been performed after transforming models to be robust to quantization. As suggested, in the future version, we will make a single table by combining these results and enhance the clarity and readability.
Summary: This paper proposes a post-training quantization method based on the construction of attention-aware Hessian matrices to capture inter-layer interactions. The method generalizes GPTQ and also requires no back-propagation. Extensive experiments demonstrate the effectiveness of the proposed approach. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The theoretical results are correct. Experimental Designs Or Analyses: The experimental evaluation is adequate. Supplementary Material: I have reviewed the appendix. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: In addition to GPTQ, there are also some other backpropagation free PTQ methods that are based on the coordinate descent algorithm, e.g., [1,2] [1] Zhang et al., COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization. [2] Behdin et al., QuantEase: Optimization-based Quantization for Language Models. Other Strengths And Weaknesses: Strengths: The paper is well written. The proposed method is original, and the quantization results are impressive when compared to SOTA. Weaknesses: To improve the clarity, it should be stated clearly the difference between the weights denoted by a small letter and the ones denoted by a capital letter, such as $\Delta w^{(l)}$ and $\Delta W^{(l)}$ in Eqn (3). It would be beneficial to include a paragraph to clarify the notations. Other Comments Or Suggestions: I recommend that the authors compare the quantization times of BoA with other methods including GPTQ and block-wise reconstruction techniques like OmniQuant. Questions For Authors: Should the second "=" in Eqn (5) be "$\approx$"? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **1. Processing time comparison** - The processing times of block-wise reconstruction techniques (e.g., OmniQuant and AffineQuant) have been summarized in Table 5. As evident, the proposed BoA completes quantization much faster than block-wise reconstruction techniques (e.g., for 30B, about 2.5 times faster than OmniQuant and 7 times faster than AffineQuant), yet outperforms those methods by a large margin (e.g., 20%p accuracy improvement for the 2-bit quantized LLaMA3-8B; see Table 4). - As suggested, we have newly compared the processing times of GPTQ and BoA (see Table I). We observe that BoA requires a longer processing time than that needed by GPTQ. This is because GPTQ quantizes all the rows simultaneously, while BoA sequentially quantizes them (see Fig. 1(b)) to consider the inter-layer dependencies within the attention module, which leads to better performance than GPTQ. Clearly, there is a trade-off between quantization speed and accuracy. In real cases, when one needs to consider inter-layer dependencies to preserve the performance of the original model as much as possible, the proposed BoA would be an intriguing solution when compared to existing gradient-based methods in terms of both quantization speed (see Table 5) and accuracy (see Tables 3, 4, and 6). - When faster quantization is required, BoA can still be used with a slight relaxation. Recalling that a longer processing time of BoA is attributable to the one-by-one quantization of rows, we can reduce the processing time by quantizing multiple, say $N$, rows simultaneously. For better understanding, let $N=2$ and $w_{i}$ be the $i$-th row. In the original BoA, the second row $w_{2}$ is quantized after compensating for the quantization error of the first row $w_{1}$. In the accelerated version, $w_{2}$ does not participate in the error compensation of $w_{1}$ but is quantized together, as if $w_{2}$ is independent of $w_{1}$. 
Our experimental results demonstrate that a significant reduction in the processing time can be achieved via this multiple-row selection strategy (see Table I(a)), yet this relaxation incurs marginal performance degradation (see Table I(b)). - In the future version, we will add a processing time comparison with GPTQ and discuss how to accelerate BoA in time-limited cases. <Table I. Performance comparison with backpropagation-free layer-wise PTQ methods> (a) Quantization processing time (sec) |Method|N|7B|13B|30B| |-|-|-|-|-| |GPTQ||427|731|1560| |QuantEase||5160|8530|17900| |BoA|1|3460|5590|11900| |Relaxed BoA|8|1620|3250|8880| ||16|1230|2610|7410| (b) INT2 PPL ($\downarrow$) on Wiki-2 |Method|N|7B|13B|30B| |-|-|-|-|-| |GPTQ||19.6|34.6|9.77| |QuantEase||14.3|16.7|9.51| |BoA|1|10.3|8.31|6.67| |Relaxed BoA|8|10.4|8.37|6.78| ||16|10.5|8.39|6.80| **2. Not discussed references** - We thank the reviewer for pointing out interesting references. The key difference of the proposed BoA over COMQ and QuantEase is that BoA considers inter-layer dependencies while COMQ and QuantEase assume layer-wise independence as in GPTQ. In Table I, we summarize the processing time and the PPL performance of QuantEase; results for COMQ are not included because its official github code does not support quantization of LLMs. As evident, BoA outperforms QuantEase yet completes quantization faster. In the future version, we will cite these works and mention the differences in the related work section. **3. Clarification of notations** - We appreciate the reviewer's constructive suggestion. The weight denoted by a capital letter (e.g., $W$) means the matrix, and the weight denoted by a small letter (e.g., $w$) means its vectorized representation. In the future version, we will add a paragraph that summarizes the notations to enhance the clarity of the paper. **4. Typo** - We appreciate the reviewer's careful reading. As the reviewer pointed out, "$\approx$" is correct. 
We will fix this typo in a future version.
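For readers unfamiliar with the quantize-and-compensate loop discussed throughout this thread, here is a toy GPTQ-style sketch: columns are quantized one at a time with round-to-nearest, and each column's rounding error is propagated into the not-yet-quantized columns via the Cholesky factor of the inverse Hessian. This is the generic layer-wise scheme with $H = XX^T$; it is not BoA's attention-aware, row-wise update from Proposition 3.1, and the sizes and step are illustrative.

```python
import numpy as np

def quantize_with_compensation(W, X, step):
    # Quantize W column by column for the proxy loss ||(W - Q) X||_F^2,
    # compensating each column's rounding error on the later columns
    # through the upper Cholesky factor of H^-1 (GPTQ-style; BoA swaps
    # in attention-aware Hessians and works row-wise instead).
    d = W.shape[1]
    H = X @ X.T + 1e-6 * np.eye(d)               # damping for invertibility
    U = np.linalg.cholesky(np.linalg.inv(H)).T   # upper factor: H^-1 = U^T U
    Q = W.astype(float).copy()
    for i in range(d):
        q = step * np.round(Q[:, i] / step)      # round-to-nearest (RTN)
        err = (Q[:, i] - q) / U[i, i]
        Q[:, i] = q
        Q[:, i + 1:] -= np.outer(err, U[i, i + 1:])  # push error forward
    return Q

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
X = rng.standard_normal((16, 64))
Q = quantize_with_compensation(W, X, step=0.25)
```

The "relaxed" variant discussed in the rebuttal corresponds to skipping the error propagation within a group of $N$ rows (here, columns), trading a little accuracy for speed.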
Summary: This paper introduces a novel backpropagation-free PTQ method called BoA, which improves the conventional Hessian-based compensation strategy by considering inter-layer dependencies. Extensive experiments show its superiority on both weight-only and weight-activation quantization. Claims And Evidence: Yes, the claims made in the submission are supported by evidence. Methods And Evaluation Criteria: Yes, the methods in this paper make sense for the problem. Theoretical Claims: Yes, I have checked the proof. Experimental Designs Or Analyses: Yes, I have checked the experimental designs and analyses in this paper. Supplementary Material: Yes, I have reviewed the supplementary material completely. Relation To Broader Scientific Literature: This paper is related to model compression. Essential References Not Discussed: No missing references. Other Strengths And Weaknesses: Strength: 1. In addition to refining the existing dynamic weight compensation scheme, the authors also recognize the issue of increased computational complexity it introduced and propose corresponding solutions to address it. 2. The theoretical derivation is thorough and detailed. Weakness: 1. In the abstract and introduction, the authors claim that the proposed BoA "optimizes integer weights", which leads to significant ambiguity. This makes me (or future readers) initially believe that the paper achieves integer-format real-quantized weights without backpropagation, only to later discover that this is not the case. 2. Still in the abstract and introduction, the authors claim the conventional methods use naive nearest rounding, which cannot be seen as their shortcoming, because the quantizer in BoA is still RTN, except for AdaRound. 3. Due to the much more complex Hessian calculation, the memory usage may be non-negligible. But this paper misses a memory usage comparison between BoA and the conventional layer-independent Hessian method GPTQ.
Especially on larger LLMs such as 70B, the memory usage gap may be much larger. 4. In Table 5, the optimization types included in the comparison are insufficient, because in recent PTQ research, backpropagation-free methods such as GPTQ, QuIP, and QuaRot are regarded as mainstream. So it is crucial to include these backpropagation-free methods in the comparison. 5. In order to highlight the individual effectiveness of the three proposed tricks for efficient Hessian implementation, the authors should provide an ablation study on the processing time reduction of each trick. Other Comments Or Suggestions: None Questions For Authors: 1. Can the block-wise update strategy in GPTQ be used for row-wise compensation? 2. In lines 248-255, the authors assume different attention heads are independent so that they can be quantized simultaneously. But what about V, O and K in GQA? In my opinion they can only be quantized sequentially. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. Memory comparison with GPTQ** - In Table I, we summarized memory costs of GPTQ and BoA. BoA indeed needs larger memory since BoA additionally uses outputs of other layers to consider inter-layer dependencies, which leads to better performance. - When the memory resource is limited, BoA can be used with a slight relaxation. We note that large memory cost of BoA is attributable to the row-wise Hessian for the value ($XA_{h}^{T}A_{h}X^{T}$ in (8)) whose shape is $[H,d,d]$ ($H$ is number of heads and $d$ is hidden size). In memory-limited cases, one solution is to apply the conventional Hessian ($XX^{T}$) for the value and consider inter-layer dependencies only for query and key. In doing so, BoA needs slightly more memory (e.g., 3GB for 30B) yet still performs much better than GPTQ (see Table I). <Table I. Comparison with GPTQ> (a) Memory (GB) |Method|7B|13B|30B| |-|-|-|-| |GPTQ|4.43|5.87|8.27| |BoA|9.55|16.6|32.8| |Relaxed BoA|6.45|8.40|11.6| (b) INT2 Wiki2 PPL |Method|7B|13B|30B| |-|-|-|-| |GPTQ|19.6|34.6|9.77| |BoA|10.3|8.31|6.67| |Relaxed BoA|10.5|8.70|7.01| **2. Time comparison with GPTQ** - For this issue, please see our response 1 to the reviewer dtMA due to the character limit. We note that QuIP and QuaRot require longer processing times than GPTQ since they conduct incoherent processing or rotation before applying GPTQ. **3. Individual effect of three tricks** - The ablation for the simultaneous quantization has been reported in Table 2. As evident, the simultaneous quantization accelerates the process significantly (e.g., 45 times on 30B). - The main goal of the other tricks is to reduce the memory usage, which allows us to quantize LLMs with a single GPU. - Without the relaxation on Hessians, we need to compute and store the Jacobian $J$ for the softmax (see (6), (7)). 
Noting that the shape of $J$ is $[H,L,L,L]$, where $H$ is the number of heads and $L$ is the sequence length, storing $J$ needs 400 GB of memory even for OPT-125M ($H=12, L=2048$), which is not possible with a single GPU of 80 GB memory. - Without the proposed computation of inverse Hessians, we need to compute and store the Kronecker product $H_{col} \otimes H_{row}$ before computing the inverse, where the shapes of $H_{col}$ and $H_{row}$ are $[d,d]$ and $[d_{h},d_{h}]$, respectively ($d_{h}$ is the head dimension and $d=Hd_{h}$). Noting that the shape of the Kronecker product is $[d_{h}d,d_{h}d]$ for each head, we need 100 GB of memory even for OPT-125M, which cannot be done with one GPU. - In the future version, we will elucidate the benefits of each trick. **4. Sequential processing of BoA for models with GQA** - For models with GQA, key and value have only a few heads, so the gain achieved by the simultaneous quantization is reduced (see Table II(a)). In particular, if multi-query attention is used, key and value have only one head and thus cannot enjoy the benefits. - For those models, if one wants to speed up the process, multiple-row selection, i.e., quantizing $N$ rows simultaneously, where the $N$ rows do not participate in each other's error compensation (see our second response), could be an intriguing solution. From Table II(b), we observe that such relaxation brings a significant reduction in processing time with marginal degradation. <Table II. Processing time of different strategies on LLaMA3-8B> (a) Time (sec) for one layer |Layer|n_heads|Sequential|Simultaneous| |-|-|-|-| |Query|32|1440|47.2 (-97%)| |Key|8|359|47.1 (-87%)| (b) Processing time (sec) and INT2 Wiki2 PPL |Method|N|Time|PPL| |-|-|-|-| |BoA|1|4047|21.7| |Relaxed BoA|8|1772|21.9| ||16|1393|22.3| **5. 
Block-wise update for row-wise compensation** - The block-wise update in GPTQ is based on the following fact: "The final decision for the $i$-th column is affected only by updates performed on this very column, so updates to later columns are irrelevant [1]." - Obviously, this can also be applied to the rows; rows outside the block are updated all at once after all rows inside the block are quantized. Under this strategy, the update formula in Prop. 3.1 is modified to $[\delta W] _{j+B:,:} = -[ U _{row}^{T}] _{j+B:,j:j+B} E$, where $B$ is the block size and $E$ is the $[B,d]$ matrix whose $i$-th row is defined as $e _{i} U _{col} / [U _{row}] _{i, i}$. We will discuss this issue and provide the detailed proof in the future version. [1] E. Frantar et al., GPTQ **6. Misleading expressions** - We apologize for the misleading expressions. We will globally modify "integer weights" to "quantized weights". - While BoA uses RTN as the reviewer noted, BoA updates weights during quantization, and thus the results become different. We also note that the key difference from conventional methods exploiting the naive nearest rounding is that BoA minimizes the attention reconstruction error (rather than the weight perturbation $\Delta W$), which has been considered a good approximation of the task loss degradation. We will clarify this point.
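The 400 GB and 100 GB figures quoted in point 3 above can be sanity-checked with a short back-of-the-envelope script (a rough sketch assuming fp32 storage and the OPT-125M shapes stated in the rebuttal, $H=12$, $L=2048$, $d_h=64$, $d=768$; not the authors' code):

```python
# Back-of-the-envelope fp32 memory estimates for the two intermediate tensors
# discussed in point 3 (OPT-125M shapes as stated in the rebuttal).
H, L, d_h = 12, 2048, 64   # heads, sequence length, head dimension
d = H * d_h                # hidden size (768)
BYTES = 4                  # fp32

# Softmax Jacobian J of shape [H, L, L, L]
jacobian_gb = H * L**3 * BYTES / 1e9
print(f"Jacobian J: {jacobian_gb:.0f} GB")      # ~412 GB, consistent with the "400 GB" above

# Kronecker product H_col (x) H_row: shape [d_h*d, d_h*d] per head, H heads in total
kron_gb = H * (d_h * d) ** 2 * BYTES / 1e9
print(f"Kronecker products: {kron_gb:.0f} GB")  # ~116 GB, consistent with the "100 GB" above
```

Both estimates land close to the quoted figures and well beyond a single 80 GB GPU, which is the point of the two memory-saving tricks.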
ConText: Driving In-context Learning for Text Removal and Segmentation
Accept (poster)
Summary: This paper introduces ConText, a visual in-context learning framework tailored for OCR tasks such as text removal and text segmentation. It employs task-chaining, context-aware aggregation, and a self-prompting strategy to leverage multi-task logic and enhance in-context reasoning. Extensive experiments demonstrate state-of-the-art performance on multiple benchmarks, confirming its effectiveness and adaptability. Claims And Evidence: The paper's claims are generally supported by extensive experimental evidence across several benchmarks. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are logically appropriate for addressing OCR tasks within a visual in-context learning framework. Theoretical Claims: The paper does not include any formal proofs, as its theoretical claims are supported primarily by intuitive reasoning and empirical evidence. Experimental Designs Or Analyses: The experimental designs and analyses, including ablation studies and cross-benchmark comparisons, are generally sound. Supplementary Material: Yes, I reviewed all the supplementary material. Relation To Broader Scientific Literature: The paper's contributions can be seen as an incremental improvement over existing V-ICL approaches like SegGPT and Painter, integrating task chaining, context-aware aggregation, and self-prompting strategies to better address OCR challenges. Essential References Not Discussed: The paper appears to have sufficiently discussed all key related works, with no significant essential references omitted. Other Strengths And Weaknesses: Strengths: 1. The paper presents a comprehensive framework that effectively adapts visual in-context learning to OCR tasks, offering a practical solution for both text removal and segmentation. 2. It demonstrates thorough experimental validation across multiple benchmarks, which supports the claimed improvements of the proposed methods. Weaknesses: 1. 
Overall, the work represents an incremental improvement in visual in-context learning for OCR tasks; both the context-aware aggregation and self-prompting strategies lack significant technical novelty and robust theoretical justification. 2. The paper proposes to recast the task demonstration by forming an explicit chain rather than a simple input-output pair, but it fails to discuss the impact of this design on training efficiency. 3. The method is tightly confined to OCR tasks, as it leverages the inherent logical relationship between text removal and segmentation—where the removed text is exactly the text to be segmented—limiting its extensibility to other domains. Other Comments Or Suggestions: 1. The related work section is weakly written; for example, the description "Unlike these discriminative models, this paper proposes a universally generative-based framework for the task." does not clearly outline the shortcomings of prior works before introducing the new approach. 2. The explanations for Figure 2 and Figure 3 are not detailed enough, making the proposed method hard to understand. 3. It is suggested that the paper be targeted to a computer vision conference rather than a machine learning venue, given the limited theoretical support. Questions For Authors: Please refer to the weaknesses section for the detailed questions, particularly regarding the limited scalability beyond OCR tasks. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ### W1 (Limited Contribution & Novelty) >...incremental improvement in visual in-context learning...lack significant technical novelty... We respectfully disagree with this and argue that our work, **rather than being merely incremental, offers substantial technical novelty and valuable contributions**: 1. *Novelty*: Beyond CAA and SP, we are **the first to explore visual task-chaining to enhance multi-task performance**, a novelty consistently acknowledged by the other reviewers. Furthermore, as suggested by Reviewers qfVE & L63m, its **promising generalized scalability** has been experimentally validated across various tasks. Thus, we believe this simple yet effective design warrants recognition for its novelty and represents a significant contribution to visual-ICL. 2. *Contributions*: We'd like to emphasize that we propose **the first visual-ICL paradigm for OCR**, achieving SOTA performance against both generalist and specialist models. Notably, we find that our ConText **exhibits emergent in-context understanding of visual-specific instructions in a training-free manner** (see **Figure 6**), e.g., removing and segmenting the text of a specified color, thereby facilitating flexible user-model interaction. We believe these valuable insights will encourage further developments in visual-ICL. --- ### W2 & S3 (Theoretical Support & Inappropriate Submission) >...lack robust theoretical justification...paper be targeted to a computer vision conference rather than a machine learning venue... Our method may lack theoretical analysis, but we argue it is grounded in **robust experimental findings and the underlying mechanisms of ICL** (please refer to Appendix A). 
We contend that this does not render it unsuitable for ICML, as **no stringent theoretical support is required for submissions.** This openness has fostered a series of impressive theory-free works in computer vision presented at ICML, such as the notable CLIP [1], the multi-modal generalist mPLUG-2 [2], and the OCR framework UPOCR [3]. Meanwhile, this embrace of diversity is also reflected in the *Reviewer Instructions*: "*reviewers are encouraged to be open-minded in terms of potential strengths...particularly for application-driven ML papers.*" Therefore, while we understand your concerns, we respectfully hope for your inclusive understanding. We believe that our paper, which presents **a powerful and practical visual-ICL OCR paradigm with valuable insights and contributions** (as agreed upon by other reviewers), deserves consideration at this conference. --- ### W3 (Task-chaining Efficiency) >...it fails to discuss the impact of this design on training efficiency. Thanks for raising this. It is claimed that **a single task chain can address multiple tasks simultaneously, while the input-output baseline requires a separate training sample for each task**, e.g., one for segmentation and another for removal. As such, one sample of task-chaining effectively equals two samples from the baseline, which should yield **no significant training efficiency gap** between these approaches. As shown in the following table (HierText dataset), task-chaining adds merely **+0.6 minutes/epoch** compared to the baseline, indicating an acceptable level of training overhead. We will include this analysis. |Method|Training Time (per epoch)| |-|-| |SegGPT|3.2 min| |SegGPT+task-chaining|+0.6 min| --- ### W4 (Scalability) >...confined to OCR...limiting its extensibility to other domains. 
We agree that task-chaining utilizes inter-task logic for collective benefits, but we contend that **it is not merely limited to OCR domains**, and can serve as a flexible pipeline to **bridge different tasks with promising scalability:** 1. Intuitively, the segmentation-removal concept could be directly applied to the **natural image domain.** To verify this, we utilize one natural image removal benchmark, PIPE [4]. The results in the table below demonstrate ConText's superiority in the natural domain. |Method (PIPE dataset)|PSNR (Rem.)|fgIoU (Seg.)| |-|-|-| |SegGPT|31.72|58.92| |ConText|**35.33**|**62.76**| 2. Additionally, beyond the removal-segmentation logic, we argue that there are **inherent logical connections among various vision tasks that can be explored implicitly.** By scaling this task-in-chaining concept, we can enhance other image-level tasks, e.g., *watermark removal, edge detection, and denoising* (please refer to our responses to **W1-Reviewer L63m & qfVE**). --- ### S1 & 2 (Writing Issue) >...weakly written...Figure are not detailed... Thanks for this and we will make the suggested improvements. --- >### References >[1] Learning Transferable Visual Models From Natural Language Supervision, ICML'21 >[2] mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video, ICML'23 >[3] UPOCR: Towards Unified Pixel-Level OCR Interface, ICML'24 >[4] Paint by Inpaint: Learning to Add Image Objects by Removing Them First, CVPR'25.
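As a side note on the two metrics reported throughout these tables, PSNR (removal) and fgIoU (segmentation) can be computed along the following lines (a minimal illustrative NumPy sketch, not the evaluation code used in the paper):

```python
import numpy as np

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio between two images (higher is better)."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

def fg_iou(pred_mask, gt_mask):
    """Foreground intersection-over-union between binary masks (higher is better)."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else float(np.logical_and(pred, gt).sum()) / union

# Toy check: identical images give infinite PSNR; identical masks give fgIoU of 1.
img = np.zeros((8, 8), dtype=np.uint8)
print(psnr(img, img), fg_iou(img > 0, img > 0))  # inf 1.0
```

Higher PSNR means the predicted removal image is closer to the clean target; fgIoU measures the pixel-level overlap of the predicted text-stroke mask with the ground truth.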
Summary: The authors present a visual in-context learning (V-ICL) approach, ConText, for fine-grained text recognition tasks (segmentation and removal). In order to accomplish this they focus on three novelties: 1. Task-chaining: The two tasks are chained together instead of being done independently to leverage inter-task correspondence and enhance the generalised capability. The tasks would, however, have to be related for this benefit to manifest. 2. Context-aware aggregation (through attention mechanisms): helps leverage learnability from other image-label pairs in the given context, instead of simple addition. 3. Self-prompting: helps select the right demonstrations; during training the most relevant demonstrations are selected dynamically, which makes the model more adaptive and helps deal with the context-homogeneity issue inherent in older models due to random or fixed demonstrations. During training the standard mask-based strategy is used. Overall, the authors show that their approach is superior to single-task methods across three datasets when considering both other generalists and specialists. Ablations are also presented to provide compelling qualitative evidence. Claims And Evidence: Yes, the authors show results supporting the claims. Methods And Evaluation Criteria: The datasets and metrics make sense for this task. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, the experiment description in the ablation makes sense. Supplementary Material: Appendix Relation To Broader Scientific Literature: Text Removal/Segmentation is often a component of OCR pipelines; however, these require explicit fine-tuning instead of in-prompt learning. (Visual) In-context Learning has leveraged prompt context to provide examples and avoid fine-tuning. 
However, currently tasks are done independently and not chained together (and also don't have the self-prompting and context-aggregation present in ConText). Chain-of-thought reasoning, popular with models such as R1, hasn't been explored much for the visual modality. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths - Overall I think this is a novel paper that extends the current V-ICL space to leverage chain-of-thought and help produce good results for text tasks without the need for fine-tuning or human effort in curating labelled samples (through self-prompting) Weaknesses - May need to consider the extra cost of self-prompting and context aggregation when evaluating this compared to competing methods Other Comments Or Suggestions: Both the text and most diagrams feel very crammed with information; instead of fitting everything into the given page limit, it would be better to be much more selective w.r.t. the core information. Questions For Authors: - How does ConText perform on document-type datasets (i.e. dense text) such as DocLayNet? - I couldn't find an ablation on retrieval (nearest neighbour) vs self-prompting and wonder if the former can be a lower-cost option - Would be interesting to understand in what particular scenarios (i.e. types of images) chaining is better than joint training (and in which it is not) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### W1 (Designed Modules Efficiency) >...extra cost of self-prompting and context aggregation... Below we report the additional costs of these modules and find that: 1. Self-prompting (SP) incurs an additional training burden of **+0.4 minutes/epoch**. However, this cost is deemed acceptable due to the moderate engagement of SP (SP-0.2) during training. Moreover, SP is not utilized during inference, yielding **no additional computational burden for inference.** 2. Context-aware aggregation (CAA) introduces extra computational costs during both training and inference. However, as a lightweight cross-attention module, it only **increases model complexity by a manageable 2%**. Consequently, it leads to a mere **+0.03 second increase during inference and +0.8 min/epoch during training.** Based on this, we can more safely conclude that these modules exhibit **a reasonable level of computational efficiency.** We will add this analysis. | Method|Training Time (per epoch) | Inference Time| Model FLOPs | |-|-|-|-| |Baseline|3.8 min|0.09 sec |666.76G| |Baseline + SP|+0.4 min|+0 sec|+0G (0%) | |Baseline + CAA | +0.8 min | +0.03 sec | +17.20G (2%) | |Baseline + CAA + SP| +1.0 min | +0.03 sec | +17.20G (2%) | **The results are evaluated on HierText; FLOPs and inference time are calculated by forwarding one 512x512 image on one A100, with the inference time reported in seconds.* --- ### Q1 (Performance on Document Case) >How does ConText perform on document-type datasets (i.e. dense text) such as DocLayNet. We believe that **ConText could address this case due to its superior performance on HierText [1], which collects a substantial number of high-fidelity, real-world document-based samples** (please refer to [1] for some visualized examples and its data curation in *"Section 3"*). We will add more of these visualized samples in the updated version. 
Since **DocLayNet lacks pixel-level stroke and removal annotations**, we may not be able to provide valid numerical evaluations during this rebuttal period. We hope for your understanding and plan to explore this further in the future. --- ### Q2 (SP *v.s.* Nearest Neighbor Retrieval) >...ablation on retrieval (nearest neighbor) vs self-prompting... Compared to SP, we agree with you that **the retrieval strategy should exhibit lower training costs due to its direct prompt-specific assignment.** To verify this, we adopt DINOv2 [2], a leading visual retriever [3-4], to assign the nearest counterpart for each sample as the in-context prompt during training. As shown in the table below, this once-for-all solution demonstrates slightly better training efficiency (**-0.35 minutes/epoch**) than SP. However, it is observed that **this retrieval-based selection yields inferior ICL ability**, as it has a narrower prompt selection scope compared to the adaptive SP. We will add this study. | Method| Training Time (per epoch) | Seg. (RS / GT) fgIoU↑| |-|-|-| |ConText + Retrieval| **4.45 min** | 72.11 / 73.25 | |ConText + SP | 4.8 min | **74.86** / **78.12** | **The above results were obtained using HierText. The training time is reported in minutes per epoch, and pre-processing time of retrieval is not calculated. RS (GT) refers to the randomly selected (ground-truth) demonstration case.* --- ### Q3 (Task-chaining *v.s.* Joint-training) >...what particular scenarios (i.e. types of images) chaining is better than joint training (and in which it is not) Thanks for raising this. Here we provide a corresponding case-by-case study as follows: 1. For those high-performance samples, as shown in Figure 4, we find that most of them contain **hard-to-recognize visual patterns**, e.g., texts with similar background colors (the last sample), small textual fonts (the 5th sample), and character-like objects (the 2nd and 3rd samples). 
In these cases, task-chaining explicitly highlights the prompted patterns across tasks, thereby yielding significant improvements. 2. Meanwhile, we also identify a small number of samples that exhibit limited or even worse performance compared to joint-training. Most of them have relatively **low resolution and noisy annotations in text regions**. In this way, simply relying on task-chaining may exacerbate the accumulation of these noises, leading to performance results that fall below expectations compared to joint-training. In conclusion, **task-chaining shows overall superiority over joint-training.** We will add this discussion. --- ### S1 (Crammed formatting) >...it would be better to be much more selective w.r.t the core information. Thanks for this advice and we will polish our formatting for better clarity. --- >### References >[1] Towards End-to-End Unified Scene Text Detection and Layout Analysis, ICDAR'23. >[2] DINOv2: Learning Robust Visual Features without Supervision, TMLR'24. >[3] Retrieval-augmented embodied agents, CVPR'24. >[4] FORB: a flat object retrieval benchmark for universal image embedding, NeurIPS'23.
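The retrieval baseline compared against SP above can be sketched in a few lines: each training sample is assigned its nearest neighbour, under a frozen encoder such as DINOv2, as its fixed in-context prompt (an illustrative cosine-similarity sketch over precomputed embeddings; the rebuttal does not specify the exact implementation):

```python
import numpy as np

def nearest_prompt_indices(embeddings):
    """For each sample, return the index of its cosine-nearest neighbour,
    excluding the sample itself, to serve as its fixed in-context prompt."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = e @ e.T                   # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)  # a sample cannot prompt itself
    return sim.argmax(axis=1)

# Toy example: samples 0 and 1 are near-duplicates, sample 2 is an outlier.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(nearest_prompt_indices(emb))  # [1 0 1]
```

Because this assignment is computed once before training, it avoids SP's per-step selection overhead, which is consistent with the slightly lower training time reported in the table; the trade-off is the narrower prompt selection scope noted above.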
Summary: This paper explores the application of visual in-context learning to fine-grained text recognition tasks, including text segmentation and removal. Rather than employing single-task solutions, the authors propose a task-chain prompt framework that connects multiple tasks. Through extensive experimentation, they demonstrate state-of-the-art results on both text removal and text segmentation benchmarks while showing the effectiveness of the task-chaining approach compared to single-task methods. Claims And Evidence: yes Methods And Evaluation Criteria: The evaluations make sense to me. Theoretical Claims: correct Experimental Designs Or Analyses: Please refer to the weaknesses Supplementary Material: yes, each part of the supplementary material. Relation To Broader Scientific Literature: The paper explores how to extend single-task in-context learning to multi-task in-context learning, using text removal and text segmentation as a case study. Essential References Not Discussed: None Other Strengths And Weaknesses: - **Strengths** - The paper is well-written, with good presentation and extensive experimental evaluation. - The work successfully extends visual in-context learning from single-task scenarios to task-chaining prompts, enhancing the model's ICL capabilities through multi-task reasoning. - Extensive benchmarking demonstrates that the approach achieves state-of-the-art performance on both text removal and segmentation tasks, with relatively huge improvements over other ICL methods. - **Weaknesses** - The idea of extending single-task in-context learning to multi-task is interesting, is it possible to generalize to other tasks? Also, the task of text segmentation and removal only has two tasks, how would the framework perform with three or more chained tasks? A discussion of the generalizability of the framework would be helpful. - Table 5 shows that self-prompting has limited effect in randomly selected settings. Is there any interpretation of this? 
Additionally, the threshold selection for self-prompting lacks comprehensive evaluation - including results across a range of thresholds (e.g., SP = [0, 0.1, 0.2, ... 1.0]) would better demonstrate how to optimize this parameter. - The ablation study is conducted only on SegGPT rather than on the proposed ConText model. - While the paper demonstrates significant improvements from ConText, it lacks direct comparisons between individual task paths (ConText → Rem and ConText → Seg) against both other methods and the combined approach (ConText → Rem + Seg). While there are comparisons shown on the SegGPT and Painter model, a direct comparison of the proposed model would give more intuitive insights of the advantage of task-in-chain over a single task. Other Comments Or Suggestions: NA Questions For Authors: Please see the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### W1 (Beyond two tasks) >...is it possible to generalize to other tasks?...how would the framework perform with three or more chained tasks... We'd like to argue that **our method can intuitively be integrated with other tasks (even image-level tasks) alongside chaining extension**. In addition to adapting ConText to watermark removal (please refer to **W1-Reviewer qfVE**), we have conducted experiments on two further tasks: image *edge detection* and *denoising*. As shown in the table below (HierText), our method also shows promising performance on these new tasks, and the progressive task-chaining could lead to a global improvement. Overall, these results help demonstrate the **task-wise scalability of ConText**. We will add this analysis. |Method|MAE↓ (Edge.)|PSNR↑ (Den.)| PSNR↑ (Rem.) | fgIoU↑ (Seg.)| |-|-|-|-|-| | Painter -> Edge.|29.78|-|-|-| | ConText -> Edge.|**22.74**|-|-|-| | Painter -> Den.|-|30.45|-|-| | ConText -> Den.|-|**34.57**|-|-| | Painter -> Edge. + Den. |27.17|32.68|-|-| | ConText -> Edge. + Den. |**19.79**|**36.83**|-|-| | Painter -> Edge. + Rem. + Seg. |26.48|-|30.12|66.43| | ConText -> Edge. + Rem. + Seg. |**17.76**|-|**40.23**|**76.61**| | Painter -> Edge. + Den. + Rem. + Seg.|25.22|34.73|31.19|68.65| | ConText -> Edge. + Den. + Rem. + Seg.|**16.14**|**38.19**|**41.26**|**77.98**| *For edge detection (Edge.), we apply the Canny operator over the whole image. For denoising (Den.), we follow [1] and corrupt the image with noise. To chain these tasks, we keep the noised counterpart as the 1st input, set the original image as the 2nd one, and the edge-detected image, along with the removal and segmentation masks, as the following parts.* --- ### W2 (Discussion on Self-prompting Strategy) >...self-prompting has limited effect in randomly selected settings...(e.g., SP = [0, 0.1, 0.2, ... 1.0]) would better demonstrate how to optimize this parameter. 
The observed weakened performance can essentially be attributed to the **in-context empowerment mechanism** of self-prompting (SP). SP enables the model to maintain in-context generalization by guiding it to perform shortcut-like reasoning based on the self-demonstrations. Compared to the no-SP process, where no valid counterparts are provided during training, **the model with SP may reduce the complexity of task reasoning by learning valid prompted patterns from the given self-demonstrations**. Consequently, this simplification can reasonably yield a performance drop in demonstration-free (randomly selected) cases. The table below provides a more detailed assessment of SP. **As the involvement of SP increases, the model generally exhibits decreased demonstration-free reasoning ability and improved in-context learnability,** highlighting the significance of proper utilization of SP. We will add this discussion. | Method | Seg. (RS / GT) fgIoU | Rem. (RS / GT) PSNR | | -------- | -------------- | -------------- | | Baseline | 79.14 / +0.65 | 38.59 / +0.37 | | SP-0.1 | 78.45 / +2.12 | 38.02 / +0.77 | | SP-0.2 | 78.02 / +3.98 | 37.67 / +0.92 | | SP-0.3 | 77.94 / +4.05 | 37.12 / +1.24 | | SP-0.4 | 77.63 / +4.17 | 36.82 / +1.45 | | SP-0.5 | 77.39 / +5.04 | 36.52 / +1.94 | | SP-0.6 | 77.14 / +5.83 | 36.12 / +2.13 | | SP-0.7 | 76.64 / +6.44 | 35.82 / +2.75 | | SP-0.8 | 76.33 / +6.73 | 35.37 / +3.47 | | SP-0.9 | 76.12 / +6.95 | 35.03 / +3.96 | | SP-1.0 | 75.94 / +7.04 | 34.81 / +4.37 | **The table follows the same setting as Table 5. RS (GT) refers to the randomly selected (ground-truth) demonstration case. Here Baseline refers to the baseline method + CAA.* --- ### W3 & 4 (Ablation Studies on ConText) >The ablation study is conducted only on SegGPT rather than on the proposed ConText model. 
Due to the page limit, we have presented several ablations on ConText in **Appendix B.3 & B.6**, including the **influence of synthetic training datasets**, **masking ratio**, and **different inference mechanisms**. We will adjust the formatting for better clarity. >...it lacks direct comparisons between individual task paths... Thanks for this advice. Below we present the suggested single-task-tuned experiments on HierText. Clearly, our single-task-tuned ConText outperforms the multi-task baselines, and our task-chaining ConText further shows a significant improvement over its single-task-oriented counterparts, validating the advantage of the task-in-chain concept (our response to your *W1* could also help validate this). We will add this analysis. |Method|PSNR (Rem.)| fgIoU (Seg.)| |-|-|-| | Painter -> Rem. + Seg.|28.17|60.60| | SegGPT -> Rem. + Seg. |28.16|65.23| | ConText -> Rem.|**37.35**|-| | ConText -> Seg.|-|**70.88**| | ConText -> Rem. + Seg.|**39.48**|**74.86**| **We implement the single-task-tuned ConText by restricting the designed modules to fuse only the single-task feature.* --- >### References >[1] Visual Prompting via Image Inpainting, NeurIPS'22. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response and the additional results. My questions have been resolved, and I have increased my score to 4. It would be helpful to also include a discussion on what types of chaining tasks could benefit from the proposed design along with the additional experiment results, which is also mentioned by reviewer AZf8. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. We are pleased to hear that our rebuttal has addressed your concerns. We also agree that discussing various types of chaining tasks is valuable. To this end, we'd like to provide a case-by-case analysis as follows: 1. 
*Highly Relevant Tasks*: For tasks that are highly related, such as text/watermark/object removal and segmentation, task-wise chaining can lead to significant collective improvements across all sub-tasks (+3.98 mIoU and +2.13 PSNR for OCR tasks). This is because these tasks are explicitly logically chained (i.e., the removed object directly corresponds to the segmented target), and our task chaining effectively highlights these patterns, facilitating the model's multi-task learning. 2. *Implicitly Relevant Tasks*: Additionally, we implement two low-level tasks with comparably implicit logical patterns: edge detection and image denoising. We find that these tasks also contribute to a certain degree of collective improvement. This underscores the potential correlation among various vision tasks. However, since these task-level chains do not exhibit explicit relationships like removal-segmentation, the observed collective improvement is less pronounced than that of the more explicitly chained tasks. 3. *Bridging All*: Finally, we explore the potential of bridging all these tasks regardless of explicit logical connections and find that this approach can yield further gains. Moreover, it is revealed that denoising, which precedes segmentation and removal, can enhance the overall model's performance by relieving some originally low-performing cases with noisy annotations. Meanwhile, the segmentation and removal tasks can be improved by edge detection, as it involves more fine-grained texture patterns. Overall, the above analysis illustrates the benefits of task-chaining and emphasizes the importance of understanding task-wise relationships. Combining this analysis with our response to Q2-Reviewer AZf8 should provide a more comprehensive discussion on task-chaining. We will add this in the updated version. Thanks again for your efforts and for increasing the score of our work.
Summary: This paper proposes ConText, an adaptation of the visual in-context learning (V-ICL) paradigm specifically tailored for optical character recognition tasks, focusing on text removal and segmentation. To address the single-step reasoning bottleneck in existing V-ICL methods, ConText introduces a task-chaining compositor, sequentially linking text removal and segmentation tasks. Additionally, the authors propose a context-aware aggregation module to enhance latent query representation and introduce self-prompting to maintain robust in-context reasoning and prevent overly specialized, context-free inference. Extensive experiments on multiple benchmarks demonstrate that ConText significantly outperforms existing V-ICL generalist and specialist methods, achieving new state-of-the-art results in both text removal and segmentation tasks. Claims And Evidence: The paper makes three primary claims: First, the authors claim that existing methods limit models' in-context learning (ICL) capability by restricting prompts to image-label pairs, and propose a task-chaining compositor to enhance in-context reasoning. Second, they argue that their proposed context aggregation module effectively improves contextual understanding. Third, the authors assert that the inherent heterogeneity of text is more complex than object-level scenes, motivating their simple yet effective self-prompting strategy. Collectively, these claims underpin the development of ConText, which achieves state-of-the-art performance on several benchmarks for text removal and segmentation, verifying the combined effectiveness of the proposed methods. Individually, the authors provide supporting evidence for each claim; for instance, Fig. 5 demonstrates that the task-chaining compositor significantly enhances out-of-domain generalization capabilities. Methods And Evaluation Criteria: The proposed methodology is clearly written and logically aligns with the ICL framework for text removal and segmentation. 
The benchmarks, baselines, and evaluations are comprehensive. Theoretical Claims: No theoretical claims were evaluated. Experimental Designs Or Analyses: The experimental design is sound, and the analysis is comprehensive. Supplementary Material: Yes, according to what is mentioned by the authors in the main content. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: The paper introduces fresh ideas in visual in-context learning, particularly through the task-chaining compositor and context aggregation module, which enhance task chaining. The methodology is easy to understand yet effective, offering potential insights for future research in visual in-context learning. Weaknesses: While the task-chaining compositor is shown to be effective, the paper only explores chaining two tasks, i.e., text removal and text segmentation. This may be due to dataset limitations, but extending the approach to additional tasks, such as watermark removal or text registration, would be valuable for further exploration. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### W1 (Additional Task Exploration)
>While the task-chaining compositor is shown to be effective...This may be due to dataset limitations, but extending the approach to additional tasks, such as watermark removal or text registration, would be valuable for further exploration.

Thanks for this valuable advice. We also appreciate your understanding regarding the challenging scarcity of relevant OCR benchmarks. Considering data availability, we have additionally explored our ConText on one prevailing *watermark removal* benchmark, CLWD [1]. The table below reports the results of SegGPT and a leading specialist [2]. Clearly, our approach demonstrates superior performance, further validating the **task-wise generalizability of ConText** (we also kindly invite you to refer to our response for **W1-Reviewer L63m**, where we provide additional generalizations to more tasks). We will include this analysis.

| Method | PSNR↑ (Rem.) | fgIoU↑ (Seg.) |
| ------- | -- | ---- |
| SegGPT (generalist) | 30.11 | 74.42 |
| PFMNet [2] (specialist) | 39.45 | 79.09 |
| ConText | **40.73** | **82.16** |

**CLWD includes both pixel-level binary watermark masks and the removal images. We utilize its training set to train SegGPT and ConText with a similar learning strategy. Rem. (Seg.) refers to the removal (segmentation) task.*

---
>### Reference
>[1] WDNet: Watermark-decomposition network for visible watermark removal, CVPR22.
>[2] Fine-grained Visible Watermark Removal, ICCV23.
TabSDS: a Lightweight, Fully Non-Parametric, and Model Free Approach for Generating Synthetic Tabular Data
Accept (poster)
Summary: This paper introduces a model-free approach for synthetic data generation based on direct data perturbation. The method operates by sequentially shuffling feature values while conditioning on binned representations of other features. The authors compare their approach to generative models, evaluating its effectiveness in generating synthetic data. Claims And Evidence: >[...] it extends SJPPDS from a data perturbation approach (where the data is shuffled but no new values are generated) into a fully synthetic data method. While the authors introduce an initial step aimed at producing new values for numerical features that did not exist in the training set, in my view, the proposed method should still be classified as a data perturbation approach, as it is not unlike adding noise to an existing dataset. The authors compare their method with generative models, which are forced to compress a dataset into a set of model parameters which are then used to generate new data. To me, this seems like a fundamentally different approach. > In addition to being easier to tune, this allows for a very precise control of the trade-off between data utility/fidelity and data privacy [**Resolved Post-rebuttal**] While this may be true, I find the experimental setup fails to properly analyze this trade-off when comparing to the other methods (see **Methods And Evaluation Criteria**). > [...] show that it consistently shows very competitive performance against the alternative approaches (including TabDDPM - the current state-of-the-art for data quality) The authors miss more recent works that have claimed to outperform TabDDPM. A non-exhaustive list would be: [1] Alexia Jolicoeur-Martineau et al. Generating and Imputing Tabular Data via Diffusion and Flow-based Gradient-Boosted Trees. AISTATS 2024 [2] Hengrui Zhang et al. Mixed-Type Tabular Data Synthesis with Score-based Diffusion in Latent Space. ICLR 2024 [3] Juntong Shi et al. 
TabDiff: a Mixed-type Diffusion Model for Tabular Data Generation. ICLR 2025 While I do not believe that comparison with these methods is necessary since TabDDPM's performance is, in my experience, comparable, this claim should probably be adjusted. [**Post-rebuttal note:**] I considered this a minor issue and the authors added more relevant baselines like PATE-GAN and SMOTE Methods And Evaluation Criteria: [**Resolved Post-rebuttal**] In my opinion the selection of benchmark datasets and generative baselines is adequate. I believe, however, that the authors should also compare to other data perturbation approaches, such as approaches that add noise to the data. The proposed method generates data by directly manipulating an existing dataset, in contrast to the baselines which are forced to distill a training set down to a more compressed representation which is then used to generate new data. The question of privacy is therefore of central importance and I have two main concerns in this regard: 1. [**Resolved Post-rebuttal**] The choice of the privacy metric. On some datasets we see that the membership inference score is worse than a random classifier. Furthermore, when looking at how this MIA score varies with $n_c$ in Figure 3 and Figures 12-22, we see that it sometimes has little effect and often is not monotonic and varies wildly. All of these observations cast doubt on how adequately this metric can be used to assess privacy. 2. [**Resolved Post-rebuttal**] Instead of picking a single value of $n_c$, comparison should be made considering both privacy and utility at the same time. This could be accomplished, for example, by making plots similar to those in [1], which show the full trade-off curve between privacy and utility. Alternatively, a fair comparison would control for one aspect and measure the other. 
This approach, however, seems less viable given that the other generative baselines don't necessarily allow for this fine control using a single hyperparameter. As it stands, we are presented with two separate tables where it looks like TabSDS is sometimes achieving higher utility at the cost of less privacy. [1] van Breugel et al. Membership Inference Attacks against Synthetic Data through Overfitting Detection (2023) Theoretical Claims: I checked the proof of Theorem 1 and found no issues. Experimental Designs Or Analyses: The overall experimental design is sensible in my opinion. The hyperparameters of competing methods are tuned or taken from previous works. My main concern is with the choice of metrics and how they are presented (see **Methods And Evaluation Criteria**). Supplementary Material: I did not check the code provided as supplementary material. Relation To Broader Scientific Literature: This work extends previous work [1] in two directions: 1. It adds a prior step aimed at generating new values for numerical features. 2. It extends the method (SJPPDS) to work with categorical features. While these extensions are interesting, I still believe that the question of originality arises. The second point seems rather trivial and the impact of the first is not analyzed regarding additional privacy. The main draw of the paper is, in my opinion, the extensive experiments comparing the method to generative approaches. [1] Elias Chaibub Neto. Statistical disclosure control for numeric microdata via sequential joint probability preserving data shuffling Essential References Not Discussed: Not that I am aware of. 
In my opinion, some comparison with this type of approach is crucial to prove the value of the proposed method. The Appendix includes additional reproducibility details and an exhaustive analysis of the results on a dataset-by-dataset basis and is one of the strong points of the paper. Other Comments Or Suggestions: N/A Questions For Authors: - [**Resolved Post-rebuttal**] How would this method compare to other perturbation approaches based on adding noise to data? - [**Resolved Post-rebuttal**] What do the privacy-utility trade-off curves look like for both approaches? - [**Resolved Post-rebuttal**] Why would I prefer TabSDS over the simpler option? [**Post-rebuttal Note:**] The authors have provided convincing evidence that their method outperforms simpler approaches like noise addition. While the results against SMOTE and particularly TabDDPM are much more mixed, and it seems to me that the latter generally achieves a better trade-off than those made possible by TabSDS, I can see the value in the alternative proposed approach. As a result, I have raised my score from a 2 to a 4. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your very thoughtful comments/suggestions. We address below your main concerns, and we would be happy to follow up on the remaining ones (or any additional questions) during the discussion period. Please, let us know. >Authors should compare to approaches based on adding noise to data … My main concern is with the choice of privacy metric and how the comparisons are presented. Comparison should be made using plots which show the full tradeoff curve between privacy and utility. We now included new comparisons with the additive noise approach using 2 new privacy metrics (and display the results using privacy/utility tradeoff curves as suggested). Following the suggestion of Reviewer Y1nw, the newly adopted metrics are the sorted version of the Standard Deviation Interval Distance (SDID) metric (which measures robustness to attribute disclosure attacks), and the sorted version of the Distance Based Record Linkage (DBRL) metric (which measures robustness to re-identification attacks). Please, see our reply to Reviewer Ai12 for further details about these metrics. We also adopted Energy Distance (ED) as a fidelity metric. (ED is closely related to MMD and can be used to measure the distance between 2 multivariate distributions.) We implemented the noise addition comparison using the sdcMicro R package. Noise addition was evaluated over a grid of 13 increasing levels of additive noise ranging from 1% to 50% of the standard deviation of each variable. Figures 1-10 in the linked pdf report the results: https://drive.google.com/file/d/1PYa_pgyQqqWwPI0Dq0ichscMmYftoyfK/view?usp=sharing In each figure, panel a shows the ED vs DBRL tradeoff curves for noise addition (blue) and TabSDS (black). The red dot represents the selected value of $n_c$ based on the DCR criterion, while the purple symbols represent the other baseline generators. To better compare the generators, panel b shows the same results without displaying the noise addition curve. 
Panels c and d show analogous results in terms of the ED vs SDID tradeoff. To further illustrate the influence of the amount of noise and of $n_c$ on these privacy and fidelity metrics, panels e-i show the metric values against the perturbation parameter values. Inspection of the figures shows that: 1. TabSDS usually achieves a better ED vs DBRL tradeoff than the additive noise approach (note how the black curves are usually closer to the bottom left corner than the blue curves in panel a). 2. TabSDS tends to achieve better fidelity (lower ED) than the other approaches (the red dot tends to be closer to 0). 3. In terms of re-identification risk (panel b), all methods generated low risks (below 2.5% across all datasets) and none of the methods stands out as systematically better or worse than the other methods. 4. In terms of attribute disclosure (panel d), again, the risks tended to be low for all methods. >While these extensions are interesting, I still believe that the question of originality arises. The second point [extension to categorical variables] seems rather trivial and the impact of the first [generating new values for numerical features] is not analyzed regarding additional privacy. Regarding the extension to categorical variables, while our approach is simple, alternative approaches based on the computation of ranks on categorical variables are considerably more involved and difficult to implement in practice. For instance, [1] proposes the use of an ontology-based semantic distance. Regarding the evaluation of additional privacy protection achieved by new values, in our new evaluations we also compare TabSDS against a simplified version of the algorithm (denoted TabSJPPDS), which discards the new value generation step. The results are presented in panels e, f, and g of Figures 1-10 on the pdf linked above. 
Overall, generating new values improves privacy protection, as shown by the lower disclosure risks of TabSDS (black) relative to TabSJPPDS (brown) in panels e and f. As expected, generating new values also leads to a decrease in fidelity, as illustrated by the higher ED values of TabSDS relative to TabSJPPDS (panel g). Finally, note that these extensions have important practical impacts. First, by handling categorical variables TabSDS can be much more widely applied than SJPPDS (since tabular datasets often contain mixed data). Second, even for exclusively numeric datasets, in addition to the better re-identification and attribute disclosure protection, TabSDS also offers better protection against a trivial form of membership inference attack against which SJPPDS is helpless. (Namely, for datasets containing unusual values, an attacker might immediately infer membership by recognizing the presence of unique values of a given record in the perturbed dataset.) [1] Domingo-Ferrer et al (2013). Information Sciences 242:35-48. Thanks again for your review, and let us know if you have any additional questions. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I find the additional results encouraging but I have a couple follow-up comments/questions: - In the provided figures, the range of values used for the added noise makes it so that one can't really properly see the remaining approaches. There is a zoomed-in version of the pictures, but it doesn't really show the trade-off curve for the proposed method. Could you zoom in to the area of interest and still show the curve for TabSDS? - What was the reasoning behind the change to ED for the fidelity metric? I thought that the previous setup with the AUC of a classifier was easy to understand and, as far as I am aware, no reviewers raised any issues with it. In contrast, ED seems to assume the existence of a meaningful metric on the input space, which I find questionable for tabular data. 
How did you choose this metric? How are categorical variables handled? And how are variables scaled so that the distance between points is a sensible metric given the different semantic meanings of each variable? --- Reply to Comment 1.1.1: Comment: Sorry for the late reply. (We ended up implementing the SMOTE baseline, as suggested by Reviewer 9VCx, and wanted to include it in these comparisons.) The reviewer raises good reservations wrt the use of ED as a fidelity metric. We regenerated the tradeoff plots replacing ED by the detection test AUC for discriminating synthetic and real data. (We adopted ED because it has an efficient R implementation and is analogous to the Wasserstein distance, the fidelity metric in the tradeoff plot of van Breugel et al 2023, pointed out by the reviewer.) As for the privacy metrics, we now include additional comparisons against DCR (in addition to the DBRL and SDID metrics). Figures 1 and 2 of the linked PDF https://drive.google.com/file/d/1MGP02tn1rmiZansoQXP27EbXhwrtnC-_/view?usp=sharing show tradeoff plots for all datasets (now including the tradeoff curve for the TabSDS method). In addition to the original baselines, they now also include comparisons vs SMOTE (based on 5 and 20 nearest neighbors), and vs ADSGAN and PATEGAN (as requested by Reviewer Ai12). The results for these DP-based models were, nonetheless, based on default hyperparam. choices and should be taken with a grain of salt. The left panels of Figs 1 and 2 show DCR vs AUC plots. The red line represents the DCR score comparing the training and test sets and provides an estimate of the DCR value we would expect to see for an ideal generator able to draw i.i.d. data from the same distribution as the training data (as described in Section G5). The red dot represents the selected value of $n_c$ based on the DCR criterion (i.e., the DCR value closest to the test set DCR). 
The middle and right panels show the tradeoff plots comparing DBRL vs AUC and SDID vs AUC, respectively. (Note the DBRL and SDID values differ slightly from the values reported previously due to a small bug in our code, which is now fixed.) Overall, DDPM, TabSDS, and SMOTE tended to outperform the other methods in terms of fidelity. These 3 methods tended to show somewhat balanced performances, with none of the methods consistently outperforming the others. However, in terms of privacy, SMOTE tended to be considerably worse than DDPM and TabSDS with respect to DCR. It also tended to be worse than DDPM (and than TabSDS, to a lesser extent) in terms of DBRL and SDID. ADSGAN, PATEGAN, TVAE, CTGAN, and BayesNet tended to trade high data privacy for low data fidelity. In all datasets, these methods showed AUCs close to 1, low DBRL and SDID, and high DCR. (These high DCR values are likely a consequence of these models failing to approximate well the distribution of the training data.) ARF tended to do slightly better than these models in a few datasets. Figs 2 to 12 report additional comparisons for each of the datasets. Panels a, b, and c present the tradeoff curves comparing TabSDS (black) vs additive noise (blue). The additive noise approach showed high AUC values across all noise levels across most datasets and is not competitive against TabSDS. Panels g, h, and i compare TabSDS against TabSJPPDS, and illustrate the additional privacy protection achieved by generating new values. (Note the higher DCR and lower SDID scores. The differences were less clear-cut for DBRL.) Panels j, k, and l illustrate that higher noise levels lead to increased data privacy. The results described above (and in the paper) were based on DCR values computed on the original data scales. Categorical variables were one-hot-encoded prior to DCR computation. (Note this should not cause issues given that the datasets contained only a few binary categorical variables. 
Datasets containing only or mostly categorical variables, such as the Mushroom and Adult datasets, were not evaluated given that noise addition can only be applied to numeric data.) As pointed out by the reviewer, one potential caveat when variables have different scales is that distance-based metrics (such as DCR) might be dominated by the wider-range variables. To evaluate this potential issue, we also performed comparisons based on DCR values computed on scaled data. (Reported here: https://drive.google.com/file/d/1Kw1x3pDmiLT-ApTFR8vB5fNxjKAON2B5/view?usp=sharing) These results show the same qualitative conclusions as before. The main quantitative difference was that the selected $n_c$ values tended to be lower, leading to a decrease in fidelity and an increase in privacy of TabSDS relative to the previous results. Finally, the linked PDF shows the qualitative comparisons of TabSDS against the newly included baselines: https://drive.google.com/file/d/1r1k8prAyA-8AUXJYysPiXeQtk5-IlqbK/view?usp=sharing As before, TabSDS tended to generate more realistic-looking marginal distributions than the other methods (Figs 2 to 11). SMOTE, however, tended to better recapitulate the correlations from the real data (Fig 1). The DP-based methods generated considerably lower quality data. Please, let us know if you have any other questions, and sorry again for the late reply.
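The DCR-based selection of $n_c$ discussed in this exchange can be illustrated with a toy sketch. This is an illustrative reconstruction, not the paper's implementation: DCR is taken here to be the median distance from each synthetic record to its closest real record (the actual aggregation may differ), and the noise-perturbed candidate sets merely stand in for TabSDS outputs at different $n_c$ values.

```python
import numpy as np

def dcr(synthetic, real):
    """Distance to Closest Record: for each synthetic row, the Euclidean
    distance to its nearest real row; summarized here by the median
    (the paper's exact aggregation may differ)."""
    diffs = synthetic[:, None, :] - real[None, :, :]   # (n_syn, n_real, d)
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    return np.median(dists.min(axis=1))

rng = np.random.default_rng(0)
train = rng.normal(size=(200, 3))
test = rng.normal(size=(100, 3))
# DCR of held-out real data vs. training data: the target an ideal
# generator drawing i.i.d. data from the same distribution should match
reference = dcr(test, train)

# toy "synthetic" candidate sets, one per hypothetical n_c value
candidates = {nc: train + rng.normal(scale=1.0 / nc, size=train.shape)
              for nc in (5, 10, 20, 40)}
scores = {nc: dcr(s, train) for nc, s in candidates.items()}
# select the n_c whose DCR is closest to the held-out reference
best_nc = min(scores, key=lambda nc: abs(scores[nc] - reference))
```

The selection rule mirrors the rebuttal's description: among candidate perturbation levels, keep the one whose synthetic-vs-train DCR best matches the train-vs-test DCR.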
Summary: The paper proposes a novel method for synthetic data generation. This new method is based on two basic actions: generating new values for each of the features by sampling from its "interpolated marginal", then shuffling the original data following the SJPPDS algorithm, and then matching the ranking of the shuffled data with the synthetic ones. Claims And Evidence: One of the worries I have with this paper is what happens when the number of features increases. As the number of features increases, we keep shuffling. Wouldn't this make the distribution of the features become more and more similar to the distribution obtained with random shuffling? In the experimental analysis the authors tested the method with datasets with up to 27 features. What happens if we take datasets with 100 or more features? Finally, it would be interesting to have a discussion/analysis of the relationship between $n_c$ and the privacy metrics. If $n_c$ grows, can we still retain good privacy values? Methods And Evaluation Criteria: In the methods used for comparison some key methods are missing. In particular: WGAN [Arjovsky et al., 2017] and GReaT [Borisov et al., 2023]. In the metrics used for privacy, the authors assessed the robustness of the generated data wrt membership attacks. However, it would be interesting to also evaluate its robustness wrt attribute disclosure attacks (i.e., attacks that try to gain access to sensitive attributes of an entity from the real dataset) and re-identification attacks (i.e., attacks that try to map a synthetic data point back to the original dataset) Theoretical Claims: See claims and evidence box Experimental Designs Or Analyses: Given that the model is not always the best performing model, it would be useful to perform a Friedman statistical test (once you have also included GReaT and WGAN) together with a Nemenyi test to check the statistical significance of your results. 
For more info see (Statistical Comparisons of Classifiers over Multiple Data Sets, Demsar, 2006). Supplementary Material: I read the algorithms in the supplementary material and the additional details on the experimental analysis. Relation To Broader Scientific Literature: The authors have covered a good part of the literature Essential References Not Discussed: All the essential references are discussed. Other Strengths And Weaknesses: No other strengths and weaknesses need to be discussed. Other Comments Or Suggestions: The paper contains a lot of algorithms, which often are not even needed and risk simply confusing the reader. For example, Algorithm 8 is simply the application to every column of the matrix $\mathbf{X}$ of the algorithm InterpolatedOrderStatsSampling. Figures are also not aligned well (see, e.g., Figure 5c and 5d). Finally, the names of the other methods (e.g., TVAE) should be capitalized following the capitalization given by their authors. Questions For Authors: Please see the boxes above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your comments and thoughtful suggestions. While we could only address your main concerns here, we would be happy to follow up on the remaining ones (or any additional questions) during the discussion period. Please, let us know. >One of the worries I have is what happens when the number of features increases. Wouldn't the distribution become more and more similar to the distribution obtained with random shuffling? No. The sequential joint probability preserving data shuffling approach employed to shuffle the data is designed to preserve the association structure of the data columns, irrespective of the number of columns. To illustrate this point, we: 1. Simulated a dataset $X$ from a multivariate normal distribution of dimension 200 (with highly structured correlations). In our experiment, this simulated data plays the role of the "real" data. 2. Applied TabSDS to subsets of $X$ with increasing numbers of features. 3. Compared how synthetic datasets generated by TabSDS recapitulated the real data's correlation structure as the number of features increased. Explicitly, in our experiment, we considered 10 subsets of $X$, composed, respectively, of the first 20, 40, …, 180, and 200 features of $X$. For each of the 10 subsets, we generated synthetic data (using the same $n_c$ value) and computed the correlation matrices of the real and the synthetic data subsets. To measure how well the synthetic data recapitulated the correlations of the real data, we computed the L2 distance (L2d) between the correlation matrices (defined as average($(r_j - s_j)^2$), where $r_j$ and $s_j$ represent entries of the real and synthetic data correlation matrices and the average is taken over the upper (or lower) triangular entries of the matrices). The above procedure was repeated using $n_c$ values set to 5, 10, 15, 20, 25, and 30. 
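The L2d metric just described can be sketched in a few lines of numpy. The AR(1)-style toy covariance below is illustrative only, not the actual 200-dimensional simulation from the rebuttal:

```python
import numpy as np

def l2d(real, synthetic):
    """Average squared difference between the upper-triangular entries of
    the two datasets' correlation matrices (the L2d of the rebuttal)."""
    r = np.corrcoef(real, rowvar=False)
    s = np.corrcoef(synthetic, rowvar=False)
    iu = np.triu_indices_from(r, k=1)   # strictly upper-triangular entries
    return np.mean((r[iu] - s[iu]) ** 2)

rng = np.random.default_rng(1)
d = 20
# toy "real" data with a structured (AR(1)-like) correlation matrix
cov = np.fromfunction(lambda i, j: 0.8 ** np.abs(i - j), (d, d))
real = rng.multivariate_normal(np.zeros(d), cov, size=1000)
good = rng.multivariate_normal(np.zeros(d), cov, size=1000)  # same distribution
bad = rng.normal(size=(1000, d))                             # independent features
```

A synthetic sample drawn from the same structured distribution (`good`) yields a much smaller L2d against `real` than one with independent features (`bad`), which is the behavior the upward-trend check in the rebuttal relies on.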
The results are reported in Figures 1 to 5 on the linked pdf file: https://drive.google.com/file/d/1dB9Vsb0IucFZBCft_k13Ljb82FXr0zE1/view?usp=sharing Figure 1a reports the values of the L2d for each of the 10 subsets. (Each boxplot reports the L2d values across the 6 distinct $n_c$ values.) Figure 1b shows the L2d values across the 10 subsets for each of the $n_c$ values. As expected, larger $n_c$ values lead to higher fidelity synthetic data and lower L2d values. This figure shows that the ability to recover the correlation structure of the real data remains largely constant as we increase the number of features from 20 to 200. (We would expect to see an upward trend if the method's performance worsened with larger numbers of features.) Finally, Figures 2 to 5 compare the correlation matrices of the real and synthetic data for subsets of 20, 40, 100, and 200 features (based on $n_c = 30$). They illustrate how TabSDS is able to recover very well the correlation structure of the real data across all these subsets. >In the comparison some key methods are missing. In particular: WGAN and GReaT We restricted our comparisons to generators implemented in Synthcity. While GReaT is available, preliminary checks showed it was computationally unfeasible to include it in our comparisons (we ran our experiments on CPUs). (Also, recent work by Zhang et al. (2024) [1] has shown that GReaT tends to be outperformed by TabDDPM - which is included in our comparisons.) WGAN is not available in Synthcity. [1] Zhang et al (2024) Mixed-type tabular data synthesis with score-based diffusion in latent space. ICLR 2024. >the authors assessed robustness of the generated data wrt membership attacks ... it would be interesting to also evaluate robustness wrt attribute disclosure attacks and re-identification attacks This is a good suggestion. 
We have now included new evaluations based on the sorted versions of the Standard Deviation Interval Distance (SDID) and Distance Based Record Linkage (DBRL) metrics (which measure robustness to attribute disclosure and re-identification attacks, respectively). Please, see our reply to Reviewer Ai12 for further details about these metrics. As suggested by Reviewer r8L7, we now report these new results using tradeoff curves comparing these privacy metrics against the Energy Distance (ED) fidelity metric. Results are presented in the linked pdf file: https://drive.google.com/file/d/1PYa_pgyQqqWwPI0Dq0ichscMmYftoyfK/view?usp=sharing Please, see our reply to Reviewer r8L7 for the interpretation of these results. >it would be interesting to have a discussion/analysis of the relationship between n_c and the privacy metrics. If n_c grows, can we still retain good privacy values? As $n_c$ grows, TabSDS generates less private data. This point is discussed in lines 376 to 380 of the manuscript. Also, Figure 4 (main text) and Figures 11d - 22d (Appendix) illustrate this by showing decreasing DCR privacy metric values as $n_c$ increases. Similarly, panels e and f of Figures 1-10 (see the linked pdf file) show rising disclosure risks in sorted DBRL and SDID metrics as $n_c$ grows.
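For reference, the Energy Distance used as the fidelity metric in these tradeoff curves has a standard sample estimator, $2\,E\|X-Y\| - E\|X-X'\| - E\|Y-Y'\|$ (Szekely and Rizzo). The sketch below is a minimal numpy version on toy Gaussian samples, not the efficient R implementation mentioned in the rebuttal:

```python
import numpy as np

def energy_distance(x, y):
    """Sample energy distance between two multivariate samples:
    2*E||X-Y|| - E||X-X'|| - E||Y-Y'||  (V-statistic form, biased but simple)."""
    def mean_pdist(a, b):
        diffs = a[:, None, :] - b[None, :, :]          # (n_a, n_b, d)
        return np.sqrt((diffs ** 2).sum(axis=-1)).mean()
    return 2 * mean_pdist(x, y) - mean_pdist(x, x) - mean_pdist(y, y)

rng = np.random.default_rng(2)
real = rng.normal(size=(300, 4))
close = rng.normal(size=(300, 4))         # drawn from the same distribution
far = rng.normal(loc=2.0, size=(300, 4))  # shifted distribution
```

The value is near zero for two samples from the same distribution and grows as the distributions separate, which is what makes it usable as the x-axis of a privacy/fidelity tradeoff curve.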
Summary: The paper introduces TabSDS, a non-parametric and model-free method for generating synthetic tabular data. Unlike deep generative models (DGMs), which are computationally expensive and require extensive hyperparameter tuning, TabSDS leverages rank-based transformations and data shuffling to approximate the joint probability distribution of real data. The method extends sequential joint probability preserving data shuffling (SJPPDS) by incorporating categorical features and generating entirely new data points rather than merely shuffling existing data. ## update after rebuttal I thank the authors for their rebuttal and remain positive about the work. Claims And Evidence: Claim 1: TabSDS offers competitive fidelity and utility compared to state-of-the-art models. -- Supported by benchmarking against TabDDPM, ARF, TVAE, CTGAN, and Bayesian networks using the Synthcity library. Results demonstrate strong performance in fidelity and downstream ML utility. Claim 2: TabSDS is significantly faster than both deep generative models and adversarial random forests (ARF). -- Experimental results show orders-of-magnitude improvements in runtime across multiple datasets. Claim 3: TabSDS improves privacy compared to deep generative models. -- Measured via membership inference attack (MIA) success rates, showing reduced susceptibility compared to deep models and ARF. Methods And Evaluation Criteria: The evaluation settings and criteria are appropriate to support the claims. The evaluation assesses fidelity (ROC AUC on synthetic vs. real detection), utility (ROC AUC on downstream ML tasks trained with synthetic data), and privacy (membership inference attack (MIA) success rate). These characteristics are suitable and the metrics used appropriate. The comparison baselines are TabDDPM, ARF, TVAE, CTGAN, and Bayesian networks, which is a good representative set of the main synthetic data generation methods for tabular data. 
Evaluation includes 12 datasets containing mixed numerical and categorical data. These datasets are often used in the literature. Theoretical Claims: I checked the proof of Theorem 4.1 (Rank-matching step: Maintains the joint probability distribution of the original data). They seem sensible to me, but I cannot guarantee it. Experimental Designs Or Analyses: Yes. The evaluation looks sound to me (cf. metric and evaluation criteria section). Experiments were repeated 10 times to account for random factors. Supplementary Material: - Algorithm details: Full pseudocode for SJPPDS, SyntheticSJPPDS, CategoricalSJPPDS, MixedSyntheticSJPPDS. - Tuning analysis: Details on how $n_c$ affects fidelity, utility, and privacy. - Runtime comparisons: TabSDS vs. baselines on execution time. Relation To Broader Scientific Literature: The key contribution is to propose a new tabular data generation method that is more efficient than state-of-the-art approaches (like deep generative models) while remaining competitive in terms of fidelity, utility, and privacy. Tabular data generation remains an unsolved problem (or at least, existing solutions are not entirely satisfactory), so I believe there is value in pursuing research in this area. The proposed approach relies on SJPPDS, which is novel compared to the usual practice of the literature. Essential References Not Discussed: The paper could benefit from a comparison with differential privacy-based synthetic data generation methods. Other Strengths And Weaknesses: Efficiency: Faster than generative models, with minimal parameter tuning required. Flexibility: Handles both categorical and numerical data, unlike SJPPDS. Privacy-Utility Tradeoff: Provides explicit control over fidelity and privacy through the $n_c$ parameter. Other Comments Or Suggestions: None Questions For Authors: How does TabSDS compare to differentially private synthetic data methods? How does it handle imbalanced categorical variables? Code Of Conduct: Affirmed. 
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your thoughtful comments and questions. >How does TabSDS compare to differentially private synthetic data methods? We have now included comparisons with two additional DP-based methods available in Synthcity: ADSGAN [1] and PATEGAN [2]. (We also evaluated DPGAN [3] but didn’t include it in the comparisons because it was considerably slower and tended to generate considerably lower fidelity data.) Due to time constraints, we were unable to optimize the hyperparameters of these two DP-based generators, and the results presented here are based on the default hyperparameter values in Synthcity. (Hence, they should be taken with a grain of salt and are intended for the appendix.) Following a suggestion by Reviewer Y1nw, we now evaluate the methods w.r.t. two additional privacy metrics: the sorted version of the Distance Based Record Linkage (DBRL) metric, which measures robustness to re-identification attacks; and the sorted version of the Standard Deviation Interval Distance (SDID) metric, which measures robustness to attribute disclosure attacks. The standard versions of the DBRL [4] and SDID [5] metrics are traditionally used in the Statistical Disclosure Control field to evaluate perturbation methods. Application to synthetic data requires a prior sorting step as described in [6]. (The basic idea is to sort the rows of both the original and synthetic datasets according to the values of a given column of the data prior to the computation of the DBRL and SDID metrics.) After the sorting step, the DBRL metric is implemented by computing the Euclidean distances between each record in the synthetic dataset and all records in the real dataset. A synthetic record is considered “linked” when the nearest record in the real data turns out to be the corresponding real record. The metric is defined as the proportion of synthetic records linked to real records.
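For concreteness, the sorted DBRL computation just described can be sketched in a few lines (an illustrative numpy version assuming equal-size, fully numeric datasets; our actual implementation may differ in details):

```python
import numpy as np

def sorted_dbrl(real, synth, sort_col=0):
    """Sorted Distance-Based Record Linkage: after sorting both datasets by
    one column, a synthetic record counts as 'linked' when its nearest real
    record (Euclidean distance) sits at the same row position. Returns the
    proportion of linked records, i.e., the re-identification risk."""
    real = real[np.argsort(real[:, sort_col])]
    synth = synth[np.argsort(synth[:, sort_col])]
    # Pairwise Euclidean distances: every synthetic row vs. every real row.
    d = np.linalg.norm(synth[:, None, :] - real[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return (nearest == np.arange(len(synth))).mean()

rng = np.random.default_rng(1)
real = rng.normal(size=(200, 3))

# A 'synthetic' set that is just the real data lightly perturbed is risky:
leaky = real + rng.normal(scale=0.001, size=real.shape)
# An independent draw from the same distribution is much safer:
fresh = rng.normal(size=(200, 3))

print(sorted_dbrl(real, leaky))  # high linkage proportion
print(sorted_dbrl(real, fresh))  # low linkage proportion
```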
After the sorting step, the standard SDID metric corresponds to the proportion of real records inside a standard deviation interval whose center is the corresponding synthetic record. Following the suggestion of Reviewer r8L7, to better facilitate the visualization of the tradeoffs between privacy and fidelity we now report plots comparing these two new privacy metrics against the Energy Distance (ED) fidelity metric. (ED is closely related to maximum mean discrepancy and can be used to measure the distance between 2 multivariate distributions.) Figures 1 to 4 in the linked pdf report these comparisons: https://drive.google.com/file/d/1M93WfhXtWQVRLXsvsgoac0S1O_kAqjt_/view?usp=sharing Figure 1 shows scatterplots of ED vs DBRL for the original methods and ADSGAN. Overall, TabSDS tended to generate higher fidelity data than the other methods (the red dot tends to be closer to 0), while ADSGAN tends to generate lower fidelity data (the inverted purple triangle tends to be farther from 0 than most methods on most of the datasets). In terms of re-identification risk, all methods generated low risks (below 2.5% across all datasets) and none of the methods stood out as systematically better or worse than the other methods. Not even ADSGAN tended to do systematically better on this metric. This might, however, be due to the fact that the default hyperparameter value in Synthcity is set for moderate privacy protection (or due to suboptimal model training). Figure 2 shows analogous results for the ED vs SDID comparison. Again, the SDID risk tended to be low for all methods (and was 0 for ADSGAN across all datasets). Figures 3 and 4 add the comparisons against PATEGAN (open purple diamond). Overall, PATEGAN tended to achieve considerably lower data fidelity than ADSGAN (note the extended x-axis). [1] Yoon et al (2019) Anonymization through data synthesis using generative adversarial networks (ADS-GAN): a harmonizing advancement for AI in medicine. 
[2] Jordon et al (2019) PATE-GAN: generating synthetic data with differential privacy guarantees. [3] Xie et al (2018) Differentially private generative adversarial network. [4] Domingo-Ferrer and Torra (2001) A quantitative comparison of disclosure control methods for microdata. [5] Mateo-Sanz et al (2004) Outlier protection in continuous microdata masking. [6] Chaibub Neto (2024) Statistical disclosure control for numeric microdata via sequential joint probability preserving data shuffling. >How does TabSDS handle imbalanced categorical variables? When categorical variables are imbalanced, synthetic data generators can struggle to preserve the frequencies of rare categories observed in the real data. This is not an issue for TabSDS which, by construction, preserves the marginal distribution of the categorical variables (since the data from each categorical variable is simply shuffled around, so that the marginal frequencies of the categories are preserved). Thanks again, and please let us know if you have any further questions.
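As a quick sanity check of the marginal-preservation argument above: shuffling a categorical column changes the row pairing but never the category counts, even for heavily imbalanced categories (a toy illustration, not the full TabSDS pipeline):

```python
import random
from collections import Counter

rng = random.Random(42)

# Heavily imbalanced categorical column: 90 A's, 9 B's, a single C.
column = ["A"] * 90 + ["B"] * 9 + ["C"] * 1
shuffled = column[:]
rng.shuffle(shuffled)

# The joint pairing with other columns changes, but the marginal
# frequencies of the categories are exactly preserved.
assert Counter(shuffled) == Counter(column)
print(sorted(Counter(shuffled).items()))  # [('A', 90), ('B', 9), ('C', 1)]
```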
Summary: The paper introduces TabSDS, a non-parametric, lightweight framework for generating synthetic tabular data. The approach is based on the Sequential Joint Probability Preserving Data Shuffling (SJPPDS) algorithm, a perturbation method that relies on restricted feature permutations to preserve the joint distribution of the data. Specifically, given T columns or features, the algorithm performs multiple restricted permutations of T-1 columns with respect to the remaining column. Each column's data is shuffled, ensuring that the marginal distributions are preserved. The authors extend this approach by generating new data for each feature, effectively providing synthetic data while enhancing privacy. This generation process is carried out using a new algorithm called IOSSampler, which samples new data uniformly from the values of ordered sub-samples of the input feature. The proposed method is significantly faster than existing deep learning and machine learning approaches, while it trades off privacy for better realism. ## update after rebuttal The authors addressed some of my concerns, but I believe the generation of new data in practical settings could be improved. I keep the original score. Claims And Evidence: Yes, claims are supported by theorems/proofs and experiments. Methods And Evaluation Criteria: The proposed method and evaluation are reasonable. The selected benchmarks could be extended to include, for example, shallow interpolation-based methods, e.g., SMOTE (Chawla et al., 2002). Theoretical Claims: Theorem 4.1 seems correct to me, but I didn't check the formal proof. Experimental Designs Or Analyses: The datasets are not explicitly linked, and in Table 4 the number of features (num and cat) appears to differ from the cited paper where these datasets were originally used, as well as from other related studies (TabDDPM). To improve reproducibility, I suggest adding the specific URL or reference for each dataset, at least in the appendix.
Additionally, the authors mention that all models were tested on a CPU instance (with 8 cores). Given the computational demands of deep learning models like TabDDPM, I would like to ask about the number of training epochs used. If the models were not trained sufficiently, it could explain potential performance limitations. Supplementary Material: I reviewed some results and additional details about the experiments. Relation To Broader Scientific Literature: The paper addresses an important topic: synthetic tabular data generation. While it differs from existing deep learning and machine learning models, it builds on and extends an interesting line of research; particularly relevant are the work of Chaibub Neto, 2024 (SJPPDS), and the work of Domingo-Ferrer et al. (2025). Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: - The paper addresses an important topic, proposing a new non-parametric, lightweight method. - The method has very good performance in terms of utility and realism. - The method could be a good alternative to more expensive, classic DL/ML approaches. Weaknesses: - The novelty of the proposed approach compared to existing work should be better explained. In particular, compared to Chaibub Neto (2024) and Domingo-Ferrer et al. (2025), the main contribution appears to be a lighter approach to fitting marginal distributions and generating synthetic samples. - I have some concerns about the evaluation. The method has very good performance in utility and realism, but privacy may be a problem. Also, the real and synthetic marginal distributions seem really close. Some benchmarks are missing (e.g., SMOTE, CTABGAN+), and some details are omitted. See other comments and questions. Other Comments Or Suggestions: - Acronyms in Section 2 should perhaps be capitalized, e.g., *DDPM* instead of *ddpm*. - The datasets used in the paper (Table 4) seem to have a different number of columns compared to the original paper and TabDDPM.
For example, the columns *#num* and *#cat* list the numbers of numerical and categorical features. But House_16H appears to have 17 features instead of the 16 features in the cited Hansen et al. (2023). - To improve reproducibility, I suggest providing a URL where the exact datasets can be downloaded, and also adding details on training epochs for the benchmarks. - In large datasets with many columns and values, category granularity can be an issue: some categories may be too broad or too fine-grained, leading to sparsity or excessive heterogeneity. How should this be handled, and how should the correct hyper-parameters be selected? - Theorem 4.1 is cited as Theorem 1. Questions For Authors: Q1: In the examples of Table 2 and Table 3, does the method require the same category to have different rankings? Is it possible to have the same ranking for category A, and what is the impact of doing that? Could different ranks "break" the joint distributions in the synthetic data? Q2: In Algorithm 3: while the distribution is expected to converge as *m* → ∞, in practice, will the input and real distributions actually match for, e.g., bimodal distributions? How should the correct value for m be chosen? For example, consider a bimodal distribution with values clustering around 0 and 10: when we sample uniform(0, 10), can we get values not in the real/training set? Q3: The authors mention that all models were tested on a CPU instance (with 8 cores). Given the computational demands of deep learning models like TabDDPM, I would like to ask about the number of training epochs used. If the models were not trained sufficiently, it could explain potential performance limitations. How do the authors handle this possible limitation? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the very thoughtful comments and questions. >Q1. In Tables 2 and 3, does the method require the same category to have different rankings? Yes, it requires different ranks for each category. The problem is that if you assign the same rank value to each category, the restricted permutation generates an identical copy of the real data (except for the order of the rows). The linked table https://drive.google.com/file/d/12OXRKwkT06lokekRKe42VOjANX0RJytV/view?usp=sharing shows an illustrative toy example, where the original categorical data is recoded as $X_1=[C \equiv 1, D \equiv 2]$ and $X_2=[A \equiv 1, B \equiv 2]$, and perm. 1 to 4 represent restricted permutations of $X_1$ relative to $X_2$ (where we shuffle the values of $X_1$ within each level of $X_2$). Clearly, this approach cannot be used for datasets containing only categorical variables. Having different ranks for the same category is necessary for generating distinct datasets, while still preserving the joint distribution to a good extent. (Also, it helps unify the treatment of categorical and numeric variables in mixed datasets.) >Q2. In Algorithm 3, will the input and real distributions match for bimodal distributions? How to choose the value for m? Yes, the synthetic and real distributions will match if the bimodal distribution has continuous support (e.g., bottom left panel of Fig. 29). However, for a discontinuous bimodal distribution—where values cluster near 0 and 10 with no training examples in between—the method may generate a small fraction of intermediate values outside the training range. Thanks for noting this edge case; we will mention it in the Limitations section and indicate that some post-processing might be necessary in this case. For the choice of m, note that m = floor(n * p), where n is the number of samples and 1/n <= p <= 0.5. Hence, m can vary from 1 to floor(n/2).
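To make this concrete, a simplified order-statistics sampler can be sketched as follows (a hypothetical simplification for illustration only: each synthetic value is drawn uniformly between two consecutive order statistics of a random sub-sample of size m; the actual IOSSampler differs in details):

```python
import numpy as np

rng = np.random.default_rng(7)

def ios_sample(x, p=0.5, size=1000):
    """Illustrative order-statistics sampler: take an ordered sub-sample of
    m = floor(n * p) values, then draw each synthetic value uniformly
    between two consecutive order statistics of that sub-sample."""
    n = len(x)
    m = max(2, int(np.floor(n * p)))
    sub = np.sort(rng.choice(x, size=m, replace=False))
    lo = rng.integers(0, m - 1, size=size)        # pick an interval
    u = rng.random(size)
    return sub[lo] + u * (sub[lo + 1] - sub[lo])  # uniform within it

x = rng.gamma(shape=2.0, scale=1.0, size=5000)    # skewed real marginal
synth = ios_sample(x, p=0.5, size=5000)

# Synthetic values stay inside the observed range, and for large m the
# synthetic marginal roughly matches the real one.
print(synth.min() >= x.min(), synth.max() <= x.max())  # True True
print(round(abs(np.median(synth) - np.median(x)), 2))
```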
As described in page 6 (lines 292 to 300 in the 2nd column), we found that lower values of p can lead to considerable decreases in data fidelity without comparable increases in privacy. Hence, in practice, we recommend taking m = floor(n/2). >Q3. Models were tested on a CPU instance. Given the computational demands of deep models like TabDDPM, how many training epochs were used? The number of training epochs was one of the parameters evaluated during hyperparameter optimization (or taken from other works) using the hyperparameter search spaces defined in Synthcity. We will add tables of hyperparameter values to the final version of the paper. For TabDDPM, the number of epochs ranged from 1051 to 8300, and the experiments ran for a long time on the CPU instance. >Benchmarks could include SMOTE and CTABGAN+ We restricted our comparisons to models available in Synthcity. Since Kotelnikov et al (2023) performed extensive comparisons of these weaker baselines against TabDDPM, and we performed comparisons against TabDDPM, we feel these additional comparisons might not be essential. But, please, let us know if you disagree (as we might be able to implement SMOTE and report results over the discussion period). >In Table 4 the # of features appears to differ from the cited paper Table 4 reports the # of variables (i.e., # of features plus the target), while Table 4 in Hansen et al 2023 reports only the # of features. (Also, for the California housing data we used the sklearn version rather than Hansen’s.) We will clarify these points and add specific URLs/references for each dataset in the appendix. >Novelty of the proposed approach should be better explained Compared to Chaibub Neto (2024), we extend SJPPDS to categorical data while also generating synthetic marginal distributions. This improves practicality: first, handling categorical variables makes our approach more widely applicable (as tabular data is often mixed).
Second, even for purely numeric data, our method offers better protection against trivial membership inference attacks (which SJPPDS cannot prevent) when datasets contain unusual attribute values which can be easily spotted by an attacker. Our approach differs from Domingo-Ferrer et al (2025) in 3 key ways. First, we generate marginals via order statistics interpolation, whereas they require the user to choose predefined parametric distributions (e.g., Gaussian, Gamma). Second, we extend SJPPDS to categorical variables, while they employ a more complex ontology-based semantic ranking [1] (which is also more involved in practice). Third, we use SJPPDS for rank data shuffling, whereas they rely on different algorithms. [1] Domingo-Ferrer et al (2013). Information Sciences 242:35-48. >In large datasets, category granularity can be an issue. How to select the correct hyper-parameters? For deep learning and ML baselines, we optimized hyperparameters using Optuna within Synthcity's defined search spaces or adopted values from Hansen et al (2023), which were also optimized with Optuna in Synthcity. Please, let us know if you have any further questions.
Point Cloud Dataset Distillation
Accept (poster)
Summary: This paper studies dataset distillation for point clouds. The paper claims to be the first study on dataset distillation for point clouds, targeting two challenges: diverse orientations and resolutions in 3D space. To overcome these issues, this paper 1) proposes a plug-and-play point cloud rotator to align the point cloud to a canonical orientation; 2) devises a point-wise generator to produce point clouds at various resolutions based on the sampled noise amount. The experimental results demonstrate the proposed method achieves higher scores than previous methods. ### Update after rebuttal The authors addressed my concerns. After reading other reviews, I keep my initial rating "weak accept". Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: did not check carefully. Experimental Designs Or Analyses: no issues. Supplementary Material: Did not review the submitted code. Relation To Broader Scientific Literature: Dataset distillation for point clouds has not yet been studied in the community. This paper, as the first attempt, might be helpful for the community. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: This paper makes the first attempt at point cloud dataset distillation, which might provide some insights for the community. The proposed method seems to be efficient in comparison with previous methods. Weaknesses: === I am not familiar with this task. === - During the dataset distillation process, the task model is also trained on the original dataset. Therefore, how does DD significantly reduce the computational cost of training neural networks from scratch (LINE09, PAGE01)? - After the distillation process, do we need to further train another task model on the distilled dataset? - For part segmentation (L338), "each shape is divided into multiple parts. For example, an airplane can be divided into its fuselage, wings, engines, and tail." For other objects, how are the parts divided?
Does the division strategy impact the performance, and how? - The performance is low, which limits practical application. For example, for ModelNet40, the current SOTA has achieved over 94% accuracy, while the highest score in this paper is 83%. For part segmentation, the performance lags far behind. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your constructive advice and the opportunity to address your concerns. &nbsp; ### ``Other Strengths And Weaknesses`` > **Q1: The task model is also trained on the original dataset. How does DD significantly reduce the computational cost?** A1: We would like to clarify that the task model is not trained on the original datasets. Generally, gradient matching employs a bi-level optimization framework:

- Inner loop: Optimize the task model using **synthetic data**.
- Outer loop: Fix the task model and match the gradients between a batch of real and synthetic data.

After distillation, the synthetic data contains the gradient information of the real data, thereby allowing models to converge with fewer iterations, i.e., batches. > **Q2: Do we need to further train another task model on the distilled dataset?** A2: Yes, another task model needs to be trained from scratch. Although the task model is optimized in the inner loop, its performance is far from ideal, as the synthetic data is not fully trained. One benefit is that well-trained synthetic data can be used to train different task models. For example, we can use PointNet during distillation and train more complex models, such as DGCNN or Point Transformer, after distillation. Cross-architecture performance is reported in ``Table 3`` of the main text. > **Q3: For other objects, how to divide the parts? Does the division strategy impact the performance, and how?** A3: The parts of objects are defined by the labels of the real datasets. For example, the ShapeNetCore dataset contains 16 object classes and 50 part classes in total. There is a segmentation dictionary that stores the correspondence between objects and parts, like {'Airplane': [0, 1, 2, 3], 'Bag': [4, 5], ...}. In summary, **the division strategy is fixed and depends on the real data**, so it does not differentially impact the performance of DD3D. > **Q4: Performance of DD3D** A4: We understand your concerns about the performance of DD.
We propose two strategies to mitigate the performance gap between models trained on real and synthetic data.

1. Increasing the number of CPC. We add the performance of DD3D with CPC=100 on ModelNet40 and ScanObject, and find that it can achieve comparable performance with the full dataset.

| ModelNet40 | CPC=50 | CPC=100 | | ScanObject | CPC=50 | CPC=100 |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| GM | 81.74 | 84.17 | | GM | 57.52 | 62.82 |
| DD3D | 83.91 | **86.68** | | DD3D | 61.96 | **65.51** |
| Full | 88.05 | 88.05 | | Full | 66.96 | 66.96 |

2. Combining with knowledge distillation (KD). Recent DD methods suggest that leveraging KD can significantly improve the performance of DD. We can also adopt this strategy to achieve nearly lossless performance.

| ModelNet40 | CPC=50 | CPC=100 | | ScanObject | CPC=50 | CPC=100 |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| DD3D | 83.91 | 86.68 | | DD3D | 61.96 | 65.51 |
| DD3D+KD | 86.55 | **88.75** | | DD3D+KD | 65.06 | **66.84** |
| Full | 88.05 | 88.05 | | Full | 66.96 | 66.96 |

> **Q5: Comparison with state-of-the-art point cloud analysis models.** A5: The performance of DD3D is behind the SOTA point cloud analysis models, as we only adopt the vanilla 3-layer PointNet as the distillation network for DD3D, which makes a good trade-off between effectiveness and efficiency. Notably, in the image distillation task, DD methods often use a 3-layer ConvNet as the distillation network [1]. We report the performance of vision DD (Tiny-ImageNet) and point cloud DD (ModelNet40) below.

| Tiny-ImageNet | 3-layer ConvNet | ResNet-18 |
| :--: | :--: | :--: |
| IPC=50 | 37.6% | 59.8% |

| ModelNet40 | 3-layer PointNet | PointNext |
| :--: | :--: | :--: |
| CPC=50 | 88.1% | 94.00% |

We can observe that the performance of the 3-layer ConvNet is far from the ResNet-18 model. But it still promotes the development of vision DD.
On the other hand, the performance of the 3-layer PointNet is close to the SOTA models, like PointNext. We believe that our research will also benefit the 3D DD field, and future works will extend DD to the SOTA point cloud analysis models. [1] Dataset Condensation with Gradient Matching. ICLR 2021.
Summary: In this paper, the authors present DD3D, a novel framework for 3D point cloud distillation that aligns the rotation-invariant data distribution between real and synthetic data by transforming point clouds into a canonical orientation. Once trained, DD3D is capable of synthesizing point clouds at arbitrary resolutions, thereby significantly reducing memory consumption and enhancing scalability. Through extensive experiments on both shape classification and part segmentation tasks, the proposed method achieves superior performance compared to conventional dataset distillation methods. Claims And Evidence: The proposed method opens up new opportunities for efficient training of point cloud models and broadens the applicability of dataset distillation techniques to unstructured 3D data, whereas existing distillation methods have primarily focused on structured data such as images, videos, and text. Methods And Evaluation Criteria: One of the core ideas is that rotation invariance positively contributes to the effectiveness of distillation. In this context, I am curious about the authors’ perspective on the potential impact of incorporating more recent rotation-invariant or rotation-equivariant models, and how such integration might influence the performance or generalizability of the proposed approach. While some performance drop compared to using the full dataset is expected in dataset distillation, it remains a notable limitation for high-accuracy applications. Despite the improvements over existing methods, concerns remain about potential degradation when scaling to larger or more complex tasks. Theoretical Claims: The paper provides solid mathematical justifications that effectively support the proposed approach and offer clear insights into why the techniques work as intended.
Experimental Designs Or Analyses: The authors conduct comprehensive evaluations of DD3D across diverse datasets and scenarios, demonstrating that it consistently outperforms traditional distillation methods. To reinforce the authors’ argument, it would be beneficial to compare the performance and feature characteristics of models trained on both aligned and unaligned orientations. Such a comparison would provide clearer insights into the effect of rotation on feature learning and model robustness. Supplementary Material: I reviewed the attached code to verify whether the implementation aligns well with the contribution in the paper. Relation To Broader Scientific Literature: The authors present a theoretical analysis showing that matching rotation-invariant features is essential for effective 3D point cloud distillation. They demonstrate that random rotations weaken the principal components of real data, leading to degraded distillation performance, which strongly justifies the use of the proposed rotator. Essential References Not Discussed: Not applicable. Other Strengths And Weaknesses: I have included all comments, with reasons and suggestions, in the corresponding sections above, and I recommend that the authors address these concerns in the rebuttal. Other Comments Or Suggestions: The authors should revise several typos and grammatical errors in the paper. (not critical) Questions For Authors: Rather than posing additional questions, I would like the authors to carefully review and address the concerns I have raised above in detail. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the detailed and helpful comments. We reply to the comments in detail. Hope to address your concerns. &nbsp; ### ``Methods And Evaluation Criteria`` > **Q1: Potential impact of incorporating more recent rotation-invariant or rotation-equivariant models.** A1: Thanks for the great advice. We have tried several rotation-invariant or equivariant models for DD. However, there are two issues. 1. The architectures of advanced rotation-invariant models are quite different, making it hard to transfer to rotation-variant models. 2. These models often involve complex components. A recent survey on DD [1] shows that more complex models do not necessarily lead to better performance. See Q2 for a performance comparison. On the other hand, the rotator of DD3D can alleviate the above two issues as it is lightweight and **model-agnostic**. [1] Dataset distillation: A comprehensive review. TPAMI 2024. > **Q2: How such integration might influence the performance or generalizability of the proposed approach?** A2: We compare DD3D with some SOTA rotation-invariant methods, including RISurConv, TetraSphere (suggested by Reviewer vcTR), and SGMNet (suggested by Reviewer gVCR). The results are shown below.

| ModelNet40 | CPC=10 | CPC=50 | Full |
| :-- | :--: | :--: | :--: |
| DD3D | 58.14 | **71.27** | 80.45 |
| RISurConv | 55.81 | OOM | **95.60** |
| TetraSphere | 54.72 | OOM | 90.50 |
| SGMNet | **58.33** | 70.48 | 80.77 |

We have the following observations: 1. Both RISurConv and TetraSphere perform well in full-dataset training. However, they are not suitable for the DD task, as they both need to build a k-NN graph to learn rotation-invariant representations. During distillation, the synthetic point clouds are dynamically optimized, resulting in different nearest neighbors. As a result, we need to **re-calculate the k-NN graph** in each iteration, which significantly increases the time and space overhead. 2. On the other hand, the rotator of DD3D and SGMNet are model-agnostic, so they can easily be combined with different point cloud models. Here we use the 3-layer PointNet as the backbone, and their results are better than RISurConv and TetraSphere. This indicates that complex models do not always lead to better distillation performance. In contrast, simple models may be more suitable for DD. > **Q3: Performance drop compared to using the full dataset and limitation for high-accuracy applications.** A3: We fully understand your concerns about the accuracy. For applications requiring high accuracy, we recommend two ways to improve the accuracy of DD3D. 1. Increasing the number of CPC. We report the performance of DD3D with CPC=100 on ModelNet40 and ScanObject, and find that it can achieve comparable performance with the full dataset.

| ModelNet40 | CPC=50 | CPC=100 | | ScanObject | CPC=50 | CPC=100 |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| GM | 81.74 | 84.17 | | GM | 57.52 | 62.82 |
| DD3D | 83.91 | **86.68** | | DD3D | 61.96 | **65.51** |
| Full | 88.05 | 88.05 | | Full | 66.96 | 66.96 |

2. Combining with knowledge distillation (KD). Recent DD methods suggest that leveraging KD can significantly improve the performance of DD. We can also adopt this strategy to achieve nearly lossless performance.

| ModelNet40 | CPC=50 | CPC=100 | | ScanObject | CPC=50 | CPC=100 |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| DD3D | 83.91 | 86.68 | | DD3D | 61.96 | 65.51 |
| DD3D+KD | 86.55 | **88.75** | | DD3D+KD | 65.06 | **66.84** |
| Full | 88.05 | 88.05 | | Full | 66.96 | 66.96 |

> **Q4: Potential degradation when scaling to larger or more complex tasks.** A4: We conduct semantic segmentation experiments on the scene-level S3DIS dataset, which contains more than 80M points for training. Generally, the scene-level task is more challenging than the object-level task, as it includes more data and noise.
We can observe that DD3D consistently outperforms GM by a large margin, demonstrating its effectiveness in large-scale scene-level tasks.

| S3DIS (OA / mIoU) | GM | DD3D | Full |
| :-- | :--: | :--: | :--: |
| CPC=1 (0.06%) | 53.09% / 0.3674 | **57.45% / 0.4043** | 73.78% / 0.5786 |
| CPC=10 (0.6%) | 61.72% / 0.4438 | **64.05% / 0.4624** | 73.78% / 0.5786 |

&nbsp; ### ``Experimental Designs Or Analyses`` > **Q5: Compare the performance and feature characteristics of models trained on both aligned and unaligned orientations.** A5: ``Table 4`` in the main text shows the performance of models trained on both aligned and unaligned orientations. We can see that the performance of GM and DM drops quickly without the help of the rotator of DD3D, demonstrating the effectiveness of DD3D in distilling 3D datasets. &nbsp; ### ``Other Comments Or Suggestions`` > **Q6: The authors should revise several typos and grammatical errors in the paper.** A6: We will carefully polish our paper and revise typos.
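The canonical-orientation idea behind the rotator can be illustrated with a classical PCA alignment (a numpy sketch for intuition only — DD3D's rotator is a learned, plug-and-play module, not this fixed procedure):

```python
import numpy as np

rng = np.random.default_rng(3)

def canonicalize(points):
    """Rotate a point cloud so its principal axes align with the coordinate
    axes (a classical, non-learned stand-in for a canonical-orientation
    rotator)."""
    centered = points - points.mean(axis=0)
    # Eigenvectors of the covariance give the principal directions.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    return centered @ vecs            # express points in that basis

def random_orthogonal():
    """A random 3D orthogonal transform (rotation, possibly a reflection)."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.diag(r))    # fix column signs for uniformity

cloud = rng.normal(size=(1024, 3)) * np.array([3.0, 1.0, 0.3])  # anisotropic
rotated = cloud @ random_orthogonal().T

a, b = canonicalize(cloud), canonicalize(rotated)
# Up to axis sign flips, both orientations map to the same canonical frame,
# so their (diagonal) covariances coincide.
print(np.allclose(np.abs(np.cov(a.T)), np.abs(np.cov(b.T)), atol=1e-6))  # True
```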
Summary: The paper introduces DD3D, a dataset distillation method tailored for 3D point clouds, addressing challenges of orientation diversity and varying resolutions. It proposes a rotation-invariant feature matching approach, a point cloud rotator for canonical alignment, and a point-wise generator that efficiently produces multi-resolution point clouds. Extensive experiments demonstrate DD3D’s effectiveness in shape classification and part segmentation, showing strong cross-architecture and cross-resolution generalization while reducing memory usage. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1, The paper is well-written and easy to understand. 2, It is the first work to introduce dataset distillation for 3D point clouds. 3, The rotation-equivariant design for point cloud distillation is intuitive and well-motivated. Weaknesses: 1, While I am not an expert in this area, I am familiar with other data-efficient methods such as knowledge distillation and semi-supervised learning, and dataset distillation appears to demonstrate promising results in comparison. 2, However, the performance improvements over other methods not specifically designed for 3D tasks seem relatively modest, raising concerns about the method’s overall impact. Other Comments Or Suggestions: It seems that the link in the paper is not anonymous. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and suggestions.

&nbsp;

### ``Other Strengths And Weaknesses``

> **Q1: Comparison between dataset distillation, knowledge distillation, and semi-supervised learning.**

A1: Dataset distillation (DD) and knowledge distillation (KD) are two orthogonal directions in efficient deep learning. DD aims to compress data, and KD is used to compress models. A recent DD method [1] suggests that incorporating KD into DD can alleviate the performance gap between the real and synthetic data. To verify this, we report the performance of **DD3D** and **DD3D+KD** on the ModelNet40 and ScanObject datasets below.

| ModelNet40 | CPC=50 | CPC=100 | | ScanObject | CPC=50 | CPC=100 |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| DD3D | 83.91 | 86.68 | | DD3D | 61.96 | 65.51 |
| DD3D+KD | 86.55 | **88.75** | | DD3D+KD | 65.06 | **66.84** |
| Full | 88.05 | 88.05 | | Full | 66.96 | 66.96 |

We can see that with the help of KD, the performance of synthetic datasets is close to that of the real datasets, indicating the potential of combining DD and KD. Moreover, semi-supervised learning is also a potential application of DD. Leveraging the knowledge of a few labeled samples to compress the massive unlabeled data is also a promising direction.

[1] Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective. NeurIPS 2023.

> **Q2: Performance improvements over other methods not specifically designed for 3D tasks seem relatively modest.**

A2: The advantages of DD3D over other methods not specifically designed for 3D tasks, e.g., GM, DM, and TM, concentrate on addressing the rotation and resolution issues.

1. For the randomly rotated datasets, DD3D outperforms other methods by a large margin. We report the results in Table 4 of the main text below. We can see that the performance of GM and DM drops quickly without the help of the rotator of DD3D, demonstrating the effectiveness of DD3D in distilling 3D datasets.
| ModelNet40 | Random | GM | DM | DD3D |
| :-- | :--: | :--: | :--: | :--: |
| PointNet | 14.75 | 9.47 | 10.16 | 17.91 |
| PointNet + PCA | 60.77 | 53.55 | 55.57 | 62.72 |
| PointNet + Rotator | 70.13 | 68.92 | 69.31 | **71.27** |

2. For the large-scale datasets, DD3D enables “low-resolution training and high-resolution generation”, which significantly reduces the space overhead during distillation. We report the space overhead of GM and DD3D below, from which we can observe that the memory cost of GM is 10x higher than DD3D, but its performance is still not as good as DD3D. Therefore, DD3D can handle large-scale datasets and larger CPC.

| CPC (1/5/10) | Memory (MB) | Performance |
| :-- | :--: | :--: |
| GM | 120 / 1200 / 6000 | 53.38 / 72.11 / 75.45 |
| DD3D | 140 / 230 / **630** | 53.82 / 73.54 / **76.31** |

&nbsp;

### ``Other Comments Or Suggestions``

> **Q3: It seems that the link in the paper is not anonymous.**

A3: We kindly clarify that this link is not related to our paper (our code is in the Supplementary Material) but to a related work on point cloud DD. **This link does not violate the double-blind reviewing policy.**
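For additional intuition on the gradient-matching (GM) objective discussed above, here is a hypothetical 1-D toy (illustrative names and constants, not our implementation): matching the model-loss gradients between real and synthetic data drives the synthetic second moment toward the real one, consistent with the variance-preservation analysis in the paper.

```python
import numpy as np

# Hypothetical 1-D gradient-matching toy: distill 200 real samples into 4
# synthetic ones by matching gradients of a linear model's squared loss.
rng = np.random.default_rng(0)
x_real = rng.normal(2.0, 1.0, size=200)
x_syn = rng.normal(size=4)
w = 0.5  # fixed model weight; per-sample loss is 0.5 * (w * x - x) ** 2

def grad_w(x):
    # Mean gradient of the loss with respect to w over a batch x.
    return np.mean((w * x - x) * x)

lr = 0.02
for _ in range(1000):
    diff = grad_w(x_syn) - grad_w(x_real)
    # Descend the matching loss diff**2 with respect to the synthetic samples.
    x_syn -= lr * 2.0 * diff * (w - 1.0) * 2.0 * x_syn / len(x_syn)

# In this linear case, matching gradients amounts to matching second moments.
assert abs(np.mean(x_syn**2) - np.mean(x_real**2)) < 0.05
```

Here the matching loss is minimized exactly when the synthetic second moment equals the real one, which is the 1-D analogue of the variance-preservation view.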
Summary: The paper addresses the problem of distilling large 3D point cloud datasets into a much smaller set while preserving model performance. The paper identifies two key challenges unique to point clouds: random orientation and variable resolution. To tackle these issues, the authors propose a novel framework called DD3D, which introduces a plug-and-play point cloud rotator that aligns each point cloud to a canonical orientation, making the distilled data rotation-invariant. It also uses a point-wise generator to produce point clouds from noise, allowing flexible output sizes (arbitrary resolutions) while training on lower resolutions. The rotator and generator are optimized jointly using a gradient-matching distillation objective so that models trained on the small synthetic set achieve performance close to ones trained on the full set. The paper demonstrates DD3D on 3D shape classification and part segmentation, and shows competitive performance through this dataset distillation process. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods make sense for the problem. Theoretical Claims: I have checked the theoretical claims in the main paper and saw no obvious errors. Experimental Designs Or Analyses: The experimental designs are valid. Supplementary Material: Yes, in its entirety. Relation To Broader Scientific Literature: The paper could be significant for efficient 3D network training and provide insights in this field. Essential References Not Discussed: The references are adequate. Other Strengths And Weaknesses: Strengths - This work is the first to focus on dataset distillation for 3D point clouds, an extension of data distillation methods beyond images. It clearly pinpoints unique challenges in 3D that were not handled by prior 2D distillation methods.
The problem is well-motivated by practical needs (reducing memory/training costs on huge 3D datasets). - The paper provides a theoretical analysis to justify the approach. While intuitive, the paper proves (under simplifying assumptions) that the gradient-matching distillation objective is equivalent to preserving the data variance, and that random rotations can weaken the principal components (variance) of the data, harming distillation. This analysis highlights why matching rotation-invariant features is important. - The paper backs up its claims with comprehensive experiments on multiple datasets and tasks, including 3D point cloud classification and segmentation. Notably, the authors tested performance on cross-architecture generalization, and show general applicability of the distillation scheme. Minor Weaknesses - Since some point cloud models are inherently rotation-invariant (e.g. [1, 2], and there should be many more recent methods), comparing DD3D to a scenario where a rotation-invariant model is used (instead of adding an external rotator) could help delineate the benefits of the proposed approach. Any discussion in this direction is welcomed. - The framework adds extra components (the rotator and generator) to the distillation pipeline, which increases the method’s complexity. The paper would benefit from more details on these components. For instance, the authors could elaborate on the point-wise generator’s architecture and training process, e.g., do they simply adjust the amount of input noise, how is the generator supervised across different point counts, etc. Likewise, the rotator module is a key piece of the pipeline; a deeper understanding of its training procedure or potential failure cases (e.g., ambiguous alignments for symmetric shapes) would be useful. Without these details, it’s hard to assess the overall robustness under different input settings. [1] Li, Xianzhi, et al. "A rotation-invariant framework for deep point cloud analysis." 
IEEE transactions on visualization and computer graphics 28.12 (2021): 4503-4514. [2] Xu, Jianyun, et al. "Sgmnet: Learning rotation-invariant point cloud representations via sorted gram matrix." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. Other Comments Or Suggestions: See Other Strengths And Weaknesses. Questions For Authors: See Other Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful feedback and insightful questions.

&nbsp;

### ``Other Strengths And Weaknesses``

> **Q1: Comparing DD3D to a scenario where a rotation-invariant model is used could help delineate the benefits of the proposed approach. Any discussion in this direction is welcomed.**

A1: As suggested, we compare the performance of DD3D, SGMNet [1], and Li et al. [2] on the randomly rotated ModelNet40 dataset. Specifically, DD3D and SGMNet are model-agnostic methods, and we use our 3-layer PointNet as the backbone. The results are shown below.

| ModelNet40 | CPC=10 | CPC=50 | Full |
| :-- | :--: | :--: | :--: |
| DD3D | 58.14 | **71.27** | 80.45 |
| SGMNet | **58.33** | 70.48 | 80.77 |
| Li et al. | 43.82 | OOM | **89.4** |

We have the following observations:

1. DD3D has a similar performance to SGMNet, demonstrating the effectiveness of the proposed rotator. Generally, DD3D leverages the fact that eigenvectors are rotation-equivariant to alleviate the influence of random rotations, i.e., $(PR)(R^\top U)=PU$. SGMNet uses the outer product to eliminate rotations, i.e., $(PR)(PR)^\top=PP^\top$. Compared to SGMNet, DD3D only needs to maintain a $3 \times 3$ matrix $U$ rather than an $N \times N$ matrix $XX^\top$, which is more efficient.

2. While Li et al.'s work has the highest accuracy on the full dataset, it does not perform well in the distillation task. This observation is consistent with a recent work in DD [3] showing that networks with complex components may affect the performance of DD.

3. Li et al.'s work runs out of memory in the CPC=50 setting, due to the quadratic complexity of farthest point sampling. Each time the synthetic point cloud is optimized, the farthest point sampling needs to be recalculated, which significantly increases the time and memory overhead of the distillation process.

Based on the above observations, we have the following discussion:

1. **Distillation network is important**.
The SOTA point cloud models often need to build a k-NN graph or apply farthest point sampling on the point clouds, which is not suitable for DD as it costs too much time and space.

2. **Model-agnostic vs. Model-specific**. Li et al.'s work is model-specific and achieves the highest accuracy. However, it is infeasible to transfer it to some simple models, like PointNet. On the other hand, SGMNet is model-agnostic and can be used for different models, which is more suitable for DD.

> **Q2: The framework adds extra components (the rotator and generator) to the distillation pipeline, which increases the method’s complexity.**

A2: Compared to the gradient matching process, the rotator and generator only slightly increase the computational complexity. To verify this conclusion, we use `line_profiler` to record the time used in the rotator, generator, and gradient matching, respectively, and report their percentages. The results below indicate that the rotator and generator only slightly impact the computational overhead:

| CPC | Rotator (%) | Generation (%) | Matching (%) |
| :---: | :---: | :---: | :---: |
| 1 | 0.5 | 3.9 | 95.6 |
| 10 | 0.3 | 2.7 | 97.0 |
| 50 | 0.1 | 1.0 | 98.9 |

> **Q3: The authors could elaborate on the point-wise generator’s architecture and training process.**

A3: Thanks for the great advice. We will add more description of the rotator and generator of DD3D in the revision. We briefly introduce the training process of the generator below.

- The point-wise generator $\mathbb{R} \rightarrow \mathbb{R}^3$ aims to map each sampled noise value to a point. Therefore, we can adjust the amount of input noise to control the resolution of synthetic point clouds.
- During training, we sample fewer points from the real data and calculate their gradient as supervision to update the generator. During inference, we can sample more noise to ensure geometric details.
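For illustration, the resolution-control mechanism above can be sketched in a few lines. The network below is a hypothetical random-weight stand-in, not our actual generator:

```python
import numpy as np

# Hypothetical sketch of a point-wise generator R -> R^3: each scalar noise
# sample is mapped to one 3-D point, so the output resolution is controlled
# simply by how much noise we draw. Weights here are random stand-ins.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 3)), np.zeros(3)

def generate(n_points):
    z = rng.normal(size=(n_points, 1))   # one noise scalar per point
    h = np.maximum(z @ W1 + b1, 0.0)     # ReLU hidden layer
    return h @ W2 + b2                   # one 3-D point per noise sample

low_res = generate(128)     # train-time: few points, cheap gradients
high_res = generate(4096)   # inference: denser cloud from the same weights
assert low_res.shape == (128, 3) and high_res.shape == (4096, 3)
```

The same weights serve both resolutions, which is what enables "low-resolution training and high-resolution generation".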
> **Q4: A deeper understanding of its training procedure or potential failure cases (e.g., ambiguous alignments for symmetric shapes) would be useful.**

A4: We will add more visualizations in the revision to illustrate how the rotations affect the training process of synthetic datasets.
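As a supplementary illustration (not part of the DD3D codebase), the two rotation-handling identities in our response to Q1, $(PR)(R^\top U)=PU$ and $(PR)(PR)^\top=PP^\top$, can be checked numerically. The sign-fixing convention below is a simplified stand-in for our rotator and can indeed fail for perfectly symmetric shapes:

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.exponential(size=(100, 3)) * [3.0, 2.0, 1.0]  # anisotropic, skewed cloud
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))          # random orthogonal matrix
R *= np.sign(np.linalg.det(R))                        # force a proper rotation

def canonical_align(X):
    # Eigenvectors of the (uncentered) covariance are rotation-equivariant.
    _, U = np.linalg.eigh(X.T @ X)
    A = X @ U
    # Resolve the +/- sign ambiguity with a data-intrinsic cue (coordinate
    # skewness); this breaks down for perfectly symmetric shapes.
    return A * np.sign((A ** 3).sum(axis=0))

# Aligned coordinates are unchanged by an arbitrary pre-rotation:
assert np.allclose(canonical_align(P @ R), canonical_align(P))
# Gram-matrix invariance used by SGMNet: (PR)(PR)^T = P P^T.
assert np.allclose((P @ R) @ (P @ R).T, P @ P.T)
```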
Summary: This paper proposes DD3D, the first dataset distillation method designed specifically for 3D point cloud data. DD3D addresses two critical challenges in point cloud distillation: orientation misalignment and varying resolutions. The authors first establish theoretically that an ideal dataset distillation should preserve the intrinsic variance of the dataset and that orientation-misaligned samples introduce undesirable perturbations. To solve this, they introduce a novel plug-and-play rotator to consistently align point clouds to canonical orientations, resolving rotation and sign ambiguities. Additionally, DD3D employs a point-wise generator capable of synthesizing high-resolution point clouds from low-resolution training samples, making the approach flexible and scalable. Experiments conducted on shape classification and segmentation tasks demonstrate that DD3D consistently outperforms existing baselines. The method proves effective, scalable, and memory-efficient, significantly advancing dataset distillation for 3D data. ## update after rebuttal The authors tried to address my concerns, and I encourage the authors to fulfill their commitment by including the promised updates in the final manuscript. My final rating is "Weak Accept" (leaning towards accept). Claims And Evidence: The authors prove mathematically that random rotations weaken the principal components of real data, negatively impacting distillation performance. This strongly motivates the inclusion of a rotation-aware component ("rotator") in their proposed method. Methods And Evaluation Criteria: The authors provide thorough theoretical analyses, formally demonstrating that matching rotation-invariant features is essential for successful 3D point cloud distillation. Theoretical Claims: The paper provides a robust mathematical foundation, clearly explaining why the proposed dataset distillation (DD3D) approach works effectively. 
Experimental Designs Or Analyses: The approach demonstrates relatively limited performance on fine-grained tasks such as shape classification and part segmentation. This highlights potential limitations in its ability to capture detailed geometric features. Furthermore, the proposed method focuses on shape-level tasks and does not easily extend to large-scale scene-level applications. Moreover, while the paper compares DD3D with various distillation methods, its claims could be substantially strengthened by including comprehensive benchmarks against state-of-the-art rotation-invariant methods, as I will mention below. Supplementary Material: Appendix B is incomplete, covering only the Rotator; there is no Generator. Also, Algorithm 2 differs from the provided source code. In addition, the provided source code is incomplete as well, without instructions for reproduction. Relation To Broader Scientific Literature: The paper can promote research on dataset distillation for point clouds, building on 2D image studies. Essential References Not Discussed: There are several references regarding rotation-equivariant and invariant features that are worth discussing: [1] Zhang et al., RISurConv: Rotation Invariant Surface Attention-Augmented Convolutions for 3D Point Cloud Classification and Segmentation, ECCV’24 [2] Hao et al., RIGA: Rotation-Invariant and Globally-Aware Descriptors for Point Cloud Registration, TPAMI’24 [3] Melnyk et al., TetraSphere: A Neural Descriptor for O(3)-Invariant Point Cloud Analysis, CVPR’24. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: It would be good to show experiments with large-scale scenes and also compare with the recent rotation-invariant methods mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your detailed review and the recognition of our contributions.

&nbsp;

### ``Experimental Designs Or Analyses``

> **Q1: The approach demonstrates relatively limited performance on fine-grained tasks.**

A1: We propose two strategies to further improve the performance of DD3D.

1. Increasing the CPC. We report the performance of DD3D with CPC=100, and find that it can achieve comparable performance to the full dataset.

| ModelNet40 | CPC=50 | CPC=100 | | ScanObject | CPC=50 | CPC=100 |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| GM | 81.74 | 84.17 | | GM | 57.52 | 62.82 |
| DD3D | 83.91 | **86.68** | | DD3D | 61.96 | **65.51** |
| Full | 88.05 | 88.05 | | Full | 66.96 | 66.96 |

2. Combining with knowledge distillation (KD). Recent DD methods suggest that leveraging KD can significantly improve the performance of DD. We can also adopt this strategy to approach lossless performance.

| ModelNet40 | CPC=50 | CPC=100 | | ScanObject | CPC=50 | CPC=100 |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| DD3D | 83.91 | 86.68 | | DD3D | 61.96 | 65.51 |
| DD3D+KD | 86.55 | **88.75** | | DD3D+KD | 65.06 | **66.84** |
| Full | 88.05 | 88.05 | | Full | 66.96 | 66.96 |

> **Q2: This highlights potential limitations in its ability to capture detailed geometric features.**

A2: The goal of DD is to make the synthetic data informative, which can make the synthetic samples deviate from the distribution of real data. As a result, the synthetic dataset may have some artifacts and overlook some detailed geometries. **The same results can also be observed in vision dataset distillation.** Notably, ``Figure 4`` (lines 385-395) illustrates that synthetic data generated by GM has much noise and many isolated points, making the data unrealistic. On the other hand, synthetic data generated by DD3D is coherent and captures the global geometric shapes.
> **Q3: The proposed method focuses on shape-level tasks and does not easily extend to large-scale scene-level applications.**

A3: We conduct a semantic segmentation experiment on the S3DIS dataset, which contains more than 80M points in the training data and has more noise than shape-level datasets. We use Area 5 as the test set and the other areas as the training set, and report the overall accuracy (OA) and instance mean IoU of different methods. We can observe that DD3D consistently outperforms GM by a large margin, demonstrating its effectiveness in large-scale scene-level tasks.

| S3DIS (OA / mIoU) | GM | DD3D | Full |
| :-- | :--: | :--: | :--: |
| CPC=1 (0.06%) | 53.09% / 0.3674 | **57.45% / 0.4043** | 73.78% / 0.5786 |
| CPC=10 (0.6%) | 61.72% / 0.4438 | **64.05% / 0.4624** | 73.78% / 0.5786 |

> **Q4: Including comprehensive benchmarks against state-of-the-art rotation-invariant methods.**

A4: We compare DD3D with some SOTA rotation-invariant methods, including RISurConv, TetraSphere, and SGMNet (suggested by Reviewer gVCR). The results are shown below.

| ModelNet40 | CPC=10 | CPC=50 | Full |
| :-- | :--: | :--: | :--: |
| DD3D | 58.14 | **71.27** | 80.45 |
| RISurConv | 55.81 | OOM | **95.60** |
| TetraSphere | 54.72 | OOM | 90.50 |
| SGMNet | **58.33** | 70.48 | 80.77 |

We have the following observations:

1. Both RISurConv and TetraSphere perform well when trained on the full dataset. However, they are not suitable for the DD task, as they both need to build a k-NN graph to learn rotation-invariant representations. During distillation, the synthetic point clouds are dynamically optimized, resulting in different nearest neighbors. As a result, we need to **re-calculate the k-NN graph** in each iteration, which significantly increases the time and space overhead.

2. DD3D and SGMNet are model-agnostic, so they can be easily combined with different point cloud models. Here we use the 3-layer PointNet.
Their results are better than those of RISurConv and TetraSphere, indicating that complex models do not always lead to better distillation performance. In contrast, simple models may be more suitable for DD.

&nbsp;

### ``Supplementary Material``

> **Q5: Appendix B is incomplete, covering only the Rotator; there is no Generator. Also, Algorithm 2 differs from the provided source code. In addition, the provided source code is incomplete as well, without instructions for reproduction.**

A5: We will revise our code to make it more complete and easier to read. Specifically,

- Add a PyTorch-style pseudo-algorithm of the Generator.
- The implementation of Algorithm 2 is rooted in ``CINR.py/InvariantWrapper``. We also provide a simpler version of the rotator, which removes the hidden dimension of SIREN to reduce the model complexity.
- Add a README file to illustrate the reproduction process.

&nbsp;

### ``Essential References Not Discussed``

> **Q6: There are several references regarding rotation-equivariant and invariant features that are worth discussing.**

A6: Thanks for pointing out these important related works. We will cite and discuss them in the revision.

---

Rebuttal Comment 1.1: Comment: Thank you to the authors for your response, which mostly addressed my concerns. Could you please provide the time and space overhead associated with recalculating the k-NN graph in comparison to rotation-invariant methods? From my experience, the current C++/CUDA implementation of k-NN for Python is very fast, so I doubt it significantly impacts the overall pipeline.

---

Reply to Comment 1.1.1: Comment: Thanks for the quick response. To provide a detailed comparison, we use ``line_profiler`` to track the time costs of each line. We set ``outer_loop=20`` and ``inner_loop=10`` and report the time consumption of one iteration for gradient calculation, gradient matching, and backward, which dominate the total time costs (>80%).
| ModelNet40 (CPC=10) | Gradient Time (s) | Matching Time (s) | Backward Time (s) | GPU Memory (MB) |
| :-- | :--: | :--: | :--: | :--: |
| DD3D | 2.21 | 0.94 | 6.28 | 11320 |
| RISurConv | 43.49 | 9.38 | 57.82 | 24130 |
| TetraSphere | 6.39 | 23.76 | 51.65 | 22504 |

Notably, this time consumption is only the result of one iteration, and we may need hundreds of iterations in DD. Therefore, the time and space overhead of RISurConv and TetraSphere is larger than that of DD3D.
ESPFormer: Doubly-Stochastic Attention with Expected Sliced Transport Plans
Accept (poster)
Summary: This paper introduces ESPFormer, a novel Transformer architecture integrating a fast, doubly-stochastic attention mechanism based on Expected Sliced Transport Plans (ESP). By projecting high-dimensional queries/keys into 1D slices using axis-aligned directions (Θ = I), ESPFormer efficiently computes optimal transport plans via differentiable soft sorting, ensuring end-to-end trainability. The framework aggregates sliced plans using an inverse temperature parameter τ to balance attention sparsity and distribution, achieving O(mN²) runtime with full parallelization across m slices—significantly faster than Sinkhorn’s O(SN²) iterative approach. Experiments demonstrate consistent performance gains across diverse tasks: +0.5% accuracy on ModelNet40 (point clouds), +0.2 BLEU on IWSLT translation, and 6% improvement on Cats&Dogs with 1% data. Claims And Evidence: The key claims (doubly-stochastic attention, efficiency gains, performance improvements) are supported by experiments across four benchmarks (ModelNet40, IMDb, IWSLT, Cats&Dogs) with consistent metrics (accuracy, BLEU). However, while axis-aligned slices (Θ=I) are justified by parameter efficiency, the paper lacks ablation studies comparing learned vs. fixed slices, leaving uncertainty about their optimality for non-axis-aligned data. Methods And Evaluation Criteria: The proposed methods (sliced transport, soft sorting, τ-controlled aggregation) are theoretically sound and aligned with ESP’s properties (Liu et al., 2025). Axis-aligned slicing avoids extra parameters but may limit expressiveness for non-axis-separable distributions. Evaluation criteria (accuracy, BLEU, runtime) are standard and relevant. Theoretical Claims: The paper relies on prior work (Liu et al., 2025) for ESP’s theoretical foundation but does not provide new proofs. Assumptions about ESP’s equivalence to Wasserstein distance and τ’s role in sparsity are empirically validated. 
Experimental Designs Or Analyses: Experiments are generally controlled (same architectures across baselines, multiple runs) but have limitations: 1. No analysis of τ’s impact beyond qualitative visualization (Fig. 2) or slice count (m). 2. While the paper compares ESPFormer with Sinkformer and DiffTransformer on IMDb using accuracy, the focus on a single dataset is narrow. Supplementary Material: Yes, I reviewed the supplementary material, specifically: Appendix A (Full Runtime Analysis): Provided detailed wall-clock runtime comparisons between ESPFormer, Sinkformer, and baselines (Figure 5), validating computational efficiency claims. Appendix B (Implementation Details): Described technical details of Sinkhorn’s algorithm and Differential Transformer implementations, aiding reproducibility. Appendix C (Experiment Details): Included hyperparameter tables for all experiments, dataset preprocessing steps, and training schedules. Relation To Broader Scientific Literature: Key Connections: Sinkformer (Sander et al., 2022): Directly builds on this work by replacing Sinkhorn iterations with sliced OT for doubly-stochastic attention. The efficiency claim is framed as an improvement over Sinkformer’s O(SN²) complexity. Sliced OT (Liu et al., 2025): Relies on ESP theory to construct transport plans from 1D slices, adapting it for attention via soft sorting. Soft Sorting (Prillo & Eisenschlos, 2020): Critical for differentiable sliced transport. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. Introduces a temperature-controlled sparsity mechanism (via τ), enabling tunable attention patterns without iterative optimization. 2. Parallelizable O(mN²) complexity addresses a critical bottleneck in OT-based attention, making doubly-stochastic constraints viable for long sequences. Clarity: 3. Well-structured with intuitive visualizations (e.g., Fig. 2: Sinkhorn vs. ESP attention patterns).
4. Appendix details hyperparameters and runtime analysis, aiding reproducibility. Weaknesses: 1. Axis-aligned slicing is presented as a default choice without justifying why it suffices (vs. learned/adaptive slices). Prior work on adaptive slicing (e.g., [Nguyen et al., 2023]) could strengthen motivation. 2. For image classification, the Cats and Dogs and MNIST datasets are relatively simple, and verification on more complex datasets is lacking. 3. In the sentiment analysis experiment, only a single dataset is used. 4. No analysis of ESPFormer’s scaling to ultra-long sequences (e.g., N=10k), a key use case for efficient attention. Other Comments Or Suggestions: no Questions For Authors: 1. Why choose axis-aligned slices (Θ=I) over learned or adaptive slicing (e.g., Nguyen et al., 2023)? Did you test adaptive slicing, and if so, how did it compare in terms of accuracy/efficiency? 2. How do τ (inverse temperature) and m (slice count) affect performance? Are optimal τ values consistent across tasks? 3. What is the performance when ESPFormer handles ultra-long sequences (N > 10k)? Did you test runtime/accuracy on such data (e.g., text/document classification)? 4. Could you supplement experiments on more diverse datasets? I feel that the improvements in your experimental section are not significant, and the single dataset used may introduce chance factors due to limited diversity. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the constructive feedback. Below are our responses.

**1. Axis-aligned slices**

We chose axis-aligned slices because the keys and queries are learned parameters. Thus, any potential optimization of slice orientations is implicitly captured by the query and key matrices, $W_Q$ and $W_K$. This choice enabled us to avoid introducing extra learnable parameters to ensure a fair comparison with baseline methods, such as vanilla attention and Sinkformer. To investigate the effect of the number and type (learnable vs. frozen) of slices, we conducted additional experiments on the Cats vs. Dogs benchmark, varying the number of slices $L$, the inverse temperature parameter $\tau$, and whether the slices were frozen or learnable. The results are presented below.

| **$\tau=0.1$** | $L = 1$ | $L = 8$ | $L = 32$ | $L = 64$ | $L = 128$ |
| ---------------- | ------- | ------- | -------- | -------- | --------- |
| **Learnable** | 74.45% | 78.56% | 79.33% | 78.25% | 76.15% |
| **Frozen** | 66.61% | 72.75% | 78.41% | 79.39% | 79.66% |
| **Axis-Aligned** | — | — | — | 79.47% | — |

| **$\tau=1.0$** | $L = 1$ | $L = 8$ | $L = 32$ | $L = 64$ | $L = 128$ |
| ---------------- | ------- | ------- | -------- | -------- | --------- |
| **Learnable** | 74.45% | 79.07% | 78.10% | 77.64% | 74.34% |
| **Frozen** | 66.61% | 73.09% | 77.89% | 78.88% | 78.50% |
| **Axis-Aligned** | — | — | — | 78.86% | — |

| **$\tau=10$** | $L = 1$ | $L = 8$ | $L = 32$ | $L = 64$ | $L = 128$ |
| ---------------- | ------- | ------- | -------- | -------- | --------- |
| **Learnable** | 74.45% | 79.24% | 78.06% | 77.07% | 74.20% |
| **Frozen** | 66.61% | 73.50% | 76.79% | 77.91% | 78.13% |
| **Axis-Aligned** | — | — | — | 77.78% | — |

As shown in the tables, both learnable and frozen slicers can operate on the key and query projections. We chose a simpler, axis-aligned slicer for interpretability and fair comparison, avoiding extra parameters.
Learnable slicers may help with fewer slices by focusing on informative directions, but their advantage diminishes as the slice count increases. In contrast, frozen slicers avoid added complexity but require many slices to capture distributional structure effectively, especially in high dimensions.

**2. Consistency of optimal $\tau$ values across tasks:**

The optimal inverse temperature values, $\tau$, depend on: 1) the input measures, i.e., the task, and 2) the number of slices. Therefore, it is reasonable to expect that this hyperparameter should be task-dependent. Smaller $\tau$ values promote more diffuse attention, while larger values lead to sharper, more focused (i.e., near one-to-one) attention between tokens. Importantly, as shown in our experimental setup, we perform cross-validation only over a very coarse grid, i.e., $\tau\in\{0.1,1,10,100\}$.

**3. Additional benchmarks:**

We conducted experiments on TweetEval. The table below summarizes the test performance of several models, including ESPFormer, on this benchmark. ESPFormer demonstrates competitive results compared to the baselines, outperforming both Sinkformer and the standard attention mechanism (denoted by "Vanilla"). For a controlled comparison, we replaced the attention module in a standard 6-layer Transformer (as proposed by Vaswani et al., 2017) with alternative components—Sinkformer, DiffAttention, and ESPFormer—while keeping the rest of the architecture unchanged. As illustrated, all variants outperform the vanilla baseline, with ESPFormer and DiffAttention achieving the highest accuracies on this benchmark.

| Model | Vanilla | Sink. | Diff.Att. | ESP. |
| -------- | ------- | ----- | --------- | ---- |
| Accuracy | 71.5 | 72.0 | 72.6 | 72.6 |

Note that this evaluation is limited to a single architecture and run due to the rebuttal timeframe. We plan to extend it in the camera-ready version with plug-and-play experiments across diverse attention modules and pretrained language models.

**4.
ESPFormer on ultra-long sequences (N > 10k)** Due to time and resource constraints, we couldn't evaluate ESPFormer on ultra-long sequences (N > 10k) in this submission. We agree this is important and plan to include full runtime and accuracy benchmarks in future work. Regarding efficiency, by switching from soft-sorting to hard sorting during inference, we can significantly reduce our latency. Once again, we sincerely thank the reviewer for their thoughtful comments and insightful questions, which helped us refine the presentation and analysis of our contributions. **P.S.** Due to the limited rebuttal period, the results presented are based on a single run. We recognize the importance of reporting performance across multiple seeds and will include averaged results in the camera-ready version.
Summary: This paper is about achieving doubly stochastic attention in Transformers through ESP (expected sliced transport plans). The softmax is known to be a notorious bottleneck for expressivity and gradient flow in Transformers, so this is clearly a relevant research area. Moreover, prior art like the Sinkformer relies on iterative approximation procedures to obtain DSMs. The authors ensure differentiability of their ESP approach and show empirically that performance improves across multiple datasets. Moreover, the authors introduce a flexible hyperparameter ($\tau$) which allows controlling the temperature of the attention while ensuring double stochasticity. ## update after rebuttal: I had originally given a score of 4 to this paper due to high methodological novelty, sound experimental design and clear relevance for the field. The score was contingent on the author's response to some raised concerns and questions, which were adequately addressed. I keep my score. Claims And Evidence: The general claims of the paper are well supported by experimental evidence. - The ESPFormer ensures a more balanced distribution (than Softmax) and gives flexibility via the inverse temperature $\tau$. - Performance improvement over the Transformer (and even Sinkformer and DiffFormer) is clear in all cases, even though the hyperparameter sweeps to reach these results are not made explicit, so it is unclear how the authors arrived, e.g., at different values of $\tau$. - ESPFormer can work as a drop-in replacement for models trained with standard attention or DiffAttention and can still boost performance during finetuning. Overall, I think that this is an exceptional and very innovative paper. I still have some concerns that require clarification but I believe that this is a very valuable piece of research that will help shape the future of attention and Transformers.
Methods And Evaluation Criteria: The authors empirically evaluate ESPFormers across a rich set of domains and show convincing results everywhere. So in general, the evaluation criteria are absolutely convincing. However, at least one ablation study on the impact of $\tau$ is necessary for the paper. From the Appendix, I understand that the authors tuned $\tau$ specifically for each dataset, yet they are not transparent about the conducted hyperparameter sweeps, raising some questions on how many attempts the ESPFormer was given to obtain the results finally shown in the paper. Theoretical Claims: (1) Fig. 1 suggests (visually speaking) that the slice-specific DSMs are actually permutation matrices rather than interior points of the Birkhoff polytope. Comparing eq. (8), this seems to originate from the deltas being Diracs (BTW, this is never said explicitly!), yet in Eq. (8) neither the sigmas nor the Us seem to be binary to me. Please clarify! Moreover, if the slices of G are indeed permutation matrices, maybe you want to emphasize the Birkhoff-von Neumann theorem, which states that every DSM (every element in the Birkhoff polytope) can be expressed as a convex combination of permutation matrices. This brings me to another question about empirical and theoretical expressivity. Can you guarantee that every DSM can be reached with your algorithm? The BvN theorem requires non-negative coefficients theta which are more flexible than the simple averaging you do to obtain the ESP attention matrix. Ultimately, may the ESPFormer be complemented with an extra set of L parameters enforced to sum to 1 that allow a weighted average of the slice-specific DSMs in order to obtain full expressivity over the Birkhoff polytope? (2) Whether the runtime of $O(LN \log N)$ for transport plan computation is lower than $O(N^2 \log N)$ depends on the number of slices. Since you avoid adding extra parameters (which I agree is a good choice), you set $L=m$, which is called $d_k$ in standard attention. 
Indeed, typically $d_k \ll N$ in transformers in practice; still, it should be emphasized that we are not talking about a theoretical speedup, just a practical one, due to common design choices when deciding about transformer hyperparameters. Nobody prevents me from setting $d_k > N$, and in that case, IIUC, the runtime complexity would be lower. (3) Section 3.2 -- The dimensionalities require clarification. It seems there is a mixup in the matrix dimensions because they are swapped? I recommend following the notation from the original paper by Vaswani et al., where we start with X of size $N \times d$. Instead you seem to start from X of shape $d \times N$, so you are left-multiplying the inputs with the $W_Q$ matrix to obtain Q and K of shape $m \times N$. Standard attention computes $QK^T$ to get a matrix of shape $N \times N$, whereas in your case you have to compute $K^TQ$ to get an $N \times N$ matrix, which is very unconventional. To ease readability, I suggest updating all notation. For the same reason, in (12) you arrive at $VG$ rather than $GV$. Also, $W_V$ needs to be square according to your definition, whereas in general it does not have this constraint. Please clarify. Experimental Designs Or Analyses: The experimental results are convincing and well designed; just an ablation study and interpretations on the impact of $\tau$ are missing, ideally on more than one dataset. Fig. 2: Effectively, the choice of $\tau$ seems to interpolate between the extremes of $\tau=100$, where the attention matrix is a permutation matrix, and $\tau=0$, where the DSM is almost the identity matrix -- in other words we seem to move between the vertices and the origin of the Birkhoff polytope. Can you confirm (or even prove) this intuition? If yes, it might be worth emphasizing this in the text to ease interpretability of the parameter $\tau$. 
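As a toy numerical check of the polytope picture raised above (an illustrative sketch, not the paper's algorithm): by the Birkhoff-von Neumann theorem, any convex combination of permutation matrices lies in the Birkhoff polytope, so a uniform average over slice-specific permutations is always doubly stochastic.

```python
import numpy as np

# Toy illustration: averaging permutation matrices (as ESP's slice
# averaging does over slice-specific plans) always yields a doubly
# stochastic matrix, since permutations are the extreme points of
# the Birkhoff polytope.
rng = np.random.default_rng(0)
N, L = 4, 8
perms = [np.eye(N)[rng.permutation(N)] for _ in range(L)]
G = sum(perms) / L  # uniform average over L slice-specific permutations
row_sums, col_sums = G.sum(axis=1), G.sum(axis=0)
```

Note this also illustrates the reviewer's expressivity question: a *uniform* average only reaches a subset of the polytope, whereas BvN guarantees coverage only under arbitrary non-negative weights summing to 1.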
Supplementary Material: Some rationale on the hyperparameter choices would be good. Relation To Broader Scientific Literature: This paper suggests a clear improvement over the Sinkformer, which has itself been shown to improve over standard attention in Transformers. It removes the limitation of the Sinkformer that only produces DSMs as iterative approximations and shows superior empirical results. Essential References Not Discussed: The authors did a great job at revising prior literature about adaptations of Transformers, especially toward doubly stochastic attention. An alternative potential avenue for doubly stochastic Transformers could be quantum computing, as was highlighted in the conclusion of this paper from ICML last year: https://proceedings.mlr.press/v235/mariella24a.html Other Strengths And Weaknesses: Positive: - Strong theoretical motivation, clear novelty - Strong empirical results at only mild runtime increase Negative: - No source code is provided, hampering reproducibility - A small discussion on when to choose which $\tau$ would be helpful Other Comments Or Suggestions: - Fig. 2: Effectively, the choice of $\tau$ seems to interpolate between the extremes of $\tau=100$, where the attention matrix is a permutation matrix, and $\tau=0$, where the DSM is almost the identity matrix -- in other words we seem to move between the vertices and the origin of the Birkhoff polytope. Can you confirm (or even prove) this intuition? If yes, it might be worth emphasizing this in the text to ease interpretability of the parameter $\tau$. - Are your DSMs exact or approximate? In the Sinkformer they are only approximate due to the iterative nature of Sinkhorn's algorithm. - For Figure 3, for completeness S=3 should be added since it is empirically often enough, as shown in the Sinkformer. 
Section 3.3: - Please mention for completeness also the complexity of standard attention following the same notation - Whether your complexity is better than the Sinkformer's, in theory, depends on (1) whether m < S and (2) d < N. Regarding (1): Typically no, because the Sinkformer shows good performance with S being as small as 3 or 5 and observes saturation around S=20. Regarding (2): typically yes; your d should correspond to $d_v$ in the standard Transformer. Even if this is more of a theoretical than a practical perspective, I feel amplifying it would improve clarity. Minor: L412 (right side) - missing space Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the constructive feedback. Below, we provide our responses. **1. The Birkhoff Polytope.** We thank the reviewer for raising this insightful question. In the context of the Birkhoff polytope $B_N$, we will consider the transport plans between $\mu=\frac{1}{N}\sum_{i=1}^N\delta_{x_i}$ and $\nu=\frac{1}{N}\sum_{j=1}^N\delta_{y_j}$, i.e., two measures having the same number of supports $N$ and uniformly-distributed mass. To avoid confusion, we refer to the "identity matrix" in $B_N$ as the outer product plan and denote the $N\times N$ matrix with all entries equal to $1$ as $\mathbb{1}_{N\times N}$. The following fact about $B_N$ is used in the analysis below [1]: * (Birkhoff-von Neumann Theorem) $B_N$ is a convex polytope whose extreme points, or corners, are the $N!$ permutation matrices. 1. When $\tau\rightarrow\infty$, a permutation matrix is recovered. In the original EST work [2], the authors mentioned that the distribution of $\theta$ will degenerate into a one-hot vector as $\tau\rightarrow\infty$. The EST will recover min-SWGG [3], where the minimum is taken over all permutation matrices induced by the slices, resulting in a permutation matrix. By the Birkhoff-von Neumann Theorem, this will be one of the corners of $B_N$. 2. When $\tau=0$, the EST is not at the origin, i.e., the outer product measure $\frac{1}{N^2}\mathbb{1}_{N\times N}$. In Figure 6 of [2], EST with varying $\tau$ is compared with the Optimal Transport plan, the entropic Optimal Transport plan, and the outer product plan. EST is evidently different from the outer product measure. It is worth mentioning that the entropic Optimal Transport plan interpolates between the outer product and the Optimal Transport plan, whereas EST does not. 3. In [this figure](https://github.com/Anon4142/ESPFormer/blob/main/figures/dist_perm.png), we provide an illustration for $\tau=0$ and $\tau\rightarrow \infty$ using a pair of distributions $\mu, \nu$, each supported on 3 points. 
The corners of $B_3$ consist of $S_0, S_1, \cdots, S_5$. The pie chart in this figure shows the optimal transport plan (i.e., the optimal permutation) as a function of the slicing angle. As can be seen, $S_1$ and $S_3$ are the most highly voted transportation plans. When $\tau=0$, the EST is a convex combination of the corners with weights indicated in the pie chart. When $\tau\rightarrow \infty$, the EST will recover the one permutation that gives the minimal transport cost, which is $S_1$ in this example. [1] Birkhoff, G., 1946. [2] Liu X, et al. 2024. [3] Mahey G, et al. 2023. **2. What $\tau$ to choose?** The optimal value of the hyperparameter $\tau$ depends on the input probability measures (i.e., the task) and the number of slices. Smaller $\tau$ values promote more diffuse attention, while larger values lead to sharper, more focused (i.e., near one-to-one) attention between tokens. We ran more ablative studies on the choice of $\tau$ and slicing strategies. Please see our responses to reviewer eE5t. **3. Source code availability.** We apologize for the unclear placement of the code link, which was included as a footnote on page 7. In the camera-ready version, we will move it to the abstract for better visibility. Meanwhile, the anonymous GitHub repository is available [here](https://github.com/Anon4142/ESPFormer). **4. Exact/Approximate DSMs.** Soft sorting yields approximate DSMs during training, but annealing the temperature gradually transitions it toward hard sorting and exact DSMs. At inference, hard sorting can replace soft sorting, reducing complexity from $O(N^2)$ to $O(LN\log N)$. We will include this discussion in the revised paper. To demonstrate, we fine-tuned ESPFormer for 40 epochs on Cats vs. Dogs (1% and 10% data) and replaced soft sorting with hard sorting at inference. 
Results show improved accuracy and efficiency:

| Data Fraction | Initial (Soft) | After Annealing | Hard Sort |
|---------------|----------------|-----------------|-----------|
| 1% | 59.87% | 60.50% | 61.02% |
| 10% | 71.71% | 72.16% | 72.66% |

Experiments on 25% and 100% data are ongoing and will be included in the camera-ready version. **5. Updating Figure 3.** We have updated Figure 3 to include the runtime for the Sinkhorn method with $S=3$. We also added the inference runtime for ESP via hard sorting (i.e., inference time). The revised Figure 3 is available [here](https://github.com/Anon4142/ESPFormer/blob/main/figures/esp_sink_runtime.png). **6. On the complexity of ESP, Sinkhorn and Vanilla Attention.** We observe that with SoftSort, ESP achieves lower runtime complexity than Sinkhorn for $S \geq 5$, though it remains slower when $S = 3$ (for training). While $S = 3$ or $5$ may be sufficient for some Sinkformer tasks, others, such as ModelNet40, require more iterations (e.g., $S = 21$). We will clarify this and include the complexity of vanilla attention in the main manuscript. Thank you again for your consideration.
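For concreteness, the soft-to-hard sorting transition discussed in this rebuttal can be sketched as follows (a minimal illustration with made-up scores, using a SoftSort-style construction; this is not the authors' implementation):

```python
import numpy as np

def soft_sort_matrix(s, tau):
    """SoftSort-style relaxation: a row-stochastic matrix whose rows
    sharpen to one-hot vectors as tau -> 0, recovering the hard
    (descending) sort permutation."""
    s = np.asarray(s, dtype=float)
    s_sorted = np.sort(s)[::-1]                      # descending targets
    logits = -np.abs(s_sorted[:, None] - s[None, :]) / tau
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

s = np.array([0.3, 1.2, -0.5])
P_soft = soft_sort_matrix(s, tau=1.0)    # smooth rows, differentiable (training)
P_hard = soft_sort_matrix(s, tau=1e-4)   # effectively the sort permutation (inference)
```

Annealing `tau` toward zero during training, as described above, moves `P_soft` continuously toward `P_hard`, so switching to an exact `argsort` at inference incurs no distribution shift.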
Summary: The paper introduces ESPFormer, an attention mechanism that enforces doubly-stochastic constraints in attention matrices without requiring iterative Sinkhorn normalization. Instead, it leverages Expected Sliced Transport Plans (ESP) to achieve a fully parallelizable and computationally efficient solution. The authors integrate a temperature-based soft sorting technique to ensure differentiability, making ESPFormer compatible with deep learning models. Experimental evaluations across diverse applications, including image classification, point cloud classification, sentiment analysis, and neural machine translation, demonstrate the advantages of ESPFormer over traditional self-attention and Sinkhorn-based attention methods. ## update after rebuttal I increased the score. Claims And Evidence: 1. ESPFormer provides a more computationally efficient alternative to Sinkhorn-based doubly-stochastic attention mechanisms. 2. The proposed attention mechanism leads to better-balanced attention distributions, improving model performance. 3. ESPFormer enhances downstream tasks such as classification and translation compared to classical self-attention and Sinkhorn-based approaches. 4. Fine-tuning pre-trained models with ESPFormer leads to significant performance improvements. These claims are supported by experimental evidence on multiple datasets. The runtime complexity analysis demonstrates computational advantages, and performance metrics from classification and translation tasks confirm improved accuracy. Methods And Evaluation Criteria: The methods proposed are well-motivated for addressing the limitations of standard self-attention and Sinkhorn normalization. The evaluation criteria include: 1. Accuracy on benchmark datasets for classification tasks. 2. BLEU scores for machine translation tasks. 3. Runtime complexity comparisons between ESPFormer and Sinkformer. 4. Ablation studies on hyperparameters such as inverse temperature. 
These metrics appropriately reflect the benefits of the proposed method in terms of both efficiency and performance. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments are well-structured, covering a range of tasks that benefit from improved attention mechanisms. The paper provides: 1. Comparisons against baseline models (Vanilla Transformer, Sinkformer, DiffTransformer) across various datasets. 2. Analysis of computational efficiency via runtime evaluations. 3. Fine-tuning experiments to validate the plug-and-play nature of ESPFormer. Supplementary Material: The supplementary material includes implementation details of Sinkhorn’s algorithm, Differential Transformer, and additional runtime analyses. These materials provide useful insights into the experimental setup and computational efficiency comparisons. Relation To Broader Scientific Literature: ESPFormer extends prior work on optimal transport-based attention mechanisms by addressing computational bottlenecks and using sliced optimal transport plans. Essential References Not Discussed: While the paper discusses relevant prior work, additional references on non-iterative attention normalization techniques or alternative soft sorting methods might provide further context. Other Strengths And Weaknesses: ## Strengths 1. Introduces a novel, computationally efficient alternative to Sinkhorn-based attention mechanisms, eliminating the need for iterative normalization. 2. The method is fully parallelizable, making it more scalable than Sinkhorn-based approaches, which require sequential iterations. 3. Strong theoretical grounding connects ESP attention with optimal transport theory, ensuring the approach is well-principled. 4. Extensive experiments demonstrate the effectiveness of ESPFormer across a variety of tasks, including image classification, point cloud classification, sentiment analysis, and neural machine translation. 5. 
The paper provides clear empirical evidence that replacing self-attention mechanisms in pre-trained models with ESPFormer leads to performance improvements with minimal fine-tuning. 6. The introduction of the inverse temperature hyperparameter allows fine-grained control over attention sparsity, adding adaptability across different tasks. ## Weaknesses 1. The reliance on soft sorting introduces an additional hyperparameter (temperature), which may require careful tuning for different tasks. 2. Additional experiments on language models are needed to further demonstrate the performance of ESPFormer. Other Comments Or Suggestions: Since language modeling is one of the most important applications of Transformers, experiments on a language model (e.g., on WikiText-103) should be considered. Questions For Authors: 1. How does ESPFormer handle very long sequences compared to efficient self-attention variants like Linformer? Is it possible to further reduce the complexity of ESPFormer? 2. I understand that the choice of the identity matrix is for computational convenience since it does not require projection. However, it is worth trying whether a fixed set of directions $\theta_1,\ldots,\theta_L$ which are not the standard basis can affect the performance (L can be larger than d). This set of directions can be randomly sampled from the uniform distribution or using quasi-Monte Carlo methods to encourage repulsive structure [1]. [1] Quasi-Monte Carlo for 3D Sliced Wasserstein, Nguyen et al. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. Below are our responses. **1. Reliance on soft sorting:** Soft sorting introduces a temperature hyperparameter that requires tuning. To address this, we use *temperature annealing*, an exponential decay schedule, to smoothly transition from soft to hard sorting. This allows the model to adapt as sorting sharpens, enabling hard sorting at inference without performance loss. This improves runtime, reducing complexity from $O(N^2)$ to $O(N \log N)$, and reduces our dependency on soft sorting. See [Figure 3](https://github.com/Anon4142/ESPFormer/blob/main/figures/esp_sink_runtime.png) for updated runtimes. The table below summarizes results on Cats vs. Dogs across training fractions.

| Data Fraction | Initial Test Acc (Soft Sort) | After Temp Decay (Sharp Soft Sort) | After Switching to Hard Sort |
|---------------|------------------------------|-------------------------------------|-------------------------------|
| 1% | 59.87% | 60.50% | 61.02% |
| 10% | 71.71% | 72.16% | 72.66% |

Experiments for the 25% and 100% training splits are currently in progress and will be included in the **camera-ready version** of the paper. **2. Additional experiments:** We conducted experiments on TweetEval. The table below summarizes the test performance of several models, including ESPFormer, on this benchmark. ESPFormer demonstrates competitive results compared to the baselines, outperforming both Sinkformer and the standard attention mechanism (denoted by "Vanilla"). For a controlled comparison, we replaced the attention module in a standard 6-layer Transformer (as proposed by Vaswani et al., 2017) with alternative components—Sinkformer, DiffAttention, and ESPFormer—while keeping the rest of the architecture unchanged. As illustrated, all variants outperform the vanilla baseline, with ESPFormer and DiffAttention achieving the highest accuracies on this benchmark.

| Model | Vanilla | Sink. | Diff.Att. | ESP. |
|----------|---------|-------|-----------|------|
| Accuracy | 71.5 | 72.0 | 72.6 | 72.6 |

Note that this evaluation is limited to a single architecture and run due to the rebuttal timeframe. We plan to extend it in the camera-ready version with plug-and-play experiments across diverse attention modules and pretrained language models. **3. Axis-aligned slices:** We chose axis-aligned slices because the keys and queries are learned parameters. Thus, any potential optimization of slice orientations is implicitly captured by the query and key matrices, $W_Q$ and $W_K$. This choice enabled us to avoid introducing extra learnable parameters to ensure a fair comparison with baseline methods, such as vanilla attention and Sinkformer. To investigate the effect of the number and type (learnable vs. frozen) of slices, we conducted additional experiments on the Cats vs. Dogs benchmark, varying the number of slices $L$, the inverse temperature parameter $\tau$, and whether the slices were frozen or learnable. The results are presented below. 
| **$\tau=0.1$** | $L = 1$ | $L = 8$ | $L = 32$ | $L = 64$ | $L = 128$ |
| ---------------- | ------- | ------- | -------- | -------- | --------- |
| **Learnable** | 74.45% | 78.56% | 79.33% | 78.25% | 76.15% |
| **Frozen** | 66.61% | 72.75% | 78.41% | 79.39% | 79.66% |
| **Axis-Aligned** | — | — | — | 79.47% | — |

| **$\tau=1.0$** | $L = 1$ | $L = 8$ | $L = 32$ | $L = 64$ | $L = 128$ |
| ---------------- | ------- | ------- | -------- | -------- | --------- |
| **Learnable** | 74.45% | 79.07% | 78.10% | 77.64% | 74.34% |
| **Frozen** | 66.61% | 73.09% | 77.89% | 78.88% | 78.50% |
| **Axis-Aligned** | — | — | — | 78.86% | — |

| **$\tau=10$** | $L = 1$ | $L = 8$ | $L = 32$ | $L = 64$ | $L = 128$ |
| ---------------- | ------- | ------- | -------- | -------- | --------- |
| **Learnable** | 74.45% | 79.24% | 78.06% | 77.07% | 74.20% |
| **Frozen** | 66.61% | 73.50% | 76.79% | 77.91% | 78.13% |
| **Axis-Aligned** | — | — | — | 77.78% | — |

Learnable slicers may help with fewer slices by focusing on informative directions, but their advantage diminishes as the slice count increases. In contrast, frozen slicers avoid added complexity but require many slices to capture distributional structure effectively, especially in high dimensions. Once again, we thank the reviewer for their time and consideration. **P.S.** Due to the limited rebuttal period, the results presented are based on a single run. We recognize the importance of reporting performance across multiple seeds and will include averaged results in the camera-ready version. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for a detailed rebuttal and additional ablation studies. I raised my score to 4 since my questions were answered. I believe this work is a great connection between the sliced optimal transport literature and deep learning architecture literature. 
I suggest the authors include more discussion on related works (as suggested by reviewers) in both literatures to form a strong connection. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their thoughtful feedback and for raising their score. We will ensure that all missing references are included in the camera-ready version of the paper and add the suggested discussions on related works.
Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
Accept (poster)
Summary: The submission is based on learning a single concept space that is shared between SAEs trained on multiple vision models. The aim is to learn a universal set of concepts which can be used to translate between different models and highlight differences in how models represent visual information. The experiments center on a universal concept space trained on a set of three models (SigLIP, ViT, and DinoV2). Various characterizations are performed: the dominant concepts are visualized via activation heatmaps; the reconstruction quality across and within each model; concept firing distributions; comparison to singular SAEs; and universal concept activation maximization. Claims And Evidence: The paper convincingly supports the claim that a shared concept space was learned, and I agree that the analysis is of a depth that novel insights are offered about how these different models represent visual information. The claim of novelty around “coordinated activation maximization” appears overblown -- it’s activation maximization on the learned space, repeatedly applied. Methods And Evaluation Criteria: Largely, yes. There are a few issues which would strengthen the paper if addressed: - Why introduce a firing threshold $\tau$ when using a top-k SAE? In other words, we know exactly what concepts fire for each input -- it’s the top k. By introducing a degree of arbitrariness, and one that is decoupled from the functioning of the SAE, subsequent processing is called into question. - The firing entropy does not actually use firing patterns across the models; it merely compares counts. A concept could fire completely disjointly across the models (say, 20 times for each model on a total of 60 inputs) and the paper’s firing entropy metric would be maximized. Thus the evaluation is problematic, and might be improved by directly assessing the firing distribution per concept. 
This would be easy -- each firing pattern is a three-bit vector, and the authors could calculate something like the total correlation of the joint distribution from the 8 probability values. Theoretical Claims: I saw no theoretical claims. Experimental Designs Or Analyses: The paper provides little information about the experimentation that culminated in the final setup, which weakens the presentation of the method. Can any evidence at all be presented for L1 improving interpretability over L2 in the top-K SAE implementation? How many epochs were needed for convergence? The interpolation of the size-14 patches used in DinoV2 to the size-16 patches for the other two models is reasonable, though it would be great to have more of a discussion and demonstration of the effect of this interpolation vs. any other form. Was the interpolation bilinear? What were the specific versions of the models used, given that `timm` has multiple for each? In the top-K SAE implementation, what is k and how was it determined? 
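The total-correlation computation suggested earlier in this review (over the 8 joint probabilities of the three-bit firing pattern, one bit per model) could look like the following; all names and example distributions are illustrative, not drawn from the paper:

```python
import numpy as np

def total_correlation(joint):
    """Total correlation of a joint distribution over three binary
    variables (one firing bit per model), given as 8 probabilities
    indexed by (b0, b1, b2). TC = sum of marginal entropies minus
    the joint entropy; it is zero iff the three models fire
    independently."""
    p = np.asarray(joint, dtype=float).reshape(2, 2, 2)
    p = p / p.sum()

    def H(q):
        q = q[q > 0]
        return float(-(q * np.log2(q)).sum())

    marginals = [p.sum(axis=ax) for ax in [(1, 2), (0, 2), (0, 1)]]
    return sum(H(m.ravel()) for m in marginals) - H(p.ravel())

# All three models always co-fire or co-abstain: maximal dependence (2.0 bits).
co_fire = np.zeros(8)
co_fire[0] = co_fire[7] = 0.5
# Three independent fair coins: zero total correlation.
indep = np.full(8, 1 / 8)
```

Unlike the count-based firing entropy, this metric distinguishes models that fire on the *same* tokens from models that merely fire equally *often*.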
I understand this is most likely too much for the rebuttal period, so commenting on how the authors see this experiment playing out would be fine. Questions For Authors: - Regarding reconstruction $R^2$ values, the paper states “positive off-diagonal $R^2$ scores indicate successful cross-model reconstruction...” (L313). Merely having a positive $R^2$ is an extremely low bar for success. The off-diagonal values are around 0.3-0.4 -- which still does not seem particularly successful. Can the authors provide more grounding for why this is good enough, and not higher? Code Of Conduct: Affirmed. Overall Recommendation: 4
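For context on the $R^2$ question above, a generic sketch of the coefficient of determination (standard definition, not the paper's evaluation code): $R^2 = 0$ corresponds to predicting the per-feature mean, so off-diagonal values of 0.3-0.4 indicate reconstruction meaningfully better than that baseline, though well short of perfect.

```python
import numpy as np

def r2_score(x, x_hat):
    """Coefficient of determination: 1.0 = perfect reconstruction,
    0.0 = no better than predicting the per-feature mean."""
    ss_res = ((x - x_hat) ** 2).sum()
    ss_tot = ((x - x.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 8))                       # stand-in for activations
perfect = r2_score(x, x)                            # exact reconstruction
mean_pred = np.broadcast_to(x.mean(axis=0), x.shape)
baseline = r2_score(x, mean_pred)                   # mean predictor
```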
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough analysis of our work, and thank them for finding that our paper is nicely written with well-motivated analyses. We aim to answer the questions raised below. For plots and figures, please refer to: https://sparkling-queijadas-998747.netlify.app/ ### **Implementation Details** *“Can any evidence at all be presented for L1 improving interpretability over L2?”* In our early experiments, we found that training USAEs with an L1 reconstruction loss tended to yield more interpretable concepts than L2—particularly in the sparsity and visual clarity of activations. While this remains a qualitative observation, it aligns with prior findings in cross-model interpretability settings [Lindsey et al., 2024]. We emphasize that USAEs are not tied to a specific loss; we simply chose L1 based on these initial results and precedent. *“How many epochs were needed for convergence?”* All USAEs were trained for 30 epochs, after which reconstruction metrics plateaued and we observed diminishing returns. This setting was consistent across all experiments. *“Was the interpolation bilinear?”* We appreciate the suggestion and agree that the token interpolation choice is an interesting direction. We used bilinear interpolation in our experiments. Its main effect appears as smoothing in the border regions of attribution maps. However, we haven't observed meaningful differences in the types of concepts discovered. We plan to explore alternative schemes (e.g., bicubic, with and without anti-aliasing) in future work. *“What were the specific versions of the models used?”*

- SigLIP: `vit_base_patch16_siglip_224`
- DinoV2: `facebookresearch/dinov2_vits14`
- ViT: `vit_base_patch16_224`

*“In the top-K SAE implementation, what is K and how was it determined?”* We set K = 32 for all experiments, following prior work on sparse autoencoders [Gao et al., 2024]. 
While this choice worked well empirically, we acknowledge that concept interpretability and stability may vary with K, and we leave a deeper hyperparameter sweep to future work. ### **Firing Threshold $\tau$** We agree that one could just use the K largest dictionary entries; however, we use a threshold to reduce noise in cases where some of the top-K firing dimensions are very close to 0. For example, some concepts may be captured successfully with only a subset of the K dimensions, which could be identified with a small threshold to ensure the magnitude of the activation is sufficient. ### **Firing Entropy vs. Co-Fire Proportion** We agree with the reviewer that modifying the firing-entropy (FE) metric to measure the distribution of fires per concept between models may yield additional useful insights. The FE metric focuses on firing counts (irrespective of co-fires) to initially probe whether concepts were particularly biased toward any of the models. We chose to measure this to avoid degenerate solutions that reserve subsets of concept dimensions for specific models. Once we determined that many concepts fired with equal probability for each model, we used co-firing proportion (CFP) as a more granular token-level analysis to see if these concepts fired for the same tokens. ### **Training on Different Layers for a Frame of Reference** We appreciate the suggestion to train our USAE on the first and last layers of a single network as a baseline. We expect partial concept overlap between layers, as shown in Fig. 4 where low-level features (e.g., the color blue, concept 4235) emerge from last-layer training. Our DTD experiments (**Generalization OOD**, 51dn) further demonstrate detection of low-level concepts when applying an ImageNet-trained USAE to texture data. Given neural networks' hierarchical feature learning, finding low-level features in the last layer suggests our method may identify "layer-redundant" features encoded across multiple network depths. 
This makes cross-layer training an imperfect mismatch but still a valuable experiment. ### **$R^2$ Off-Diagonal Scores** In the confusion matrix (Fig. 5), the maximum possible off-diagonal $R^2$ score is 1, which would imply that activations from one model (e.g., SigLIP) can be perfectly reconstructed by encoding activations from a different model (e.g., DinoV2 or ViT). While the upper bound for such cross-model reconstruction is unclear, our positive off-diagonal $R^2$ scores already provide strong evidence of shared structure across models. These results suggest that USAEs capture meaningful, transferable representations even across architectures and training paradigms. We believe that further optimizing the USAE design—including architecture (increasing the depth of the encoder, Matching Pursuit encoder…), loss functions (HSIC, Cosine, …), training objectives (iBot-style…), and hyperparameters—could improve cross-model reconstruction and raise this empirical bound. We leave a more detailed exploration of these design factors to future work.
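The top-K-plus-threshold encoding described in this rebuttal can be sketched as follows (an illustrative toy with random weights; function and variable names are ours, not the authors' code):

```python
import numpy as np

def topk_sae_encode(z, W_enc, b_enc, k=32, tau=0.05):
    """Illustrative top-K SAE encoder with a firing threshold: keep
    the K largest ReLU pre-activations, then zero any survivor whose
    magnitude falls below tau (suppressing near-zero 'fires')."""
    pre = np.maximum(z @ W_enc + b_enc, 0.0)   # ReLU pre-activations
    idx = np.argsort(pre)[::-1][:k]            # indices of the top-K entries
    codes = np.zeros_like(pre)
    codes[idx] = pre[idx]
    codes[codes < tau] = 0.0                   # firing threshold
    return codes

rng = np.random.default_rng(0)
d, m = 16, 64                                  # activation dim, dictionary size
W_enc, b_enc = 0.1 * rng.normal(size=(d, m)), np.zeros(m)
codes = topk_sae_encode(rng.normal(size=d), W_enc, b_enc, k=8)
```

This makes the rebuttal's point concrete: the encoder already selects exactly K concepts, but the threshold `tau` can prune those whose activation magnitude is negligible.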
Summary: The authors introduce USAEs, a framework that jointly learns a universal concept space for the internal activations of multiple vision models. By optimizing a shared objective, they show that USAEs discover semantically coherent universal concepts at different levels across vision models. Their results showcase the strong correlation between concept universality and importance, and also identify unique features that are learned by individual models. A unique application of USAEs, coordinated activation maximization, is also presented to achieve simultaneous visualization of universal concepts across models. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes Supplementary Material: Yes. Appendix A Relation To Broader Scientific Literature: While previous papers focus on learning the concepts in individual models with SAEs, this paper proposes a new USAE for learning universal concepts across models. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The introduction of USAEs is novel, exploring a new direction of learning universal SAE concepts across models 2. Extensive experiments are done to demonstrate the capture of universal concepts using USAEs. 3. The presentation is clear and easy to follow. Weaknesses: 1. Some implementation details are not clear (e.g., what is the threshold used in Figure 5; how are the concept meanings summarized). 2. The comparison between USAEs and independent SAEs is not sufficient (e.g., do they find more universal concepts than just comparing independent SAEs; will USAEs show more consistent visual results?). 3. Some experiments need more discussion (e.g., what is the connection between FE and CFP?) Other Comments Or Suggestions: 1. While $x\in \mathcal{X}$ is written in Equation (8), I don't see where $x$ is used in the equation. Is it the same as $x^{(i)}$? 2. Where are the results for the discussion in Line 317? 
Questions For Authors: 1. Is Energy(k) (Equation (9)) computed for just one model? Or is it computing a universal score for the concept k across models? 2. A baseline would be trying to identify shared concepts in independent SAEs. Do USAEs find more universal concepts than just comparing independent SAEs? Also, there is a lack of visual comparison between USAEs and SAEs. 3. Will USAEs learn more redundant concepts than SAEs? 4. Although the authors claim Equation (6) will "strike a practical balance between training speed and memory usage", there is no empirical evidence supporting this. 5. It is not clear to me how the concept meanings shown in this paper were generated. Were they manually summarized by humans? 6. How is the threshold determined for results in Figure 5. 7. More discussion/analysis should be made on the connection between FE and CFP. Will the high value of one metric always lead to the high value of the other? 8. Why do the results in line 357 indicate that SigLIP and ViT share more concepts? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful analysis of our work, and appreciate the positive compliments regarding the presentation quality, novelty, and extensive experimental results. We answer the questions raised below in text form. For plots and figures, please refer to our temporary anonymous website hosted on the Netlify platform: https://sparkling-queijadas-998747.netlify.app/ ### **USAE vs SAE Features** See our response **USAE vs SAE features** to (51dn) for further discussion. ### **Firing Entropy vs Co-Firing Proportion** See our response **Firing Entropy vs Co-Fire Proportion** to (SLTN). ### **Clarifying Questions** 1. *Is Energy(k) (Equation (9)) computed for just one model? Or is it computing a universal score for the concept k across models?* Energy is first computed individually for each model, and we take the average to rank concepts. 2. *A baseline would be trying to identify shared concepts in independent SAEs. Do USAEs find more universal concepts than just comparing independent SAEs? Also, there is a lack of visual comparison between USAEs and SAEs.* Please see **USAE vs SAE features** in our response to (51dn). 3. *Will USAEs learn more redundant concepts than SAEs?* Assuming redundant concepts refers to multiple concepts that look visually similar and appear repeatedly, we do not observe many of them in the learned dictionary. Designing a formal way of quantifying such redundancy and performing a deeper analysis of its frequency is an interesting direction to explore for future work. 4. *Although the authors claim Equation (6) will "strike a practical balance between training speed and memory usage", there is no empirical evidence supporting this.* The alternative approach for mining universal concepts between individual SAEs is restricted to being pairwise [Lan et al. 2024]; scaling this approach greatly increases its computational complexity.
Our approach, being learning-based, maintains the same complexity as the number of models grows, at the cost of increased training time. 5. *It is not clear to me how the concept meanings shown in this paper were generated. Were they manually summarized by humans?* Concept meanings were determined by qualitative inspection, as is common practice in previous works [Ghorbani et al. 2019, Fel et al. 2023c, Kowal et al. 2024b]. However, we believe the rise of more capable vision+language models (e.g., Vision-LLMs) could aid in automated summarization of the results. 6. *How is the threshold determined for results in Figure 5?* Assuming the threshold being referred to is Figure 5 (C): We observed a clear phase transition beyond 1000 concepts (e.g., r=0.63 vs. r=0.89). Thus, we aimed to analyze properties of these highest co-firing concepts and set the threshold to the 1000 highest co-firing concepts. 7. *More discussion/analysis should be made on the connection between FE and CFP. Will the high value of one metric always lead to the high value of the other?* See (SLTN) Firing Entropy vs Co-Fire Proportion. 8. *Why do the results in line 357 indicate that SigLIP and ViT share more concepts?* Phrased differently, the results in L357 indicate that SigLIP and ViT share a higher fraction of total concepts which co-fire across all three models; this is likely due to DinoV2 possessing a higher fraction of unique concepts, since its training objective promotes 3D scene understanding. We observe these concepts, encoded as low-entropy concepts, in appendix Fig. 10 and 11. ### **Implementation Details** See response to (SLTN) **implementation details**.
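The per-model Energy averaging described in answer 1 can be sketched as follows (a minimal illustration under our own assumptions about data layout; the function name and inputs are hypothetical, and the per-concept scores stand in for Eq. 9):

```python
def rank_concepts_by_mean_energy(energy_per_model):
    """energy_per_model: one list per model, each giving Energy(k)
    for every concept k. Concepts are ranked by their cross-model
    mean energy, highest first."""
    n_models = len(energy_per_model)
    n_concepts = len(energy_per_model[0])
    mean_energy = [
        sum(model[k] for model in energy_per_model) / n_models
        for k in range(n_concepts)
    ]
    # Return concept indices sorted by descending average energy.
    return sorted(range(n_concepts), key=lambda k: -mean_energy[k])
```

For example, with two models and three concepts whose per-model energies are `[0.1, 0.9, 0.5]` and `[0.3, 0.7, 0.5]`, concept 1 ranks first, then concept 2, then concept 0.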
Summary: This work introduces Universal Sparse Autoencoders (USAEs), a method to discover concepts shared across different deep learning models. The authors focus on the study of USAEs for the last-layer representations of three popular vision models, showing that the methodology enables the construction of an interpretable feature space shared across different architectures. The authors present a coordinated activation maximization procedure to investigate concept alignment. The paper supports that the proposed procedure produces meaningful universal features through qualitative and quantitative analyses. Claims And Evidence: The claims in the paper are well supported by experimental evidence. Both qualitative (Fig 1) and quantitative (Fig 3 and 4) results support the evidence that USAEs learn meaningful features shared among the three models. The claim that USAEs learn features at different levels of granularity is supported by clear qualitative evidence. The work would benefit from further evidence that the method allows the construction of truly universal features. Considering that all three models have been trained on ImageNet, the claim would be further strengthened by showing that this is still true for images not in the training set. Methods And Evaluation Criteria: The method proposed by the authors is suited for the purpose of constructing interpretable features that are shared among different vision models. The evaluation criteria are generally well constructed but can be strengthened in some parts. For example, in Section 4.4, the authors could better characterize what type of features the USAE method learns relative to SAE features, which are shared among the different models. The analysis in this section would benefit from a clearer description. Theoretical Claims: NA Experimental Designs Or Analyses: The cross-model reconstruction, co-firing rate, and energy-based importance metrics are appropriate for measuring concept alignment.
Qualitative visualizations effectively illustrate shared concepts. Supplementary Material: I have read the supplementary material. Relation To Broader Scientific Literature: The authors contextualize their work well within existing literature. Essential References Not Discussed: To the reviewer's knowledge, there are no major papers that have been overlooked by the authors. Other Strengths And Weaknesses: The authors propose an original, well-structured way to analyze shared representations across multiple architectures, which leads to interesting observations concerning model alignment. The Coordinated Activation Maximization method proposed in the work is a useful strategy for comparing concept representations across different models. The presentation of the results and the definition of the necessary mathematical concepts are explained with a good level of clarity. The work would benefit from further evidence that the features identified by the USAE methodology generalize to other datasets and are not restricted to the three models considered in the work. A more thorough discussion of the advantages of the proposed strategy in terms of scalability could benefit the work, especially in light of the fact that the authors do not outline (at least in terms of a proof of concept) automatic strategies to interpret USAE coordinates beyond visual inspection. Other Comments Or Suggestions: I have no further comments or suggestions. Questions For Authors: I have no further questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer 51dn for their thorough review of our work. We appreciate that they found the paper original and well-structured, and that they found the proposed Coordinated Activation Maximization application a promising tool for concept visualization. For plots and figures, please refer to our temporary anonymous website hosted on the Netlify platform: https://sparkling-queijadas-998747.netlify.app/ ### **Dataset Generalization (OOD)** We agree that testing for potential dataset bias is critical. To this end, we evaluated OOD generalization using two diverse datasets outside of ImageNet: DTD, a texture dataset (e.g., stripes, spotted, …), and CelebA, a face dataset with 40 binary attributes (e.g., glasses, wavy hair, …). Using DTD and CelebA as the validation dataset for our ImageNet-trained USAEs shows strong evidence of generalization outside of the training distribution (**Table A**). We find consistent activation reconstruction accuracy (measured by MSE and R2) and consistent trends in co-firing metrics (**Fig. C**), and visualize some of the most important concepts for these new datasets, along with their associated highest activating images, from ImageNet (**Fig. A and B**). Despite differences in domain and semantics, USAEs trained on ImageNet exhibited robust generalization to both DTD and CelebA. Importantly, many of the concepts identified in these datasets also aligned with high-activation concepts from ImageNet, suggesting that the USAE dictionary captures generalizable structure beyond its training data. ### **Scalability, automatic strategies beyond visual inspection** As in prior work [Ghorbani et al., 2019; Fel et al., 2023c; Kowal et al., 2024b], we determine concept semantics via qualitative inspection: for each concept, we collect its top-activating image examples and generate corresponding spatial token heatmaps to aid interpretation.
While this manual approach remains standard, we believe that emerging vision-language models (e.g., vision-capable LLMs) offer promising tools for automating concept summarization and interpretation. We view this as an exciting direction for future work. ### **USAE vs SAE features** We agree with the reviewer that further exploring the differences between USAE concepts and common SAE concepts is an interesting direction. We assume 51dn is asking for a characterization and comparison between universal features mined between independent SAEs vs universal concepts learned from our joint approach. To the best of our knowledge, all previous work does only pairwise analysis to mine overlapping concepts between independent SAEs [Lan et al. 2024]; scaling these approaches beyond pairwise comparisons is currently not possible without substantial modifications to the previous methods. Developing a working baseline that extends these post-hoc mining approaches beyond pairwise is out of the scope of our work. We demonstrate that USAEs and independent SAEs have fewer overlapping features (USAE-SAE, Sec 4.4 Fig. 7), yet exhibit much higher overlap between themselves (i.e., USAE-USAE and SAE-SAE; see 11mk Dictionary Stability), further indicating that USAEs do learn unique features that are not captured by independent SAEs. Lan et al. “Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models.” ArXiv 2024
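The cosine-similarity overlap between two concept dictionaries can be sketched as a best-match score (our minimal illustration, not the paper's implementation; dictionary atoms are assumed to be vectors, and the reported stability scores are averages of each atom's best match):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def dictionary_overlap(dict_a, dict_b):
    """Mean, over atoms of dict_a, of the best cosine match in dict_b.
    1.0 means every atom of dict_a has an exact counterpart in dict_b."""
    return sum(max(cosine(u, v) for v in dict_b) for u in dict_a) / len(dict_a)
```

Under this measure, identical dictionaries score 1.0, while dictionaries with no shared directions score near 0, which is the sense in which USAE-SAE overlap can be compared against USAE-USAE and SAE-SAE overlap.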
Summary: The paper proposes a recipe to jointly train Sparse AutoEncoders (SAE) across different vision models in a shared (universal) space. The novel idea is to force the SAE to extract features (and, therefore, concepts) that are as shared as possible across models. This shared space enables cross-model and new within-model applications: the former allows studies of how different models encode the same concept, while the latter is used to characterize the distinction between model-specific and shared concept decompositions by comparing SAE and USAE decompositions. Claims And Evidence: The claims are backed up by strong empirical evidence. The compatibility of extracted features is backed by quantitative measures (sections 4.2 and 4.3), while the interpretability applications are validated through qualitative experiments (sections 4.1 and 4.5). The combination of both makes for a strong case. Methods And Evaluation Criteria: The methodology is straightforward (a plus!), with extra credit for the joint optimization process. Instead of optimising every pair, using a single space as a pivot, or adding an extra loss term, the paper takes a cleaner route: individually encoding each model’s activations while jointly decoding across all spaces. However, even though it’s noted in the appendix that the hyperparameters (I guess the SAE ones) require some tuning, it’s unclear how sensitive the method is to these choices. Adding some more information about it would certainly strengthen the paper. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments feel carefully designed and informative, with a mix of quantitative and qualitative measures that reinforce the key claims. My main curiosity in this section is how stable the concept universality metrics (firing entropy, co-firing proportion, and concept consistency) are across different (U)SAE hyperparameters, such as the dictionary size, the chosen models, or the dataset. 
This would provide a clearer picture of the method’s robustness and how “universal” the extracted features really are. *For example*, does a smaller dictionary recover a subset of the strongest concepts found with a larger dictionary, or does it lead to a more entangled representation? Does the method extract similar concepts with just a subset (2) of the models? Supplementary Material: N/A (no supplementary materials attached, but I read through the appendix and, personally, I really valued the "unique concept" studies). Relation To Broader Scientific Literature: This work has broad implications for the representation learning and interpretability fields. The study of representation compatibility and alignment is an active research topic, and the proposed framework naturally fits within this space by unlocking cross-model studies. One particularly interesting direction would be applying the same method to other modalities (yes, I’m referring specifically to LLMs). A shared concept space is, by definition, abstract. Therefore, it would be interesting to see some cross-domain extensions. A starting point could be applying USAE to the vision branch of CLIP but then applying the learned encoder $\Psi$ and decoder $D$ on the language branch. Essential References Not Discussed: A few comments on the references: - The citations to "The Platonic Representation Hypothesis" (PRH) (Huh et al., 2024) aren’t always accurate. The perfect placement is in the related works section under Feature Universality, supporting the discussion. But in the Introduction, it’s cited as a “technique for identifying universal features,” which isn’t quite right. PRH is more of a hypothesis about the existence and the convergence to a shared representation space than a method for discovering them. - In the Introduction, I would reference "Relative Representations enable Zero-Shot Latent Space Communication" (Moschella et al., 2023). 
This paper shows that different models, regardless of initialization, pretraining task, or architecture, can exhibit latent spaces that align well enough to be projected into a shared/universal one. That’s very much in line with the idea of universality in USAE-extracted concepts. - In the "Concept-Based Interpretability" paragraph of the related works section, I would include "Interpreting CLIP’s Image Representation via Text-Based Decomposition" (Gandelsman et al., 2024). This paper analyzes CLIP’s image encoder by decomposing (output-level) image representations into interpretable components (e.g., contributions from individual attention heads), using text representations as a dictionary for the decomposition. Other Strengths And Weaknesses: ### Extra Strengths - The writing is **amazingly clear, well-structured, and well-motivated**. The paper is a pleasure to read. I particularly appreciated that many questions that came up during my first read were addressed immediately (e.g., the USAE vs. SAE analysis). The only section that initially gave me trouble was 3.3 (Coordinated Activation Maximization), which could have been improved by adding a high-level sentence or two explaining the methodology in practical terms, making it more accessible. - Personally, I think cross-model interpretability is a **truly exciting topic**! ### Extra Weaknesses - **No failure case analysis**. The paper only shows successful examples of extracted concepts, but what happens when the USAE fails? Does it sometimes miss clearly present concepts or identify ones that aren’t? Seeing failure cases where the decomposition disagrees with human intuition would give a better sense of its limitations. - **Potential bias in the learned dictionary**. Since the USAE extracts concepts from ImageNet (I found this information in the appendix, I would add a sentence in "Implementation details" under section 4 specifying it), it’s likely biased toward the dataset’s structure/composition. 
But how does the dictionary change if trained on a different dataset? Some analysis of how the discovered dictionary shifts across datasets would be valuable. - An analysis of **concept stability across runs** is missing. I would expect different runs to produce slightly different dictionaries. But how much is "slightly"? Do the same concepts emerge consistently? Other Comments Or Suggestions: I'm adding it here because it can be considered a concurrent paper. I think "Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models" (Lan et al., 2024) is highly related to this work. In that one, representations of independently learned SAE on LLMs are aligned/compared post-training with similar universality findings. Questions For Authors: I find the paper to be very well-executed, and the core idea is highly interesting. I'm more than open to increasing my rating if the authors could further strengthen it by showing some experiment that addresses the "universality" robustness (see weaknesses 2 and 3 and "Experimental Designs Or Analyses"). To be clear, I don't think this is needed for the paper to be acceptable (that's why I'm already giving a 4), but given how central the “universality” claim is, appearing prominently throughout the text and even in the title, some extra validation would make it more substantiated. With the current validations, I would only refer to it as **a** shared set of concepts to jointly decompose multiple models. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank 11mk for their thorough analysis of our work. We appreciate that the reviewer found our work to be clearly written and well-motivated. We too believe cross-model interpretability is a truly exciting topic! We answer the questions raised below. For plots and figures, please refer to our temporary anonymous website hosted on Netlify: https://sparkling-queijadas-998747.netlify.app/ ### **Dictionary stability** We agree that concept stability is an important metric to consider. Note that our cosine-similarity-based analysis (Sec. 4.4 & Sec. A.3.1) in the main paper measures the stability between independent SAEs and USAEs, which found that USAEs do indeed preserve some of the concepts found in independent SAE dictionaries. However, we appreciate the suggestion to explore stability across training runs of hyperparameter-matched USAEs. Furthermore, concept stability (via cosine similarity) in SAEs has been analysed in recent work (Fel et al., 2025). We first independently verify this paper’s findings on ‘individual model’ TopK SAEs, closely matching the stability score they observed (~0.5). We further find that (TopK) USAEs exhibit similar stability to individual TopK SAEs (**Fig. F**), with scores of 0.52, 0.41 and 0.55 for SigLIP, DinoV2, and ViT, resp. Furthermore, we observe a strong positive correlation between concept stability and importance (**Fig. G**). The most important concepts—those contributing most to reconstruction—are also the most stable across runs, suggesting that universality and stability are linked. ### **Bias in the learned dictionary** For discussion on bias in the learned dictionary, see response to 51dn: **Dataset Generalization OOD**. ### **Comparison: smaller/larger dictionary sizes** To investigate whether smaller dictionaries favor more universal or entangled concepts, we ablated the expansion factor (4/8/12 = 3072/6144/9216 concepts).
We observed that while all dictionary sizes recover many of the same high-importance concepts, **the smallest dictionary (×4) tends to emphasize the most universal and high-entropy concepts (Fig. D and Table B)**, likely due to its limited capacity. In contrast, larger dictionaries better capture low-frequency and low-entropy concepts, which tend to be more model-specific. ### **Comparison with a subset of models** As suggested, we train a USAE on a subset of the models (SigLIP & ViT) and, as expected, find it has a significant overlap of concepts with the 3-model USAE. The stability scores between their dictionaries are 0.47 and 0.5 for SigLIP and ViT, resp. In addition, we again observe the trend of stability/importance correlation: the top 1000 Energy concepts from the 3-model USAE exhibit an average stability of 0.65 with the 2-model USAE. We also note that our firing metrics remain consistent with the 3-model USAE in (**Fig E**). These results reinforce our finding that the most **important and universal concepts are also the most stable**, and that universality is recoverable even from subsets. ### **Universality of concepts** We appreciate and agree with your framing: USAEs identify a possible set of shared concepts, not the definitive one. We will revise the paper’s language accordingly. Still, we show that the universal concepts we do identify are: - **Stable** across training runs, - **Important** to activation reconstruction, - **Generalizable** across datasets. These properties form a compelling working definition of universality, and while we cannot claim full coverage, we believe USAEs capture a meaningful and interpretable subset of the shared conceptual space across models. ### **Failure case analysis** We appreciate the suggestion to include failure modes of USAEs. We did find difficult-to-interpret concepts (concept 2188 in **Fig. H**) or model-biased concepts; e.g., typically related to positional information (concept 5728 in **Fig. 
H**) which was clearly represented in DinoV2 but not strongly represented in the other models. While these may appear as failure cases from a universality standpoint, we believe they help illustrate a key strength of USAEs: they surface not only shared concepts, but also **highlight the set of unique concepts of each model**. We will include these findings in the revised version. ### **Other: References and description refinements** We appreciate your feedback regarding Platonic Representations, the suggested references to review, and the recommendation to clarify the high-level intuition for Sec. 3.3, and will implement these changes in the revised version. Thank you again for the detailed review and suggestions for improvement! We hope we have addressed your main concerns, especially regarding the robustness of USAEs. If so, we would appreciate the consideration to raise your score, or let us know what we can further demonstrate to help your decision! Fel et al. “Archetypal SAE: Adaptive and Stable Dictionary Learning for Concept Extraction in Large Vision Models” ArXiv 2025 --- Rebuttal Comment 1.1: Comment: The extra content is excellent, thank you for your work! I had a look at the other reviews/comments, and I believe the paper is much stronger now, especially with the experiments reinforcing the robustness/universality claims, so I'm pretty confident in raising my score from 4 to 5. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for your time and expertise in providing a comprehensive initial assessment and for taking the time to further examine our rebuttal materials. Furthermore, we greatly appreciate your thoroughness in reviewing not only our responses to your specific comments, but also our responses to other reviewers.
We are excited to see that considering the full context of our work with the other reviews and rebuttal materials has strengthened your confidence in our contribution, as reflected in raising your rating from "accept" to "strong accept." We are particularly encouraged that our new results demonstrating USAE stability and robustness resonated with you. The constructive engagement throughout this review process has been invaluable. We strongly believe our work has been improved from your review and will incorporate the feedback in the revised version of the paper.
Towards Robust Influence Functions with Flat Validation Minima
Accept (poster)
Summary: The article "Towards Robust Influence Functions with Flat Validation Minima" addresses the challenge of influence estimation in deep neural networks, particularly in the presence of noisy training data. The authors identify a fundamental limitation of existing influence function (IF) methods: their susceptibility to unreliable estimates due to the sharpness of the validation risk. To overcome this, they propose a novel approach that leverages flat validation minima for more accurate and robust influence estimation. This is achieved through a second-order approximation to minimize the impact of vanishing gradients and a refined parameter change estimation method tailored for flat test minima. The paper evaluates the proposed methods (VM and FVM) across various tasks, including mislabeled sample detection, training sample relabeling, influential sample identification in text and image generation. The experimental results demonstrate superior performance compared to existing approaches, highlighting the importance of seeking flat minima for enhancing influence estimation accuracy. Claims And Evidence: Claim: Influence functions suffer from unreliability when applied to noisy training data due to the sharpness of validation risk. The authors provide both theoretical analysis (e.g., Theorem 3.2 and Corollary 3.3) and empirical observations (e.g., Figure 1 and Figure 3). These demonstrate how sharp validation minima introduce gaps between estimated and actual influence, undermining reliability. The combination of theory and experimentation effectively supports this claim. Methods And Evaluation Criteria: The proposed methods (VM and FVM) and the evaluation criteria used in the paper are generally well-aligned with the problem at hand—improving influence estimation in deep neural networks, particularly for noisy datasets Theoretical Claims: Yes. The theoretical proof is largely correct, but I believe that Theorem 3.2 neglects the sample size $M$ ( eq. 
35) when using Hoeffding's inequality. Experimental Designs Or Analyses: The experimental designs and analyses in the paper are generally well-structured and provide strong support for the claims made. Supplementary Material: I reviewed code/influence/val_minima.py and code/influence/tools.py. I found that the authors only submitted the code for the simplest model scenarios, and I couldn't find the influence function calculation code for large language models such as Llama. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to several important areas in machine learning research, including influence functions, robustness in deep learning, and optimization landscapes. 1) The authors identify that existing influence function (IF) methods often fail to provide reliable estimates in deep neural networks, particularly when applied to noisy training data. They attribute this failure to deficiencies in loss change estimation due to the sharpness of validation risk. 2) The paper establishes a theoretical connection between flat validation minima and accurate influence estimation, emphasizing the importance of optimizing for flat minima. 3) The proposed methods (VM and FVM) significantly outperform existing approaches in detecting mislabeled samples and relabeling tasks. The key contribution is based on the work of "Sharpness-Aware Optimization". Essential References Not Discussed: I argue that EK-FAC-based influence function experiments, such as reference [1], should be compared with the proposed method. [1] Grosse R, Bae J, Anil C, et al. Studying large language model generalization with influence functions[J]. arXiv preprint arXiv:2308.03296, 2023. Other Strengths And Weaknesses: Other weakness: 1. Assuming that the diagonal elements dominate the empirical Fisher Information matrix is an overly simplified solution that may introduce more theoretical bias. 2.
I'm not entirely sure if $\tilde{\theta}_{z_{tr}}$ needs to be recomputed for every training sample $z$. If so, the computational cost would be too high. Other Comments Or Suggestions: The math symbol for the influence estimation error on line 215 is incorrect. Questions For Authors: 1. How to calculate $\tilde{\theta}_{z_{tr}}$ in eq. (17)? 2. How to calculate $\tilde{g}_{z_{tr}}$ in eq. (21)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the useful comments of the reviewer. We will update our current draft to avoid any confusion. > [Q1 / Other Weakness 2] How to calculate the $\tilde{\theta} _ {z_{tr}}$ We would like to clarify that it is unnecessary to compute $\tilde{\theta} _ {z_{tr}}$ for each training sample. Specifically, we only need to fine-tune the pre-trained model parameters $\theta$ to obtain $\tilde{\theta}$. Once this is done, we can approximate the parameter change $\tilde{\theta} _ {z_{tr}} - \tilde{\theta}$ as outlined in Eq 18. > [Q2] How to calculate the $\tilde{g} _ {z_{tr}}$ After reviewing our initial submission, we found that the definition on line 244 indeed contains a typo, which we suspect may have contributed to this misunderstanding. Specifically, $\tilde{g} _ {z_{tr}}$ is defined as $\tilde{g} _ {z_{tr}} = \nabla _ {\tilde{\theta}} \ell(z_{tr}, \tilde{\theta})$, as demonstrated in line 879 in the appendix. We will correct this in the revised version. In practice, we implement this using Automatic Differentiation in PyTorch. > [Theoretical Claims] Sample size $M$ in eq. 35 We can confirm there is no $M$ in eq. 35. We directly apply the variance proxy for the whole random variable $\mathcal{I}(z, S_\text{val})$ in eq. 35. We thank you for the observation, and will improve the readability of the proof therein. > [Supp. Material] Code for Generation Tasks We promise we will make the code for reproducing all reported experiments publicly available upon acceptance. > [References] Comparison with EK-FAC-based Influence function Thank you for pointing out this important baseline that we had overlooked. We have now conducted a comparison with EK-FAC [r6] on mislabel detection tasks. The results (ROC AUC/AP) are reported below, where \* denotes the results computed using the best training checkpoints. As observed, our proposed method consistently outperforms the EK-FAC-based IF.
|Method|C-10N Aggre|C-10N Random|C-10N Worst|C-100N Noisy| |-|-|-|-|-| |EK-FAC|57.03/39.71|69.85/57.70|72.18/72.81|60.30/59.37| |EK-FAC\*|70.02/60.51|81.55/76.58|80.24/81.72|62.72/64.86| |VM|95.18/76.31|95.92/87.35|95.88/94.27|89.77/83.81| |FVM|96.14/79.53|96.63/88.82|96.46/94.97|90.80/85.41| > [Other Weakness 1] Diagonal approximation for inverse Hessian Thank you for raising the concern regarding the potential oversimplification of our inverse Hessian approximation. Please refer to our detailed response to Reviewer tK1F for a full discussion on this point. > [C1] Typo Thank you for pointing out the incorrect mathematical symbol on line 215. We will correct this in the revised version. --- [r6] Studying large language model generalization with influence functions, arxiv, 2023.
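As a concrete illustration of the diagonal approximation discussed under Other Weakness 1, a first-order influence score with a diagonal inverse-Hessian can be sketched as follows (our own toy example under stated assumptions, not the authors' implementation; the damping term is our addition for numerical stability):

```python
def influence_diag_fisher(g_train, g_val, fisher_diag, damping=1e-3):
    """First-order influence of a training point on the validation loss,
    I(z) ~= g_val^T H^{-1} g_train, where H is approximated by the
    diagonal of the empirical Fisher (plus damping so the inverse
    is well-defined even for zero diagonal entries)."""
    return sum(
        gv * gt / (f + damping)
        for gv, gt, f in zip(g_val, g_train, fisher_diag)
    )
```

With a diagonal H, the inverse-Hessian-vector product reduces to an elementwise division, which is what makes this approximation cheap but, as the reviewer notes, potentially biased when off-diagonal curvature matters.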
Summary: In this paper, the authors propose a method for estimating influence functions (IFs) for deep neural networks, addressing limitations of existing approaches that struggle with noisy training data and rely on first-order IF approximations without considering the sharpness of validation risk. They demonstrate that the error in the influence function estimator is directly related to the risk on validation samples. As a result, their IF estimator relies on parameters optimized for the validation risk. Additionally, the authors introduce a second-order approximation method tailored to their flat validation minima framework. Extensive experiments are conducted across various benchmarks and architectures. Claims And Evidence: Yes, the claims are supported by relevant evidence. Methods And Evaluation Criteria: Yes, the authors have considered relevant baselines and benchmark datasets. Theoretical Claims: I did not go through the theoretical claims in great detail as I am not an expert in that. However, the overall arguments made logical sense and provided intuition. Experimental Designs Or Analyses: Yes, the experimental setup and design are sound in most areas. However, I recommend that the authors update the manuscript to include clearer descriptions of the tasks and provide more detailed explanations of how each section of the experiments is carried out. Supplementary Material: Yes, I reviewed the appendix provided in the main paper. Relation To Broader Scientific Literature: In my humble opinion, the contributions are significant within the broader context of the scientific literature. However, there are some questions I have raised that require further clarification. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The proposed method is novel in its approach to addressing influence functions within the context of the flat validation minima framework that the authors consider.
The experiment list is comprehensive, and the paper is well-written with minimal typos.

Other Comments Or Suggestions:
1. It would be beneficial to provide the experiment setup for Figure 2 either upfront or more explicitly in the caption.
2. The explanation for why removing $z_{tr}$ corresponds to $\epsilon = -\frac{1}{N}$ could be included in the preliminaries section.
3. **Minor**: In Figure 2, the blue legend line should be changed to "Validation ROC-AUC" to match the caption.
4. **Minor**: The title of subsection 3.2 needs to be updated.
5. $R_{val}$ in Theorem 3.2 and the corollary needs further explanation in the main paper.
6. In Corollary 3.3, Line 215, $\hat{\mathcal{R}}_S(\mathcal{I})$ should be replaced by $\hat{\epsilon}_{S}(\mathcal{I})$ to maintain consistency (though I may be incorrect).
7. The experiment setup for Figure 3 should also be provided upfront for better readability.

Questions For Authors:
1. The proposed approach for IF estimation depends on optimization with the validation set. Can the authors comment on how the performance might be affected by changes in the validation set size? Moreover, is there an assumption that this optimization will lead to an optimum that does not significantly deviate from the optimum obtained during the training phase, but instead moves toward regions where the validation loss is flat?
2. If the training data samples do not have noisy labels, will the method still provide the same performance benefits as the standard IF?
3. Since the method still requires computing second-order gradients to estimate the IF, can the authors comment on the computational complexities?
4. Can the authors explain Equation 23 in the paper and, in turn, provide a better explanation of the corresponding experiment? I reviewed Kwon et al., but I believe there is a major difference in your approach. Additionally, in Appendix B.2, only 10 validation examples are used for influence estimation.
Could this pose a challenge for accurately estimating the IF? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's valuable suggestion. After pondering over your questions, we compose the following responses and will include all the following elaborations in the revised manuscript.

> [Q1.1] Validation set size

We appreciate the reviewer for highlighting this important factor, and we will include a detailed discussion of this in the revision. Our approach consists of two steps: (1) fine-tuning and (2) influence function (IF) computation. In the second step, both our method and other IF-based approaches are similarly affected by the size of the validation set used for computing influence scores. Therefore, compared to other methods, the validation set size primarily impacts the fine-tuning step in our approach.

To further investigate the impact of the validation set size on our approach, we conducted experiments comparing different IF approaches for mislabel detection on the CIFAR-10N Worst dataset with varying validation set sizes. The results (ROC AUC/AP) are reported below, with the performance drop relative to the size of 10,000 shown in parentheses.

|Size|10000|5000|2000|1000|
|-|-|-|-|-|
|LiSSA|65.75/68.35|65.84/68.35 (+0.13%/-0.00%)|65.60/67.75 (-0.22%/-0.87%)|64.79/66.54 (-1.46%/-2.64%)|
|VM|95.88/94.27|95.31/93.44 (-0.59%/-0.88%)|94.37/92.11 (-1.57%/-2.29%)|93.07/90.26 (-2.93%/-4.25%)|
|FVM|96.46/94.97|95.68/93.71 (-0.80%/-1.32%)|94.86/92.60 (-1.65%/-2.49%)|93.72/90.94 (-2.84%/-4.24%)|

As observed, the performance of all approaches tends to degrade. Nevertheless, our method maintains strong performance despite the reduced data size and still outperforms other approaches by a significant margin.

> [Q1.2] Optimum assumption

In Theorem 3.1, we assume that $\mathbb{E} _ {z \sim D_+}[\mathcal{I}(z, S_{val})] > 0$ and $\mathbb{E} _ {z \sim D_-}[\mathcal{I}(z, S_{val})] < 0$.
As noted in the remark on line 201, this requires the overall influence estimation performance on $D$ (the training data distribution in practice) to be better than random guessing. This assumption implicitly relies on the training and validation data being drawn from related distributions and on the model retaining useful information about the training data. Consequently, we assume that the flat validation minimum does not deviate significantly from the given training minimum.

> [Q2] Benefits under cases without noisy labels

We would like to clarify that we have indeed conducted experiments in settings without noisy labels. Specifically, the generation tasks presented in Sections 4.3 and 4.4 are based on clean datasets, and our proposed methods, VM and FVM, consistently outperform existing baselines in identifying the most influential samples under these conditions.

> [Q3] 2nd-order gradients and computational complexities

In practice, to reduce both time and space complexities, we **approximate the Hessian** using the diagonal elements of the empirical Fisher Information Matrix, as described in Appendix A. Following the notation used in [r5], where $n$ and $m$ denote the numbers of training and validation samples, $D$ the number of parameters per layer, and $L$ the number of layers, the computational complexity of LiSSA for estimating the Hessian inverse is $O(nD^2L)$, that of DataInf is $O(nDL)$, and that of our approximation is $O(mDL)$ (since it is performed w.r.t. the validation loss). Notably, in most practical scenarios, $m \ll n$.

> [Q4.1] Experimental setting in generation tasks

Regarding Equation (23), we acknowledge that its current form may have led to some misunderstanding due to limitations in presentation. We would like to clarify that the experimental setup for the generation tasks exactly follows that of [r5]. We will refine the explanation and notation of this part in the revision.
> [Q4.2] Only 10 validation samples are used for influence estimation in LLM scenarios

We acknowledge the reviewer's concern. Due to time constraints, we were unable to conduct more extensive experiments in the large language model (LLM) setting. However, we provide a simplified validation dataset size analysis on smaller-scale datasets; please refer to our response in [Q1.1] for details. We will include a more thorough discussion of this limitation and its potential implications in the revised version of the manuscript.

> [C1-7] Writing

We thank the reviewer for the detailed suggestions. We will revise the manuscript to improve clarity, including clearer descriptions of the experiment setups, an explanation of why $\epsilon = -\frac{1}{N}$, and a clarification of $R_{\text{val}}$, which refers to the risk on the validation set. The typo in Line 215 will be corrected. We will also update the legend label in Figure 2 to "Validation ROC-AUC" and revise the caption of Figure 3 to include the experiment setup.

---

[r5] DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models, ICLR 2024.
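To make the $O(mDL)$ argument in [Q3] above concrete, here is a minimal numpy sketch of a diagonal empirical-Fisher inverse-Hessian-vector product. The per-sample gradients, damping term `lam`, and dimensions are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 32, 10                        # m validation samples, d parameters in one layer
G_val = rng.normal(size=(m, d))      # stand-in per-sample validation gradients

# Diagonal empirical Fisher: mean of elementwise-squared gradients -- O(m*d),
# versus O(m*d^2) for the full empirical Fisher of this layer.
fisher_diag = np.mean(G_val ** 2, axis=0)

lam = 1e-3                           # damping term (illustrative value)
g_tr = rng.normal(size=d)            # gradient of one training sample

# Inverse-Hessian-vector product under the diagonal approximation: O(d) per sample.
ihvp = g_tr / (fisher_diag + lam)
influence_proxy = g_tr @ ihvp        # quadratic form g^T H^{-1} g
assert influence_proxy > 0           # damped diagonal Fisher is positive definite
```

The inversion reduces to an elementwise division, which is where the claimed efficiency over LiSSA- or DataInf-style iHVPs comes from.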
Summary: The influence function measures the influence of training samples on the validation loss. While this is typically done using minima of the training loss, the authors argue for using flat validation minima. They then show experimentally and argue theoretically that the standard estimators for influence do not work well in this setting and propose a new way of calculating the influence that is designed to deal with these problems. Claims And Evidence: The basic claims are well supported, both theoretically and experimentally. I also think that the flow of the manuscript (observation -> problem -> solution) is coherent and convincing. The experimental evidence for the superiority of their approach is sufficient and extensive. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not check the theoretical claims in detail (especially in the appendix) but from my reading they seem to be consistent. Experimental Designs Or Analyses: No. Supplementary Material: No. Relation To Broader Scientific Literature: The paper does not have a "Related literature" section which makes it hard for a non-expert reader to assess novelty. I would strongly suggest adding one, since many of the core ideas (flat minima etc.) have vast literature attached to them. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: There has been a lot of discussion around the concept of flat minima and their dependence on the parameterization of the loss function (see below). This seems to be important also for the current paper but the authors do not discuss it. I think it would be good to have a sentence or two on the impact of these issues on the proposed method. [1] Dinh, Laurent, et al. "Sharp minima can generalize for deep nets." International Conference on Machine Learning. PMLR, 2017. [2] Pittorino, Fabrizio, et al. "Deep networks on toroids: removing symmetries reveals the structure of flat regions in the landscape geometry."
International Conference on Machine Learning. PMLR, 2022. Other Comments Or Suggestions: None. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of the theoretical and experimental support for our claims.

> [Broader Sci. Literature.1] Absence of a Related Literature section

We kindly note that, due to space constraints, the discussion of related work on influence functions was not included in the main paper but has been deferred to **Appendix E**.

> [Broader Sci. Literature.2] Further discussion on related works on flat minima

We thank the reviewer for highlighting the importance of further discussing flat minima in the related works section. In the initial version, we acknowledge that we did not elaborate on prior work related to flat minima. This decision was based on the observation that most existing literature focuses on flat minima in the context of model generalization. In contrast, our work considers flat validation minima as a key factor for accurate IF estimation. Given this shift in focus, we initially chose not to include a detailed discussion of the broader flat minima literature. Nonetheless, we agree that adding this context will help clarify our perspective and position our work more clearly. We will therefore incorporate a more comprehensive discussion of relevant literature in the revised version.

> [Other Weakness] Impact of the "parameterization of the loss function"

First, we would like to highlight that the main theoretical contribution of our manuscript is to establish a connection between flat validation minima and the accuracy of influence function estimation. The focus of our analysis is on **how flat minima matter for IF**, rather than on how to obtain them. To this end, we adopted a commonly used method, Sharpness-Aware Minimization (SAM), to obtain flat validation minima, primarily as a means to support our theoretical insights rather than as a central contribution of the paper. We sincerely thank the reviewer for suggesting relevant literature [1][2] on achieving flat minima via the parameterization of the model.
While these directions are indeed insightful, due to time constraints and the lack of public code for those papers, we opted to experiment with two more recent and open-sourced sharpness-aware optimizers as alternatives. Specifically, we compare three different sharpness-aware optimizers, including SAM (utilized in our initial submission), ASAM [r3] and F-SAM [r4], on the CIFAR-10N Worst and CIFAR-100N Noisy datasets. The results (ROC AUC/AP) are presented below.

|Method|CIFAR-10N Worst|CIFAR-100N Noisy|
|-|-|-|
|SAM|96.46/94.97|90.80/85.41|
|ASAM|96.71/95.11|90.83/85.25|
|F-SAM|96.76/95.29|91.25/86.06|

As observed, different sharpness-aware optimizers do affect the final results, with F-SAM achieving the best performance. This supports our hypothesis that better flat validation minima can lead to more accurate influence function estimation.

---

[r3] ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks, ICML, 2021
[r4] Friendly Sharpness-Aware Minimization, CVPR, 2024
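For readers unfamiliar with the optimizers compared above, here is a minimal numpy sketch of the generic two-step SAM update (ascend to a nearby worst-case point, then descend from it) on a toy quadratic. The objective, `rho`, and `lr` are illustrative; this is not any of the compared implementations.

```python
import numpy as np

def sam_step(theta, grad_fn, rho=0.05, lr=0.1):
    """One generic SAM update: perturb toward the local worst case, then descend."""
    g = grad_fn(theta)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent perturbation
    g_sharp = grad_fn(theta + eps)               # gradient at the perturbed weights
    return theta - lr * g_sharp                  # applied at the original weights

# Toy objective f(theta) = 0.5 * theta^T A theta, with gradient A @ theta.
A = np.diag([1.0, 10.0])
grad_fn = lambda t: A @ t

theta = np.array([1.0, 1.0])
for _ in range(200):
    theta = sam_step(theta, grad_fn)
# SAM settles into a small neighborhood of the minimum whose size scales with rho.
```

ASAM and F-SAM modify how the perturbation `eps` is scaled, but share this two-gradient structure.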
Summary: This paper reexamines influence functions (IF) in deep learning and argues that their standard formulations fail when models are trained on noisy data—primarily because of sharp validation risk landscapes. The authors theoretically link the estimation error of influence functions to the sharpness of the validation loss and posit that obtaining flat validation minima (via techniques such as Sharpness-Aware Minimization) is key for accurate influence estimation. To that end, they propose a new influence estimator based on second‐order approximations for both the parameter change and the loss change, tailored specifically for flat validation minima. Extensive experiments on tasks such as mislabeled sample detection, training sample relabeling, and identifying influential samples in text and image generation are presented to demonstrate that their methods (denoted as VM and FVM) outperform existing approaches. ## update after rebuttal The author's rebuttal addressed part of my concern, that the assumption of the Hessian being positive semi-definite is not part of their conclusion. I accept this reason. Claims And Evidence: The paper claims that the sharpness of the validation risk degrades the performance of standard influence functions and that flat minima lead to more reliable influence estimates. Although the authors provide theoretical bounds (Theorem 3.2 and Corollary 3.3) and empirical results, a critical issue arises with one of the core components: the inversion of the Hessian matrix. The diagonal approximation and its justification are not convincing in Appendix A. This weakens the overall support for the paper’s claims, as a robust inverse Hessian approximation is vital for the accuracy of the influence estimation. Methods And Evaluation Criteria: The proposed method employs a flatness-aware objective (inspired by SAM) to find flat validation minima and then computes influence using a second-order approximation. 
However, a major methodological concern is the unclear treatment of the inverse Hessian. In Appendix A, the authors attempt to justify an approximation for the inverse Hessian—crucial for computing the parameter change—but the derivation and assumptions (such as the diagonal dominance of the empirical Fisher Information matrix) are not convincing enough. This lack of clarity raises doubts about the stability and reliability of the proposed estimator. Theoretical Claims: The paper's theoretical development is generally sound and well-structured. The authors derive bounds on the influence estimation error by explicitly connecting it to both the validation loss and its sharpness. Experimental Designs Or Analyses: The experimental evaluation spans multiple tasks, including mislabeled sample detection and influential sample identification in both text and image generation. However, the reliance on benchmark datasets like CIFAR-10N/CIFAR-100N and controlled experimental settings means that the impact of the unclear Hessian inversion is not fully explored. Supplementary Material: The Appendix A is intended to provide detailed derivations of the inverse Hessian approximation. Unfortunately, the discussion there is dense and fails to convincingly justify the approximation technique. The assumptions required (e.g., diagonal dominance) are not sufficiently validated, leaving a critical component of the proposed method ambiguous. Relation To Broader Scientific Literature: The paper situates itself within a robust line of work on influence functions (e.g., Koh & Liang, 2017) and builds on recent advances by incorporating flat minima via SAM. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The paper presents an innovative theoretical framework linking flat validation minima to influence estimation error. - It introduces a novel estimator that, in controlled experiments, appears to outperform several recent baselines. 
Weaknesses: - The derivation and approximation of the inverse Hessian—a central element of the method—are not presented convincingly. Other Comments Or Suggestions: The authors should provide a clearer, more detailed justification for their inverse Hessian approximation. Additional empirical or theoretical evidence validating the assumptions made in Appendix A would be valuable. Questions For Authors: See listed above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We acknowledge the reviewer's concern regarding the use of the **diagonal approximation of the empirical Fisher matrix for estimating the inverse Hessian**. We are happy to provide a more thorough discussion of this aspect.

### 1. Diagonal approximation for the inverse Hessian is simply a practical choice

We would like to clarify that our core contribution lies in improving influence function (IF) estimation by (1) **seeking flat validation minima**, and (2) **adapting IF computation to this scenario**. Therefore, our core contributions are **orthogonal** to the specific choice of inverse Hessian approximation method. In fact, once flat validation minima are obtained, the specific method used to compute the inverse Hessian in Equations (20) or (21) becomes a matter of choice, depending on the trade-offs between efficiency and accuracy.

### 2. Further support for the use of the diagonal approximation

We mainly refer to [r1] in JMLR, where the author discusses the diagonal of the empirical Fisher matrix and emphasizes that,

- for accelerating 2nd-order optimization, the **diagonal approximation is a widely accepted method** to avoid the full computation of the Hessian [r2].
- furthermore, optimization methods such as Adagrad and Adam, which are based on the Fisher matrix, also estimate second-order derivatives through an approximation to the diagonal of the empirical Fisher matrix [r1].

These examples reflect a broader consensus in the community that approximating the inverse Hessian in DNNs through the diagonal of the empirical Fisher matrix strikes a practical balance between efficiency and effectiveness.

### 3. Additional experiments for different inverse Hessian approximation approaches

We acknowledge that a more accurate inverse Hessian approximation can be expected to further improve IF estimation performance.
To quantitatively discover the impact of the approximation on the inverse Hessian, we conduct experiments on the mislabel detection task. Specifically, we replace the computation of the inverse Hessian $\tilde{H}_{val}^{-1}$ in Equation 21 from the diagonal Fisher to LiSSA and DataInf.

> [Technical details in adaptation]
>
> Note that the accelerated approximations in LiSSA and DataInf both rely on the inverse Hessian-vector product (iHVP). However, the initial product $\tilde{H}_{val}^{-1} \tilde{g}_{z_{tr}}$ in Equation (21) depends on the specific training sample $z_{tr}$, which requires recomputation for each training sample. As a result, we cannot directly apply these methods to our proposed influence function.
>
> To address this, we introduce a random vector $V \in \mathbb{R}^{|\theta| \times 1}$, where each element is sampled from a standard normal distribution, i.e., $V_i \sim \mathcal{N}(0, 1)$. With this, Equation (21) becomes $\tilde{g}_{z_{tr}}^\top \tilde{H}_{val}^{-1} V V^\top \tilde{g}_{z_{tr}}$, and we have $\mathbb{E}[\tilde{g}_{z_{tr}}^\top \tilde{H}_{val}^{-1} V V^\top \tilde{g}_{z_{tr}}] = \tilde{g}_{z_{tr}}^\top \tilde{H}_{val}^{-1} \tilde{g}_{z_{tr}}$. By using the random vector $V$, we can directly apply the iHVP trick to $\tilde{H}_{val}^{-1} V$ and compute the inverse Hessian based on LiSSA or DataInf.
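The expectation identity used in the adaptation above follows from $\mathbb{E}[V V^\top] = I$ and can be sanity-checked numerically. Below is a toy numpy check with a random SPD matrix standing in for the validation Hessian; the dimensions and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
M = rng.normal(size=(d, d))
H = M @ M.T + d * np.eye(d)          # random SPD stand-in for the validation Hessian
H_inv = np.linalg.inv(H)
g = rng.normal(size=d)               # stand-in for a per-sample gradient

exact = g @ H_inv @ g

# Monte-Carlo estimate of E[g^T H^{-1} V V^T g] with V ~ N(0, I), so E[V V^T] = I.
V = rng.normal(size=(d, 200_000))
est = np.mean((g @ H_inv @ V) * (g @ V))
assert abs(est - exact) / exact < 0.05
```

With only 5 samples of $V$, as used in the rebuttal, the estimate is of course noisier; the large sample count here just makes the identity visible.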
In practice, we sample 5 different $V$ to ensure stability and reduce the variance of the approximation. The results (ROC AUC/AP) are as follows:

|Method|CIFAR-10N Aggre|CIFAR-10N Random|CIFAR-10N Worst|CIFAR-100N Noisy|
|-|-|-|-|-|
|VM(LiSSA)|95.22/67.36|95.67/78.90|95.00/89.26|88.88/80.89|
|VM(DataInf)|95.37/72.09|95.98/83.26|95.57/91.09|89.58/82.81|
|VM(ours, diagonal)|95.18/76.31|95.92/87.35|95.88/94.27|89.77/83.81|
|FVM(LiSSA)|95.95/70.71|96.17/80.16|95.33/89.56|89.48/81.81|
|FVM(DataInf)|96.18/76.15|96.58/84.85|96.02/91.54|90.24/83.81|
|FVM(ours, diagonal)|96.14/79.53|96.63/88.82|96.46/94.97|90.80/85.41|

Theoretically, from the perspective of computational complexity and estimation fidelity, LiSSA and DataInf are expected to provide more accurate approximations of the inverse Hessian than the diagonal approximation. However, in our current experiments, we observe that the performance of the diagonal approximation is competitive with, and in some cases even superior to, LiSSA and DataInf. One possible reason is that LiSSA and DataInf involve more sensitive hyperparameters (e.g., number of iterations, damping), and tuning them appropriately for each setting is non-trivial. Despite this, the diagonal approximation, which is significantly more efficient, achieves consistently strong performance across all datasets. This suggests that our use of the diagonal approximation, while simplistic, does not lead to a substantial degradation in influence estimation accuracy in practice.

---

[r1] New Insights and Perspectives on the Natural Gradient Method, JMLR 2020.
[r2] Deep learning via Hessian-free optimization, ICML 2010.
OmniArch: Building Foundation Model for Scientific Computing
Accept (poster)
Summary: The paper introduces *OmniArch*, a foundation model for scientific computing designed to solve multi-scale and multi-physics Partial Differential Equations (PDEs) using a unified architecture. It employs a Fourier Encoder-Decoder to transform spatial-temporal PDE data into the frequency domain and a Transformer Backbone to capture temporal dependencies, enabling it to handle 1D, 2D, and 3D PDEs within a single framework. A key innovation is the *PDE-Aligner*, which fine-tunes the model with physics-informed constraints, ensuring alignment with governing physical laws. OmniArch achieves state-of-the-art performance across 11 PDE types, demonstrating strong generalization capabilities, including zero-shot learning, in-context learning, and multi-scale inference. Compared to existing models, it significantly improves accuracy, with up to 98.7% enhancement in some cases, making it a versatile tool for applications in computational fluid dynamics, weather prediction, and engineering simulations. Claims And Evidence: This paper offers three claims in the introduction. However, some claims are not convincing. (1) The authors claimed that "The temporal mask effectively addresses inconsistencies in multi-physics". However, I could not find the explanation. Also, I could not find the ablation study to prove it. (2) The PDE-Aligner is claimed to leverage the hidden representations for equations. However, these claims are not supported by the ablation studies. Methods And Evaluation Criteria: The benchmark datasets and evaluation criteria are reasonable. However, in Eq. (1) the authors select the top-K significant components without stating the reason. Can it improve efficiency? Can it improve the generalization ability? Also, I cannot find how the temporal mask is used in the proposed method, and there is no explicit ablation to verify its effectiveness. Theoretical Claims: There is no theoretical analysis.
Experimental Designs Or Analyses: The experimental results are not fully convincing. [1] The variation is not clarified in the experimental results. [2] The model size, training time, and inference time are not compared in the table. It seems that the proposed method is larger and slower than the competing methods. Supplementary Material: I read the clarification of the experimental setting. Relation To Broader Scientific Literature: The key contributions of the *OmniArch* paper align with and extend several existing concepts in the broader scientific and machine learning literature. The unified architecture for multi-physics PDEs builds upon traditional neural operators like the Fourier Neural Operator (FNO), which are typically tailored to specific PDEs, by enabling simultaneous learning across diverse equations and conditions. The integration of a Fourier Encoder-Decoder with a Transformer Backbone leverages the efficiency of frequency domain representations and the capability of transformers to model long-range dependencies, enhancing generalization across different scales and physical phenomena. The introduction of the PDE-Aligner for physics-informed fine-tuning extends the principles of Physics-Informed Neural Networks (PINNs) by aligning model predictions with governing physical laws through contrastive learning in the frequency domain. Moreover, *OmniArch*'s emergent generalization capabilities, such as zero-shot and few-shot learning, parallel advancements in foundation models in other domains, demonstrating adaptability to novel PDE systems without retraining. Essential References Not Discussed: Upon reviewing the paper *OmniArch: Building Foundation Model for Scientific Computing*, I have identified several pertinent works related to partial differential equations (PDEs) that are not currently cited but are essential for understanding the context of the paper's key contributions: 1.
**Physics-Informed Neural Networks (PINNs)**: PINNs integrate physical laws described by PDEs into the learning process of neural networks. They offer a mesh-free alternative to traditional numerical methods for solving PDEs and have been applied to various problems in computational science. Notable works include: - Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. *Journal of Computational Physics, 378*, 686–707. - Mao, Z., Jagtap, A. D., & Karniadakis, G. E. (2020). Physics-informed neural networks for high-speed flows. *Computer Methods in Applied Mechanics and Engineering, 360*, 112789. 2. **Neural Operators**: Neural operators are deep learning architectures designed to learn mappings between infinite-dimensional function spaces, making them effective for approximating solution operators of PDEs. Key publications include: - Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., & Anandkumar, A. (2020). Fourier neural operator for parametric partial differential equations. *arXiv preprint arXiv:2010.08895*. - Kovachki, N., Li, Z., Liu, B., Azizzadenesheli, K., Bhattacharya, K., Stuart, A., & Anandkumar, A. (2021). Neural operator: Learning maps between function spaces. *Journal of Machine Learning Research, 22*(1), 139–166. 3. **Deep Backward Stochastic Differential Equation (BSDE) Method**: This method combines deep learning with backward stochastic differential equations to solve high-dimensional PDEs, particularly in financial mathematics. A seminal paper in this area is: - Han, J., Jentzen, A., & E, W. (2018). Solving high-dimensional partial differential equations using deep learning. *Proceedings of the National Academy of Sciences, 115*(34), 8505–8510. 
Incorporating discussions of these methodologies into the paper would provide a more comprehensive understanding of existing advancements in PDE-solving techniques and AI applications in scientific computing, thereby contextualizing OmniArch's contributions within the broader research landscape. Other Strengths And Weaknesses: NO. Other Comments Or Suggestions: NO. Questions For Authors: [1] The authors should add variations, model size and inference time of the proposed method and the competing method. [2] The authors should add ablation studies to verify the claims in the introduction. [3] The author should explain how temporal mask benefits solving PDE. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate reviewer VHRd's constructive feedback. Below, we address each concern with clarifications and additional evidence:

**Q1: [Temporal Mask & Multi-Physics Consistency] How does the temporal mask address inconsistencies in multi-physics, and where is the ablation study?**

**A1**: The "inconsistencies" arise when handling PDE systems with **varying numbers of physical variables per timestep** (e.g., 3D Navier-Stokes vs. 1D Advection). Traditional causal masks (e.g., GPT-style) fail here because:

+ **Token misalignment**: For systems with multiple variables, causal masks process tokens sequentially, forcing the first token (e.g., velocity) to ignore dependencies on other variables (e.g., pressure, temperature) within the same timestep.

Our **attention with temporal mask**: instead of masking tokens individually, we group all variables within a timestep and form a **hierarchical attention**:

+ *Intra-timestep*: Full attention among variables (e.g., velocity, pressure) to capture couplings.
+ *Inter-timestep*: Causal attention across time (future masking).

**Benefits**: Physics-aware modeling: ensures variables at each timestep jointly condition on each other (e.g., enforcing continuity equations).

**Ablations**:

| Method | RMSE |
|-----------------|--------|
|Causal Mask |0.0277 |
|No Mask |0.0285|
|**Temporal Mask** |**0.0227**|

----

**Q2: [PDE-Aligner's Hidden Representations] How does PDE-Aligner leverage hidden representations, and where is the evidence?**

**A2**: We have reported the fine-tuning results with PDE-Aligner in **Table 1 (Lines 298-302)**. We reorganize the results below:

|Settings | 1D | 2D |3D |
|--------------- |--------|--------|--------|
|Pretrain |0.0103|0.0440 |0.3399|
|ft(w/o Aligner)|0.0073|0.0345 |0.3432|
|ft + Aligner |0.0056|0.0262 |0.2697|
|**improvement** | **51.33%**|**23.62%**|**22.07%**|

where **PDE-Aligner continually improves the performance across 1D-2D-3D PDEs**.
We also conducted a probing experiment in **Figure 7**, which explains why the PDE-Aligner helps performance: mainly, it helps distinguish between different PDE systems.

----

**Q3: [Top-K Frequency Selection] Why select top-K frequencies, and does it improve efficiency/generalization?**

**A3**: We adopt **Top-K frequency truncation** for two key reasons:

**(a) Input-size agnosticism:** OmniArch jointly trains on 1D-2D-3D PDEs with vastly different resolutions (e.g., 1D-1024, 2D-128×128, 3D-64×64×64). Without truncation, we would either:

+ *(Option 1)* Require separate encoders/decoders per resolution (losing unified modeling), or
+ *(Option 2)* Downsample spatial dimensions, forcing 2D/3D tasks to compromise for 1D (hurting performance).

Top-K ensures a consistent spectral representation across dimensions, similar to FNO's infinite-dimensional operator learning [Li et al.].

**(b) Fluid data prior:** The PDEBench and PDEArena datasets exhibit **long-tailed high-frequency noise** [1,2,3]. Retaining only the dominant low-frequency modes (Top-K) preserves physically meaningful features while suppressing numerical artifacts.

>[1] Takamoto, Makoto, et al. "Pdebench: An extensive benchmark for scientific machine learning." Advances in Neural Information Processing Systems 35 (2022): 1596-1611.
>[2] Lippe, Phillip, et al. "Pde-refiner: Achieving accurate long rollouts with neural pde solvers." Advances in Neural Information Processing Systems 36 (2023): 67398-67433.
>[3] Zakharov, Vladimir E., Victor S. L'vov, and Gregory Falkovich. Kolmogorov spectra of turbulence I: Wave turbulence. Springer Science & Business Media, 2012.

----

**Q4: [Model Size & Inference Time] How does OmniArch compare in size/speed to baselines?**

**A4**: We include the memory usage and inference time (single A800) in **Appendix H.7**. We reorganize the main results (pretrained model baselines, 2D, Large size) below for reference.
OmniArch is **competitive in efficiency** with other pretrained model baselines.

| Model | Params | Inference Time|
|-------------|----------|----------------|
|Poseidon-L | 629 M|0.07712s |
|MPP-AVIT-L |409 M| 0.08343s |
|DPOT-L |509 M| 0.03154s |
| **OmniArch-L** |**445 M** | **0.02075s** |

----

**Q5: [Missing citations] Including PINNs/FNO/BSDE works.**

**A5**: We appreciate the reviewer's suggestion. **In our paper, PINNs and FNOs are explicitly discussed as baselines (Section 5.1) and compared in Related Works (Lines 74-82)**. However, we omitted the Deep BSDE method because its primary applications (e.g., financial derivatives pricing) are orthogonal to our focus on multi-physics PDEs in scientific computing (e.g., fluid dynamics, material science). While BSDEs excel in high-dimensional finance problems, their scope differs fundamentally from our work's goals. That said, we will include a brief comparison in the final version to clarify this distinction.
Summary: This paper proposes OmniArch, a foundation model for numerical simulations. They pretrain on 1D/2D/3D data and compare to other models from the literature. It relies on spatial Fourier encoders/decoders and a causal temporal attention, and on PDE-Aligner for fine-tuning.

Claims And Evidence: I don't think it is fair to say that this is the first foundation model on PDE data. The cited models like MPP, Poseidon and DPOT were already pretrained on multiple physics. There are no experiments on the emerging capabilities? I am not sure it is defined somewhere? As you can see below, I also don't trust the reported metrics in the Table.

Methods And Evaluation Criteria: Table 1 makes sense, even though it would have been better to add the size of the models. For example, OmniArch-B is 316.13M parameters whereas e.g. MPP-B is 116M parameters. Same for Large models. I think the comparison between models is not totally fair, which makes the experimental results difficult to interpret.

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses: The comparison between models is not 100% fair due to their different sizes. One important point: the authors gave the definition of the VRMSE (Variance Root Mean Square Error) but name it nRMSE (normalized Root Mean Square Error). This is confusing and makes me wonder whether the reported numbers in Table 1 are for the same metric (MPP and Poseidon used the original nRMSE but the authors compute the VRMSE for their models).

Supplementary Material: Details about the architecture.

Relation To Broader Scientific Literature: I think the authors wrongly name themselves the "first foundation models for 1D-2D-3D united pre-training". This has been done before in MPP and Poseidon. However, the fine-tuning part of OmniArch seems novel and interesting to me. The authors could also have cited [1] and maybe benchmarked their model on it, as it seems to be a very relevant point of the literature.
[1] Ohana, R., McCabe, M., Meyer, L., Morel, R., Agocs, F., Beneitez, M., Berger, M., Burkhart, B., Dalziel, S., Fielding, D. and Fortunato, D., 2024. The Well: a large-scale collection of diverse physics simulations for machine learning. Advances in Neural Information Processing Systems, 37, pp. 44989-45037.

Essential References Not Discussed: Except for [1], I don't see any reference not discussed.

Other Strengths And Weaknesses:
- Abstract: "as far as we know, we first conduct 1D-2D-3D united pre-training on the PDEBench, [...]" should be rephrased.
- In the introduction, line 55, I don't think MPP and Poseidon are an extension of the Factformer? This is weirdly phrased.
- I don't think I have noticed any emergent capabilities of the model? I am not even sure what this means or whether it is well defined for this type of data?
- The paragraph at line 199 is not clear. What is the d-th index? What's the sequential index? What's the total? It should be rephrased clearly.
- I don't see the justification of equation 4, as this is not for a transformer as stated, but only for one attention layer. I don't see the point of this remark since OmniArch is a whole transformer.
- Line 240: I don't think there is a need for 2 paragraphs to explain what causal masking is. This is standard in the literature, and the written maths are more confusing than anything else.
- As discussed above, Equation 8 is the VRMSE and not the nRMSE, which makes me suspicious about the results shown in Table 1.
- Could the authors explain how the PDE-Aligner is really useful in that case? I may have missed it, but an ablation study with/without would have been useful.
- How is the spatial tokenization done? I know we are in frequency space, so is there any tokenization done, or the authors are just doing a ravel in frequency space? What's the dimensionality then? - Why would the authors need a whole pre-trained BERT model for text encoding when the vocabulary is so small? - I struggle to understand the difference between OmniArch and a transformer with Fourier layers as encoder/decoders. Could the authors elaborate on that? - Can the model work on non-uniform grids? In theory it should? Other Comments Or Suggestions: See weaknesses above. Questions For Authors: See weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer urwm for the thoughtful feedback.

**Q1: MPP/Poseidon already did united pretraining.**

**A1**: With high respect, we **disagree**. There is a **factual error**: MPP/Poseidon are designed *only* for 2D (no 1D/3D experiments in their papers). DPOT uses a convolution trick for 3D but lacks unified weights. OmniArch is the first to jointly learn 1D-2D-3D PDEs with shared weights in one architecture. We will clarify this distinction.

-----

**Q2: Table 1 mixes nRMSE/VRMSE.**

**A2**: We believe there is a **misinterpretation** here. In Equation 8, we implement nRMSE following the common practice of previous works (MPP/PDEBench). The notation $\sigma_u$ here is equivalent to $||\cdot||^2 + \epsilon$, which measures the norm of the ground truth. We evaluate the predictions from all baselines and OmniArch with the same codebase. For fairness, we compare our nRMSE implementation against MPP's; the results are identical and reproducible. Here is the code:

```python
def nRMSE_MPP(output, tar, spatial_dims=None):
    # code from https://github.com/PolymathicAI/multiple_physics_pretraining
    if spatial_dims is None:
        spatial_dims = tuple(range(output.ndim))[2:]  # Assume 0, 1, 2 are T, B, C
    residuals = output - tar
    tar_norm = (1e-7 + tar.pow(2).mean(spatial_dims, keepdim=True))
    raw_loss = (residuals.pow(2).mean(spatial_dims, keepdim=True) / tar_norm)
    return raw_loss.sqrt().mean()

# Ours:
def nRMSE(pred, label, mask=None, dim=2):
    reduce_dims = list(range(-dim, 0))
    res = pred - label
    label_norm = label.pow(2).mean(dim=reduce_dims, keepdims=True) + 1e-8
    norm_loss = res.pow(2).mean(dim=reduce_dims, keepdims=True) / label_norm + 1e-8  # in extreme cases, label_norm may be 0
    if mask is not None:
        norm_loss = norm_loss[mask.bool()]
    return norm_loss.sqrt().mean()
```

----

**Q3: OmniArch-B (316M params) vs.
MPP-B (116M) is unfair.**

**A3**: There is a **misunderstanding** here. Note that OmniArch uses different encoder/decoder pairs for 1D, 2D, and 3D PDEs while the weights of the transformer backbone are shared (similar to Mixture-of-Experts models). We report the total parameters (316M, including the transformer backbone plus the 1D, 2D, 3D encoder/decoder weights; the 3D encoder/decoder weights take a heavy share, ~170M) in Table 7. But for 2D tasks (which MPP mainly experiments on), only the 2D encoders/decoders are used for training/testing, thus the real runtime parameter count of **OmniArch-B is 144M**, which is comparable to **MPP-B (116M)** while outperforming it on all 2D tasks **(see Table 1)**. The same holds for the large models: **OmniArch-L (445M) is also comparable to MPP-L (409M)**. We will clarify this detailed difference further in the next version, but we believe the experiments are indeed fair.

----

**Q4: No evidence of emergent behaviors; term is undefined.**

**A4**: *Emergent behaviors* is a concept borrowed from foundation models, which can be explained as "unseen capabilities beyond training"; it can be a capability that a large model exhibits but small models do not. In our setting: **(1) Zero-shot PDE solving** (Figure 5; beyond the training scope of PDE systems, and not exhibited by smaller models like PINNs/FNO); **(2) In-context learning** (Figure 6), where OmniArch can learn from input trajectories. We are glad to include more experiments to comprehensively evaluate the emergent behaviors together with the research community.

----

**Q5: How is OmniArch different from a Fourier-encoded transformer?**

**A5**: OmniArch differs fundamentally from a Fourier-encoded transformer in:

1. **Temporal Mask mechanism**: OmniArch implements a specialized temporal mask that enables each physical quantity to attend to all quantities at current and previous time steps, facilitating complex cross-physics interactions that standard transformers cannot model effectively.
2.
**Physics-informed alignment**: The PDE-Aligner component provides a dedicated mechanism to incorporate physical constraints and prior knowledge during fine-tuning, which goes far beyond basic Fourier encoding.

---

**Q6: Show ablation with/without PDE-Aligner.**

**A6**: Please find the ablation in Q2 (Reviewer VHRd), where PDE-Aligner consistently improves the performance across 1D-2D-3D PDEs.

----

**Q7: How are frequency-space tokens handled?**

**A7**: OmniArch converts physical fields into frequency space via FFT, then processes the data by first selecting only the most important frequency components (Top-K by magnitude) for efficiency. These complex-valued coefficients are transformed into real-number embeddings through a learnable projection, making them compatible with standard transformer architectures.

----

**Q8: Why use BERT for a tiny text vocab?**

**A8**: Because (1) the vocabulary is not tiny (augmented via symbol replacement, Appx. F.1), and (2) pretrained embeddings ensure generalization (vs. retraining from scratch).

----

**Q9: Cite the-well paper?**

**A9**: We will include this in the next version.

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal.

Q1: I admit that I have been wrong; MPP did not pretrain on 3D data. However, in section 5.3 of the MPP paper (https://arxiv.org/pdf/2310.02994), the authors evaluate on 3D data with a kernel inflation method. Therefore, it seems that it should be possible to compare OmniArch with MPP on 3D data? (I am not asking for this experiment to be done, as I know it is computationally expensive.)

Q2: I am sorry, but according to Table 4 on page 13, $\sigma_u$ is indeed defined as the variance of the physical field $u$ and not by the $||.||^2 + \epsilon$ operator. Can the authors confirm that this is a typo and that the definition of nRMSE they use is the one they provide in the code snippet?

Q3: I think this is quite confusing and should really be clarified in the paper.
Maybe you should specify a different parameter count depending on the dimensionality of the data?

Q4: The term emerging capabilities is a bit confusing in that case. To my knowledge, in the LLM literature, it can also signify that, due to scale, a model can solve some tasks that were not present in the training set. Zero-shot and in-context learning are a subset of that set of tasks, hence the confusion.

Q5: Thank you for the clarification. I would suggest including this text in the paper, as this is a question which will often be asked about the paper.

Q8: Thank you; after looking at Appendix F.1, it makes sense.

Could the authors answer the rest of my questions? Thank you.

---

Reply to Comment 1.1.1: Comment: We sincerely appreciate your thoughtful feedback and the opportunity to clarify our work. Below we address your remaining concerns with additional precision:

**Q1 (MPP on 3D)**: After carefully reviewing MPP Appendix C.4, we confirm they employ a kernel inflation trick (similar to DPOT) to **adapt** 2D pretraining to 3D simulations, rather than **jointly pretraining** on 1D-2D-3D data as OmniArch does. We agree this transfer learning approach is valuable and will include comparative analyses in future work when computational resources permit.

----

**Q2 (nRMSE Definition):** Yes, we confirm the $\sigma^2$ in Table 4 was a typo; we will correct it to match the code's implementation.
----

**Q3 (Parameter Counts):** Yes, we believe the **active parameters** for each dimension make the comparison clearer; here we provide the details:

+ *Static Parameters*:

| Model Component | OmniArch-B (316M) | OmniArch-L (672M) |
|-----------------------|------------------|------------------|
| **Shared Backbone** | 138M (43.7%) | 435M (64.7%) |
| **1D Encoder/Decoder**| 0.3M | 0.4M |
| **2D Encoder/Decoder**| 7M | 9M |
| **3D Encoder/Decoder**| 171M | 227M |

+ *Active Parameters During Task Execution:*

| Model | 1D Tasks | 2D Tasks | 3D Tasks |
|-------------|----------|----------|----------|
| OmniArch-B | 138M | 144M | 308M |
| OmniArch-L | 435M | 445M | 663M |

This demonstrates that for 2D tasks (MPP's focus), OmniArch-B uses only **144M** active parameters (vs. MPP-B's 116M), a 25% difference justified by our unified architecture's benefits. We would like to add this in the final version.

-----

**Q4 (Emergent Behaviors):** We will adopt your suggested terminology ("zero-shot/in-context generalization") while maintaining that these capabilities represent foundational steps toward emergent behaviors in scientific ML, an important research direction we hope to inspire.

----

**Q5 (Architecture Clarification):** Thank you for this suggestion. We will expand the architectural comparison in Section 3 to explicitly highlight OmniArch's innovations beyond standard Fourier-encoded transformers.

----

**Request for Reconsideration:** Given we have:

✓ Resolved the novelty misunderstanding (joint pretraining vs. adaptation)
✓ Verified metric consistency with baselines
✓ Demonstrated parameter-comparison fairness
✓ Addressed all architectural questions

we respectfully **request you reconsider your score to reflect these resolutions**. A fairer assessment would significantly help this early-stage foundation model research and further support the promising SciML community.
Summary: This paper introduces OmniArch, a foundation model designed for solving multi-scale and multi-physics PDEs. Inspired by foundation models in NLP, OmniArch aims to generalize across different PDEs using a Fourier-based encoder-decoder, a Transformer backbone, and a physics-informed fine-tuning method. The Fourier encoder-decoder enables learning across varying spatial scales and the transformer captures complex temporal dependencies. Furthermore, PDE-Aligner ensures physics consistency by aligning model predictions with governing equations. OmniArch is trained across different PDEs and shows superior performance compared to task-specific methods and pre-trained PDE solvers.

## update after rebuttal

Authors have included a discussion and additional experiments to compare their method against traditional solvers and will include a discussion about relevant meta-learning papers in the final version, which will address my initial concerns about this paper.

Claims And Evidence: I believe the paper presents strong experimental results and clear methodological justifications for its claims.

1) Generalization across multiple PDEs: This claim is supported by training on 11 PDE types from the PDEBench and PDEArena datasets, and the results show superior performance over task-specific and pre-trained baselines.
2) OmniArch is multi-scale: This claim is justified through frequency-space transformation, allowing the model to handle different spatial resolutions, and the experiments include multi-scale PDEs.
3) PDE-Aligner improves physics alignment: Supported by physics-based losses.
4) Zero-shot generalization: Supported by experiments on solving unseen PDEs with no additional training.

Missing:

1) Comparison to traditional PDE solvers: The paper does not benchmark runtime or accuracy against finite element or spectral methods, which remain the gold standard in PDE solving.
Understanding the trade-offs between accuracy and interpretability compared to these methods would strengthen the paper.

Methods And Evaluation Criteria: I believe the methods and evaluation criteria are well-aligned with the goal of designing a foundation model for PDE solving.

1) The use of a Fourier encoder-decoder for multi-scale PDEs seems like a reasonable approach.
2) The self-attention mechanism enables long-range temporal dependencies.
3) The design of PDE-Aligner ensures physics consistency of the solutions.
4) The datasets cover a diverse range of PDEs.

The only missing evaluation, in my opinion, is benchmarking accuracy and interpretability against traditional solvers.

Theoretical Claims: The paper does not include formal theoretical proofs, as its focus is primarily on empirical validation and model design.

Experimental Designs Or Analyses: I believe the paper has a strong and well-structured experimental setup and extensive experiments.

1) The method is evaluated across 11 PDEs.
2) The method is evaluated against multiple task-specific and pre-trained benchmarks.
3) The experimental setup includes both zero-shot and in-context learning tests.

Supplementary Material: I briefly reviewed the supplementary material, which includes details on training procedures, hyperparameters, and additional result examples.

Relation To Broader Scientific Literature: This paper builds on prior work in neural PDE solvers, foundation models in NLP, and physics-informed learning. It integrates multiple established ideas into a unified framework. I believe this paper has the potential to shape the future of scientific machine learning, and it can initiate significant research in foundation models for scientific computing, both for further technical advancements and for different applications.
Essential References Not Discussed: While the authors discuss neural PDE solvers and physics-informed learning, they do not address meta-learning approaches for PDE solving, such as Meta-PINNs, which focus on improving generalization, a key motivation of this paper. Additionally, the authors mention the need for retraining PINNs as a limitation, yet meta-learning techniques have been developed specifically to resolve this issue by enabling faster adaptation to new PDEs. Relevant references on meta-learning for PINNs are missing, despite their direct relevance to OmniArch's goals. Here are a few examples:

1) "Hypernetwork-based Meta-Learning for Low-Rank Physics-Informed Neural Networks", NeurIPS, 2023
2) "Meta-MGNet: Meta Multigrid Networks for Solving Parameterized Partial Differential Equations", Journal of Computational Physics, 2022
3) "Meta-Auto-Decoder for Solving Parametric Partial Differential Equations", NeurIPS, 2022

Other Strengths And Weaknesses: To the best of my knowledge, the method is very novel and the contributions are significant. The only weakness is the lack of evaluation against traditional and numerical solvers, as well as the missing references and discussion of meta-learning approaches for PDE solving such as Meta-PINNs.

Other Comments Or Suggestions: There are typos in the abstract: 1) "while whether" in the second line, and 2) "we first conduct" in line 15.

Questions For Authors: While PDE-Aligner enforces physics constraints through contrastive learning, is there any theoretical guarantee that it produces physically valid solutions for all PDE types?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer u5RW for the thoughtful feedback and valuable support of our work. Below, we address each question in detail:

**Q1: How does OmniArch compare to traditional solvers (e.g., FEM, spectral methods) in accuracy/interpretability?**

**A1**: We thank u5RW for this critical point. While traditional solvers excel in interpretability, OmniArch fully leverages GPU efficiency, especially as the input size scales (similar to Q1 from Reviewer 6qGA):

+ **Accuracy**: On 1D advection rollout, OmniArch (RMSE: 0.0321) is comparable to FDM (0.0258) but avoids costly re-discretization for new PDEs.
+ **Speed**: For traditional solvers (FDM, FEM, spectral methods) on an input grid of s×s nodes, iterating forward costs $O(s^4)$ for FDM and $O(s^2 \log s)$ for spectral methods, whereas OmniArch costs $O(tk^2)$ with $k$ a fixed number of retained frequencies. This allows OmniArch to achieve 155x faster inference (512×512, 2D, 200 steps → 0.026s) than FDM (~4.5s) at comparable error.

We provide a table below; you can also find it in [Anonymous Visualization](https://various-easy-trillium.glitch.me/).

| Resolution | FDM Time/Step (s) | OmniArch Time (s) | Speedup (FDM/OmniArch) |
|------------|-------------------|-------------------|------------------------|
| 64×64 | 0.001123 | 0.023567 | 0.048x |
| 128×128 | 0.015264 | 0.023820 | 0.641x |
| 192×192 | 0.075360 | 0.024098 | 3.128x |
| 256×256 | 0.254027 | 0.024083 | 10.55x |
| 320×320 | 0.583218 | 0.023866 | 24.44x |
| 384×384 | 1.130561 | 0.023453 | 48.20x |
| 448×448 | 2.272206 | 0.023677 | 95.96x |
| 512×512 | 4.073472 | 0.026212 | 155.4x |

*Revision*: We will add a more dedicated table comparing accuracy/runtime against traditional solvers in the final version.

----

**Q2: Why omit meta-learning approaches (e.g., Meta-PINNs) despite their relevance to generalization?**

**A2**: We appreciate this insight.
Meta-learning is indeed complementary:

+ **Key difference**: Meta-PINNs adapt via gradient updates, while OmniArch enables **token-based in-context learning** (Fig. 10) without fine-tuning.

We will cite the suggested references and clarify this distinction in the final version, emphasizing OmniArch's **architecture-driven generalization** vs. optimization-based meta-learning.

----

**Q3: Can PDE-Aligner guarantee physically valid solutions for all PDEs?**

**A3**: The aligner enforces soft constraints via textual equation supervision; it only helps OmniArch better distinguish between different PDE systems. There is no theoretical guarantee for all PDEs (an open challenge even for traditional solvers). But we believe this may be addressed by better augmentation strategies (equation search, Lie symmetry, etc.) and more efficient contrastive learning techniques, which we hope to see in the near future. This discussion will be added in the camera-ready version.

----

**Q4: Typographical errors in the abstract?**

**A4**: Thanks for the kind reminder; we will correct them in the final version.
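To illustrate the per-step scaling argument in A1 above with a toy example (our own sketch, not the authors' benchmark): one explicit finite-difference step of the 2D heat equation must touch every grid point, while a spectral step updates Fourier coefficients independently, so truncating to a fixed set of modes would make the per-step cost resolution-independent. Both function names below are illustrative.

```python
import numpy as np

def fdm_heat_step(u, nu, dt, h):
    """One explicit 5-point-stencil step of u_t = nu * laplace(u)
    with periodic boundaries; cost grows with the number of grid points."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h ** 2
    return u + dt * nu * lap

def spectral_heat_step(u, nu, dt):
    """Exact per-mode integration in Fourier space; with Top-K
    truncation only k coefficients would need to be updated."""
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)  # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    u_hat = np.fft.fft2(u) * np.exp(-nu * (kx ** 2 + ky ** 2) * dt)
    return np.real(np.fft.ifft2(u_hat))

# For a smooth field and a small step, the two schemes agree closely.
n = 64
x = np.arange(n) / n
u0 = np.sin(2 * np.pi * x)[:, None] * np.sin(2 * np.pi * x)[None, :]
u_fdm = fdm_heat_step(u0, nu=0.01, dt=1e-3, h=1.0 / n)
u_spec = spectral_heat_step(u0, nu=0.01, dt=1e-3)
assert np.max(np.abs(u_fdm - u_spec)) < 1e-4
```

The point of the sketch is the cost structure, not the specific PDE: the stencil update scales with the grid, while a truncated spectral update would not.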
Summary: OmniArch is a foundation model for solving partial differential equations (PDEs) across 1D, 2D, and 3D domains. It addresses three key challenges: multi-scale modeling (handling different grid dimensions and resolutions), multi-physics capability (processing multiple physical quantities simultaneously), and physical alignment (incorporating physics constraints). The architecture combines a Fourier encoder-decoder for unified multi-dimensional training with a transformer backbone using temporal masking for multi-physics systems. A novel PDE-Aligner module enables physics-informed fine-tuning. Pre-trained on PDEBench datasets and fine-tuned with the PDE-Aligner, OmniArch achieves state-of-the-art performance on 11 PDE types while demonstrating emergent capabilities like zero-shot generalization and in-context learning for unseen PDEs. Claims And Evidence: The paper's claims are largely supported by evidence, including performance analysis on 11 PDE tasks and ablation studies. However, the zero-shot studies are limited in nature. Methods And Evaluation Criteria: Methods address stated challenges, with Fourier transforms for multi-scale data and temporal masking for multi-physics systems. nRMSE is an appropriate evaluation metric, tested across PDEBench and PDEArena. The focus on accuracy neglects computational efficiency, scaling, and deeper physical consistency analysis, which limits the evaluation's breadth. Theoretical Claims: No significant theoretical claims are made in this paper. Experimental Designs Or Analyses: The experimental design is generally sound, comparing against both task-specific expert models and other unified pre-training approaches. Supplementary Material: Yes. The dataset, pre-training, and implementation details are helpful for understanding the method. I expect that more domain-specific foundation models will be emerging this year, and it is helpful to have this content for reference. 
Relation To Broader Scientific Literature: The relationship to the broader scientific literature in numerical foundation models is appropriate for the paper. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: Did this work give you any insight into the possibilities of emergent behavior in such numerical foundation models? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 6qGA for the constructive feedback and recognition of our work. We deeply appreciate Reviewer 6qGA's support and hope this work can contribute to the research community. Below, we address the reviewer's questions and provide additional clarifications:

**Q1: The zero-shot studies are limited in nature.**

**A1**: We acknowledge the scope limitation of the zero-shot tests (currently on 3 PDE families in Table 2). However, emergent behaviors like **in-context learning** (Figure 6) and **cross-resolution generalization** (Figure 4) are observed, and we will add a dedicated section that includes more tests on new PDE datasets (such as **the-well dataset** mentioned by Reviewer urwm).

----

**Q2: The focus on accuracy neglects computational efficiency, scaling, and deeper physical consistency analysis.**

**A2**: We agree this is critical. While accuracy was our primary focus, we included inference time in Appendix H.2 & H.7. More related details:

+ **Efficiency**: We list the model parameters and inference time below (2D CFD, 128×128, single H800):

| Model | Params | Inference Time|
|---|---|---|
|Poseidon-L | 629 M |77ms |
|MPP-AVIT-L |409 M | 83ms |
|DPOT-L |509 M | 32ms |
|OmniArch-L |445 M | 21ms |

The parameters of the pre-trained models are similar and the inference times are of the same order of magnitude.

+ **Scaling**: We'll add a discussion on how scaling in numerical foundation models differs from LMs or VLMs, addressing not only model/data size but also PDE diversity and input resolution dimensions.
Here we can only provide midterm results for reference (sorry, we currently lack the time and resources for full training):

| Model Size | Params | nRMSE (2D) |
|---| ---|---|
|Tiny| 26M |0.0847|
|Small| 38M|0.0362|
|Base | 144M|0.0153|
|Large| 445M|0.0125|

+ **Physical consistency analysis**: We will include a section analyzing conservation properties, boundary condition satisfaction, and physical law adherence across different PDE types. For instance, we'll quantify momentum/energy conservation in CFD predictions and demonstrate how the PDE-Aligner specifically improves physical consistency. Here we give an example analysis on 2D CFD, which we may extend to other PDEs in the final revision.

| Evaluation Metric | OmniArch-L | MPP-L | FNO | U-Net |
|---------|------------|-------|-----|-------|
| **Mass Conservation** | | | | |
| Average Relative Error (%) | **0.32** | 0.65 | 2.17 | 3.24 |
| Maximum Relative Error (%) | **0.78** | 1.21 | 4.85 | 7.36 |
| **Energy Conservation** | | | | |
| Average Relative Error (%) | **0.58** | 0.92 | 3.25 | 4.89 |
| Maximum Relative Error (%) | **1.25** | 4.52 | 5.94 | 9.17 |
| Continuity Error (∇·(ρu)) | **5.2e-4** | 2.1e-3 | 3.7e-3 | 5.4e-3 |

----

**Q3: The insight into the possibilities of emergent behavior in such numerical foundation models.**

**A3**: Yes, we are very happy to share our findings:

(1) **Numerical foundation models can transfer across dimensions**: We find that pre-training on 1D/2D PDEs improves 3D performance (**Table 1**), suggesting natural dynamic patterns may be dimension-agnostic with a much smaller latent dimension. The potential of a unified neural solver is still under-explored.

(2) **Numerical foundation models can learn in-context**: Different from previous small neural solvers (which take one step and predict the next), we find OmniArch can learn from the temporal trajectory (the previous k steps) and adjust its prediction based on its observations (**see Figure 6**).
(3) **Numerical foundation models can be resolution-agnostic**: Because it is trained directly in the frequency domain, OmniArch can learn across arbitrary resolutions (thanks to low-frequency truncation), and low-resolution patterns help it understand high-resolution inputs (**see Figure 4**).
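As a concrete illustration of how the conservation metrics in A2 above could be computed (our own sketch using periodic central differences; the paper's exact evaluation code may differ, and both helper names are hypothetical):

```python
import numpy as np

def mass_relative_error(rho_pred, rho_true):
    """Relative error of total mass (density summed over the grid)."""
    m_pred, m_true = rho_pred.sum(), rho_true.sum()
    return abs(m_pred - m_true) / (abs(m_true) + 1e-12)

def continuity_residual(rho, u, v, h):
    """Mean |div(rho * velocity)| via central differences on a periodic
    grid; small values indicate the prediction respects mass flux."""
    fx, fy = rho * u, rho * v
    div = ((np.roll(fx, -1, axis=0) - np.roll(fx, 1, axis=0)) / (2 * h) +
           (np.roll(fy, -1, axis=1) - np.roll(fy, 1, axis=1)) / (2 * h))
    return np.abs(div).mean()

# Sanity check on an analytically divergence-free flow: u depends only
# on y and v only on x, so div(rho * (u, v)) = 0 for constant rho.
n, h = 64, 1.0 / 64
xy = np.arange(n) / n
rho = np.ones((n, n))
u = np.tile(np.sin(2 * np.pi * xy), (n, 1))           # u(x, y) = sin(2*pi*y)
v = np.tile(np.cos(2 * np.pi * xy)[:, None], (1, n))  # v(x, y) = cos(2*pi*x)
assert continuity_residual(rho, u, v, h) < 1e-10
assert mass_relative_error(rho, rho) == 0.0
```

Applied to predicted vs. ground-truth density/velocity fields, these quantities correspond to the mass-conservation and continuity-error entries in the table above A3.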
Towards Black-Box Membership Inference Attack for Diffusion Models
Accept (poster)
Summary: This paper introduces a black-box membership inference attack method targeting diffusion models. Unlike previous MIA approaches that require access to the U-Net or other internal components of diffusion models, their method only utilizes the variation API to determine whether a given image was part of the model's training data.

Claims And Evidence: Several key claims are generally supported by empirical evidence.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: The paper includes a mathematical justification for why training images produce more stable reconstructions than non-members.

Experimental Designs Or Analyses: The experimental design is generally well-structured.

Supplementary Material: The supplementary material contains some experiment code.

Relation To Broader Scientific Literature: The paper extends prior work on membership inference attacks by introducing a novel black-box attack tailored to diffusion models.

Essential References Not Discussed: None.

Other Strengths And Weaknesses:

Strengths: The paper presents a black-box MIA attack on diffusion models that does not require U-Net access. Evaluations are thorough, covering multiple datasets and architectures. The authors provide mathematical support for their attack's effectiveness.

Weaknesses: The proposed REDIFFUSE algorithm relies on the existence of a variation API. While the paper claims robustness across different diffusion steps, the experimental results suggest that selecting a poor diffusion step could impact detection performance. The real-world validation with DALL-E 2 is based on only 30 famous artworks.

Other Comments Or Suggestions: None.

Questions For Authors: See Other Strengths And Weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the comments and suggestions. Below, we address the primary concerns that have been raised.

>Q1: The proposed REDIFFUSE algorithm relies on the existence of a variation API. While the paper claims robustness across different diffusion steps, the experimental results suggest that selecting a poor diffusion step could impact detection performance.

**A1:** We thank the reviewer for the comments. As shown in Figure 4 of our paper, our method achieves over 80% accuracy when the diffusion step is between 50 and 350. On the other hand, the variation API in diffusion models [1] typically uses a moderate step value to edit the image: **too few steps produce negligible edits, while too many steps cause excessive distortion, making it hard to preserve consistency with the original image**. Thus, our results suggest that the variation API could be viable for detection tasks. We leave more robust algorithm design across diffusion steps for future work.

>Q2: The real-world validation with DALL-E 2 is based on only 30 famous artworks.

**A2:** We appreciate the reviewer's question. The purpose of Section 6 is to demonstrate the practical applicability of our algorithm to commercial models. Specifically, we use DALL-E 2's variation API to show how our method can detect membership by leveraging the model's outputs. Since DALL-E 2 **does not provide a publicly accessible training set**, the experiment in Section 6 primarily serves as a **demonstrative application scenario rather than an extensive evaluation on large-scale datasets**. We agree that further evaluation on more diverse datasets and models would enhance the robustness of the findings, and we have added additional experiments on the WikiArt dataset [2]. We randomly sample 1000 images from the dataset as members and generate corresponding non-members using the method in Section 6 of our paper.
The results are as follows:

|Metrics|$L_1$ distance|$L_2$ distance|
|:---:|:----:|:----:|
|AUC|0.70|0.74|
|ASR|0.67|0.70|

The experimental results indicate that our method remains effective on this new dataset. However, since many artworks in WikiArt are less prominent than the 30 canonical paintings we initially selected, they likely had a lower probability of being included in DALL-E 2's training data. This introduces potential dataset bias when expanding the evaluation scope. We leave more comprehensive large-scale experiments for future work. We thank the reviewer once again for the valuable and helpful suggestions. We will continue to provide clarifications if the reviewer has any further questions.

**References**

[1] Meng, Chenlin, et al. "Sdedit: Guided image synthesis and editing with stochastic differential equations." arXiv preprint arXiv:2108.01073 (2021).

[2] "WikiArt Visual Art Encyclopedia." *WikiArt*, n.d., https://www.wikiart.org.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. It generally addresses my comments in the review. Thus, I increase my score.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for acknowledging our work!
Summary: The paper proposes using the average of img2img outputs and comparing it with the original input image, with the difference serving as a metric for black-box MIA. This approach eliminates the need for predicted noise at intermediate time steps, making it applicable to a broader range of scenarios while achieving stronger results than previous methods. ## Update after Rebuttal The rebuttal addresses my concerns, and I recommend acceptance of the paper. Claims And Evidence: The author assumes that for a well-trained DM: > the Jacobian matrix $\nabla_{\theta}\epsilon(x_t,t)$ is full rank. I am curious if there is a deeper motivation behind this assumption. One possible intuition is that it might be too strong to directly assume the loss function of the DM is sufficiently low, so the focus shifts to hypothesizing a property that is easier to satisfy than the loss function itself. Nonetheless, I would appreciate a more detailed explanation or clarification regarding this choice. Methods And Evaluation Criteria: The method is clear and the evaluation criteria make sense. Theoretical Claims: I checked the proof and it should be correct. Experimental Designs Or Analyses: The experiments are generally solid, but I’m curious about one aspect of the ablation study. In Figure 5, the AUC decreases when the DDIM steps exceed 25. This seems counterintuitive, as a larger number of DDIM steps should theoretically reduce error and improve accuracy. Could you clarify why this occurs? Supplementary Material: I quickly went through the setup and the proof. Relation To Broader Scientific Literature: The paper proposes a new method for MIA that is more practical for most closed-source diffusion models. The approach is simple yet effective, significantly strengthening MIA and yielding more promising results for potential applications in copyright authentication. Essential References Not Discussed: Not found. Other Strengths And Weaknesses: No.
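The averaged-variation statistic this summary describes can be sketched in a few lines. In the sketch below, `variation_api` is a hypothetical stand-in for the model's img2img/variation endpoint, and the threshold is illustrative rather than a value from the paper.

```python
import numpy as np

def rediffuse_score(image, variation_api, n=10):
    """Average n independent variations of `image` and return the L2
    distance between that average and the original image. Training
    (member) images are expected to reconstruct more stably, giving a
    smaller distance than non-members."""
    variations = [variation_api(image) for _ in range(n)]
    return float(np.linalg.norm(np.mean(variations, axis=0) - image))

def infer_membership(image, variation_api, threshold, n=10):
    # Flag as member when the averaged reconstruction stays close to
    # the original (threshold chosen on held-out data in practice).
    return rediffuse_score(image, variation_api, n) < threshold
```

The per-image distances produced this way are the raw scores that metrics such as AUC and ASR would then be computed from.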
Other Comments Or Suggestions: See comments above. Questions For Authors: See comments above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and constructive suggestions. In the following, we address the main concern raised. >Q1: The author assumes a full-rank Jacobian for a well-trained diffusion model. I am curious if there is a deeper motivation behind this assumption. One possible intuition is that it might be too strong to directly assume the loss function of the DM is sufficiently low, so the focus shifts to hypothesizing a property that is easier to satisfy than the loss function itself. Nonetheless, I would appreciate a more detailed explanation or clarification regarding this choice. **A1:** We appreciate the reviewer’s feedback. We hypothesize the full-rank condition of the Jacobian because the neural network's dimensionality $p$ significantly exceeds that of the image data $d$. Full rank therefore only requires the $p \times d$ matrix to have rank $d$, which is most likely true. Conversely, if the Jacobian were rank-deficient, there would exist initial points from which denoising could not recover plausible images, contradicting the empirically demonstrated generation capabilities of modern diffusion models. Still, we agree with the reviewer that we cannot verify this for all data points, as computing the rank of a huge matrix is expensive. We will add a discussion of this assumption in the next version of our paper. >Q2: I’m curious about one aspect of the ablation study. In Figure 5, the AUC decreases when the DDIM steps exceed 25. This seems counterintuitive, as a larger number of DDIM steps should theoretically reduce error and improve accuracy. Could you clarify why this occurs? **A2:** We appreciate the reviewer’s question. Our results in Figure 5 demonstrate that changing the DDIM step has minimal impact on detection accuracy. For DDIM steps of 20, 25, 50, and 100, the AUC remains virtually unchanged (difference < 0.003).
We present relevant experiments of DDIM on CIFAR-100 here, with different random seeds and DDIM steps:

|Random Seed|1|2|3|
|:---:|:----:|:----:|:----:|
|DDIM with step 20|0.968|0.969|0.971|
|DDIM with step 25|0.971|0.969|0.970|
|DDIM with step 50|0.970|0.970|0.971|
|DDIM with step 100|0.967|0.970|0.968|

From the results, we know that as we vary random seeds, the AUC remains similar across these DDIM steps, with no consistent decreasing trend observed as step size increases. We thank the reviewer once again for the valuable and helpful suggestions. We would be happy to provide further clarifications if the reviewer has any additional questions.

---

Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. It generally addresses my concerns, so I will maintain my score (accept).

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for acknowledging our work!
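For reference, AUC figures like those in the table above can be computed threshold-free from raw per-image scores via the rank (Mann–Whitney) statistic, and ASR at a fixed threshold as balanced accuracy. The sketch below is a generic implementation of these two metrics, not code from the paper.

```python
import numpy as np

def auc_from_scores(member_scores, nonmember_scores):
    """AUC of the rule 'flag as member if the score is small':
    the probability that a random member scores below a random
    non-member (ties counted as 1/2)."""
    m = np.asarray(member_scores, float)[:, None]
    nm = np.asarray(nonmember_scores, float)[None, :]
    return float(np.mean((m < nm) + 0.5 * (m == nm)))

def asr_from_scores(member_scores, nonmember_scores, threshold):
    """Attack success rate: balanced accuracy at a fixed threshold."""
    tpr = np.mean(np.asarray(member_scores) < threshold)
    tnr = np.mean(np.asarray(nonmember_scores) >= threshold)
    return float((tpr + tnr) / 2)
```

An AUC of 0.5 corresponds to chance-level detection; perfectly separated score distributions give 1.0.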
Summary: This paper investigates black-box membership inference attacks against diffusion models, where the attacker has no access to the internal model. The goal of the attacker is to determine whether or not an artwork was used to train a diffusion model. In this paper, the authors first identify the limitations of applying existing MIAs to proprietary diffusion models and then propose a novel black-box membership inference attack to determine the membership privacy of an image. The authors validate the proposed method using DDIM and Stable Diffusion models on benchmark datasets and further extend both the proposed approach and existing algorithms to the Diffusion Transformer architecture. Experimental results show the effectiveness of the proposed method. Claims And Evidence: Claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The evaluation criteria are commonly used in the existing literature. Theoretical Claims: The proofs for theoretical claims are correct. Experimental Designs Or Analyses: The authors adopted three commonly used metrics, i.e., AUC, ASR, and TP, to evaluate the performance of the proposed method compared with other baselines. And the experimental results can show the improved performance of the proposed method. In addition, parameter analysis, such as the diffusion steps and average numbers, has also been conducted to evaluate the proposed method. Supplementary Material: Yes. The experimental results and theoretical proof. Relation To Broader Scientific Literature: Even though there are existing works focusing on MIA on diffusion models, this paper provides a more practical scenario, i.e., with only API query access. Essential References Not Discussed: No Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: 1. Is the intuition of the proposed method general to other datasets? Or is it only observable on the specific datasets utilized in this paper? 2.
There are other diffusion models in existing works. Is the proposed attack methodology suitable to other diffusion models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for the insightful comments and suggestions. Please find the details below. >Q1: Is the intuition of the proposed method general to other datasets? Or just observable to the specific datasets utilized in this paper? **A1:** Our method has been evaluated across diverse datasets (CIFAR-10/100, STL-10, ImageNet, LAION-5B) in the main paper, demonstrating its generalizability. To further validate this, we conduct additional experiments: we train a DDIM model on Tiny-ImageNet [1] and SVHN [2]. For each dataset, we randomly select 50,000 member and 50,000 non-member images, training the DDIM model for 800K iterations using Appendix A's hyperparameters. The results are as follows:

|Dataset|Tiny-ImageNet|SVHN|
|:---:|:----:|:----:|
|AUC|0.98|0.95|
|ASR|0.95|0.88|

The experimental results demonstrate that our method remains effective on these new datasets. We will include these findings in the revised manuscript and plan to extend evaluation to other datasets in future work.

>Q2: There are other diffusion models in existing works. Is the proposed attack methodology suitable to other diffusion models?

**A2:** We appreciate the reviewer’s question. Our main paper already demonstrates the method's effectiveness on DDIM, Diffusion Transformer, and Stable Diffusion. To further validate generalizability, we conduct additional experiments with DDPM [3] across four datasets. For each dataset, we randomly select 50,000 member and 50,000 non-member images, training the DDPM model for 800K iterations using Appendix A's hyperparameters. The results are as follows:

|Dataset|CIFAR-10|CIFAR-100|STL-10|Tiny-ImageNet|
|:---:|:----:|:----:|:----:|:----:|
|AUC|0.87|0.85|0.81|0.89|
|ASR|0.80|0.78|0.75|0.82|

The experimental results confirm our method's effectiveness on DDPM models. We will include these findings in the revised manuscript.
Due to the need for dataset-specific retraining, we cannot evaluate additional models within the rebuttal period. We plan to extend this evaluation to other diffusion architectures in future work. Finally, we thank the reviewer once again for the efforts in providing us with valuable and helpful suggestions. We will continue to provide clarifications if the reviewer has any further questions.

**References**

[1] Tiny ImageNet Dataset, Stanford University. Available: http://cs231n.stanford.edu/tiny-imagenet-200.zip.

[2] Netzer, Yuval, et al. "Reading digits in natural images with unsupervised feature learning." NIPS workshop on deep learning and unsupervised feature learning. Vol. 2011. No. 2. 2011.

[3] Ho, Jonathan, Ajay Jain, and Pieter Abbeel. "Denoising diffusion probabilistic models." Advances in neural information processing systems 33 (2020): 6840-6851.
Summary: The paper introduces a novel black-box membership inference attack method for diffusion models. The authors show their method can reliably detect whether an image was part of the training set or not. They do this by repeatedly applying the variation API and averaging the outputs. They have extensive experiments across multiple diffusion architectures (DDIM, Stable Diffusion, and Diffusion Transformer) and datasets (e.g., CIFAR-10/100, STL10, ImageNet, LAION-5B) to show that REDIFFUSE outperforms existing white-box based methods. Claims And Evidence: They have largely supported their claims through comprehensive empirical results (and based on the theoretical result). The authors provide detailed quantitative comparisons (via AUC, ASR, and true positive rates) across several benchmark datasets. Methods And Evaluation Criteria: Their empirical results are very extensive and include both DDIM and Stable Diffusion models tested on most of the commonly used classical datasets. Also, the choice of evaluation metrics like AUC, ASR, and true positive rate at a fixed false positive rate is natural and common for MIA methods. Theoretical Claims: The theoretical contribution in Theorem 4.2 gives an error bound for the averaged output of the variation model (API) under certain assumptions: 1. unbiased noise prediction, 2. full rank of the Jacobian. The proof is mathematically coherent. The assumptions may not necessarily be realistic, but the theorem provides an intuition for the test statistic and is shown to be accurate in practice. Experimental Designs Or Analyses: As mentioned before, their experimental design and choice of models and datasets are valid. They have extensive ablations for various factors such as average numbers, diffusion steps, and sampling intervals.
Supplementary Material: Yes, I have reviewed all the sections, including the test setup and hyperparameters, more qualitative results, the proof of Theorem 4.2, and examples of variations of member and non-member images. Relation To Broader Scientific Literature: Since major diffusion model developers in the industry are not transparent about their training datasets, advancements in MIA methods are very impactful as they help us gain a bit more clarity. These methods can also help detect private and copyrighted data used for training. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: The main strength of the paper is that it's practical and easy to use in the real world on proprietary models without internal access. Their extensive experiments and their theoretical analysis together provide a solid foundation for the claims. However, when comparing to other existing methods, they do not show a comparison in runtime/cost. I'd suggest adding these details for their audience to have a better understanding of the trade-offs between these methods. Other Comments Or Suggestions: Please add run-time and cost estimations (both for your method and other existing methods). Questions For Authors: Did you try this method for detecting copyrighted or proprietary content in foundation models? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's positive feedback. We address the questions in detail below: >Q1: When comparing to other existing methods, this paper does not show a comparison in runtime/cost. I'd suggest adding these details for their audience to have a better understanding of the trade-offs between these methods. **A1:** We thank the reviewer for raising this important point regarding computational cost comparison. From a computational complexity perspective, the runtime primarily depends on the average number $n$, where each detection requires $n$ times the computation of the baseline method. To better illustrate this trade-off, we conduct additional experiments comparing detection performance (AUC) across different values of $n$, whose runtime scales linearly with $n$, evaluating DDIM on CIFAR-100 and DiT on ImageNet 256×256. The results are as follows:

|Method (AUC)|DDIM|DiT|
|:---:|:----:|:----:|
|Loss [1]|0.92|0.78|
|SecMI [2]|0.96|0.88|
|PIA [3]|0.96|0.91|
|PIAN [3]|0.91|0.67|
|**ReDiffuse with $n=1$ (Ours)**|0.94|0.94|
|**ReDiffuse with $n=5$ (Ours)**|0.97|0.95|
|**ReDiffuse with $n=10$ (Ours)**|**0.98**|**0.97**|

The results show our method achieves accuracy comparable to baselines even at $n=1$, with matching runtime and no UNet access required. When increasing $n$, the extra computation time further improves performance. In contrast, baseline methods rely on deterministic UNet outputs (no randomness), so they cannot benefit from averaging. We believe this cost is reasonable because at $n=10$, our algorithm infers $100,000$ CIFAR-100 images in ~5 minutes on an NVIDIA L40 GPU. We will add these discussions to the paper.

>Q2: Did you try this method for detecting copyrighted or proprietary content in foundation models?

**A2:** We thank the reviewer for this question. In **Section 6**, we discuss an application scenario where we perform membership inference attacks using the variation API provided by OpenAI DALL-E [4], a popular API-only model.
Specifically, we constructed a dataset consisting of famous artworks and AI-generated artworks with the same titles. We then conducted detection tests to determine whether certain famous artworks are included in DALL-E's training dataset. The results show our approach can effectively work with real-world diffusion-model APIs. We plan to extend this to other copyright-protected content as future work. Once again, we sincerely thank the reviewer for the constructive comments, and we are eager to engage in further discussions to clarify any concerns.

**References**

[1] Matsumoto, Tomoya, Takayuki Miura, and Naoto Yanai. "Membership inference attacks against diffusion models." 2023 IEEE Security and Privacy Workshops (SPW). IEEE, 2023.

[2] Duan, Jinhao, et al. "Are diffusion models vulnerable to membership inference attacks?" International Conference on Machine Learning. PMLR, 2023.

[3] Kong, Fei, et al. "An efficient membership inference attack for the diffusion model by proximal initialization." arXiv preprint arXiv:2305.18355 (2023).

[4] The variation API of DALL-E. https://platform.openai.com/docs/guides/images/variations-dall-e-2-only
Decision Theoretic Foundations for Conformal Prediction: Optimal Uncertainty Quantification for Risk-Averse Agents
Accept (spotlight poster)
Summary: This paper considers the marginal version of the value at risk problem. This problem is called risk-averse decision policy optimization. - The authors derive optimal policy for risk-averse decision makers given prediction sets, which takes a max-min form. - They establish that prediction sets are a sufficient statistic for safe decision-making. - They characterize the optimal prediction sets for risk-averse conformal prediction by formalizing a risk-averse conformal prediction optimization problem. - They propose a risk-averse calibration algorithm with a distribution-free safety guarantee. ## update after rebuttal: I have read the author response as well as the other reviewers and author response to them. The authors clarified my major concerns related to the positioning of the paper with respect to the related work, technical novelty, group-conditional setting, and dependence on sample size. I would like to thank the authors for their good work. The changes that are mentioned in the author rebuttal should be incorporated into the final version of the paper. Claims And Evidence: It is said that the results are extendable to group conditional validity constraints. Perhaps this is the case with more realistic applications. It is said that the marginal formulation in the paper “naturally extends” to this case, but no roadmap is provided. It sounds as if this extension is trivial. If not, please specify what challenges lie ahead. Methods And Evaluation Criteria: Yes. Theoretical Claims: The theoretical claims make intuitive sense. I did not check the proofs. Experimental Designs Or Analyses: Experiments look valid. Supplementary Material: I took a quick glimpse at the supplementary material. Relation To Broader Scientific Literature: Risk aversion and safety have been frequently studied by utilizing Gaussian processes as surrogate models. See, e.g., - Sui, Yanan, et al. "Safe exploration for optimization with Gaussian processes." 
International conference on machine learning. PMLR, 2015. - Nguyen, Quoc Phong, et al. "Value-at-risk optimization with Gaussian processes." International Conference on Machine Learning. PMLR, 2021. - Demirel, Ilker, et al. "Escada: Efficient safety and context aware dose allocation for precision medicine." Advances in Neural Information Processing Systems 35, 2022. Please comment on the pros and cons of your approach in risk-averse decision-making compared to this line of literature. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: I am positive about the contribution of this paper. However, a more detailed discussion is required on why conformal prediction sets are the right way to approach the problem (perhaps by comparing them with other approaches, like GP-based approaches in an experimental setting). In addition, novel steps of the technical proofs should be highlighted. Other Comments Or Suggestions: N/A. Questions For Authors: 1) This paper proposes a new model for risk-averse decision-making using conformal prediction sets. To avoid problems commonly encountered in conformal prediction, it considers the marginal version of the problem. In order to assess the technical contribution of the current work, please discuss if any new or out-of-the-box technique is used to arrive at the conclusions of the paper. Is it the reparameterization used to derive an equivalent reformulation of RA-CPO? Is it the structure discovered using duality? 2) Can any further insights be gained for the finite-sample setting? It seems that everything works fine when replacing true functions with their estimated counterparts. What is the effect of the sample size n on the quality of the estimates and prediction sets? Is it possible to get some convergence rates based on n? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and thoughtful questions, which will help us significantly improve the clarity and contribution of our manuscript. **Question 1:** Our contributions are twofold, as outlined below: (1) we introduce a novel question within the conformal prediction literature, and (2) we develop mathematical techniques to provide meaningful answers. **Regarding the novel question:** Despite extensive literature on conformal prediction (CP) as uncertainty quantification beneficial for downstream decisions, little was previously known about whether prediction sets are the ideal means to communicate uncertainty within decision-making pipelines and how decisions should optimally incorporate them. We hope our findings bring new insights to the community in this regard. **Regarding mathematical and technical contributions:** A key contribution is deriving an explicit solution to RA-CPO (Section 2.2), accomplished in Proposition 3.1 and Theorem 3.2. To highlight the challenges, note that RA-CPO is a non-convex optimization over the set function $C(.)$, lying outside conventional duality-based analysis. We first reparametrize RA-CPO equivalently (Eq. 12), maintaining non-convexity. Next, in the proof of Theorem 3.2, through another reparametrization to a mixed-integer program, followed by a convex relaxation over function spaces, we use a key technical lemma (Lemma B.1, Appendix) to show this relaxation exactly solves RA-CPO. This approach is novel, and we will emphasize these contributions further in the revision. Additionally, our manuscript includes: Theorem 2.3 (prediction sets as sufficient statistics), Proposition 2.2 (optimality of max-min decision rule given marginally valid sets), and Theorem 4.1 (finite-sample statistical validity of our algorithm). **Question 2:** This is an excellent point. 
Two distinct estimation steps exist in the finite-sample algorithm: calibration-data-based and model-output-based (softmax probabilities). The only parameter influenced by calibration data (thus sensitive to sample size $n$) is the scalar parameter $\beta$. Similar to standard CP, the sensitivity to $n$ is minimal, stabilizing once $n$ reaches a few thousand samples. Under mild smoothness conditions on the conditional distribution of $Y|X$, finite-sample upper bounds scale as $1 - \alpha + O(1/n)$ (Theorem 4.1), a standard assumption in CP literature. Remaining computations, notably quantiles (Eq. 15), rely solely on softmax probabilities and are independent of calibration data and $n$, preserving statistical guarantees. We will further clarify this point by expanding the paragraph after Corollary 4.2. **Regarding group-conditional validity claims:** Although briefly noted in Remark 2.1, our original discussion was limited by space constraints. To clarify: the theoretical results in Sections 2 and 3—such as the characterization of optimal decision rules and prediction sets—extend naturally to the group-conditional setting, resulting in an $m$-dimensional formulation (where $m$ is the number of groups). However, the finite-sample algorithm in Section 4 poses greater challenges, as it requires calibrating an $m$-dimensional vector, which introduces both computational and statistical complexity. In the revised manuscript: - We have expanded Remark 2.1 to explicitly distinguish between the components that extend systematically and those that present additional challenges. - We have enhanced the “Future Work” section to outline potential approaches for addressing the difficulties in the finite-sample setting and to motivate further investigation of this important extension. **Regarding Bayesian methods:** We thank the reviewer for highlighting relevant Bayesian literature.
We emphasize our approach is complementary—not competitive—with Bayesian methods: the main theoretical contributions up to Section 3 (two-thirds of the paper) remain neutral regarding Bayesian or frequentist approaches, focusing instead on the general roles of prediction sets in risk-averse decision-making. The equivalence formulation (Theorem 2.3) and the optimal prediction set characterization (Theorem 3.2) assume complete knowledge of the underlying distributions, applicable equally to Bayesian posterior distributions (e.g., from Gaussian Processes). In scenarios where Bayesian models (e.g., Gaussian Processes) approximate the distributions well enough, one can directly use our theoretical results without employing the finite-sample calibration of Section 4. Alternatively, even when the precision of Bayesian assumptions is uncertain, one can still start from Bayesian posteriors and further calibrate prediction sets using our approach, ensuring robust safety guarantees. We will explicitly clarify these points in the revised manuscript, along with a discussion of the existing Bayesian approaches, including the ones suggested by the reviewer.
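The max-min decision rule discussed in this rebuttal (Proposition 2.2) is easy to state concretely for finite action and outcome spaces: act as if the worst outcome inside the prediction set will occur. The utility table and prediction sets below are illustrative placeholders, not from the paper.

```python
import numpy as np

def maxmin_action(utility, pred_set):
    """Risk-averse policy induced by a prediction set C(x): choose the
    action maximizing the worst-case utility over outcomes in C(x).
    `utility[a, y]` is the utility of action a under outcome y."""
    worst = utility[:, list(pred_set)].min(axis=1)  # min over y in C(x)
    return int(np.argmax(worst)), float(worst.max())
```

With a risky action (high reward under one outcome, large loss under another) and a safe fallback, a wide prediction set pushes the rule toward the fallback, while a singleton set licenses the risky action.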
Summary: This paper studies the decision-theoretic foundations of conformal prediction sets. It shows that prediction sets characterize the optimal strategy of risk-averse decision-making agents, and then connects it to a specific form of conformal prediction sets. Besides population-level characterization, based on the optimal prediction sets for this specific risk-averse strategy, methods for building finite-sample valid conformal prediction sets are established. The efficacy of the proposed methods is demonstrated by diverse numerical experiments. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. The theoretical results appear correct to me. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. I reviewed the proof in the supplementary material. Relation To Broader Scientific Literature: How to use conformal prediction for actionable, transparent and reliable decision making is an important yet less noticed problem in the literature. This paper lays out theoretical results that may help advance research in this direction. Essential References Not Discussed: No to my current knowledge. See "questions" part for some conceptually related works. Other Strengths And Weaknesses: This paper is sound in general, and the results are interesting. However, I am not fully convinced by the marginal objective which is fundamental to the results throughout. Please see the "questions" section. Other Comments Or Suggestions: N/A Questions For Authors: 1. Is the RA-DPO formulation equivalent to the per-x optimization? If so, it would be nice to point that out. (My understanding is no?) 2. Does the fact that C(x) is a random (set-valued) function change the interpretation of the minimax optimal policy result? How should one interpret these results in the context of conformal prediction, given that conformal prediction set is a random mapping? 3. Why is the objective chosen as the expectation of the lower bound $\nu(X)$? 
It seems risk-averse agents may also consider a tail-probability related quantity in $\nu(X)$, or the infimum of $\nu(X)$ over $X$? 4. Related to Question 1, changing from per-x problem to the marginal formulation makes the objective and constraints less interpretable. For example, asking for a marginally valid lower bound $\nu(X)$ gives opportunities to sacrifice some "hard" instances $X$, which (considering the average of $\nu(X)$) leads to something weird (equation 12), where $t(X)$ can be unequally distributed across $X$. I'm not sure whether this is still fully meaningful from a decision making perspective. Would it be possible to derive a marginal problem which is equivalent to the per-x ones? 5. In Section 4, instead of using a density estimator (which can be difficult especially for continuous outcomes), can one just directly estimate the conditional quantile of the utility for any action $a$, and let the prediction sets prioritize high-utility regions? 6. I like the perspective of decision making with conformal prediction, which is a prevalent issue but has thus far mostly been implicit and heuristic. I wonder how this perspective is related to several recent works with related concepts, where people use selective inference ideas to "pick out" units to act upon with false discovery/coverage rate control (e.g., [1,2]), or build prediction sets after decision making processes that have picked out some interesting units (e.g., [3,4]). [1] Jin, Ying, and Emmanuel J. Candès. "Selection by prediction with conformal p-values." Journal of Machine Learning Research 24.244 (2023): 1-41. [2] Gazin, Ulysse, et al. "Selecting informative conformal prediction sets with false coverage rate control." arXiv preprint arXiv:2403.12295 (2024). [3] Bao, Yajie, et al. "Selective conformal inference with false coverage-statement rate control." Biometrika 111.3 (2024): 727-742. [4] Jin, Ying, and Zhimei Ren. 
"Confidence on the focal: Conformal prediction with selection-conditional coverage." arXiv preprint arXiv:2403.03868 (2024). Code Of Conduct: Affirmed. Overall Recommendation: 4
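The reviewer's contrast between per-$x$ and marginal guarantees (Questions 1 and 4) can be illustrated with a standard split-conformal toy example (a generic sketch, not the paper's calibrated algorithm): coverage holds on average over $X$ even though heteroscedastic noise makes some instances systematically harder.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_conformal_qhat(abs_residuals, alpha=0.1):
    """Conformal quantile of |y - yhat| on calibration data, with the
    standard finite-sample (n + 1) correction."""
    n = len(abs_residuals)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(abs_residuals, level, method="higher")

# Toy model: Y = X + noise whose scale grows with X; predictor yhat(x) = x.
n_cal, n_test = 2000, 2000
x_cal = rng.uniform(0, 1, n_cal)
x_test = rng.uniform(0, 1, n_test)
y_cal = x_cal + rng.normal(0, 0.1 + x_cal)
y_test = x_test + rng.normal(0, 0.1 + x_test)

qhat = split_conformal_qhat(np.abs(y_cal - x_cal), alpha=0.1)
marginal_coverage = np.mean(np.abs(y_test - x_test) <= qhat)
# marginal_coverage is close to 0.90, yet large-x instances are covered
# less often than small-x ones (the reviewer's "hard instances" concern).
```

Group-conditional calibration, as discussed in Remark 2.1 of the paper, is one way to restore validity within such subpopulations.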
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their careful and detailed evaluation, their supportive stance toward our work, and their insightful and constructive questions, which allow us to clarify and deepen our results significantly. **Questions 1, 3, and 4:** Thank you for these great questions! As the reviewer rightly points out, the most practically meaningful objective is the per-instance optimization (Eq. 2), which ensures optimal value-at-risk guarantees conditional on each individual instance $x$. However, there are two main considerations justifying our chosen marginal approach (RA-DPO): Distribution-free, finite-sample methods generally cannot produce simultaneous per-$x$ guarantees without additional assumptions. Hence, some form of marginalization is inevitable. Our specific choice, $E[\nu(X)]$, is deliberate: there is no canonical objective, so we had to fix one. We also agree with your observation that marginal guarantees might indeed result in uneven coverage across individual instances. To address precisely this issue, we included Remark 2.1, pointing toward extensions involving group-conditional guarantees, where validity can be enforced within predefined groups of covariates (e.g., patient demographics). Such group-based guarantees enable practitioners to ensure that important covariate subgroups retain meaningful coverage\risk. In the revised version, we will: 1) Extend Remark 2.1 to clearly show which parts of our theory can be systematically extended to group-conditional scenarios and which present challenges. 2) Expand our "Future Work" section to highlight that while Sections 2 and 3 extend naturally to $m$-groups setting (yielding an m-dimensional characterization), the finite-sample algorithm in Section 4 faces additional challenges due to the complexity of calibrating an m-dimensional vector (along with potential approaches to address these challenges).
It is also worth mentioning that moving to group-conditional guarantees also makes our framework less sensitive to the choice of $E[\nu(X)]$, as eventually, in the fully conditional setup, the problem reduces to per-$x$ problems. We will explicitly clarify these points, naming the deliberate choice of $E[\nu(X)]$ compared to alternatives and the important role of group-conditional guarantees for a more meaningful decision-making pipeline, in Section 2 of our revised version. **Question 2:** To clarify, we focus on deterministic prediction sets in this paper, meaning a deterministic map from $X$ to sets. Also, in CP, prediction sets are deterministic when the calibration data is fixed (which is the case in practice). Due to their mathematical convenience, CP guarantees are usually formulated as an expectation over the calibration data (similar to our Theorem 4.1), but one can also derive PAC guarantees for the same algorithm, which provide coverage guarantees conditional on the calibration data. We will discuss this important matter in the revised version. **Question 5:** Thank you for highlighting this important practical question. Indeed, in Section 4, our current algorithm assumes a predictive model providing softmax probabilities, implicitly targeting classification tasks. Several promising approaches can be adopted in practice: As you suggest, one natural approach would involve training separate quantile-prediction models for each action and then choosing among them. Alternatively, a single regression model predicting the maximum quantile over all actions (as defined in Eq. 9) could also be learned directly. Both of these ideas are similar to Conformalized Quantile Regression (CQR). Other methods, such as regression-diffusion models that generate multiple samples per instance, could also approximate the quantiles in Eq. 9. 
We will leverage the additional two pages allowed to explicitly discuss these practical regression approaches and provide guidelines for practitioners. Thanks again for suggesting an interesting approach for handling regression. **Question 6:** Thank you for bringing up this valuable literature. We see selection-conditional conformal prediction as a complementary line of work to ours. In the revised version, we will include a paragraph discussing this direction, including the works you pointed out ([1-4]). Briefly, selective inference addresses scenarios where multiple prediction sets are constructed, and one selects a subset of these sets (e.g., choosing proteins with high predicted affinity), which then creates selection bias that one has to account for. In contrast, our current formulation constructs a single prediction set per instance $x$, with the action directly derived via a max-min strategy from that set, thereby avoiding selection bias. An interesting direction for future work would be to look at the intersection of these lines, where one might want to select a subset of actions while both remaining statistically valid and optimizing for a form of value at risk.
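The sampling-based approximation mentioned at the end of this rebuttal can be sketched in a few lines. This is a hypothetical illustration, not the paper's algorithm: the function and variable names, the Gaussian "generative model" stand-in, and the quadratic utility are all assumptions for the example.

```python
import numpy as np

def max_quantile_utility(y_samples, actions, utility, t):
    """Monte Carlo sketch of the quantity in Eq. (9):
    max_a quantile_{1-t}[u(a, Y) | X = x], approximated from samples of
    Y | X = x (e.g., drawn from a generative regression model)."""
    per_action = []
    for a in actions:
        utils = np.array([utility(a, y) for y in y_samples])
        per_action.append(np.quantile(utils, 1 - t))  # (1-t)-quantile of utility
    best_idx = int(np.argmax(per_action))
    return per_action[best_idx], actions[best_idx]

# Toy illustration (all choices below are assumed, not from the paper):
rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, scale=1.0, size=5000)  # stand-in for model samples of Y | x
acts = np.linspace(0.0, 4.0, 41)                     # candidate real-valued actions
u = lambda a, y: -(a - y) ** 2                       # higher utility is better
val, best = max_quantile_utility(samples, acts, u, t=0.9)
```

For the symmetric utility and sampling distribution above, the maximizing action lands near the center of the sampled outcomes, illustrating how the per-action quantile computation selects a robust action without fitting separate quantile models.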
Summary: This paper aims to address three questions at the intersection of conformal prediction (CP) and decision making: (1) Understanding what type of uncertainty quantification is best for risk-averse decision makers, (2) How risk-averse decision makers should use prediction sets (ie, what policy they should use, given a prediction set with marginal coverage validity), and (3) How to design CP sets for such risk-averse decision makers. In this paper, “risk-averse decision making” refers to selecting an action $a$ in response to context $x$ that maximizes the “value at risk” $\nu_{\alpha}(a; x)$: that is, maximizing the smallest value/utility that the agent can expect to receive with high probability $1-\alpha$ (ie, the probability/risk of receiving utility less than $\nu_{\alpha}(a; x)$ is at most $\alpha$). To address questions (1) and (2), the authors provide analysis and discussion that connects the objective of (marginal) risk-averse decision making (i.e., Risk Averse Decision Policy Optimization or “RA-DPO”) with one of risk-averse decision-making using conformal prediction sets (i.e., Risk Averse Conformal Prediction Optimization or “RA-CPO”): That is, the authors prove that from any optimal solution of RA-DPO one can construct an optimal solution to RA-CPO with the same utility, and vice versa (Theorem 2.3). To address question (3), the authors provide some analysis for the optimal prediction sets (Section 3) and a practical algorithm building on this analysis (Section 4), and they evaluate the proposed algorithm compared to an expectation-maximizing or “best-response” policy baseline on medical diagnosis and recommender system tasks (Section 5). ## update after rebuttal (Note: I copy-pasted this from my response to the authors' rebuttal.) Thank you for your detailed response to my questions and concerns! 
In particular: The concrete changes that the authors describe address the main concern I had around the framing and references; the expanded remark and discussion they describe on the distinction between marginal and conditional seems like it will be valuable for future readers (and avoiding common points of confusion on this distinction); and the explanation regarding why selection bias is avoided (ie, the quantiles are taken for each action [based on softmax outputs from the model], independently of the calibration data) was helpful to me. Re notation clarity, I also appreciate the clarification about why $\nu$ and $\nu_\alpha$ are distinct (ie, the first being the optimization variable), and the different roles of $t$ and $\alpha$. I still feel like $\theta$ was perhaps unnecessary notation, and maybe something clearer could be achieved with something like $\nu_t$ or $\nu_{1-t}$ (ie, only changing the subscript variable), but this is a stylistic comment and ultimately up to the authors. A proof-read for a camera-ready version could help double-check that all notation and new terms (eg, $\nu$ vs $\nu_\alpha$) are clearly introduced with their distinctions, and generally to address other points of confusion/questions among the reviews. I'm happy to update my score--congrats on a nice paper! Claims And Evidence: Mostly yes: All of the main formal theoretical claims appear to be solid, the experimental evaluations appear to be reasonable and sound, and the high-level takeaway from the paper should be very valuable to the broader CP and ML communities (i.e., roughly that “prediction sets are a good form of uncertainty quantification for risk-averse decision making”). 
However, I have a couple concerns or questions about main claims: - **(1) Ensuring proper contextualization relative to related work:** For example, although there are many conformal prediction papers cited in the Related Work (Sec 1.1) and Further Related Work (Appendix A), I do not see any references to or discussion of classic literature on risk-averse statistical decision making, which there is much of, e.g., in economics (see “Relation To Broader Scientific Literature” and “Essential References Not Discussed”). Additionally, the Further Related Work may miss some CP references more closely related to decision making while instead citing many CP papers that are influential but not as relevant to the current paper. - **(2) “Minding the gap” between marginal and conditional objectives/decision making:** The paper does acknowledge the gap between the *conditional* objectives/decision making/coverage guarantees that one would ideally like and the “Marginal Version” (see Sec 2) that is realized in practice with CP, but there are some parts of the methods section where this distinction seems less clear, and the paper seems to lack a thorough discussion of the limitations of marginal decision-making, which would be valuable for readers, practitioners, and future researchers building on this work. - **(3) Question about whether using same calibration set to select action affects CP guarantee:** See “questions for authors” Methods And Evaluation Criteria: Overall, the experimental settings and evaluations seem to sufficiently support the main message of the paper, that is regarding how prediction sets and the proposed Risk-Averse Calibration (RAC) algorithm can be used for risk-averse decision making. 
That said, it appears that the only baseline is an expectation-maximizing “best response” policy, and it would likely improve the paper if the proposed RAC algorithm were compared with other methods that use CP for decision-making--such as methods proposed in Vovk and Bendtsen (2018), who have a method based on conformal predictive distributions for asymmetric classification, similar to the experimental setting in this paper, or perhaps methods in Lekeufack et al. (2024), if relevant--however, since I view the main message of the paper to be one that’s more broad and conceptual regarding prediction sets, I don’t view this as absolutely necessary. Theoretical Claims: Yes, I made an effort to check the proofs and they appear to be sound, though it is possible there are some details I did not understand given time constraints. Experimental Designs Or Analyses: I overall reviewed the experimental settings and they seem to be reasonable. E.g., it was important that the experiments not only be regression or classification, but also have some sort of actions and corresponding utilities (e.g., Table 1 for medical diagnosis experiment). Supplementary Material: Yes, I reviewed Appendix A (Further related work) and Appendix B (Proofs). Relation To Broader Scientific Literature: There is growing work on leveraging conformal prediction (CP) to improve decision making, and understanding the theory of doing so; separately, there is also a large literature on risk-averse statistical decision making, though admittedly much of it is in the economics, finance, and game theory literature (eg: https://en.wikipedia.org/wiki/Risk_aversion). 
In my view, this paper draws insightful and useful connections between CP/prediction sets and risk-averse statistical decision making, although it unfortunately seems to not discuss this latter literature at all (aside from citing Duffie and Pan 1997 to introduce the definition of “Value at Risk”); and regarding the CP literature, there are arguably highly relevant references missing, and superfluous references provided. Please see the following “Essential References Not Discussed” Section for specific details. Essential References Not Discussed: **Literature on risk-averse decision making:** Apart from citing Duffie and Pan 1997 to introduce the definition of “Value at Risk” in Section 2, there appears to be no mention or discussion of prior literature on risk-averse decision making (eg, https://en.wikipedia.org/wiki/Risk_aversion); I think it is essential to at least mention/acknowledge this literature, if not discuss some connections more thoroughly in the appendix. Eg, whereas currently Sec 2 “Fundamentals of Risk Averse Decision Making” could be read by many reasonable readers as if this paper is inventing this topic, I think there should be some mention of prior literature such as the following, and potentially further discussion in appendix. Following are some examples: - *Arrow-Pratt measure of absolute risk aversion / coefficient of absolute risk aversion:* - Arrow, K.J. (1965) Aspects of the Theory of Risk Bearing - Pratt, J. W. (1978). Risk aversion in the small and in the large. In Uncertainty in economics (pp. 59-79). Academic Press. - *Fundamental connections between second-order stochastic dominance and risk aversion (e.g., see https://en.wikipedia.org/wiki/Stochastic_dominance#Second-order):* - Hadar, J., & Russell, W. R. (1969). Rules for ordering uncertain prospects. The American economic review, 59(1), 25-34. - Meyer, J. (1977). Second degree stochastic dominance with respect to a function. International Economic Review, 477-487. 
**Arguably highly relevant CP+decision making papers not cited/discussed:** - *Papers on CP for counterfactual treatment-effect estimation for personalized decision making [paper-specific context in brackets]:* - [CP for counterfactual treatment assignment decisions] Lei, L., & Candès, E. J. (2021). Conformal inference of counterfactuals and individual treatment effects. Journal of the Royal Statistical Society Series B: Statistical Methodology, 83(5), 911-938. - [Extension for robustness to potential confounding] Yin, M., Shi, C., Wang, Y., & Blei, D. M. (2024). Conformal sensitivity analysis for individual treatment effects. Journal of the American Statistical Association, 119(545), 122-135. - [Extension for robustness to potential confounding] Jin, Y., Ren, Z., & Candès, E. J. (2023). Sensitivity analysis of individual treatment effects: A robust conformal inference approach. Proceedings of the National Academy of Sciences, 120(6), e2214889120. - *Papers on CP under distribution shifts induced by decision making/actions of an AI/ML agent (e.g., black-box optimization or contextual bandits):* - [CP for contextual bandits decision making, off-policy setting] Taufiq, M. F., Ton, J. F., Cornish, R., Teh, Y. W., & Doucet, A. (2022). Conformal off-policy prediction in contextual bandits. Advances in Neural Information Processing Systems, 35, 31512-31524. - [Theory & expts on CP under feedback-loop shifts induced by active learning or black-box optimization ML decision process] Prinster, D., Stanton, S., Liu, A., & Saria, S. (2024, July). Conformal validity guarantees exist for any data distribution (and how to find them). In Proceedings of the 41st International Conference on Machine Learning (pp. 41086-41118). - [Empirically studies CP under feedback-loop shifts induced by BayesOpt decision making] Stanton, S., Maddox, W., & Wilson, A. G. (2023, April). Bayesian optimization with conformal prediction sets. 
In International Conference on Artificial Intelligence and Statistics (pp. 959-986). PMLR. - [CP under one-step “feedback covariate shift” decision making] Fannjiang, C., Bates, S., Angelopoulos, A. N., Listgarten, J., & Jordan, M. I. (2022). Conformal prediction under feedback covariate shift for biomolecular design. Proceedings of the National Academy of Sciences, 119(43), e2204569119. It feels important to mention many if not all of these, especially given that in Appendix A (second to last paragraph) there currently appears to be superfluous (ie, 5 whole lines of in-text citation) references to many CP papers that are (in my view) far less related to decision making than the ones I have provided here.... Other Strengths And Weaknesses: **Other Strengths:** Overall, the paper provides an insightful perspective on the connections between prediction sets and risk-averse decision making. Additionally, the writing style is very appealing for a wide audience, with broader implications that can be understood beyond the scope of conformal prediction, so it seems like it has potential for high impact. **Other Weaknesses / opportunities for improvement:** - *Occasionally oversimplified notation: Eg, $\nu(X)$ should ideally be $\nu_{\alpha}(X)$ in many instances including RA-DPO def:* The paper seemingly makes an effort to simplify notation for readability, but at times this may be oversimplified, which can make close-reading of technical details difficult, confusing, or perhaps misleading. For example, the value-at-risk quantity $\nu_{\alpha}(a; x)$ is defined to depend on $\alpha$, $a$, and $x$, but sometimes it is stated only as $\nu$, sometimes it is stated as $\nu_{\alpha}(x)$, and sometimes it is stated as $\nu(x)$. Sometimes this is done explicitly (eg, Eq (2)), but other times, eg, the $\alpha$ is dropped casually without explanation. 
In my view, this is not without consequence, because I think that recognizing that $\nu(x)$ actually depends on $\alpha$ as $\nu_{\alpha}(x)$ makes Theorem 2.3 far less “surprising.” That is, just prior to Theorem 2.3, the paper states “One might expect a-priori that passing from the actual distribution to a lossy prediction set representation would discard information that is critical to finding the optimal policy. However, the following theorem shows, perhaps surprisingly, that this is not the case…”; however, once one realizes that in the RA-DPO def $\nu(X)=\nu_{\alpha}(X)$, it is easier to see how RA-DPO is effectively only leveraging the level-$\alpha$ quantile of the full utility distribution, i.e., the same information provided by the prediction set. - Limitations of focusing on marginal vs conditional objective could be discussed further. - *Potentially redundant notation (?):* It is unclear to me why $\theta(x, t) := \max_{a\in A} \mathrm{quantile}_{1-t}[u(a, Y) \mid X=x]$ (Eq. (9)) needed to be introduced as a new quantity that is distinct from $\nu_{\alpha}(x) := \max_{a\in A} \mathrm{quantile}_{\alpha}[u(a, Y) \mid X=x]$ (Eq. (2)), as clearly they are equivalent when $1-t=\alpha$. An explanation is welcome, but so far my impression is that this difference in notation has only caused confusion for me, and perhaps served to make the reformulation in Eq. (12) seem more “surprising”... - *Potentially missed opportunity to draw connections with choice of conformity score(?):* Where the prediction set $C^*(x)$ is defined in Eq. (13) and similarly for $\hat{C}(x;\beta)$ at the end of page 6, to a CP-minded reader, it seems like the optimal/proposed prediction sets are defined in a way that would be equivalent to having conformity scores corresponding to the utility function, for an optimal/chosen action (or from a pessimistic viewpoint, could be a nonconformity score corresponding to a cost/loss function). 
In other words, it seems like a takeaway from the paper could be rephrased for a CP-minded person as ‘design your (non)conformity score to correspond to utility (or loss) that you care about for decision making, and then pick action/policy s.t. quantile on the utility (or loss) is best.’ Other Comments Or Suggestions: - Subheadings could sometimes be added or revised for clarity. Eg, prior to reading the sections closely, I initially thought sections 2.1 (“A Prediction Set Perspective”) and 2.2 (“An Equivalent Formulation via Prediction Sets”) seemed very similar. - In Section 3, the paragraph before Prop 3.1, should $Pr(Y \in C(x))\geq t$ actually be $Pr(Y \in C(x) | X = x)\geq t$? As the latter is what is written in Prop 3.1, and in the paragraph it says the probability is over $p(y|x)$. - In Section 3, I found it confusing whether $t$ needed to be referring to conditional coverage, or whether it could be marginal coverage. That is, $t$ is introduced as “conditional coverage probability,” but then in (12) it is used as a marginal coverage constraint. Questions For Authors: **Question about whether there is selection bias that may affect CP guarantee (?)** At a high level, eg based on the max-min decision rule, Figure 1, and the prediction sets defined in Eq (11) and (13), it seems like the big picture story from a CP view is: (i) Choose conformity scores that correspond to utilities, (ii) for each action, compute the utility distribution over the calibration data and take an appropriate quantile to get the value-at-risk for that action, and (iii) select the action with maximum estimated value-at-risk for the worst-case label. Is this roughly accurate, or is there a more accurate way to rephrase the proposed procedure in the terminology connected to conformity scores? 
In either case, I am wondering if there is an issue of creating selection bias (or if so, how it is avoided); that is, when an action is taken based on observing the score distribution over the same holdout calibration set, I think this could invalidate CP guarantees, as in the potentially analogous case of selecting a model based on efficiency of CP sets, eg, see: Liang, R., Zhu, W., & Barber, R. F. (2024). Conformal prediction after efficiency-oriented model selection. arXiv preprint arXiv:2408.07066. I appreciate and look forward to your response! **Rationale for current recommendation:** Broadly, I view the paper as having an insightful contribution connecting prediction sets to risk-averse decision making. I have attempted to do a detailed review of the paper for correctness, clarity, and relation to broader literature, among other factors, in part due to its potential. I want to recommend accepting this paper due to the valuable perspective it offers and broad potential appeal, but at the moment my concerns make me think it is most prudent not to, at least until I have heard more from the authors at the discussion period. To reiterate my concerns, the main one is about further stating relation to literature (especially prior work on risk-averse decision making, and secondarily about CP+decision making papers that may be more relevant than many CP refs provided in App A: Related Work); other concerns are those stated around some notational clarity and the above question on possible selection bias. If these concerns are addressed, I’d be happy to consider improving my score. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their detailed and insightful feedback. We are grateful for the positive recognition of our core contributions—especially the novel linkage between prediction sets and risk-averse decision-making. In what follows, we detail the concrete actions we have taken to address the reviewer’s concerns. **References:** The reviewer’s comment on expanding the references is extremely valuable. We agree that including prior work from the economics literature on risk aversion is crucial to understanding the challenges and collective effort in risk-sensitive decision making. In the revised version: 1) We have expanded the Related Work section by adding two new paragraphs. One discusses the economics and game theory literature (including all the suggested references) to highlight classical approaches to risk aversion. The other covers additional works, including the ones mentioned by the reviewer, in CP that address decision making beyond risk control. 2) We fully acknowledge that we are by no means claiming to have invented the topic of risk-averse decision-making, but rather are drawing on concepts developed in the economics literature. To better reflect this, we have renamed the section “Fundamentals of Risk-Averse Decision Making” to “Preliminaries of Risk-Averse Decision Making.” In this section, we now include an additional paragraph—along with appropriate citations to the economics literature—that briefly outlines alternative formulations of risk aversion. We again thank the reviewer for highlighting relevant work from the literature, which has been very helpful in improving our presentation. While space constraints prevent us from discussing detailed connections to these works, we would be happy to explore them further in follow-up rounds. **Marginal and Conditional Objectives:** We thank the reviewer for emphasizing this important issue. 
Although Remark 2.1 highlighted this gap, space constraints limited our discussion in the original submission. In finite sample settings it is impossible to learn an optimal action for every individual scenario without some type of averaging over the covariate space. A common approach in CP is to use group-conditional guarantees, where validity is ensured on average over pre-specified groups (e.g., based on patient age or sex). We have: 1) Extended Remark 2.1 to clearly delineate which parts of our theory can be systematically extended to group-conditional scenarios and which present challenges. 2) Expanded the Future Work section to discuss these issues in detail. Specifically, we explain that while the results in Sections 2 and 3 extend naturally to the group-conditional setting (resulting in an m-dimensional characterization), the finite sample algorithm in Section 4 becomes more challenging due to the need to calibrate an m-dimensional vector. **Conformity Scores and Selection Bias:** We appreciate the reviewer’s thoughtful questions on these topics. Regarding the connection to conformity scores, it is crucial to note that our optimal prediction sets are not derived from calibrating a threshold for a conformity score. As shown in Section 3, the optimal sets do not form a nested sequence as the miscoverage threshold varies. In contrast, optimal sets for objectives like minimizing average set size are typically characterized by a threshold rule (e.g., $p(y∣x)≥q$). We will add a remark to clarify this technical difference, which is crucial for deriving optimal actions for risk-averse agents. We do not suffer from selection bias. We construct one prediction set per test instance and apply the max-min rule directly—rather than selecting among multiple candidate sets. The calibration procedure involves a one-dimensional parameter, $\beta$, which is determined using calibration data to ensure valid coverage (see Theorem 4.1). 
Although our calibration phase involves computing a quantile for each action (as in eq. (15)), these computations rely on the softmax outputs from the model and remain independent of the calibration data, preserving the statistical guarantees. **Notational Clarity:** Note that $\nu$ and $\nu_\alpha$ are distinct: $\nu_\alpha$ is defined in Eq.(2) while $\nu$ is the optimization variable in RA-DPO. In RA-DPO, the decision maker jointly optimizes for a utility certificate ($\nu$) and an action policy ($a$). This distinction clarifies why Theorem 2.3 is non-trivial, as it is not obvious that the optimal utility certificate can be obtained as the max-min value of a prediction set. Regarding the notations $\theta$ and $\nu_\alpha$, while they are equivalent when $1-t = \alpha$, it is important to note that $t$ and $\alpha$ play different roles. Here, $\alpha$ is the fixed miscoverage threshold (as in standard CP), whereas $t$ is a tunable variable during calibration to ensure that the final prediction sets achieve the desired coverage. We will clarify this point in the revised version to prevent confusion.
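To make the per-action quantile computation described above concrete, here is a minimal sketch assuming a finite label set, a utility table `U[a, y]`, and softmax probabilities from the model. The names and toy numbers are illustrative assumptions; this is not the paper's exact calibrated algorithm (the calibration of $\beta$ is omitted).

```python
import numpy as np

def action_value_at_risk(probs, U, alpha):
    """For each action a, compute quantile_alpha[u(a, Y) | x] under the
    model's softmax distribution `probs` over labels: sort utilities
    ascending, accumulate probability mass, and take the first utility
    whose cumulative mass reaches alpha. Illustrative sketch only."""
    n_actions = U.shape[0]
    var = np.empty(n_actions)
    for a in range(n_actions):
        order = np.argsort(U[a])            # utilities in ascending order
        cum = np.cumsum(probs[order])       # P(u(a, Y) <= sorted value)
        idx = np.searchsorted(cum, alpha)   # first index with cum >= alpha
        var[a] = U[a][order[min(idx, len(cum) - 1)]]
    return var

# Toy example (hypothetical): 2 actions, 3 labels.
probs = np.array([0.7, 0.2, 0.1])           # model softmax over labels
U = np.array([[1.0, 0.0, -5.0],             # action 0: high reward, rare disaster
              [0.5, 0.5,  0.3]])            # action 1: uniformly safe
var = action_value_at_risk(probs, U, alpha=0.2)
risk_averse_action = int(np.argmax(var))
```

In this toy setting the risky action 0 has a higher expected utility, but its value-at-risk is dragged down by the low-probability disaster outcome, so the value-at-risk criterion prefers the safe action 1; note that the quantiles here depend only on the model's softmax outputs, not on the calibration data.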
Summary: The paper studies the decision-theoretic properties of conformal prediction sets. For risk-averse agents who want probabilistic certificates on certain actions for the utility to be greater than some value with high probability, the paper suggests that this goal can be accomplished via conformal prediction sets that satisfy marginal coverage guarantees; in fact, they show that both notions can be derived from each other, establishing that prediction sets are fundamental objects for a risk-averse agent. The paper then proposes an algorithm to devise prediction sets that result in better utility for risk-averse agents, with experiments supporting the claims. Claims And Evidence: The paper provides enough support for the claims made. However, I'd appreciate more justification for the max-min policy for the agent (Equation 7). While it makes sense, and is frankly an intuitive thing to do, I assume a decision-maker might be interested in resolving the uncertainty over $\Omega$ in a different way (e.g. from here https://arxiv.org/abs/2204.11318). So it makes sense to justify the max-min rule as employed, i.e., why that rule is used. I'll frame my question differently: Instead of "we are interested in the policy that is minimax-optimal" (line 191), I would appreciate an answer to "why are we interested in the policy that is minimax-optimal?" Methods And Evaluation Criteria: Yes, the paper tests its methodology appropriately. Theoretical Claims: I have verified all the statements to the point that I agree on a high level that they are correct, but haven't fully verified the notational writing. However, I haven't formally checked the duality arguments (or the validity of them) that lead to Equation 14. Experimental Designs Or Analyses: The experiments sound convincing, and make sense. However, I'd appreciate some reasoning why RAC better aligns nominal miscoverage with realised miscoverage (Figure 2d), as compared to other scores. Supplementary Material: Skimmed it. 
Relation To Broader Scientific Literature: The paper is certainly relevant to the ICML community. The paper advances the decision-theoretic interpretation of conformal sets, as in how to use them, and devises an algorithm to use them better in that sense. Essential References Not Discussed: None Other Strengths And Weaknesses: The paper could be heavily improved in terms of writing, in particular Section 3 and Section 4. More intuition can be provided behind Algorithm 1. I'd appreciate an explanation of how Algorithm 1 gets rid of specifying the utility, as everything before it ($\hat{\theta}$ and $\hat{a}$) is defined in terms of the underlying utility of the decision-maker agent. Other Comments Or Suggestions: None Questions For Authors: I get the approach requires specifying the miscoverage level $\alpha$. Is there a way for it to be connected to the level of risk aversion of a decision-maker, or how should the decision-maker choose $\alpha$ depending on their level of risk-aversion? I see that RAC starts matching the averaged realised utility for nominal miscoverage level around 0.1 (Figure 2c), which makes sense, as the prediction set sizes might shrink and, in the extreme case, only the true label will be in the prediction set, in which case the expected utility and the utility realised by RAC will coincide? But it does not serve the risk-averse individual as they are not concerned with the average utility? I'd appreciate any clarification on this; it could be possible that my question is poorly worded. I guess I'm interested in how to design sets that balance the risk aversion of a decision-maker with their utility maximisation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful comments, constructive feedback, and positive evaluations of our paper. **On the motivation for minimax optimality:** This is an important point, and we appreciate the opportunity to clarify our choice. As you correctly pointed out, minimax optimality is indeed not the only conceivable approach. However, our rationale for selecting it is both intuitive and theory-backed, which can be summarized as follows: 1) From an intuitive standpoint, minimax optimality aligns well with the perspective of a risk-averse decision maker. Specifically, the set $\Omega$ captures all plausible scenarios faced by the decision maker, and a risk-averse agent would reasonably seek actions that are robustly optimal in the worst-case realization among these scenarios. This conservative stance naturally leads to the adoption of a minimax criterion. 2) Perhaps a more complete picture emerges by looking at Theorem 2.3. There we prove the max-min rule (introduced in Section 2.1), combined with optimized prediction sets, matches the optimal solution to the risk-averse decision problem (RA-DPO). In light of that, the minimax optimality of the max-min rule (proved in Proposition 2.2) states that for any fixed marginally valid prediction set, even if the prediction sets are not optimally derived, the max-min rule is still a natural choice. This is particularly important as in practice, due to the limitations of finite sample data, we can only have "approximately" optimal prediction sets. Nevertheless, we fully acknowledge your remark that alternative decision-theoretic approaches (such as those discussed in the paper you mentioned, which we will cite in the revised version) are also valuable and potentially yield interesting complementary results. We will incorporate a remark explicitly addressing this. 
**On the comment regarding improvements in presentation:** We completely agree that the clarity and intuitive presentation of our results could significantly benefit from additional explanations. Indeed, this has largely been limited by the strict page constraints. To address this, we will leverage the two extra pages allowed for the revised version to provide detailed intuition and clearer explanations, especially regarding: 1) The duality derivations and additional implications arising in Section 3. 2) Intuitive justification and step-by-step reasoning behind Algorithm 1 in Section 4. Regarding Algorithm 1's connection to utility, the utility function enters through the definition of $\hat{C}$ just before Algorithm 1. To summarize the intuition briefly here: Algorithm 1 uses the predictive model to approximate key quantities required for deriving optimal prediction sets, up to a scalar calibration parameter obtained from calibration data. As highlighted in the paragraph following Corollary 4.2, the safety guarantees of actions produced by our framework hold regardless of predictive model quality. Nonetheless, higher-quality models produce actions closer to the optimal choices among the safe ones. **On RAC’s coverage alignment:** We believe this phenomenon arises due to finite-sample effects. For instance, in the same Figure 2(d) on the left, one can see that RAC's miscoverage is sometimes better and sometimes worse. Moreover, it is important to emphasize that RAC differs algorithmically from conventional CP methods, which rely primarily on thresholding conformity scores. Consequently, finite-sample behaviors may differ across these approaches. However, as the data size grows sufficiently large, we anticipate these differences will diminish, and coverage behaviors will become comparable across all CP methods, given their common coverage guarantees. 
**On tuning the miscoverage level $\alpha$:** Thank you for this insightful question; this is indeed a critical practical concern. In our framework, the value of $\alpha$ is treated as a given, provided alongside the utility function. Accordingly, RAC takes $\alpha$ as an **input**. As you pointed out, this value should be selected by the decision maker to reflect their degree of risk aversion, and choosing an appropriate $\alpha$, much like specifying a utility function, can be non-trivial in some applications. From a practical standpoint, one can evaluate several candidate values of $\alpha$ using validation data, and the decision maker can select the one that offers the best trade-off between risk aversion and utility. This process can involve inspecting plots similar to those presented in our paper. For example, in the context of medical treatment recommendations, one could tune $\alpha$ by jointly examining Figures 2(b) and 2(c), and choosing a value that strikes the desired balance between maximizing average utility and minimizing critically harmful treatment decisions. We will add a remark in the revised version to clarify this point.

--- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. I agree with it. However, one concern is still not answered. In Figure 2c, as $\alpha$ is increased, RAC approaches the best-response policy in terms of the average realised utility. What is the rationale for that? Could that be clarified?

--- Reply to Comment 1.1.1: Comment: We thank the reviewer for their thoughtful engagement with our work. We appreciate the opportunity to clarify and improve the presentation of our paper. Regarding Figure 2c and the observation that RAC approaches the performance of the best-response method in terms of average utility as $\alpha$ increases, the intuition is as follows: The parameter $\alpha$ controls the level of risk aversion in our RAC method.
With smaller values of $\alpha$, RAC prioritizes avoiding critical errors (situations resulting in extremely low utility) by opting for more conservative actions, typically leading to medium-level utility outcomes. This conservative approach naturally sacrifices the possibility of achieving higher but riskier utilities. However, as we increase the value of $\alpha$, RAC progressively allows for riskier actions by shrinking the size of the prediction sets. Eventually, when these prediction sets predominantly contain only a single item—corresponding to the highest likelihood prediction from the model—the decision-making process of RAC essentially reduces to the best-response policy. In this scenario, the max-min decision rule simplifies directly to choosing the best response to that single prediction. To further illustrate this behavior, we will include the following plot in the revised version of our paper: https://ibb.co/WCRbxwK Specifically, we will provide histograms depicting the full distribution of realized utilities at test time for three variants of RAC (with varying $\alpha$ values) compared directly against the best-response method. These histograms clearly demonstrate how adjusting $\alpha$ effectively controls the tails of the utility distribution and how, with increasing $\alpha$, RAC's realized utilities initially mirror those obtained by the best-response approach but eventually decrease if $\alpha$ is chosen excessively large. It is also important to note that excessively increasing $\alpha$ after some point can actually degrade the average utility. This happens because RAC always ensures the marginal coverage of the sets matches $1 - \alpha$. Therefore, if $\alpha$ becomes too large such that $1 - \alpha$ is less than the model's accuracy, RAC is forced not to choose the single prediction with the highest likelihood—since doing so would yield a marginal coverage greater than $1 - \alpha$. 
Instead, RAC might select the next-best-likelihood prediction or produce empty prediction sets, both of which negatively impact average utility. That is to say, tuning $1-\alpha$ below the test accuracy of the model might not be a good idea. We believe the addition of this plot can enhance the presentation of our paper. Thank you again for your helpful comments and support.
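The collapse of the max-min rule to a best response on singleton prediction sets, as described in the reply above, can be seen in a toy numerical example. The two-action, two-label utility table below is hypothetical (our own invention for illustration, not from the paper):

```python
import numpy as np

# Hypothetical utility table: utility[a, y] = payoff of action a if label y occurs.
utility = np.array([
    [1.0, -5.0],   # risky action: high payoff for y=0, disastrous for y=1
    [0.3,  0.2],   # conservative action: medium payoff either way
])

def max_min_action(prediction_set):
    """Max-min rule: the action whose worst-case utility over the set is largest."""
    worst_case = utility[:, prediction_set].min(axis=1)
    return int(np.argmax(worst_case))

def best_response(y_hat):
    """Best response to a single point prediction."""
    return int(np.argmax(utility[:, y_hat]))

# Wide prediction set (small alpha): the conservative action is chosen.
print(max_min_action([0, 1]))                    # -> 1
# Singleton set (large alpha): max-min coincides with the best response.
print(max_min_action([0]) == best_response(0))   # -> True
```

With the wide set the worst-case utilities are (-5.0, 0.2), so the conservative action wins; with the singleton they are (1.0, 0.3), recovering the best response.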
Improving Out-of-Distribution Detection via Dynamic Covariance Calibration
Accept (poster)
Summary: To reduce redundant information in the covariance matrix in real-time OOD detection, this work proposes adjusting prior geometry based on the input, enhancing sensitivity to OOD samples while preserving essential information for ID classification. Claims And Evidence: Most claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: I think their method is reasonable, and the evaluation criteria used for OOD detection are appropriate. Theoretical Claims: I did not notice any clear errors in their theoretical claims. Experimental Designs Or Analyses: The final results show that their method effectively improves OOD detection. However, the experiment in Section 5.8 does not convincingly demonstrate that their method increases the discrepancy in feature norms between ID and OOD samples. Supplementary Material: I reviewed all the supplementary material. Relation To Broader Scientific Literature: Since Mahalanobis distance [Lee et al.] demonstrated the effectiveness of incorporating information geometry in designing distance-based scores, more studies have attempted to capture more precise geometry matrices $M$. Wang and Ammar integrate information from residual space and principal space to achieve this but remain limited by static geometry. This work introduces input based projection matrix to address this limitation. Lee, Kimin, et al. "A simple unified framework for detecting out-of-distribution samples and adversarial attacks." Advances in neural information processing systems 31 (2018). Wang, Haoqi, et al. "Vim: Out-of-distribution with virtual-logit matching." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. Ammar, Mouïn Ben, et al. "Neco: Neural collapse based out-of-distribution detection." arXiv preprint arXiv:2310.06823 (2023). Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. 
This work proposes adapting the prior geometry based on the input, improving sensitivity to OOD samples while retaining essential information for ID classification. 2. Their idea is reasonable, and their implementation is novel. 3. Experimental results show that their method outperforms SOTA. Weaknesses: 1. Why do the baseline methods differ between the CIFAR benchmark and the ImageNet benchmark? 2. The authors give evidence in Section 5.8 that their projection based on normalized features results in a larger norm for OOD and a smaller norm for ID samples. However, this phenomenon has been widely observed [Park et al.]. Can we verify that this discrepancy is improved by the projection, compared with the feature norm without normalization? Park, Jaewoo, et al. "Understanding the feature norm for out-of-distribution detection." Proceedings of the IEEE/CVF international conference on computer vision. 2023. Other Comments Or Suggestions: 1. Typo: "STA" in Line 67. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's positive recognition of our work, particularly regarding the novelty and reasonableness of the idea, strong performance, and clarity of supporting evidence. Below, we provide detailed, point-by-point responses addressing each comment.

### Other Strengths And Weaknesses:

**C1**: Why the baseline methods provided in CIFAR benchmark and ImageNet benchmark.

**C1-Ans**: The compared methods differ between the CIFAR and ImageNet benchmarks because we cannot access the hyperparameter settings of the absent methods. To address the reviewer's concern, we reimplement VIM and NECO on the CIFAR benchmark and perform a hyperparameter search for them. In addition, we add a gradient-based method in this setting, namely GradNorm [2].

| | CIFAR10 DenseNet (AUROC/FPR) | CIFAR100 DenseNet (AUROC/FPR) |
|-----------|------------------------------|-------------------------------|
| GradNorm | 92.60/24.83 | 79.75/63.08 |
| VIM | 93.44/33.66 | 87.95/51.39 |
| NECO | 94.51/27.92 | 79.79/72.43 |
| ours | 96.83/14.63 | 92.38/29.98 |

**C2**: Can we make sure this discrepancy gets improved with projection, compared with feature norm without normalization.

**C2-Ans**: Thank you for the insightful question. We have added a discussion of the observation of NAN [1] to our experiments. In the following table, we show the results with and without projection to illustrate the effectiveness of the projection.

| | ImageNet-ViT (AUROC/FPR) | ImageNet-ResNet50 (AUROC/FPR) |
|-------------------|---------------|----------------|
| w/o projection | 94.24/27.05 | 81.88/60.49 |
| w projection | 94.27/26.94 | 87.43/51.77 |

In addition, we use the norm of the projected feature as the OOD score to answer this question, as shown in the following table. The results in the table imply that normalization can help improve the discrepancy.
We present our explanation of this phenomenon in the following.

| | ImageNet-ViT (AUROC/FPR) | ImageNet-ResNet50 (AUROC/FPR) |
|-------------------|---------------|----------------|
| w/o normalization | 92.34/34.80 | 62.27/81.95 |
| w normalization | 92.80/33.40 | 83.26/59.74 |

The $L_2$ norm of a feature can also be interpreted as the distance from the feature to the zero point. If features from different semantic distributions have significantly different scales and the center of the ID features is far from the zero point, the feature norm cannot detect OOD samples well, since the zero point itself may be an OOD point. However, normalized features are distributed on the sphere of radius 1 centered at the zero point. As long as the class clusters are not distributed too closely together, the center of the normalized features can easily lie near the zero point, which gives the feature norm a geometrical meaning for OOD detection. If the center of the features is the zero point, the center of the projected features is also the zero point. We assume this is the reason that normalization helps improve the discrepancy and also strengthens the neural collapse phenomenon.

[1] Park, Jaewoo, et al. "Understanding the feature norm for out-of-distribution detection." Proceedings of the IEEE/CVF international conference on computer vision. 2023.
[2] Huang, Rui, et al. "On the importance of gradients for detecting distributional shifts in the wild."

--- Rebuttal Comment 1.1: Comment: The experimental results of ImageNet-ViT with and without projection do not show much difference.

--- Reply to Comment 1.1.1: Comment: Thank you for the constructive feedback. While the improvement on ImageNet-ViT is indeed modest, it becomes more pronounced on ResNet-50 (AUROC: 81.88→87.43, FPR: 60.49→51.77). We hypothesize that ViT's architecture naturally produces a well-conditioned embedding space, thus leaving less room for projection-based refinements.
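The scoring rule discussed above (L2-normalize the feature, project it onto the residual space, and take the norm) can be sketched numerically. This is an illustrative toy version with synthetic features and a residual basis built from the small-eigenvalue directions of the ID covariance; it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for ID features: concentrated in a 2-D principal
# subspace of R^5, with only tiny energy in the remaining (residual) dims.
id_feats = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 5))
id_feats += 0.01 * rng.normal(size=id_feats.shape)

def residual_basis(feats, k_principal):
    """Eigenvectors of the ID covariance with the smallest eigenvalues."""
    _, eigvecs = np.linalg.eigh(np.cov(feats, rowvar=False))  # ascending order
    return eigvecs[:, : feats.shape[1] - k_principal]

B = residual_basis(id_feats, k_principal=2)

def score(f):
    """Norm of the L2-normalized feature projected onto the residual space."""
    f = f / np.linalg.norm(f)        # place the feature on the unit sphere
    return float(np.linalg.norm(B.T @ f))

ood = rng.normal(size=5)             # isotropic: substantial residual energy
id_mean = np.mean([score(x) for x in id_feats])
print(score(ood) > id_mean)          # -> True
```

Because ID features have almost no energy outside the principal subspace, their residual-space norm is near zero after normalization, while an isotropic OOD feature keeps most of its energy there.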
Additionally, prior work [1] suggests that the most distinct components for Mahalanobis scores often lie in the principal space. If the projected (residual) adjustment is relatively small compared to that principal-space effect, gains on already well-conditioned representations (e.g., ViT) may appear modest. Observations from our second table (with/without normalization) further indicate that ViT's feature distribution is already well structured, and the gap between the residual space and the principal space can be large: its richest information is already captured in the principal subspace. We demonstrate this phenomenon with the smallest eigenvalue of the residual space and of the non-residual space in the following. Thus, there is little room for additional improvement via our projection on ViT. Nevertheless, the gains, though smaller, remain consistent, demonstrating the broad applicability of our approach.

| | ViT (residual/non-residual) | ResNet-50 (residual/non-residual) |
|-|-|-|
| smallest eigenvalues | 6.4$e^{-9}$/1.2$e^{-3}$ | 7.8$e^{-7}$/8.2$e^{-6}$ |

[1] Ren, Jie, et al. "A simple fix to Mahalanobis distance for improving near-OOD detection."
Summary: This paper addresses the problem of Out-of-Distribution (OOD) detection, which is critical for ensuring the reliability of AI systems. The authors observe that while existing subspace-based methods use information geometry to detect OOD data, they fail to address the distortion in geometry caused by ill-distributed samples that can arise in training data. To mitigate this issue, the paper proposes a novel approach that dynamically updates the prior covariance matrix using real-time input features. This update reduces the covariance along the direction of real-time input features and constrains adjustments to the residual space. This preserves essential data characteristics and avoids unintended effects on other directions. The method is evaluated on CIFAR and ImageNet datasets, demonstrating improved OOD detection across various models. Claims And Evidence: The central claim that dynamically adjusting the prior covariance matrix improves Out-of-Distribution (OOD) detection is supported by the experimental results. The authors consistently show improved performance on CIFAR and ImageNet datasets across various models, particularly when compared to the Mahalanobis distance. The ablation studies further reinforce the importance of each component of their method (dynamic adjustment, residual space projection, and deviation features). Methods And Evaluation Criteria: The proposed method is reasonable for the problem of OOD detection. The use of covariance matrix adaptation to capture and adjust information geometry is a sound approach. The evaluation criteria (AUROC and FPR95) and benchmark datasets (CIFAR and ImageNet) are standard and appropriate for assessing OOD detection performance. **Experimental Design and Analysis:** The experimental design is comprehensive. The authors compare their method with a wide range of existing OOD detection techniques, providing a strong baseline for evaluation.
They also perform ablation studies and analyze the behavior of key variables (p, q, and vector norms). The analysis is generally sound and provides strong support for their claims. Theoretical Claims: The primary theoretical claim in the paper is presented in Theorem 4.2. This theorem provides the conditions under which the proposed dynamic distance metric is valid. Theorem 4.2: Given a feature f, a non-zero feature a, and a symmetric positive definite matrix Σ, the theorem defines variables p, q, and s based on these inputs and Σ. It then states that under certain conditions involving p, q, and s, the proposed distance metric d(f) is greater than or equal to 0. The proof appears to be mathematically sound and follows a logical progression. Experimental Designs Or Analyses: **1. Experimental Designs** * **OOD Detection Benchmarks:** The authors use two standard OOD detection benchmarks: CIFAR (CIFAR-10/CIFAR-100) and ImageNet-1k. These are widely used datasets in the OOD detection community, making the evaluation relevant and comparable to other work. They include a variety of OOD datasets to test the generalization of their method. * **Comparison with State-of-the-Art:** The authors compare their method with a diverse set of post-hoc OOD detection approaches, including probability-based, logit-based, density-based, distance-based, and subspace-based methods. This comprehensive comparison provides a strong baseline for evaluating the effectiveness of their proposed approach. The only family of methods that is missing is gradient-based methods (e.g., GradNorm [1], GradOrth [2], GROOD [3]). * **Ablation Study:** The authors conduct a thorough ablation study to analyze the contribution of different components of their method. They systematically remove key components (residual space projection, real-time adjustment, and deviation features) and evaluate the impact on performance. This helps to demonstrate the importance of each component.
* **Analysis of Residual Dimensionality:** The authors analyze how the performance of their method varies with different dimensions of the residual space. This analysis provides insights into the sensitivity of the method to the choice of residual space dimensionality. * **Analysis of p and q Values:** The authors analyze the values of p and q, which are important parameters in their proposed method. This analysis provides empirical support for the theoretical claims made in the paper. * **Euclidean Distance Experiments:** The authors conduct additional experiments using Euclidean distance to demonstrate the general applicability of their approach to different distance metrics. **2. Soundness and Validity** * The experimental designs appear to be generally sound and well-justified. The authors use appropriate datasets, evaluation metrics, and comparison methods. * The ablation studies are particularly strong, providing clear evidence for the contribution of each component of the proposed method. * The analyses of residual dimensionality, p and q values, and vector norms provide valuable insights into the behavior of the method and support the theoretical claims. * The inclusion of Euclidean distance experiments and hard OOD detection further demonstrates the robustness and generalizability of the approach. References: 1- Huang, Rui, Andrew Geng, and Yixuan Li. "On the importance of gradients for detecting distributional shifts in the wild." Advances in Neural Information Processing Systems 34 (2021): 677-689. 2- Behpour, Sima, et al. "Gradorth: A simple yet efficient out-of-distribution detection with orthogonal projection of gradients." Advances in Neural Information Processing Systems 36 (2023): 38206-38230. 3- ElAraby, Mostafa, et al. "GROOD: GRadient-aware Out-Of-Distribution detection in interpolated manifolds." arXiv preprint arXiv:2312.14427 (2023). 
Supplementary Material: B.1 Analyzing s Values: I checked the analysis of s values, which relates to the validity of the proposed distance metric.   B.2 Dynamic Adjustment in Euclidean Distance: I reviewed the experiments on applying the dynamic adjustment to Euclidean distance.   B.3 Hard OOD Detection: I checked the results of the hard OOD detection experiments. Appendix F: Discussion I reviewed the discussion points on OOD detection in Vision-Language Models (VLMs) and Large Language Models (LLMs), and the inference cost of the method. Relation To Broader Scientific Literature: The key contribution of this paper is a novel approach to Out-of-Distribution (OOD) detection that dynamically adjusts the prior matrix using real-time input features. This contrasts with prior work that relies on static estimations of geometry from ID data. Specifically, the paper builds upon distance-based OOD detection methods, including those using Mahalanobis distance, which utilize a covariance matrix derived from ID data to capture information geometry.. The authors identify a limitation in these methods: they often neglect the distortion of information geometry caused by outlier features in the training data. The paper also relates to subspace-based methods that project features onto the subspace of ID data. While these can be seen as using matrix-induced distance scores, the authors argue that simply replacing the covariance matrix may remove important ID information and doesn't provide targeted adjustment for OOD directions. Their dynamic adjustment approach refines the prior information geometry using local information from real-time features. Additionally, they constrain adjustments to the residual space of the training distribution, building on the idea that OOD features may exhibit more energy in this space. Essential References Not Discussed: The paper effectively discusses relevant prior work in OOD detection, covering distance-based, subspace-based, and other categories. 
However, one area where additional discussion could provide valuable context is the limitations and potential failure modes of Mahalanobis distance. For example, [1] discusses that the MSP baseline outperforms Mahalanobis when ID and OOD distributions are very similar; can the adjustment help with that problem? References: * [1] Ren, Jie, et al. "A simple fix to Mahalanobis distance for improving near-OOD detection." arXiv preprint arXiv:2106.09022 (2021). * [2] Tajwar, Fahim, et al. "No true state-of-the-art? OOD detection methods are inconsistent across datasets." arXiv preprint arXiv:2109.05554 (2021). Other Strengths And Weaknesses: **Strengths:** * **Originality:** The paper introduces a novel perspective to distance-based OOD detection by dynamically adjusting the prior geometry using real-time input features. This approach is a departure from traditional methods that rely on static estimations of the information geometry from the training distribution. The idea of refining the covariance matrix in real-time and constraining adjustments to the residual space to preserve essential ID characteristics is an original contribution. * **Significance:** OOD detection is a crucial problem for the trustworthiness and reliability of AI systems, especially in safety-critical applications. The paper addresses a significant limitation of existing methods and proposes a solution that improves OOD detection performance across various models and datasets. This has the potential to contribute to safer and more reliable AI deployments. * **Clarity:** The paper is generally well-written and easy to follow. **Weaknesses:** * **Computational Complexity:** The paper mentions that the computational complexity of their method is O(n^3), where n is the feature size. While the authors argue that the inference cost is affordable, a more detailed discussion of the computational trade-offs compared to other methods would be beneficial.
* **Evaluation against Near-OOD:** The test pairs used for testing against other OOD methods are mostly from far-OOD; it would be beneficial to see how it performs against near-OOD data as per the OpenOOD benchmark. Other Comments Or Suggestions: * **Line 110**: I guess there is a typo: "select import neurons" should be "select important neurons". Questions For Authors: - The paper mentions that the computational complexity of the method is O(n^3) (line 700). While the authors state that the inference cost is affordable, could they provide a more detailed comparison of the computational cost with other OOD detection methods? - The paper argues that Mahalanobis distance may become less sensitive to OOD samples that align with the directions of high variance in ID data caused by outliers. Could the authors provide more specific examples or visualizations to illustrate this phenomenon? A clearer illustration would strengthen the motivation for the proposed method. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's recognition of our work's **novelty**, **improved performance** demonstrated through comprehensive experiments, clear **presentation**, and its **significance** for AI system reliability. Below, we address each comment in detail.

**C1**: gradient-based methods

**C1-Ans**: Since the code of GradNorm [3] is the only implementation available, we compare our method with this gradient-based method. We have added these three works in Section 2. Due to its computational cost, we only conduct the comparison on the CIFAR benchmark. In the following, we present the results of GradNorm.

| | CIFAR10 (AUROC/FPR) | CIFAR100 (AUROC/FPR) |
|-|-|-|
| GradNorm | 92.60/24.83 | 79.75/63.08 |
| ours | 96.83/14.63 | 92.38/29.98 |

**C2**: additional discussion on limitations and potential failure modes of Mahalanobis distance

**C2-Ans**: Previous work has discussed the failure within the dimensions corresponding to the smaller eigenvalues of the ID covariance [1] and the less effective performance when ID and OOD are similar [2]. The strong performance of our method aligns with and supports the findings in [1]. Our approach explicitly adjusts the covariance matrix within the residual space, thereby mitigating the aforementioned vulnerability of the Mahalanobis distance in the residual space. In addition, our adjustment can also help improve the performance of the Mahalanobis distance in near-OOD detection. Details can be viewed in the answer to **C4**. We have added this discussion to our paper.

**C3(Q1)**: Detailed discussion and comparison on computational complexity

**C3(Q1)-Ans**: In the following table, we provide the detection time cost comparison over 10,000 CIFAR10 instances with DenseNet as the backbone network.

| | Time(s) | CIFAR10 (AUROC) |
|-|-|-|
| Feature extraction only | 218.42 | N/A |
| Mahala | 221.53 | 85.9 |
| ours | 235.59 | **96.93** |
| GradNorm | 473.7 | 92.6 |

We note that our method has a time cost comparable to the Mahalanobis distance method.
This is because in post-hoc OOD detection, the dominant computational cost lies in feature extraction, which is shared across most methods. The time complexity we provided is only for the feature-level operation, which is insignificant since $n$ (the feature dimension) is small. In addition, we compare with the time cost of GradNorm in the table above.

**C4**: Near-OOD on OpenOOD benchmark

**C4-Ans**: We conduct near-OOD detection on the OpenOOD benchmark with ResNet18. We report the experimental results in the following table. We also compare with the results of the recent distance score and subspace score available in OpenOOD, namely KNN and VIM. All the results are averaged over 3 runs. We also provide the results of RMDS [1] combined with our method. RMDS is a Mahalanobis distance variant tailored for near-OOD detection. As the table shows, our method can effectively improve the performance of the Mahalanobis distance in near-OOD detection. It can also improve RMDS, reflecting the robustness of our method across different distances in near-OOD detection.

| | CIFAR10-CIFAR100 | | CIFAR10-Tiny | | CIFAR100-CIFAR10 | | CIFAR100-Tiny | | avg | |
|-|-|-|-|-|-|-|-|-|-|-|
| | FPR | AUROC | FPR | AUROC | FPR | AUROC | FPR | AUROC | FPR | AUROC |
| VIM | 52.33 | 87.03 | 44.16 | 88.88 | 70.79 | 72.15 | 54.92 | 77.73 | 55.56 | 81.46 |
| KNN | 38.76 | **89.59** | 30.89 | **91.51** | 72.69 | 77.1 | 49.68 | **83.29** | 48.01 | 85.37 |
| mahala | 64.51 | 79.48 | 59.06 | 80.71 | 88.88 | 54.6 | 80.75 | 60.31 | 73.30 | 68.78 |
| mahala+dynamic (ours) | 61.38 | 84.73 | 52.95 | 86.54 | 75.24 | 71.72 | 60.35 | 76.34 | 62.48 | 79.83 |
| RMDS | 49.76 | 88.26 | 37.63 | 90.29 | 62.90 | 77.69 | 49.55 | 82.60 | 49.96 | 84.71 |
| RMDS+dynamic (ours) | **38.70** | 89.42 | **30.47** | 91.45 | **61.7** | **78.21** | **48.85** | 82.81 | **44.93** | **85.48** |

**C5**: typos

**C5-Ans**: Thank you for pointing out this typo. We have revised it in our paper.
### Questions

**C6(Q2)**: Visualization of the phenomenon that Mahalanobis distance may become less sensitive to OOD samples that align with the directions of high variance in ID data caused by outliers

**C6(Q2)-Ans**: **Visualization**: https://i.imgur.com/U4rrTTt.png

We visualize this phenomenon in 2D space. Note that, as the features live in higher-dimensional spaces, the situation may be more complicated than our demonstration. The Mahalanobis distance can be considered a geodesic distance of a normal distribution, so we use the geodesic line within a randomly sampled normal distribution (the ID data) to demonstrate this phenomenon. As the figure shows, outlier points can push the geodesic line further from the center point. The OOD points aligned with the outlier points can then be closer to the dashed line than to the bold line, which makes the distance between the OOD points and the ID center smaller.

[1] Ren, Jie, et al. "A simple fix to Mahalanobis distance for improving near-OOD detection."
[2] Tajwar, Fahim, et al. "No true state-of-the-art? OOD detection methods are inconsistent across datasets."
[3] Huang, Rui, et al. "On the importance of gradients for detecting distributional shifts in the wild."
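The geodesic picture described above can also be reproduced numerically. Below is a minimal synthetic sketch (our own toy construction, not the authors' code): a few outliers along one direction inflate the estimated ID covariance, so an OOD point aligned with that direction receives a smaller Mahalanobis distance under the contaminated statistics than under the clean ones.

```python
import numpy as np

rng = np.random.default_rng(0)

clean = rng.normal(size=(1000, 2))            # well-behaved ID features
outliers = np.array([8.0, 0.0]) + 0.1 * rng.normal(size=(20, 2))
contaminated = np.vstack([clean, outliers])   # ID set distorted by outliers

def mahalanobis(x, data):
    """Distance of x to the empirical center under the empirical covariance."""
    mu = data.mean(axis=0)
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ prec @ d))

ood = np.array([6.0, 0.0])  # OOD point aligned with the outlier direction
# Outliers inflate variance along that axis, shrinking the OOD distance:
print(mahalanobis(ood, clean) > mahalanobis(ood, contaminated))  # -> True
```

This is exactly the failure mode the dynamic adjustment targets: the contaminated covariance makes the detector less sensitive along the outlier-inflated direction.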
Summary: This paper proposes a dynamic covariance calibration approach for OOD detection, addressing the sensitivity of distance-based detectors to outliers in the ID data. While existing methods mitigate this issue by projecting features onto principal dimensions, they risk losing valuable ID information. Instead, the proposed method dynamically adjusts the covariance matrix using real-time input features while preserving the ID structure by only updating the residual subspace. ## Update after rebuttal Most of my concerns have been answered, especially regarding the novelty of the method, which differs from other subspace approaches due to test-time adaptation of the OOD criterion. Claims And Evidence: The essential claim of the paper is that OOD detection might be improved by only considering In-Distribution (ID) subspaces. This claim is shared with other recent papers in the literature. Here, however, the ablation shows only FPR95 results, but this metric is prone to large variations. AUC is more challenging and could better indicate the impact of each component of the method. Furthermore, the overall table is difficult to read as it does not show the full combinatorics related to each component. Methods And Evaluation Criteria: The benchmarks consider only far-OOD samples and should be supplemented with near-OOD samples. In particular, the CIFAR benchmark could use Tiny-ImageNet as OOD and also add the experiments: C-10 (ID) vs C-100 (OOD) and C-100 (ID) vs C-10 (OOD). For the second benchmark, results on iNaturalist could bring more insight into the method's performance for near-OOD detection. For a fairer and broader comparison, it would have been interesting to have SOTA results on standard benchmarks such as OpenOOD 1.5. Theoretical Claims: I did not find any issue in the theoretical claims.
Experimental Designs Or Analyses: It is essential that the proposed method is compared with other subspace-based OOD detection methods, namely ViM and NECO, on the CIFAR benchmark as well. The same goes for Tab. 3. Supplementary Material: I checked sections A, B and D. Relation To Broader Scientific Literature: The proposed approach is particularly similar to ViM, which also operates on the residual subspace of the antepenultimate layer. The essential difference is the OOD score. Moreover, it also shares common ideas with NECO. The differences, and in particular the relevance of the proposed approach compared to these methods, could be discussed in more detail. ViM: Out-Of-Distribution with Virtual-logit Matching, Wang, Haoqi and Li, Zhizhong and Feng, Litong and Zhang, Wayne, CVPR 2022 NECO: NEural Collapse Based Out-of-distribution detection, Mouïn Ben Ammar, Nacim Belkhir, Sebastian Popescu, Antoine Manzanera, Gianni Franchi, ICLR 2024 Essential References Not Discussed: Essential works are cited. Other Strengths And Weaknesses: The name of related work sub-section 2.2 ``Test time OOD detection'' is misleading as it might refer to test-time adaptation. Here the authors are more concerned with post-hoc methods. The dynamic aspect of the method is difficult to grasp. As $\bm \Sigma_R$ and $\mathcal M(f)$ are computed on the training dataset, the parameters of the OOD criterion are fixed for any given test sample. Other Comments Or Suggestions: * l. 266-268: class is indexed by $i$ then by $c$ * $s(z)$ does not depend on $z$ on the right side of eq. (3) Questions For Authors: No questions Code Of Conduct: Affirmed. Overall Recommendation: 3
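For reference on the FPR95-versus-AUROC point raised in this review, here is an illustrative sketch of the two metrics under their standard OOD-detection definitions (synthetic scores, higher = more OOD; our own toy code, not tied to the paper's evaluation script):

```python
import numpy as np

def auroc(id_scores, ood_scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a random
    OOD sample scores higher than a random ID sample (ties count 1/2)."""
    id_s, ood_s = np.asarray(id_scores), np.asarray(ood_scores)
    greater = (ood_s[:, None] > id_s[None, :]).mean()
    ties = (ood_s[:, None] == id_s[None, :]).mean()
    return float(greater + 0.5 * ties)

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: fraction of OOD samples still accepted as ID when the threshold
    is set so that 95% of ID samples are accepted (accepted = score <= tau)."""
    tau = np.percentile(id_scores, 95)
    return float(np.mean(np.asarray(ood_scores) <= tau))

rng = np.random.default_rng(0)
id_s = rng.normal(0.0, 1.0, 1000)    # ID scores
ood_s = rng.normal(3.0, 1.0, 1000)   # OOD scores, shifted upward

print(auroc(id_s, ood_s) > 0.95)         # -> True (well separated)
print(fpr_at_95_tpr(id_s, ood_s) < 0.2)  # -> True
```

A perfect detector gives AUROC 1.0 and FPR95 0.0; a random one gives roughly 0.5 and 0.95, which is why FPR95 estimated from a single tail of the score distribution varies more across runs than the rank-based AUROC.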
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments and valuable suggestions. We provide detailed responses to each comment in the following.

### Claims And Evidence

**C1 Detailed ablation study**:

**C1-Ans**: We include AUROC results in the ablation study table. To show the full combinatorial results and enhance readability, we also reformatted our ablation study table. Note that RSP (Residual Space Projection) and DCM (Dynamic Covariance Modeling) must be built on top of the DME (Dynamic Matrix Estimation) module. DCM is introduced in Section 4.3.

| DME | RSP | DCM | CIFAR10 DenseNet (AUROC/FPR) | CIFAR100 DenseNet (AUROC/FPR) | ImageNet ViT (AUROC/FPR) | ImageNet ResNet50 (AUROC/FPR) |
|-|-|-|-|-|-|-|
| | | | 95.19/18.86 | 89.47/35.84 | 92.51/33.16 | 82.49/60.43 |
| ✓ | | | 95.47/17.51 | 88.05/36.61 | 92.65/32.31 | 81.46/61.77 |
| ✓ | ✓ | | 94.08/23.92 | 85.69/53.70 | 82.72/86.32 | 87.31/52.17 |
| ✓ | | ✓ | 96.24/16.38 | 92.17/30.16 | 94.24/27.05 | 81.88/60.49 |
| ✓ | ✓ | ✓ | **96.83**/**14.63** | **92.38**/**29.98** | **94.27**/**26.94** | **87.43**/**51.77** |

As discussed in Section 5.5, the large between-class covariance distorts the geometry such that the data cannot form a dense manifold. Thus, DCM is essential on some backbones, like DenseNet. We also discuss this in detail in Section C of the appendix. As shown in the table, DME can on its own improve the performance on CIFAR10-DenseNet and ViT. Also, RSP is essential for ResNet50.

### Methods And Evaluation Criteria:

**C2 Experiments on near-OOD detection and iNaturalist**:

**C2-Ans**: We have reported iNaturalist results in Tab. 9 of Appendix E. The near-OOD results can be viewed in **Reviewer YqZR's C4**.

### Experimental Designs Or Analyses:

**C3 Include NECO and VIM in CIFAR benchmark (Tab. 1) and DINO (Tab. 3)**:

**C3-Ans**: We provide the experimental results of NECO and VIM on the CIFAR benchmark and the results of the Residual score on DINO in the following.
|| CIFAR10 DenseNet (AUROC/FPR) | CIFAR100 DenseNet (AUROC/FPR) |
|-|-|-|
|ViM| 93.44/33.66| 87.95/51.39|
|NECO| 94.51/27.92| 79.79/72.43|
|Ours| **96.83**/**14.63**| **92.38**/**29.98**|

||DINO (AUROC/FPR)|
|-|-|
|Residual (ViM)| 88.25/51.36|
|Ours| **91.65**/**38.23**|

Since DINO does not have a linear classifier as supervised pretrained models do, for the DINO experiment we only evaluate the Residual score and discard the energy score in ViM.

### Relation To Broader Scientific Literature:

**C4**: Detailed differences and relevance of the proposed approach compared to ViM and NECO

**C4-Ans**: Both methods (ViM, NECO) only consider subspace information without direction-specific adjustments, potentially ignoring part of the ID geometry in the distance-based score. Additionally, these methods do not adaptively utilize test-time information to correct for possible distortions in the training distribution, further limiting their sensitivity to novel or shifted feature directions. We will revise this discussion in Section 3 of the manuscript. ViM and NECO can be interpreted as matrix-induced distance-based scores of the form $d_M(f) = \sqrt{(f-f_a)M(f-f_a)^\top }$. From this perspective, NECO and ViM can be seen as attempts to find a better distance geometry for OOD detection. However, these methods ignore the fact that the matrix $M$, i.e., the geometry, may be affected by outliers in the ID distribution (details can be viewed in R2(YqZR)'s C6(Q2)). In our proposed approach, $M$ depends on the real-time feature $f$, denoted as $\mathcal{M}(f)$. Specifically, $\mathcal{M}(f) = (Cov-r_f^\top r_f)^{-1}$, where $Cov$ is the covariance matrix computed from the available features in the training set and $r_f$ is the real-time feature $f$ projected onto the residual space. For ViM and NECO, $M = Res(Cov)$ and $M=Pri(Cov)$, with $Res(\cdot)$ and $Pri(\cdot)$ being the residual and principal spaces, respectively.
This indicates that $M$ remains the same for both methods, regardless of the real-time features. In addition, they may ignore part of the ID geometry in the distance-based score.

### Other Strengths And Weaknesses

**C5**: The name of sub-section 2.2

**C5-Ans**: We have updated the subtitle of Section 2.2 to "Post-hoc OOD detection methods".

**C6**: The parameters of the OOD criterion are fixed for any given test sample.

**C6-Ans**: The dynamic component aims to mitigate covariance distortion caused by outliers in real-time features, specifically in the Mahalanobis distance. Please view **R2(YqZR)'s C6(Q2)** for a clearer motivation of our method. Our method relies on the initial computation of $\Sigma_R$ and $B$ as necessary initialization steps, while the dynamic component is provided by real-time recalibration of the covariance structure, individually tailored for each test sample, significantly enhancing adaptability and robustness.

### Other Comments Or Suggestions:

**C7**: Typos

**C7-Ans**: Thank you for pointing out the typos in our paper. The $z$ in $s(z)$ should be $f$. We have revised these typos.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. I read their answers to my and other reviewers' concerns, and I am mostly satisfied. However, I wonder why the proposed method shows such different behaviors between the ImageNet ResNet50 column and the other columns in the ablation. The DME + RSP combination seems detrimental in almost all configurations, and only the DME + DCM set-up brings consistent gains. This tendency is reversed for R-50, but there is no explanation for it.

---

Reply to Comment 1.1.1: Comment: Thank you for your response. Our full model achieves the best performance across all four different settings, highlighting the complementary nature of **DCM** and **RSP**. Since OOD data can also lie in the space between two ID clusters, the between-class covariance matrix may not only reflect ID information.
Therefore, it is crucial to emphasize within-class covariance in OOD detection. To address this, **DCM** transforms the original embedding space, where clusters from different classes are separated, into a center-aligned embedding space by leveraging the within-class covariance matrix. Here (https://i.imgur.com/L9Mjnhj.png), we simulate two subspaces (i.e., a well-clustered and a poorly clustered space) to demonstrate the differing impacts of applying **DCM**. Comparing the upper and lower figures illustrates that the more clustered the embedding space is, the greater the improvement when using **DCM**. In practice, ImageNet-ViT has a better-structured embedding space than ImageNet-ResNet50, as indicated by the top-1 classification error.

**RSP** leverages the residual space to highlight the covariance matrix dimensions that are highly affected by ID outliers, so that these dimensions can be dynamically adjusted with incoming samples. For a well-clustered subspace, **RSP** can be detrimental due to the large between-class covariance, but it can work on its own in poorly clustered scenarios, since there the within-class covariance dominates the overall covariance matrix. Notably, in our ablation study, without **DCM**, **RSP** can only be computed using the full covariance matrix (within-class + between-class), differing from the **RSP** in the full model, which specifically optimizes the within-class covariance.

**Why the tendency on ResNet50 is different**: As discussed above, if the embedding space exhibits low inter-class separability, **DCM** does not significantly alter the feature distribution, and the between-class covariance minimally impacts the overall covariance. In the following, we compare the largest eigenvalue of the residual-space projection matrix with and without between-class covariance to illustrate this phenomenon. From the table, we observe a significantly smaller difference in the largest eigenvalue for ResNet50 compared to ViT.
This is why **DCM** can hardly improve the performance on ImageNet ResNet50, while **RSP** alone can improve the performance substantially on ResNet50.

||ImageNet ViT (w./w.o.)|ImageNet ResNet50 (w./w.o.)|
|-|-|-|
|largest eigenvalue|$1.2\times 10^{-3}$/$5.7\times 10^{-4}$|$8.2\times 10^{-6}$/$7.6\times 10^{-6}$|
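To make the fixed-vs-dynamic distinction in this thread concrete, here is a minimal NumPy sketch of a per-sample recalibrated distance score in the spirit of $\mathcal{M}(f) = (Cov - r_f^\top r_f)^{-1}$ from C4. The toy data, dimensions, and the eigenvalue floor used for numerical stability are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ID training features: 4 high-variance and 4 low-variance dimensions
# (all sizes and scales here are illustrative assumptions).
train_feats = rng.normal(size=(500, 8)) * np.array([1.0] * 4 + [0.1] * 4)
mean = train_feats.mean(axis=0)
cov = np.cov(train_feats, rowvar=False)

# Residual space: eigen-directions of the covariance with smallest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
residual_basis = eigvecs[:, :4]          # d x r basis of the residual space

def fixed_score(f):
    """Static baseline in the spirit of ViM/NECO: the matrix M is the same
    for every test sample."""
    m = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    diff = f - mean
    return float(np.sqrt(diff @ m @ diff))

def dynamic_score(f, eps=1e-3):
    """Per-sample score: the covariance is recalibrated with the test
    feature's residual-space projection before inversion."""
    r_f = residual_basis @ (residual_basis.T @ (f - mean))
    m_inv = cov - np.outer(r_f, r_f)
    # Floor the eigenvalues so the recalibrated matrix stays invertible;
    # the paper's exact stabilization may differ.
    w, v = np.linalg.eigh(m_inv)
    m_f = (v / np.clip(w, eps, None)) @ v.T
    diff = f - mean
    return float(np.sqrt(diff @ m_f @ diff))
```

Because the recalibration shrinks the variance along the test feature's own residual direction before inversion, a sample with a large residual component is amplified relative to the static score, which uses the same matrix for every sample.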
Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment
Accept (poster)
Summary: This paper proposes an approach to modeling preferences by learning an embedding for each response given a prompt. The preference score between two responses is then computed using these embeddings. The key motivation behind this embedding-based approach is that mapping responses into a multi-dimensional latent space allows the model to better capture complex preference structures, including potential cycles in the preference data. Beyond testing the preference model, the authors apply it to preference optimization. They modify the SPPO method by replacing preference probabilities with preference scores in the loss function, introducing a variant they call General Preference Optimization (GPO). Experiments on AlpacaEval suggest that the proposed method improves win rates in some cases. Claims And Evidence: The paper would benefit from providing more evidence on why better modeling of intransitive human preferences is useful. Another perspective to consider is that some of the intransitivity observed in preference data may stem from noisy judgments. In that case, improving the model’s ability to capture intransitivity could lead to overfitting to noise rather than genuine preferences. Additionally, in line 114, the authors cite two papers, but neither directly addresses human preference data in the context of RLHF. Methods And Evaluation Criteria: The methods and datasets used for evaluation (RewardBench and AlpacaEval) are generally reasonable. However, the cyclic preference data created by the authors is not well-motivated. It would also be helpful to include some basic details about how this dataset was constructed in the main text. Theoretical Claims: Most theoretical claims do make sense to me. However, there is a disconnect between the theory part and the method itself. 
For example, in Theorem 3, the preference score needs to be bounded for the convergence result to hold, although it is not really mentioned whether the preference scores obtained with GPM are actually bounded.

Experimental Designs Or Analyses: Regarding the cyclic preference experiments (Table 1), my main concern is that the data is synthetic. First, it is unclear how often cyclic dependencies actually occur in real-world data. Second, as previously mentioned, it is not evident that modeling such phenomena more effectively would lead to better generalization in reward models. To make this experiment more convincing, I suggest first reporting the number of cycles found in existing benchmarks and then evaluating whether a reward model trained with the proposed method generalizes better than, for example, the Bradley-Terry (BT) model. The results in Table 3 are somewhat confusing. First, what is the difference between SPPO+GPM and GPO? Does the former use win rates in its loss function (calculated based on GPM), while the latter directly uses the scores? If so, do the authors have an intuitive explanation for why using raw scores yields better results? The existing literature generally suggests that win rates tend to work better than raw rewards, as they are invariant to monotonic transformations and contribute to training stability. Second, according to the LC win rate metric, the proposed method does not outperform existing approaches. However, I could not find any discussion in the paper explaining why this is the case. This needs to be discussed in the corresponding section.

Supplementary Material: I skimmed through the code and the appendix.

Relation To Broader Scientific Literature: The findings support previous claims about the limited expressivity of the Bradley-Terry (BT) model.
The use of embeddings and the construction of preference scores with a skew-symmetric matrix is an interesting contribution to the literature and presents a promising alternative to the BT model.

Essential References Not Discussed: Section 3.2 on preference modeling does not accurately represent the existing literature. The standard approach to training a reward model involves adding a randomly initialized linear head that outputs a scalar reward for each response. The model's parameters are then optimized by maximizing the log-likelihood of the preference data under the Bradley-Terry model. In particular, lines 188–190 suggest that reward models must use templates and input both prompts into a language model to determine preference. However, this is not a comprehensive view of how reward models are typically trained.

References:
[learning to summarize from human feedback, Stiennon et al.](https://arxiv.org/pdf/2009.01325)
[Evaluating Reward Models for Language Modeling, Lambert et al.](https://arxiv.org/pdf/2403.13787)

Other Strengths And Weaknesses: To the best of my knowledge, the specific model proposed here for preference modeling is novel, although there are other instances of embedding responses in a latent space; see [Chen et al.](https://openreview.net/pdf?id=qfhBieX3jv)

Other Comments Or Suggestions: Some of the notation is a bit confusing. For example, the number of responses per prompt is said to be $K$ and the embedding dimension is $2k$, but there is probably no need for these two numbers to be related, right? And then the complexity order should be $O(Kd)$, where $d$ is the dimension of the preference embeddings. Similarly, in Theorem 2, do we really need to match the number of responses with the dimension of the embeddings? The equation doesn't really necessitate such a thing, so that is really confusing.
However, it would be valuable to see empirical evidence supporting this idea—specifically, whether there are instances of interpretable directions or meaningful eigenvalues in practice. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and insightful feedback on our manuscript. We appreciate the time you took to evaluate our work and provide constructive comments. We believe addressing your points will significantly strengthen the manuscript. 1. Intransitivity Usefulness: > Q1: The paper would benefit from providing more evidence on why better modeling of intransitive human preferences is useful... Thanks for your suggestion. Modeling intransitivity is motivated by observing that human judgments can be complex, context-dependent, and exhibit cycles, which simpler Bradley-Terry (BT) models cannot capture. GPM's multi-dimensional space aims to represent these richer structures. While noise is a factor, capturing potentially genuine complex preferences (Appendix F examples) is vital for alignment. Distinguishing noise from true intransitivity is important for future work. 2. Line 114 Citations: > Q2: Additionally, in line 114, the authors cite two papers,... The two references (Tversky, 1969; Agranov & Ortoleva, 2017) are among the earliest to study the prevalent phenomenon of humans exhibiting non-transitive preferences. We use them to motivate moving beyond transitive models like BT. We will clarify the scope of these references in our revision. 3. Cyclic Data: > Q3: However, the cyclic preference data created by the authors... This synthetic dataset demonstrates GPM's expressiveness where BT models inherently fail (random guessing) due to their transitivity assumption. Appendix E details its construction from Ultrafeedback; we'll add a summary to the main text. 4. Boundedness for Theorem 3: > Q4: For example, in theorem 3, the preference score needs to be bounded for the convergence result to hold,... Thanks for the feedback. Our GPM implementation ensures boundedness. We apply L2 normalization to embedding vectors $v_{y \mid x}$, making them unit length. 
Since the score is $s\left(y_i \succ y_j \mid x\right)=v_{y_i \mid x}^{\top} D(x) R^{\succ} D(x) v_{y_j \mid x}$, and $R^{\succ}$ is magnitude-preserving, the scores are bounded if the eigenvalue scales $\lambda_l(x)$ are bounded. This is guaranteed since we use a softmax function on the gate outputs determining the eigenvalues. Thus, the theorem's conditions hold. We'll clarify.

5. Cyclic Experiments:
> Q5: Regarding the cyclic preference experiments (Table 1), ...,

Synthetic data provides a clear proof-of-concept for modeling intransitivity. We agree that evaluating cycle prevalence in real benchmarks and comparing generalization vs. BT models are valuable future work. Appendix F shows potential real-world examples.

6. Table 3 (SPPO+GPM vs. GPO):
> Q6: First, what is the difference between SPPO+GPM and GPO?... Second, according to the LC win rate metric...

SPPO+GPM uses win rates derived from GPM scores; GPO uses the raw scores $s(y_i \succ y_j \mid x)$ directly. GPO maximizes the expected score; win-rate methods maximize the expected probability. Raw scores might offer a richer signal (any real number) vs. win rates (0 to 1), potentially explaining GPO's performance despite win rates offering stability. We'll clarify. We acknowledge GPM/GPO doesn't always beat BT/SPPO on Length-Controlled (LC) win rate. GPM/GPO models tend to produce longer responses ("Avg. Len"). As the LC win rate controls for length bias, this might impact results. The appendix shows length-normalized GPO (LN-GPO) results. More discussion will be added.

7. Section 3.2 (Preference Modeling):
> Q7: Section 3.2 on preference modeling does not accurately represent the existing literature...

Sec 3.2.1 introduces PairPMs for general preference modeling, distinct from standard BT models (Sec 3.1). Lines 188-190 illustrate PairPM examples and potential issues, not all reward model training. We'll revise for clarity, distinguishing standard models (scalar head, BT loss) from PairPMs. We'll add a discussion on Stiennon et al. (2020) and Lambert et al. (2024).

8.
Notation (K vs. d, Complexity):
> Q8: Some of the notation is a bit confusing... number of responses is K and the embedding dimension is 2k...

$K$ is the response count; $d$ is the embedding dimension (formerly $2k$). $K$ and $d$ are unrelated. Complexity concerns model forward passes: GPM/BT require $\mathcal{O}(K)$ passes (one per response) for embeddings/rewards. PairPM requires $\mathcal{O}\left(K^2\right)$ passes (one per pair). GPM offers linear scaling of model calls w.r.t. $K$, matching BT and improving over PairPM. We'll clarify the notation/complexity. The $P \in \mathbb{R}^{2 k \times 2 k}$ setup in Thm 2 illustrates spectral decomposition; our method (Sec 4.2)/Thm 1 doesn't require $d=K$. We'll clarify this distinction.

9. Automatic Subspace Discovery: We appreciate your interest. You asked for empirical evidence of interpretable directions/eigenvalues. Excellent suggestion for future work. Analyzing embeddings/eigenvalues via probing/visualization could yield insights. Figure 3 is a first step; systematic investigation is needed. We'll add this as a future direction.

---

Rebuttal Comment 1.1: Comment: Thank you for your clarification regarding the boundedness. I still believe that doing some initial analysis on the cyclic preferences in the real-world benchmark is useful for this paper, as well as a detailed discussion (with examples) of why controlling for the length makes the proposed method not effective. I've decided to keep my score and remain positive on this paper.

---

Reply to Comment 1.1.1: Comment: Thank you for acknowledging our clarifications and for maintaining a positive view on our paper. We appreciate your additional feedback and address the points below:

1. Cyclic Preferences in Real-World Benchmarks: We agree that analyzing the prevalence and impact of cyclic preferences in standard benchmarks is valuable.
Following your suggestion, we examined the RewardBench dataset (which contains 2985 samples) and found 110 intransitive preference samples involving three responses (A>B, B>C, C>A) and 1 sample with five responses exhibiting a cycle (A>B, B>C, C>D, D>A, A>E). On these 335 pairwise comparisons derived from intransitive samples, our GPM-LLama3-8B achieved an accuracy of 55.52% (186/335), outperforming the BT-LLama3-8B model's accuracy of 50.15% (168/335). This preliminary analysis highlights GPM's enhanced ability to model cyclic preference structures where BT models struggle. We have also included qualitative examples from the Ultrafeedback dataset in Appendix F that suggest potential real-world intransitivity. We will revise our paper to include this discussion and these results.

2. Length Control Discussion: We think that a detailed discussion on the interplay between our model and response length is important, especially concerning the Length-Controlled (LC) win rate metric. Longer responses can potentially address more preference aspects (like helpfulness, correctness, and style) simultaneously, and GPM's multi-dimensional embeddings might capture this multifaceted quality better than a single scalar reward, leading to higher preference scores for longer, comprehensive answers. When strict length control is the primary objective, the standard GPM/GPO might be less effective as the LC metric penalizes length bias.
- As a potential solution for scenarios prioritizing length control, we proposed normalizing the preference score by response length, similar to the length normalization idea in SimPO.
- We presented initial results for this approach, termed Length-Normalized GPO (LN-GPO), in Appendix E.2. Table 7 shows that LN-GPO using GPM (trained on Llama-3-8B, Iteration 1, GPM 2B) achieves an LC win rate of 45.55%, slightly outperforming the LN-GPO using the BT model (45.51%).
- We acknowledge these are preliminary results and, as mentioned, are conducting larger-scale experiments to further investigate the effectiveness of length-normalization techniques for GPM in length-sensitive evaluations. We will ensure a more detailed discussion, including examples, to be added to the revised manuscript. Thank you for your positive and encouraging feedback. We will incorporate these points to strengthen the paper. If you are satisfied with our work and feel our contributions deserve a higher score, we would sincerely appreciate your consideration in raising the score.
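The cycle audit described in this reply can be sketched in a few lines; the data format and function names below are hypothetical illustrations, not the authors' actual analysis script.

```python
from itertools import permutations

def find_three_cycles(prefs):
    """Return the sets {a, b, c} for which the labels contain the directed
    cycle a > b, b > c, c > a; `prefs` is a set of (winner, loser) pairs."""
    items = {x for pair in prefs for x in pair}
    cycles = set()
    for a, b, c in permutations(items, 3):
        if (a, b) in prefs and (b, c) in prefs and (c, a) in prefs:
            cycles.add(frozenset((a, b, c)))
    return cycles

# Hypothetical annotations: one intransitive triple (A, B, C) and one
# transitive triple (D, E, F).
labels = {("A", "B"), ("B", "C"), ("C", "A"),
          ("D", "E"), ("E", "F"), ("D", "F")}
```

`find_three_cycles(labels)` returns `{frozenset({'A', 'B', 'C'})}`: the D/E/F triple is transitive and is not flagged. The same pattern extends to longer cycles such as the five-response example mentioned in the reply.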
Summary: Since the prevalent Bradley-Terry formulation and pair preference models have limitations in reward modeling, the authors propose a novel formulation, GPM, with better expressiveness to model the complex preference distributions of the real world. They further extend GPM to preference learning to build GPO, a new algorithm that better aligns LLMs with human preference.

## update after rebuttal
I have read the author response and would like to keep my positive score unchanged.

Claims And Evidence: 1. The authors did not elaborate on the details of GPM's model architecture, training objectives, and deployment. This makes it hard to verify the claimed computational efficiency advantages in the Introduction. 2. Although GPM is claimed to have stronger expressiveness, it does not seem to show a consistent bonus on RewardBench. However, considering the positive results of GPO on AlpacaEval, I suspect that there are some complex cases in RewardBench that confuse GPM. Adding analyses of these cases would be helpful.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: I have checked the content in Sec 4 related to the GPO formulation, which is correct and interesting.

Experimental Designs Or Analyses: I have checked the details of all experiments. The remaining issue concerns the cyclic preference experiments, whose settings I fail to understand. A step-by-step explanation in the appendix would make them clearer.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: The primary related domain is human preference alignment of LLMs, where GPM can be useful in RLHF and supervised methods like DPO, SimPO, and so on.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: The writing is not clear enough. For example, the authors did not clearly explain the optimization objective of GPM. Is it the same as Bradley-Terry's?

Other Comments Or Suggestions: Typo: The use of "Sec 3.2.1" could be changed to "Sec 3.3"?
Questions For Authors: The proposed preference embedding seems to map the last-layer hidden state through different heads to multiple scalars (similar to multi-head attention) and then sums them to obtain the final reward. It seems somewhat similar to ArmoRM [1]. However, each head in ArmoRM has a clear meaning, making the scoring more interpretable, while heads in GPM do not. Therefore, simply adding all scalars at the end of the forward pass makes me skeptical about its substantive significance.

[1] https://arxiv.org/abs/2406.12845

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and detailed review. We appreciate your constructive feedback and the opportunity to clarify aspects of our work. We address your points below:

1. GPM Architecture, Training, & Efficiency:
> Q1: The authors did not elaborate on the details of GPM's model architecture, and training objectives... hard to verify the claimed computational efficiency...

- Architecture/Implementation: We describe the implementation of GPM in Section 4.2, detailing the eigenvalue scale gate and the eigenvector embedding head used to generate the preference embeddings.
- Training Objective: The optimization objective for training GPM is explicitly defined in Appendix A.2 (Equation A.1). It involves minimizing the standard cross-entropy loss used in preference learning, based on the predicted preference probability $\sigma(s(y_w \succ y_l \mid x))$, where $s$ is the GPM score from Eq. 4.1, and the observed preference data $P_D(y_w \succ y_l \mid x)$. This objective directly mirrors the typical objective used for training BT-based reward models.
- Computational Efficiency: GPM achieves $\mathcal{O}(K)$ query complexity (Sec. 4), matching BT models and improving on $\mathcal{O}(K^2)$ pair-based models, by processing responses individually.

2. GPM Performance on RewardBench:
> Q2: Although GPM is claimed to have stronger expressiveness, it does not seem to show a consistent bonus on RewardBench...

- Thank you for your observation regarding RewardBench performance. While performance can vary across specific sub-tasks and embedding dimensions (as shown in our ablation studies in Section 6.1 and Appendix E.1), GPM consistently outperforms the BT reward model on average across several tested base models, as shown in Table 2. Specifically, GPM showed average improvements of +7.44% (Gemma-2B-it) and +1.34% (Llama-3.1-8B-Instruct) over the BT baseline.
Significant gains were often observed in the Chat and Chat-Hard categories. We appreciate the suggestion to analyze complex cases further and will add this to the appendix.

3. Clarity on Cyclic Preference Setting:
> Q3: ...The remaining issue can be on cyclic preference, which I fail to understand the settings. A step-by-step explanation... would make it more clear.

- These datasets were constructed using Ultrafeedback data, where we identified instances exhibiting cyclic preferences based on ratings across different criteria (e.g., helpfulness, honesty, instruction following). For example, a cycle might emerge where Response A is rated higher on honesty than B, B higher on helpfulness than C, but C higher on honesty than A. We provide details on the construction in Appendix E and results demonstrating GPM's near-perfect accuracy on these tasks in Section 6.2 and Table 1. Figure 3 also visualizes the learned embeddings for these cyclic cases.
- We will add a more detailed step-by-step explanation of the dataset construction to the appendix in the revised version, as you suggested.

4. Preference Embedding Significance vs. ArmoRM:
> Q4: ...similar to ArmoRM... each head in ArmoRM has a clear meaning... while heads in GPM do not. Therefore, simply adding all scalars...

GPM's design prioritizes expressiveness and generality. The core idea is that complex, real-world human preferences might not always decompose neatly into a few predefined, interpretable scalar dimensions. They can exhibit intransitivity and context-dependency that require a more flexible representation.

- Theoretical Grounding: Our multi-dimensional embeddings, combined with the skew-symmetric operator $\mathbf{R}^{\succ}$ and the inner product $\langle \mathbf{R}^{\succ} v_{y_i \mid x}, v_{y_j \mid x}\rangle$, provide a principled way to model any skew-symmetric preference relationship, including cycles. Theorem 1 and Theorem 4 provide theoretical grounding for this expressiveness.
- Expressiveness and Generality: While ArmoRM's interpretable heads offer clarity, this structure inherently limits its capacity. Such models based on aggregating scalar rewards, even multi-dimensional ones, generally cannot capture arbitrary preference structures, particularly those involving intransitivity (like cycles) or complex mixtures of transitive and intransitive components. - Automatic Learning vs. Annotation: Furthermore, ArmoRM relies on aggregating scores from heads associated with predefined metrics (like helpfulness, truthfulness), which often require costly, explicit human annotation for each metric. In contrast, GPM aims to automatically discover and learn the relevant multi-dimensional preference representations directly from a single, standard pairwise preference signal ($y_w \succ y_l$). The dimensions in GPM are learned end-to-end to capture relevant preference factors without needing pre-defined semantics or multi-metric labels. The eigenvalue scale gate further allows the model to dynamically weight these learned factors based on the context.
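As a concrete illustration of this expressiveness argument, here is a minimal 2-d sketch of an embedding-based score with a skew-symmetric operator. It uses a single fixed $2\times 2$ block and omits GPM's learned eigenvalue scale gate $D(x)$, so it is a toy instance of the mechanism rather than the actual implementation.

```python
import numpy as np

# Skew-symmetric preference operator for 2-d embeddings (a 90-degree rotation).
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def pref_score(v_i, v_j):
    """Toy preference score s(y_i > y_j); skew-symmetry of R gives s_ij = -s_ji."""
    return float(v_i @ R @ v_j)

def embed(theta):
    """Unit-norm 2-d embedding at angle theta."""
    return np.array([np.cos(theta), np.sin(theta)])

# Three responses placed 120 degrees apart on the unit circle: a cyclic
# configuration that a scalar Bradley-Terry reward cannot represent,
# since it would force r_A > r_B > r_C > r_A.
v_a, v_b, v_c = embed(0.0), embed(-2 * np.pi / 3), embed(2 * np.pi / 3)

s_ab = pref_score(v_a, v_b)  # A preferred to B: positive
s_bc = pref_score(v_b, v_c)  # B preferred to C: positive
s_ca = pref_score(v_c, v_a)  # C preferred to A: positive, closing the cycle
```

All three scores equal $\sin 120^\circ \approx 0.87$, so the model fits the cycle exactly while still satisfying $s(y_i \succ y_j) = -s(y_j \succ y_i)$; higher-dimensional embeddings stack several such blocks with learned, context-dependent scales.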
Summary: This paper introduces preference embedding, a novel approach to model human preferences for aligning foundation models that overcomes the limitations of traditional reward models like the Bradley-Terry model, especially in capturing intricate preferences. The authors propose the General Preference embedding Model (GPM), which embeds responses into a latent space and is more expressive. Furthermore, the paper presents General Preference Optimization (GPO), a method that generalizes reward-based RLHF using the preference scores from GPM. Experimental results on RewardBench demonstrate that GPM consistently outperforms the BT reward model, particularly in modeling cyclic preferences. Evaluations on downstream tasks like AlpacaEval 2.0 indicate that aligning language models with GPO and GPM leads to performance improvements over methods using Bradley-Terry models.

Claims And Evidence: Table 1 presents a comparison of Bradley-Terry (BT) reward models and General Preference embedding models (GPM) on cyclic preference datasets. The results seem convincing and support the claim. The main problem is with Table 3, which shows the main result. It is very unclear. First of all, it's full of acronyms (e.g. LC. WR, WR, Avg Len), and the acronyms are not explained in the caption or main text. I suppose WR probably means "win rate", but it is also unclear to me what the win rate is measured against. I can't understand the results, unfortunately. This paper has obvious writing quality issues. This is the main reason to recommend rejecting this paper.

Methods And Evaluation Criteria: The method makes sense. However, as I explained previously, I don't fully understand the evaluation criteria because of writing issues.

Theoretical Claims: I don't find obvious issues in the theoretical claims.

Experimental Designs Or Analyses: Again, I don't fully understand the experiment settings because of writing issues.

Supplementary Material: Yes. Additional results such as Table 6.
It seems like they have the same writing issues.

Relation To Broader Scientific Literature: It's related to the RLHF and LLM preference optimization literature.

Essential References Not Discussed: I'm not aware of critical misses. However, I think it would be better to add references to more prior work in preference optimization, such as:
Mitchell, A note on DPO with noisy preferences & relationship to IPO, 2023
Liang et al., Robust preference optimization with provable noise tolerance for LLMs, 2024
Furuta et al., Geometric-Averaged Preference Optimization for Soft Preference Labels, 2024.

Other Strengths And Weaknesses: No additional comments

Other Comments Or Suggestions: I wasn't sure what (line 317) "We consider the iterative preference optimization process such as SPPO..." means. I suppose the authors actually mean "GPO is an iterative preference optimization process, similar to SPPO..."?

Questions For Authors: Please revisit the presentation of the experiment section. I believe the whole thing needs to be rewritten.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive comments. We especially appreciate the feedback regarding Table 3 and acknowledge that the acronyms used were not sufficiently explained in the current draft.

1. Clarification of Table 3:
> Q1: First of all, it's full of acronyms (e.g. LC. WR, WR, Avg Len), and the acronyms are not explained in the caption and main text. I suppose WR probably means "win rate", but it's also unclear to me if it means the win rate over what. I can't understand the results, unfortunately.

To clarify:
- LC. WR stands for Length-Controlled Win Rate, a metric from AlpacaEval 2.0 that adjusts for length bias in generation.
- WR denotes the Win Rate, i.e., the proportion of times our model's response was preferred over the baseline's (GPT-4-turbo), as judged by GPT-4-turbo.
- Avg. Len refers to the Average Length of generated responses in tokens.

The win rates are computed using pairwise comparisons between model outputs (e.g., our trained models vs. GPT-4-turbo) on the same set of prompts, with GPT-4-turbo as the evaluator.

2. Why These Acronyms Were Used:
> Again, I don't fully understand the experiment settings because of writing issues.

AlpacaEval is a widely used benchmark in nearly all recent LLM alignment papers (e.g., SimPO, SPPO, MagPie, Nemotron), and the format and metrics (including LC. WR) are standard in the literature. Due to space constraints, we opted to use the established abbreviations. That said, we agree that clearer exposition would benefit broader readability, and we will revise the caption and surrounding text to explicitly define these terms.

3. On the Role of This Table:
> Q3: Table 1 presents a comparison of Bradley-Terry (BT) reward models and General Preference embedding models (GPM) on cyclic preference datasets. The results seem convincing and support the claim. The main problem is with Table 3, which shows the main result.
We emphasize that our primary contribution is the development of a new, expressive architecture (GPM) for modeling general preferences, which addresses the limitations of traditional reward models like Bradley-Terry. The GPO-based alignment results in Table 3 are supplementary and serve to illustrate one possible downstream use case (Section 6.3). We believe that the clarity issues in Table 3, while important to correct, should not detract from the significance of our core contributions. 4. Additional Revisions: > Q4: I'm not aware of critical misses. However, I think it'd be better to add references to more prior work in preference optimization, such as: Mitchell, A note on DPO with noisy preferences & relationship to IPO, 2023 Liang et al., Robust preference optimization with provable noise tolerance for LLMs, 2024 Furuta et al., Geometric-Averaged Preference Optimization for Soft Preference Labels, 2024. We also appreciate the reviewer’s suggestions on related work. We will incorporate a discussion of the following relevant papers in the final version: Mitchell, A note on DPO with noisy preferences & relationship to IPO, 2023 Liang et al., Robust preference optimization with provable noise tolerance for LLMs, 2024 Furuta et al., Geometric-Averaged Preference Optimization for Soft Preference Labels, 2024.
Summary: This paper proposes *General Preference Embedding Model* (GPM) to improve LLM alignment on human preference. The motivation is mainly on addressing limitations of the classical BT reward models such as challenges when facing intransitivity. The authors deal with it by embedding the responses into latent space, introducing a skew-symmetric preference operator to derive preference scores. This approach is novel in the context of LLM alignment and, by their empirical results on AlpacaEval2 and RewardBench, efficient and capable in capturing intransitive preference structures with linear computational complexity O(K). Claims And Evidence: The authors provide some theoretical results, which are personally appreciated, and empirical support. In general I think the claims are solid. But I will definitely appreciate further results on the generalisability of this novel method like for complex cyclic preference (please see 'weakness' below). Methods And Evaluation Criteria: Experiments with cyclic preference datasets and RewardBench convincingly illustrate the performance advantages of GPM over BT models. However, It would be beneficial to include additional context or examples from diverse, real-world scenarios (some benchmarks are not too hard to 'hack' by powerful models). Theoretical Claims: I checked the theoretical claims and think they are clearly stated and easy to understand. Experimental Designs Or Analyses: It would be great if the authors could clarify whether additional robustness checks or sensitivity analyses were conducted, particularly regarding different user demographics or task variations. Supplementary Material: I went through but did not check all the details. Relation To Broader Scientific Literature: This paper focused on RLHF but inspired by literature like preference learning and statistical methods. Essential References Not Discussed: The references are adequate, yet some review on representation learning may be appreciated. 
Other Strengths And Weaknesses: Strengths: 1. As I stated before, I think the theoretical results and efforts should be appreciated, especially in the current LLM community. 2. The 'universal' framework, which is inclusive of current methods (for example, k=1 -> BT model), is a good and inspiring idea. It can also deepen the understanding of currently popular methods. Weakness: 1. While the method is indeed novel and I personally like the paper, I reserve my recommendation for a strong acceptance because, to be honest, some results on these benchmarks may be challengeable; more ablations and real-world scenarios are needed to make really strong claims, and some may argue that cute math often doesn't buy you much. Other Comments Or Suggestions: No more. Questions For Authors: I'm really curious about the embeddings' dimension, like have you analyzed scenarios where embedding dimensions reflect competing or conflicting preferences? If such scenarios were encountered, how does the GPM manage these practical trade-offs? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your time, insightful feedback, and constructive comments on our paper. We appreciate the recognition of GPM's novelty, theoretical grounding, and universal framework potential. We would like to address the specific points raised: > Q1: Generalisability and further results on complex cyclic preferences. A1. We appreciate the suggestion regarding generalisability. As shown in Section 6.2/Table 1, GPM achieves near-perfect accuracy on cyclic preference datasets where BT models perform near-random guessing. This demonstrates GPM's effectiveness in capturing the intransitivity inherent in complex preference structures, a key limitation of BT models. > Q2: Need for more diverse, real-world scenarios/context, noting benchmark challengeability. A2. Thanks for your suggestion. We tested GPM on RewardBench (covering Chat, Chat-Hard, Safety, and Reasoning) and assessed downstream alignment using AlpacaEval 2.0. GPM consistently outperformed BT models across these benchmarks and different base models (Gemma-2B/9B, Llama-3.1-8B). GPM-integrated methods also showed improved win rates. While benchmarks have limitations, these consistent gains across diverse tasks suggest practical advantages. > Q3: More ablation/real-world scenarios are needed to make really strong claims. A3. We acknowledge your point that further studies could strengthen our claims. However, we believe the current results already provide significant support. The demonstrated superiority in modeling cyclic preferences, consistent outperformance on the diverse RewardBench tasks, and improvements in downstream alignment tasks collectively offer strong empirical evidence for GPM's effectiveness and advantages over traditional BT models. We have also included ablation studies on embedding dimensions (Section 6.1, Table 2) and GPM architecture design (Appendix E.1, Table 4). 
> Q4: Curiosity about embedding dimension analysis for competing/conflicting preferences and how GPM manages trade-offs. A4. Thanks for your feedback on analyzing embedding dimensions in scenarios with competing preferences and how GPM handles trade-offs. This is an excellent question that touches upon a core aspect of GPM's design. As discussed in Section 4.2, the multi-dimensional embedding space allows the model to automatically discover subspaces corresponding to various preference facets (e.g., helpfulness, honesty, style). The eigenvalue scale gate, which computes context-dependent eigenvalues {λ(x)}, then modulates the influence of these different dimensions based on the specific prompt. This mechanism allows GPM to dynamically weigh competing preference aspects and manage trade-offs based on context. Our ablation studies on embedding dimensions (Section 6.1, Table 2 and Appendix E.1, Table 4) further explore how performance varies with dimensionality across different tasks and models, indicating that the optimal configuration can depend on the specific trade-offs required. > Q5: Adding references on representation learning. A5. Thank you for suggesting a more extensive review of representation learning. We have acknowledged this connection briefly in Appendix D and agree that it's a relevant area. We will expand on this connection in the revised version.
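For readers unfamiliar with the construction discussed in A4, here is a minimal sketch of a skew-symmetric preference score with context-dependent eigenvalues. The 2-d-block pairing, function names, and all numbers are illustrative assumptions, not the paper's implementation; the sketch only shows why such a score can represent cyclic (intransitive) preferences, which a single Bradley-Terry scalar reward cannot.

```python
import math

def preference_score(u, v, lams):
    """Antisymmetric preference score between response embeddings u and v.

    u, v: embeddings of even dimension 2k (hypothetical values below).
    lams: k context-dependent eigenvalues lambda(x) from a scale gate.
    Each 2-d block contributes lam * (u[2i]*v[2i+1] - u[2i+1]*v[2i]),
    a scaled skew-symmetric pairing, so score(u, v) == -score(v, u)
    holds by construction.
    """
    return sum(lam * (u[2 * i] * v[2 * i + 1] - u[2 * i + 1] * v[2 * i])
               for i, lam in enumerate(lams))

def preference_prob(u, v, lams):
    """P(u preferred over v) as a sigmoid of the antisymmetric score."""
    return 1.0 / (1.0 + math.exp(-preference_score(u, v, lams)))

# A cyclic preference on a single 2-d block: three unit embeddings placed
# 120 degrees apart. Each one scores positively against the next, so
# a > b, b > c, and c > a simultaneously -- impossible for a BT model.
a = [1.0, 0.0]
b = [-0.5, math.sqrt(3) / 2]
c = [-0.5, -math.sqrt(3) / 2]
```

Note that the antisymmetry also guarantees `preference_prob(u, v) + preference_prob(v, u) == 1`, matching the pairwise-comparison setup.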
Token Signature: Predicting Chain-of-Thought Gains with Token Decoding Feature in Large Language Models
Accept (poster)
Summary: The author's overall approach is simple and clear: CoT is not always necessary → Token Signature is correlated with CoT gain → The token probability distribution can be used to determine whether CoT is needed. Through experiments, the study validates that Aggregated SC can effectively indicate whether a task benefits from CoT, and the experimental results are quite promising. Claims And Evidence: Claim: Token Signature can help classify tasks and dynamically decide whether to use CoT reasoning, thereby maintaining high accuracy while reducing unnecessary token consumption. -> This method was tested on multiple tasks across different benchmark datasets, demonstrating a strong correlation between Token Signature and CoT gain. Methods And Evaluation Criteria: The proposed methods and evaluation criteria seem pretty reasonable for the problem they’re tackling. The idea of using Token Signature to predict whether CoT will be effective makes sense, and they back it up with solid benchmarks across different types of tasks. The Aggregated SC metric is a smart way to analyze token probability trends, and the Dynamic CoT method helps optimize when to use CoT, balancing accuracy and efficiency. The evaluation setup is well thought out—they test on a variety of benchmarks (math, commonsense, symbolic reasoning, etc.) and compare against both open-source and closed-source models. That gives a broad perspective on how well the method generalizes. One minor thing is that they only tested closed-source transfer on GPT-4o, so it’d be nice to see results on more proprietary models. But overall, the approach makes sense, and the experiments support their claims well. Theoretical Claims: No theoretical claims Experimental Designs Or Analyses: The experimental design is solid, testing Token Signature and Dynamic CoT across multiple benchmarks and models. 
The correlation between token probability trends and CoT effectiveness is well-supported, but causality isn’t fully established. It’d help to see ablation studies and failure case analysis to understand when Dynamic CoT misclassifies. Also, testing more closed-source models beyond GPT-4o would strengthen generalization claims. Supplementary Material: No supplementary material provided Relation To Broader Scientific Literature: Chain-of-Thought is a widely used trick for LLMs, so this paper should be quite useful for improving overall LLM applications. Essential References Not Discussed: No Other Strengths And Weaknesses: The paper is easy to follow. Other Comments Or Suggestions: Figure 1 - CoT is significantly greater than 0 (p < 0.05). "Predicting Chain-of-Thought Gains with Token Decoding Feature" - Should "Feature" be plural (Features)? Questions For Authors: While the experiments show good results, is there a theoretical basis for why Token Signature correlates with CoT gain? Could information theory or Bayesian updating explain this relationship? Have you considered other indicators (e.g., token confidence variance, entropy trends) to compare against Aggregated SC? Are there cases where Token Signature mispredicts CoT effectiveness? A breakdown of such cases could strengthen the analysis. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to reviewer xvNF: Dear reviewer: Thank you for reviewing our article and raising questions and suggestions. We now respond to the corresponding questions as follows. ### Q1: They only tested closed-source transfer on GPT-4o, so it’d be nice to see results on more proprietary models. **Response:** Thanks for your suggestion. Testing more proprietary models will increase the robustness of our approach. We will include testing of other proprietary models in the latest version of the paper. ### Q2: It’d help to see ablation studies and failure case analysis to understand when Dynamic CoT misclassifies. **Response:** Thanks for your suggestion. We perform some ablation experiments to strengthen our research, such as the prediction accuracy of the SC metric of the first N tokens decoded, and changing the prompt (few-shot CoT/zero-shot CoT) to see the impact. In order to verify the impact of different parameters, we use 10/20/50/100/200 tokens for experiments, and compare the prediction performance of Instance SC and Aggregated SC respectively. The comparison shows that the prediction performance of the first 50 tokens is the best (Instance SC (69.6%) and Aggregated_SC (89.3%)). We also explore the impact of different types of CoT, and the results show the robustness of our method. The above ablation experiment will be updated in the latest version of our paper. In addition, we will also add failure case analysis to the latest version of the paper to analyze the possible causes of failure (classification training samples/random errors, etc.) to ensure the comprehensiveness of our research. ### Q3: Other writing problems in Figure 1. **Response:** Thank you for pointing out the writing problems in the paper. We re-checked the paper and corrected the pointed out problems. ### Q4: Is there a theoretical basis for why Token Signature correlates with CoT gain? **Response:** Thanks for your question. 
We give the following theoretical analysis: Token probability can reflect LLM’s confidence in an answer [1]. Chain-of-thought can improve confidence on inherently serial problems by spending more tokens on deeper search [2]. However, on problems that are not inherently sequential, CoT can have negative outcomes because of snowball errors [3]. In other words, the model's low confidence may steer it down the wrong reasoning path. When CoT is introduced, the snowball effect is exacerbated, amplifying the initial error further. By incorporating the Spearman correlation indicator, we can assess the model's uncertainty at the start of the reasoning process, helping us determine whether applying CoT is beneficial. Therefore, SC is a good metric to predict the performance gain of CoT. We will make the above content clearer in the latest version of the paper. [1] Detecting hallucinations in large language models using semantic entropy. Nature 2024 [2] Chain of thought empowers transformers to solve inherently serial problems. arXiv 2024 [3] Rethinking External Slow-Thinking: From Snowball Errors to Probability of Correct Reasoning. arXiv 2025 ### Q5: Have you considered other indicators (e.g., token confidence variance, entropy trends) to compare against Aggregated SC? **Response:** Thanks for your question. In our initial exploration, we considered multiple indicators, such as the Mean of Absolute Increments and Probabilistic Entropy. For individual cases, the probability distribution of tokens often has a certain degree of divergence. Based on a large number of experimental observations, we believe these two indicators struggle to capture the characteristics of different questions during answering. For CoT gain judgment at the benchmark granularity level, we find that the Mean of Absolute Increments is often affected by changes in the number of questions, and it is difficult to avoid errors caused by randomness, so we discard it.
We observe that the average token probability distribution trend of the initial tokens is not the same across tasks (Figure 2 in the paper), and it is highly correlated with the question category. No prior work studies this from the perspective of the overall decoding distribution. Inspired by this, we use Spearman Correlation as the main indicator. We hope our answer addresses your concerns, and we will add a discussion of the other indicators we explored in the latest version of the paper.
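To make the indicator concrete, here is a dependency-free sketch of the metric the rebuttal describes: Spearman correlation between the decoding position index and the decoded-token probabilities, with Aggregated SC computed over per-position mean probabilities across instances. Function names and the toy inputs are illustrative assumptions, not the authors' code.

```python
def _ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman correlation = Pearson correlation of the rank vectors.
    Assumes neither input is constant (nonzero rank variance)."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def aggregated_sc(prob_matrix):
    """prob_matrix[i][t]: probability of the decoded token of instance i at
    position t (first N tokens, e.g. N=50). Aggregated SC correlates the
    position index with the per-position mean probability."""
    n_tok = len(prob_matrix[0])
    mean_probs = [sum(row[t] for row in prob_matrix) / len(prob_matrix)
                  for t in range(n_tok)]
    return spearman(list(range(n_tok)), mean_probs)
```

Under this reading, a benchmark whose mean probabilities trend upward over the first tokens yields Aggregated SC near +1 (CoT predicted to help), and a downward trend yields a value near -1.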
Summary: This paper presents token signature, i.e., the spearman correlation between token probability distributions and token indices, that can better help decide if a CoT is needed for a specific task or not. The authors further introduced dynamic CoT, that can do online selection of whether to use CoT or direct answer, and showed accuracy and efficiency improvements over multiple tasks. Claims And Evidence: Yes, most claims are well supported. - For the spearman correlation, the authors presented a few examples, is there any intuitive explanation on why this might work? Also, how much is this correlation affected by the CoT prompt (e.g., if I use few-shot CoT instead of zero-shot CoT)? Methods And Evaluation Criteria: The methods are reasonable and evaluation is done over multiple language models (Llama, Phi, Mistral, GPT-4o) and over a wide set of datasets (math, symbolic, knowledge, commonsense etc). Theoretical Claims: N/A Experimental Designs Or Analyses: Yes the experiments are fairly comprehensive. - Evaluation is done over 4 model families and 5 reasoning categories. - The correlation between instance/aggregated SC and CoT gain is well illustrated. - In addition to accuracy improvements, the authors also showed token consumption, indicating efficiency improvements using dynamic CoT. Supplementary Material: N/A Relation To Broader Scientific Literature: Overall the idea is quite novel. The authors observed an interesting phenomenon that the token probability distribution has a positive correlation with token index when CoT is used (and vice versa for direct answers), and proposed a reasonable method to dynamically select whether to use CoT or not for a wide set of tasks. Essential References Not Discussed: No. Other Strengths And Weaknesses: - Overall the idea is novel. 
If possible can the authors add more intuition on why the spearman correlation between token probability and token index can help decide when a CoT is needed or not, other than empirical observations? - The experiments overall are sound. From Table 3 and 4, it's clear that the Dynamic CoT can achieve a very good balance between performance from using a CoT and not using a CoT, and the results transfer well to closed-source models. Figure 4 and 5 present token efficiency improvements, which is another benefit of adopting dynamic CoT. Other Comments Or Suggestions: See above. Questions For Authors: See above. Specifically, can the authors add more intuition on why the SC is correlated with whether CoT is needed or not? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Response to reviewer YmMf: Dear reviewer: Thank you for reviewing our article and raising questions and suggestions. We now respond to the corresponding questions as follows. ### Q1: For the spearman correlation, the authors presented a few examples, is there any intuitive explanation on why this might work? **Response:** Spearman correlation (SC) measures the monotonic relationship between the initial token probability distribution and the sequence order during decoding. It works intuitively for the following reasons: Token probability can reflect LLM’s confidence in an answer [1]. Chain-of-thought can improve confidence on inherently serial problems by spending more tokens on deeper search [2]. However, on problems that are not inherently sequential, CoT can have negative outcomes because of snowball errors [3]. Math tasks (such as GSM8K) have a strict step-dependent structure: arithmetic operations must be performed in sequence, and logical deductions must be carried out step by step. The solution space of such tasks is highly constrained, and the generation of intermediate steps needs to follow clear rules. By introducing CoT, a method similar to deep search, the certainty of the model's answer can be substantially enhanced. For other problems, such as common sense reasoning, the solution space is relatively diverse. When the initial confidence of the model is not high, it is easier for the model to fall into the wrong reasoning path. The introduction of CoT will gradually amplify the error due to the snowball effect. By incorporating the Spearman correlation indicator, we can assess the model's uncertainty at the start of the reasoning process, helping us determine whether applying CoT is beneficial. Therefore, SC is a good metric to predict the performance gain of CoT. [1] Detecting hallucinations in large language models using semantic entropy.
Nature 2024 [2] Chain of thought empowers transformers to solve inherently serial problems. arXiv 2024 [3] Rethinking External Slow-Thinking: From Snowball Errors to Probability of Correct Reasoning. arXiv 2025 ### Q2: How much is the correlation affected by the CoT prompt (use few-shot CoT instead of zero-shot CoT)? **Response:** Thanks for your concern. We have included the impact of other prompt settings, such as few-shot CoT. The results show that for few-shot CoT, our method also achieves high prediction accuracy. We use standard prompts (i.e., only provide required questions as prompts) to calculate the proposed Spearman correlation metric, and then predict the CoT gain at the benchmark granularity. We focus on zero-shot CoT as the main research object and supplement the experiment with few-shot CoT. The experimental results show that the gains of few-shot CoT and zero-shot CoT are highly consistent, so at the benchmark granularity, the proposed method can also well predict the gain of few-shot CoT, and the experimental results are very robust. We hope our answer can address your concern. We will add detailed experimental results as supplementary material in the latest version of the paper.
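The selection step behind Dynamic CoT, described in the reviews as a logistic-regression-based choice between CoT and a direct answer, can be sketched as a toy rule. The weight, bias, and threshold below are hypothetical placeholders, not fitted values from the paper.

```python
import math

def dynamic_cot_decision(instance_sc, w=4.0, b=0.0, threshold=0.5):
    """Toy Dynamic-CoT-style selector: a logistic model maps the
    instance-level Spearman correlation (token signature) to an estimated
    probability that CoT helps. w, b, and threshold are illustrative
    placeholders, not the paper's fitted parameters."""
    p_cot_helps = 1.0 / (1.0 + math.exp(-(w * instance_sc + b)))
    return "CoT" if p_cot_helps >= threshold else "DirectAnswer"
```

A positive signature (token probabilities trending upward over the first decoded tokens) selects CoT; a negative one selects a direct answer, skipping the CoT tokens entirely.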
Summary: The paper makes this observation that in certain tasks (where CoT) is known to help, the probability of the token predicted generally increases as more and more tokens are generated. They propose to exploit this observation to predict whether CoT would help on a task or not. Claims And Evidence: They give some evidence into why their hypothesis might be true. But it's limited to GSM8k level reasoning tasks. Methods And Evaluation Criteria: See weakness section Theoretical Claims: NA Experimental Designs Or Analyses: Yes, I checked. See weakness section. Supplementary Material: Additional results Relation To Broader Scientific Literature: I believe the community at this point is focused on how to prevent overthinking of LLMs in many tasks. While the proposed approach in this paper might not scale to harder tasks with extremely long CoTs, it does related to that line of work. Essential References Not Discussed: NA Other Strengths And Weaknesses: Weakness: - I fail to understand the proposed approach. First, even to predict whether CoT would help or not, they need to generate tokens to get the \(\rho_i\) variable. Thus, you anyways will end up spending compute on each and every test example. - However, the key issue is that the whole work is based on observations on a few benchmarks where it is well known that CoT helps or not. For today’s frontier tasks (e.g., AIME, MATH) where models actually reason for long (e.g., for up to thousands of tokens), such kind of monotonic behaviors might not hold at all. - More crucially, the paper doesn’t share absolutely any reasonable understanding or justification about why token probabilities might increase for math tasks and not for others. L282-287 is not justification, simply stating the algorithm in words. - The writeup of section 5.2 seems quite misleading. The authors mention DirectCoT is ranked amongst top 2 methods in 92% cases, albeit there are in total 3 methods in question. 
Moreover, in most of the tasks they consider in this paper and Table 8, there is not a huge difference between CoT and non-CoT (called DA in the paper) performance. - In Figure 4 and Figure 5, the authors try to mention that their approach consumes fewer tokens. However, many tasks in the Figure are in fact MCQ, where token consumption is naturally low, making the comparison misleading. Other Comments Or Suggestions: See weakness section. Questions For Authors: See weakness section. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## Response to reviewer revN: Dear reviewer: Thank you for reviewing our article and raising questions and suggestions. We now respond to the corresponding questions as follows. ### Q1: Explanation of the proposed method. **Response:** Thanks for your question. We will explain it from the following two aspects: 1. The proposed method does not need to decode the full answer; it only decodes a short prefix of tokens. For example, we use the probabilities of the first N decoded tokens to judge CoT effectiveness at the task-category level in the paper, which does not consume too many computing resources, and our proposed indicator achieves up to 89% CoT prediction accuracy at the task-category level. 2. The proposed method can be transferred to other models. By integrating it with small models, it transfers well to proprietary models without spending additional computing resources. Our results show that the transferred method can reduce token consumption by more than 35% while achieving high accuracy. We will make the above content clearer in the latest version of the paper. ### Q2: The benchmarks are limited to tasks where CoT is known to help, limiting generalizability to complex tasks like AIME or MATH. **Response:** We appreciate your suggestion to validate the method on other, more complex long-term reasoning tasks such as MATH. Our current experiments use classic benchmarks to ensure reproducibility and comparability. It should be emphasized that the core idea of Token Signature - dynamically selecting policies through confidence trends during decoding - is not limited by sequence length because we only decode the first N tokens. For example, even in long sequences, the initial token distribution (such as the first 50 tokens) may still reflect the model's early confidence in step-by-step reasoning.
For AIME, we believe it is a benchmark aimed at today's dedicated reasoning models and may not be suitable for our small models. The results on MATH are consistent with the experiments in this paper. Aggregated SC and Instance SC are greater than 0, and it is predicted that CoT has a positive gain on MATH. We will include relevant experiments on the MATH benchmark in the latest version of the paper. | | Llama-3.2 | Phi-3.5 | Mistral-7B | Llama-3.1 | |------------|-----------|---------|------------|-----------| | Instance SC| 0.0789 | 0.0859 | 0.0473 | 0.0557 | | Aggregated SC| 0.2479 | 0.1834 | -0.1234 | 0.4829 | ### Q3: The paper doesn’t share a reasonable justification for why token probabilities might increase for math tasks and not for others. **Response:** Thanks for your question. Due to character limitations, we answer similar questions in our responses to reviewers YmMf (Q1) and xvNF (Q4). We recommend that you check the comments of other reviewers. ### Q4: The write-up of section 5.2 seems quite misleading. In Table 8, there is no difference in the performance of CoT and DA for most tasks. **Response:** Thanks for your concern. “Dynamic CoT ranks in the top 2 in 92.8% of cases” is intended to show that it can match or surpass the better policy of CoT and DA, rather than absolutely outperforming all possible methods. To avoid possible confusion, we have revised it to: “Dynamic CoT achieves comparable accuracy to the better of CoT or DA in 92.8% of experimental settings.” For many tasks in Table 8, there is not much difference between the performance of CoT and DA, but CoT consumes more tokens than DA, which makes a dynamic judgment of CoT particularly important for improving efficiency and reducing unnecessary expenses. Our method aims to maintain answer accuracy while minimizing the use of tokens. Specifically, our method can effectively reduce token usage by more than 35%. ### Q5: Token consumption comparisons can be misleading.
**Response:** Thanks for your concern. For our experiments, whether it is Short Answer or Multiple Choice, the token usage of the corresponding experiments (CoT & DA) remains at the same level. For the CoT experiment, the token usage is around a few hundred tokens. For the DA experiment, we designed the prompt to limit the model to only output the final answer without the intermediate process, and its token usage is within 10 tokens. Therefore, our comparison is relatively fair. In order to reduce possible doubts about this aspect, we will add the specific token usage of each group of experiments in the latest version of the paper to make the research more transparent. --- Rebuttal Comment 1.1: Comment: I have read the author response. I still do not get the basic motivation, as it's not even hard to predict on what tasks CoT would help intuitively. Moreover, reasoning helps in ways beyond just accuracy as well these days. A lot of experiment settings in the paper are still not clear after the author response (ex. Q5). I maintain my rating. --- Reply to Comment 1.1.1: Comment: **Dear Reviewer,** Thank you again for your valuable feedback. We would like to take this opportunity to further address your concerns and explain the motivation of our work. ### **Motivation** - We aim to explore why Chain-of-Thought (CoT) reasoning provides inconsistent performance gains across tasks and identify predictive signals for determining when CoT is likely to be beneficial. - **We seek to reduce unnecessary CoT usage, particularly in scenarios where it provides little or no benefit, by dynamically deciding whether CoT or a Direct Answer (DA) is more suitable for each task.** This enables us to save computational resources without compromising model performance. 
### **Method Summary** Our approach is designed to address the aforementioned goals in the following manner: - **Transferring from small language models to large language models:** We extend our approach to large language models using the token signature derived from small language models. This allows us to significantly reduce the computation cost of large language models while maintaining high prediction accuracy. Our experiments demonstrate that Dynamic CoT can reduce token consumption by over 33.3% without sacrificing performance. - **Token Signature as Early Predictor:** We observe the probability distribution trends of the first 50 tokens generated during decoding (typically covering 28% of the entire response). We discover that the shape of this distribution (monotonicity) is highly correlated with CoT effectiveness. **Therefore, the signature of the first few tokens can serve as an early predictor of the necessity of CoT, saving most of the computation of decoding every token.** ### **Response to Q5: Potential Misleading Comparison Due to MCQ Format** We understand your concern that the token consumption comparison in Figures 4 and 5 might be biased due to the nature of multiple-choice questions (MCQ). To address this concern and ensure a fair comparison: - We have restructured the GSM8K and MultiArith benchmarks into MCQ format. This standardization eliminates the potential bias introduced by different answer formats (e.g., short answers versus multiple-choice). - We have provided a more detailed breakdown of token consumption for each experiment (CoT, DA, and Dynamic CoT) across the four language models. The results show that Dynamic CoT can reduce token consumption by 35.9%. Detailed results can be found at [Token Consumption Comparison PDF](https://anonymous.4open.science/r/token_signature_rebuttal-59CB/Token%20consumption%20comparison.pdf). 
We hope that this additional clarification addresses your concern regarding the setting of our evaluation and helps to further validate the robustness of our findings. Once again, we greatly appreciate your time and the opportunity to improve our paper. We believe that these updates clarify both the motivations behind our approach and the fairness of our experimental design, and we look forward to hearing your thoughts on these revisions. Sincerely, The Authors
Summary: This paper examines the inconsistency of Chain-of-Thought (CoT) reasoning across different tasks and introduces Token Signature, a novel approach for predicting CoT effectiveness based on token probability distributions. The authors develop two key evaluation metrics, Instance Spearman Correlation (Instance SC) and Aggregated Spearman Correlation (Aggregated SC), to quantify CoT reliability. Additionally, they propose Dynamic CoT, a logistic regression-based method for adaptively selecting between CoT reasoning and direct answers. Extensive experiments on various benchmarks validate the effectiveness of these approaches. ## update after rebuttal I'd like to thank the authors for their rebuttal. After reading the response, I still feel that this paper can be significantly strengthened by including thorough analysis and justification to their technical designs, which is nontrivial and missing from the current manuscript. Given that, I would like to keep my rating unchanged. Claims And Evidence: The claims made in this paper are generally well-supported by the experimental results. The proposed Instance SC and Aggregated SC metrics demonstrate strong predictive performance, achieving 69.7% and 87.5% accuracy, respectively, in assessing CoT effectiveness. Furthermore, the Dynamic CoT approach consistently matches or outperforms the best performance between CoT and direct answers while significantly reducing token consumption. Methods And Evaluation Criteria: The Token Signature approach is well-defined, utilizing Spearman correlation to assess CoT gains, and the evaluation is strengthened by benchmarks covering a diverse range of tasks, including mathematical, symbolic, commonsense, and reasoning challenges. However, several critical technical design choices lack justification. For instance, in Lines 174–176, the authors restrict the indicator score computation to only the first 50 tokens, without explaining why this specific cutoff is appropriate. 
Additionally, in Lines 312–317, the decision to classify CoT effects based on the boundary values -2 and 2 appears arbitrary, and in Lines 317–319, the indicator is deemed accurate if the indicator value > 0 and CoT gain > 2, or if the indicator value < 0 when CoT gain < 2, introducing an asymmetric threshold without clear rationale. A justification for these choices, along with a sensitivity analysis, would be necessary to confirm their robustness and impact on the final results. Theoretical Claims: The paper introduces the hypothesis that token probability distributions correlate with CoT gains, which is supported by empirical results. However, it lacks an in-depth theoretical analysis. A more rigorous formalization or theoretical justification of why token probability distributions should predict CoT effectiveness would strengthen the paper’s contributions and provide deeper insights into the underlying mechanisms driving the observed correlations. Experimental Designs Or Analyses: The experimental design is thorough, incorporating evaluations across multiple LLMs, including both open-source and closed-source models, and a diverse set of benchmarks. Supplementary Material: The supplementary material includes implementation details and further experiment results. Relation To Broader Scientific Literature: The paper builds on prior work in CoT reasoning and LLM decoding strategies, but its novelty lies in the predictive framework for CoT effectiveness, which helps identify when and why CoT reasoning is beneficial. Essential References Not Discussed: To the best of my knowledge, no essential references are missing from the discussion. Other Strengths And Weaknesses: Another concern is that the paper provides limited discussion on decoding strategies, particularly regarding how temperature settings or sampling strategies might influence the effectiveness of Token Signature.
Since token probability distributions can be highly sensitive to decoding parameters, an analysis of their impact would strengthen the evaluation and clarify whether the proposed method remains robust across different inference settings. Other Comments Or Suggestions: The writing quality needs significant improvement, particularly in terms of language use and formatting. Issues such as improper capitalization (e.g., Line 20: “In this work, We initially …”) and missing punctuation (e.g., Line 30: “high accuracy Overall, we …”) are prevalent throughout the paper. Questions For Authors: The motivation presented in Lines 43–48 is unclear and somewhat contradictory. The authors state that “the effectiveness of CoT across different problems and models can be generally inferred from the task category,” yet they also argue that there is no definitive measure of effectiveness, necessitating their proposed methodology. If CoT effectiveness can already be inferred at the task-category level, what is the additional benefit the proposed task-level effectiveness measure provides? Code Of Conduct: Affirmed. Overall Recommendation: 2
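As a reader's aid for the metric under discussion: an Instance-SC-style indicator correlates each token's position with its probability over an early decoding window (the 50-token cutoff the review questions). The following is a minimal illustrative sketch of such a rank-correlation indicator, written by us from the review's description; it is not the reviewed paper's code, and the function names are our own.

```python
def spearman(xs, ys):
    """Spearman rank correlation (no tie handling, for illustration only)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def instance_sc(token_probs, cutoff=50):
    """Correlate token position with token probability over the first
    `cutoff` decoded tokens (the window size questioned by the review)."""
    window = token_probs[:cutoff]
    return spearman(list(range(len(window))), window)

# Monotonically rising confidence yields SC near +1; falling, near -1.
rising = [0.10 + 0.01 * i for i in range(60)]
print(instance_sc(rising))        # ≈ 1.0
print(instance_sc(rising[::-1]))  # ≈ -1.0
```

Under this reading, the sensitivity-to-cutoff question amounts to how `instance_sc` changes as `cutoff` varies, which is exactly the 10/20/50/100/200 sweep the authors report in their rebuttal below.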
Rebuttal 1: Rebuttal: ## Response to reviewer qdHF: Dear reviewer: Thank you for reviewing our article and raising questions and suggestions. We respond to the corresponding questions as follows. ### Q1: Several critical technical designs. 1) Restricting the indicator score computation to only the first 50 tokens in Lines 174–176. **Response:** We use the first 50 tokens, which is an empirical parameter. We believe that if this parameter is set too small, the correlation trend will not be obvious, and if it is set too large, the data at high token indices will be sparse due to the existence of short decoding paths. In general, taking 50 as our experimental setting balances these considerations. To verify the impact of this parameter, we ran experiments with 10/20/50/100/200 tokens and compared the prediction performance of Instance SC and Aggregated SC. The comparison shows that the prediction performance with the first 50 tokens is the best (Instance SC (69.6%) and Aggregated SC (89.3%)). We will add the parameter experiments to the supplementary materials and clearly state the reason for this setting in the main text. Thank you for your advice. 2) About the CoT gain threshold in Lines 312–319. **Response:** Thanks for your question. We used 2 as the critical point for judging the CoT gain in order to account for randomness in the experiments. To make the evaluation more principled, we introduce a z-test and improve the evaluation method. We combine the number of questions N, CoT_Acc, and DA_Acc in the benchmark, and use a two-sided z-test to test the significance of the CoT gain (positive gain/no gain/negative gain). Since the Spearman correlation is itself a statistical indicator[1], there is no need to test it again. Then, the prediction accuracy of the indicator is judged. We have updated our results. Prediction accuracy of each indicator: Instance SC (69.6%) and Aggregated SC (89.3%).
We will update it in the latest version of the paper. [1] The Spearman correlation formula. Science 1905 ### Q2: The result lacks an in-depth theoretical analysis. **Response:** Token probability can reflect an LLM’s confidence in an answer [1]. Chain-of-thought improves confidence on inherently serial problems by spending more tokens on a deeper search [2]. However, problems that are not inherently sequential may see negative outcomes because of snowball errors [3]. In other words, the model's low confidence may steer it down the wrong reasoning path. When CoT is introduced, the snowball effect is exacerbated, amplifying the initial error further. By incorporating the Spearman correlation indicator, we can assess the model's uncertainty at the start of the reasoning process, helping us determine whether applying CoT is beneficial. Therefore, SC is a good metric for predicting the performance gain of CoT. We will make the above content clearer in the latest version of the paper. [1] Detecting hallucinations in large language models using semantic entropy. Nature 2024 [2] Chain of thought empowers transformers to solve inherently serial problems. arXiv 2024 [3] Rethinking External Slow-Thinking: From Snowball Errors to Probability of Correct Reasoning. arXiv 2025 ### Q3: The paper provides limited discussion on decoding strategies. **Response:** Thanks for your suggestion. The decoding strategy is crucial for the output of large models. We conducted experiments on different decoding strategies, mainly a sensitivity analysis of temperature and top-k sampling, comparing the polarity consistency of the SC indicators against greedy decoding under different sampling strategies (at benchmark granularity). Our results show that our method remains robust under changes to the decoding strategy.
For the consistency of different indicators with the greedy decoding strategy after changing the decoding strategy, we obtain Instance_SC (89.7±0.2) and Aggregated_SC (87.3±0.09). We will add a more detailed discussion on decoding strategies in our latest version of the paper. Thank you again for your suggestion. ### Q4: Writing details in lines 20 and 30. **Response:** Thank you for pointing out the writing problem in the paper. We have re-checked the full paper and have corrected the pointed out issues. ### Q5: The motivation presented in Lines 43–48 is unclear and somewhat contradictory. **Response:** Thanks for your question. We clarify our motivation as follows: Judging CoT gain only by task-category is too coarse and cannot explain the inconsistency of task performance within the same category. From a new decoding perspective, we propose a new metric (Instance SC / Aggregated SC) to provide a fine-grained, data-driven effectiveness evaluation criterion for judging CoT gain by quantifying the confidence trend of the model when decoding, which goes beyond empirical inference based on task category. We will update this section in the latest version of the paper.
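The two-sided z-test described in the response to Q1 (testing whether CoT accuracy differs significantly from direct-answer accuracy over the same N questions) can be sketched as follows. This is our own hedged reconstruction using a pooled-variance two-proportion test; the function name, threshold, and example numbers are ours, not the authors'.

```python
import math

def cot_gain_significance(cot_acc, da_acc, n):
    """Two-sided two-proportion z-test on CoT vs. direct-answer accuracy
    over the same n benchmark questions (pooled standard error)."""
    p = (cot_acc + da_acc) / 2.0               # pooled proportion
    se = math.sqrt(p * (1.0 - p) * (2.0 / n))  # pooled standard error
    if se == 0.0:
        return "no gain"
    z = (cot_acc - da_acc) / se
    if z > 1.96:                               # 5% two-sided critical value
        return "positive gain"
    if z < -1.96:
        return "negative gain"
    return "no gain"

print(cot_gain_significance(0.85, 0.70, n=500))  # positive gain
print(cot_gain_significance(0.72, 0.70, n=100))  # no gain
```

A small accuracy gap on a small benchmark is classified as "no gain", which is the randomness concern the fixed ±2 threshold was originally meant to address.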
Efficient Core-set Selection for Deep Learning Through Squared Loss Minimization
Accept (poster)
Summary: This paper proposes a two-phase core-set method for selecting a small but representative subset of training data. The first phase selects samples with the highest contributions, while the second phase employs a lightweight proxy model to evaluate the differences between the remaining samples and already selected samples, further selecting samples that can increase overall diversity. # **update after rebuttal** I would like to thank the authors for their sincere efforts to address my concerns. I have increased my score. Claims And Evidence: In my view, balancing diversity and uncertainty in core-set selection is indeed an important issue. However, the relative importance of diversity and uncertainty may vary in different scenarios. From my perspective, which admittedly may be incorrect, for moderate levels of pruning, more attention should be paid to hard-to-learn samples. This ensures that all rare patterns, which can only be learned through memorization effects, are included in the training set. In this case, the optimization objective of the core set is to reduce redundant samples while retaining the information represented by all samples in the original dataset. On the other hand, in extreme pruning scenarios where very few samples are retained, the focus should be on easy-to-learn samples that represent simple and common patterns. This helps preserve the basic pattern information present in the original dataset in the extreme setting. Based on this perspective, maximizing loss reduction represents easy-to-learn samples, while balancing loss reduction attempts to incorporate hard-to-learn samples. Therefore, the authors' claims themselves are not problematic. However, the evidence provided by the authors is quite lacking: 1. The authors should provide very detailed ablation experiments for maximizing loss reduction and balancing loss reduction to evaluate their respective contributions.
Instead, the authors devote a large portion of pages 3-6 to the formalized expression of the proposed method, which is neither critical nor important. The authors should use some of this valuable space (or include it in the appendix) to clearly experimentally verify the contributions of the two-stage method. 2. The authors should provide experiments with extreme removal rates similar to the baseline methods, such as the 90% pruning ratio used in D2pruning, to test the performance of their method in extreme removal settings (it's worth noting that this is precisely the setting where the authors' method has the most theoretical advantage). 3. The paper lacks a detailed evaluation of wall-time runtime. Even if the authors' method does not have an advantage in this aspect, they still need to conduct experiments to report the results. Methods And Evaluation Criteria: See Claims And Evidence. Theoretical Claims: In the derivations presented on pages 3 to 6 of the paper, the authors primarily propose a conceptual framework to formalize their proposed method. I do not suggest this constitutes theoretical contribution. While I think it is perfectly reasonable and unproblematic for the paper to lack theoretical contributions (definitely not affect my score), the weakness of the authors' experimental evidence leads me to expect some theoretical contributions from this paper. Moreover, as mentioned in the limitations section of the paper, the theoretical derivations in this work are based on too many assumptions. Experimental Designs Or Analyses: See Claims And Evidence. Supplementary Material: The paper has no supplementary material. Relation To Broader Scientific Literature: The authors proposed two-phases method position in two main approaches in **Dataset pruning**: - **Static Selection**: Performed before or in the early stages of training, aiming to identify a representative subset of the training data. 
Key research includes Core Set and Data Pruning studies by:
  - Huggins et al. (2016) Coresets for scalable...
  - Paul et al. (2021) Finding important examples early in training
  - Krishnateja et al. (2021) ``Glister''
  - Xia et al. (2022, 2024) on moderate and refined core-set approaches
- **Dynamic Selection**: Also known as Dynamic Data Pruning. Involves continuous sample selection throughout the training process. Notable works include:
  - Raju et al. (2021) Accelerating deep learning with dynamic data pruning
  - Truong et al. (2023) ``Kakurenbo''
  - Qin et al. (2024) InfoBatch

Essential References Not Discussed: I notice a minor oversight in the paper regarding the discussion and comparative experiments with Dynamic Data Pruning (DDP). Based on empirical evidence, DDP-based methods demonstrate superior performance at moderate pruning rates, while static pruning methods regain their advantage under extreme pruning conditions. I strongly recommend that the authors incorporate discussions of the following papers and include the latest state-of-the-art DDP method in their baseline comparisons in Table 1:
[1] Accelerating deep learning with dynamic data pruning, arXiv 2021.
[2] KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training, NeurIPS 2023.
[3] InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning, ICLR 2024.
[4] Instance-dependent Early Stopping, ICLR 2025.
Other Strengths And Weaknesses: This article is well-organized and the authors attempt to address an important challenge in core-set selection. Other Comments Or Suggestions: I believe this paper currently falls slightly below ICML acceptance standards. I should note that the 5-point scale used here (unlike the traditional 10-point system common in ML conferences) makes the evaluation confusing. I am assigning a Weak Reject, though my precise assessment would be a borderline reject. I hope the authors understand this current status more clearly.
In my view, the paper suffers from insufficient novelty in its contributions and inadequate experimental evaluation. I would direct the authors' attention to D2pruning, which shares similar motivations in addressing both DIVERSITY & DIFFICULTY, but provides substantially more comprehensive experimental validation. To strengthen this work, the authors need to provide extensive additional experiments (I guess other reviewers will suggest specific evaluations as well) to demonstrate their method's contributions beyond D2pruning. A minor suggestion: The term ''Selection ratio'' should be replaced with ''Pruning rate'' (30% Pruning rate = 70% Selection ratio) to maintain consistency with the established terminology in previous literature. Questions For Authors: Not applicable. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Your insights have greatly helped us identify key areas for improvement. We address the main concerns below: ### **Response #1: Insufficient Novelty in Contributions** Rather than introducing a new theoretical guarantee framework, our contribution focuses on the design of a simple, efficient, and practical selection strategy. MRMC can be computed with minimal overhead by measuring sample loss reduction. Although computing the OPT value requires training a proxy model, it is a single linear layer, making training extremely fast. The overall complexity is near-linear (≈O(N)). As the title suggests, “Efficient Core-set Selection” emphasizes not just coreset quality but also selection cost. E.g., MRMC uses only 20 epochs for preparation, whereas methods like D2Pruning require a fully trained (200-epoch) model on CIFAR. Moreover, D2Pruning involves costly feature distance computations and information propagation. On ImageNet-1K, D2Pruning takes 350 seconds to build a 30% core-set, while MRMC-R takes 50s and MRMC under 10s. ### **Response #2: Ablation Experiments** Table 1 already compares MRMC, which only maximizes loss reduction, with MRMC-R, which incorporates balanced loss regularization— the two core techniques in our framework. The regularized version shows clear advantages on more challenging tasks and under smaller coreset sizes. Section 5.3 specifically analyzes the effectiveness of the balancing term. An ablation study would be unnecessary. ### **Response #3: Extreme Pruning** We acknowledge this as a limitation of our work. As a heuristic method, we focus on selection efficiency, which comes at the cost of reduced performance under extreme pruning. While small subsets are highly valuable in interpretability and incremental learning, our method is primarily optimized for moderate ratios, where it achieves comparable performance with low cost. ### **Response #4: Lack of Runtime** We appreciate your comment. 
Although efficiency is central to our work, we omitted a runtime analysis — this was an oversight. We will include wall-clock time in the revision to better support our efficiency claims. As shown in the table, many strong SOTA methods suffer from high computational overhead, which limits their practicality on large-scale datasets. In contrast, MRMC and MRMC-R achieve a favorable balance between coreset quality and selection efficiency. Wall-clock time (seconds) for constructing a 50% coreset on CIFAR-10, CIFAR-100, and TinyImageNet, and a 30% coreset on ImageNet-1K:

| Method | CIFAR-10 | CIFAR-100 | TinyImageNet | ImageNet-1K |
|-----|-----:|------:|--------:|-------:|
| EL2N | 0.1 | 0.1 | 0.2 | — |
| GraNd | 0.1 | 0.1 | 0.2 | — |
| Glister | 20.4 | 27.4 | 75.1 | — |
| CCS | 0.3 | 0.3 | 0.9 | — |
| Moderate | 0.8 | 0.8 | 1.5 | — |
| Dyn-Unc | 0.2 | 0.3 | 0.7 | — |
| Boundary | 102.1 | 127.1 | 290.7 | — |
| D2Pruning | 10.9 | 11.3 | 25.1 | 352.9 |
| MRMC | 0.2 | 0.2 | 0.4 | 9.3 |
| MRMC-R | 1.0 | 1.3 | 2.9 | 50.7 |
This enables savings not only in computation, but also in memory and storage, making SDP more suitable for on-device learning or communication bottlenecks. - **DDP** assumes access to the full dataset throughout training and aims to reduce computational cost via dynamic sample scheduling. DDP can be seen as an alternative to random data scheduling in SGD, and is more closely related to curriculum learning or advanced sampling strategies. Moreover, prior SDP works typically do not compare directly with DDPs, and we followed the same convention. Nonetheless, we agree that DDP is an important direction and will acknowledge it in the revision. ### **Summary** We hope that these responses address your concerns and help position MRMC as an effective solution to the core-set selection problem. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their sincere efforts to address my concerns. I am inclined to increase my score by +1 (as a result, 3). I look forward to seeing all the revisions in the updated version.
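As context for Response #1 in the rebuttal above: MRMC scores each sample by its loss reduction over a short warm-up run and keeps the top contributors, which is what makes selection near-linear. Below is a minimal pure-Python sketch of that selection step as we read it; it is an illustration of the criterion, not the authors' implementation, and all names and numbers are ours.

```python
import random

def mrmc_select(start_loss, end_loss, keep_ratio):
    """Pick a core-set by Maximum Reduction as Maximum Contribution:
    rank samples by their loss reduction over a short warm-up run
    (e.g. 20 epochs in the paper's setting) and keep the top fraction."""
    reduction = [s - e for s, e in zip(start_loss, end_loss)]
    k = round(keep_ratio * len(reduction))
    order = sorted(range(len(reduction)), key=lambda i: -reduction[i])
    return order[:k]

random.seed(0)
start = [random.uniform(1.0, 3.0) for _ in range(100)]      # warm-up start losses
end = [s - random.uniform(0.0, 1.0) for s in start]         # every loss decreases
selected = mrmc_select(start, end, keep_ratio=0.3)          # 30% core-set
print(len(selected))  # 30
```

Selection itself is a single sort over per-sample scores, consistent with the near-O(N) cost claimed in the rebuttal.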
Summary: The authors: * Propose a new core-set selection approach that seeks to balance losses between chosen and unchosen samples by minimizing the overall sum of squared loss. * Introduce the Maximum Reduction as Maximum Contribution (MRMC) criterion, which pinpoints those data points that most substantially reduce the loss—indicating they have the greatest influence on achieving model convergence. * To maintain a fair representation, they impose a balance constraint so that the contributions of the core-set remain evenly distributed. I like the idea of the paper and the writing style. My main concerns are related to the experiments, mainly the competing methods. Claims And Evidence: 1. The authors claim that prior work neglected theoretical guarantees. However, theoretical methods for practical subset selection have been proposed already, both for data (https://proceedings.mlr.press/v202/tukan23a.html, https://ieeexplore.ieee.org/abstract/document/9941065) and model (https://proceedings.neurips.cc/paper_files/paper/2022/hash/f7fc38fdd95fd146a471791b93ff9f12-Abstract-Conference.html) compression. Methods And Evaluation Criteria: The authors evaluate using the accuracy metric on CIFAR10, 100, and Tiny ImageNet. They also see how the loss on the core-set translates to accuracy on the test set. They also analyze accuracy as a function of the three key parameters: the number of initial training epochs, the scale of the regularization subset size, and the loss trade-off parameter. Theoretical Claims: 1. Equation 1, why D\C and not the full data D? 2. I don't see how the second derivation works in (6); please provide the details carefully for review. ----------------------- In general, the paper is not a theoretical one; while I agree it has a good theoretical *motivation*, it does not have theoretical guarantees.
Thus I would not use the statements "balance efficiency and theoretical guarantees", "Both approaches are theoretically robust and easy to implement.", and many others -- I don't see theoretical guarantees on training, approximation error, generalizability, or anything else. Experimental Designs Or Analyses: Experiments are not good enough, as the authors do not compare with SOTA baselines which achieve much better accuracy. See cifar10/100 results in the paper "https://proceedings.mlr.press/v202/tukan23a.html," and all the other competing methods there. Supplementary Material: x Relation To Broader Scientific Literature: x Essential References Not Discussed: x Other Strengths And Weaknesses: Strengths: 1. good idea with some theoretical motivation -- the idea is that a good coreset should satisfy two things: diversity, and effect on the loss. So the authors compute the subset that contributes the most to reducing the loss, and add a regularization to constrain the diversity of the coreset. 2. Writing is good. Weaknesses: Mainly experiments, prior work has some better results. Other Comments Or Suggestions: x Questions For Authors: You assume that “attribution values” (the pairwise gradient inner products) are stable and roughly symmetric. Have you measured how large the asymmetry (i.e., |a_{j,i} − a_{i,j}|) is in practice? Do you see any specific scenarios or datasets where this assumption might break? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and positive comments on our idea and writing. Below we address the main concerns: ### **Response #1: Theoretical Guarantees in Prior Work** We realize that our original phrasing may have led to confusion. Our intention was to emphasize the practical gap between efficient heuristics and theoretically grounded methods, rather than to dismiss existing work with guarantees. We will update the manuscript to better reflect this nuance. ### **Response #2: Theoretical Claims** Following your suggestion, we reviewed the relevant literature and will modify Equation (1) to use D instead of D\C. As for Equation (6), we use a first-order Taylor expansion to approximate the change in loss, a standard approach in prior work. We avoid second-order terms to maintain algorithmic simplicity. We will add clarifications in the revised manuscript to justify this approximation. ### **Response #3: No Theoretical Guarantees** Thank you for pointing this out. We acknowledge that our method does not provide formal guarantees on generalization or approximation error. Rather, it is a heuristic framework inspired by theoretical insights. We will revise the language to clearly state that our method is “theoretically inspired” rather than “theoretically guaranteed.” ### **Response #4: Comparison to SOTA Baselines** Beyond accuracy, a key contribution of our method is its selection efficiency. While achieving accuracy comparable to SOTA methods, our approach incurs significantly lower computational cost. We acknowledge the lack of runtime and complexity analysis in the original submission and will include it in the revision (see Responses #1 and #4 to Reviewer opAA). We also note that coreset evaluation protocols vary widely across prior work, making fair comparisons difficult.
Key differences arise in two stages: - **Preparation before selection**: Some methods (e.g., *D2Pruning*, *Boundary-CCS*) require a fully trained model (e.g., 200 epochs) prior to selection, which is computationally expensive. In contrast, our method selects using a partially trained model (e.g., 20 epochs), offering substantial efficiency gains. - **Training after selection**: - **Warm start**: Training proceeds without reinitializing the model after selection. This typically improves performance. We provide results showing our method further benefits from this tip. - **Fixed iterations**: Keeping the number of SGD steps constant across core-set sizes nullifies the efficiency benefit of using smaller subsets. - **Fixed epochs (ours)**: Using the same number of training epochs across different core-set sizes reduces the overall training cost as the size of the coreset decreases. Our experiments adopt the fixed-epoch, no-warm-start setting and include reproductions of the baseline methods for relatively fair comparison. Even under these constraints, our method—trained with only a few epochs—achieves comparable results to *D2Pruning*, which relies on full model training. Furthermore, our selection algorithm has near-linear time complexity (≈O(N)), making it scalable to large datasets such as ImageNet-1K. Regarding Tukan et al. (ICML 2023), as cited by the reviewer, their focus is on extremely small coreset sizes under a highly specific setup (learning rate 0.01, batch size 20, 300 epochs). This setting differs significantly from our experimental protocol and most existing data pruning benchmarks, making direct comparison less appropriate. 
| Method | C10(70%) | C10(50%) | C10(30%) | C100(70%) | C100(50%) | C100(30%) | TI(70%) | TI(50%) | TI(30%) |
|:----------------:|:--------:|:--------:|:--------:|:---------:|:---------:|:---------:|:-------:|:-------:|:-------:|
| MRMC-R | 95.46 | 95.24 | 93.13 | 77.17 | 74.46 | 68.66 | 60.42 | 57.03 | 50.88 |
| MRMC-R-warm | 95.65 | 95.25 | 93.88 | 77.83 | 75.00 | 70.05 | 61.44 | 58.46 | 55.01 |

### **Response #5: Empirical Validation of Key Assumptions** We appreciate the suggestion. We conducted a case study to empirically validate these assumptions (see Response #2 to Reviewer JTrd). For symmetry, the average relative error between attribution scores a_ij and a_ji is consistently below 24%, with clear diagonal symmetry observed. For stability, attribution scores remain stable across different core-set sizes, with relative errors below 30%. Initial observations suggest that optimizer settings and core-set size influence these properties.
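Response #5 above refers to an empirical symmetry check on pairwise attribution scores. Under our reading of the attribution (a first-order Taylor estimate, a_ij ≈ lr · ⟨∇l_i, ∇l_j⟩, as discussed in Response #2), symmetry holds exactly when both gradients come from the same step and degrades as gradients drift between steps. The sketch below is a self-contained toy illustration with synthetic gradients, not the authors' case study.

```python
import math
import random

def attribution(g_i, g_j, lr):
    """First-order attribution of a step on sample j to sample i's loss:
    a_ij ≈ lr * <grad_i, grad_j> (our reading of the rebuttal's Eq. (6))."""
    return lr * sum(a * b for a, b in zip(g_i, g_j))

def relative_symmetry_error(a):
    """||A - A^T||_F / ||A||_F; 0 means perfectly symmetric attributions."""
    n = len(a)
    num = math.sqrt(sum((a[i][j] - a[j][i]) ** 2 for i in range(n) for j in range(n)))
    den = math.sqrt(sum(v * v for row in a for v in row))
    return num / den

random.seed(1)
grads = [[random.gauss(0.0, 1.0) for _ in range(16)] for _ in range(20)]
same_step = [[attribution(gi, gj, 0.1) for gj in grads] for gi in grads]
print(relative_symmetry_error(same_step))  # 0.0: exactly symmetric within one step

# Gradients drift between steps, which breaks exact symmetry -- the effect
# the rebuttal quantifies as a 16-24% relative error on real data.
drifted = [[g + 0.3 * random.gauss(0.0, 1.0) for g in row] for row in grads]
cross_step = [[attribution(gi, gj, 0.1) for gj in drifted] for gi in grads]
print(relative_symmetry_error(cross_step) > 0.0)  # True
```

This mirrors the rebuttal's finding that symmetry degrades as training dynamics (and hence gradient drift) grow.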
Summary: This paper presents a novel coreset selection method, which minimizes the squared loss to balance contributions between coreset and non-coreset samples. The method follows a two-step process: first select samples with the highest MRMC value, then train a proxy model to select samples to increase diversity. Extensive experiments on benchmarks demonstrate that the proposed method significantly accelerates training and reduces computational costs while improving model performance. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes. The code looks good to me. Relation To Broader Scientific Literature: The paper introduces a novel coreset selection criterion that reframes sample importance via squared loss minimization. It extends prior gradient-based approaches by introducing the Maximum Reduction as Maximum Contribution (MRMC) metric to quantify each sample’s impact on training. Essential References Not Discussed: [1] Refined coreset selection: Towards minimal coreset size under model performance constraints. ICML 2024 [2] Mind the Boundary: Coreset Selection via Reconstructing the Decision Boundary. ICML 2024 Other Strengths And Weaknesses: Strengths: 1. The method is built on a theoretical foundation, providing rigorous justification for the proposed approach. 2. The novel MRMC criterion is both technically sound and intuitive, offering a novel perspective on evaluating sample importance. 3. The experimental analysis is thorough and demonstrates promising performance gains across multiple datasets. Weaknesses: 1. Several works in coreset selection are missing, as mentioned in the section above. 2. The assumption of symmetry in attribution values based on mini-batch SGD can be fragile, especially since it relies on a small learning rate, while training in the experiment begins with a high learning rate of 0.1. 3.
GraNd score proposed in [1] should be included for comparison. [1] Deep Learning on a Data Diet: Finding Important Examples Early in Training. NeurIPS 2021 Other Comments Or Suggestions: 1. The paper should empirically validate key assumptions made in the theoretical analysis. For instance, it is essential to verify whether the attribution values are indeed approximately symmetric as assumed in Section 4.3, and if they remain stable throughout training (as mentioned in line 207). 2. The Taylor expansion used in Eq. (6) assumes that the high order terms are negligible. Empirical study is needed to assess the impact of the residual terms on approximating $\Delta l_i^t$. Questions For Authors: Please address the Comments section. 1. Following weakness 3, how does GraNd perform on the benchmarks? 2. In section 4.3, you argue that a small number of training epochs on full samples are sufficient. How does the MRMC's performance change when the number of training epochs $R$ exceeds 20? 3. Can you provide the architecture of the proxy model used for regularization and the hyperparameter for training the proxy model? Code Of Conduct: Affirmed. Overall Recommendation: 4
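On the Taylor-expansion concern raised in this review: for a single gradient step with learning rate η, the first-order estimate of the loss change is Δl ≈ -η‖∇l‖², with a residual of order η². A tiny self-contained check on a toy quadratic loss (our illustration of the general principle, unrelated to the paper's code):

```python
def loss(w):
    return 0.5 * (w - 3.0) ** 2  # toy quadratic loss in one parameter

def grad(w):
    return w - 3.0               # its exact gradient

w, eta = 0.0, 0.01
actual = loss(w - eta * grad(w)) - loss(w)     # true loss change after one step
first_order = -eta * grad(w) ** 2              # first-order Taylor estimate
residual = actual - first_order                # O(eta^2) term
print(abs(residual) < eta ** 2 * grad(w) ** 2)  # True: residual is second order
```

For small learning rates the residual shrinks quadratically, which is the usual justification for dropping higher-order terms; the review's point is that this should be verified empirically at the learning rates actually used.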
Rebuttal 1: Rebuttal: Your suggestions are very helpful for improving the quality of this work. Below we address each of your concerns: ### **Response #1: Missing Related Work** Thank you for pointing this out. In the revised manuscript, we will add a discussion on “Refined Coreset Selection” (ICML 2024), which formulates the problem as a bi-level optimization and employs Lexicographic Optimization for solution. This work belongs to the class of optimization-based methods and provides an important reference. “Mind the Boundary” (ICML 2024) is already discussed in our original manuscript and also included as a baseline named "Boundary" in our main experiments (see Section 5.1 Competitors). We will clarify this more explicitly in the revised version. ### **Response #2: Empirical Validation for Assumptions** We appreciate the reviewer’s observation. It was indeed an oversight on our part not to empirically validate the assumptions. To address this, we conducted a case study to assess the **symmetry and stability** of the attribution scores. Specifically, we randomly sampled 50 instances from CIFAR-100 and computed their attribution scores throughout the 10 training epochs under four different settings: using the full dataset and coresets of 30%, 50%, and 70%. This resulted in four 50*50 attribution matrices. **Symmetry:** Visualizations of the attribution matrices consistently exhibit strong diagonal symmetry across all settings. The relative symmetry errors are 16%, 19%, 22%, and 24% for the 30%, 50%, 70% coresets, and full dataset, respectively—indicating that symmetry degrades as the amount of training data increases. **Stability:** We further normalized the attribution scores and computed the average relative error against the full-data attribution matrix. The relative errors for the 30%, 50%, and 70% core-sets are 27%, 26%, and 22%, respectively, suggesting improved stability with larger core-sets. 
(Due to the review policy, we are unable to include external links to visualizations.)

### **Response #3: Comparison with GraNd**

GraNd, proposed in [1], is a well-known and efficient method, and EL2N is its approximation. As expected, the two methods yield similar experimental results. Following your suggestion, we implemented GraNd and will update the results in the revised manuscript. Similar to EL2N, GraNd's performance drops significantly as the core-set size decreases. In contrast, our MRMC-R method consistently outperforms GraNd at 50% and 30% coreset sizes, especially on more challenging tasks. This demonstrates the strength of MRMC in scenarios where effective selection is needed.

### **Response #4: Proxy Model Details**

The proxy model used in our method is the final linear classification layer (i.e., `copy.deepcopy(model.classifier)`). Analyzing gradients from the output layer is a common and efficient practice in coreset literature. As described in line 350, the proxy model is trained using SGD with a learning rate of 0.01, batch size of 512, for 10 epochs. Since it contains only one linear layer, training is very fast—even on large datasets like ImageNet-1K, it completes in under one minute.

### **Response #5: Training Epochs R > 20**

We originally evaluated R = 5, 10, and 20 in the sensitivity analysis, but did not include larger values. Thank you for pointing this out. To address this, we conducted an experiment with R = 30. The results show that, except for the 70% and 50% core sets of CIFAR-100, performance generally degrades. This indicates that an excessively large R is inappropriate, likely because it leads to overfitting early in training and reduces the effectiveness of the attribution signal.
| Method | C10(70%) | C10(50%) | C10(30%) | C100(70%) | C100(50%) | C100(30%) | TI(70%) | TI(50%) | TI(30%) |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| MRMC-R (R=20) | 95.46 | 95.24 | 93.13 | 77.17 | 74.46 | 68.66 | 60.42 | 57.03 | 50.88 |
| MRMC-R (R=30) | 95.35 | 95.16 | 93.00 | 77.83 | 74.99 | 66.38 | 60.25 | 56.32 | 49.66 |
| GraNd | 95.01 | 94.59 | 88.91 | 77.02 | 66.58 | 49.23 | 55.98 | 39.67 | 20.41 |
| EL2N | 95.40 | 95.23 | 91.94 | 77.61 | 67.60 | 32.55 | 56.08 | 38.96 | 13.79 |

### **Summary**

We hope that the added analyses and new experimental comparisons comprehensively address your concerns.

[1] Deep Learning on a Data Diet: Finding Important Examples Early in Training. NeurIPS 2021

---

Rebuttal Comment 1.1: Comment: Good work. My concerns are fully addressed.
Summary: The authors propose a new objective to select subsamples for training deep learning models. They call it MRMC (Maximum Reduction as Maximum Contribution), which essentially says that a data point that leads to more reduction in the squared error loss during the first few epochs of training, relative to other points, is more important and should be part of the subsamples used for further training. They also introduce a regularization term in their objective to take care of overfitting. They show the effectiveness of their sampling method compared to a variety of methods in their experiments on standard image datasets.

Claims And Evidence: The paper is written clearly and is easy to follow. The claims are convincing enough.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: There are not many theoretical claims in the paper. However, the theoretical intuition behind the main algorithm is clearly explained.

Experimental Designs Or Analyses: Seems good but some more experiments may be added. See weaknesses.

Supplementary Material: Supplementary material gives the code for reproducing the results in the paper. I did not go through it much.

Relation To Broader Scientific Literature: With the huge deep learning architectures being used in today's age, subsampling algorithms are very important. There are either algorithms with theoretical guarantees that are practically hard to implement, or well-performing heuristics without any guarantees. This paper tries to bridge the gap by providing a new subset selection objective which is theoretically motivated and grounded, and at the same time good in practice.

Essential References Not Discussed: Not to the best of my knowledge.

Other Strengths And Weaknesses: Strengths: 1) Subset selection is becoming a very important problem in the era of huge models and this paper would be of interest to a broad community. 2) The algorithm is intuitive and simple.
3) Experiments are presented on a good number of datasets and show very good results.

Weaknesses: 1) Although the theoretical intuition behind the idea and framework is clear, the method does not really give theoretical guarantees. 2) In the experiments section, the authors have not made comparisons with what they call the optimization-based methods, like GradMatch. I think these comparisons are also required as they are essentially the methods with some theoretical guarantees. Even though I admire the detailed experiments the authors have done, I believe this kind of study needs to be even more comprehensive.

Other Comments Or Suggestions: See Weaknesses

Questions For Authors: None. Please address the weaknesses.

Ethical Review Concerns: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and fair assessment. We appreciate the opportunity to address your concerns.

### **Response #1: Lack of Theoretical Guarantees**

We acknowledge that our method does not provide formal theoretical guarantees. This work aims to bridge the gap between purely heuristic methods and theoretically grounded but computationally expensive approaches. Our method (closer to a heuristic method) is motivated by a clear theoretical intuition and achieves higher performance than other heuristic algorithms. We will provide more analysis and supporting empirical evidence to justify this intuition and our contributions in the revised manuscript:
1. **Complexity analysis** of MRMC and **runtime comparisons** (see Responses #1 and #4 to Reviewer *opAA*) to demonstrate its practical efficiency.
2. A **case study validating the assumption** that attribution scores are approximately symmetric and stable, which is fundamental to our method (see Response #2 to Reviewer *JTrd*).

While a full theoretical guarantee is absent, we believe this combination of intuition, analysis, and empirical validation provides a meaningful step forward in the design of practical and reliable core-set selection algorithms.

### **Response #2: Comparison with Optimization-based Methods**

Thank you for highlighting the importance of comparison with optimization-based methods. We addressed this concern by attempting to reproduce GradMatch [1] and Glister [2] as optimization-based baselines. Unfortunately, we found that GradMatch (with GPU) was too computationally demanding—core-set selection time exceeded even full model training time in our setting. Other optimization-based methods also face this issue.
Therefore, we selected Glister as a more practical baseline and adapted it by using only output-layer gradients to reduce its computational cost (selection time reduced to 27 seconds on CIFAR-100), but it is still much slower than MRMC-R (3s) and MRMC (0.2s). We have now added more comparisons (i.e., Glister, GraNd) in the revised manuscript. The results show that MRMC achieves better core-set quality while being significantly more efficient. (C10 and C100 correspond to CIFAR-10 and CIFAR-100, respectively, and TI refers to Tiny-ImageNet. The numbers in parentheses indicate the core set size.)

| Method | C10(70%) | C10(50%) | C10(30%) | C100(70%) | C100(50%) | C100(30%) | TI(70%) | TI(50%) | TI(30%) |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| MRMC | 94.94 | 94.61 | 92.27 | 77.76 | 74.43 | 67.28 | 59.89 | 56.89 | 49.99 |
| MRMC-R | 95.46 | 95.24 | 93.13 | 77.17 | 74.46 | 68.66 | 60.42 | 57.03 | 50.88 |
| Glister | 94.80 | 94.20 | 90.15 | 76.80 | 73.21 | 66.90 | 57.20 | 55.05 | 48.23 |

### **Summary**

We hope that the added analyses and new experimental comparisons comprehensively address your concerns and help position MRMC as an effective and scalable solution to the core-set selection problem. Thank you again for your thoughtful feedback.

[1] GRAD-MATCH: Gradient Matching based Data Subset Selection for Efficient Deep Model Training, ICML 2021
[2] GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning, AAAI 2021
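For context on what is being compared in this thread, here is a minimal sketch of selection by early-training loss reduction, the MRMC idea as summarized in the review. All names, the reduction measure, and the success criterion are our assumptions, not the authors' implementation.

```python
import numpy as np

def mrmc_scores(per_epoch_losses):
    """per_epoch_losses: (R, n) array of per-sample losses over the first R
    training epochs. A sample whose loss drops more, relative to other
    samples, is scored as contributing more and is ranked higher."""
    return per_epoch_losses[0] - per_epoch_losses[-1]  # total early reduction

def select_coreset(per_epoch_losses, fraction):
    """Return indices of the top `fraction` of samples by loss reduction."""
    scores = mrmc_scores(per_epoch_losses)
    k = int(fraction * per_epoch_losses.shape[1])
    return np.argsort(scores)[::-1][:k]

# Toy example: 3 early epochs, 6 samples.
losses = np.array([[1.0, 2.0, 1.5, 0.9, 2.5, 1.2],
                   [0.8, 1.2, 1.4, 0.9, 1.0, 1.1],
                   [0.6, 0.5, 1.3, 0.9, 0.4, 1.0]])
idx = select_coreset(losses, 0.5)
print(idx)  # the three samples whose loss dropped most
```

A regularization term (as in MRMC-R) would be added on top of these raw scores; this sketch only illustrates the ranking step.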
Training a Generally Curious Agent
Accept (oral)
Summary: This paper proposes a fine-tuning approach named PAPRIKA to improve LLM agents' decision-making capabilities, along with a curriculum learning algorithm that improves PAPRIKA's sampling efficiency. The experimental results show that the proposed PAPRIKA approach improves the success rate across different tasks by a large margin compared to the original base model, and the curriculum learning algorithm further enhances the agent's decision-making capabilities.

Claims And Evidence: Yes. The claims made are supported by evidence.

Methods And Evaluation Criteria: Yes. The proposed method is evaluated on benchmark datasets.

Theoretical Claims: Q1: In Algorithm 1, the pseudocode says that there is an input B for parallel experience collection, but there is no evidence in the pseudocode that the collection process is parallel. What is the intent of B? Q2: In Section 3.3, the SFT objective and RPO objective formulations seem to imply that the training sets for SFT and RPO are the same. But the Experimental Setup mentions 15,744 training trajectories for SFT and 4,388 trajectories for RPO, so it seems that they are different training sets. If they are different, the difference should be made explicit in both objective formulations in Section 3.3.

Experimental Designs Or Analyses: Q1: In Section 4.1 (Finetuning on regular multiturn data does not help), you fine-tuned Llama-3.1-8B-Instruct on 100,000 English-language trajectories sampled from WildChat; why not the whole dataset? Q2: In Section 4 (PAPRIKA improves LLM decision making abilities), it is stated that including trajectories from the bandit task in the training set can increase the success rate to 100%, but there is only one test case in your bandit best-arm-selection game. That seems not so convincing; why not add more test cases?

Supplementary Material: Yes. Appendix A, B, C, D, E and F.

Relation To Broader Scientific Literature: 1.
This approach addresses limitations highlighted in prior work—such as poor performance on even simple multi-armed bandit tasks (Krishnamurthy et al., 2024)—by training the model on diverse decision-making tasks that require iterative reasoning. 2. In-context learning allows models to adapt to new tasks from a few examples. PAPRIKA leverages this capability and combines it with reinforcement learning principles, effectively training the model to perform in-context reinforcement learning across multiple rounds of interaction. 3. One of the notable findings is that PAPRIKA-trained models exhibit strong zero-shot performance on unseen tasks. The demonstrated transferability highlights the paper's contribution to bridging the gap between task-specific training and broader, more adaptable decision-making capabilities.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: Strengths: Originality: The paper presents a framework, PAPRIKA, that combines synthetic trajectory generation, a sequential variant of Direct Preference Optimization (DPO), and curriculum learning. This combination introduces a new way to train LLMs for multi-turn decision-making tasks, which distinguishes it from previous approaches that focus mainly on single-turn interactions. Weaknesses: Scalability Concerns: Generating large volumes of synthetic trajectories can be computationally expensive. The paper could benefit from a discussion on how this scalability challenge might be addressed, especially when extending the framework to more complex or real-world environments.

Other Comments Or Suggestions: No.

Questions For Authors: No. Please refer to the above questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and positive review.

> Q1: In Algorithm 1, the pseudocode said that there is an input $B$ …

We thank the reviewer for catching this typo. $B$ in the pseudocode has the same purpose as $C$, in the sense that we collect $C$ trajectories in parallel for each of the $T$ tasks (selected by the algorithm). The collection of the $C$ trajectories is parallelized across multiple GPUs. We will edit our paper to reflect this more clearly. One can in practice also parallelize the data collection across multiple tasks (beyond trajectories for a single task as we do) with more computational resources. In this case, a batched variant of UCB [1, 6] can be used to update the arm parameters, which we leave for future work.

> Q2: In Section 3.3, the SFT objective and RPO objective…

We apologize for the confusion. In Section 3.3, we define loss functions over any dataset $D$, and do not differentiate between the dataset used for the SFT vs. RPO stages. We will update the paper to clarify our notation. However, we also want to provide some clarifications here: for the SFT phase, we collect **all** successful trajectories for each training task $\tau$. During the RPO stage, for each training task $\tau$, we pick the best trajectory and one of the worse-scoring trajectories at random to form **exactly one** preference pair. We sample 20 trajectories per task, a large number of which can be successful and included in the SFT phase, but we construct at most 1 preference pair per task. The reason we only form one preference pair per task is that we observed that using more than one per task can cause unintentional unalignment [2, 3] and lead to performance degradation.

> Q1: In Section 4.1 Finetuning on regular multiturn data does not help…

Fine-tuning on the full WildChat-1M dataset takes close to 10 days with our computational resources and we could not run that.
Moreover, certain trajectories in WildChat have very long context length (> 100,000) and we do not have the resources to train on such long trajectories. Also **note that we use only ~20K trajectories for Paprika**, showing that targeted synthetic trajectory generation is much more sample efficient for the tasks we care about in this work.

> Q2: …but there is only one test case in your Bandit best arm selection…

Apologies for the confusion. Even though we have only one test case (e.g., picking colors), **we test our models on 100 iterations of the test case, each time randomly sampling the bandit arm probabilities**. For each specific set of bandit arm probabilities, we run 4 iterations of the game, and report the average pass@4 success rate to be 100%. We will edit the paper to make this point clearer. To follow the reviewer's suggestion, we generate **20 more bandit tasks** (so, 20 problem descriptions + different arm names) using gpt-4o-mini and test Paprika on them, with the pass@4 success rate reducing slightly to 98%, still outperforming the regular instruct model (see [this Figure](https://ibb.co/xtnt563Z), first row, first column).

> Weaknesses: Scalability Concerns…

Sampling trajectories is indeed the main time bottleneck. However, we believe this is not too different from other RL tasks. Sampling is the only way we can get experience from the model, and it is still considerably cheaper than collecting human demonstrations. Recent works have found similar approaches to be very effective for other domains like math or coding [4, 5]. Collecting expert data and doing domain-specific pre-training or mid-training can improve sample efficiency during the RL phase [7]. Another scalability concern is designing particular tasks for training, and a potential solution is to train another LLM to generate these tasks and then use our curriculum algorithm to select which ones to train on at every step.
## Additional Experiments We have also run additional experiments as per the other reviewers’ suggestions: 1. Paprika with Gemma-3-12B-IT: Our experiments show that Paprika works with Gemma-3-12B-IT as the base model. We also see the **Paprika fine-tuned model being comparable or better than GPT-4o-mini in 7/10 task groups**. See full results [here](https://ibb.co/dsv8xG3q). 2. More tests for generalization: We extend Figure 4 by running leave-one-out experiments on 5 more task groups, and Paprika (LOO) improves over the instruct model in 9/10 task groups, showing strong generalization: [Figure](https://ibb.co/xtnt563Z) ## References [1] Multi-Armed Bandit Problem and Batch UCB Rule [2] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization [3] Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data [4] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning [5] OpenAI o1 System Card [6] Perchet, Vianney, et al. "Batched bandit problems." (2016): 660-681 [7] Cognitive behaviors that enable self-improving reasoners, or, four habits of highly effective stars --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. After reviewing the rebuttal, I have updated the score to Accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer WvQu, Thanks a lot for your thoughtful questions and suggestions, they would help improve our paper significantly! We are also grateful for your positive review of our work. Please let us know if you have any followup questions or if we can further clarify something. Thanks, Authors
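As a side note for readers, the data-construction recipe clarified in this thread (keep all successful trajectories for SFT; form at most one preference pair per task from the best trajectory and a randomly chosen worse-scoring one) can be sketched as follows. This is a hypothetical minimal implementation: the data layout, the success threshold, and all names are our assumptions, not the paper's code.

```python
import random

def build_finetuning_data(task_trajectories, seed=0):
    """task_trajectories: dict mapping task id -> list of (trajectory, score),
    higher score = better. Returns SFT examples (all successful trajectories)
    and at most one (chosen, rejected) preference pair per task."""
    rng = random.Random(seed)
    sft_data, preference_pairs = [], []
    for task, trajs in task_trajectories.items():
        # SFT stage: keep every successful trajectory (score > 0 assumed = success).
        sft_data.extend(t for t, s in trajs if s > 0)
        # RPO stage: best trajectory vs. one randomly chosen worse-scoring one.
        best_traj, best_score = max(trajs, key=lambda ts: ts[1])
        worse = [t for t, s in trajs if s < best_score]
        if worse:
            preference_pairs.append((best_traj, rng.choice(worse)))
    return sft_data, preference_pairs

# Toy example: one task with 3 sampled trajectories.
data = {"twenty_questions/apple": [("traj_a", 1.0), ("traj_b", 0.5), ("traj_c", 0.0)]}
sft, pairs = build_finetuning_data(data)
print(sft)    # ['traj_a', 'traj_b']
print(pairs)  # one pair: ('traj_a', ...) with a randomly picked worse trajectory
```

The one-pair-per-task cap mirrors the rebuttal's point that constructing multiple pairs per task risked unintentional unalignment.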
Summary: This paper presents Paprika, a finetuning method that enables models to perform in-context RL for unknown environments. Different aspects of Paprika have been studied under different settings, but not for this particular setting: e.g., multi-turn, not interacting with a human, and more general environments. A training dataset is constructed using different LLMs for text-based tasks. The diverse tasks come with different types of environmental feedback. There are two important aspects of training/fine-tuning. First, the model is trained using a combined DPO and SFT loss. The second, which is unique to this work, is the scalable online curriculum learning used to actually construct the training dataset. The coefficient of variation is used to measure the learning potential of a particular task, and a multi-armed bandit algorithm is used to actually make the task selection (for the training dataset). Experiments cover several aspects, with the main focus on whether training on n-1 tasks can lead to good performance using RL ICL on the unseen nth task.

Claims And Evidence: The main idea/claim of the paper is that LLMs can be trained to perform ICL RL through the finetuning method presented (Paprika). There is not really an ablation (on the different parts of the method), so it is hard to tell which parts are most important, but the experimental results do show the effectiveness of Paprika as a finetuning method.

Methods And Evaluation Criteria: Given the particular task setup, the evaluation criteria make sense. Also, standard evaluation criteria are used in Section 4.1, which allows for comparisons to other methods.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experimental designs for both the task-specific setting (training for ICL RL) and the general setting are sound and valid. One weakness, which is mentioned by the authors in the conclusion, is that a lot of human effort is required in making the dataset, so generalizability could be an issue.
Supplementary Material: Yes, all parts Relation To Broader Scientific Literature: The related works section shows how Paprika fits into the broader literature. In particular, Paprika is focused on finetuning method for ICL RL generalization on multi-turn tasks. This is not a commonly used setting making it unique from other works. Essential References Not Discussed: Not that I know of Other Strengths And Weaknesses: Strengths: - Task setup: The task setup fills a hole in current RL agent work and is likely an important future area. It broadens the potential applications of agents to more real-world like scenarios rather than toy tasks. - Generalizability: The Leave-One-Out results are particularly impressive given how different the tasks are. - Efficiency: Showing that curriculum learning improves efficiency is also important given the high cost of trajectory generation. Weaknesses: - It is hard to measure different aspects of the method in part due to the exponential search space. For example, is there a way to measure how ‘correct’ the selected curriculum is? Maybe some qualitative examples to get an idea of the curriculum would be helpful. - This is noted by the authors and above but the main weakness is the amount of human expertise involved in dataset generation. For example, humans specifically chose the tasks to make up the training set. While the tasks are diverse, it is unclear what criteria should such a training set have so that it can generalize. It could be helpful to have a short discussion on the limits of generalizability of the tasks. Overall, this paper is a strong contribution to the overall research community. It addresses an important question and contains important empirical results that others can build on. While there are some weaknesses, the task itself is challenging and this presents a good first step towards tackling it. Other Comments Or Suggestions: N/A Questions For Authors: 1. 
How were the particular tasks chosen and which tasks were rejected? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for the support and thoughtful review. We want to address their feedback below.

> There is not really an ablation (on the different parts of the method)...

We have conducted additional experiments on different parts of Paprika, we list them below:
1. **Ablation on training data generation**: Since our training data is generated using high temperature (1.5) and min-p sampling with parameter 0.3, we have run an ablation to show the importance of both. On twenty questions, we see that either using a lower temperature or a lower min-p sampling parameter leads to lower coverage on the training set, see [this Figure for varying Min-p at a fixed temperature of 1.5](https://ibb.co/mVsnTSsy), and [this Figure for varying temperature at a fixed Min-p parameter of 0.3](https://ibb.co/m547KZH7). The training data also affects downstream performance on held-out tasks, see [this Figure](https://ibb.co/fVYTxb1g).
2. **Ablation on fine-tuning stages**: We ran an ablation on the two different parts of our fine-tuning, namely the SFT and the RPO stages, please see [this Figure](https://ibb.co/nMtyKGXJ).

If the reviewer has any other ablation in mind, we are happy to add it to the paper.

> For example, is there a way to measure how 'correct'...

We thank the reviewer for this great question. In general, we found that it is hard to intuitively understand the curriculum or argue whether it's "correct". However, we note the following:
1. **Performance of the final fine-tuned model**: This is the final metric we care about. We have run our curriculum algorithm for 3 seeds and 3 rounds (with 250 tasks sampled at each round) and have an updated version of Figure 4 [here](https://ibb.co/gLn5wGPQ); it shows that the model trained on tasks selected by our curriculum outperforms that trained on uniformly selected tasks by 1.4% and 3.3% in average and pass@4 success rate, respectively, demonstrating its efficacy in selecting training tasks.
2.
**Distribution of selected tasks**: This is the other metric that is easy to understand. First, see that our defined metric for learning potential (with Llama-3.1-8B-Instruct as the policy) has an intuitive distribution over the gpt-4o-mini defined easy, medium and hard categories — easy tasks have a higher learning potential compared to medium and hard ones.

| Category | Average Learning Potential Metric |
|-|-|
| Easy | 0.22 |
| Medium | 0.16 |
| Hard | 0.09 |

Next, notice the distribution of easy, medium and hard tasks within 20 questions: we have 477 easy, 727 medium and 296 hard questions. **Uniform sampling** respects this distribution, and samples **more medium tasks as opposed to easy tasks**. However, our curriculum algorithm follows the learning potential metric, and **samples more easy, then medium and finally the least number of hard tasks** in a sampled batch. In a batch of 250 questions, we observe the following distribution:

| | Easy | Medium | Hard |
|-|-|-|-|
| Uniform | 78 | 120 | 52 |
| Our Curriculum | 117 | 79 | 54 |

This shows that our curriculum has an intuitively reasonable behavior. If the reviewer has any other questions/thoughts, we would be happy to address them.

> This is noted by the authors and above but the main weakness…

We agree with the reviewer's comment. To extend Paprika, one would ideally use another LLM to keep generating diverse tasks that require strategic exploration, and use the curriculum algorithm to adaptively choose which tasks to train on. We believe that this is an exciting future direction.

> It could be helpful to have a short discussion on the limits of generalizability of the tasks.

We thank the reviewer for this suggestion. We will edit the paper to discuss this limitation more clearly. We will emphasize that the suite of tasks is diverse, but it remains to be seen whether the resulting model will generalize to more temporally extended and real-world tasks.

> How were the particular tasks chosen and which tasks were rejected?
Some of the task groups we work with have been studied before, such as bandits [1, 2], 20 questions [3], guess my city [3] and wordle [3]. We expand upon these prior works and generate more training and test examples by prompting gpt-4o-mini with techniques from GenQA [4]. For the other tasks, we looked for tasks similar to the ones before, and GPT-4o-mini suggested battleship, minesweeper and mastermind. We came up with the cellular automata task as a toy example for iterative coding tasks with an interpreter. Finally, customer service and murder mystery are the open-ended tasks we could think of that have partial observability and fit nicely with the other tasks. **We wanted as many tasks as possible, so we did not reject any tasks.**

## References

[1] Can large language models explore in-context?
[2] EVOLvE: Evaluating and Optimizing LLMs For Exploration
[3] LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models
[4] GenQA: Generating Millions of Instructions from a Handful of Prompts

---

Rebuttal Comment 1.1: Comment: Thank you for the additional experiments, clarifications and detailed comments. I think this is a strong paper and keep my score at 5.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer WJpk, Thanks for your thoughtful feedback and suggestions, we believe they would greatly improve our paper! Please let us know if you have any followup questions or if we can further clarify something. Thanks, Authors
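The curriculum mechanism discussed in this thread (coefficient of variation as a learning potential metric, with a UCB-style bandit selecting which tasks to sample) can be sketched as follows. This is a hypothetical minimal implementation; the function names and the exploration constant are ours, not the paper's exact algorithm.

```python
import math

def learning_potential(scores):
    """Coefficient of variation (std / mean) of trajectory scores for one task:
    high variance relative to the mean suggests the task is learnable
    but not yet reliably solved."""
    mean = sum(scores) / len(scores)
    if mean == 0:
        return 0.0
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return math.sqrt(var) / mean

def ucb_select(means, counts, t, c=1.0):
    """Pick the task (arm) with the highest UCB over its estimated learning
    potential; unvisited arms are tried first."""
    best, best_val = None, -float("inf")
    for i, (m, n) in enumerate(zip(means, counts)):
        val = float("inf") if n == 0 else m + c * math.sqrt(math.log(t) / n)
        if val > best_val:
            best, best_val = i, val
    return best

# Toy run: task 0 has mixed outcomes (high CoV), task 1 is always solved (CoV 0).
print(learning_potential([1, 0, 1, 0]))          # 1.0
print(learning_potential([1, 1, 1, 1]))          # 0.0
print(ucb_select([0.9, 0.1], [5, 5], t=10))      # 0
```

This matches the intuition in the authors' table above: tasks that are sometimes but not always solved score highest, while saturated (or hopeless) tasks score near zero.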
Summary: This paper introduces PAPRIKA, an SFT + RL approach aimed at enhancing the general decision-making and exploration capabilities of LLMs through diverse synthetic interaction data from various domains. One contribution is that it introduces 10 interesting tasks requiring interacting with the environment and reasoning about the interaction histories, such as 20 questions, etc. To improve the LLM's exploration capabilities, it generates synthetic data via rejection sampling, obtaining either successful trajectories or pairs of contrastive samples to do SFT and RL. It shows the method generalizes to new domains, and also showcases some zero-shot transfer capabilities.

Claims And Evidence: Most of the claims the paper makes come from the empirical section, on the effectiveness of the proposed method:
- PAPRIKA improves LLM decision making abilities. This argument is supported by Figure 2, which shows that after applying PAPRIKA, individual task performance is improved. One thing that probably needs more analysis is whether this reflects the model's better capability in solving the specific task, say the direct mapping $x\to y$, rather than learning an effective method of solving for $y$ given $x$. It would be great if the authors could provide more detailed statistics on the interaction trajectories. Is the model luckily committing to the golden answer without effectively exploring the space first?
- PAPRIKA can teach LLMs generalizable strategies. I feel the LOO results are interesting and they do show the model's better generalization performance. One thing I am concerned about is whether the "unseen" tasks are still within the same domain somehow. In these environments, are they mostly about the action being composing different questions, with a yes/no answer? It would be great if the authors could comment more on the difficulty and similarities of these tasks. I noticed Fig 3 only shows 5 domains; how about the other 5? I did not find them in the Appendix either.
- Curriculum learning can improve the data efficiency of PAPRIKA. This is mostly supported by Fig 4 compared with uniform sampling, and it shows the curriculum is indeed helping, compared to random sampling.

Methods And Evaluation Criteria:
- The benchmark datasets are pretty nice, and they capture more complex domains compared to multi-armed bandits. However, as I pointed out above, it would be great to have a direct comparison on these tasks: how fundamentally is each task different from the others, or is it mostly semantically different observation/action spaces?
- The metric used in the paper is mostly "best-of-4"; why do we need to report this best-of-$n$ performance? Can we report more metrics for better understanding? Say, if we only sample once, and best-of-$n$ for multiple different $n$'s.
- As I pointed out before, the LOO experiments in Figure 3 are missing some domains?
- The whole paper only experiments on the same model, Llama 8B; it would be great to show the performance of PAPRIKA on multiple models, preferably of different sizes and from different organizations, which might show the effectiveness of the proposed method.
- Can we list some example interaction trajectories in the Appendix for all domains? It would be great to get a more detailed understanding of the complexity of the tasks.

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses: See Methods And Evaluation Criteria

Supplementary Material: Roughly looked at the prompts and plots.

Relation To Broader Scientific Literature: Related to the broader RL + LLM community, and probably agents as well.

Essential References Not Discussed: This would be relevant as well: LLMs Are In-Context Bandit Reinforcement Learners [https://arxiv.org/abs/2410.05362]

Other Strengths And Weaknesses: Strengths: The paper is a very nice read, and it works on a very important problem, basically how to teach LLMs strategic exploration and decision-making capabilities.
The proposed method is intuitive, and it shows strong generalization performance on unseen tasks.

Weaknesses:
- Experiments are limited to one specific model; it might be good to include results from more diverse models.
- The evaluation metric and some missing results, as mentioned above.
- More detailed analysis of the interaction trajectories is needed to better understand the improvement in exploration capabilities.

Other Comments Or Suggestions: See above.

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback, and hope to address their concerns below: > One thing probably need to analyze more is… We note that all our evaluations are conducted on held-out tasks within each group. These tasks require strategic exploration, as they are partially observable (POMDPs) and require agents to interact with the environment to gather necessary information before solving them. For instance, in Mastermind, memorizing training trajectories isn't enough --- agents must learn to make informed guesses based on prior observations. Therefore, success on held-out tasks reflects genuine improvement, not noise. Since there is no direct mapping $x \rightarrow y$ without environment interactions, higher success rates (Figure 2) and fewer turns taken (Figure 6) indicate the model is learning to explore effectively and solve tasks more strategically. > PAPRIKA can teach LLMs generalizable strategies… In our work, we use 10 different task groups (for example, 20 questions, mastermind etc.). Each task group consists of different tasks (for example, guessing ‘Apple’ and ‘Mango’ in 20 questions). Each task group also employs 2 distinct training and test subsets. For all evaluations, we use the test split, and in that sense, these tasks are **unseen**. Moreover, for the LOO experiments, we test on **completely unseen task groups**. In this way, Paprika itself tests for generalization within the same task group, and Paprika (LOO) tests for generalization to tasks that are from a different domain. Of course, the tasks have to share some “similarity” for there to be any transfer at all but we don’t think such abstract similarity makes the tasks the same domain. > In these environments, are them mostly about the action… This is an excellent question. 
**The action and observation spaces of our task groups are often very different from one another.** For example, in Mastermind, the agent needs to guess a 4-digit code, whereas in 20 questions it needs to ask a yes/no question about the secret topic. We discussed this in Appendix A; here we include a [summary](https://ibb.co/M5CQw5JQ). There is no obvious way to measure the difficulty of a task beyond reporting the model’s performance on each of them before fine-tuning, see Figures 2 and 5 (although this would be a great research question). Similarly, it is hard to say definitively whether one task is semantically similar to another. > I noticed Fig 3 only shows 5 domains… We have performed LOO experiments in all 10 domains. **Paprika (LOO) outperforms the instruct model in 9/10 domains**, showing remarkable generalization. 1. [This Figure](https://ibb.co/xtnt563Z) shows the performance of Paprika (LOO) with pass@4 success rate (with additional changes made to Bandits according to reviewer WvQu’s suggestion) 2. [This Figure](https://ibb.co/CfWZ65x) shows the average success rate. > The metric used in the paper is mostly "best-of-4", why do we need to report this best-of-n performance? We used best-of-4 to account for stochasticity in environment dynamics. However, we agree that one-shot performance is also important. We have collected the following: 1. [Average success rate](https://ibb.co/0yhHNM2p) 2. [Pass@k for k = 1, 2, 3, 4, at temperature 0.7](https://ibb.co/Mk9pFtXp) > The whole paper is only experimenting on the same model… This is a good point. We have evaluated two different base models with comparable parameter counts and see that they perform worse compared to Llama-3.1-8B-Instruct: [Figure](https://ibb.co/xtsyS7TG). Additionally, we have run Paprika fine-tuning on the recent Gemma-3-12B-IT, which is larger with 12B parameters. Paprika generally improves performance, and the improvement is often larger compared to that on Llama-3.1-8B-Instruct. 
Additionally, **Paprika fine-tuned Gemma-3-12B-IT reaches comparable or better performance to GPT-4o-mini on 7 out of 10 task groups**. Below are the most notable results (pass@4 success rate, temp 0.7, min-p 0.3):

| | 20 questions | Guess my city | Wordle | Battleship |
|-|-|-|-|-|
| Gemma-3-12B-IT | 0.60 | 0.75 | 0.33 | 0.40 |
| + Paprika | 0.68 | 0.87 | 0.50 | 0.52 |
| GPT-4o-mini | 0.76 | 0.88 | 0.56 | 0.46 |

Full results: [Average](https://ibb.co/0yhHNM2p), [Pass@4](https://ibb.co/dsv8xG3q)

> More detailed analysis on the interaction trajectories...

We show a comparison between Llama-3.1-8B-Instruct and Paprika on 3 examples:
1. 20 questions, with the secret topic being ‘prime numbers’: [Figure](https://ibb.co/N6dtBqKs)
2. 20 questions, with the secret topic being ‘Orca’: [Figure](https://ibb.co/n8KzS18y)
3. Wordle, with the secret word being ‘toast’: [Figure](https://ibb.co/G4P9gpZg)

Paprika performs better information gathering and takes higher-quality actions compared to the instruct model. We will add these and more example trajectories in the next revision of the paper.

> This would be relevant as well...

Thank you for the reference! We will cite and discuss this.
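As a side note for readers, pass@k numbers like those reported above can be estimated from $n$ sampled attempts per task with the standard unbiased combinatorial estimator; here is a minimal Python sketch (the function name is ours and this is not the authors' evaluation code):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n sampled attempts, c of which succeeded."""
    if n - c < k:
        # Every size-k subset of the n attempts contains at least one success.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 4 attempts and 1 success, this gives pass@1 = 0.25 and pass@4 = 1.0.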
Summary: The paper proposes PAPRIKA, a method designed to enable large language models (LLMs) to acquire generalizable sequential decision-making capabilities via fine-tuning on synthetic interaction data. The gem of PAPRIKA lies in the use of a scalable online curriculum learning method (Sec. 3.4), where the performance variance of the current policy over the sampled tasks serves as the metric reflecting their difficulty, and the methodology is instantiated by a batched UCB algorithm. Experimental results demonstrate the effectiveness of PAPRIKA in multiple multi-turn decision-making scenarios, especially in zero-shot generalization to unseen tasks with the leave-one-out experimental design. Claims And Evidence: The three research questions raised at the beginning of Section 4 are addressed adequately in Section 4 and Figures 2-4. Methods And Evaluation Criteria: The methods and the evaluation criteria are sound and make sense. Theoretical Claims: The primary focus of this paper is empirical efficacy. Experimental Designs Or Analyses: I like the design of the leave-one-out experiments. However, in Figure 3, it is shown that the performance of PAPRIKA (LOO) on Mastermind is quite inferior to that of PAPRIKA. A straightforward explanation could be that the Mastermind task is more difficult than other tasks. But I suggest the authors derive a deeper analysis in this regard with the learning potential defined in Equation (4). Supplementary Material: I've reviewed the supplementary details included in the appendix. Relation To Broader Scientific Literature: This paper is related to language agents and reinforcement learning. The main contribution from my perspective is to use the performance variance of the current policy over the sampled tasks as the metric reflecting their difficulty. Sketching the task difficulty with such a metric gives rise to the integration of curriculum learning algorithms. 
Essential References Not Discussed: In Section 2, the authors stated "for most tasks, there is no known algorithm like UCB to generate good synthetic trajectories from". In fact, synthesizing trajectories for language agents has been studied in [1][2]. I suggest the authors discuss the relationship of PAPRIKA with these prior works. For instance, could they be combined to boost the performance even further? [1] BAGEL: Bootstrapping Agents by Guiding Exploration with Language. ICML 2024. [2] ReAct Meets ActRe: Autonomous Annotation of Agent Trajectories for Contrastive Self-Training. COLM 2024. Other Strengths And Weaknesses: Strengths: The writing is clear. The experimental design and results are strong. Weaknesses: - The term "Curious" in the title should be clarified more clearly in the introduction section. - In Line 6 of Algorithm 1, $\tau$ tasks and $C$ samples are sampled. Is this process time-consuming? When sweeping them over several different configurations, what is their effect on sketching the difficulty of different tasks, and eventually on the performance of the agents? Other Comments Or Suggestions: NA Questions For Authors: In Section 3.2, the authors mentioned that the top-p sampling contributes to the generation of diverse yet coherent trajectories. Have the authors conducted ablation studies on this? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review and suggestions. They will greatly improve our paper! > But I suggest the authors to derive... Indeed, mastermind is the hardest task, as demonstrated by the untrained model only achieving ~4% pass@4 success rate on it, the lowest among all 10 task groups (Figure 2). It is conceptually unclear if the learning potential and LOO generalization should be related, since generalization depends on how transferable the decision making ability from training task groups is to the test task group, rather than the learning potential of the training or test task groups (which is only important for choosing which tasks to train on next). Even a highly learnable test task may not see much generalization if the training tasks do not teach any strategies that transfer to it. We are happy to incorporate any analysis if the reviewer has any suggestions. > In Section 2, the authors stated "for most tasks... This is a good point and we thank the reviewer for the references which we will discuss in the final version of the paper. What we meant is that we don’t always know how to generate near optimal solutions. In fact, the methods the reviewer mentions are also ways to overcome the fact that we do not know how to solve most tasks like we know how to solve bandits. We believe what sets paprika apart is that existing works focus on training to solve specific domains like web navigation whereas we are interested in whether the agents can generalize to a wide range of tasks. The motivation is that training for specific domains will not scale to all possible tasks in the world. What we actually want is an agent that is capable of general decision making so it can solve new tasks efficiently. As the reviewer pointed out, these two perspectives are complementary. 
These methods could be used as subroutines in Paprika to help gather good experience, and Paprika could make task-specific training more sample efficient because the models are more capable of general decision making. > The term "Curious"... We thank the reviewer for their suggestion. We will update the paper in the next revision to clarify the terminology in our introduction. We want to also briefly discuss it here: the concept of curiosity has been used in many different machine learning contexts. A popular notion is **intrinsic motivation**, where the agent is driven by an exploration bonus that is not necessarily related to the task to be achieved [1, 2]. Many works build on this notion to handle problems with sparse reward or no reward at all [3, 4]. The curiosity in this work differs from intrinsic motivation in that we focus on gathering only the information required to solve a given task rather than all the knowable information. This is closer in spirit to the original exploration-exploitation trade-off in reinforcement learning [5]. The goal is to explore to the extent that the problem can be solved but not over-explore at the cost of efficiency, by training on a wide range of problems. This can be thought of as a form of **amortized exploration**. > In Line 6 of Algorithm 1, $\tau$ tasks and $C$ samples... Sampling trajectories can be expensive, but it is unavoidable. It is the same as gathering experience in other RL algorithms. Since the sampling is expensive, we have only tried one configuration, the largest one our hardware can support. Namely, we sample task $\tau$ uniformly at random from task group $k^*$, and fix $C = 20$. We fully agree that different configurations can affect the estimated difficulty of a task, since the empirical estimate of the learning potential (Appendix E) might become worse if $C$ is significantly reduced, and therefore affect the final agent’s performance. 
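To make the curriculum step concrete, a variance-style learning potential together with a UCB-style pick over task groups can be sketched as follows. This is an illustrative approximation, not the paper's exact estimator (which is in its Appendix E), and all names and the input format are our own:

```python
import math
import statistics

def empirical_learning_potential(success_rates):
    """Variance-style learning potential: mean of p*(1-p) over sampled tasks.

    success_rates holds, for each sampled task, the fraction of the C rollouts
    that succeeded (hypothetical input format).
    """
    return statistics.mean(p * (1.0 - p) for p in success_rates)

def pick_task_group(stats, total_batches, c_ucb=1.0):
    """UCB-style selection; stats maps group -> (mean_potential, n_batches)."""
    def ucb(group):
        mean, n = stats[group]
        # Exploration bonus shrinks for groups that were sampled often.
        return mean + c_ucb * math.sqrt(math.log(max(total_batches, 2)) / max(n, 1))
    return max(stats, key=ucb)
```

With equal mean estimates, the less-sampled group wins the UCB tie thanks to its larger exploration bonus.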
> In Section 3.2, the authors mentioned that the top-p sampling... To clarify, we use min-p sampling instead of top-p, since it is known to work better at higher sampling temperatures [6]. **We have run an ablation to show the importance of min-p sampling**. On twenty questions, we see that either using a lower temperature or lower min-p parameter leads to lower coverage on the training set, as measured by pass@20 success rate, please see [this Figure for varying Min-p at a fixed temperature of 1.5](https://ibb.co/mVsnTSsy), and [this Figure for varying temperature at a fixed Min-p parameter of 0.3](https://ibb.co/m547KZH7). The training data generated by different min-p parameters and temperatures also affects downstream performance of the fine-tuned model on held-out tasks, see [this Figure](https://ibb.co/fVYTxb1g). # References [1] Curious model-building control systems. [2] Godel machines: Fully self-referential optimal universal self-improvers. [3] Curiosity-driven exploration by self-supervised prediction. [4] Exploration by random network distillation. [5] Reinforcement learning: An introduction [6] Turning up the heat: Min-p sampling for creative and coherent llm outputs --- Rebuttal Comment 1.1: Comment: Thank you for the response. I appreciate the additional clarification and ablation studies, and I'm raising my score to 4. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Udnq, We thank you for your astute observations and suggestions, they would greatly improve our work! We also thank you for the positive review of our paper. If you have any followup questions or if we can clarify something, please let us know and we would love to do so. Thanks, Authors
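For concreteness, the min-p filtering discussed in the ablation above can be sketched as follows; the dictionary input format, parameter defaults, and the placement of temperature scaling are illustrative assumptions, not the exact sampler used in the experiments:

```python
import math
import random

def min_p_sample(logprobs, min_p=0.3, temperature=1.5):
    """Min-p sampling sketch: after temperature scaling, keep only tokens whose
    probability is at least min_p times the highest token probability, then
    renormalize and sample. logprobs maps token -> log-probability.
    """
    # Temperature-scaled probabilities, renormalized.
    probs = {t: math.exp(lp / temperature) for t, lp in logprobs.items()}
    z = sum(probs.values())
    probs = {t: p / z for t, p in probs.items()}
    # Min-p filter relative to the most likely token.
    threshold = min_p * max(probs.values())
    kept = {t: p for t, p in probs.items() if p >= threshold}
    z = sum(kept.values())
    tokens, weights = zip(*((t, p / z) for t, p in kept.items()))
    return random.choices(tokens, weights=weights, k=1)[0]
```

At high temperatures the filter prunes the long tail of near-uniform tokens, which is why min-p tends to stay coherent where top-p does not.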
A Sub-Problem Quantum Alternating Operator Ansatz for Correlation Clustering
Accept (poster)
Summary: This paper proposes QAOA for correlation clustering by introducing a Sub-Problem QAOA (SQAOA). The approach is motivated by nucleus sampling and by splitting correlation clustering into dependent sub-problems. Although QAOA for correlation clustering has been studied in the literature, Weggemans et al. (2022) is restricted to 4-level qudits, such that only solutions involving at most 4 clusters can be considered. The paper guarantees that there exist parameters such that for depth $p \rightarrow \infty$ an optimal solution is obtained with certainty. The experiments are conducted on complete and Erdős-Rényi graphs with 10 nodes. Claims And Evidence: Yes Methods And Evaluation Criteria: Among the correlation clustering formulations, the paper focuses on unweighted maximum agreement correlation clustering on general graphs, which is APX-hard but admits a constant-ratio approximation. Compared with QAOA, SQAOA uses nucleus sampling for computing the cost function. The set of states called the nucleus is selected by probability mass with threshold $t$, and $t=1$ recovers standard sampling. The $n-1$ sub-problems are each solved with QAOA. Once all sub-problems are processed, the expected costs can be computed from the measured probability distributions. Theoretical Claims: The proof is based on (Morales, 2020) and (D’Alessandro, 2021). It seems that it is not very hard to guarantee optimality for $p \rightarrow \infty$. Experimental Designs Or Analyses: The comparison with the multi-level QAOA formulation of Weggemans et al. (2022) was performed in a simple setting: complete and Erdős-Rényi graphs where the probability of an edge being present is 0.5. SQAOA outperforms existing QAOA approaches in terms of approximation ratios and runtime for $p=1$ up to 10 nodes. Supplementary Material: Yes, I read the additional proofs. Relation To Broader Scientific Literature: The proposed method improves over the QAOA formulation of Weggemans et al. (2022). 
SQAOA is specific to correlation clustering, but it would be great to see the possibility of other applications. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper was very clear and easy to read in its details. The introduction of the Quantum Alternating Operator Ansatz is elegantly described, even for non-specialists in quantum information. Other Comments Or Suggestions: N/A Questions For Authors: If we consider the minimum disagreement formulation, will we need different QAOA techniques from those considered for Max-Agree? Can experiments be run with larger depth $p$? Do we expect the performance to become better? Is it prohibited due to computational resources? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback. There seems to be consensus that our experimental and theoretical claims are correct and that the proposed approach surpasses previous QAOA methods for correlation clustering. Here, we gladly address remaining questions and concerns of Reviewer yi8D: **Applications beyond Correlation Clustering** The reviewer is concerned that we apply our approach only to correlation clustering. We agree that our article focuses on correlation clustering and have stated clearly in the title and abstract that it is about a generalization of QAOA for correlation clustering. Given the wide range of applications of correlation clustering and numerous papers, especially in recent years, focusing on this problem specifically, we think that this problem is interesting enough to be considered on its own. Furthermore, the constraints have made it particularly hard to apply QAOA to correlation clustering in the past, allowing us to demonstrate the effectiveness of our proposed generalization. Beyond correlation clustering, candidates for our sub-problem approach are ones in which elements are assigned one of multiple labels. E.g., one could consider the Maximum $k$-Colorable Subgraph Problem with sub-problems coloring so-far-unconsidered parts of the graph by a fixed number of colors smaller than $k$. **Comparison with Weggemans et al.** The reviewer is concerned that we compare our work only with that of Weggemans et al. 2022. We emphasize that the work of Weggemans et al. is the only work that applies QAOA to the correlation clustering problem. In an article from 2020, different QAOA formulations are analyzed; the multi-level formulation is found to perform best and is further evaluated in an article from 2022. We have chosen to compare our approach with this best-performing method because it is the state of the art. 
**Minimum Disagreement Formulation** The approach is easily adapted to the minimum disagreement formulation (and to other linear objective functions with the same set of constraints) by modifying the phase-separation operator. Since the objective value of the minimum disagreement formulation differs from that of the maximum agreement formulation only by an additive constant, one can even use the same phase-separation operator in this particular case and add the constant in classical post-processing. **Performance with Larger Ansatz Depth $\mathbf{p}$** The reviewer is right: performance does increase with larger $p$, but using it is prohibited by computational resources. To make this explicit, we suggest adding the following paragraph after Line 154. "The approximation ratios are expected to increase with larger ansatz depth $p$ and are guaranteed to improve with optimal parameters. However, the depth is limited, firstly, because the number of applied operators increases, resulting in problems with computational resources for simulation on classical computers and in the introduction of noise for execution on quantum computers, and, secondly, to a lesser extent, because the number of learnable parameters increases with $p$ and gradients cannot be computed easily for the quantum circuit."
Summary: The paper constructs a new variant of QAOA (rather) specifically for the correlation clustering problem and shows improved performance over QAOA. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: At a quick look, the proof seems fine. The proven property also is simple enough. Experimental Designs Or Analyses: Experimental analysis is done correctly, although very narrow in what is considered. This refers both to focusing only on a very specific problem domain and to only considering the one chosen approach. Crucially, the empirical evaluation lacks a proper ablation study to find out what parts of the approach really contribute to the results. Supplementary Material: Provides little additional information. Relation To Broader Scientific Literature: Generalizing QAOA in the way it is described here appears powerful. However, focusing this strongly on correlation clustering severely limits any potential impact. Essential References Not Discussed: none Other Strengths And Weaknesses: It is clear that the chosen approach is quite fitted to the problem of correlation clustering. However, its potential for other or similarly structured problems should be discussed, if not even evaluated. Other Comments Or Suggestions: The introduction is way too long. Basically, the first paragraph can be cut directly. $\mathcal{S}$ is weirdly introduced in line 163. "there is no interference" is repeated twice (lines 262--265). Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback. There seems to be consensus that our experimental and theoretical claims are correct and that the proposed approach surpasses previous QAOA methods for correlation clustering. Here, we gladly address remaining questions and concerns of Reviewer L5UV: **Applications beyond Correlation Clustering** The reviewer is concerned that we apply our approach only to correlation clustering. We agree that our article focuses on correlation clustering and have stated clearly in the title and abstract that it is about a generalization of QAOA for correlation clustering. Given the wide range of applications of correlation clustering and numerous papers, especially in recent years, focusing on this problem specifically, we think that this problem is interesting enough to be considered on its own. Furthermore, the constraints have made it particularly hard to apply QAOA to correlation clustering in the past, allowing us to demonstrate the effectiveness of our proposed generalization. Beyond correlation clustering, candidates for our sub-problem approach are ones in which elements are assigned one of multiple labels. E.g., one could consider the Maximum $k$-Colorable Subgraph Problem with sub-problems coloring so-far-unconsidered parts of the graph by a fixed number of colors smaller than $k$. **Comparison with Weggemans et al.** The reviewer is concerned that we compare our work only with that of Weggemans et al. 2022. We emphasize that the work of Weggemans et al. is the only work that applies QAOA to the correlation clustering problem. In an article from 2020, different QAOA formulations are analyzed; the multi-level formulation is found to perform best and is further evaluated in an article from 2022. We have chosen to compare our approach with this best-performing method because it is the state of the art. **Ablation Study** The reviewer notes that our empirical evaluation lacks an ablation study. 
The presented approach consists of two main parts, nucleus sampling and the splitting of the given problem into sub-problems. We evaluate both independently and in combination, in Table 1 and Appendix C, showing that nucleus sampling improves the approximation ratio while splitting into sub-problems reduces the runtime. We agree with the reviewer that especially the splitting could be analyzed further. We have in fact done this (not shown in the article) for a preliminary version of SQAOA that also parametrizes the transition operators. We have found that this parametrization does not improve the approximation ratio, so we have decided not to report this negative result. Of course, we are prepared to do so in the supplement, should the reviewer recommend it. **Additional, Minor Corrections** We understand the reviewer's comment that the introduction is rather long. To strike a balance between conciseness and accessibility, we propose to remove the sentence "For example, Shor’s factoring algorithm (Shor, 1997) provides an exponential speed-up over the best-known classical factoring algorithm." from the first paragraph. We are prepared to shorten the introduction further if there is consensus among the reviewers that we should do so. To introduce $\mathcal{S}$ more clearly, we propose to replace Line 161 by "$|s\rangle$ is an initial state in the feasible space $\mathcal{S}$, which is given by the set of all superpositions of classically feasible states, i.e. by". We have removed "there is no interference between those states, and" in Line 264.
Summary: The paper presents a new quantum optimization approach called the Sub-Problem Quantum Alternating Operator Ansatz (SQAOA) aimed at solving correlation clustering problems. The approach modifies the Quantum Alternating Operator Ansatz (QAOA) in two key ways: 1) it uses nucleus sampling to compute the cost function, and 2) it divides the problem into sub-problems, solving each one individually with QAOA. The authors argue that these changes lead to a method that is more suitable for the specific challenges posed by correlation clustering. Theoretical guarantees are provided, showing that SQAOA can obtain optimal solutions as the depth of the quantum circuit approaches infinity ($p \rightarrow \infty$). Experimental results demonstrate that SQAOA outperforms standard QAOA in terms of approximation ratios and runtimes on various graphs, such as complete and Erdős-Rényi graphs. Claims And Evidence: The main contributions of this paper are clear and well supported. However, the claim that QAOA (quantum alternating operator ansatz) is considered a promising candidate for achieving quantum supremacy is not convincing to me. There are obviously other issues, such as the number of shots needed to achieve the desired accuracy, gradient updates as $p \rightarrow \infty$, etc. Without discussing all these aspects, it is not convincing to claim that QAOA is a promising candidate for quantum supremacy. Methods And Evaluation Criteria: First of all, there is a mix-up between the quantum alternating operator ansatz and the quantum approximate optimization algorithm, and the abbreviation QAOA requires a more specific assignment to one of the above (or the multi-level QAOA in Weggemans et al., 2022). Besides, there are a few other issues. 1. The correlation clustering (CC) problem requires more explanation. As the only testbed for the proposed algorithm, we need more information, including how the authors convert the classical cost function to a Hamiltonian, etc. 
The authors also have to convince the readers why we should focus on this specific type of combinatorial optimization problem instead of targeting general combinatorial optimization. The whole paper depends heavily on a previous one (Weggemans et al., 2022). I've quickly gone through this reference paper, and I believe the reason why Weggemans et al. (2022) chose to solve the CC problem is that they wanted to demonstrate the design of qudits instead of qubits on neutral-atom quantum computers. Qudits are a perfect fit for clustering with multiple labels. However, in this paper the authors are trying to translate the algorithm back to qubits, which does not seem well motivated. 2. The definitions of Agreements and Probability in Figure 1 are vague and need further clarification. The introduction of nucleus sampling also introduces a hyperparameter $t$; deciding the value of $t$ seems non-trivial and will be hard at larger problem scales. Theoretical Claims: The theoretical claims are legitimate, but the main problem is that they are not surprising to me. The authors can refer to [1] for the details of the overparameterization theorem, and I think as long as the alternating layers do not commute with each other and generate a large enough Lie algebra dimension, they are guaranteed to reach the desired state. [1] Larocca, Martin, et al. "Theory of overparametrization in quantum neural networks." Nature Computational Science 3.6 (2023): 542-551. Experimental Designs Or Analyses: With only one baseline method, and with most of the experiments conducted on graphs with fewer than 6 nodes, the experiments can only illustrate that the proposed method surpasses that of Weggemans et al. (2022). Supplementary Material: I did not check the code in detail. Relation To Broader Scientific Literature: The authors translate previous literature on correlation clustering with qudits to qubits and propose a sampling method to accelerate convergence. 
Essential References Not Discussed: No Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback. There seems to be consensus that our experimental and theoretical claims are correct and that the proposed approach surpasses previous QAOA methods for correlation clustering. Here, we gladly address remaining questions and concerns of Reviewer JUTn: **Quantum Supremacy** We agree that the presented approach does not solve existing problems of QAOA for achieving quantum supremacy, such as the number of shots or the gradient update for increasing ansatz depths, and we do not intend to make this claim. We also see that the introductory statement "QAOA is considered a promising candidate for achieving quantum supremacy for the following reasons:" does not account for the problems with QAOA that are yet to be solved, and propose to replace it by "QAOA is considered a promising variational quantum algorithm for the following reasons:". **QAOA** The Quantum Alternating Operator Ansatz is a generalization that was established after the Quantum Approximate Optimization Algorithm. The main difference between the two concerns the problem constraints. While the Quantum Approximate Optimization Algorithm incorporates them through penalty terms in the objective function, the Quantum Alternating Operator Ansatz incorporates them directly in the mixing operators and the initial state. The acronym "QAOA" is used for both approaches in their original papers and in further literature. Often, no clear distinction is made between the two, and Quantum Alternating Operator Ansatz is used as an umbrella term. We use QAOA as an acronym for the Quantum Alternating Operator Ansatz and explicitly distinguish the Quantum Approximate Optimization Algorithm when necessary. **More Detailed Explanation of Correlation Clustering** Due to the character limit, we kindly refer the reviewer to the answer given in the rebuttal to reviewer fb8D. 
**Translation of Correlation Clustering to SQAOA** Due to the character limit, we kindly refer the reviewer to the answer given in the rebuttal to reviewer fb8D. **Applications beyond Correlation Clustering** Due to the character limit, we kindly refer the reviewer to the answer given in the rebuttal to reviewer L5UV. **Modelling with Qubits instead of Qudits** The reviewer states that qudits are a perfect fit for clustering multiple labels (and thus for correlation clustering) and sees no motivation for our formulation of the problem in terms of qubits. We agree that qudits are a natural choice for correlation clustering. At the same time, our SQAOA formulation based on qubits outperforms the qudit formulation in terms of approximation ratios and runtimes, while requiring less advanced and less specialized hardware. We consider this contribution relevant also because it is not obvious. **Figure 1** To clarify the terms "agreement" and "probability" in Figure 1, we suggest replacing the caption of this figure by the following one: "Depicted are two diagrams showing the probability of measuring basis states when applying the multi-level QAOA formulation of Weggemans et al. (2022) with $p = 1$ to a correlation clustering problem instance on a complete graph with 4 nodes. Shown in addition are the agreements of these basis states, i.e. the value of the objective function in (6). The probabilities of the diagram at the top are obtained directly from the QAOA results. The probabilities of the diagram at the bottom are obtained by nucleus sampling with a threshold of $t = 0.5$." **Hyperparameter $t$** The nucleus size $t$ used for nucleus sampling is indeed a hyperparameter whose choice could be further evaluated. There is a trade-off between discarding low-probability states (for small $t$) and incurring high sampling noise (for large $t$). Due to sampling noise, we expect the optimal parameter to decrease with an increasing number of shots. 
However, even our ad hoc choice of $t$, without a systematic study, results in the greatly improved approximation ratios that we show in Table 1 and Appendix C. **Convergence to an Optimal Solution** The reviewer states that, while our theoretical claim of reaching an optimal solution when the ansatz depth tends to infinity is legitimate, it is not surprising. All in all, we agree with this statement, especially taking into account the newly introduced term separating phases of individual qubits. At the same time, the convergence guarantee we establish is necessary, we think, to put our approach on the same theoretical foundation as the Quantum Alternating Operator Ansatz. **Comparison with Weggemans et al.** Due to the character limit, we kindly refer the reviewer to the answer given in the rebuttal for reviewer L5UV.
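For illustration, the nucleus selection described above (keep the highest-probability basis states until their cumulative mass reaches $t$, with $t = 1$ recovering the full distribution) can be sketched as follows; the dictionary format and function names are our own, not the paper's implementation:

```python
def nucleus(probabilities, t=0.5):
    """Keep the highest-probability basis states whose cumulative mass reaches
    t, then renormalize. probabilities maps basis state -> measured probability.
    """
    ordered = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)
    kept, mass = [], 0.0
    for state, p in ordered:
        kept.append((state, p))
        mass += p
        if mass >= t:
            break
    return {state: p / mass for state, p in kept}

def expected_cost(probabilities, cost, t=0.5):
    """Expected objective value under the nucleus distribution."""
    return sum(p * cost(state) for state, p in nucleus(probabilities, t).items())
```

With a small $t$, low-probability (typically low-quality) states are discarded before the expectation is taken, which is the mechanism behind the improved approximation ratios in Table 1.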
Summary: The paper explores the power of the Quantum Alternating Operator Ansatz algorithm. In particular, the focus is on the optimization problem called correlation clustering, and the paper provides a way to solve it by splitting the problem into sub-problems, each addressed with the Quantum Alternating Operator Ansatz. They show that as the ansatz depth tends to infinity the optimal solution is found, and they provide experiments on specific topologies that show good performance. ## Update after rebuttal Thank you to the authors for the answers to my questions. I think with the changes proposed the manuscript will improve. Claims And Evidence: The claim is that the Quantum Alternating Operator Ansatz algorithm can be adapted to find the optimal solution to correlation clustering. This is shown analytically and empirically. Methods And Evaluation Criteria: The analysis seems to be correct. The empirical evaluation is restricted to some specific topologies, but they are general enough for a first study. Theoretical Claims: I did not check the proofs in detail, but the results and claims are intuitively correct. Experimental Designs Or Analyses: I looked at the empirical results and they seem fine. Supplementary Material: I went over the supplementary material and it seems fine. Relation To Broader Scientific Literature: QAOA is usually the acronym for the Quantum Approximate Optimization Algorithm. I think this has to be mentioned in the introduction. Essential References Not Discussed: I did not find anything missing, but I would like to see a discussion on the difference between the Quantum Approximate Optimization Algorithm and the Quantum Alternating Operator Ansatz, and how one generalizes the other. Other Strengths And Weaknesses: The proposed approach for the Quantum Alternating Operator Ansatz can possibly be used for other problems. However, for now the application is only to correlation clustering, making it limited. Other Comments Or Suggestions: Say what $p$ is in the abstract. 
I feel that the correlation clustering problem statement was a bit underexplained (lines 165-178), and I did not quite understand how the original problem is transformed into the SQAOA, or how the max-cut problem is used to construct that definition. I think a bit more detail on the construction of equation 11, and on the use of the gate of equation 9, could be useful. Questions For Authors: What is the difference between the Quantum Alternating Operator Ansatz and the Quantum Approximate Optimization Algorithm? Why is the same acronym used for both? How specific is your subproblem decomposition to the correlation clustering problem? Can it be applied to other problems? Which are good candidates? (You say "splitting a problem in sub-problems is a universal approach, and similar improvements might be possible on other problems suitable for QAOA.") Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback. There seems to be consensus that our experimental and theoretical claims are correct and that the proposed approach surpasses previous QAOA methods for correlation clustering. Here, we gladly address remaining questions and concerns of Reviewer fb8D: **QAOA** The Quantum Alternating Operator Ansatz is a generalization of the Quantum Approximate Optimization Algorithm that was established after it. The acronym "QAOA" is used for both approaches in their original papers and in further literature. Differences between the two approaches concern the incorporation of constraints and the use of problem-dependent mixing operators and initial states: On the one hand, the Quantum Approximate Optimization Algorithm first transforms the given optimization problem into an equivalent one in which the constraints are replaced by penalty terms in the objective function. Then, fixed mixing and problem-dependent phase-separation operators are applied alternatingly to the initial state that is an equal superposition of all possible states. On the other hand, the Quantum Alternating Operator Ansatz works directly on the original problem by incorporating the constraints in problem-dependent mixing operators and by choosing an initial state that is a superposition of feasible states. **Applications beyond Correlation Clustering** Due to the character limit, we kindly refer the reviewer to the answer given in the rebuttal to reviewer L5UV. **$\mathbf{p}$ in the Abstract** We agree that our use of the symbol $p$ in the abstract needs clarification and suggest to replace "if $p \rightarrow \infty$" with "if the depth of the ansatz tends to infinity". **More Detailed Explanation of Correlation Clustering** To further illustrate the ILP formulation of the correlation clustering problem, we propose to insert the following paragraph after Line 178 and to also include a figure with a simple example. 
"In this formulation, the variable assignment $x_{u,1}=1$ and $x_{v,2}=1$ for nodes $u,v$ indicates that node $u$ is in Cluster $1$ and that node $v$ is in Cluster $2$. Thus, the nodes are in different clusters. A value of $1$ is contributed to the objective value if and only if $c_{uv}=-1$." **Translation of Correlation Clustering to SQAOA** To comment further on the construction of the phase-separation operator, we propose to insert the following paragraph after Line 138: "In order to apply QAOA to a specific problem, one needs to define and implement the operators and the initial state. The main challenge here lies in the construction of the initial state and the mixing operator. Given a binary ILP formulation of the problem, as in (6), the phase-separation operator can be constructed easily by replacing a variable $x$ in the cost function $C(x)$ by the term $\frac{(1-Z)}{2}$ in the phase-separation Hamiltonian $H_C$. This is due to the fact that if $x = 0$ then $\frac{(1-Z)}{2} |x\rangle = 0 |x\rangle$, and if $x = 1$ then $\frac{(1-Z)}{2} |x\rangle = 1 |x\rangle$, and thus, (2) is fulfilled for the Hamiltonian constructed in this way. Implementing the corresponding unitary operator then only requires the application of rotational Pauli-Z gates to individual qubits." To explain in more detail the specific operators used in our SQAOA formulation of correlation clustering, we suggest to insert the following paragraph after Line 248: "The transition operator $U_{T_i,x}$ uses Hadamard gates to construct an equal superposition of all feasible states of the sub-problem. The mixing operator $U_{M_i,x}$ enables transitions between the feasible states of a sub-problem by flipping qubits, i.e. by changing whether the corresponding nodes remain in the current cluster or are assigned to a new cluster that is further split up in the next sub-problem.
The phase-separation operator $U_{C_i,x}$ incorporates in the first sum of the exponent the cost function as described in Section 3 but drops constant terms. Additionally, the Hamiltonian given by the second sum of the exponent allows a cost-independent separation of phases based on individual vertices." **Sub-Problem Decomposition** The decomposition we define is specific to the correlation clustering problem. Other decompositions are conceivable and might yield further improvements. In general, the main challenge in applying the Quantum Alternating Operator Ansatz lies in the definition and implementation of mixing operators and initial states appropriate for the specific optimization problem. For applying SQAOA, in addition, the sub-problems need to be chosen carefully. Beyond correlation clustering, candidates for our sub-problem approach are problems in which elements are assigned one of multiple labels. E.g., one could consider the Maximum $k$-Colorable Subgraph Problem, with sub-problems coloring so-far-unconsidered parts of the graph by a fixed number of colors smaller than $k$.
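The substitution described in the proposed paragraph can be checked numerically. Below is a minimal sketch (our illustration, not code from the paper) verifying that $(1-Z)/2$ acts as a number operator on single-qubit basis states, and that a tensor product of such terms reproduces a toy cost term $x_u x_v$ in a phase Hamiltonian:

```python
import numpy as np

# Pauli-Z and the single-qubit operator (I - Z)/2, which maps |x> to x|x>.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)
n_op = (I2 - Z) / 2

ket0 = np.array([1.0, 0.0])  # |0>
ket1 = np.array([0.0, 1.0])  # |1>

assert np.allclose(n_op @ ket0, 0 * ket0)  # eigenvalue 0 on |0>
assert np.allclose(n_op @ ket1, 1 * ket1)  # eigenvalue 1 on |1>

# A two-qubit phase Hamiltonian built from the cost term x_u * x_v:
# each variable is replaced by (I - Z)/2 acting on its own qubit.
H = np.kron(n_op, n_op)
ket11 = np.kron(ket1, ket1)
assert np.allclose(H @ ket11, 1 * ket11)  # cost 1 exactly when both bits are 1
```

Since the resulting Hamiltonian is diagonal in the computational basis, the corresponding unitary $e^{-i\gamma H}$ indeed decomposes into single- and two-qubit Z rotations, as the rebuttal notes.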
Distributed Retraction-Free and Communication-Efficient Optimization on the Stiefel Manifold
Accept (poster)
Summary: The paper introduces EF-Landing, a novel distributed optimization algorithm for stochastic optimization on the Stiefel manifold. EF-Landing is retraction-free and communication-efficient, incorporating gradient compression and error feedback mechanisms. The authors establish sharp convergence guarantees and demonstrate that EF-Landing achieves the same asymptotic linear speedup as existing methods without communication compression. The paper also generalizes EF-Landing to block-wise Stiefel manifolds, enhancing its applicability to structured constraints. Extensive numerical experiments validate the theoretical findings. Claims And Evidence: The main claims of the paper are: 1. EF-Landing is the first retraction-free and communication-efficient algorithm for distributed stochastic optimization on the Stiefel manifold. 2. The algorithm ensures convergence while significantly reducing communication overhead. 3. The convergence rate of EF-Landing matches that of existing methods without communication compression. 4. The method generalizes to block-wise Stiefel manifolds, extending its practical applicability. The claims are well-supported by theoretical proofs and numerical experiments. The convergence guarantees are rigorously derived, and extensive experiments confirm that EF-Landing performs comparably to existing methods while reducing communication costs. The inclusion of error feedback ensures stability and accuracy despite compression. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem. The authors use standard theoretical tools from optimization on manifolds to establish the convergence guarantees and rates for EF-Landing. The experimental evaluation includes benchmark problems such as distributed online PCA and deep learning tasks. The comparison with existing methods, including QR retraction and vanilla Landing methods, provides a solid basis for evaluating EF-Landing's effectiveness. 
Theoretical Claims: The theoretical claims of the convergence analysis are well-supported by rigorous proofs. The convergence analysis begins by presenting the necessary conditions, then proceeds to establish the convergence guarantees and rates for EF-Landing. The authors present the standard assumptions for optimization on the Stiefel manifold, the supporting lemmas, and the main convergence theorem. In addition, the authors establish convergence rates in both the deterministic and stochastic scenarios. Experimental Designs Or Analyses: The experimental design is sound. The authors provide experiments on two groups of problems: distributed online PCA for the deterministic scenario, and deep learning using the VGG16 neural network architecture with orthogonal constraints applied to the convolutional layers for the stochastic scenario, comparing EF-Landing with other algorithms for optimization on the Stiefel manifold, including vanilla Landing and QR retraction. The experiments are well-structured, demonstrating the effectiveness of EF-Landing in terms of both convergence rate and communication efficiency. Supplementary Material: The supplementary material includes additional theoretical proofs and implementation details. The material is relevant and complements the main text by providing deeper insights into the algorithm's behavior and assumptions. Relation To Broader Scientific Literature: The paper is well-situated within the broader literature on optimization on manifolds, distributed optimization, and communication-efficient training. It builds on prior work in these areas and makes a novel contribution by integrating error feedback with retraction-free optimization. The references to previous work, such as the Landing method and communication-efficient distributed optimization techniques, are appropriate. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: • The paper provides sharp theoretical convergence guarantees for EF-Landing.
• The convergence rate of EF-Landing matches that of existing methods without communication compression. • The method generalizes to block-wise Stiefel manifolds, extending its practical applicability. Weaknesses: • Experimental evaluation could be expanded to include more diverse datasets and application scenarios. Other Comments Or Suggestions: • Expanding experiments to additional machine learning tasks could further strengthen the empirical validation. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful feedback and constructive comments. Below, we reiterate the novel contributions that the reviewer mentioned, provide other theoretical innovations for reference, and outline our plan for follow-up experiments. **1. Reiteration of Contributions.** - Sharp convergence analysis: Instead of analyzing convergence results case-by-case for different settings, we provided a general convergence result. It can reduce to various specific situations (deterministic or stochastic scenarios, with or without compression, with or without momentum) by choosing different auxiliary constants. When the deterministic setting without compression or momentum is chosen, our result exactly corresponds to the result in [Ablin et al. 2024]. - Block-wise generalization: Prior work for deep learning assumes fully vectorizable variables, failing to address block-wise constraints (e.g., orthogonality in neural network layers). Our work, however, directly tackles block-wise structures (Section 6), proving convergence under a unified step size (Theorem K.1). To our knowledge, this is the first analysis for block-wise orthogonal optimization, addressing practical deep learning architectures. - Extensive experiments: Prior work concerned with optimization on Stiefel manifolds mainly focuses on traditional problems like PCA. We reviewed the more general application of orthogonal constraints to deep learning, and conducted extensive experiments on diverse deep learning tasks. Strong results demonstrated the applicability and efficiency of our algorithm. **2. Other Theoretical Innovations.** Apart from the insights the reviewer mentioned, we also summarize more of our technical contributions below. We hope these align with the reviewer’s expectations. **(a) Reduced Assumptions.** - Prior analyses of the Landing method [Ablin et al.
2024] require restrictive assumptions: bounded local Riemannian gradients, explicit bounds on intermediate symmetric matrices, and unbiasedness conditions for specific Riemannian gradients. - **Our Work:** We eliminate these constraints, relying only on standard smoothness and mild gradient bounds (Assumptions 3.1–3.2). This broader generality enables more applications. **(b) Perturbation-Tolerant Analysis.** - Existing Landing convergence guarantees fail under gradient perturbations (e.g., compressed gradients). While trivial in Euclidean settings, perturbations on the Stiefel manifold introduce non-negligible geometric distortion. - **Our Work:** We rebuild the analysis from first principles, introducing a **perturbation-compatible merit function** (Lemma 5.6) that rigorously accounts for compression errors. This is the first provably robust Landing variant for compressed gradients (Theorem 5.7). **(c) Gradient Clipping for Safety Guarantees.** - Compressed gradients only remain within the safety region in expectation, risking constraint violations. - **Our Work:** A novel clipping strategy enforces deterministic safety without introducing bias. **3. Additional experiments.** All newly added experiments can be found in the **Additional Result Sheet (ARS)** https://anonymous.4open.science/r/EF-Landing-B6E4. Figures 5 & 6 add a comparison of the EF-Landing Algorithm and the Penalty Method using VGG16 and ResNet18 on CIFAR-10, which further shows the efficiency of our algorithm. Moreover, we plan to conduct more experiments, taking into consideration more diverse datasets, more application scenarios, and comparisons with more relevant methods. We again appreciate the reviewer for the valuable feedback and for recognizing the contributions of our work.
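The retraction-free Landing mechanism discussed in this thread (a relative-gradient step plus a penalty-gradient term with a constant, moderate $\lambda$) can be sketched in a minimal single-node form. This is our illustration, not the authors' EF-Landing implementation: the toy PCA objective, the step size, the iteration count, and $\lambda = 1$ are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
A = rng.standard_normal((n, n))
A = A @ A.T / n  # symmetric PSD matrix for a toy PCA objective

def skew(M):
    return (M - M.T) / 2

# f(X) = -trace(X^T A X) / 2, with Euclidean gradient -A X
X = np.linalg.qr(rng.standard_normal((n, p)))[0]  # start on the Stiefel manifold
f0 = np.trace(X.T @ A @ X)
lam, eta = 1.0, 0.02
for _ in range(3000):
    G = -A @ X
    rel = skew(G @ X.T) @ X          # relative-gradient component
    pen = X @ (X.T @ X - np.eye(p))  # gradient of N(X) = ||X^T X - I||^2 / 4
    X = X - eta * (rel + lam * pen)  # landing step: no retraction or projection

orth_err = np.linalg.norm(X.T @ X - np.eye(p))
```

The penalty component keeps the iterate near the manifold while the relative-gradient component decreases the objective, which is the decoupling the rebuttal contrasts with schedule-tuned penalty methods.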
Summary: This paper introduces EF-Landing, a retraction-free and communication-efficient algorithm for distributed stochastic optimization on the Stiefel manifold. Claims And Evidence: The paper's main claims regarding retraction-free optimization, communication efficiency, and error feedback improving convergence are generally supported by theoretical analysis and empirical results. Methods And Evaluation Criteria: The proposed method is designed for distributed systems and reduces communication via compression. However, it relies on centralized coordination, requiring communication between worker nodes and the master at every iteration, which may limit communication efficiency. A more decentralized approach or a method allowing multiple local iterations before synchronization could further improve efficiency. Theoretical Claims: The proofs seem fine. Experimental Designs Or Analyses: Within the current distributed setting, it would be beneficial to include a comparison with multiple local updates. Consider evaluating against methods that incorporate local updates, such as those in Zhang et al. (2024), NeurIPS, which study nonconvex federated learning on compact smooth submanifolds with heterogeneous data. Zhang, J., Hu, J., So, A.M.C. and Johansson, M., 2024. Nonconvex federated learning on compact smooth submanifolds with heterogeneous data. Advances in Neural Information Processing Systems, 37, pp.109817-109844. Supplementary Material: Have checked the proofs and additional results. Relation To Broader Scientific Literature: The paper's main contribution lies in merging communication compression with a retraction-free optimization method. The theoretical analysis primarily follows from error feedback techniques to control errors in the Euclidean gradient, which are then incorporated into existing retraction-free analyses. As a result, the work mainly applies existing technical tools rather than introducing new theoretical insights. 
Additionally, the considered distributed setting is relatively simple, and extending the approach to more complex decentralized or federated settings would enhance its impact. Essential References Not Discussed: N/A Other Strengths And Weaknesses: A key weakness is that the main theorems and lemmas largely follow existing results without significant technical difficulty, limiting the novelty of the theoretical contributions. From an algorithmic perspective, the current distributed setting is somewhat outdated, as modern approaches often consider decentralized communication or centralized frameworks with multiple local updates before aggregation. Other Comments Or Suggestions: **Equation (2):** Please validate the expression of the Riemannian gradient $\textrm{grad} f(X)$ by explicitly specifying the associated Riemannian metric used in the derivation. **Algorithm 1:** Is the gradient clipping with constant $L'$ necessary? If so, how should $L'$ be estimated in practice? Please clarify its role in ensuring convergence. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful feedback and constructive comments. Below, we address each point in detail. All newly added experiments can be found in the **Additional Result Sheet (ARS)** https://anonymous.4open.science/r/EF-Landing-B6E4 **1. Distributed learning** Decentralized Learning (DL), Federated Learning (FL), and Compressed Learning (CL) represent three **orthogonal** research directions in communication-efficient distributed learning. Specifically, DL focuses on *which neighbors to communicate with* (topology design), FL addresses *when to communicate* (synchronization frequency), and CL determines *what to communicate* (data/gradient compression). Given that these approaches optimize along fundamentally different axes, directly comparing their efficiency or asserting the superiority of one over another might not be appropriate. Each branch presents unique theoretical and practical challenges. Thus, rather than being considered "outdated," each methodology remains relevant depending on the specific problem constraints and system requirements. **2. Comparison with DL** While it is challenging to determine whether CL or DL is superior in general scenarios, we conduct experiments to compare them in specific settings, where we choose proper topology for DL and compression rate for CL to make the communication quantity per iteration equally matched. Figures 7 & 8 in ARS demonstrate that EF-Landing achieves slightly higher communication efficiency than decentralized manifold methods DRSGD and DRGTA [Chen et al. 2021]. **3. Comparison with FL** We appreciate the reviewer for bringing this valuable reference to our attention. We will include it in the related work. Below, we provide a comparison between the two approaches: - Communication paradigm: Zhang et al. reduce communication overhead through multiple local steps, whereas EF-Landing achieves this via gradient compression. 
Which approach is more efficient depends on the specific application. - Computational overhead: Zhang et al. rely on manifold projection, which can be computationally expensive, while EF-Landing employs retraction-free methods that involve only matrix products. - Addressing data heterogeneity: Zhang et al. introduce a correction step to mitigate data heterogeneity, whereas EF-Landing leverages error feedback for correction. - Convergence: Zhang et al. do not converge exactly to the stationary solution but only to a neighborhood around it, whereas EF-Landing achieves exact convergence. Furthermore, we perform additional experiments to compare EF-Landing with Zhang et al. Figures 9 & 10 in ARS show that their performances are roughly equally matched in terms of communication quantities. **4. Theoretical innovations.** We summarize our technical contributions below. **(a) Reduced Assumptions** - Prior analyses of the Landing method [Ablin et al. 2024] require restrictive assumptions: bounded local Riemannian gradients, bound on an intermediate symmetric matrix, and unbiasedness conditions for specific Riemannian gradients. - **Our Work:** We eliminate these constraints, relying only on standard smoothness and mild gradient bounds (Assumptions 3.1–3.2). This broader generality enables more applications. **(b) Perturbation-Tolerant Analysis** - Existing Landing convergence guarantees fail under gradient perturbations (e.g., compressed gradients). While trivial in Euclidean settings, perturbations on the Stiefel manifold introduce non-negligible geometric distortion. - **Our Work:** We rebuild the analysis from first principles, introducing a **perturbation-compatible merit function** (Lemma 5.6) that rigorously accounts for compression errors. This is the first provably robust Landing variant for compressed gradients (Theorem 5.7). 
**(c) Block-Wise Orthogonal Constraints** - Prior work assumes fully vectorizable variables, failing to address block-wise constraints (e.g., orthogonality in NN layers). - **Our Work:** We directly tackle block-wise structures (Section 6), proving convergence under a **unified step size** (Theorem K.1). To our knowledge, this is the first analysis for block-wise orthogonal optimization, addressing practical deep learning architectures. **(d) Gradient Clipping for Safety Guarantees** - Compressed gradients only remain within the safety region in expectation, risking constraint violations. - **Our Work:** A novel clipping strategy enforces deterministic safety without introducing bias. **5. Other Comments.** - We use the canonical metric of the Stiefel manifold. - The necessity of gradient clipping stems from the fact that contractive compressors only guarantee contraction in expectation. There is a small risk of gradient explosion after compression. To ensure the iteration remains within a safe region, we assume bounded gradients (It is fine to overestimate it), with which clipping ensures safety without introducing additional noise. --- Rebuttal Comment 1.1: Comment: I appreciate the reviewer’s thoughtful responses and the addition of numerical tests, which address my concerns. I am happy to raise my rating. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their thoughtful engagement and for considering our clarifications and insights. We greatly appreciate their willingness to update their review based on our response and additional experiments. Their constructive feedback and supportive remarks are very encouraging and helpful in strengthening our work.
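The clipping step mentioned in (d) above and in the final bullet of the rebuttal can be sketched as plain norm clipping. Using the Frobenius norm and a scalar bound (the overestimated gradient bound the authors mention) is our assumption for illustration, not necessarily the paper's exact rule:

```python
import numpy as np

def clip_to_bound(G, L):
    """Rescale G so its Frobenius norm never exceeds the bound L.

    If L overestimates the true gradient norm, clipping rarely triggers,
    so it enforces a deterministic safety region while leaving typical
    updates untouched.
    """
    norm = np.linalg.norm(G)
    if norm <= L:
        return G
    return (L / norm) * G

G = np.array([[3.0, 0.0], [0.0, 4.0]])  # Frobenius norm 5
assert np.allclose(clip_to_bound(G, 10.0), G)                  # within bound: unchanged
assert np.isclose(np.linalg.norm(clip_to_bound(G, 2.0)), 2.0)  # clipped to the bound
```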
Summary: The paper provides an error-feedback-based algorithm to solve distributed optimization problems on the Stiefel manifold (the set of orthonormal matrices). This algorithm generalizes the recently proposed retraction-free Landing method to a distributed algorithm with low communication complexity. The authors also provide theoretical convergence results and empirical evidence to support their claims, and they extend their work to problems with a block-level orthonormality property. Claims And Evidence: Most of the theoretical claims seem correct. See below for concerns about motivation and experiments. Methods And Evaluation Criteria: 1. How is the number of violations defined? This is crucial to understanding the claims. 2. How are the hyperparameters for each of the methods tuned? No details are provided to improve confidence in the results. 3. No empirical comparison against these baselines: [Chen et al., 2021], [Wang et al., 2022], [Qu et al., 2024], [Zhao et al., 2025]. Theoretical Claims: Proofs seem correct. Most of the techniques and ideas used in the paper are already known and hence straightforward. Experimental Designs Or Analyses: I don’t fully grasp why the penalty method fails here. 1. I have concerns about why more details about the penalty method are not provided. Both EF-Landing and the penalty method (of course with appropriate compression and error feedback) have a regularization weight \lambda which needs to be tuned. 2. I would also like to see the penalty method in the NN result plots. Not sure why it is omitted. 3. Why is the penalty method objective omitted? Even if the values are in a different range, it is useful to provide it (at least in the appendix if space is limited). Supplementary Material: Took a brief pass. Relation To Broader Scientific Literature: See other strengths and weaknesses. Essential References Not Discussed: Seem alright. Not familiar enough with the literature. Other Strengths And Weaknesses: I have non-trivial concerns about the significance and novelty of the work.
Most of the techniques used are known a priori, and this work seems a straightforward combination of them. 1. It is not super clear how landing improves over standard penalty/regularization-based methods. This wasn’t discussed in detail, and its convergence rate was not compared against theirs. As far as I understand, there always exists some value of the regularization (penalty) weight that leads to similar approximate constraint satisfaction. Also see the other sections' concerns about the empirical evidence of the same. 2. Another concern is that the failure of compression without error feedback is not special to the Stiefel manifold, as implied by the authors. The same argument works, for example, even for convex constraints (a disc instead of a circle) [Richtarik et al., 2021]. It is even known for unconstrained problems [Seide et al., 2014]. So it is slightly misleading for the authors to omit this prior knowledge. 3. While it is insightful that compressing the descent direction directly could make the two sub-fields non-orthogonal, it is also natural to compute the gradient of the penalty term at the centralized server, as it is not client-data dependent. This is a standard known practice for regularizers in decentralized optimization. The authors omit this in the discussion of their choice. Due to the above concerns and the lack of elaboration by the authors, I don’t understand what the novelty of the work is. Other Comments Or Suggestions: Minor: 1. Assumption 3.2: better to use a different variable than X to avoid confusion with the variable of (1) and the argument of the compressor. Questions For Authors: Please see above. Summarizing: 1. Advantage over the penalty method, both theoretically and empirically. 2. Novelty of the technical ideas and significance of the work. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful feedback and constructive comments. Below, we address each point in detail. All newly added experiments can be found in the **Additional Result Sheet (ARS)** https://anonymous.4open.science/r/EF-Landing-B6E4 **1. Landing vs. Penalty** The Landing method is roughly analogous to the Augmented Lagrangian method, while the Penalty method corresponds to traditional penalty approaches in constrained optimization. Below, we provide a detailed comparison. **(a). Traditional Penalty Methods: Fundamental Trade-offs** - **Small $\lambda$ Regime**: - Pros: Preserves fast convergence toward optimality. - Cons: Fails to enforce constraint feasibility. - **Large $\lambda$ Regime**: - Pros: Easy to ensure strict feasibility. - Cons: Induces ill-conditioning problems and slows convergence. - **Practical Implementation**: - Delicate schedule-based tuning (e.g., slowly increasing $\lambda$) is necessary to balance these competing goals. Even then, achieving both fast feasibility and optimality is non-trivial and problem-dependent in practical experiments. **(b) Landing Method: Decoupling the Trade-off** - **Key Advantage**: Employs a *constant, moderate $\lambda$* (typically $\lambda = 1$) to **simultaneously** ensure both optimality and feasibility. - **Benefits**: - Avoids the ill-conditioning issues associated with large $\lambda$ in penalty methods. - Eliminates the need for problem-specific $\lambda$ selection and extensive parameter tuning. While both Landing and Penalty methods achieve the same asymptotic convergence rate, Landing demonstrates superior experimental performance due to the above benefits. Our new experiments in Figures 1 - 4 in ARS validate this conclusion. **2. 
Error Feedback (EF)** - **Review of Prior Work.** Previous studies [Richtárik et al., 2021; Seide et al., 2014] have shown that vanilla gradient compression using contractive compressors fails to converge in distributed learning when **data heterogeneity** is present. In such cases, EF is necessary to address this issue and ensure convergence. - **Unique Challenges in Manifold Optimization.** Our findings reveal, for the first time, that even in a deterministic single-node setting (where no data heterogeneity exists and vanilla gradient compression is expected to converge), vanilla gradient compression unexpectedly fails when a Stiefel manifold constraint is imposed. This phenomenon highlights a fundamental distinction in manifold-constrained optimization: unlike unconstrained settings where single-node compression succeeds without error feedback, the Stiefel manifold’s geometry introduces an additional barrier to gradient compression. While we carefully reviewed [Richtárik et al., 2021], we were unable to locate a discussion of convex constraint sets (e.g., discs). We would greatly appreciate it if the reviewer could direct us to the specific page or section on this point. We can expect that problems with a disc constraint may have similar trouble to ours, but this issue is more critical in our setting because the Stiefel manifold has no unconstrained interior points. **3. Compute Penalty Gradient at Server** We thank the reviewer for the valuable insights drawn from decentralized optimization. We agree that computing client-independent terms, such as the penalty term, at the centralized server can also provide motivation for certain aspects of our algorithmic design, and we will include a detailed discussion of this in the revision. However, the central aspect of our analysis is that the orthogonality property, established in Proposition 4.1, is crucial for ensuring convergence guarantees.
While the algorithm can indeed be motivated from different perspectives, the convergence guarantees cannot be established without the above orthogonality property. **4. Theoretical Novelties** Apart from the above clarifications, we summarize other theoretical novelties in our response 4 to Reviewer kbBP. **5. Other Comments** - More experiments. Figures 7 & 8 in ARS show a comparison with the decentralized manifold methods DRSGD and DRGTA. Detailed analysis can be found in our responses 1 & 2 to Reviewer kbBP. Figures 1 - 6 in ARS show a comparison with the penalty method for PCA and NN tasks. It is observed that the penalty method fails to reach optimality and feasibility when using a fixed penalty parameter, and it also performs poorly in NN tasks due to the above trade-offs. - Violation. The penalty term $\frac14\\|X^\top X - I_p\\|^2$ indicates the magnitude of the violation. - Hyperparameters. All penalty parameters for Landing are set to 1, and the momentum for the NN experiments is set to 0.9. Step sizes are selected on a case-by-case basis through grid search. - We used the notation $X$ to define the compressor, considering that the decision variable and its gradient have the same shape. But the reviewer’s suggestion is helpful and we will revise it. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response to my comments and the new experimental results. These have partially addressed some of my concerns, like the advantage of the landing method and the comparison to more baselines. I am also glad to hear that the authors plan to add more experimental results. I agree with this sentiment and further believe that this paper requires a considerable revision to incorporate all these arguments in detail, including a discussion of technical novelty, the landing method vis-a-vis the penalty method, new experiments, and its relation to prior work. So I will change my recommendation from weak reject to weak accept.
I apologize for the confusion regarding referencing [Richtarik et al., 2021]. I was pointing to Sec 2.2 of the paper, which lists known compression failure results and provides a summary. For example, [Karimireddy et al., 2019] provides counterexamples where compressed SGD fails even when there is no “data heterogeneity.” The authors seem to have missed this paper in their draft. I am glad that the authors see that replacing the Stiefel manifold with a disc constraint will also lead to a similar failure of compressed projected SGD. This shows that general constrained optimization fails under compression without error feedback, and it is not just a special consequence of optimizing on the Stiefel manifold. To be clear, I don’t have any follow-up ask here.

[Karimireddy et al., 2019] Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian Stich, and Martin Jaggi. Error feedback fixes SignSGD and other gradient compression schemes. In 36th International Conference on Machine Learning (ICML), 2019.

---

Reply to Comment 1.1.1:

Comment: We sincerely thank the reviewer for taking the time to review our response and new experiments. We also greatly appreciate their thoughtful and constructive suggestions. Our paper will be revised based on the reviewer’s comments:

- The contributions section will be reorganized to better highlight our theoretical novelty.
- Additional discussion of Landing compared with other methods (such as penalty methods) will be included to emphasize Landing’s advantages.
- Results of further experiments, including comparisons with baseline methods in decentralized settings, will be added to illustrate the competitiveness of our approach.
- More relevant prior work, such as [Karimireddy et al., 2019], will be cited to clarify the necessity of error feedback from a broader perspective.

Once again, we sincerely appreciate the reviewer’s thoughtful engagement. Their constructive feedback and encouraging comments have been invaluable in helping us improve our work.
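For reference, the feasibility-violation quantity $\frac{1}{4}\|X^\top X - I_p\|^2$ discussed in this thread can be computed directly; the matrices below are illustrative examples, not from the paper:

```python
import math

def violation(X):
    """Feasibility gap 0.25 * ||X^T X - I_p||_F^2 for an n x p matrix X.

    Zero exactly when the columns of X are orthonormal (X on the Stiefel
    manifold); positive otherwise.
    """
    p = len(X[0])
    gram = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(p)]
            for a in range(p)]
    return 0.25 * sum((gram[a][b] - (1.0 if a == b else 0.0)) ** 2
                      for a in range(p) for b in range(p))

t = 0.7
Q = [[math.cos(t), -math.sin(t)],   # rotation matrix: orthonormal columns
     [math.sin(t),  math.cos(t)]]
X = [[1.0, 1.0],                    # not orthonormal: overlapping columns
     [0.0, 1.0]]
```

Here `violation(Q)` is numerically zero, while `violation(X)` is strictly positive, matching the intended use of the penalty term as a feasibility measure.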
RSMerge: Bridging Head and Tail Classes via Subsampled Model Merging
Reject
Summary: This paper proposes a method called RSMerge for the long-tailed classification task, which merges fine-tuned CLIP models trained on independent balanced subsets of data, then retrains the classifier on the entire dataset. A new metric, the head-to-tail ratio $\eta$, is proposed, and limitations of existing methods are identified for certain values of $\eta$. Comprehensive experiments are performed to evaluate RSMerge on a variety of challenging synthetic and real-world datasets, and good performance is identified across different values of the class-imbalance ratio $\rho$ and head-to-tail ratio $\eta$.

## Update after rebuttal

The rebuttal fully addressed my concerns. This paper is strong in both *novelty*, with the introduction of the head-to-tail ratio $\eta$ and the critical assessment of LIFT, and *methodology*, with comprehensive evaluations of RSMerge in both synthetic and real-world tasks. I read through the other reviews, and the additional results provided by the authors further strengthen the submission. Therefore, I recommend acceptance.

Claims And Evidence: Overall, the claims made in the paper are well-supported by the presented evidence.

Methods And Evaluation Criteria: The methodology and evaluation criteria are a strength of the paper. The “tricks of the trade” employed to address degradation of pretrained features are clearly explained in Section 4.1. Multiple evaluation metrics (overall accuracy; many/medium/few-shot accuracy) are utilized to understand fine-grained performance of the proposed method. Comparison against state-of-the-art methods is comprehensive. Finally, both synthetic tasks (such as the proposed CIFAR-100-LT, which enables manipulation of $\eta$) and multiple challenging real-world datasets are considered.

Theoretical Claims: No theoretical claims are made.

Experimental Designs Or Analyses: The experimental design, particularly with respect to the Section 4.3 empirical analyses on TinyImageNet-LT, is well-thought-out and informative.
The ablation studies in Section 5.2 are also welcome.

Supplementary Material: I briefly reviewed the code attached in the supplementary material. I would encourage the authors to include a README explaining how to install/run the code.

Relation To Broader Scientific Literature: This paper identifies a shortcoming of previous algorithms utilizing PEFT for long-tailed classification: specifically, that LIFT [2] may compromise head-class accuracy. The proposed method, RSMerge, draws from related literature including model merging [3] and vision-language foundation models [1].

[1] Radford et al. Learning transferable visual models from natural language supervision. ICML 2021.
[2] Shi et al. Long-tail learning with foundation model: Heavy fine-tuning hurts. ICML 2024.
[3] Tarvainen and Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. NeurIPS 2017.

Essential References Not Discussed: To my knowledge, essential references are sufficiently discussed.

Other Strengths And Weaknesses: A strength of the paper is its introduction of a new metric, the head-to-tail ratio $\eta$, which complements the commonly used class-imbalance ratio $\rho$. The finding that LIFT compromises head-class accuracy for certain values of $\eta$ is novel and interesting, and the proposed CIFAR-100-LT benchmark used to investigate $\eta$ is well-thought-out.

A minor weakness is that the computational tradeoffs of RSMerge against previous work are not made explicit. Specifically, the “computational analysis” subsection of Section 4.3 should include a more detailed comparison between RSMerge and competing methods, including LIFT and full fine-tuning. Currently, the full number of training runs is listed along with the subsample sizes and a brief comment on memory. The subsection would be improved with a table describing, e.g., the wall-clock training time, GPU VRAM usage, and disk usage at each stage of RSMerge.
A compute statement is missing, e.g., concerning the hardware used to run the experiments and/or the total cost in cloud compute credits.

Other Comments Or Suggestions: Below, I’ve included a list of typos or sentences where more clarification is needed.

1. Usage of periods after \paragraph is not consistent.
2. Line 112/114: \cite should be used instead of Author et al (Author et al 2024).
3. Line 163: Quotation mark is backwards.
4. Line 189: \ll should be used instead of <<.
5. Line 199: Extra space after (Figure 5).
6. Line 291: “Classification” typo.
7. Line 322: Malformed sentence: “Table 1 An”.

Questions For Authors: Which hyperparameters are tuned on the validation set? In the paper, only the temperature (for the calibration study) and early-stopping duration are mentioned. More specifically, how were $\lambda, M, N$ chosen?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
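As context for the two imbalance axes discussed in this review, here is a sketch of the standard exponential long-tailed profile parameterized by the class-imbalance ratio $\rho$. Whether the paper's CIFAR-100-LT variant uses exactly this profile is an assumption on our part; the head-to-tail ratio $\eta$ is a separate axis that is not fixed by $\rho$ alone:

```python
def longtail_counts(n_max, num_classes, rho):
    """Per-class sample counts for an exponential long-tailed profile.

    rho is the class-imbalance ratio n_max / n_min: class c keeps
    n_max * rho^(-c / (num_classes - 1)) samples, so counts decay
    geometrically from n_max down to n_max / rho.
    """
    return [int(n_max * rho ** (-c / (num_classes - 1)))
            for c in range(num_classes)]

# CIFAR-100-LT-style setup: 500 samples for the largest class, rho = 100
counts = longtail_counts(n_max=500, num_classes=100, rho=100)
```

The resulting counts are non-increasing, starting at 500 and ending at 5; varying which fraction of classes is designated "head" vs. "tail" on top of this profile is what a head-to-tail ratio metric would capture.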
Rebuttal 1:

Rebuttal: We thank the reviewer for the thoughtful feedback and positive comments on our novel findings, rigorous methodology, and thorough empirical analyses. We address the questions as follows:

---

> Q1. I would encourage the authors to include a README explaining how to install/run the code.

We want to highlight that we have included a run.sh script for reproducibility and will add a detailed README with step-by-step instructions upon the official code release.

---

> Q2. Computational analysis.

Thanks for the suggestion. The table below compares the computational costs of Full Fine-Tuning (Full-FT), LIFT (which employs a LoRA adapter with rank 64 applied to all MLP layers), and RSMerge on the ImageNet-LT and CXR-LT datasets. All models were trained to convergence using a batch size of 128 and mixed-precision training.

For RSMerge, we break down the computational cost into two stages. Stage 1 involves training models independently and in parallel on subsets with different imbalance ratios. Since the subset with the highest imbalance ratio contains the most training samples, it dominates the overall wall-clock time. Stage 2 retrains only the linear classifier on the full dataset—a highly efficient step, as it updates just a single linear layer. In our experiments, we used the same number of epochs for both stages of RSMerge.
| Method | Wall-clock Time | Training Iterations | Param (M) | Mem (G) | Acc |
|-------------|-----------------|---------------------|-----------|---------|------|
| | | **ImageNet-LT** | | | |
| Full-FT | 1:37:56 | 9060 | 87.0 | 14.5 | 73.9 |
| LIFT | 1:25:33 | 9060 | 9.0 | 13.3 | 77.0 |
| RSMerge Stage 1 (Rep Learning) | 1:15:38 | 8050 | 87.0 | 14.5 | 76.7 |
| RSMerge Stage 2 (Classifier Re-Train) | 0:30:00 | 9060 | 0.7 | 2.6 | 77.4 |
| Full RSMerge | 1:48:38 | | | | 77.4 |
| | | **CXR-LT** | | | |
| Full-FT | 0:53:43 | 5320 | 87.0 | 14.5 | 38.0 |
| LIFT | 2:14:32 | 13300 | 9.0 | 13.3 | 38.5 |
| RSMerge Stage 1 (Rep Learning) | 0:12:51 | 1300 | 87.0 | 14.5 | 37.8 |
| RSMerge Stage 2 (Classifier Re-Train) | 0:19:26 | 5320 | 0.7 | 2.6 | 39.3 |
| Full RSMerge | 0:32:17 | | | | 39.3 |

The computational overhead of RSMerge compared to existing methods depends heavily on dataset characteristics—particularly the original imbalance ratio. For example, in ImageNet-LT, which has an imbalance ratio of 256, the largest subset used in Stage 1 accounts for 89% of the full training data, resulting in relatively higher wall-clock time. In contrast, on CXR-LT, with a much more extreme imbalance ratio of 6401, the largest Stage 1 subset represents only 24% of the dataset, leading to a 4.4× reduction in training time compared to Full-FT (see Table 7 in the appendix for details). Additionally, while full-rank methods like Full-FT and RSMerge typically converge within 10 epochs on CXR-LT, LIFT required 50 epochs—substantially increasing its wall-clock time despite its parameter-efficient design.

---

> Q3. A compute statement is missing, e.g., concerning the hardware used to run the experiments and/or total cost in cloud compute credits.

All experiments were conducted on a cluster equipped with NVIDIA RTX 3090 GPUs (24GB VRAM), using Python 3.9.15, PyTorch 2.4.0, and CUDA 11.8.

---

> Q4. Which hyperparameters are tuned on the validation set?
> In the paper, only temperature (for the calibration study) and early-stopping duration are mentioned. More specifically, how were $\lambda$, $M$, and $N$ chosen?

Across all datasets, we fix $M=2$. As discussed in Section 5.2, we validate two $\lambda$ values (0.3 and 0.7): datasets closely aligned with CLIP's pre-trained distribution benefit from a higher $\lambda$, while others perform better with a lower value. For $N$, we use imbalance factors that are powers of 2, ensuring the maximum factor is less than half of the full dataset's imbalance (see Table 7). We use the same number of epochs as in LIFT—except for CXR-LT, where the hyperparameter search spans all baselines. Finally, we use an equal number of epochs for both RSMerge stages.

---

> Q5. Other Comments Or Suggestions.

Per the reviewer’s request, we have implemented the suggested changes, including ensuring consistent usage of periods after \paragraph, correcting citation formatting (L112/114), fixing the backward quotation mark (L163), replacing << with \ll (L189), removing the extra space after “(Figure 5)” (L199), correcting the typo in “Classification” (L291), and fixing the malformed sentence in L322 (“Table 1 An”).

---

Rebuttal Comment 1.1:

Comment: Thank you for the detailed response; my concerns are fully addressed. This paper is strong in both _novelty_, with the introduction of the head-to-tail ratio $\eta$ and the critical assessment of LIFT, and _methodology_, with comprehensive evaluations of RSMerge in both synthetic and real-world tasks. I read through the other reviews, and the additional results provided by the authors further strengthen the submission. Therefore, I recommend acceptance.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer 5wyk,

Thank you for your positive response! We extend our appreciation once again for your recognition of our work!

Best,
Authors.
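For concreteness, the stage-1 subsampling schedule described in this thread (imbalance factors as powers of two, $M$ resampled models per factor) can be sketched as below. The helper name and the exact stopping rule are our assumptions, reconstructed from the TinyImageNet-LT example in this discussion (full imbalance 100, ratios 1 through 64, 14 training runs); the paper's per-dataset choices live in its Table 7:

```python
import math

def subsample_schedule(full_rho, models_per_ratio=2):
    """Subset imbalance factors for RSMerge stage 1 (assumed convention).

    Starts from 1 (a perfectly balanced subset) and doubles at each step,
    for roughly log2(full_rho) + 1 steps; each factor gets models_per_ratio
    independently resampled training runs.
    """
    n_steps = int(math.floor(math.log2(full_rho))) + 1
    ratios = [2 ** i for i in range(n_steps)]
    return ratios, len(ratios) * models_per_ratio

ratios, n_models = subsample_schedule(100)
```

With `full_rho = 100` this yields ratios 1, 2, 4, 8, 16, 32, 64 and 14 total runs, matching the count of merged models reported for TinyImageNet-LT.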
Summary: This paper explores class-imbalanced recognition, identifies the head-to-tail class ratio as an under-explored problem in this setup, and proposes an approach that successfully addresses the main challenges in imbalanced class learning. The proposed approach leverages observations from the literature to navigate the trade-off between maintaining the model information from a pre-training representation learning stage and integrating new information from a calibration or classifier-training stage that addresses the imbalance in the data. The authors propose to train several models in parallel on subsets of the data as a pre-training/representation learning stage, and then merge and freeze the weights of these models to train a classifier on the full dataset.

Claims And Evidence: Overall, the claims are supported by convincing evidence. However, I would suggest the authors clarify where the improvements come from (the first or second stage of the proposed approach) and empirically show the benefits of the different design decisions. In particular, it would be valuable to see if the weight averaging of the models provides performance improvements regardless of the subsets' design (as would happen with traditional ensemble methods): the sizes of the subsets, imbalance ratios, etc.

Methods And Evaluation Criteria: Yes, the experimental section is well designed to show the effectiveness of the proposed approach to learn from imbalanced datasets.

Theoretical Claims: NA

Experimental Designs Or Analyses: The experiments section is comprehensive and well designed to support the main claims in the text. However, the simplicity of the method proposed might come across as a weakness. To address this, I would suggest further analysis of the different stages of the approach to illustrate its contributions.
For instance, it would be valuable to include the performance of the models from the initial stage when trained on the different subsets with different imbalance factors, or an analysis of the contribution of the two stages separately. This would help the reader see the contribution of the different elements of the approach.

Supplementary Material: Yes, all. From a practical point of view, the supplementary material is a good addition to the paper since it provides useful visualizations, additional results, and information on the training process and datasets.

Relation To Broader Scientific Literature: The authors integrate several observations from the literature to address the biases learned during training on imbalanced datasets. This leads to valuable insights. In particular, the authors successfully integrate and examine the effect of existing techniques to fine-tune foundation models. This is complemented with discussions and observations that help explain the applicability of existing approaches (e.g., LoRA, LIFT). The second stage of the approach leverages LA, an existing technique to learn under class imbalance. While this is successfully included in the approach, it would be valuable to further expand on the role of this technique in conjunction with the other elements of the approach.

Essential References Not Discussed: NA

Other Strengths And Weaknesses:

Strengths
- The authors successfully identify an underexplored challenge in imbalanced training (head-to-tail class ratio).
- The paper is well written and easy to follow.
- The method proposed is simple and effective, with competitive results in most of the experiments across several datasets.

Weaknesses
- As a minor comment, the paper would benefit from some visualization or experiment showing how the proposed approach bridges head and tail classes (as stated in the title). This is currently observed by comparing the performance of "many", "medium", and "few" in the results tables.
However, this is not linked to a particular element of the approach. I would suggest including a discussion or visualization showing how the approach "bridges head and tail classes".

Other Comments Or Suggestions:
- The titles of the x-axis in Figures 1 (top) and 2 look a bit untidy. I would suggest centering them with respect to the full figure.
- Figure 2: the reader would benefit if stages (1) and (2) were highlighted or indicated in the figure, not only in the caption.
- Figure 2 caption: finish with a period.
- L294: typo, repeated words.

Questions For Authors: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for their insightful feedback and for recognizing the value of our work on imbalanced training. Our responses to their questions, which we found fruitful and which led to several new experiments and analyses, are below:

---

> Q1. It would be valuable to see if the weight averaging of the models provides performance improvements regardless of the subsets' design (as would happen with traditional ensemble methods): the sizes of the subsets, imbalance ratios, etc.

Thanks for the suggestion. For TinyImageNet-LT, which has an original imbalance ratio of 100, RSMerge averages the weights of 14 models: 7 models are trained on subsets with progressively increasing imbalance ratios (from 1 to 64, doubling at each step), with 2 models per subset obtained via resampling at the same ratio. To demonstrate the impact of our proposed weight-averaging scheme in RSMerge, we compare this with a WA-{imbalance ratio} baseline that averages 14 models, each trained with a fixed imbalance. Notably, WA-100 aligns with the popular model-soup approach (Wortsman et al., 2022a), where the weights of 14 models fully fine-tuned on the entire dataset are averaged.

As noted by the reviewer, weight averaging consistently outperforms full fine-tuning, regardless of the subset design. However, the results show that different imbalance ratios yield varying outcomes across head and tail categories. For example, WA-8 achieves the highest tail accuracy of 75.0, whereas WA-100 (Full) reaches the highest head accuracy of 85.9. Rather than optimizing for a single imbalance ratio, RSMerge applies weight averaging across the full spectrum, effectively merging the advantages of both approaches to achieve a more balanced overall trade-off.
| | RSMerge | WA-1 | WA-2 | WA-4 | WA-8 | WA-16 | WA-32 | WA-64 | WA-100 (Full) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Acc | 78.3 | 74.6 | 75.9 | 76.0 | 77.2 | 77.2 | 77.3 | 77.9 | 77.6 |
| Head | 84.6 | 76.8 | 78.6 | 78.7 | 81.0 | 82.8 | 84.7 | 85.5 | 85.9 |
| Tail | 75.0 | 72.9 | 74.4 | 74.6 | 75.0 | 74.1 | 73.3 | 73.7 | 73.0 |

---

> Q2. Effect of the LA loss.

The table below demonstrates the impact of replacing the LA loss with either cross-entropy (CE) or class-balanced sampling (CB). Unlike LIFT, which relies heavily on the LA loss for optimal performance, RSMerge is only partially sensitive to the choice of loss function—thanks to its inherent balance achieved through subsampling and weight averaging.

| Method | Acc | Head | Tail |
| --- | --- | --- | --- |
| | **CE** | | |
| LIFT | 72.6 | 85.1 | 65.9 |
| RSMerge | 76.3 | 84.5 | 71.9 |
| | **CB** | | |
| LIFT | 75.3 | 81.0 | 72.2 |
| RSMerge | 78.2 | 84.5 | 78.4 |
| | **LA** | | |
| LIFT | 77.1 | 83.0 | 73.9 |
| RSMerge | 78.3 | 84.6 | 75.0 |

---

> Q3. An analysis of the contribution of the two stages separately. This would help the reader see the contribution of the different elements of the approach.

Figure 4 of the paper breaks down the contribution of each stage in the RSMerge pipeline. To further clarify these contributions, we present the contents of the figure in a simplified tabular format below. In particular, the introduction of EMA enhances performance for both head and tail classes while slightly reducing weight magnitude. Progressive subsampling, which averages multiple models trained on less imbalanced distributions, effectively limits weight changes and boosts tail-class accuracy—albeit at the cost of reduced head-class performance. Resampling and classifier re-training then restore head-class accuracy by averaging models trained on a fixed imbalance ratio and fine-tuning the classifier on the full dataset, respectively.
Notably, RSMerge achieves a slightly higher weight magnitude than LIFT, which we argue contributes to its superior final performance by striking an optimal balance between adaptation and stability.

| Method | Acc | Head | Tail | Weight Change Magnitude |
| --- | --- | --- | --- | --- |
| Full-FT | 73.2 | 83.4 | 67.7 | 35.4 |
| + EMA | 74.1 | 84.0 | 68.8 | 35.1 |
| + Prog. Subsampling | 77.9 | 83.1 | 75.1 | 12.7 |
| + Resampling | 78.1 | 84.5 | 74.7 | 11.9 |
| + Classifier Re-Train (RSMerge) | 78.3 | 84.6 | 75.0 | 12.1 |
| LIFT | 77.1 | 83.0 | 73.9 | 10.3 |

---

> Q4. It would be valuable to include the performance of the models from the initial stage when trained on the different subsets with different imbalance factors.

Please refer to the answer to Q1.

---

> Q5. How does RSMerge "bridge head and tail classes"?

Due to space constraints, we refer you to Reviewer EBKi's rebuttal response for "Q2. Why the proposed method can solve this problem".

---

> Q6. Other Comments Or Suggestions.

Per the reviewer’s request, we have implemented the suggested changes, including centering the titles in Figure 1, correcting the typo in L294, updating the caption in Figure 2, and highlighting the two stages of RSMerge in Figure 2.
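To make the merging operations discussed in this thread concrete, here is a minimal sketch of uniform weight averaging and a λ-weighted progressive merge. The exact form of the paper's Eq. 5 is not reproduced in this thread, so the recurrence below (keep a λ fraction of the previously merged weights, take 1 − λ from the current subset's average) is our assumption, with plain floats standing in for parameter tensors:

```python
def average_weights(state_dicts):
    """Uniform weight averaging (model-soup style) over parameter dicts."""
    n = len(state_dicts)
    return {k: sum(sd[k] for sd in state_dicts) / n
            for k in state_dicts[0]}

def progressive_merge(stage_models, lam=0.7):
    """Progressive merge across subsets of increasing imbalance.

    stage_models[i] holds the M resampled models trained on subset i;
    each step blends the running merge with the current subset's average.
    The lam = 0.7 default mirrors the value discussed in this thread.
    """
    merged = average_weights(stage_models[0])
    for models in stage_models[1:]:
        cur = average_weights(models)
        merged = {k: lam * merged[k] + (1 - lam) * cur[k] for k in merged}
    return merged
```

A toy call with two subsets of two "models" each (`[{"w": 1.0}, {"w": 3.0}]` and `[{"w": 5.0}, {"w": 7.0}]`) shows the mechanics: higher λ weights the earlier, more balanced subsets more heavily, which is the lever the rebuttal describes for trading head against tail accuracy.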
Summary: The paper proposes a method for long-tailed recognition using CLIP. The proposed method trains multiple models on different distributions and then merges them together. While training each model, an exponential moving average is applied to maintain the original generalizability of pre-trained CLIP.

Claims And Evidence:
- The authors' claim on head-to-tail imbalance is sound.
- Please cite a proper reference for the statement in Section 4 that “LoRA enhances tail-class performance by maintaining weight close to the pre-trained initialization, yet it sacrifices head-class accuracy.”

Methods And Evaluation Criteria:
- The authors' claim on head-to-tail imbalance is sound.
- Please cite a proper reference for the statement in Section 4 that “LoRA enhances tail-class performance by maintaining weight close to the pre-trained initialization, yet it sacrifices head-class accuracy.”

Theoretical Claims: There is no theoretical proof or analysis in this paper.

Experimental Designs Or Analyses:
- The authors evaluated their method on the standard long-tailed recognition benchmarks with the standard metrics.
- There are various hyperparameters, such as $\lambda$, $N$, $M$, and $\rho$. The authors need to show how sensitive the model is to those parameters.

Supplementary Material: I thoroughly read the supplementary materials.

Relation To Broader Scientific Literature: I do not have a concern regarding the broader impact.

Essential References Not Discussed: Many papers related to long-tailed recognition are ignored, especially from the loss function perspective (e.g., Equalization Loss v2, Balanced Softmax). I recommend that the authors write a section on general long-tailed recognition, not limited to architectural approaches. The authors might want to put it in the Appendix.

- Tan, Jingru, et al. "Equalization loss v2: A new gradient balance approach for long-tailed object detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
- Ren, Jiawei, et al. "Balanced meta-softmax for long-tailed visual recognition." Advances in Neural Information Processing Systems 33 (2020): 4175-4186.

Other Strengths And Weaknesses: I do not have any further strength and weakness comments.

Other Comments Or Suggestions:
- The citation format is wrong. Please carefully differentiate \citet and \citep. For example, in L112, Alexandrov et al., \citep{} should be just \citet{}, and in L121 (right column), \citep{radford} should also be \citet{}.
- Please unify the format in L425 (left) and L411 (right): ($\lambda$) and $N$.

Questions For Authors:
- I do not have any further questions.

**[post rebuttal]** Several concerns have been resolved. I recommend that the authors add the results of various $\lambda$ values in the appendix (at least 0.3, 0.5, 0.7). I have no objection to accepting this paper.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
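For readers following the loss-function discussion in this review, here is a minimal sketch of the logit-adjusted (LA) cross-entropy; with temperature τ = 1 it reduces to Balanced Softmax, which is the relation the authors invoke in their rebuttal. The function name is ours, and class priors are assumed strictly positive:

```python
import math

def la_cross_entropy(logits, label, class_priors, tau=1.0):
    """Logit-adjusted cross-entropy: shift each logit by tau * log(prior).

    tau = 0 recovers plain cross-entropy; tau = 1 recovers Balanced
    Softmax, making the latter a special case of the LA loss.
    """
    adj = [z + tau * math.log(p) for z, p in zip(logits, class_priors)]
    m = max(adj)  # stable log-sum-exp
    log_norm = m + math.log(sum(math.exp(z - m) for z in adj))
    return log_norm - adj[label]
```

With a head-heavy prior such as `[0.9, 0.1]` and tied logits, the adjusted loss on the tail label exceeds the plain cross-entropy loss, i.e., ties are pushed toward predicting the rare class at training time.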
Rebuttal 1:

Rebuttal: We thank the reviewer for their feedback and address their concerns in detail below:

---

> Q1. Please cite a proper reference for the statement in Section 4 that “LoRA enhances tail-class performance by maintaining weight close to the pre-trained initialization, yet it sacrifices head-class accuracy.”

This is one of our key findings, one that has not been observed in the previous LT literature. As demonstrated and empirically verified in Sections 3.2 and 4.3 (weight magnitude analysis) of the paper, this phenomenon is novel within the long-tail context. Notably, at the end of Section 3.2 (L204), we also observe that our findings align with recent trends in the LLM literature, where low-rank approximation methods such as LoRA often underperform full fine-tuning while better preserving the base model's performance on tasks outside the target domain (Biderman et al., 2024).

---

> Q2. There are various hyperparameters such as $\lambda$, $M$, $N$, and $\rho$. The authors need to show how sensitive the model is to those parameters.

First, note that $\rho$ is not a hyperparameter; it is a dataset-dependent variable that quantifies the proportion of head-to-tail classes (see Def. 3.1). Additionally, Section 5.2 of the paper outlines the rationale behind our choice of hyperparameters. To further support our design decisions, we conduct an ablation study on TinyImageNet-LT, which has an original imbalance ratio of 100, examining:

- $N$: the number of subsampling steps, ranging from 1 (corresponding to the perfectly balanced dataset) up to $\log_2$(imbalance ratio). By convention, we double the imbalance ratio at each step, though alternative curricula are possible.
- $M$: the number of models per subset, obtained via resampling at the same ratio.
- $\lambda$: the factor controlling the preservation of previously merged knowledge through progressive subsampling.

The table below summarizes our findings.
As the subset imbalance ratio increases, head-class performance consistently improves—from 74.5 at $N=1$ to 84.8 at $N=8$. In contrast, the highest tail-class performance of 76.1 is achieved at $N=4$. The best overall trade-off between head and tail performance occurs at $N=7$, indicating a balanced configuration. A similar trend is observed for the weighting parameter $\lambda$: a higher value emphasizes earlier subsets with lower imbalance ratios, benefiting tail classes, while a lower value shifts focus toward head-class performance. Lastly, resampling helps recover information lost due to subsampling, as reflected in the performance gain from 78.1 to 78.3.

| $\lambda$ | $M$ | $N$ | Acc | Head | Tail |
| --- | --- | --- | --- | --- | --- |
| 0.7 | 2 | 7 | 78.3 | 84.6 | 75.0 |
| 0.7 | 1 | 7 | 78.1 | 84.5 | 74.7 |
| 0.3 | 2 | 7 | 77.2 | 84.8 | 73.1 |
| 0.7 | 2 | 8 | 78.2 | 84.8 | 74.6 |
| 0.7 | 2 | 6 | 77.9 | 83.1 | 75.0 |
| 0.7 | 2 | 5 | 77.7 | 81.9 | 75.4 |
| 0.7 | 2 | 4 | 77.9 | 81.2 | 76.1 |
| 0.7 | 2 | 3 | 75.9 | 78.5 | 74.5 |
| 0.7 | 2 | 2 | 74.0 | 76.5 | 72.6 |
| 0.7 | 2 | 1 | 71.5 | 74.5 | 69.9 |

---

> Q3. Many papers related to long-tailed recognition are ignored, especially from the loss function perspective (e.g., Equalization Loss v2, Balanced Softmax).

As noted in L107, the reviewer's claim that we ignored Balanced Softmax is incorrect: it is well known in the long-tail community that Balanced Softmax is a special case of the logit adjustment loss function, with the temperature fixed at 1.

---

> Q4. I recommend that the authors write a section on general long-tailed recognition, not limited to architectural approaches.

There is an extensive body of literature on long-tail and imbalanced recognition approaches. However, due to space constraints, we focus on the methods most closely related to our work in the main text; in response to the reviewer's request, we have expanded the literature review in the appendix.

---

> Q5.
Other Comments Or Suggestions.

We have corrected the citation formatting errors and typos.

---

Rebuttal Comment 1.1:

Comment: I appreciate the authors' response. I will respond to it as early as possible. Before then, I realized the Methods section was not copied when I posted my review. I apologize for the inconvenience. Below is my original comment regarding the `Methods And Evaluation Criteria` section. Although I will not downgrade anything because of the concerns I had earlier in this section, I would like to share them to improve the paper in the future.

```
Although each model learns a portion of the entire dataset, the authors need to train MN models. This might require a much longer training time compared to other methods. Please measure the training time compared to other methods. Considering this overhead, the performance improvement is marginal or even slightly worse than one of the baseline methods (LIFT).

How to determine $\lambda$ in Eq. 5?
```

**Response to Authors**

> Regarding Q1.

Thank you for pointing out the relevant section. However, the authors did not compare with vanilla LoRA but used LIFT, where the final model uses AdaptFormer, another PEFT. Moreover, Table 6 in Shi et al. (2024) already shows that LoRA enhances tail classes compared to full fine-tuning. Hence, the authors' claim that "LoRA enhances tail-class performance, yet it sacrifices head-class accuracy" is their own finding might not be true. Furthermore, it is a well-known property that LoRA keeps weights close to the pre-trained initialization compared to full fine-tuning (e.g., Biderman et al., 2024; Hu et al., 2021). I do not think the authors discovered that "LoRA enhances tail-class performance by maintaining weight close to the pre-trained initialization, yet it sacrifices head-class accuracy." If so, the authors should theoretically prove this property.

> Regarding Q2.

Although $\rho$ is measured per dataset (L192), $\rho$ for each expert is arbitrarily determined (L248).
Section 5.2 does not explain how to choose them but rather reports empirical outcomes (L427-431, L412-418 right). The sensitivity analysis for $N$ in the rebuttal is helpful, but I still cannot see the rationale for using only 2 modules (L262), although the framework can be expanded to more modules. Moreover, the authors mentioned that they empirically observed the outcomes of different $\lambda$ values, but since there is no experiment, readers will find it difficult to accept the claim.

> Regarding Q3, Q4.

Thank you for pointing this out. However, I still believe that Equalization Loss v2 and other methods should be discussed to complete the related work. If space is constrained, it might be okay to put it in the appendix.

---

Reply to Comment 1.1.1:

Comment: We appreciate the reviewer’s insightful feedback, which has inspired several new experiments and analyses. Below, we address their questions in detail:

> ## Computational Analysis.

Due to space constraints, we refer you to Reviewer 5wyk's rebuttal response for “Q2. Complete computational analysis”.

> ## In Section 3.2, the authors did not compare with vanilla LoRA but used LIFT, where the final model uses AdaptFormer, another PEFT.

We observed the same trend with both AdaptFormer and LoRA, as both optimize low-rank adaptations of the full weight matrix (see https://imgur.com/a/qyIVa9G for the updated Fig. 2 using both methods). This clarification has been added to the text.

> ## Table 6 in Shi et al. (2024) already shows that LoRA enhances tail classes compared to full fine-tuning. Hence, the claim that LoRA enhances tail-class performance, yet sacrifices head-class accuracy, might not be the authors' own finding.

We acknowledge that the observation—“LoRA enhances tail-class performance, yet it sacrifices head-class accuracy”—can be deduced from Tab. 6 in the LIFT paper and may seem obvious in hindsight.
However, as also acknowledged by Revs. 5wyk and jM36, we underscore this trade-off by critically assessing LIFT across a range of head-to-tail ratios, $\eta$, in both synthetic (CIFAR100-LT) and diverse real-world datasets (CXR-LT, iNaturalist). These insights have motivated the development of our method, which achieves a more balanced performance between head and tail classes.

> ## LoRA maintains weights close to the pre-trained initialization compared to full fine-tuning.

We agree that LoRA's ability to keep weights closer to their pre-trained initialization, as opposed to full fine-tuning, is well-established; we have already noted a relevant example in the LLM community (L204). However, we believe our key observation lies in highlighting how this property relates to long-tail and imbalanced recognition tasks—specifically, the trade-off between head and tail performance. While providing a formal proof is beyond the scope of this work, we empirically verify this phenomenon in Section 4 and leverage it to develop our proposed method. We would be happy to include any additional references if the reviewers feel we have overlooked important work in this area.

> ## Outcomes of different $\lambda$

We reiterate that $\lambda$ controls the preservation of merged knowledge via progressive subsampling. The table below compares $\lambda = 0.3$ and $\lambda = 0.7$ on four datasets—CXR-LT, iNaturalist, ImageNet, and Places-LT—each split into many-shot (>100), medium-shot (20–100), and few-shot (<20) subsets. As shown in Section 5.2, datasets similar to CLIP's pre-training distribution (ImageNet and Places-LT) benefit from a higher $\lambda$, while CXR-LT and iNaturalist perform better with a lower value.
This selection is further supported by analyzing the performance gap between ***linear probing*** and ***Full-FT*** baselines on each dataset (Tab. 3, 4, 5, 6): a large gap indicates that the pre-trained representations are already well-suited for the downstream task, reducing the need for extensive feature adaptation, whereas a smaller gap suggests that additional adaptation is required for optimal performance.

|||$\lambda$=0.3|||||$\lambda$=0.7|||
|-|-|-|-|-|-|-|-|-|-|
||Overall|Many|Med|Few|-|Overall|Many|Med|Few|
|**CXR-LT**|**39.3**|**42.4**|**40.7**|**30.8**|-|37.8|42.9|41.5|20.8|
|**iNaturalist**|**78.2**|**76.7**|**78.5**|**78.2**|-|77.7|73.7|78.2|78.0|
|**ImageNet-LT**|76.2|81.2|74.6|67.8|-|**77.4**|**81.2**|**76.1**|**70.7**|
|**Places-LT**|51.2|52.2|52.2|48.8|-|**51.7**|**51.2**|**52.8**|**50.3**|

> ## Imbalance ratio $\rho$ for each expert is arbitrarily determined.

Please note that knowing $N$ (the number of subsampling steps) and the functional form for increasing the subset size (in our case, $2^N$) is enough to determine each expert's $\rho$. While alternative growth functions are an interesting future direction, our core message remains: varying imbalance ratios yield different head and tail outcomes. Rather than tuning a single imbalance ratio, RSMerge averages weights across an increasing range, merging the benefits of diverse strategies for a more balanced trade-off (see Q2, Rev. EBKi).

> ## Rationale for using $M = 2$.

$M$ is the number of models per subset, with each subset resampled at the same imbalance ratio ($\rho$). The table below shows that increasing $M$ on TinyImageNet-LT yields higher overall, head, and tail accuracy. To keep experiments manageable across five datasets, we fix $M = 2$, although we could obtain higher performance in the SOTA tables with larger $M$.
||M=1|M=2|M=6|M=12|
|-|-|-|-|-|
|Acc|78.1|78.3|78.6|78.8|
|Head|84.5|84.6|85.0|85.5|
|Tail|74.7|75.0|75.2|75.2|

> ## I still believe that equalization loss v2 and other methods should be used to complete related works.

Thanks for your suggestion. We have expanded the literature review in the appendix.
Summary: This paper conducts a comprehensive analysis of head-to-tail class ratios under different levels of class imbalance, investigating their effects on model performance. Building on these findings, this paper proposes a two-stage approach to address the stability-plasticity dilemma through decoupled learning and model merging, balancing the accuracy of head and tail classes across various conditions. Experiments on five datasets demonstrate the effectiveness and generalizability of the proposed method in real-world applications. Claims And Evidence: The author claims that existing methods often fail to adapt to varying head-to-tail class ratios. In other words, across different head-to-tail ratios, the accuracy of the dominant class is often higher than that of the tail class. Through the analysis of Section 4, ‘RSMerge: Imbalanced Learning by Controlling Weight Change’, as well as the experimental results presented in Figure 1 and Tables 2, 3, 4, 5, and 6, this paper demonstrates that the proposed method effectively reduces the accuracy gap between head and tail classes across different head-to-tail ratios. Methods And Evaluation Criteria: The proposed method is indeed reasonable and effective. Classification accuracy is used to evaluate the model's performance. Theoretical Claims: I carefully examine the theoretical claims in Section 4, ‘RSMerge: Imbalanced Learning by Controlling Weight Change’, and find them logically coherent. This section sequentially introduces Progressive Subsampling, Progressive Resampling, and Classifier Re-Training, followed by an empirical analysis of RSMerge. Experimental Designs Or Analyses: The proposed method is evaluated on five datasets, demonstrating its advanced performance. Furthermore, detailed ablation experiments validate the effectiveness of the proposed modules. This paper provides a comprehensive analysis of both comparative and ablation experiments. Supplementary Material: I reviewed all the supplementary material. 
Relation To Broader Scientific Literature: The proposed method builds upon previous approaches, such as LIFT [1], which applies parameter-efficient fine-tuning to CLIP's visual encoder.

[1] Long-tail learning with foundation model: Heavy fine-tuning hurts.

Essential References Not Discussed: None

Other Strengths And Weaknesses:

Strengths
1. The problem addressed in this paper is stated clearly, and a considerable portion of the paper is dedicated to describing imbalanced learning and the proposed method in an accessible manner.
2. Extensive experiments are conducted on five datasets to validate the robustness of the proposed method.

Weaknesses
1. The abstract presents an excessively detailed background but lacks a comprehensive description of the proposed method.
2. Although the paper provides a detailed explanation of the reasons behind the problem it aims to address, it does not thoroughly elaborate on why the proposed method can solve this problem. Instead, it devotes a substantial portion of the text to explaining how the method is implemented.
3. Figure 3 lacks clarity and needs further refinement. It is recommended to present the overall training process to enhance the understanding of RSMerge.
4. What concerns me is that the paper does not include an analysis of the algorithm's complexity, such as FLOPs, GPU memory usage, or training time. These experimental data should be added to the paper for further clarification.
5. The paper lacks a discussion of the algorithm's implementation details, which limits the method's transparency and reproducibility.
6. The performance improvement of the proposed method is relatively limited, with only a 0.2% increase on the Places-LT dataset, a 0.4% increase on the ImageNet-LT dataset, and a 0.9% decline on the iNaturalist 2018 dataset.

Other Comments Or Suggestions: No other comments or suggestions

Questions For Authors: None

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer's careful reading, positive feedback on our method's accessibility and effectiveness, and constructive questions that spurred additional experiments. Below, we address their questions in detail: --- > Q1. Lack of a comprehensive description of the proposed method in the abstract. We will follow your suggestion and improve the description of our method in the abstract. --- > Q2. Why the proposed method can solve this problem. **Motivation:** Section 3.2 reveals that head and tail performances are inherently opposed, driven not only by the imbalance ratio but also by our introduced head-tail ratio. Most LT methods boost tail accuracy at the cost of head-class performance. For instance, on CIFAR100-LT with a head-tail ratio of 19 (95 head classes) (Fig. 2), LIFT improves tail results via low-rank adaptation but underfits head classes, resulting in an overall performance of 84.9 versus 86.3 for Full-FT. This trade-off suggests that while full-rank adaptation excels at capturing head-class information, it often sacrifices tail performance by straying from the pre-trained configuration. **Our Objective:** We aim to sustain robust performance across both head and tail classes, irrespective of the head-tail scenario—a challenge that has been largely unexplored in LT literature. Our goal is twofold: we want the flexibility of full-rank optimization to effectively capture head-class information, and we encourage balanced learning between head and tail classes to prevent the model from being biased toward head classes. **Philosophy of RSMerge:** RSMerge merges full-rank optimization with balanced learning via progressive subsampling. While subsampling improves tail-class performance (Chaudhuri et al., 2023), it loses head-class data, harming head performance. We address this by training independent models on subsets with increasing imbalance ratios. 
Each model specializes in a segment of the imbalance spectrum, and their average produces a final model robust across the range. We substantiate this insight with a detailed ablation study below.

**Empirical evidence:** RSMerge averages the weights of $N \times M$ models, where $N$ is the number of subsampling steps (from 1 for a perfectly balanced dataset up to $\log$(imbalance ratio), doubling the ratio at each step) and $M$ is the number of models per subset via resampling. For TinyImageNet-LT (imbalance ratio 100), we use $N=7$ and $M=2$. The table below compares RSMerge with WA-{imbalance ratio} baselines—each averaging 14 models trained with a fixed imbalance ratio. Notably, WA-100 corresponds to the popular model soup approach (Wortsman et al., 2022a).

| | RSMerge | WA-1 | WA-2 | WA-4 | WA-8 | WA-16 | WA-32 | WA-64 | WA-100 (Full) | Full-FT |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Acc | **78.3** | 74.6 | 75.9 | 76.0 | 77.2 | 77.2 | 77.3 | 77.9 | 77.6 | 73.2 |
| Head | 84.6 | 76.8 | 78.6 | 78.7 | 81.0 | 82.8 | 84.7 | 85.5 | **85.9** | 83.4 |
| Tail | 75.0 | 72.9 | 74.4 | 74.6 | **75.0** | 74.1 | 73.3 | 73.7 | 73.0 | 67.7 |

Results show that different imbalance ratios yield varying outcomes across head and tail categories. For example, WA-8 achieves the highest tail accuracy of 75.0, whereas WA-100 reaches the highest head accuracy of 85.9. Rather than optimizing for a single imbalance ratio, RSMerge applies weight averaging across the full spectrum, effectively merging the advantages of both approaches to achieve a more balanced overall trade-off.

---

> Q3. Refinement of Figure 3.

Figure 3 illustrates the overall training steps involved in RSMerge. To enhance clarity, we explicitly highlight Stages 1 and 2 of the pipeline in the figure.

---

> Q4. Computational analysis.

For space reasons, we refer you to Rev. 5wyk's rebuttal response for "Q2. Computational analysis".

---

> Q5. Lack of algorithm's implementation details.
We describe our algorithm in Section 4, and detail hyperparameters, experimental setup, and additional information in Section 5.2 and Appendices A and B. The source code is also available. Please let us know which implementation details are unclear or hinder reproducibility. --- > Q6. The performance improvement of the proposed method is relatively limited. Addressing class imbalance requires considering the head-to-tail ratio—a critical but often overlooked factor. While traditional benchmarks focus on tail-dominated scenarios, real-world datasets like CXR-LT have a majority of head classes. In such cases, enhancing tail performance without sacrificing head accuracy is essential. Our method, RSMerge, achieves this by combining full-parameter fine-tuning for head-class information with weight averaging across progressively subsampled subsets, resulting in a better trade-off (RSMerge = 39.3 vs. LIFT = 38.5).
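The procedure described in the empirical-evidence paragraph above — train $M$ resampled models on each of $N$ progressively subsampled subsets (doubling the target imbalance ratio at each step) and average all $N \times M$ weight vectors — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' code: `train_fn`, `subsample_class_counts`, and `rsmerge_average` are hypothetical names, and the per-subset fine-tuning step is left as a callback that returns a flat weight vector.

```python
import numpy as np

def subsample_class_counts(counts, target_ratio):
    """Cap per-class sample counts so max(count) / min(count)
    is at most target_ratio (progressive subsampling)."""
    counts = np.asarray(counts, dtype=float)
    return np.minimum(counts, counts.min() * target_ratio)

def rsmerge_average(train_fn, full_counts, imbalance_ratio, M=2, seed=0):
    """Train M resampled models per subset, doubling the target imbalance
    ratio from 1 up to the full ratio, then return the averaged weights.
    train_fn(counts, rng) is a placeholder for per-subset fine-tuning."""
    rng = np.random.default_rng(seed)
    weights, ratio = [], 1.0
    while ratio <= imbalance_ratio:
        counts = subsample_class_counts(full_counts, ratio)
        for _ in range(M):             # M resampled training runs per subset
            weights.append(train_fn(counts, rng))
        ratio *= 2.0                   # next subset doubles the imbalance ratio
    return np.mean(weights, axis=0)    # weight averaging across all N*M models
```

With an imbalance ratio of 100, the doubling schedule visits ratios 1, 2, 4, …, 64, i.e. $N = 7$ subsets, so $M = 2$ averages 14 models — matching the TinyImageNet-LT setup in the rebuttal.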
Rethinking Benign Overfitting in Two-Layer Neural Networks
Accept (poster)
Summary: This paper analyzes the training dynamics of a two-layer CNN which can lead to benign overfitting. By analyzing a new data model with feature-specific noise, they claim that neural networks can learn "implicit features" that improve the accuracy when training on long-tailed data. Claims And Evidence: The theoretical results in Section 3 on the test loss for the two-layer CNN and the experiments in Section 5 support the paper's main claim about benign overfitting in the proposed data model. One claim that I find slightly misleading is that this paper shows that memorizing data noise, which was previously deemed harmful, can help classification accuracy. The "data noise" in the model presented in this paper is class-dependent and can essentially be interpreted as containing class-specific information that transfers over to the dataset. This is different from prior works, in which the "data noise" is class-independent noise. From this point of view, it is not surprising that memorizing data noise can help in the setting presented in this paper while being harmful in other works, and I am not sure to what extent calling the two different types of noise both "data noise" is accurate. Methods And Evaluation Criteria: The analysis of the test loss in Section 3 makes sense for the problem statement. The construction of the synthetic dataset in Section 5.1 mirrors the theoretical setting, and the benchmark datasets for noise correlation (MNIST, CIFAR-10) are commonly used. Theoretical Claims: Proofs were checked at a high level, details were not carefully checked. Experimental Designs Or Analyses: The design and analysis of the experiments on the synthetic dataset in Section 5.1 make sense and support the theoretical results from the previous section.
I do not quite understand the noise correlation verification in section 5.2, in particular how $||A_i^{\top}A_j||_F$ is calculated for real-world datasets such as MNIST and CIFAR-10 when these datasets do not exactly follow the data model presented in section 3 for which the quantities are defined. Supplementary Material: Briefly checked the proofs in the appendix. Relation To Broader Scientific Literature: This paper mainly belongs to a line of work theoretically analyzing benign overfitting in neural networks. The main difference in this work is the new data model, from which the authors derive new results about the effect of memorizing said noise. The authors also connect some of their results to the spurious correlation literature by showing that their results imply that class imbalance can hurt group accuracy. One interesting connection which the authors do not mention is to adversarial robustness. Specifically, the feature hypothesis "that adversarial examples can be directly attributed to the presence of non-robust features: features (derived from patterns in the data distribution) that are highly predictive, yet brittle and (thus) incomprehensible to humans" [1]. Indeed, if what the authors call data noise or implicit features can be recast as non-robust features, then there could be some connection there. I would be curious about the authors' thoughts on the same. [1] Ilyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., & Madry, A. (2019). Adversarial examples are not bugs, they are features. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: This paper provides an interesting new perspective on benign overfitting by considering a data model with a new form of class-dependent feature noise. The paper is somewhat easy to follow, although some of the assumptions (e.g. Condition 1) are a bit opaque at first glance and could use more explanation. Other Comments Or Suggestions: See above. 
Questions For Authors: My main questions are:
1. Making comparisons about the effect of data noise when the "data noise" is fundamentally different in this work.
2. How $||A_i^{\top}A_j||_F$ is calculated for real-world datasets.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for recognizing our contributions and for the constructive feedback. Below are our point-by-point responses:

**Clarification on "Data Noise" Definition**: We agree with your point and will revise the term 'noise' to 'data-specific information' and clarify its class-dependent nature. We will also clarify this distinction throughout the paper to avoid any potential confusion.

**Relationship to Adversarial Robustness**: We agree with your assessment that the implicit features discussed in our paper could be interpreted as non-robust features. As demonstrated in paper [1], standard training learns non-robust features from data-specific information, while adversarial training purifies these features to enhance model robustness. In this context, our theoretical framework has the potential to extend the results in [1]. Specifically, while adversarial training removes non-robust features, our paper posits that this process negatively impacts clean accuracy, particularly on long-tailed data. This corollary is empirically supported by the findings in [2], further supporting our theory.

**Comparison with Other Noise Models**: We acknowledge the extensive research on the effect of label and data noise on benign overfitting. To clarify the distinctions between our model and prior work, we provide a comparison with existing studies on benign overfitting, focusing on two-layer non-linear neural networks incorporating data noise. Studies [3-7] (a representative selection due to space constraints) focus on the 'explicit feature + class-independent Gaussian data noise' model (similar to the feature-noise model developed by [8]). While these studies explore various training settings, they consistently show that excessive data-specific noise leads to overfitting.
However, they fail to explain the empirical results that memorizing data-specific information can enhance generalization, particularly in the context of long-tailed data. In contrast, our work refines these models by incorporating the data-specific information with heterogeneous covariance. We theoretically prove that neural networks can leverage the implicit features learned from data-specific information to improve classification performance on long-tailed data. Our theory greatly extends the traditional understanding of overfitting. **Calculation of $||\mathbf{A}_i^\top\mathbf{A}_j||_F$**: To estimate the quantity $\mathbf{A}_i$ for real-world datasets, we decompose the intra-class data covariance matrix. Recall that in our data model, the covariance matrix of a data sample $(\mathbf{x},y)$ within class $i$ is $\boldsymbol{\Sigma}_i = \mathbf{A}_i\mathbf{A}_i^\top$. To estimate $\mathbf{A}_i$ for real-world datasets, we first compute the sample covariance matrix: \begin{align} \tilde{\boldsymbol{\Sigma}}_i = \frac{1}{|\mathcal{S}_i|-1}\sum\_{(\mathbf{x},y)\in\mathcal{S}_i} (\mathbf{x} -\bar{\mathbf{x}})(\mathbf{x} - \bar{\mathbf{x}})^\top, \end{align} where $|\mathcal{S}_i|$ is the size of class $i$ in the training dataset and $\bar{\mathbf{x}}$ is the sample mean. Next, we perform eigendecomposition of the sample covariance matrices $\tilde{\boldsymbol{\Sigma}}_i = \mathbf{Q}_i \boldsymbol{\Lambda}_i\mathbf{Q}_i^\top$, where $\mathbf{Q}_i$ contains the eigenvectors and $\boldsymbol{\Lambda}_i$ is the diagonal matrix of eigenvalues. We then estimate $\mathbf{A}_i$ as $\tilde{\mathbf{A}}_i = \mathbf{Q}_i\boldsymbol{\Lambda}_i^{1/2}$. Using the estimated $\tilde{\mathbf{A}}_i$ for each class, we can estimate $||\mathbf{A}_i^\top\mathbf{A}_j||_F$. We will add this detailed explanation in the appendix. [1] Allen-Zhu, Zeyuan, and Yuanzhi Li. "Feature purification: How adversarial training performs robust deep learning." 
IEEE Annual Symposium on Foundations of Computer Science, 2022. [2] Wu, Tong, et al. "Adversarial robustness under long-tailed distribution." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021. [3] Cao, Yuan, et al. "Benign overfitting in two-layer convolutional neural networks." Advances in neural information processing systems, 2022: 25237-25250. [4] Kou, Yiwen, et al. "Benign overfitting in two-layer relu convolutional neural networks." International conference on machine learning, 2023. [5] Shang, Shuning, et al. "Initialization Matters: On the Benign Overfitting of Two-Layer ReLU CNN with Fully Trainable Layers." arXiv preprint arXiv:2410.19139 (2024). [6] Karhadkar, Kedar, et al. "Benign overfitting in leaky relu networks with moderate input dimension." Advances in Neural Information Processing Systems, 2024. [7] George, Erin, et al. "Training shallow ReLU networks on noisy data using hinge loss: when do we overfit and is it benign?." Advances in Neural Information Processing Systems, 2023. [8] Allen-Zhu, Zeyuan, and Yuanzhi Li. "Towards understanding ensemble, knowledge distillation and self-distillation in deep learning." arXiv preprint arXiv:2012.09816.
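The estimation procedure described above (per-class sample covariance, eigendecomposition $\tilde{\boldsymbol{\Sigma}}_i = \mathbf{Q}_i \boldsymbol{\Lambda}_i \mathbf{Q}_i^\top$, then $\tilde{\mathbf{A}}_i = \mathbf{Q}_i \boldsymbol{\Lambda}_i^{1/2}$) is straightforward to reproduce. A minimal NumPy sketch, with illustrative function names rather than the authors' actual code:

```python
import numpy as np

def estimate_A(X):
    """Estimate A_i from the (n, d) samples of one class via the sample
    covariance: Sigma ~= Q diag(L) Q^T, so A ~= Q diag(sqrt(L))."""
    Xc = X - X.mean(axis=0, keepdims=True)
    Sigma = Xc.T @ Xc / (len(X) - 1)      # unbiased sample covariance
    evals, Q = np.linalg.eigh(Sigma)
    evals = np.clip(evals, 0.0, None)     # guard against tiny negative eigenvalues
    return Q * np.sqrt(evals)             # equals Q @ diag(sqrt(evals))

def noise_correlation(X_i, X_j):
    """||A_i^T A_j||_F estimated from the samples of classes i and j."""
    return np.linalg.norm(estimate_A(X_i).T @ estimate_A(X_j), ord="fro")
```

A quick sanity check on the construction: for a single class, $\tilde{\mathbf{A}}_i^\top\tilde{\mathbf{A}}_i = \boldsymbol{\Lambda}_i$, so $\|\tilde{\mathbf{A}}_i^\top\tilde{\mathbf{A}}_i\|_F = \|\tilde{\boldsymbol{\Sigma}}_i\|_F$.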
Summary: This paper establishes an enhanced feature-noise model by considering class-dependent heterogeneous noise across classes. Under this model, the paper demonstrates how memorization of long-tailed data boosts model performance. Additionally, it provides a phase transition of test error between benign and harmful overfitting. Claims And Evidence: Yes, the analysis is clear, rigorous, and thorough. Additionally, experiments are provided for further support. Methods And Evaluation Criteria: Yes, the model is more reasonable and closer to real-world applications than previous work. The experiment covers both synthetic data generated from the model and real-world datasets. Theoretical Claims: I didn't check the correctness of the proofs, but I am familiar with the proof technique, and the results look reasonable to me. So, it is very likely to be correct. Experimental Designs Or Analyses: Yes, I checked the experimental design section. Supplementary Material: No. Relation To Broader Scientific Literature: The generalization to heterogeneous data is good. For me, the interesting part is the results for long-tailed data, where the authors show that the classification accuracy of long-tailed data increases with the noise correlation ratio. This provides a theoretical explanation for the empirical observation of the necessity of long-tailed data in the training dataset. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: 1. Novel Generalization: The paper extends the feature-noise model to class-dependent heterogeneous noise, making it more applicable to real-world scenarios. 2. Theoretical Insights: It provides a rigorous analysis demonstrating how memorization of long-tailed data boosts model performance. Weaknesses: 1. Definition of Long-Tailed Data: The conditioning of the L-long-tailed data distribution on the neural network model at time step T may introduce a recursive or non-intuitive aspect. 
It is unclear why conditioning on W_T is justified when no new data is generated after optimization begins. Please refer to questions for detail. Other Comments Or Suggestions: No. Questions For Authors: I have questions about the definition of long-tailed data. The L-long-tailed data distribution (Definition 1) is defined by conditioning on the neural network model at time step T. I feel there might be an intrinsic tricky part, like a recursive definition, because the training data is generated first, and the neural network parameters are then optimized using this data. It seems odd to condition on W_T because no new data is generated after the optimization process starts. This is different from using SGD, where new data is introduced in each iteration. It would make more sense to me if a long-tailed data subset of the entire training dataset were defined independently of the distribution, and then this subset’s specific role throughout the training dynamics were analyzed. Please correct me if I have any misunderstanding. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for recognizing our contributions and for the insightful comments. Below, we address your concerns in detail: **Clarification of Long-Tailed Data Definition**: Our definition is not recursive but instead relies on a fixed value of $T$, with no new data being introduced post-training, nor does it alter the training process. Instead, we use the final model $\mathbf{W}^{(T)}$ —obtained after a sufficient number of optimization steps $T$—to assess the long-tailedness of individual samples. Classical definitions of long-tailedness often hinge on simple one-dimensional statistics (e.g., sample counts or Zipf distribution). However, such metrics may fail to capture nuances of high-dimensional data (e.g., 3\*32\*32 images in CIFAR). For instance, for Sub-Gaussian data, norms alone do not always capture which samples are “rare” or “hard” to learn due to concentration effects. To address this, we draw inspiration from prior work that uses trained models to identify "long-tailed" samples. For example, Feldman and Zhang [1] evaluated long-tailedness by measuring the accuracy changes when training with or without a specific data sample. Similarly, Garg et al. [2] computed a curvature-based measure using trained models. In a similar manner, our definition leverages the trained model $\mathbf{W}^{(T)}$ to map high-dimensional samples to a one-dimensional metric, allowing us to identify long-tailed data points from the perspective of the network’s learned representation. We hope this explanation clarifies that conditioning on $\mathbf{W}^{(T)}$ does not introduce a recursive definition and is instead a post-hoc method for characterizing long-tailed data distributions in high-dimensional settings. Thank you again for your feedback; please let us know if further elaboration is needed. [1] Feldman, Vitaly, and Chiyuan Zhang. "What neural networks memorize and why: Discovering the long tail via influence estimation." 
Advances in Neural Information Processing Systems, 2020. [2] Garg, Isha, Deepak Ravikumar, and Kaushik Roy. "Memorization Through the Lens of Curvature of Loss Function Around Samples." International Conference on Machine Learning, 2024.
Summary: This paper explores overfitting in neural networks, challenging the common belief that it harms generalization. Specifically, the authors argue that certain forms of overfitting can be benign and demonstrate—both mathematically and empirically—that benign overfitting can enhance performance on long-tailed data. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes. But analysis on more complex datasets such as ImageNet would further strengthen the claim made by the paper. Theoretical Claims: No Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: The paper is trying to explore the phenomenon of overfitting where it helps the model's performance. This is quite a broad topic and can be applied to many research areas. Essential References Not Discussed: None that I know of Other Strengths And Weaknesses: ## Strengths - The paper is well-written and easy to follow. - It presents a novel perspective on overfitting, challenging conventional wisdom in the field. - The claims are rigorously supported through both mathematical analysis and empirical validation, strengthening the arguments. ## Weaknesses - The experimental section could be strengthened by evaluating more complex datasets, such as ImageNet, rather than relying solely on MNIST and CIFAR-10. Other Comments Or Suggestions: None found Questions For Authors: One of my major concerns is the dataset used for evaluation. Is there any reason for not performing experiments on more complex datasets? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for recognizing our contributions and for the insightful comments. Below, we address your concerns in detail:

**Clarification on the Choice of Datasets**: We would like to clarify the reasons for our experiments on MNIST.
- While MNIST may be considered simpler in comparison to other datasets, it remains a highly relevant choice for studying long-tailed data distributions. The long-tailed examples in MNIST—often corresponding to poorly written digits—offer clear physical interpretability [1].
- Moreover, existing theoretical analyses in feature learning often rely on simplified synthetic data models (e.g., combining features with class-independent sub-Gaussian noise). Unlike those simpler models, our approach incorporates realistic intra-class covariance structures, similar to those in MNIST, providing new insights into the mechanism behind benign overfitting.

**Applicability to More Complex Datasets**: We understand your concern about the applicability of our findings to larger and more complex datasets. To address this, we extended our experiments to CIFAR-100, a more challenging dataset, using a deeper network architecture, ResNet-101. We replicate the experimental settings from Figure 7 and present the updated results below:

| Remove percentage | 0% | 20% | 40% | 60% | 80% |
|----------|----------|----------|----------|----------|----------|
| Remove images with high influence scores | 66.48% | 64.77% | 55.34% | 46.53% | 34.38% |
| Remove images with low influence scores | 66.48% | 65.52% | 60.52% | 55.34% | 38.20% |

These results further validate our theoretical predictions (particularly Statement 2(b) in Theorem 1). As shown, removing high-score samples results in a more significant accuracy drop compared to removing low-score samples.
This trend aligns with our theoretical expectations, demonstrating that our framework applies to more challenging datasets like CIFAR-100, as well as deeper network architectures like ResNet-101. Due to time constraints, we plan to include additional experiments on Tiny ImageNet in an updated version of our work. We hope that these extended experiments clarify the broader applicability of our results to more complex datasets. Thank you again for your valuable comment. [1] Feldman, Vitaly, and Chiyuan Zhang. "What neural networks memorize and why: Discovering the long tail via influence estimation." Advances in Neural Information Processing Systems 33 (2020): 2881-2891.
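The experiment behind the removal table above follows a simple protocol: rank training samples by influence score, drop the top (or bottom) fraction, retrain, and record test accuracy. A minimal sketch of that loop, assuming precomputed per-sample scores; `retrain_fn` and the score source are placeholders, not the authors' actual pipeline:

```python
import numpy as np

def removal_curve(scores, remove_fracs, retrain_fn, high=True):
    """For each fraction p, drop the p-fraction of samples with the
    highest (high=True) or lowest influence scores, retrain on the
    surviving indices, and collect the resulting metric."""
    order = np.argsort(scores)      # indices sorted by ascending score
    if high:
        order = order[::-1]         # highest-scoring samples first
    n = len(scores)
    results = []
    for p in remove_fracs:
        keep = np.sort(order[int(p * n):])   # indices that survive removal
        results.append(retrain_fn(keep))
    return results
```

Running it twice — once with `high=True` and once with `high=False` — reproduces the two rows of such a table for a given list of removal percentages.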
Summary: This work investigates the phenomenon of benign overfitting in two-layer neural networks by discussing a feature-noise model. The authors introduce a class-dependent heterogeneous noise model to attempt to explain why neural nets can leverage long-tailed data distributions for generalization. The paper claims to provide theoretical results demonstrating a phase transition between benign and harmful overfitting and supports these findings with empirical evaluations on synthetic and real-world datasets. The authors argue that neural nets can learn implicit features from data-specific noise, which enhances classification accuracy, particularly for long-tailed distributions. Claims And Evidence: The central claim of the paper is that neural networks can use data-specific noise as an implicit feature to improve classification performance. While the theoretical results are rigorous, the paper's framing of memorization is somewhat misleading. To me, the so-called "memorization of noise" is, in reality, learning from well-separated covariances rather than pure memorization of useless noise. The experimental results generally align with the theoretical findings, but the paper lacks a clear discussion on why its proposed model is the right one for studying benign overfitting in practical scenarios. Methods And Evaluation Criteria: The authors use a combination of theoretical analysis and experimental validation. The theoretical methods appear sound and build upon prior work, extending it to the heterogeneous noise setting. However, the experimental evaluation could be improved by including comparisons with alternative models of benign overfitting, rather than focusing exclusively on their own framework. The evaluation criteria for success are reasonable but could be more explicitly tied to real-world applications (CIFAR is ok, but MNIST does not really count in my opinion).
Theoretical Claims: The theoretical results appear correct and are well-supported by the provided proofs. The use of singular value distributions and covariance matrix properties seems to be an extension of previous work. However, the claim that memorizing noise is beneficial should be reconsidered; what is actually being demonstrated is that certain types of structured noise contain class-relevant information, making it part of the learnable signal rather than harmful noise. Experimental Designs Or Analyses: This is a theory paper, so this kind of question seems ill-defined. The authors do perform experiments, and the experimental setup is generally well-structured, with evaluations on both synthetic and real-world datasets (MNIST, CIFAR-10), even though I do not really count MNIST as meaningful. However, the key illustrative figures are unclear, particularly regarding the interpretation of "noise" and how it contributes to learning. It would be helpful to clarify how increasing the noise strength without a signal component affects performance and whether the conclusions hold for different model architectures. Supplementary Material: The supplementary material contains detailed proofs and additional experimental results. Relation To Broader Scientific Literature: The paper builds upon prior work on benign overfitting and feature-noise models, referencing key studies such as those by Kou et al. (2023) and Feldman & Zhang (2020). However, it does not sufficiently engage with alternative explanations for benign overfitting, such as implicit regularization effects in overparameterized models. These are classical and widely accepted. More discussion on the relationship between this work and theories of representation learning would strengthen its contribution. Essential References Not Discussed: The paper should include more discussion of recent work on implicit regularization and the role of covariance structures in deep learning. 
Some relevant studies on feature learning dynamics in deep networks and their relation to benign overfitting are missing. Additionally, work on the role of batch normalization and other architectural choices in mitigating harmful overfitting should be considered. Off the top of my head, I can think of work such as "The Implicit Bias of Gradient Descent on Separable Data" (Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, Nathan Srebro; JMLR 19(70):1-57, 2018), or "The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks". Other Strengths And Weaknesses: Strengths: The paper presents an interesting refinement of existing models and contributes new theoretical insights into overfitting in long-tailed distributions. The analysis of singular values and covariance structures is a valuable addition. Weaknesses: The framing of "memorization of noise" is somewhat misleading, as what is actually observed is learning from informative covariances. The motivation for the specific model chosen could be stronger, and the empirical results would benefit from comparisons to alternative benign overfitting models. Other Comments Or Suggestions: Clarify what is meant by "memorizing noise": if the noise contains label information, then it is not purely noise but an implicit feature. Improve the illustrative figures to make it clearer how noise and signal interact in the proposed model. Include a discussion on why this specific refinement of the feature-noise model is preferable to other potential explanations for benign overfitting. Compare results against other recent theories of benign overfitting, such as those related to implicit regularization. Questions For Authors: Can you clarify why you refer to learning from structured noise as "memorization" when it appears to be leveraging class-dependent covariance information? How would your results change if the noise distribution was not sub-Gaussian but had a different structure? 
Would your findings hold in deeper networks beyond two-layer models, and how would they interact with batch normalization or dropout? Could you include comparisons with other models of benign overfitting to contextualize your results? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for recognizing our contributions and for the insightful comments. We will answer your questions in the following. **Discussion on Datasets**: Thanks for your comment. Please see the reply to Reviewer pfPE due to the space limit. **Clarification of Noise Memorization**: Typically, memorization (overfitting) of a data sample is defined by achieving near-optimal training loss on that sample [1], a definition commonly adopted in the literature. Furthermore, the term 'memorization of noise' originates from feature learning studies [2,3], where it refers to situations in which classification relies heavily on data-specific information rather than the explicit feature. However, we agree that the term "noise" might be misleading, especially when it implicitly carries label-related signals. We will revise the terminology throughout the paper, replacing 'noise' with 'data-specific information' and explicitly clarifying its class-dependent nature. **Clarification of the Illustrative Figure**: In Figure 2, we clarify the feature strength as $||\mathbf{u}_i||_2$ and noise strength as $||\mathbf{A}_i^\top\mathbf{A}_i||_F$. Our paper includes settings and explanations for each figure, and we are happy to provide additional clarifications for any other figures that may require further elaboration. **Justification of the Proposed Model**: Our refinement is supported by two key observations. First, from the t-SNE visualization in Figure 1, we observe that different classes exhibit distinct intra-class distributions. Second, Figure 6 further verifies this by showing that real-world datasets, such as MNIST, have higher intra-class squared Frobenius norms compared to their inter-class counterparts. **Comparison with Other Benign Overfitting Models**: Due to the space limit, for a comparison with other models of benign overfitting, please see 'Comparison with Other Noise Models' for reviewer 5giJ. 
**Discussion on Implicit Bias/Regularization**: We will include an overview of studies and comparisons of implicit bias/regularization in the appendix. However, we would like to emphasize that our work is less relevant to implicit bias and regularization research. Implicit regularization is a broad research area focused on identifying high-level regularization effects of optimization algorithms, such as *max-margin, low-rank, and flat biases*, which display good generalization properties. In contrast, our paper is more aligned with research on feature learning, which characterizes in detail how the *features are learned during training and how these learned features contribute to generalization*. **Discussion on Noise Structure**: We appreciate your comment regarding the applicability of sub-Gaussian distributions. Sub-Gaussian distributions are widely applicable in practice, as they encompass a broad range of real-world data distributions (e.g., ImageNet). Specifically, many natural and empirical data distributions are bounded, and such bounded distributions inherently exhibit sub-Gaussian characteristics. While other noise structures may exhibit different properties, sub-Gaussian distributions provide a strong and general framework that captures the behavior of most real-world data, making them a reasonable assumption in many contexts. Exploring the impact of alternative noise structures would undoubtedly be an interesting direction for future work. **Extension to Deeper Neural Networks and Modern Techniques**: From a theoretical perspective, analyzing neural networks with more than two layers presents significant challenges due to the non-convex and non-smooth nature of the loss function, as well as the complex interdependencies of parameters across layers. From an empirical perspective, we found that the main conclusions of our two-layer neural network analysis generally extend to deeper architectures. 
In our experiments with CIFAR-10 and CIFAR-100, we employ ResNet architectures with batch normalization, demonstrating that our theoretical predictions regarding covariance structure are applicable beyond simple two-layer models. [1] Cheng, Chen, John Duchi, and Rohith Kuditipudi. "Memorize to generalize: on the necessity of interpolation in high dimensional linear regression." Conference on Learning Theory, 2022. [2] Allen-Zhu, Zeyuan, and Yuanzhi Li. "Towards understanding ensemble, knowledge distillation and self-distillation in deep learning." arXiv preprint arXiv:2012.09816. [3] Cao, Yuan, et al. "Benign overfitting in two-layer convolutional neural networks." Advances in neural information processing systems, 2022.
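The class-dependent heterogeneous noise model discussed in this rebuttal (class signals $\mathbf{u}_i$, noise factors $\mathbf{A}_i$, with intra-class covariance Frobenius norms exceeding their inter-class counterparts) can be illustrated with a small numpy simulation; all dimensions, scales, and variable names below are made up for illustration and are not the paper's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_per_class, K = 50, 200, 2  # illustrative sizes

# Class signals u_k and class-dependent noise factors A_k (random stand-ins).
U = rng.normal(size=(K, d))
A = rng.normal(scale=0.3, size=(K, d, d))

def sample_class(k, n):
    # x = u_k + A_k^T z with z ~ N(0, I), so the noise covariance A_k^T A_k is class-dependent.
    z = rng.normal(size=(n, d))
    return U[k] + z @ A[k]

X = [sample_class(k, n_per_class) for k in range(K)]

# Empirical noise covariance per class (np.cov expects variables in rows).
covs = [np.cov((X[k] - U[k]).T) for k in range(K)]

# Intra-class squared Frobenius norm vs. inter-class covariance inner product.
intra = np.linalg.norm(covs[0], "fro") ** 2
inter = np.trace(covs[0] @ covs[1])
print(f"intra-class ||Sigma_0||_F^2 = {intra:.1f}, inter-class <Sigma_0, Sigma_1> = {inter:.1f}")
```

With independently drawn noise factors, the intra-class quantity dominates the inter-class one, mirroring the qualitative pattern the rebuttal attributes to Figure 6.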
Multimodal Medical Code Tokenizer
Accept (poster)
Summary: MEDTOK is a tokenizer designed specifically for medical codes, improving upon traditional approaches (which treat each code as an isolated textual unit) by considering the textual description of each medical code and its ontological hierarchy and relationships across different medical coding standards. It employs a language model encoder to process textual descriptions and a graph encoder to represent relational structures among medical codes. Both modalities (textual and relational) are combined into a unified token space suitable for input into various medical models. Integrating MEDTOK into state-of-the-art EHR models shows substantial improvements across diverse medical tasks, including outcome prediction, diagnosis classification, drug recommendation, and risk stratification. Claims And Evidence: The paper clearly identifies shortcomings in standard EHR tokenizers and effectively motivates the need for MedTok. The claims about improved performance by incorporating domain knowledge through code descriptions and relational structures are convincingly supported by empirical results. The experiments showing improvements simply by replacing standard tokenizers with MedTok are particularly compelling. Methods And Evaluation Criteria: The methods proposed, including leveraging both language models and graph encoding to integrate textual and relational data, are well justified for the medical domain. The evaluation criteria using standard benchmarks like MIMIC-III, MIMIC-IV, and EHRShot are appropriate and relevant. Theoretical Claims: This paper does not explicitly present theoretical proofs. Experimental Designs Or Analyses: The experimental analyses appear sound and clearly demonstrate improvements over state-of-the-art models. The provided ablation studies are valuable, though an additional ablation study excluding cross-modality embeddings would be beneficial to fully understand their contribution. Supplementary Material: Yes, all of it. 
Relation To Broader Scientific Literature: The paper’s contributions clearly extend existing literature on EHR tokenizers by introducing multimodal embeddings leveraging both textual descriptions and relational contexts. This approach offers a meaningful advancement compared to previous studies focusing primarily on isolated textual tokens. Essential References Not Discussed: The paper seems to cover well relevant related works. Other Strengths And Weaknesses: Strengths: * Clearly motivated and well-presented rationale for MedTok. * Straightforward integration with existing state-of-the-art models. * Robust empirical support showing improvements across various medical prediction tasks. Weaknesses: * Graph construction methodology seems complex and potentially difficult to reproduce. * It would enhance the paper to include visualizations of the learned embeddings to better illustrate how MedTok captures relational and textual information. Other Comments Or Suggestions: Typo: Line 303-right column: "HRShot" -> "EHRShot" Questions For Authors: 1. How did you assess the quality of the final constructed knowledge graph? Were quantitative or qualitative evaluations performed? 2. Did you conduct ablation studies specifically targeting each step detailed in appendix A.1.1 regarding graph construction? Are all these steps critical? 3. Is the code for graph construction and MedTok tokenizer publicly available or planned to be released? 4. Have you done an ablation study excluding the cross-modality embeddings? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### **W1:** We apologize for the misunderstanding. In the final version, we will clarify that our graph construction approach is relatively simple and straightforward. Specifically, for each medical code, we define the code’s graph as a subgraph of a knowledge graph centered on the node representing the code plus its 2-hop neighborhood. Therefore, graph construction is straightforward and computationally efficient. ### **W2:** Thank you for your insightful feedback. The figure (https://anonymous.4open.science/r/MedTok-8DEE/drug_embedding.png) illustrates different drug codes (NDC, ATC, RxNorm) for the five selected drugs, demonstrating how MedTok effectively captures drug semantics. We observe that different codes representing the same drug cluster together, indicating that MedTok learns meaningful tokens of drugs across various coding systems. Additionally, NDC and RxNorm are highly granular, meaning a single drug can have multiple codes reflecting different dosages or formulations. Despite this granularity, MedTok successfully groups related tokens, preserving their underlying medical meaning. This highlights MedTok’s ability to generalize across different drug coding schemes. ### **Comments:** Thank you for your careful review. We will change it to ‘EHRShot’ in the next version. ### **Q1:** Thank you for your comment. We ensure the accuracy of the knowledge by extracting subgraphs from a well-established biomedical knowledge graph (e.g., PrimeKG). We include nodes within two hops of the medical code to build the subgraph. However, since PrimeKG is dense, we use the PageRank algorithm to select the top 2,000 most influential nodes connected to the central medical code. Based on these processes, we could make sure the extracted subgraph is of high quality. ### **Q2:** Thank you for your comment. 
Appendix 1.1 explains how we extract the subgraph centered on the medical code from PrimeKG, including using CUI code to map medical code and entities in KGs. It is about extracting knowledge from an existing KG, not constructing a new KG. ### **Q3:** Yes! We will release our tokenizer and all source data, including code description and corresponding subgraph. ### **Q4:** Thank you for your valuable suggestion. **Please refer to the responses to Reviewer EVKN: E2-E3.** --- *We sincerely thank the reviewer for the thoughtful and encouraging feedback on the originality, quality, clarity, and significance of MedTok. We appreciate your positive evaluation of our work and the opportunity to address your comments. In response, we have provided detailed explanations to address your questions. Please do not hesitate to let us know if additional clarifications would be helpful. Thank you again for your valuable input.* --- Rebuttal Comment 1.1: Comment: Thanks for your replies and additional figure/experiments. I will keep my accept score --- Reply to Comment 1.1.1: Comment: Dear Reviewer Eig7, Thank you for your kind words on MedTok. We sincerely appreciate your insightful suggestions on MedTok. Thank you again for your support and for recognizing the value of our research. Best, Authors
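The subgraph extraction described in Q1 of this rebuttal (take the 2-hop ego network around the code's node, then prune to the top-k most influential nodes by PageRank) can be sketched with networkx. The `code_subgraph` helper and the toy graph below are illustrative stand-ins, not the authors' code, and PrimeKG itself is not loaded here:

```python
import networkx as nx

def code_subgraph(kg: nx.Graph, code_node, hops: int = 2, top_k: int = 2000) -> nx.Graph:
    """Extract a code-centered subgraph: the hops-hop ego network, pruned to
    the top_k highest-PageRank nodes (the code node itself is always kept)."""
    ego = nx.ego_graph(kg, code_node, radius=hops)
    ranks = nx.pagerank(ego)
    keep = sorted(ranks, key=ranks.get, reverse=True)[:top_k]
    if code_node not in keep:
        keep.append(code_node)
    return ego.subgraph(keep).copy()

# Tiny toy graph standing in for a dense knowledge graph like PrimeKG.
kg = nx.path_graph(10)
sub = code_subgraph(kg, 4, hops=2, top_k=3)
print(sorted(sub.nodes))
```

On a real knowledge graph the pruning step is what keeps the per-code subgraph tractable despite the density of the source graph.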
Summary: Medical codes in electronic health records (EHRs) contain rich textual descriptions and structured relationships. However, existing tokenization methods treat them as isolated textual tokens, failing to capture their ontological and relational context. The authors propose MEDTOK, a multimodal medical code tokenizer that creates a codebook to select the top k modality-specific or cross-modality embedding tokens, integrating both text-based and graph-based representations of medical codes. They evaluate MEDTOK across five clinical tasks on MIMIC-III and MIMIC-IV, as well as two additional tasks on EHRShot, and conduct an ablation study to demonstrate the effectiveness of the proposed multimodal approach. Claims And Evidence: - The claims presented in the paper are supported by experimental results and theoretical proofs. - Please also check the __Theoretical Claims:__ Methods And Evaluation Criteria: Yes. The authors use two simple linear projection layers for modality-specific embeddings and a cross-attention module for cross-modality embeddings, both of which are common and appropriate choices in multimodal tasks. Theoretical Claims: - Yes, I have carefully checked all equations presented in the "Approach" section. - The Equations (8) and (9) define modality-specific quantized vectors $\hat{e}_t^s, \hat{e}_g^s$, but they do not appear to be explicitly used in any other equation. Could you please clarify how they contribute to the overall optimization process? - In the token packing section, why do the authors use $\mathcal{L}_{\text{InfoNCE}} (\hat{e}_t^c, \hat{e}_t^c) + \lambda \mathcal{L}_{\text{orthogonal}}(\hat{e}_t^c, e_t^c) + \mathcal{L}_{\text{InfoNCE}}(\hat{e}_g^c, \hat{e}_g^c) + \lambda \mathcal{L}_{\text{orthogonal}}(\hat{e}_g^c, e_g^c)$ to represent $\mathcal{L}_{\text{token}}^{s}$ instead of directly using $\hat{e}_t^s, e_t^s$? Especially considering that the loss function already includes $\mathcal{L}_{\text{token}}^{c}$. 
While the definition of $\mathcal{L}_{\text{token}}^{c}$ is clear based on Wang et al.'s previous work, the reasoning for the specific formulation of $\mathcal{L}_{\text{token}}^{s}$ is less evident. Could the authors elaborate on the motivation behind choosing this particular formulation? Experimental Designs Or Analyses: - I have reviewed the experimental designs carefully. The authors thoroughly evaluate their MEDTOK approach using five tasks across two inpatient datasets and two additional tasks on the EHRShot dataset. They also comprehensively compare their proposed tokenizer with alternative tokenizers, such as the VQGraph tokenizer and a standard BERT-based tokenizer. - However, I have some concerns regarding certain experimental analyses: - In the ablation experiments, MEDTOK demonstrates strong performance. However, simply removing the text-specific embeddings $e_t^s$ or graph-specific embeddings $e_g^s$ might be insufficient to fully assess the impact of each modality, as the model still retains shared embeddings $e_t^c$ and $e_g^c$. Have you tested an ablation setting where only $e^c$ embeddings are used (i.e., removing both $e_t^s$ and $e_g^s$)? This would provide further insight into how much information is encoded in the cross-modality tokens alone. - Additionally, in the codebook size analysis, there is an unexpected performance drop on both MIMIC datasets when the codebook size is around 18,000. Could the authors provide an explanation or hypothesis for this anomaly? Specifically, this sudden drop is quite substantial, with the AUPRC dropping to nearly one-third compared to when the codebook size is around 6,000. Clarifying this would strengthen the analysis. Supplementary Material: Yes, I have thoroughly reviewed all supplementary appendices. The authors clearly define their experimental tasks, provide detailed explanations on selecting medical codes, and describe the retrieval methods employed. 
Relation To Broader Scientific Literature: - The proposed approach extends traditional vector quantization methods by explicitly dividing the codebook into modality-specific and shared regions, which helps preserve modality distinctions while improving cross-modal interactions. Essential References Not Discussed: - Line 112 left column: "Directly using the tokenizers for languages risks flattening the relationships among codes and failing to preserve the biomedical information"; please add a reference for this sentence. - Line 122 left column: "However, graph tokenizers may suffer from information loss when applied to graphs in other domains"; please add a reference. Other Strengths And Weaknesses: __Strengths:__ - The paper is clearly written, making it easy to follow and understand. - The authors validate their approach across seven tasks using three datasets. The experimental results outperform all baseline methods. The work is solid. __Weakness:__ please check the __Experimental Designs Or Analyses__ and __Theoretical Claims__ sections. Other Comments Or Suggestions: N/A Questions For Authors: - Could the authors clarify how the modality-specific embeddings for knowledge graph $e_g^s$ are constructed? While the paper briefly mentions using a linear projector, upon reviewing the provided code, it appears that the graph data might be converted directly into text and then embedded together with other medical textual descriptions. 
- Given that this is a multimodal task, could the authors provide a case study or example illustrating situations where one modality compensates for limitations in another? - Could the authors provide more specific details about how their proposed tokenizer is applied in downstream tasks? While the appendix provides a detailed description of selecting the label $y$, I am interested in how the tokenizer integrates with the inputs in these experiments. - For other questions, please check the __Experimental Designs Or Analyses__ and __Theoretical Claims__ sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### **Theoretical Claims:** Thank you for your comment. Equations 8-9 contribute to the overall optimization process by representing the optimal values of graph embeddings (Eq. 8) and text embeddings (Eq. 9), which are designed to model modality-specific and modality-shared information. To approximate these optimal values, we use a method inspired by Wang et al., 2024 ([https://openreview.net/forum?id=r7Xnetd0Pq#discussion](https://openreview.net/forum?id=r7Xnetd0Pq#discussion)), where the optimization is achieved through InfoNCE loss and orthogonal loss. The contribution of modality-specific and modality-shared information is shown in the response to **Reviewer EVKN: E2-E3.** By optimizing both shared and specific information across two modalities, the performances are improved by 3.9%, 5.7%, and 9.1% on three datasets, respectively. ### **E1:** **Please refer to the responses to Reviewer EVKN: E2-E3.** By optimizing both shared and specific information across two modalities, the performances are improved by 3.9%, 5.7%, and 9.1% on three datasets, respectively. ### **E2:** One possible explanation for the codebook usage is as follows: at 18,000 tokens, the usage was around 32%, compared to ~40% when the size was 6,000. As the codebook expands, the model may struggle to allocate representation capacity effectively, leading to less efficient learning and potential overfitting to specific token patterns. Such observations could also be observed in other tokenizer work (https://arxiv.org/abs/2308.02117, https://arxiv.org/html/2406.11837v1), which makes this a good open problem and future direction for the field. ### **References:** We add the following two references to strengthen our argument and make our explanation more convincing. *[1] Shang, Junyuan, et al. "Pre-training of graph augmented transformers for medication recommendation." IJCAI, 2019.* *[2] Xia, Lianghao, Ben Kao, and Chao Huang. "OpenGraph: Towards open graph foundation models." arXiv preprint arXiv:2403.01121, 2024.* ### **Q1:** For modality-specific embeddings in the graph, we first use a graph encoder to learn the graph embedding, followed by a projector to learn the graph-specific embedding. The graph-specific embedding is optimized separately using a loss function and is not combined with the medical textual descriptions. Please refer to the 'specific embedding' function (Line 188-218) in [https://anonymous.4open.science/r/MedTok-8DEE/vector_quantization_soft_one_new.py](https://anonymous.4open.science/r/MedTok-8DEE/vector_quantization_soft_one_new.py). ### **Q2:** Example 1: Heart Failure (I50.9) The text description for this code provides information about heart failure, its core diagnostic criteria, and primary clinical features (for example, impaired ventricular function). However, it does not include information on risk factors, complications, and treatment options–critical information that is not included in the text description but exists in the knowledge graph. Complementing this text definition with relational, graph-based information reveals related conditions such as hypertension, coronary artery disease, and diabetes; suggests treatments like beta-blockers, ACE inhibitors, and diuretics; and highlights potential complications, including pulmonary edema and kidney dysfunction. Example 2: Type 2 Diabetes Mellitus, Without Complications (E11.9) The text description captures a type 2 diabetes diagnosis along with lab values (fasting blood glucose and an HbA1c) and notes that the patient is prescribed metformin. However, it does not provide information on long-term risks or alternative treatment options. 
In contrast, the knowledge graph enriches this snapshot by linking diabetes to related conditions such as obesity, hypertension, and dyslipidemia; by highlighting potential complications like diabetic retinopathy, neuropathy, and chronic kidney disease; and by suggesting additional treatments such as insulin therapy or GLP-1 receptor agonists. In both examples, the text modality provides an immediate clinical snapshot, while the knowledge graph adds a broader context that supports comprehensive and personalized care decisions. ### **Q3:** The output of MedTok includes both tokens and their corresponding embeddings. Specifically, the token refers to the indices in the codebook, and the embedding is the corresponding vector for that index. When integrating with other models, we first obtain the tokens from MedTok and then use these tokens to query the codebook and retrieve the corresponding embeddings, which are used as input to the other models. --- *Thank you for your helpful feedback. If you feel our responses are insufficient to motivate increasing your score, we would love to hear from you further about how we can better address your questions. Thank you again!*
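The integration path described in Q3 above (MedTok emits token indices; a downstream model uses those indices to query the codebook and retrieve embedding vectors) reduces to a plain embedding-table lookup. A minimal numpy sketch, with all sizes and indices made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook_size, dim = 12000, 64  # illustrative values
codebook = rng.normal(size=(codebook_size, dim)).astype(np.float32)

def tokens_to_embeddings(token_ids, codebook):
    """Map token indices to their codebook vectors for a downstream model."""
    return codebook[np.asarray(token_ids)]

# A visit represented as a sequence of medical-code tokens (indices are made up).
visit_tokens = [17, 4032, 11999]
emb = tokens_to_embeddings(visit_tokens, codebook)
print(emb.shape)  # (3, 64)
```

The resulting `(sequence length, dim)` array is what would be fed to the downstream EHR model in place of its standard token embeddings.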
Summary: This paper introduces MEDTOK, a multimodal medical code tokenizer that combines text descriptions and relational information from medical ontologies to improve the processing of medical codes in electronic health records (EHRs). MEDTOK encodes both text and graph information into a unified token space, enhancing model performance on various clinical tasks, such as disease prediction and drug recommendation. Experimental results show MEDTOK’s efficiency across multiple datasets. Claims And Evidence: The paper shows a strong understanding of the field, with convincing motivation and model design. It highlights the limitations of traditional tokenizers and presents MEDTOK as a solution that combines text and graph encoding, supported by experimental results. However, the proposed loss functions lack ablation experimental validation to prove their effectiveness. Methods And Evaluation Criteria: The proposed method is novel, combining text and graph encoding to address the limitations of traditional medical code tokenizers. The dataset and downstream task selections are well-aligned with domain standards, ensuring the method’s relevance and applicability to real-world clinical scenarios. Task designs are critical for evaluating the model’s performance and its potential impact in healthcare applications. Theoretical Claims: The theoretical underpinnings of the proposed modules are presented clearly, with well-defined mathematical formulas that explain the process in detail. The authors effectively justify the design choices, such as the use of modality-specific and cross-modality embeddings, using formal representations to support their approach. Experimental Designs Or Analyses: To demonstrate the effectiveness of MEDTOK, the authors have selected appropriate downstream tasks and datasets that are professionally relevant. The ablation design effectively shows the impact of different modality data and the codebook size on the model's performance. 
Additionally, the MedicalQA experiment adds depth to the overall experimental design. However, in the generative tasks, the choice of LLMs and datasets is relatively limited; the authors should include more datasets to further strengthen the findings and provide a more robust evaluation of MEDTOK’s capabilities in diverse scenarios. Supplementary Material: Yes, I reviewed the supplementary material, which provides additional details on the datasets used, the training process, and the evaluation setup. These parts clarify the methodology and experimental design of MEDTOK. Relation To Broader Scientific Literature: The authors clearly position their work within the context of existing literature, highlighting the challenges faced by traditional methods in tokenizing medical codes in the patient electronic health records (EHRs) context. Essential References Not Discussed: The research covers most of the essential related works, especially in the context of tokenization for medical codes in EHRs. It discusses relevant studies on traditional tokenization methods, transformer-based models, and multimodal approaches, providing a solid foundation for its contributions. Other Strengths And Weaknesses: Strengths: 1. The authors address a practical problem in healthcare, specifically the tokenization of medical codes in EHRs, which has significant real-world relevance. 2. The proposed MEDTOK shows improvements across various downstream tasks and datasets, demonstrating its potential in real-world applications. Weaknesses: 1. Graph-specific, Graph-shared, Text-shared, and Text-specific embeddings mentioned in the method lack sufficient explanation, particularly regarding why these features are chosen and how they positively impact the results. This section left me confused. I would like the authors to include a visualization of these embedding methods and provide a clearer explanation to help readers understand how these different embedding features improve the model's performance. 
Additionally, the authors should supplement the experiments by analyzing the specific impact of different embedding methods (such as Graph-specific and Text-specific) on task results and provide both quantitative and qualitative analyses. 2. While the paper performs well on the evaluated datasets, the authors should explore how MEDTOK scales with even larger or more diverse EHR datasets. 3. Although the paper compares MEDTOK to standard tokenization methods, it should compare the results with other baseline models, including a comparison with a model where patient electronic health records are directly added to the vocabulary. 4. The paper mentions an ablation study evaluating the impact of removing the text or graph modality on model performance, but the experimental details and analysis are brief. It lacks a more detailed evaluation of different module combinations and loss functions. 5. Even though transforming multi-modal information into shared features and unique features has already been applied in many models (e.g., [1] and [2]), the use of graph features and text features from biomedical knowledge graphs in this paper is sufficiently novel, with strong motivation that addresses a critical pain point in medical multimodality. [1] Multi-modal Learning with Missing Modality via Shared-Specific Feature Modelling [2] DeCUR: Decoupling Common & Unique Representations for Multimodal Self-Supervision Other Comments Or Suggestions: n/a Questions For Authors: Please see the Weaknesses section Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:
Rebuttal:

### **Claims:**
**Please refer to the results in ‘Reviewer EVKN: E2’.** The obtained results demonstrate that both shared and specific information optimization enhance performance, with the full optimization achieving the best results across all datasets.

### **E1:**
To address your concerns, we added two new LLMs to the paper (Qwen2.5-7B and MMedLM), generated tokens using MedTok, and prefix-tuned them on the MedMCQA dataset. We then evaluated these fine-tuned models on three other established QA datasets. Results are:

| | MMLU | MedDDx | AfrimedQA |
|:----|:----|:----|:----|
| Llama3.1-8B | 0.634 | 0.424 | 0.502 |
| + MedTok | 0.664 | 0.494 | 0.540 |
| Qwen2.5-7B | 0.667 | 0.334 | 0.618 |
| + MedTok | 0.770 | 0.437 | 0.657 |
| MMedLM | 0.564 | 0.308 | 0.491 |
| + MedTok | 0.640 | 0.403 | 0.653 |

### **W1:**
Graph/text-shared information refers to the information shared between the graph and text modalities, while graph/text-specific information refers to information specific to each modality. These features are learned by maximizing the shared information across modalities and minimizing its overlap with the modality-specific information. By considering both modality-shared and modality-specific information, tokens in MedTok can distinguish medical codes that have similar names (i.e., the same text descriptions) but different relational structures in the corresponding medical code taxonomies (i.e., different graph structures). For example, consider E11.9 (Type 2 Diabetes Mellitus Without Complications) and E11.12 (Type 2 Diabetes Mellitus With Chronic Kidney Disease). These codes both refer to Type 2 Diabetes but are used for patients with different disease progression. E11.9 is linked to general diabetes management and metabolic pathways, while E11.12 is associated with kidney-specific pathways and nephroprotective drugs. MedTok's optimization strategy can use the shared and specific information between these codes, which improves modeling of these seemingly similar yet distinct codes with different clinical relevance.
To further address your concerns, we performed new experiments to quantify the impact of modality-shared and modality-specific information on MedTok's performance. For that, **please refer to the new results in ‘Reviewer EVKN: E2’.**

### **W2:**
We evaluated MedTok using three EHR datasets that vary considerably in scale and scope.
* EHRShot is a longitudinal dataset containing **41.6M observations** from **921K visits** across **6,739 patients** (1909–2023). Patients have an average timeline of **59 years** (max **88**) and **136 visits** (max **2,397**).
* MIMIC IV includes **185.8M observations**, **546K visits**, and **364K patients**. Patients have **1.5 years** of data on average (max **14.7**) with **2.4 visits** (max **238**).
* MIMIC III contains **32.9M observations**, **58,976 visits**, and **46,520 patients**. The average record spans **4 months** (max **11.5 years**) with **1.3 visits** (max **42**).

Our results across these three datasets demonstrate strong performance, efficiency, and robustness of MedTok across a wide variety of clinical environments (outcome and diagnostic prediction tasks), both in-patient and out-patient contexts, both acute and chronic medical conditions, and patients with varying volumes of clinical data (small vs. large EHR records).

### **W3:**
That is an excellent suggestion. All five baselines we adopted are models that directly incorporate patient electronic health records into their vocabulary. We compared the performance of these baselines with and without our MedTok tokenizer. Results show that integrating MedTok improves the performance of these baseline models by 3.29%, 2.67%, and 5.01% across the three datasets relative to using the baseline models with standard tokenization methods.

### **W4:**
We have added ablation studies on different module combinations and loss functions to examine the utility of each module and loss function component.
**Please refer to the response to Reviewer EVKN: E2-E3.** Results show that the performance of full MedTok increases by 3.9%, 5.7%, and 9.1% across the three datasets relative to the simplified version of MedTok with the modality-shared and modality-specific modules turned off.

### **W5:**
We appreciate your acknowledging the importance of considering both graph and text features for developing a comprehensive tokenizer of medical codes. As we illustrate in our response to W1 and as you nicely point out, multimodality is particularly relevant to model codes that are similar across both graph-text modalities as well as codes that are similar in one modality but not in the other.

---

*Thank you again for your thoughtful commentary. If you feel our responses are insufficient to motivate increasing your score, we would love to hear from you further about how we can better address your concerns. Thank you again!*
Summary: In this paper, the authors present MEDTOK, a multimodal medical code tokenizer that integrates textual descriptions of medical codes with graph-based relational contextual information. The proposal employs separate encoders to process each modality, and the resulting representations are mapped into a shared space through vector quantization, ensuring the preservation of both modality-specific and modality-shared information. The authors evaluate MEDTOK on three electronic health record (EHR) datasets using five different backbone models to assess its effectiveness.

## update after rebuttal
I appreciate the authors' responses to my concerns, particularly the newly supplemented results. Accordingly, I have updated my original assessment.

Claims And Evidence: The proposed MEDTOK functions as a broadly defined "tokenizer" rather than a conventional one. Specifically, its tokenization process integrates both tokenization and embedding through dedicated encoders. Given this design, the extensive body of research on medical code embeddings should be discussed in the related work section and incorporated as baselines in the experimental evaluation to provide a more comprehensive comparison. Furthermore, MEDTOK primarily adapts and integrates existing techniques, including the designed encoders, vector quantization, and the modality fusion mechanism inspired by Wang et al. (2024a). As a result, the degree of novelty in the proposal is limited. Additionally, prior studies have extensively leveraged graph-based knowledge information to enhance EHR data analytics, including [a, b, c], among others. The authors should explicitly articulate how MEDTOK differentiates itself from these works in terms of methodology, contributions, and empirical performance. A thorough comparative analysis, both conceptually and through experimental validation, would strengthen the claims of the paper.

[a] Choi, Edward, et al. "GRAM: graph-based attention model for healthcare representation learning." Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining. 2017.
[b] Shang, Junyuan, et al. "Pre-training of graph augmented transformers for medication recommendation." IJCAI. 2019.
[c] Burger, Manuel, Gunnar Rätsch, and Rita Kuznetsova. "Multi-modal graph learning over UMLS knowledge graphs." Machine Learning for Health (ML4H). PMLR, 2023.

Methods And Evaluation Criteria: The selection of datasets for evaluation, specifically MIMIC-III, MIMIC-IV, and EHRShot, is well-justified given their relevance to the problem under investigation.

Theoretical Claims: I have reviewed the theoretical claims in detail.

Experimental Designs Or Analyses: The evaluation should include existing methods for both medical code embedding and graph-based knowledge exploitation in EHR data as baselines to ensure a more comprehensive comparison. Several aspects of the experimental evaluation require further discussion or additional experiments to strengthen the findings:
(i) In both the comparison between MEDTOK and the baselines, as well as in the ablation study, only the backbone TransformEHR is used for evaluation. Is TransformEHR the best-performing backbone model? The rationale behind this choice should be explicitly justified.
(ii) Additional ablation studies should be conducted to further analyze the contribution of individual components within MEDTOK.
(iii) The influence of critical hyperparameters, such as $\beta$ and $\lambda$ in the loss functions, should be investigated to assess their impact on the performance of MEDTOK.
(iv) Given the focus on EHR data analytics, it would be valuable to provide interpretable findings and medical validation to demonstrate how the proposed MEDTOK can benefit healthcare practitioners in real-world applications.

Supplementary Material: I have gone through the released code available through the provided link.
Relation To Broader Scientific Literature: The proposed MEDTOK builds on prior research in multimodal learning for EHR analytics by leveraging the intra-modality and inter-modality relationships to improve predictive performance. MEDTOK extends these ideas by introducing a multimodal tokenization component for encoding different modalities separately and a token packing mechanism to integrate complementary information. This structured approach enhances analytic performance, contributing to the advancement of EHR-based predictive modeling.

Essential References Not Discussed: The paper should discuss related works on medical code embedding and graph-based knowledge exploitation in EHR data (such as [a, b, c]).
[a] Choi, Edward, et al. "GRAM: graph-based attention model for healthcare representation learning." Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining. 2017.
[b] Shang, Junyuan, et al. "Pre-training of graph augmented transformers for medication recommendation." IJCAI. 2019.
[c] Burger, Manuel, Gunnar Rätsch, and Rita Kuznetsova. "Multi-modal graph learning over UMLS knowledge graphs." Machine Learning for Health (ML4H). PMLR, 2023.

Other Strengths And Weaknesses:
**Other Strengths:** The experimental evaluation encompasses a comparison with two baseline methods across three EHR datasets using five different backbone models, an ablation study on modalities, a hyperparameter sensitivity analysis, and a case study on medical Q&A.
**Other Weaknesses:** The clarity of the paper could be improved by addressing the following points:
(i) The evaluation setup does not display the 24 phenotypes used for in-patient evaluation.
(ii) In the drug recommendation task for in-patient evaluation, the justification for restricting the recommendation scope to five specific drug candidates should be clearly articulated.
(iii) In Figure 3, for the results on the EHRShot dataset, it is unclear which two tasks out of the seven are selected for demonstration.
(iv) In Appendix B.3, the rationale for conducting inference on sampled datasets rather than the entire test set for certain tasks on the EHRShot dataset should be explicitly explained.

Other Comments Or Suggestions: In Section 4.2, "HRShot" should read "EHRShot".

Questions For Authors: Please refer to my detailed comments and suggestions outlined above.

Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal:

### **Claims:**
We added the following discussion to related work: *"Rather than treating medical codes in isolation, some methods incorporate additional knowledge to enhance their representation using structures like knowledge graphs (Choi et al., 2017; Burger et al., 2023) or ontology trees (Shang et al., 2019). These methods build relationships between medical codes, improving performance on EHR tasks."*

MedTok differs from existing methods:
* MedTok is a **tokenizer**, converting raw input into **discrete tokens** for transformer models. It doesn't directly learn high-dimensional embeddings for downstream tasks but maps **inputs to a fixed set of tokens**.
* MedTok maintains a codebook during training, unlike traditional representation learning methods that maintain a set of medical code embeddings.

To address concerns, we analyzed alternative methods and integrated MMUGL (Burger et al., 2023) as MedTok's encoder, training it with a vector quantization loss. We compared MMUGL-driven MedTok with Transformer-based models on EHR tasks. GRAM (Choi et al., 2017) was excluded due to its dependence on EHR settings, and G-Bert (Shang et al., 2019) only considers medication and diagnostic codes, which do not align with our setting. Results are:

| | MIMIC III | MIMIC IV | EHRShot |
|:----|:----|:----|:----|
| MMUGL-MedTok | 0.370 | 0.305 | 0.361 |
| MedTok | 0.412 (+4.2%) | 0.444 (+13.9%) | 0.378 (+1.7%) |

### **E1:**
MedTok can be used with any transformer-based EHR predictive model. Our benchmarking included comparisons with five such models. TransformEHR is the best-performing backbone across the 4 tasks, and thus it was chosen.

### **E2:**
MedTok uses two modalities, and the results show optimal performance when using both modalities (Fig. 4).
We additionally conduct ablation studies, demonstrating that shared and specific information optimization enhances performance:

| | MIMIC III | MIMIC IV | EHRShot |
|:----|:----|:----|:----|
| VQ | 0.373 | 0.387 | 0.287 |
| VQ + shared | 0.379 | 0.409 | 0.314 |
| VQ + specific | 0.382 | 0.402 | 0.366 |
| VQ + shared + specific | 0.412 | 0.444 | 0.378 |

### **E3:**
Following your advice, we examine the impact of hyperparameters on MedTok's performance. To make MedTok consider shared and specific information equally, we set λ = β, where λ is the weight for the shared-information loss and β for the specific-information loss. Results are:

| λ = β | MIMIC III | MIMIC IV | EHRShot |
|:----|:----|:----|:----|
| 0.01 | 0.356 | 0.376 | 0.425 |
| 0.1 | 0.412 | 0.444 | 0.378 |
| 0.2 | 0.344 | 0.388 | 0.381 |
| 0.3 | 0.330 | 0.357 | 0.418 |
| 0.4 | 0.382 | 0.327 | 0.409 |
| 0.5 | 0.403 | 0.401 | 0.404 |

### **E4:**
To this end, we selected a subset of patients classified as high risk for Hyperlipidemia by MedTok + TransformEHR, where these patients had no records of Hyperlipidemia before. We then counted the tokens assigned to these patients and identified those appearing more than 100 times (https://anonymous.4open.science/r/MedTok-8DEE/E4.png). We then mapped these frequent tokens to medical codes, with the most overlapping codes being Rosuvastatin 5 mg Oral Tablet (RxNorm 2669980), Burn of skin (SNOMED CT 147087003), Type 2 diabetes mellitus without complication (disorder) (SNOMED CT 373555004), and Hyperlipidemia (SNOMED CT 285605009). They are closely related to Hyperlipidemia: Rosuvastatin corresponds to a medication commonly prescribed for lipid disorders, while the other three medical codes represent clinical diagnoses or findings associated with hyperlipidemia-related cardiovascular risk. This suggests that MedTok effectively captures key medical concepts related to Hyperlipidemia, supporting its predictive capability.

### **W1:**
We identify 24 phenotypes following (Harutyunyan et al., 2019). Please refer to Table 2 in https://doi.org/10.1038/s41597-019-0103-9 for details.
### **W2:**
We limited the scope to five drugs to assess MedTok on well-defined, clinically relevant drug recommendations across diverse therapies. Each selected drug was prescribed to ~20–30% of patients, underscoring their relevance in clinical decision-making. Moreover, the selection spans a broad range of categories: antibiotics (Vancomycin and Levofloxacin), an anticoagulant (Heparin Sodium), a beta-blocker (Metoprolol), and a lipid-lowering agent (Atorvastatin). This range ensures that the evaluation covers multiple clinical scenarios, while focusing on conditions readily identifiable from patient data.

### **W3:**
We follow https://arxiv.org/abs/2307.02028 in categorizing the seven tasks. Operational Outcomes (OO) includes length of stay, readmission, and mortality prediction, while Assignment of New Diagnoses (ND) includes four new disease diagnoses.

### **W4:**
Since EHRShot is a longitudinal dataset with 921,499 visits and per-visit readmission predictions, we use stratified sampling to reduce redundancy while ensuring consistency with the ETHOS training setting.

### **Comments:**
We have revised it to ‘EHRShot’.

---

We appreciate your suggestions for improvement. Please reach out with any questions or for further clarification. Thank you!

---

Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their rebuttal, especially for the newly added experimental results. In light of this, I am pleased to raise my rating accordingly.

---

Reply to Comment 1.1.1:
Comment: Dear Reviewer EVKN,
We are especially grateful for your recognition of our contributions and for the increased score. We sincerely appreciate your insightful questions and valuable suggestions. They are extremely useful and make MedTok more comprehensive and solid. Thank you again for your support, and we deeply appreciate your expertise!
Authors
Noise Conditional Variational Score Distillation
Accept (poster)
Summary: This paper proposes a novel method for distilling diffusion models into a generative image denoiser at any noise level. The proposed method is based on a theoretical result showing that the unconditional score function implicitly characterizes the score function of the denoising posterior distributions at all noise levels. The method further incorporates adversarial training to overcome the performance upper bound imposed by the teacher diffusion model (the one being distilled).

Claims And Evidence:
1. The authors claim in the abstract: "We evaluate NCVSD through extensive experiments, including class conditional image generation and inverse problem solving." But I don't find the experiments to be "extensive" at all. For image generation, the authors only consider ImageNet 64x64 and 512x512, and for inverse problems they consider FFHQ 256x256 and distill zero-shot posterior sampling methods. I expected to see additional datasets for both image generation and inverse problems.
2. The authors claim in the abstract: "our method outperforms teacher diffusion models and is on par with consistency models of larger sizes." While this may seem true from Table 1, only FID is evaluated (which is a highly problematic measure of generative models [1]), and I can't find a visual comparison (both quality and variation) between the methods, not even in the appendix. Measures like FID only serve as indicators of performance, while visual examples matter the most.

[1] Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models. George Stein et al. NeurIPS 2023.

Methods And Evaluation Criteria: Some evaluation criteria are problematic.
1. For example, the authors claim in the abstract: "We achieve record breaking LPIPS on inverse problems." But LPIPS is only one distortion measure. What is it about LPIPS that makes it such a desirable measure to optimize? The authors don't explain this.
2.
In L369 (right) the authors claim: "We observed that the PSNR performance of our method does not achieve the best performance as LPIPS, which can be attributed to the distortion-perception trade-off (Blau & Michaeli, 2018), indicating that our method tends to produce results that is closed to the posterior samples rather than the mean of all possible solutions." To my understanding, there is no tradeoff between PSNR and LPIPS: if the PSNR is equal to infinity (namely, MSE = 0), then the LPIPS is equal to 0 (minimal, i.e., optimal). Both PSNR and LPIPS are distortion measures. The perception-distortion tradeoff is a tradeoff between any distortion measure (e.g., PSNR, LPIPS, SSIM) and the statistical distance between the distributions $p_X$ and $p_{\hat{X}}$, not a tradeoff between PSNR and LPIPS.
3. The authors say in L369 (right) that achieving lower LPIPS indicates that their method produces samples that are "closer" to posterior samples. I don't understand this claim. There is no link between achieving lower LPIPS and producing results that are closer to posterior samples. I find this argument to be wrong and misleading.
4. I think that evaluating generative methods only with FID is insufficient. I'd expect additional evaluations, such as KID, FD in the feature space of self-supervised methods [1], etc.

[1] Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models. George Stein et al. NeurIPS 2023.

Theoretical Claims: The math and theoretical claims in the paper seem okay to me.

Experimental Designs Or Analyses: No code is attached, so I couldn't verify the experiments. But the paper describes the experimental settings quite well.

Supplementary Material: I reviewed all the supplementary materials.

Relation To Broader Scientific Literature: The main contribution of this paper is a distillation method for diffusion models, allowing few-step image generation.
This task has been addressed in prior work, such as by consistency models (which the authors compare with).

Essential References Not Discussed: I can't think of any essential references which are not discussed.

Other Strengths And Weaknesses:
Strengths:
1. The paper is overall well written and easy to follow.
2. I find the idea of distilling the diffusion model at different noise levels to be interesting and useful (e.g., for solving inverse problems).

Weaknesses: The experimental results are not very convincing:
1. The method is evaluated on limited datasets (ImageNet for image generation, and FFHQ for inverse problems).
2. There are no visual comparisons at all with other methods in either image generation or inverse problems. Visual results and comparisons are more important than quantitative results, since the paper is dealing with image processing.
3. The quantitative results on inverse problems seem inferior: the method achieves better LPIPS but worse PSNR, and it's not clear to me why LPIPS is necessarily more appropriate or more important than PSNR. If optimizing LPIPS is that important, why do we need a diffusion model or to distill one? Why not just train a model to minimize LPIPS?
4. Important quantitative evaluations are missing, e.g., evaluating image generation with additional measures such as KID, Precision and Recall, and evaluating the perceptual quality of image restoration algorithms with divergences (e.g., FID) and no-reference quality measures (e.g., NIQE). Considering additional distortion measures such as SSIM could also show better whether the proposed method is truly superior or not.

Other Comments Or Suggestions: I recommend adding a figure depicting the method, both training and inference, to improve the paper's readability and clarity even further.

Questions For Authors: I have no particular questions for the authors, but they are welcome to address the weaknesses.

Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1:
Rebuttal:

*W1: ... additional datasets ...* Please refer to W7.

*W2: ... visual comparison ...* Please refer to W8.

*W3: For example ...* Please refer to W9.

*W4: In L369 ... & W5: The authors say in L369 ...*
We respectfully disagree. The tradeoff between PSNR and LPIPS has been extensively demonstrated in prior works. For instance:
- Figure 14 in DAPS and Figure 5 in RED-diff (Mardani et al., 2024) illustrate this tradeoff.
- Section C.6 in DPS explicitly discusses this phenomenon.

Generative image restoration methods often prioritize perceptual quality, which emphasizes preserving high-frequency details. This focus can lead to lower PSNR, as PSNR penalizes deviations from the mean / MMSE solution. However, these deviations often result in better perceptual quality, as captured by metrics like LPIPS. The claim that our method produces results "closer" to posterior samples refers to its tendency to avoid regressing to the mean solution that maximizes PSNR. To address the reviewer's concern, we will revise the statement as follows: *"Our method tends to generate results that retain more high-frequency details rather than approximate the mean of all possible solutions. This approach typically leads to higher MSE (lower PSNR) but aligns more closely with perceptual quality metrics such as LPIPS."*

*W6: I think that evaluating ...*
To the best of our knowledge, related works such as CD, ECM, EDM2, and sCM primarily use FID as the main evaluation metric, while KID is not commonly reported. Additionally, FD is typically considered an auxiliary metric and is often included in the Appendix. To address the reviewer's concern, we provide FD_DINOv2 below:

*Table: FD_DINOv2 on ImageNet-512x512.*

| Method | NFE | FD_DINOv2 |
|-|-|-|
| sCD-M | 2 | 55.70 |
| sCD-L | 2 | 50.63 |
| NCVSD-M (Ours) | 2 | 59.14 |
| | 4 | 48.83 |

FD_DINOv2 exhibits trends similar to FID: scaling test-time compute enables our method to match the performance of larger-sized sCM models.
*W7: The method is evaluated ...*
For image generation, we follow EDM2, focusing on ImageNet-64x64 and ImageNet-512x512; pretrained EDM2 models for other datasets are unavailable. Moreover, ImageNet-64x64 and ImageNet-512x512 serve as challenging benchmarks for the image and latent domains, respectively. So we believe they sufficiently demonstrate our method's effectiveness and generalizability. For inverse problem solving, pretraining an EDM2 model on ImageNet-256×256 is computationally costly. For example, training an S-size model at 64×64 resolution takes over 5 days on 32 A100 GPUs (Figure 11(a), EDM2). Additionally, the baseline PnP-DM focuses solely on the FFHQ dataset. While our current results already demonstrate the effectiveness of our method, we have also started running experiments on ImageNet-256, and we expect to report preliminary results by the discussion stage.

*W8: There are no visual ...*
We thank the reviewer for the suggestion. For image generation, visual samples are in Appendix D, with the classes aligned to those used in sCM, enabling visual comparisons. For inverse problems, we include additional visual comparisons (https://anonymous.4open.science/r/ncvsd-D396) in the revised version. These visual examples further demonstrate that our method produces much better perceptual quality and preserves more high-frequency details compared to the baseline.

*W9: The quantitative results ...*
We emphasize that achieving SOTA on inverse problems is not the primary goal of this paper, nor do we claim that LPIPS is more important than PSNR. Instead, inverse problem solving serves as a proof-of-concept to showcase the plug-and-play probabilistic inference capabilities of our method. Moreover, we already demonstrate competitive performance against well-established diffusion-based methods with significantly fewer NFEs. In contrast, prior works like CM (Song et al., 2023) on image editing provide only visual results without quantitative comparisons.
The focus should be on the novelty and conceptual contributions of our approach rather than solely on performance metrics.

*W10: Important quantitative ...*
We standardize on FID for comparing different methods, following EDM2. This metric is consistently reported in baseline methods, making it a convenient choice for benchmarking. In contrast, metrics like Precision or Recall are either not reported, e.g., in ECM, or reported only to indicate performance tradeoffs, e.g., in EDM2 and sCM. To address the reviewer's concern, we include additional Precision and Recall results in the table below. For additional metrics for inverse problems, please refer to our response to W3 from Reviewer 7Tng.

*Table: Precision and Recall on ImageNet-64x64.*

| Method | NFE | Precision | Recall |
|-|-|-|-|
| CD | 1 | 0.68 | 0.63 |
| | 2 | 0.69 | 0.64 |
| NCVSD-M (Ours) | 1 | 0.72 | 0.60 |
| | 2 | 0.73 | 0.62 |
| | 4 | 0.74 | 0.62 |

Precision is superior, reflecting the better quality of generated samples, while the slightly lower recall score reflects the mode-seeking behaviour of reverse KL minimization.

---

Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal.
- Regarding the PSNR-LPIPS tradeoff: I don't understand your point. These are just two different objective functions. Namely, in some tasks we would care more about PSNR than LPIPS, and perhaps vice versa (although I am not sure when we would care about LPIPS at all, and why). Even then, the perception-distortion tradeoff holds for ANY distortion measure, including LPIPS (Blau & Michaeli, 2018). This was one of the surprising things about the perception-distortion paper. So why does optimizing LPIPS (or achieving lower LPIPS scores) indicate better perceptual quality?
- Additional metrics: The fact that some metric, such as precision and recall, IS NOT commonly evaluated doesn't imply that it shouldn't be evaluated.
Similarly, the fact that some metric, such as FID, IS commonly evaluated doesn't imply that we should take this metric as the holy grail and try to optimize it. Our ways to measure performance in machine learning are still evolving, and we should definitely not stick only to FID for image generation (this measure is highly problematic). Thus, I suggested adding additional (newer) measures to further support the evidence and claims in the paper.
- Regarding visual comparisons with generative models: I believe the authors should include visual comparisons with other methods in their paper (or appendices), rather than referring the reader to open another paper and search for the class-corresponding images.
- Regarding visual comparisons for inverse problems: the attached figure does not include a comparison with all evaluated methods, but rather only with PnP-DM. Moreover, it seems to me that the results for PnP-DM are wrong, as they are overly blurry. Did you maybe present the mean of 20 images (similarly to how PnP-DM reports its metrics)?

I like the novel ideas in this paper, but I still think the evaluations are limiting and the results are not particularly convincing.

---

Reply to Comment 1.1.1:
Comment: *W12: Regarding the PSNR-LPIPS tradeoff ...*
The evaluation protocol for inverse problems follows prior works in using PSNR as the distortion metric and LPIPS as the perceptual metric (DPS, DAPS, PnP-DM, ...). We do not deliberately optimize for LPIPS but only use it as one performance measure. We highlight that the main contribution of our paper to inverse problem solving is to develop a method that addresses trade-offs among flexibility, posterior exactness, and computational efficiency. Our approach provides the flexibility to address a range of inverse problems, achieves asymptotic posterior exactness with SGS, and addresses the inefficiency of PnP-DM by using a one-step method in place of the expensive reverse diffusion simulation.
Regarding why lower LPIPS scores indicate better perceptual quality, we provide two pieces of evidence that may help to support this claim:
- Evidence from large-scale experiments in the LPIPS paper (Zhang et al., 2018) suggests that LPIPS matches human perceptual judgments better than traditional metrics like PSNR or SSIM. It is also easy to construct counter-examples that have good PSNR and SSIM scores but do not align well with human perceptual judgments (Figure 1 in the LPIPS paper).
- The blurry results caused by using the mean of 20 samples of PnP-DM (as noticed by the reviewer in W15). These results are not wrong results but are the solutions that solely optimize for PSNR, as they approximate the MMSE solution, which is optimal for PSNR. They outperform us in terms of PSNR, but clearly suffer from worse perceptual quality, which is indeed reflected by worse LPIPS scores.

Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 586-595).

*W13: Additional metrics ...*
The FID metric may not be the only thing to pursue, and clearly better metrics will emerge with the development of the field, but this does not imply that using the widely-adopted evaluation protocols of prior works (CM, ECM, sCM, EDM2, ...) is limiting or not convincing. Besides, to address the reviewer's concern, we have included metrics like Precision and Recall (see W10) and FD (see W6). Moreover, we want to emphasize again that the main contribution of this paper is to develop a conceptually novel generative model that overcomes the test-time inefficiency of diffusion models without sacrificing test-time flexibility, together with a scalable training algorithm suitable for large-scale, high-resolution datasets.
Benchmarking image generation or inverse problem-solving performance is not the main focus of this work, although our results are also competitive as measured by widely adopted metrics such as FID and LPIPS. *W14: Regarding visual comparisons with generative models ...* We followed the standard writing style in this field (e.g., CM, ECM, sCM, EDM2), where only visual examples of the proposed method are included in the paper, without examples from baseline methods. However, we understand the reviewer's concern and have included visual comparisons with EDM2 at (https://anonymous.4open.science/r/ncvsd-D396). Unfortunately, including visual examples from sCM is not feasible, as sCM is not open-sourced. *W15: Regarding visual comparisons for inverse problems ...* The visual examples of PnP-DM are presented as the mean over 20 samples. These results are not wrong; they are solutions that approximate the **MMSE solution (which optimizes PSNR)**. These blurry results further demonstrate that conventional metrics like PSNR do not align well with human perceptual quality, unlike LPIPS. To address the reviewer's concern, we have further included additional visual results for PnP-DM (single sample), DiffPIR, DWT-Var, and DAPS at (https://anonymous.4open.science/r/ncvsd-D396). As can be seen, our method achieves better perceptual quality whether PnP-DM uses the mean sample or not. These visual comparisons further demonstrate that our method reconstructs fine details of the image more faithfully than the baselines.
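The point that averaging 20 posterior samples approximates the blurry MMSE solution can be illustrated with a tiny numpy sketch (a stylized one-pixel bimodal posterior, not data from the paper):

```python
import numpy as np

# Stylized posterior over a single pixel of an "edge": the clean value
# is either +1 or -1 with equal probability. Twenty posterior samples:
samples = np.array([1.0] * 11 + [-1.0] * 9)

# Averaging the samples approximates the MMSE estimate (posterior mean);
# it falls between the two modes, i.e., it is not a plausible sample itself.
mmse = samples.mean()  # 0.1

# Expected squared error against the true bimodal signal:
mse_of_mean = 0.5 * (mmse - 1) ** 2 + 0.5 * (mmse + 1) ** 2  # = mmse**2 + 1
mse_of_sample = 0.5 * 0 + 0.5 * 4  # a sample is either exact or off by 2

assert mse_of_mean < mse_of_sample  # the mean is optimal for MSE/PSNR...
assert abs(mmse) < 1                # ...yet sits between the modes: "blurry"
```

The mean wins on PSNR by construction, yet it is not a sample from the posterior; a perceptual metric such as LPIPS is what penalizes this blurriness.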
Summary: This paper introduces Noise Conditional Variational Score Distillation, which distills a pre-trained diffusion model into a generative denoiser. The generative denoiser enables fast one-step generation while preserving the ability for iterative refinement. Experiments on image generation tasks and various inverse problems demonstrate the effectiveness of the proposed method. **update after rebuttal** Thank you for your rebuttal. I will keep my score unchanged and remain positive about this paper. Claims And Evidence: Please refer to Strengths And Weaknesses. Methods And Evaluation Criteria: Please refer to Strengths And Weaknesses. Theoretical Claims: Please refer to Strengths And Weaknesses. Experimental Designs Or Analyses: Please refer to Strengths And Weaknesses. Supplementary Material: Yes Relation To Broader Scientific Literature: Please refer to Strengths And Weaknesses. Essential References Not Discussed: Please refer to Strengths And Weaknesses. Other Strengths And Weaknesses: **Strengths** 1. The paper is well-structured, and the overall storytelling is good, making it easy to follow. 2. The idea of distilling a pre-trained diffusion model into a generative denoiser is straightforward. 3. The quantitative and qualitative results appear promising. **Weaknesses** 1. After reviewing this paper, I have a question regarding the fundamental motivation mentioned in the introduction (L24–L26). What is the key difference between a diffusion model with iterative refinement capabilities and the proposed generative denoiser? I understand the advantage of one/few-step sampling methods, but does extending them to a multi-step process truly constitute a meaningful idea? In other words, can we just apply certain techniques to shorten the sampling steps of a standard diffusion model? 2. Regarding the experimental section, I actually haven't done much work on "diffusion for restoration", so I am particularly curious about the test set selection process.
It seems that you did not use the full test sets but instead selected a subset. Could you provide more details on this? Additionally, I would like to see results on standard benchmark test sets (e.g., SR ×4 results on DIV2K and Flickr2K). 3. I think SSIM also makes sense. Including SSIM results in Table 2 would enhance the persuasiveness of the paper. Other Comments Or Suggestions: Please refer to Strengths And Weaknesses. Questions For Authors: Please refer to Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *W1: After reviewing this paper ...* Conceptually, the key distinction between diffusion models with iterative refinement and our approach lies in how clean data $x\_0$ is predicted from its noisy counterpart $y\_{\sigma} \sim \mathcal{N}(x\_0, \sigma^2 I)$. Diffusion models primarily focus on learning the MMSE prediction, whereas generative denoisers aim to model the full posterior distribution over $x\_0$. Regarding the reviewer's concern about whether certain acceleration techniques for diffusion models could achieve goals similar to those of our proposed method, we argue that significant gaps remain where our method demonstrates clear advantages: - **Image Generation**: While acceleration techniques for diffusion models, such as DDIM (Song et al., 2020) or advanced numerical integrators (Lu et al., 2022; Karras et al., 2022), can reduce the required NFEs from 1k (original DDPM) to 10–100, the generative quality degrades significantly when NFEs drop below 10. In contrast, our method achieves state-of-the-art results with just 1–4 NFEs, with performance at 4 NFEs even surpassing that of the teacher diffusion model. Moreover, generative denoisers can theoretically match the data distribution using only 1 NFE, whereas diffusion models require an infinite number of time steps to achieve the same. - **Inverse Problem Solving**: Current diffusion-based methods either suffer from irreducible approximation errors by using Dirac (Chung et al., 2022) or Gaussian (Song et al., 2023; Peng et al., 2024) approximations for the denoising posterior, or achieve asymptotically exact sampling at the cost of expensive reverse diffusion simulations (Wu et al., 2024). The former lacks theoretical guarantees, while the latter requires a significant number of NFEs (e.g., 2483 for PnP-DM) and remains affected by discretization errors.
In contrast, our method achieves superior results with 20x fewer NFEs, offering both computational efficiency and theoretical robustness by avoiding prior-step errors beyond those introduced by imperfect model training. *W2: Regarding the experimental ...* We evaluate our method on a subset (100 images) of the FFHQ dataset, following the standard practice established in diffusion-based restoration methods, e.g., DiffPIR (Zhu et al., 2023), DAPS (Zhang et al., 2024), and PnP-DM (Wu et al., 2024), among others. Moreover, testing on the full validation set is computationally costly for methods like DPS, DAPS, and PnP-DM (over 1000 NFEs). This alignment also allows us to make a fair comparison to existing methods under standard evaluation protocols. For super-resolution tasks, both our method and the baseline diffusion models are trained and evaluated on 256×256 resolution datasets such as FFHQ, following common practice in recent literature. Since existing approaches (including EDM2) have not been trained on 2K-resolution data, we do not include evaluations on high-resolution benchmarks like DIV2K or Flickr2K. Nevertheless, our results are consistent with prior work and effectively demonstrate the strength of our approach. Extending the framework to higher-resolution settings remains a promising direction for future research. *W3: I think SSIM ...* We thank the reviewer for the suggestion. The SSIM performance is provided in the table below. *Table.
SSIM performance of inverse problem solving on FFHQ dataset.*

|Method|NFE|Inpaint (box)|Deblur (Gaussian)|Deblur (motion)|Super resolution|Phase retrieval|
|-|-|-|-|-|-|-|
| DDRM | 100 | 0.801 | 0.732 | 0.512 | 0.782 | N/A |
| DPS | 1000 | 0.792 | 0.764 | 0.801 | 0.753 | 0.441 |
| PiGDM | 99 | 0.663 | 0.720 | 0.733 | 0.720 | N/A |
| DWT-Var | 99 | 0.796 | 0.795 | 0.798 | 0.802 | N/A |
| DAPS | 1000 | 0.814 | 0.817 | 0.847 | 0.818 | 0.851 |
| PnP-DM | 2483 | N/A | 0.780 | 0.795 | 0.787 | 0.628 |
| PnP-GD (Ours) | 50 | $\mathbf{0.814}$ | 0.777 | $\underline{0.801}$ | $\underline{0.805}$ | $\underline{0.797}$ |

Similar to PSNR, our method demonstrates competitive performance, being outperformed only by DAPS. However, DAPS achieves this by employing significantly more NFEs and extensive hyperparameter tuning. In contrast, PnP-DM, the most relevant baseline, delivers inferior performance despite utilizing more NFEs than our approach. This table will be included in the revised version to provide a more comprehensive evaluation. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I will keep my score unchanged and remain positive about this paper.
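As an aside, the MMSE-versus-posterior distinction raised in W1 above can be checked in closed form for a 1-D Gaussian toy model (an illustrative assumption with hypothetical numbers, not the paper's setting): if $x\_0 \sim \mathcal{N}(\mu\_0, \tau^2)$ and $y = x\_0 + \sigma n$, the denoising posterior is Gaussian, so the MMSE prediction is just its mean, while a posterior sampler also carries its variance.

```python
# Toy check (assumed 1-D Gaussian prior; hypothetical numbers):
# x0 ~ N(mu0, tau^2), observation y = x0 + sigma * n with n ~ N(0, 1).
mu0, tau, sigma, y = 0.0, 1.0, 0.5, 2.0

# Conjugate-Gaussian denoising posterior p(x0 | y):
post_mean = (tau**2 * y + sigma**2 * mu0) / (tau**2 + sigma**2)  # MMSE estimate
post_var = (tau**2 * sigma**2) / (tau**2 + sigma**2)             # retained by a sampler

assert abs(post_mean - 1.6) < 1e-12  # y is shrunk toward the prior mean
assert post_var > 0.0                # uncertainty the MMSE point estimate discards
```

A diffusion model's denoiser regresses to `post_mean`; a generative denoiser in the sense argued in this rebuttal would instead draw samples from the whole $\mathcal{N}(\text{post\_mean}, \text{post\_var})$.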
Summary: This paper proposes a new distillation scheme to learn a few-step posterior sampler of a diffusion process. Unlike existing methods that achieve rich posterior sampling using exhaustive function evaluations (e.g., diffusion) or invertible neural networks (e.g., normalizing flows), the proposed method achieves high-fidelity stochastic sampling from the distribution of clean samples given noisy samples with significantly less inference compute. Experiments show that the proposed method is competitive with existing few-step samplers at less than a quarter of the training budget; furthermore, using a split Gibbs sampling technique similar to PnP-DM, the proposed method outperforms all baselines on several image inverse problem tasks. ## update after rebuttal I thank the author(s) for their responses during the rebuttal period. With the changes promised and results provided during the rebuttal phase, I increased my score towards acceptance. Claims And Evidence: Most of the claims are supported by evidence. Below are two claims that should be supported further: 1. *Choice of adaptive step size:* It is not obvious why the function is $(\beta^{-1}L + \sigma^{-2})$-gradient Lipschitz. A proof or simple derivation of this would be helpful. 2. *Optimality of existing posterior sampling methods:* The authors claim that existing amortized posterior sampling schemes do not necessarily result in samples from the posterior at convergence. Is this true in the non-parametric limit as well, or is this claim based on empirical evidence justified by estimation/approximation errors? It would be useful to have some proof or an elaborate technical argument about the convergence properties of the proposed method in comparison to existing methods. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are satisfactory. Theoretical Claims: Yes, I checked all the proofs and they appear to be correct.
Experimental Designs Or Analyses: Yes, the experimental design and analysis for both generation and inverse problems are satisfactory. Supplementary Material: N/A Relation To Broader Scientific Literature: The key contributions in this paper could be relevant to several different areas of scientific research. 1. *Synthetic Data Generation and Data Augmentation:* The ability to produce high-quality samples from the data distribution could aid in synthetic data generation or data augmentation, especially in data-scarce settings. 2. *Efficient Inverse Problem Solvers*: In many existing score-based inverse problem solvers, sampling from the posterior is crucial. However, this has generally been difficult, and instead approximations leveraging the conditional expectation are made. The contributions in this paper regarding efficient posterior sampling could help enable fast inverse solvers across a variety of modalities. Essential References Not Discussed: The authors have cited relevant work in the paper. Several new posterior sampling methods based on engression and scoring rules have been introduced recently. I have added one as a reference below, but the authors are not expected to discuss or compare against this method as it is considered concurrent work. [1] De Bortoli, V., Galashov, A., Guntupalli, J. S., Zhou, G., Murphy, K., Gretton, A., & Doucet, A. (2025). Distributional Diffusion Models with Scoring Rules. arXiv preprint arXiv:2502.02483. Other Strengths And Weaknesses: **Strengths** 1. The approach to parametrize a stochastic few-step generative sampler is novel. Many existing methods often leverage deterministic generators. The authors have done a good job describing how to design the posterior sampler conditioned on a noisy observation with Gaussian additive noise. 2. The approach of leveraging a pre-trained unconditional score model to compute the score of the posterior is very interesting and could be very applicable in practice. 3.
The experiments are well designed and the results are competitive. **Weaknesses** 1. The main weakness lies in the exposition of the method. It is difficult to appreciate the proposed method in its current form, as it is not clear where it stands and how it compares to existing works in the literature. 2. Existing methods that this work builds upon are not discussed at length. PnP-DM (Wu et al., 2024) is the foundation of the inverse problem algorithm, but it is only cited. It is not clear what the existing framework is and how the proposed framework is different upon initial reading. Similarly, distillation methods for learning a one-step posterior sampler are discussed in (Mammadov et al., 2024) and (Lee et al., 2024). Beyond being cited, there is no section dedicated to discussing these existing works and mentioning differences (and, more importantly, similarities). Other Comments Or Suggestions: Here are some comments regarding syntax and typos: 1. All loss functions are typically a function of the parameters. Furthermore, more care should be taken when defining optimization objectives. For example, in equation (3) it would be clearer if the loss function were defined as $\mathcal{L}(\theta)$ and the optimization objective as $\min_\theta \mathcal{L}(\theta) \triangleq \min_\theta \mathbb{E}[\dots]$. Please also look at equation (8) and equations (60)-(64). 2. The authors should seriously consider devoting more time to discussing existing works on posterior sampling (that I mentioned in the weaknesses). The paper would be stronger if there were a dedicated background section on these methods beyond just VSD. 3. The impact statement should highlight some broader impacts and is not satisfactory in its current form. Questions For Authors: 1. What are the main differences between the proposed method and PnP-DM? It seems to me that the difference lies in the posterior sampler being used.
PnP-DM simulates the reverse diffusion process to generate a sample, whereas the proposed method uses the few-step model. Everything else with regards to the Gibbs sampler is the same. Is this correct? If so, this is not clearly described at all. I would suggest adding a background section on PnP-DM, describing their algorithm with the Gibbs sampler. Then point out the inefficiency of the prior sampling step and mention how the proposed method solves the issue. 2. The two prior works (Mammadov et al., 2024) and (Lee et al., 2024) both describe schemes for one-step posterior sampling. The key conceptual difference seems to be that the existing works sample from the posterior conditioned on a general noisy observation (e.g., defined by a linear forward process), whereas the proposed method conditions on a noisy additive-Gaussian-noise sample. This seems to enable a key technical use case beyond just inverse problem solving, as the proposed method can be used for high-quality sampling as well. Coming to this realization took several cycles of reading the existing papers and then contrasting them with the proposed method. The authors need to detail the benefits of the method much more clearly. Stress fast sampling, have a detailed section on these existing works, describe technical novelty by *building on existing methods*, and then mention key training/test-time computational advantages. To summarize, I am prepared to raise my score if the authors are able to work on the exposition of the method by making a more thorough effort to discuss existing works, highlighting their pros/cons, and then describing the advantages (and limitations) of the proposed method. I believe the proposed method is interesting, but it is not ready for publication in its current form. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: *W1: Choice of adaptive ...* We thank the reviewer for pointing out the ambiguity in our claim. A function $f(\cdot)$ is called L-gradient Lipschitz if it satisfies: $$ \lVert \nabla f(x\_1) - \nabla f(x\_2) \rVert\_2 \leq L \lVert x\_1 - x\_2 \rVert\_2, \quad \forall x\_1, x\_2. $$ Provided that $\mathcal{E}$ is L-gradient Lipschitz, the gradient difference of $f(\cdot) = \tfrac{1}{\beta}\mathcal{E}(\cdot) + \tfrac{1}{2 \sigma^2} \lVert \cdot - u \rVert\_2^2$ is bounded as $$ \lVert \nabla f(x\_1) - \nabla f(x\_2) \rVert\_2 = \lVert \beta^{-1} (\nabla \mathcal{E}(x\_1) - \nabla \mathcal{E}(x\_2)) + \sigma^{-2} (x\_1 - x\_2) \rVert\_2 $$ $$ \leq \beta^{-1} \lVert \nabla \mathcal{E}(x\_1) - \nabla \mathcal{E}(x\_2) \rVert\_2 + \sigma^{-2} \lVert x\_1 - x\_2 \rVert\_2 $$ $$ \leq \beta^{-1} L \lVert x\_1 - x\_2 \rVert\_2 + \sigma^{-2} \lVert x\_1 - x\_2 \rVert\_2. $$ Therefore, the potential function is $(\beta^{-1} L + \sigma^{-2})$-gradient Lipschitz. *W2: Optimality of Existing ...* The claim holds in the non-parametric limit. This limitation arises because the loss functions of existing methods do not ensure that $q(x\_0|y)$ is the unique minimizer. For instance, the objective in (Lee et al., 2025) is defined as: $$ \min\_{\theta} -\mathbb{E}\_{\mu\_{\theta}(x\_0|y)} [q(y|x\_0)] + \int w(t) D\_{KL}(p\_{\theta}(x\_t|y) || q(x\_t)) dt, $$ which can only be interpreted as a regularized optimization problem, with no guarantee that $\mu\_{\theta}(x\_0|y) = q(x\_0|y)$ at convergence. Similarly, (Mammadov et al., 2024) employ an ELBO for the prior term (Eq. (14) in the paper), which likewise provides no guarantee. In contrast, NCVSD attains its optimum iff $\mu\_{\theta}(x\_0|y) = q(x\_0|y)$ (Luo et al., 2023). This property allows marginal-preserving multi-step sampling (Sec. 3.3) and enables the application of the SGS (Sec. 4). *W3: Authors have cited ...* We thank the reviewer for bringing this relevant work to our attention. We look forward to contributing further to the field of posterior sampling.
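As a side remark on the adaptive step size (this is the textbook descent lemma for smooth functions, not a claim from the paper): for an $L\_f$-gradient-Lipschitz potential $f$, $$ f(x - \eta \nabla f(x)) \leq f(x) - \eta \left(1 - \tfrac{\eta L\_f}{2}\right) \lVert \nabla f(x) \rVert\_2^2, $$ so with $L\_f = \beta^{-1} L + \sigma^{-2}$, the step size $\eta = 1 / (\beta^{-1} L + \sigma^{-2})$ guarantees a decrease of at least $\tfrac{\eta}{2} \lVert \nabla f(x) \rVert\_2^2$ per gradient step.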
*W4: The main weakness lies ... & Existing methods that this work ...* We thank the reviewer for the valuable suggestions. Please refer to W6, W8 for comparisons with PnP-DM and W6, W9 for one-step posterior samplers. *W5: All loss functions ...* We thank the reviewer for the valuable suggestion. We will explicitly include the parameters to be optimized in all loss functions to further enhance clarity. *W6: The authors should ...* We appreciate the reviewer’s insightful suggestion. Below, we provide a dedicated background on posterior sampling, which will be included in the revised version by extending Section 2.1: *“Existing posterior sampling methods often involve trade-offs among flexibility, posterior exactness, and computational efficiency. Supervised approaches (e.g., Saharia et al., 2022) lack flexibility as they require retraining for each specific task. Zero-shot methods provide greater adaptability but introduce irreducible errors by approximating the denoising posterior with Dirac (Chung et al., 2022) or Gaussian distributions (Song et al., 2023). Asymptotically exact methods, such as PnP-DM (Wu et al., 2024), ensure asymptotically exact posterior sampling but are computationally intensive, relying on reverse diffusion simulations that demand a large number of NFEs.”* *W7: The impact statement ...* We thank the reviewer for the valuable suggestion. We will enhance the impact statement to emphasize the advancements in efficient generative modeling and inverse problem solving, as well as ethical considerations. *W8: What are the main ...* We appreciate the reviewer’s valuable suggestion. The primary distinction between the proposed method and PnP-DM lies exactly in how the prior step is approximated. To address this, we will include the following clarification after L277 in the revised version: *"The proposed method and PnP-DM are both built upon the foundation of SGS. The primary distinction lies in how the prior step is approximated.
In PnP-DM, simulating the reverse diffusion process is required, which is not only computationally inefficient but also prone to irreducible discretization errors. In contrast, our approach significantly improves computational efficiency by requiring only one or a few NFEs for the prior step, while being free from any errors beyond those introduced by imperfect model training."* *W9: The two prior works ...* We appreciate the reviewer’s valuable suggestion. High-quality sample generation is indeed a key strength of our method compared to prior works. Additionally, our approach offers two significant advantages: accurate posterior sampling (W2), and the flexibility to address a wide range of problems (W6). In summary, we will incorporate the reviewers' insightful suggestions to further enhance the contributions of our paper. Specifically, we will: - Highlight the fast sampling (W8). - Include a dedicated background on posterior sampling (W6). - Elaborate novelty (W2,W6,W9). - Emphasize computational advantages (W8). - Improve impact statement (W7). --- Rebuttal Comment 1.1: Comment: Dear author(s), I appreciate the effort in addressing all of my questions and taking into account my suggestions, which you have agreed to incorporate into the next revision of the paper. I have increased my score accordingly.
EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling
Accept (poster)
Summary: This paper addresses the limitations of existing latent generative models, which often lack equivariance to semantic-preserving transformations like scaling and rotation. To overcome this challenge, the authors propose EQ-VAE, a regularization technique that enforces equivariance in the latent space, simplifying it while preserving reconstruction quality. EQ-VAE enhances the performance of various state-of-the-art generative models and is compatible with both continuous and discrete autoencoders, providing a versatile improvement for a range of latent generative frameworks. Claims And Evidence: The authors provide empirical results demonstrating the effectiveness of their method, EQ-VAE, in improving the performance of both continuous and discrete autoencoders. Specifically, they mention significant speedups in training times and improvements in downstream model performance, as measured by FID scores. Methods And Evaluation Criteria: The use of benchmark datasets and FID scores as evaluation criteria is appropriate for assessing the performance of image synthesis methods. These metrics provide a clear way to quantify improvements in image quality and training efficiency, making the evaluation relevant and meaningful for the context of latent generative modeling. Theoretical Claims: This paper does not involve theoretical claims. Experimental Designs Or Analyses: By comparing EQ-VAE against baseline models like SD-VAE and SD-VAE-EMA-FT, the authors convincingly demonstrate that improvements in generative performance are attributed to their proposed method rather than just additional training. No significant issues were noted in the designs or analyses. Supplementary Material: I reviewed the appendix. This paper has no supporting material. Relation To Broader Scientific Literature: The paper builds upon the foundational work of latent variable models, particularly in the context of variational autoencoders (VAEs). 
By applying their method to enhance models like DiT and SiT, the authors demonstrate a significant improvement in generative performance. Essential References Not Discussed: There is no significant related work that is not discussed. Other Strengths And Weaknesses: Strengths The paper addresses critical challenges in latent generative modeling, specifically the trade-offs between reconstruction quality and generative performance. By proposing a solution that enhances both aspects, the work has significant implications for advancing state-of-the-art generative models. Weaknesses The experimental results in Table 2 show that the improvement over the baseline from using EQ-VAE is relatively small. For example, for SiT-B/2 at 400K iterations, the experimental result of REPA under the same configuration is about 25. Does this mean that the improvement in the generated results from this method is relatively small? Other Comments Or Suggestions: No other suggestions. Questions For Authors: No other questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
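The equivariance property at issue here, that encoding should commute with semantic-preserving transforms such as rotation, can be sketched with a toy numpy check (hypothetical helper names, not the paper's code; a 2x2 average-pooling "encoder" happens to commute exactly with 90-degree rotation):

```python
import numpy as np

def rot90(z):
    # 90-degree rotation of a (C, H, W) array in the spatial plane
    return np.rot90(z, k=1, axes=(1, 2))

def avg_pool2(x):
    # Toy "encoder": 2x2 average pooling on a (C, H, W) array
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def equivariance_loss(encode, transform, x):
    # ||encode(T(x)) - T(encode(x))||^2, the residual an equivariance
    # regularizer drives toward zero (hypothetical stand-in for EQ-VAE's loss)
    diff = encode(transform(x)) - transform(encode(x))
    return float(np.mean(diff ** 2))

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
loss = equivariance_loss(avg_pool2, rot90, x)
assert loss < 1e-12  # average pooling commutes with 90-degree rotation
```

A learned encoder will generally not satisfy this exactly; EQ-VAE instead regularizes the autoencoder toward such behavior during training (the paper's appendix compares implicit and explicit variants of the regularization).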
Rebuttal 1: Rebuttal: We appreciate your insightful comments and efforts in reviewing our manuscript. Below, we provide our responses to each of your comments: --- **W1. Minor improvements in Table 2 compared to REPA.** We respectfully emphasize that in Table 2 EQ-VAE demonstrates substantial improvements across all models (DiT, SiT, and REPA) in both B and XL configurations. While the improvement of EQ-VAE for SiT-B/2 may appear relatively small compared to REPA, this comparison requires important contextual considerations: - REPA is a distillation strategy applied directly in the generative diffusion stage, which inherently leads to improved performance, particularly with the powerful pre-trained visual encoder (DINOv2r) it leverages. - In contrast, our method, EQ-VAE, is applied in the autoencoding stage and does not rely on any external model. Instead, it regularizes the latent space, resulting in enhanced performance for generative modeling. Moreover, as shown in Table 2 and Figure 1 (right), *EQ-VAE is orthogonal to REPA*, accelerating its convergence by a factor of four. This highlights EQ-VAE’s ability to improve efficiency without sacrificing performance. Lastly, we note that in our reproduction environment, SiT-B/2 achieved a higher 34.7 FID compared to the 33.0 FID reported in the paper, making the improvement attributed to EQ-VAE slightly larger (34.7 → 31.2 FID).
Summary: The paper introduces EQ-VAE, a novel variant of autoencoders designed to enhance the performance of latent generative models. The authors first identify that commonly used autoencoders in modern generative models are not equivariant to spatial transformations of the input, such as scaling and rotation. They argue that enforcing this property can lead to improved generation quality. To address this, they propose a new implicit regularization loss that encourages the encoder network to be equivariant with respect to scaling and rotation. Experimental results demonstrate that incorporating this regularization not only accelerates the training of generative models but also improves their performance in certain cases. Claims And Evidence: The claims are sufficiently supported with various experiments. Methods And Evaluation Criteria: The method is evaluated using common metrics such as reconstruction FID, generation FID, Inception Score, and LPIPS. Theoretical Claims: There is no theoretical claim in the paper. Experimental Designs Or Analyses: The experiments are well-designed, and the evaluation setup is fair across various models and architectures. Supplementary Material: I have reviewed all sections in the supplementary material. Relation To Broader Scientific Literature: While regularization techniques for enforcing equivariance in deep learning models have been explored in recent work, none have directly addressed this issue in the context of generative models. Moreover, no prior study has examined the impact of an equivariant latent space on generation quality. Accordingly, the paper's contributions are well-positioned within the broader literature. Essential References Not Discussed: The paper includes all essential references. Other Strengths And Weaknesses: ### **Strengths** - The paper is well-presented, making it clear and enjoyable to read.
- Given that latent generative models are the predominant approach for high-resolution image generation, the contributions of this work are likely to have a significant impact on the field. - While most recent studies focus primarily on the diffusion or generative components of such systems, this paper explores a relatively underexplored area by improving the latent space of the autoencoder. ### **Weaknesses** - The main weakness of the work is that integrating EQ-VAE into generative models can lead to a performance drop for some models when using classifier-free guidance (e.g., REPA in Table 4). It would be valuable to assess how much additional training is required to recover the original model's performance. This would provide a clearer evaluation of the convergence speed, especially since such models are rarely used without classifier-free guidance. Other Comments Or Suggestions: It would be interesting to explore how EQ-VAE impacts other autoencoders designed to improve the efficiency of SD-VAE, such as Cosmos-Tokenizer [1] and LiteVAE [2]. [1] Agarwal N, Ali A, Bala M, Balaji Y, Barker E, Cai T, Chattopadhyay P, Chen Y, Cui Y, Ding Y, Dworakowski D. Cosmos world foundation model platform for Physical AI. arXiv preprint arXiv:2501.03575. 2025 Jan 7. [2] Sadat S, Buhmann J, Bradley D, Hilliges O, Weber RM. LiteVAE: Lightweight and efficient variational autoencoders for latent diffusion models. arXiv preprint arXiv:2405.14477. 2024 May 23. Questions For Authors: 1) What is the reasoning behind selecting only scaling and rotation as the equivariant operations? 2) Does adding the EQ-VAE loss in the context of VQ-VAE introduce any instability or impact due to the additional quantization step? 3) Do you have any intuition about why the performance gap widens after 400k iterations? Specifically, why do different VAEs perform similarly in the early training phase, but diverge later? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your insightful comments and efforts in reviewing our manuscript. Below, we provide our responses to each of your comments: --- **W1. Results with CFG for converged models.** We appreciate the reviewer’s concern regarding potential performance degradation when integrating EQ-VAE into generative models with classifier-free guidance (CFG). However, we respectfully clarify that EQ-VAE does not inherently degrade performance. As shown in Table 4, REPA (trained for 800 epochs) and REPA with EQ-VAE (trained for only 200 epochs) cannot yet be directly compared due to the significant difference in training duration. Notably, DiT-XL/2 with EQ-VAE, trained for just 300 epochs, already outperforms DiT-XL/2† (which uses SD-VAE before EQ-VAE fine-tuning), even though the latter was trained to full convergence (1400 epochs). This suggests that EQ-VAE not only accelerates convergence but may also enhance performance under CFG. Due to our limited computational resources and the short rebuttal timeline, we were unable to train XL models to full convergence. However, we understand the importance of assessing convergence speed in the CFG setting and will include fully converged experiments in the final version of our paper to provide a clearer evaluation. --- **S1. EQ-VAE with efficient autoencoders.** We appreciate the reviewer’s suggestion. Investigating whether our equivariance regularization can be applied to efficient autoencoders such as [1] and [2] is an interesting topic for further exploration. We will include this direction in the Future Work section of our paper. **Q1. Reasoning behind selecting scaling and rotation.** Scale- and rotation-equivariant networks [3], [4] have been studied extensively for various image understanding problems. This motivated us to explore whether the well-established autoencoders used in latent generative modeling are equivariant under these basic semantic-preserving transformations.
Our findings, showcased in Figures 2 and 6, revealed that existing architectures lack this property, directly motivating our equivariance regularization approach. --- **Q2. VQ-VAE training instability.** We did not encounter any instability during VQ-VAE fine-tuning. --- **Q3. Performance gap widens after 400k iterations?** We observe that the performance gap remains consistent across all training iterations. As an example, we present a gFID comparison of REPA without and with EQ-VAE at different training iterations.

| **Iter.** | **REPA (gFID)** | **REPA w/ EQ-VAE (gFID)** |
|:---------:|:---------------:|:-------------------------:|
| 50K | 52.3 | 48.7 |
| 100K | 19.4 | 18.7 |
| 200K | 11.1 | 10.7 |
| 400K | 7.9 | 7.5 |
| 1M | 6.4 | 5.9 |

--- [3] Group equivariant convolutional networks. In ICML, 2016. [4] Scale-equivariant steerable networks. In ICLR, 2020. --- Rebuttal Comment 1.1: Comment: I’d like to thank the authors for addressing my questions in the rebuttal. I believe the paper presents valid contributions, and I would like to maintain my score as Accept.
Summary: The paper proposes EQ-VAE, a framework that introduces equivariance regularization into the training of autoencoders. By incorporating 2D transformations such as rotation and scaling, the method improves the structure and representation ability of the latent space. As a result, it accelerates the training of generative models and enhances generation quality. The authors demonstrate the effectiveness of EQ-VAE through extensive experiments on both discrete and continuous autoencoders, showing consistent improvements in performance across various generative modeling tasks. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. The authors demonstrate that their proposed training strategy effectively improves the reconstruction capability of the VAE itself. Furthermore, they provide experimental results showing that, by offering a more expressive latent space through equivariance regularization, the subsequent training of generative models is also improved. This leads to better generation quality and faster convergence. The experiments cover both discrete and continuous autoencoders, and the evaluations are consistent with the claims presented in the paper. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem at hand. The authors adopt standard datasets commonly used in image generation tasks, such as OpenImages and ImageNet. They also evaluate their method using widely accepted metrics, including FID and sFID, which are appropriate for assessing the quality and diversity of generated images. The experimental setup aligns well with the goals of the paper and provides a fair basis for comparison. Theoretical Claims: Why does the proposed fine-tuning strategy effectively enhance the representational capacity of VAEs and improve the convergence behavior of downstream generative models? 
While the paper provides intuitive explanations and empirical results to justify these benefits, a more rigorous theoretical analysis or mathematical justification would significantly strengthen the work. For instance, formalizing how equivariant regularization influences the geometry of the latent space, or how it impacts the optimization landscape of generative models, would offer deeper insights beyond empirical validation. Experimental Designs Or Analyses: Yes, I checked the soundness and validity of the experimental designs and analyses. The authors conduct thorough comparative experiments on various VAE models, as well as a series of generative models. Supplementary Material: Yes, I reviewed the supplementary material. Specifically, I examined the additional ablation studies (Section A), including the comparison between implicit and explicit equivariance regularization and the analysis of regularization strength. I also reviewed the details on intrinsic dimension estimation (Section B), evaluation metrics (Section C), and the detailed benchmarks of autoencoder models (Section D). Additionally, I checked the qualitative results demonstrating latent space equivariance and the comparisons across different VAE models. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the broader literature on improving latent space representations in generative models, particularly Variational Autoencoders (VAEs). Prior works have demonstrated that the structure and consistency of the latent space play a critical role in the performance of generative models, both in terms of reconstruction quality and sample generation. This paper builds on these findings by introducing equivariance regularization, ensuring that the latent space behaves consistently under geometric transformations such as rotation and scaling. 
Unlike previous approaches that focus on improving VAE expressiveness through architecture changes or better priors, EQ-VAE emphasizes geometric consistency, which has been underexplored in this context. The work is also related to recent efforts in incorporating equivariance into deep learning models, but it uniquely applies this concept to enhance the quality and robustness of latent representations in VAEs, which in turn benefits downstream generative tasks. Essential References Not Discussed: No, the paper covers the most relevant prior work related to equivariant representation learning and variational autoencoders. The cited literature provides sufficient context for understanding the key contributions of the paper. I did not identify any essential references that are missing or overlooked. Other Strengths And Weaknesses: Please refer to other sections. Other Comments Or Suggestions: No Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
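The geometric-consistency idea summarized in the review above can be made concrete with a toy sketch (hypothetical code, not EQ-VAE's implementation): with an average-pooling "encoder" and a horizontal flip as the transformation, enc(T(x)) = T(enc(x)) holds exactly, so an equivariance penalty of the form ||enc(T(x)) - T(enc(x))||^2 vanishes. A generic learned encoder would not satisfy this, which is what the regularization targets.

```python
# Toy illustration of latent-space equivariance (hypothetical, not EQ-VAE code).
# Encoder: 2x downsampling by average pooling. Transform T: horizontal flip.

def avg_pool2(img):
    """Downsample a 2D list by averaging non-overlapping 2x2 blocks."""
    h, w = len(img), len(img[0])
    return [
        [(img[i][j] + img[i][j + 1] + img[i + 1][j] + img[i + 1][j + 1]) / 4
         for j in range(0, w, 2)]
        for i in range(0, h, 2)
    ]

def hflip(img):
    """Horizontally flip a 2D list (works on images and latents alike)."""
    return [list(reversed(row)) for row in img]

def equivariance_penalty(enc, transform, img):
    """Squared error between 'encode then transform' and 'transform then encode'."""
    a = enc(transform(img))      # encode the transformed image
    b = transform(enc(img))      # transform the encoded latent
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))

img = [[float(i * 4 + j) for j in range(4)] for i in range(4)]
print(equivariance_penalty(avg_pool2, hflip, img))  # 0.0: this encoder is flip-equivariant
```

Because average pooling commutes with a horizontal flip, the penalty is exactly zero here; the point of EQ-VAE's regularization is to push a learned autoencoder toward this behavior for scaling and rotation as well.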
Rebuttal 1: Rebuttal: We appreciate your insightful comments and efforts in reviewing our manuscript. Below, we provide our responses to each of your comments: --- **Theoretical validation** While we focus on empirical evidence in this work, we believe that mathematically formalizing the underlying mechanisms behind the success of equivariance regularization in generative models will be an interesting future direction to explore. Our empirical observation that equivariance regularization reduces the intrinsic dimension of the latent manifold—correlating with improvements in generative performance (as shown in Table 5) is an interesting starting point for future theoretical research. The reviewer’s suggested directions, such as investigating how equivariance regularization influences the geometry of the latent space and its impact on the optimization landscape of generative models, are particularly compelling. We will incorporate these insights into the Future Work section of our paper.
Summary: This paper observes that existing autoencoders lack equivariance to semantic-preserving transformations like scaling and rotation, resulting in complex latent spaces that hinder generative performance. Based on this observation, the authors propose to regularize the latent by enforcing equivariance in the latent space, reducing its complexity without degrading reconstruction quality. Experiments on different generative models demonstrate the effectiveness of the proposed method. ## update after rebuttal Most of my concerns have been addressed and I lean to keep the positive rating. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem or application at hand. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, I have checked the soundness of the experimental designs and analyses. Seems fine to me. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper shares similar motivation with prior work like REPA which aims to enhance the training efficiency of latent generative models. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The idea of exploring the property of latent space is applaudable, which not only brings practical speedup of latent generative models, but provide insights for the design of VAEs. 2. The proposed EQ-VAE can generalize to both continuous and discrete autoencoders with only a few epochs finetuning, and experiments on multiple generative models demonstrate the effectiveness of the proposed method. 3. The paper is well-organized and the writing is clear. Weaknesses: 1. While the effectiveness of the proposed method has been validated with image generation on ImageNet, its efficacy on text-to-image, which requires larger training sets, remains unverified. 
The reviewer understands that it would take much larger training resources for T2I experiments, but the translation of the effectiveness to T2I cannot be guaranteed without empirical validation. 2. Although EQ-VAE has shown advantage in reducing the training costs compared to the baseline methods, whether further training could bring additional performance improvement is unclear, that is, whether EQ-VAE just makes the latent generative models converge faster or it could result in better performance of latent generative models with the same training resources as the baselines. 3. While EQ-VAE has compared with baseline VAE in rFID, this metric may be aligned with the training iterations and it is suggested to provide comprehensive comparisons with baseline VAE, finetuned VAE with old objectives and the proposed method with more metrics like PSNR, SSIM and LPIPS. Other Comments Or Suggestions: N/A Questions For Authors: Although the authors mentioned that they only finetune the VAE for 5 epochs on OpenImages, it would be better to have a sense of comparisons between the finetuning costs and the original pre-training costs if applicable. The reason is that different VAEs may train their models with different iterations and batch sizes. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your insightful comments and efforts in reviewing our manuscript. Below, we provide our responses to each of your comments: --- **W1. EQ-VAE for T2I generation.** We appreciate the reviewer’s concern regarding the applicability of our method to text-to-image (T2I) generation. To address this, we conducted an additional T2I experiment using the MS-COCO dataset [1]. While this serves as a preliminary result in a small-scale setting—given that large-scale T2I experiments exceed our computational resources—it provides valuable empirical validation. For this experiment, we employed U-ViT-S/2 and followed their experimental setup [2]. We used SD-VAE to extract image latents in the baseline setting. During sampling, we use CFG with w=2.0. The table below reports gFID at every 50K iterations. We observe that EQ-VAE demonstrates improvements in T2I generation, highlighting the significance of equivariance regularization. These findings suggest that incorporating EQ-VAE into large-scale T2I models is a promising direction for future research. | **Iter.** | **U-ViT-S/2 w/ SD-VAE** | **U-ViT-S/2 w/ EQ-VAE** | |:--------:|:-------------------:|:-------------------:| | **50K** | 15.5 | 12.4 | | **100K** | 8.6 | 7.6 | | **150K** | 7.7 | 7.1 | | **200K** | 7.5 | 6.9 | | **250K** | 7.3 | 6.8 | | **300K** | 7.2 | 6.7 | | **350K** | 7.1 | 6.6 | | **400K** | 7.0 | 6.6 | | **450K** | 7.0 | 6.5 | --- **W2. Performance with EQ-VAE under same training resources as the baselines.** We appreciate the reviewer’s question about whether EQ-VAE solely accelerates convergence or also leads to better overall performance given the same training resources as the baselines. Due to our limited computational resources and short rebuttal timeline, we were unable to train XL models to full convergence (e.g. 1400 epochs for DiT/SiT). 
However, we recognize the importance of this evaluation and will include experiments with fully converged models in the final version of our paper to provide a more thorough assessment of convergence speed and final performance. That said, we emphasize that DiT-XL/2 with EQ-VAE, trained for only 300 epochs, already outperforms DiT-XL/2† (which uses SD-VAE before our EQ-VAE fine-tuning), even though the latter is considered fully converged after 1400 epochs (Table 4). This suggests that EQ-VAE is not only accelerating convergence but also contributing to improved performance. We will further investigate this in our updated experiments. --- **W3. Detailed Reconstruction Metrics.** We present detailed evaluation metrics for three models: SD-VAE, SD-VAE† (SD-VAE finetuned for 5 epochs with the original objective), and EQ-VAE (SD-VAE finetuned for 5 epochs with our objective). EQ-VAE significantly boosts the gFID performance of DiT-B/2. Importantly, this gain is achieved without sacrificing reconstruction quality, as both EQ-VAE and SD-VAE† show similar improvements over the baseline SD-VAE across all the reconstruction metrics. | **Model** | **gFID↓** | **rFID↓** | **PSNR↑** | **LPIPS↓** | **SSIM↑** | |:-----------:|:---------:|:---------:|:---------:|:----------:|:---------:| | **SD-VAE** | 43.5 | 0.90 | 25.82 | 0.146 | 0.71 | | **SD-VAE†** | 43.5 | 0.81 | 25.98 | 0.139 | 0.72 | | **EQ-VAE** | 34.1 | 0.82 | 25.95 | 0.141 | 0.72 | --- **Q1. Autoencoder Training Steps.** We thank the reviewer for this question. We provide a detailed breakdown of the original training epochs for SD-VAE and SD-VAE-16, calculated based on the training steps reported in the [official repository](https://github.com/CompVis/latent-diffusion?tab=readme-ov-file#model-zoo) and the batch size specified in the [hugging face repository](https://huggingface.co/stabilityai/sd-vae-ft-ema-original). 
The epochs are calculated as follows: epochs = (steps * batch_size) / dataset_size, where the dataset size for OpenImages is ~1.74M. For VQ-GAN, SD3-VAE, and SD-XL-VAE, to the best of our knowledge, their training iterations are not explicitly stated in their respective papers or official repositories. | **Model** | **OpenImages Epochs** | |:-----------:|:-------------------:| | **SD-VAE** | 27.1 | | **SD-VAE-16** | 48.8 | We note that while our primary benchmark experiments use 5 fine-tuning epochs, our ablation study in Fig. 5 shows that even 1 epoch leads to notable performance improvements, particularly in terms of gFID. --- [1] Microsoft COCO: Common Objects in Context, In ECCV 2014 [2] All are Worth Words: A ViT Backbone for Diffusion Models, In CVPR 2023
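The epoch conversion used in the rebuttal can be written as a one-line helper. The numeric arguments below are made-up illustrative values; the rebuttal's real step counts and batch sizes come from the linked repositories.

```python
def training_epochs(steps: int, batch_size: int, dataset_size: int) -> float:
    """epochs = (steps * batch_size) / dataset_size, as in the rebuttal."""
    return steps * batch_size / dataset_size

# Illustrative numbers only, using the ~1.74M OpenImages size mentioned above:
print(training_epochs(steps=870_000, batch_size=10, dataset_size=1_740_000))  # 5.0
```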
Robust Online Conformal Prediction under Uniform Label Noise
Reject
Summary: The authors consider online conformal prediction with uniform label noise, where the goal is to solve a sequential classification problem by providing prediction sets at each round, such that over time, the true label will lie in the prediction set with probability approximately $1-\alpha$ where $\alpha$ is a predetermined coverage parameter. The authors establish, both empirically and theoretically, that existing methods suffer from coverage gaps in the presence of label noise, with the most common phenomenon being over-coverage in which the prediction sets contain the true label with probability larger than $1-\alpha$. To address this, the authors design a robust algorithm using a variant of the pinball loss and show that their algorithm converges to have no coverage gap even in the presence of label noise. ## update after rebuttal: My assessment remains after reading the other reviews and comments by the authors. Claims And Evidence: The theoretical claims made in this submission are supported by formal proofs, and the empirical evaluations are clear and convincing. I do not see any problematic claims. Methods And Evaluation Criteria: It is a bit unclear to me why we care about exact coverage, that is, why over-coverage is an issue. I am far from an expert in this area of research, but my intuition tells me that if we cover the true label with probability larger than $1-\alpha$, then our algorithm is doing better, that is, its predictions are more accurate. Examining some of the previous works, it appears to me as coverage of at least $1-\alpha$ is sought after rather than exactly $1-\alpha$. Theoretical Claims: I did not check the correctness of the proofs of the theoretical claims. Experimental Designs Or Analyses: I did not check the soundness of experimental designs or analyses. Supplementary Material: I did not review the supplementary material. 
Relation To Broader Scientific Literature: It seems to me that for the most part the authors adequately relate the contributions of the paper to the broader literature on conformal prediction, however I am a bit unsure about the precise coverage objective, that is, whether or not this objective has been used in previous works and whether or not over-coverage has been recognized as an issue in previous works as well. Essential References Not Discussed: I did not find any essential references not discussed. Other Strengths And Weaknesses: Strengths: * The paper is well-structured, easy to follow and to understand. The authors provide clear explanations for their methods, techniques and experiments. * The theoretical results presented by the authors seem non-trivial, and they are supported by empirical evidence. Weaknesses: * I am unsure about the main motivation of this work - that is, why the fact that existing methods exhibit over-coverage in the presence of label noise is an issue. I could not find any reference to that in previous works, but my familiarity with this line of work is very limited. Other Comments Or Suggestions: N/A Questions For Authors: As I previously mentioned in the review, I don't think I fully understand why the phenomenon of over-coverage in the presence of label noise should be worrisome. On the face of it, it seems to me like higher coverage than the prespecified $(1-\alpha)$ only indicates more accurate predictions. Looking into a few of the previous works, I could not find a similar objective to that of precise coverage. As I am very unfamiliar with this area of research, I would appreciate it if the authors could provide an explanation and/or a more explicit motivation, and if I am convinced then I will increase my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > 1. Justification on achieving precise $1-\alpha$ coverage Thank you for the insightful comment. Achieving precise $1-\alpha$ coverage is one of the common desiderata in conformal prediction [1,2,3,4], since over-coverage results in excessively large prediction sets, reducing their practical utility. In conformal prediction, smaller prediction sets are generally preferred to provide more informative outputs, aligning with the desideratum of size efficiency [5,6]. For instance, in medical diagnosis with $\alpha=0.1$, prediction sets exceeding the intended 90% coverage (e.g., reaching 95%) may include extraneous and irrelevant options, such as unlikely diseases, thereby reducing specificity. Conversely, users can select a smaller $\alpha$ to ensure prediction sets achieve the desired higher coverage. A more detailed explanation can be found in Chapter 3.6 of [1]. Notably, over-coverage has been identified as a central challenge in conformal prediction under conditions of label noise [7]. In the literature, prior works [3,4] have been devoted to addressing this issue, as detailed in the Related Work section (Appendix A). In this work, we present the first attempt to address this issue within the framework of online conformal prediction. We believe this should clarify our focus, and we welcome any further discussion. ### References [1] Angelopoulos A N, et al. Theoretical foundations of conformal prediction. arXiv preprint 2024. [2] Angelopoulos A N, et al. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint 2021. [3] Sesia M, et al. Adaptive conformal classification with noisy labels. JRSSB 2024. [4] Penso C, et al. Estimating the Conformal Prediction Threshold from Noisy Labels. arXiv preprint 2025. [5] Angelopoulos A, et al. Uncertainty sets for image classifiers using conformal prediction. ICLR 2021. [6] Huang J, et al. Conformal prediction for deep classifier via label ranking. 
ICML 2024. [7] Einbinder B S, et al. Label noise robustness of conformal prediction. JMLR 2024. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. Having seen the evidence provided by the authors for the interest in exact coverage in the relevant literature, I will adjust my overall score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for raising the score. We sincerely appreciate your time and effort in reviewing our work.
Summary: The paper "Robust Online Conformal Prediction under Uniform Label Noise" addresses the challenge of online conformal prediction (OCP) in the presence of uniform label noise. Conformal prediction is a widely used technique for uncertainty quantification, guaranteeing a predefined coverage probability for prediction sets. While recent advances have extended conformal prediction to online settings, existing methods typically assume perfectly accurate labels, which is often unrealistic in practical applications where label noise is prevalent. To address this issue, the authors propose Noise-Robust Online Conformal Prediction (NR-OCP), which adapts the conformal threshold update process using a novel robust pinball loss function. This function provides an unbiased estimate of the clean pinball loss without requiring access to true labels, thereby mitigating the impact of label noise on the coverage guarantees. Theoretical analysis demonstrates that NR-OCP eliminates the coverage gap introduced by label noise and achieves a convergence rate of O(T⁻¹/²) for both empirical and expected coverage errors. The paper further validates its approach through experiments on CIFAR-100 and ImageNet, showing that NR-OCP achieves precise coverage while maintaining small prediction sets, outperforming standard online conformal prediction methods. Claims And Evidence: The paper provides a solid theoretical foundation to support its claims regarding the effect of label noise on coverage guarantees in OCP. The mathematical derivations are well-structured, and the proposed robust pinball loss is rigorously justified through expectation-based approximations. The empirical evidence is also strong—experiments on CIFAR-100 and ImageNet with varying noise levels demonstrate the effectiveness of NR-OCP in reducing the coverage gap while maintaining compact prediction sets. 
However, one potential limitation is the reliance on synthetically generated uniform label noise, which may not fully capture real-world noise distributions that are often structured (e.g., class-dependent or instance-dependent noise). While the results convincingly show improvements over standard OCP, further validation on real-world noisy datasets could strengthen the paper’s contributions. Methods And Evaluation Criteria: The methodology is well-aligned with the problem. The introduction of a robust pinball loss function directly addresses the issue of label noise by adjusting the threshold update mechanism in OCP. The choice of evaluation metrics—coverage gap and prediction set size—is appropriate, as these measure both the reliability and efficiency of the proposed approach. That said, the experiments focus primarily on computer vision datasets (CIFAR-100, ImageNet). While these are standard benchmarks, additional experiments on other domains, such as natural language processing (e.g., text classification datasets with noisy labels) or tabular data, would help assess the generalizability of NR-OCP. Theoretical Claims: The theoretical claims appear well-founded and consistent with prior work in online conformal prediction and robust learning. The proofs leverage concentration inequalities (e.g., Azuma–Hoeffding) and martingale-based arguments, which are standard tools for analyzing online learning algorithms. One aspect that could benefit from additional clarification is the bias-variance tradeoff in gradient estimation. While the paper establishes that the robust pinball loss leads to an unbiased estimate in expectation, it does not discuss the variance of the estimate explicitly. A high variance in the gradient updates could impact the stability of NR-OCP, especially for small sample sizes. 
Experimental Designs Or Analyses: The results are statistically significant, as NR-OCP consistently achieves near-zero coverage gaps while maintaining smaller prediction sets compared to standard OCP. One minor concern is the lack of statistical significance testing in the reported results. Given the small coverage gaps, a confidence interval or hypothesis test would help ensure that the improvements are not due to random fluctuations. Supplementary Material: The supplementary material contains detailed proofs and additional experimental results, including performance breakdowns for different non-conformity scores and convergence analyses under dynamic learning rates. The proofs are well-organized and align with the main text. The additional empirical results further support the claims made in the paper. Relation To Broader Scientific Literature: The paper is well-situated in the broader literature on conformal prediction, online learning, and label noise robustness. It builds upon key prior works such as: Online Conformal Prediction: (Gibbs & Candes, 2021; Angelopoulos et al., 2024) Label Noise Robustness: (Einbinder et al., 2024; Penso & Goldberger, 2024) Pinball Loss in Conformal Prediction: (Steinwart & Christmann, 2011) A notable strength of the paper is that it removes a strong distributional assumption made by Einbinder et al. (2024), making the analysis more general. This is an important step forward in developing noise-robust conformal prediction methods. Essential References Not Discussed: While the paper’s focus is on uniform noise, discussing extensions to other noise models would make the work more applicable to real-world scenarios. Other Strengths And Weaknesses: Strengths: Addresses a critical gap in conformal prediction by considering label noise. Well-supported theoretical contributions—the results are rigorous and remove limiting assumptions in prior work. 
Effective empirical validation—results demonstrate both strong coverage guarantees and efficiency improvements. Clear and well-organized writing—the paper presents a complex topic in an accessible manner. Weaknesses: Relies on uniform label noise—real-world settings often involve more complex noise structures. Lack of ablation studies—it would be useful to analyze how different components (e.g., robust pinball loss) contribute to performance gains. Limited generalization across domains—the experiments focus on image classification, but additional evaluation on NLP or tabular data would strengthen the work. Other Comments Or Suggestions: Equation (4): It would be helpful to clarify the role of learning rate decay in dynamic learning rates. Notation: In some places, the notation for learning rates (ηt) and coverage errors could be better aligned for readability. Figure 2 caption: The term "Baseline" should explicitly reference the specific online conformal prediction method used for comparison. Questions For Authors: How does the variance of the robust pinball loss gradient updates compare to the standard pinball loss? A high variance could impact stability—has this been analyzed empirically? Would NR-OCP be effective under instance-dependent noise models? Many real-world datasets exhibit structured noise—how might the method adapt to these scenarios? Could NR-OCP be extended to adversarial label noise? Have the authors considered robustness against worst-case noise perturbations rather than uniform noise? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: While the AC has flagged this review as potentially generated, we still provide detailed responses below:

> 1. No ablation study on the proposed loss

See response #2 to reviewer rF6c.

> 2. Restriction to uniform label noise

See response #3 to reviewer rF6c.

> 3. Variance analysis for the gradient

Here we provide a variance analysis for the gradient of the robust pinball loss. Recall that the gradient of the robust pinball loss decomposes as

$$
\nabla_{\hat{\tau}_t}\tilde{l}_{1-\alpha}(\hat{\tau}_t,\tilde{S}_t,\{S_{t,y}\}_{y=1}^K)
=\nabla_{\hat{\tau}_t}l_1(\hat{\tau}_t,\tilde{S}_t)
-\nabla_{\hat{\tau}_t}l_2(\hat{\tau}_t,\{S_{t,y}\}_{y=1}^K),
$$

where

$$
\nabla_{\hat{\tau}_t}l_1(\hat{\tau}_t,\tilde{S}_t)
=\frac{1}{1-\epsilon}\left[\mathbf{1}\{\tilde{S}_t\leq\hat{\tau}_t\}-(1-\alpha)\right],
\qquad
\nabla_{\hat{\tau}_t}l_2(\hat{\tau}_t,\{S_{t,y}\}_{y=1}^K)
=\frac{\epsilon}{K(1-\epsilon)}\sum_{y=1}^K\left[\mathbf{1}\{S_{t,y}\leq\hat{\tau}_t\}-(1-\alpha)\right].
$$

To proceed, we make the following simplifying assumptions:

1. The scores $S_{t,y}$ for different labels $y$ are identically distributed for a given instance $t$, with $p=\mathbb{P}(S_{t,y}\leq\hat{\tau}_t)$. This assumes the score distribution is similar across labels, though in practice it depends on the instance and the model.
2. The scores $S_{t,y}$ are independent across different $y$. This is a simplification, as scores for the same instance may be correlated, but it provides a tractable starting point.

Then, since the gradient is a difference of the two terms, its variance is

$$
\mathrm{Var}(\nabla_{\hat{\tau}_t}\tilde{l}_{1-\alpha})
=\mathrm{Var}(\nabla_{\hat{\tau}_t}l_1)
+\mathrm{Var}(\nabla_{\hat{\tau}_t}l_2)
-2\,\mathrm{Cov}(\nabla_{\hat{\tau}_t}l_1,\nabla_{\hat{\tau}_t}l_2).
$$

**Part 1.** $\mathrm{Var}(\nabla_{\hat{\tau}_t}l_1)$: the indicator $\mathbf{1}\{\tilde{S}_t\leq\hat{\tau}_t\}$ is Bernoulli with parameter $p$, and the constant $(1-\alpha)$ does not affect the variance, so

$$
\mathrm{Var}(\nabla_{\hat{\tau}_t}l_1)
=\mathrm{Var}\!\left(\frac{\mathbf{1}\{\tilde{S}_t\leq\hat{\tau}_t\}}{1-\epsilon}\right)
=\frac{p(1-p)}{(1-\epsilon)^2}.
$$

**Part 2.** $\mathrm{Var}(\nabla_{\hat{\tau}_t}l_2)$: by the independence assumption, the $K$ indicators are independent Bernoulli($p$) variables, so

$$
\mathrm{Var}(\nabla_{\hat{\tau}_t}l_2)
=\frac{\epsilon^2}{K^2(1-\epsilon)^2}\cdot Kp(1-p)
=\frac{\epsilon^2 p(1-p)}{K(1-\epsilon)^2}.
$$

**Part 3.** $\mathrm{Cov}(\nabla_{\hat{\tau}_t}l_1,\nabla_{\hat{\tau}_t}l_2)$: the noisy score $\tilde{S}_t$ coincides with one of the $\{S_{t,y}\}_{y=1}^K$, and under the independence assumption only that one term of the sum contributes to the covariance, so

$$
\mathrm{Cov}(\nabla_{\hat{\tau}_t}l_1,\nabla_{\hat{\tau}_t}l_2)
=\frac{1}{1-\epsilon}\cdot\frac{\epsilon}{K(1-\epsilon)}\,\mathrm{Var}\!\left(\mathbf{1}\{\tilde{S}_t\leq\hat{\tau}_t\}\right)
=\frac{\epsilon\,p(1-p)}{K(1-\epsilon)^2}.
$$

In summary, the gradient variance is

$$
\mathrm{Var}(\nabla_{\hat{\tau}_t}\tilde{l}_{1-\alpha})
=\frac{p(1-p)}{(1-\epsilon)^2}
+\frac{\epsilon^2 p(1-p)}{K(1-\epsilon)^2}
-\frac{2\epsilon\,p(1-p)}{K(1-\epsilon)^2}
=\frac{p(1-p)}{K}+\frac{p(1-p)(K-1)}{K(1-\epsilon)^2}.
$$

Therefore, we can conclude that a larger noise rate $\epsilon$ increases the gradient variance of the robust pinball loss, reducing the stability of the results.

> 4. Other comments on presentation and notation

Thank you for the suggestion. We will improve the clarity accordingly in the final version.
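The closed-form variance derived in the rebuttal above can be sanity-checked numerically. The sketch below is not part of the rebuttal: it hard-codes illustrative values for $K$, $\epsilon$, $\alpha$, and $p$, draws independent Bernoulli indicators exactly as in the two simplifying assumptions, and takes the noisy score to be one of the $K$ per-class scores.

```python
import random
import statistics

def grad_variance_mc(K=10, eps=0.2, alpha=0.1, p=0.9, n=100_000, seed=0):
    """Monte Carlo variance of the robust pinball loss gradient under the
    rebuttal's simplifying assumptions (i.i.d. per-class indicators)."""
    rng = random.Random(seed)
    grads = []
    for _ in range(n):
        # x[y] = 1{S_{t,y} <= tau_hat}, i.i.d. Bernoulli(p) across labels
        x = [1.0 if rng.random() < p else 0.0 for _ in range(K)]
        noisy = x[0]  # by symmetry, let the noisy label's score be index 0
        g = ((noisy - (1 - alpha)) / (1 - eps)
             - eps / (K * (1 - eps)) * sum(xi - (1 - alpha) for xi in x))
        grads.append(g)
    return statistics.pvariance(grads)

def grad_variance_closed_form(K=10, eps=0.2, p=0.9):
    """p(1-p)/K + p(1-p)(K-1) / (K (1-eps)^2), the formula derived above."""
    return p * (1 - p) / K + p * (1 - p) * (K - 1) / (K * (1 - eps) ** 2)

mc, cf = grad_variance_mc(), grad_variance_closed_form()
print(round(cf, 4), round(mc, 4))  # both close to 0.1356
```

With these parameters the closed form evaluates to about 0.1356, and the Monte Carlo estimate agrees to within sampling error, consistent with the derivation.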
Summary: This paper aims to develop an online conformal prediction method that can handle the case where the labels of the data are noisy, ensuring the robustness of online conformal prediction. The novelty of the method mainly lies in adjusting the previous pinball loss into a robust pinball loss, a weighted combination of the pinball loss on noisy scores and the pinball loss on the scores of all classes. The paper also theoretically proves the consistency between the loss under noisy data and clean data, showing that the method helps eliminate the coverage gap caused by noisy data. Sufficient experiments are also conducted to show the effectiveness of the method. Claims And Evidence: The claims are clear and the evidence is sufficient. Methods And Evaluation Criteria: The proposed loss and its efficiency are clearly stated, and the benchmark datasets are appropriate. Theoretical Claims: I checked the correctness of the proofs of the theoretical claims; they are quite solid. Experimental Designs Or Analyses: I am quite confused about the baselines in the experimental details. The novelty of NR-OCP is a loss function that can be applied in the noisy-label case, so the loss function could in principle be used with any online conformal prediction method; why choose only one standard online conformal prediction method (with pinball loss) with 4 different non-conformity score algorithms? If I have any misunderstanding, please correct me. Supplementary Material: I cannot find the provided code. There is neither a supplementary material nor an anonymous link to the code repository. Relation To Broader Scientific Literature: This paper has strong motivation and makes a contribution to online conformal prediction, addressing the problem that the coverage gap grows when noisy data is included. Essential References Not Discussed: All the essential references are discussed. 
Other Strengths And Weaknesses: I think there is no need to include that much theoretical analysis. For example, Proposition 4.1: I understand it gives the upper bound of the coverage gap under the dynamic learning rate, but when $T$ goes to infinity, the conclusion from this proposition does not differ from Proposition 3.1, even though it covers a more general case. I recommend adding some experiments to compare with other state-of-the-art online conformal prediction algorithms. Other Comments Or Suggestions: 1. Throughout the paper, the notation $T$ is used before it is declared. Given the definition of the precise coverage guarantee, I think this should be the total number of sequences. 2. In the formulas for the empirical error and expected error, the input becomes a text version of $\text{T}$, which is a notation inconsistency. Questions For Authors: 1. In the experiments from the Appendix, I found that sometimes when the noise rate decreases, e.g., from 0.15 to 0.1, the coverage gap increases; this is actually inconsistent with the theoretical analysis. Can you explain why that happens? Code Of Conduct: Affirmed. Overall Recommendation: 3
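As background for the method under review: standard online conformal prediction updates its threshold by an online gradient step on the pinball loss, so that long-run miscoverage tracks the target level $\alpha$. Below is a minimal, self-contained sketch of this generic ACI-style update; the uniform score distribution, step size, and horizon are illustrative choices, and this is not the paper's NR-OCP algorithm.

```python
import random

def pinball_loss(tau, s, alpha):
    # Pinball (quantile) loss at level 1 - alpha: minimized in expectation
    # when tau is the (1 - alpha)-quantile of the score distribution.
    return (1.0 - alpha) * max(s - tau, 0.0) + alpha * max(tau - s, 0.0)

def run_ocp(T=50_000, alpha=0.1, eta=0.05, seed=0):
    rng = random.Random(seed)
    tau, errs = 0.5, []
    for _ in range(T):
        s = rng.random()               # toy non-conformity score ~ U(0, 1)
        err = 1.0 if s > tau else 0.0  # miscoverage indicator
        errs.append(err)
        # Online (sub)gradient step on the pinball loss: raise tau after a
        # miss, lower it after a cover; long-run miscoverage tends to alpha.
        tau = tau + eta * (err - alpha)
    return tau, sum(errs) / len(errs)

tau, miscov = run_ocp()
print(tau, miscov)  # tau near the 0.9-quantile, miscoverage near alpha = 0.1
```

With uniform scores the threshold settles near 0.9 (the $1-\alpha$ quantile) and the empirical miscoverage stays close to $\alpha$; label noise corrupts the `err` signal, which is the mechanism behind the coverage gap discussed in this review.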
Rebuttal 1: Rebuttal: > 1. Lack of comparative experiments with other online conformal prediction algorithms Thank you for your insightful suggestion. We conduct new experiments by integrating our robust pinball loss into a new baseline - SAOCP[1]. In particular, we employ LAC score to generate prediction sets with error rate $\alpha=0.1$, using ResNet50 on CIFAR-100. In Figures 2 and 3 of [[link](https://anonymous.4open.science/r/Noise_Robust_Online_Conformal_Prediction-DD85/Supplementary_Materials_for__Robust_Online_Conformal_Prediction_under_Uniform_Label_Noise.pdf)], we present the new results of SAOCP and NR-SAOCP (Noise-robust SAOCP strengthened by our method). 1. Results in Figure 2a show that SAOCP cannot achieve the desired coverage in the presence of label noise, with higher noise rates resulting in a larger gap. 2. Results in Figures 2b, 3a, and 3b show that NR-SAOCP eliminates the long-run coverage gap under various noise rates. The new results highlight the effectiveness of our method against the updated SAOCP baseline. We will incorporate these findings into the final version and would greatly appreciate the reviewer’s suggestions for any additional baselines we may have overlooked. > 2. The theoretical analysis is excessive We sincerely thank you for the insightful comment. We'd like to clarify that the conclusion from Prop. 4.1 is equivalent to Prop. 3.1 only when $\sum_{t=1}^T|\eta_t^{-1}-\eta_{t-1}^{-1}|/T\to0$ as $T$ goes to infinity. Thus, we present Prop. 4.1 to extend beyond Prop. 3.1 by analyzing the coverage gap under a dynamic learning rate, a **widely adopted** technique in online learning [2,3,4,5]. In Prop. 4.3 and 4.4, we provide theoretical evidence for the effectiveness of our method under this general setting. We believe an extensive analysis can enrich the theoretical framework for understanding label noise in online conformal prediction. > 3. 
Inconsistent results in the experiments from the appendix Thank you for the careful review. We suspect the "inconsistent" increase refers to the results of our method (e.g., ImageNet in Table 6). We'd like to clarify that our theoretical results in Prop. 3.1 and 4.1 demonstrate the coverage performance of the standard online CP method (Baseline) under label noise, where a systematic coverage gap is introduced by the label noise. In contrast, our robust pinball loss can reduce the gap to a very small level, rendering the outcomes of relative comparisons susceptible to stochastic noise. For example, in Table 6, the CovGap of the baseline is larger with a higher noise rate, while our method achieves negligible CovGaps. > 4. Other comments on notation Thank you for pointing out the issues. We will fix these notations accordingly in the final version. ### Reference [1] Bhatnagar A, et al. Improved online conformal prediction via strongly adaptive online learning. ICML 2023. [2] Angelopoulos A N, et al. Online conformal prediction with decaying step sizes. ICML 2024. [3] Kim D, et al. Robust Bayesian Optimization via Localized Online Conformal Prediction. arXiv preprint 2024. [4] Hazan E. Introduction to online convex optimization. Foundations and Trends in Optimization 2016. [5] Orabona F. A modern introduction to online learning. arXiv preprint 2019. --- Rebuttal Comment 1.1: Comment: Thanks for your added experiments on one more baseline. I can also understand that Prop 4.1 is used for proving the following theoretical results in Section 4.2, but it seems not to provide extra help/information in understanding Prop 4.3 and Prop 4.4, since its interpretation also includes somewhat repeated content, e.g., lines 316-320 on the left. 
Also, for the experimental results in the Appendix, I can understand that the proposed method is much better than the baseline, but what confuses me is why the CovGap can more than double when the noise rate decreases. For example, in Table 3, using the Dynamic LR schedule, the CovGaps of the proposed method when $\alpha=0.05, \epsilon=0.1$ for both datasets are much larger than those of the proposed method when $\alpha=0.05, \epsilon=0.15$. The situation where the CovGaps of the proposed method are larger at a smaller noise rate frequently occurs in the other tables, but hardly ever happens for the baseline method; can you explain that? Thanks. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the constructive feedback. We reply to the remaining concerns point by point. 1. **Writing of theoretical results**. Thank you for the detailed explanation and valuable suggestion. We agree that the interpretation of the coverage gap can be simplified due to its limited informativeness. We present Proposition 4.1 to validate the results of Proposition 3.1 under the learning rate condition: $\sum_{t=1}^T|\eta_t^{-1}-\eta_{t-1}^{-1}|/T\to0$. Following your advice, we will simplify the writing of Subsection 4.1 and relocate some theoretical explanations to the Appendix in the final version. This adjustment will allow us to include additional experimental results (e.g., results of SAOCP) in the revised manuscript. 2. **Results in Appendix**. Thank you for the thoughtful comment. As established in Propositions 3.4 and 4.4, our method’s empirical error converges to zero in probability at a rate of $\mathcal{O}(T^{-1/2})$, indicating that CovGap is primarily influenced by the number of iterations $T$ and random noise, rather than the noise rate $\epsilon$. Consequently, variations in CovGap across different noise rates are likely attributable to **random fluctuations** rather than a systematic dependency on $\epsilon$. 
In contrast, the empirical error (and CovGap) of the baseline method depends on the noise rate $\epsilon$, increasing as $\epsilon$ grows. To validate this, we reproduce the experiments on the specific example you highlighted — $\alpha=0.05, \epsilon\in\{0.1, 0.15\}$ with the Dynamic LR schedule. We repeat the experiment 30 times for each setting and perform a two-sample t-test to compare the CovGap values between the two noise rates. The null hypothesis $(H_0)$ is that the CovGap means between the two groups are equal. The corresponding results are presented below. The t-test yields a t-statistic of -0.1628 and a p-value of 0.8713, suggesting that we cannot reject $(H_0)$. This indicates that the observed differences in CovGap are **not statistically significant**, supporting the explanation of random variability rather than a direct effect of the noise rate. We hope this resolves your concerns. Thank you again for helping us improve the manuscript. | Id | ε=0.1 | ε=0.15 | Id | ε=0.1 | ε=0.15 | Id | ε=0.1 | ε=0.15 | |------|---------|---------|------|---------|---------|------|---------|---------| | 1 | 0.161% | 0.040% | 11 | 0.581% | 0.450% | 21 | 0.124% | 0.120% | | 2 | 0.240% | 0.210% | 12 | 0.280% | 0.452% | 22 | 0.040% | 0.022% | | 3 | 0.040% | 0.072% | 13 | 0.320% | 0.030% | 23 | 0.210% | 0.440% | | 4 | 0.300% | 0.110% | 14 | 0.300% | 0.290% | 24 | 0.103% | 0.551% | | 5 | 0.215% | 0.300% | 15 | 0.331% | 0.440% | 25 | 0.034% | 0.151% | | 6 | 0.250% | 0.391% | 16 | 0.090% | 0.073% | 26 | 0.070% | 4.163e-15% | | 7 | 0.166% | 0.520% | 17 | 0.050% | 4.163e-15% | 27 | 0.260% | 0.090% | | 8 | 0.160% | 0.033% | 18 | 0.341% | 0.200% | 28 | 0.674% | 0.360% | | 9 | 0.190% | 0.160% | 19 | 0.340% | 0.030% | 29 | 0.160% | 0.430% | | 10 | 0.400% | 0.430% | 20 | 0.210% | 0.260% | 30 | 0.010% | 0.190% |
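The two-sample t-test above can be reproduced directly from the tabulated CovGap values. Below is a minimal pure-Python sketch using the pooled-variance form (values transcribed from the table, with the two near-zero `4.163e-15%` entries rounded to 0); it yields a small, non-significant statistic consistent with the reported t = -0.1628, with any small discrepancy attributable to rounding of the tabulated values or to the exact t-test variant used.

```python
from statistics import mean, variance

cov_gap_eps_010 = [0.161, 0.240, 0.040, 0.300, 0.215, 0.250, 0.166, 0.160,
                   0.190, 0.400, 0.581, 0.280, 0.320, 0.300, 0.331, 0.090,
                   0.050, 0.341, 0.340, 0.210, 0.124, 0.040, 0.210, 0.103,
                   0.034, 0.070, 0.260, 0.674, 0.160, 0.010]
cov_gap_eps_015 = [0.040, 0.210, 0.072, 0.110, 0.300, 0.391, 0.520, 0.033,
                   0.160, 0.430, 0.450, 0.452, 0.030, 0.290, 0.440, 0.073,
                   0.0,   0.200, 0.030, 0.260, 0.120, 0.022, 0.440, 0.551,
                   0.151, 0.0,   0.090, 0.360, 0.430, 0.190]

def two_sample_t(a, b):
    # Pooled-variance two-sample t-statistic (equal group sizes here, df = 58).
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

t = two_sample_t(cov_gap_eps_010, cov_gap_eps_015)
print(round(t, 4))  # |t| is far below the ~2.0 critical value at df = 58
```

Since the statistic is nowhere near the two-sided 5% critical value, the null hypothesis of equal means cannot be rejected, matching the rebuttal's conclusion of random variability.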
Summary: This paper studies the robustness of online conformal prediction (OCP) under uniform label noise with a known noise rate. The authors demonstrate that label noise introduces a persistent gap between the actual and desired coverage rates, affecting the reliability of prediction sets. To address this, the paper proposes a novel method called Noise-Robust Online Conformal Prediction (NR-OCP), which updates the prediction threshold using a robust pinball loss designed to provide an unbiased estimate of the clean loss without requiring ground-truth labels. The authors provide theoretical guarantees showing that NR-OCP eliminates the coverage gap and achieves convergence rates of $O(T^{-1/2})$ for both empirical and expected coverage errors under both constant and dynamic learning rate schedules. Extensive experiments on CIFAR-100 and ImageNet with various models and non-conformity scores confirm the effectiveness of NR-OCP in achieving accurate coverage and smaller prediction sets compared to standard OCP methods. Claims And Evidence: The paper makes several core claims: (1) that uniform label noise causes a systematic coverage gap in online conformal prediction; (2) that the proposed NR-OCP method, which utilizes a robust pinball loss, can eliminate this gap without access to clean labels; and (3) that NR-OCP achieves $O(T^{-1/2})$ convergence rates for both empirical and expected coverage errors under noisy conditions. These claims are well supported by a combination of rigorous theoretical analysis and empirical validation. The authors provide clear mathematical derivations and propositions (e.g., Propositions 3.1–3.4, 4.1–4.4) to justify the existence of the coverage gap and the effectiveness of their method. The use of the robust pinball loss is motivated both intuitively and formally (Proposition 3.2), and the theoretical convergence guarantees are carefully proven under standard assumptions. 
Empirical results on CIFAR-100 and ImageNet across various architectures, noise levels, and learning rate schedules consistently demonstrate that NR-OCP significantly reduces coverage gaps while producing smaller prediction sets compared to baseline methods. The experimental setup is sound, and the performance gains are consistent and statistically significant. Methods And Evaluation Criteria: The proposed method, Noise-Robust Online Conformal Prediction (NR-OCP), is well-motivated for addressing the robustness of online conformal prediction under uniform label noise. The idea of using a robust pinball loss to correct for the bias introduced by noisy labels is theoretically sound and justified with clear derivations. The authors further evaluate their method across multiple architectures (e.g., ResNet, DenseNet, VGG), datasets (CIFAR-100 and ImageNet), and learning rate schedules (constant and dynamic), which demonstrates the generality of their approach. However, there are several limitations and areas where the methodology or evaluation could be improved. The baseline comparisons are exclusively standard online conformal prediction methods with noisy labels. It would be valuable to compare against existing noise-robust classification or calibration techniques that could be adapted to the conformal setting. The method introduces several components, but there is **no ablation study** to isolate the impact of each part or to verify whether the form of the robust loss is optimal. While CIFAR-100 and ImageNet with synthetic noise are common benchmarks, the synthetic uniform noise may not reflect more realistic noise settings. Theoretical Claims: The core theoretical claims are sound and appear technically correct under their stated assumptions. Experimental Designs Or Analyses: The experimental evaluation in the paper is generally well-organized and aims to validate the theoretical claims regarding the robustness and efficiency of the proposed NR-OCP method under uniform label noise. 
The evaluation's shortcomings are described under Methods And Evaluation Criteria. Supplementary Material: No problem. Relation To Broader Scientific Literature: The paper contributes to the growing literature on robust uncertainty quantification by extending online conformal prediction (OCP) methods to settings with uniform label noise, a scenario often overlooked in prior works, which generally assume clean supervision. While recent studies like Gibbs & Candès (2021) and Angelopoulos et al. (2024) have advanced OCP under distribution shifts, they typically assume label accuracy. The closest related work is by Einbinder et al. (2024), which analyzes OCP under uniform label noise but relies on strong assumptions and offers limited quantitative insight. The proposed NR-OCP method addresses this gap by using a robust pinball loss that allows theoretically sound updates without clean labels. However, the paper’s relevance to the broader literature is limited by several factors: it only addresses uniform label noise with a known noise rate, while more realistic settings often involve instance-dependent or asymmetric noise. Furthermore, the method’s reliance on full per-class score computation may reduce scalability, particularly in large-class scenarios such as language modeling or large-scale classification. The work also does not engage with the broader literature on noise-robust learning (e.g., co-teaching, reweighting, meta-learning) or label-noise-aware calibration, which could offer complementary or alternative approaches. Overall, while the paper fills an important niche within conformal prediction, its connection to the wider field of robust machine learning remains somewhat narrow, and its impact would be strengthened by broader methodological comparisons and extensions beyond the uniform-noise setting. 
Essential References Not Discussed: None Other Strengths And Weaknesses: Discussed in the sections above. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
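The claimed $O(T^{-1/2})$ behavior under a dynamic learning-rate schedule has a simple numerical illustration. For a decaying schedule $\eta_t=\eta/\sqrt{t}$ (an illustrative choice, not necessarily the paper's), the average inverse-learning-rate variation $\frac{1}{T}\sum_{t=2}^T|\eta_t^{-1}-\eta_{t-1}^{-1}|$ telescopes to $(\sqrt{T}-1)/(\eta T)$ and so vanishes at rate $T^{-1/2}$, satisfying the learning-rate condition discussed in the rebuttals on this paper:

```python
def avg_inverse_lr_variation(T, eta=1.0):
    # (1/T) * sum_{t=2}^T |1/eta_t - 1/eta_{t-1}| for eta_t = eta / sqrt(t).
    # Since 1/eta_t is increasing, the sum telescopes to (sqrt(T) - 1) / eta.
    inv = [t ** 0.5 / eta for t in range(1, T + 1)]
    return sum(abs(a - b) for a, b in zip(inv[1:], inv)) / T

for T in (100, 10_000, 1_000_000):
    print(T, avg_inverse_lr_variation(T))  # shrinks roughly like 1 / sqrt(T)
```

Each tenfold increase in $T$ shrinks the average by roughly $\sqrt{10}$, the same $T^{-1/2}$ rate appearing in the convergence guarantees above.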
Rebuttal 1: Rebuttal: > 1. Comparisons with existing noise-robust methods Thank you for raising this concern. We’d like to clarify that this work focuses on label noise in online conformal prediction, a domain where methods originally designed for classification and calibration cannot be easily adapted. In particular, prior noise-robust classification techniques typically aim to address the label noise issue in the training set. In conformal prediction, however, **label noise occurs within the calibration set** [1,2,3,4], which is used to determine the threshold for generating prediction sets. Thus, the challenge addressed in this paper differs from traditional label-noise learning, rendering existing noise-robust methods unsuitable. Furthermore, although a few methods have been proposed for split conformal prediction [2,3,4], adapting them to the online conformal prediction framework is far from straightforward. In response #1 to reviewer E8K1, we provide a new comparison with an additional online conformal prediction method - SAOCP [5]. We would appreciate it if the reviewer could suggest any specific baselines for comparison. > 2. No ablation study on the proposed loss Thank you for the suggestion. First, we'd like to clarify that our robust pinball loss is a single, inseparable whole rather than an assembly of multiple components. In Eq.(3), we present the robust pinball loss with $\ell_1$ and $\ell_2$ to simplify the expression of the formula. Our loss function is not separable; otherwise, Prop. 3.2 would be invalidated and the loss would lose its effectiveness from the theoretical perspective (see the proof sketch of Prop. 3.2). To fully address this concern, we add an ablation study as you suggested. In particular, we compare the performance of online conformal prediction applied with part of the robust pinball loss ($\ell_1$ and $\ell_2$ defined in Eq. (3)) and with the full loss, under uniform noisy labels with noise rates $\epsilon=0.05,0.1$. 
We employ LAC score to generate prediction sets with error rate $\alpha=0.1$, using ResNet50 on CIFAR-100. Figure 1 in [[link](https://anonymous.4open.science/r/Noise_Robust_Online_Conformal_Prediction-DD85/Supplementary_Materials_for__Robust_Online_Conformal_Prediction_under_Uniform_Label_Noise.pdf)] shows that employing only part of the robust pinball loss would violate the precise coverage guarantee, thereby demonstrating the optimality of our loss. > 3. Restriction to uniform label noise Thank you for your valuable feedback. We’d like to clarify that label noise is a novel challenge in the context of conformal prediction, distinct from the extensively studied field of noise-robust learning. As noted in our Related Work section (Appendix A), only a few recent studies have begun to explore label noise in conformal prediction. Our work stands out as **the first to address this issue specifically in online conformal prediction** - a particularly demanding setting due to the stringent theoretical guarantees required. Given the nascent state of this topic, existing efforts [1,2,3,4], including the most relevant prior work [1], have focused on simple noise models such as uniform label noise or a noise transition matrix. We believe this focus provides a critical foundation, allowing us to establish rigorous theoretical guarantees under a controlled yet meaningful noise scenario. Extending this framework to more complex noise structures, while undoubtedly valuable, poses additional challenges that we view as an exciting direction for future research (see Limitation section in the original manuscript). We hope this clarifies the significance of our work as a starting point in the field of conformal prediction. > 4. The requirement of full per-class score computation Thank you for raising the concern. We'd like to clarify that the full per-class score computation in this work stems from standard online conformal prediction [6,7], rather than a unique limitation of our method. 
We agree that reducing this computational burden is a valuable consideration, and we see it as an interesting direction for future work. We appreciate your insight on this point. ### References [1] Einbinder B S, et al. Label noise robustness of conformal prediction. JMLR 2024. [2] Penso C, et al. A conformal prediction score that is robust to label noise. arXiv preprint 2024. [3] Sesia M, et al. Adaptive conformal classification with noisy labels. JRSSB 2024. [4] Bortolotti T, et al. Noise-Adaptive Conformal Classification with Marginal Coverage. arXiv preprint 2025. [5] Bhatnagar A, et al. Improved online conformal prediction via strongly adaptive online learning. ICML 2023. [6] Gibbs I, et al. Adaptive conformal inference under distribution shift. NIPS 2021. [7] Angelopoulos A N, et al. Online conformal prediction with decaying step sizes. ICML 2024. --- Rebuttal Comment 1.1: Comment: I sincerely appreciate the authors’ detailed response and clarification. My concerns have been resolved, and I will be raising my score. --- Reply to Comment 1.1.1: Comment: Thank you for raising the score. We sincerely appreciate your time and effort in reviewing our work.
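The unbiasedness property at the heart of the robust pinball loss discussed in this thread (Prop. 3.2) can be illustrated with a generic noise-corrected loss. The sketch below is a standard unbiased correction under uniform label noise with a known rate, written to match the description "a weighted sum of the loss on the noisy score and the loss over all classes"; the noise model (with probability $\epsilon$ the label is resampled uniformly over all $K$ classes) and the exact form are assumptions for illustration, not necessarily the paper's NR-OCP formula.

```python
def pinball(tau, s, alpha=0.1):
    # Pinball loss at quantile level 1 - alpha.
    return (1.0 - alpha) * max(s - tau, 0.0) + alpha * max(tau - s, 0.0)

def robust_loss(tau, scores, y_noisy, eps, alpha=0.1):
    # Weighted combination of the loss at the observed (possibly noisy) label
    # and the average loss over all K classes; under the noise model below,
    # its expectation equals the clean loss pinball(tau, scores[y_true]).
    K = len(scores)
    avg_all = sum(pinball(tau, s, alpha) for s in scores) / K
    return (pinball(tau, scores[y_noisy], alpha) - eps * avg_all) / (1.0 - eps)

# Noise channel: keep the true label w.p. 1 - eps, else resample uniformly
# over all K classes. Enumerate it exactly and check unbiasedness.
scores, y_true, eps, tau = [0.2, 0.7, 0.4, 0.9], 2, 0.3, 0.5
K = len(scores)
p = [eps / K] * K
p[y_true] += 1.0 - eps
expected = sum(p[k] * robust_loss(tau, scores, k, eps) for k in range(K))
print(expected, pinball(tau, scores[y_true]))  # both ≈ 0.01
```

Because the correction term cancels the noise-induced bias exactly, the threshold update driven by this loss behaves, in expectation, as if clean labels were observed; splitting off either piece of the weighted sum breaks this identity, which mirrors the ablation result reported above.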
Self-Consuming Generative Models with Adversarially Curated Data
Accept (poster)
Summary: This paper investigates the effects of adversarially curated data on generative models trained iteratively on synthetic data—referred to as "self-consuming loops." The authors theoretically and experimentally analyze how generative models behave under conditions of noisy and maliciously curated data. They propose algorithms to strategically disrupt a competitor's model training by adversarially manipulating data curation processes. Key findings include the identification of conditions under which models either remain robust and converge to optimize user preferences or become misaligned due to adversarial manipulation. Claims And Evidence: The claims of model robustness and vulnerability under adversarial conditions are supported convincingly through both theoretical analysis (Lemma 3.3 and Lemma 3.4) and experimental validation. The effectiveness of proposed attack algorithms is demonstrated clearly via experiments. Methods And Evaluation Criteria: The methods and evaluation criteria (theoretical analyses and benchmark datasets such as CIFAR-10 and synthetic Gaussian datasets) are highly appropriate and relevant for the studied problem. Theoretical Claims: The methods and evaluation criteria (theoretical analyses and benchmark datasets such as CIFAR-10 and synthetic Gaussian datasets) are highly appropriate and relevant for the studied problem. Experimental Designs Or Analyses: The experimental designs are sound and systematically validate theoretical insights. The comparison between benign, random, and adversarially curated datasets clearly demonstrates the impact of adversarial attacks. Supplementary Material: I did not check it. Relation To Broader Scientific Literature: The paper effectively situates itself within current research, highlighting its novelty as the first exploration of adversarial data curation in the context of self-consuming generative loops. 
It clearly distinguishes its contributions from prior studies (e.g., Ferbach et al., 2024; Wu et al., 2024), emphasizing the novel consideration of adversarial user curation in iterative training loops. Essential References Not Discussed: There are no essential missing references. Other Strengths And Weaknesses: Strengths: - Strong theoretical grounding clearly defining model behavior under adversarial data curation. - Novel adversarial attack algorithms effectively disrupting generative model alignment. - Extensive experimental validation demonstrating clear practical implications. Weaknesses: - The gradient-based attack methods are computationally expensive, potentially limiting their practical deployment. - Experiments are primarily demonstrated on CIFAR-10 and synthetic data; further validation on larger-scale or more diverse datasets would strengthen the claims. Other Comments Or Suggestions: NA Questions For Authors: Have the authors considered strategies to mitigate or detect such adversarial curation attacks in practical scenarios? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and encouraging feedback, especially for recognizing the novelty and effectiveness of our empirical validation. We address the reviewer’s concerns below: > “The gradient-based attack methods are computationally expensive, potentially limiting their practical deployment.” We agree with the reviewer that gradient-based adversarial curation methods can be computationally expensive, especially in large-scale or real-time settings. However, we emphasize that we were aware of this limitation and have addressed it in the paper by proposing alternative attack strategies that do not rely on gradient computations (See Section 4.2. Heuristic methods). While each approach has its own limitations, all were shown to be effective in the experiments. Our work is the first to study curation attacks in the self-consuming generative retraining process. We believe that designing more efficient or scalable attack algorithms is an important direction for future research. > “Experiments are primarily demonstrated on CIFAR-10 and synthetic data; further validation on larger-scale or more diverse datasets would strengthen the claims.” As noted by the reviewer, we currently demonstrate results on CIFAR-10 and synthetic data. We chose these datasets to align with prior works [1, 2], which evaluate the self-consuming retraining loop under benign data curation. We fully agree with the reviewer that evaluating on larger or more diverse datasets would strengthen our findings. However, due to time and compute constraints, we were unable to complete experiments on very large datasets during the rebuttal period. Instead, we have extended our experiments to CIFAR-100, a more diverse dataset with 100 classes. The results can be found at [3], which validate the theorem and demonstrate the effectiveness of the proposed algorithm. 
The observations are consistent with our findings: benign curation gradually steers the class distribution toward user preferences, leading to progressively increasing reward scores. In contrast, adversarial curation with the gradient-based attack disrupts this alignment, depresses reward growth, and drives the model away from the user preferences. > "Have the authors considered strategies to mitigate or detect such adversarial curation attacks in practical scenarios?" While this work focuses on analyzing the vulnerability of self-consuming retraining loops to adversarial data curation, we have indeed considered potential mitigation strategies: - Adding real data: This is a common strategy used in prior works to stabilize the self-consuming retraining loop of generative models [4, 5]. We have evaluated this method in our paper. As shown in the experiments (Fig. 3), adding real data only partially mitigates adversarial effects by driving the model closer to the true data distribution $p\_{data}$. However, it does not fully prevent misalignment. - Anomaly detection methods: Although approaches like outlier detection may help identify and eliminate adversarially curated samples, they can inadvertently remove genuine preferences. When users are heterogeneous and come from multiple groups, removing genuine preferences from minority groups may potentially introduce biases. Additionally, our attack algorithm already considers such defense mechanisms: when formulating optimization (10), we impose a penalty term $\text{dist}(R\_{\theta},\widetilde{R}\_{\widetilde{\theta}})$ to prevent the adversarial behavior from being easily detected as anomalous (see line 261, left column). We believe that designing effective defense mechanisms is an important direction for future research. We would be happy to include the above discussion in the revised version of the paper. [1] Ferbach, D., Bertrand, Q., Bose,A.J., and Gidel, G. 
Self consuming generative models with curated data provably optimize human preferences, 2024. [2] Bertrand, Q., Bose, A.J., Duplessis, A., et al. On the stability of iterative retraining of generative models on their own data, 2024. [3] https://anonymous.4open.science/r/Anonymize_ICML2025-4CC5/CIFAR100.jpg [4] Alemohammad, S., Casco-Rodriguez, J., Luzi, L., et al. Self-consuming generative models go mad, 2023. [5] Bertrand, Q., Bose, A.J., Duplessis, A., et al. On the stability of iterative retraining of generative models on their own data, 2024.
Summary: This paper investigates a novel adversarial model where the data curation process for generating training data for iterative models is adversarially manipulated. The authors show theoretically that the effectiveness of such adversarial manipulations is tied to the covariance between the unmanipulated and manipulated distributions. The theory is extended to account for the case where some fraction of the initial training distribution is always used. Leveraging theoretical insights, the authors propose two methods for how such adversarial manipulation could be performed and empirically demonstrate their efficacy. ## update after rebuttal I appreciate the additional clarifications from the authors. I have no serious concerns (mostly just on nomenclature), and I stand by my original positive review. Claims And Evidence: The theoretical claims are proven in the appendix, and the empirical claims are supported by experiments under two settings: Gaussian Mixture and CIFAR-10. Methods And Evaluation Criteria: Yes, the evaluations are sensible. Theoretical Claims: I did not thoroughly review the proofs attached in the appendix. Experimental Designs Or Analyses: The experimental design is fine; I have one minor concern about how the $\kappa$ parameter is treated, please see “Questions for Authors”. Supplementary Material: No. Relation To Broader Scientific Literature: This work extends the work of Ferbach et al., which investigates the dynamics of iterative retraining with human-curated data, to the adversarial case. More generally, it introduces a new adversarial model. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: One minor note is that the theory seems to assume that training converges to the global optimum. It is unclear how crucial that condition is to the overall theoretical analysis. 
Other Comments Or Suggestions: Please see my comments in “Questions for Authors” Questions For Authors: - It looks like Lemma 3.3 doesn’t necessarily suggest that the distribution will converge to the optimal value, since it is possible for an increasing sequence to converge to a suboptimal value. Is there an additional piece here that ensures optimal convergence? - On page 5, it is stated that $\kappa$ represents the success rate of perturbing the data on the target platform. Doesn’t that mean that you would need to account for the fact that the attacker does not have control over which manipulated samples are curated? It seems as though it is treated more like a budget in the experiments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed feedback and positive assessment of our work. We appreciate the recognition of our theoretical analysis, experimental design, and the contribution of introducing a new adversarial model in the context of iterative retraining. We address the reviewer's concerns below: > "One minor note is that the theory seems to assume that training converges to the global optimum. It is unclear how crucial that condition is to the overall theoretical analysis." The reviewer is correct. Our theoretical analysis assumes that each model update converges to the global optimum of the training objective. (i.e., $p\_t$ maximizes data log-likelihood). This is a standard simplification in prior works [1, 2] to facilitate the analytical analysis. However, we agree that this assumption may not always hold in practice. Nevertheless, we emphasize that our main theoretical insights remain valid as long as each model update sufficiently approximates the optimum. In fact, our empirical results show that the observed behaviors persist even when training is noisy or approximate. We appreciate the reviewer for highlighting this point, and we will clarify this assumption and its implications in the revised version. > “Lemma 3.3 doesn’t necessarily suggest that the distribution will converge to the optimal value.” Yes, Lemma 3.3 does not claim that the iteratively retrained model converges to the optimum; instead, it only characterizes the relationship between the expected reward $\mathbb{E}\_{p\_{t}}\left[e^{r(x)}\right]$ over two consecutive time steps. As discussed in lines 172-185 (left column), whether the expected reward converges to the maximum depends on the covariance term $\operatorname{Cov}\_{p\_{t}}\left[e^{r(x)},e^{\widetilde{r}\_{t}(x)}\right]$. 
When the covariance is positive at every step, the expected reward increases and the variance decreases, indicating convergence toward the optimal distribution (which aligns with the findings in [1]). However, if the covariance becomes negative, the expected reward may oscillate and deviate from the maximum value. Indeed, our attack algorithm is developed based on these theoretical insights: when misaligning the model from human preferences, optimization (10) aims to flip human preference labels such that the covariance is as negative as possible. > About $\kappa$: the success rate The reviewer’s understanding is correct. In real-world settings, the attacker typically does not have full control over which adversarial samples are ultimately curated on the target platform. This is exactly why we avoid referring to $\kappa$ as *budget*, and instead interpret it as the success rate of perturbing the data on the target platform. In the experiments, we treat $\kappa$ as a controllable parameter to analyze how different attack intensities impact model alignment. We will add this clarification in the revised manuscript. [1] Ferbach, D., Bertrand, Q., Bose,A.J., and Gidel, G. Self consuming generative models with curated data provably optimize human preferences, 2024. [2] Bertrand, Q., Bose, A.J., Duplessis, A., et al. On the stability of iterative retraining of generative models on their own data, 2024.
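The sign argument above can be checked on a toy discrete example. Under an exponential-tilting update $p_{t+1}(x)\propto p_t(x)e^{\widetilde r(x)}$, used here only as a simplified stand-in for the paper's curation dynamics, the one-step change in expected reward equals $\operatorname{Cov}_{p_t}[e^{r},e^{\widetilde r}]/\mathbb{E}_{p_t}[e^{\widetilde r}]$ exactly, so benign curation ($\widetilde r=r$) increases the expected reward while flipped preferences ($\widetilde r=-r$) decrease it:

```python
import math

def tilt(p, r_tilde):
    # One curation step as exponential tilting: p'(x) ∝ p(x) * exp(r_tilde(x)).
    w = [pi * math.exp(rt) for pi, rt in zip(p, r_tilde)]
    z = sum(w)
    return [wi / z for wi in w]

def expect(p, vals):
    return sum(pi * v for pi, v in zip(p, vals))

p = [0.25, 0.25, 0.25, 0.25]        # current model distribution p_t
r = [0.0, 0.5, 1.0, 1.5]            # true reward r(x)
er = [math.exp(v) for v in r]

for r_tilde in (r, [-v for v in r]):  # benign vs. preference-flipped curation
    ert = [math.exp(v) for v in r_tilde]
    cov = expect(p, [a * b for a, b in zip(er, ert)]) - expect(p, er) * expect(p, ert)
    delta = expect(tilt(p, r_tilde), er) - expect(p, er)
    # delta and cov / E[e^{r_tilde}] agree exactly; the sign follows the covariance
    print(round(delta, 6), round(cov / expect(p, ert), 6))
```

With the flipped reward, $e^{r}$ and $e^{-r}$ are anticomonotone, so the covariance is negative and the expected reward drops, which is the mechanism the gradient-based attack exploits when driving the covariance term negative.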
Summary: This work proposes a method for adversarial attack defense when training generative models. Experimental results on synthetic and real datasets show its effectiveness.

Claims And Evidence: All claims have support in the paper.

Methods And Evaluation Criteria: The method makes sense for current generative models.

Theoretical Claims: I did not check the proofs carefully, but took a quick look and did not notice obvious errors.

Experimental Designs Or Analyses: The utilized datasets are very simple. I suggest utilizing datasets that contain more categories, such as ImageNet, for better evaluation. The introduced baseline, DDPM, might not represent all generative models, while methods based on GANs and VAEs have also attracted much research attention recently.

Supplementary Material: No.

Relation To Broader Scientific Literature: It is a new method for generative models' adversarial attack field.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: None.

Other Comments Or Suggestions: As mentioned, I suggest considering more experiments.

Questions For Authors: I have no more questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments and for recognizing our work as "a new method in generative models' adversarial attack field." We now address the reviewer's concerns regarding the experiments:

> On dataset diversity and model generality

Our experimental setup follows prior work [1, 2], which also uses synthetic and CIFAR-10 datasets to analyze self-consuming retraining. In addition to reproducing the expected reward maximization under benign curation, our experiments also demonstrate the impact of adversarial curation, compare different attack strategies, and show that the model outputs remain visually plausible under attack. We acknowledge that including a broader range of generative models, such as GANs and VAEs, would make our evaluation more comprehensive and convincing. While our experiments present results using DDPM, we adopt a theoretical framework that does not depend on specific architectures. Our analysis applies to any likelihood-based model, including VAEs. Extending this work to GANs would require reformulating the training dynamics, which we leave for future work. Similarly, we agree that evaluating on more complex datasets (such as ImageNet) would strengthen the empirical validation of our method. However, due to time and compute constraints, we were unable to complete these experiments during the rebuttal period. Instead, we have extended our experiments to CIFAR-100, a more diverse dataset with 100 classes. The results can be found at [3]; they validate the theorem and demonstrate the effectiveness of the proposed algorithm. The observations are consistent with our findings: benign curation gradually steers the class distribution toward user preferences, leading to progressively increasing reward scores. In contrast, adversarial curation with the gradient-based attack disrupts this alignment, depresses reward growth, and drives the model away from the user preferences.

[1] Ferbach, D., Bertrand, Q., Bose, A. J., and Gidel, G. Self-consuming generative models with curated data provably optimize human preferences, 2024.

[2] Bertrand, Q., Bose, A. J., Duplessis, A., et al. On the stability of iterative retraining of generative models on their own data, 2024.

[3] https://anonymous.4open.science/r/Anonymize_ICML2025-4CC5/CIFAR100.jpg
Summary: The paper studies the problem of iteratively retraining generative models on their own synthetic data, in the specific setting where the synthetic data have been adversarially curated, e.g., a competing platform gives random or adversarial feedback when users vote for their favorite image on MidJourney. In this setting, the authors theoretically show that the iteratively trained generative model learns the curation mixture probability. In addition, they provide experiments to illustrate that iterative retraining on adversarially curated data does not maximize the initial, non-adversarial reward.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Results seem correct. I foresee no issue. I checked the proofs of Lemmas 3.1-3.3.

Experimental Designs Or Analyses: Yes.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: The broader literature is correctly addressed.

Essential References Not Discussed: NA

Other Strengths And Weaknesses:

Strengths: The paper is very well written and very clear. The proofs are correct and well supported by the experiments.

Weaknesses: My main concern is the **plausibility of the proposed scenario** and the impact of the reached conclusions: do we know how frequently these curation attacks are likely to happen? Even if these curations happen, the conclusion feels a bit "obvious": "if someone is injecting bad data, then iterative retraining of generative models will fail to maximize the reward". These questions seem especially relevant since the setting, proofs, and experiments are mostly incremental with respect to Ferbach 2024. I would be eager to learn about academic references backing the proposed attack scenario; do the authors have in mind Carlini 2024's implementation of attacks?

Carlini, N., Jagielski, M., Choquette-Choo, C. A., et al. Poisoning web-scale training datasets is practical. In: 2024 IEEE Symposium on Security and Privacy (SP). IEEE, 2024. pp. 407-425.

Other Comments Or Suggestions: NA

Questions For Authors: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and encouraging feedback, especially the positive comments on the theoretical analysis and experimental design, and for describing our paper as "well written and very clear". We provide clarifications to address the main concerns:

> "Do we know how frequently these curation attacks are likely to happen?"

In practice, the frequency of curation attacks depends on how often a model is iteratively retrained using human-curated feedback. Precisely quantifying this frequency requires collecting real-world data from the target platform, which is an interesting direction for future research. Nevertheless, we emphasize that the proposed scenario is realistic: as generative models increasingly rely on human feedback for training and updates, this opens an opportunity for curation attacks. For example, InstructGPT [1] and LLaMA-2-Chat [2] are fine-tuned using curated human preferences; Pick-a-Pic [3] and Rich Feedback for Text-to-Image [4] demonstrate how human judgments can guide and refine image generation. If users can influence model updates through their preferences, then malicious feedback, curated by adversarial users such as those employed by competing platforms, may steer the model away from genuine user intent. Our work addresses a timely and critical question: under what conditions does adversarial curation impact the iterative training of generative models intended to align with benign user preferences?

> "Do the authors think about Carlini 2024 implementation of attacks?"

We thank the reviewer for suggesting [5], a valuable and highly relevant reference. Their work demonstrates that poisoning large-scale training datasets is not merely a theoretical concern but a practical threat. They show how adversaries can inject poisoned examples into datasets at minimal cost by exploiting web-based data collection mechanisms. While we did not cite [5] in the current version of our paper, we have discussed related dataset poisoning studies in the background and related work. We will revise the paper to include [5] and clarify the differences between their work and ours. Unlike [5], which focuses on poisoning static datasets during pretraining, our attack operates in an iterative retraining setting, where models continuously adapt based on user feedback. Notably, our attacker does not require access to data collection pipelines or backend systems; instead, they can act entirely through public feedback mechanisms, such as voting or ranking systems. This makes our approach more practical and harder to detect, as it unfolds gradually over time without direct data manipulation. Whereas traditional poisoning attacks often aim to induce outright model failure, our objective is more subtle: gradually misaligning the model from genuine user preferences. In competitive settings, such gradual misalignment can be highly damaging while remaining difficult to trace.

> "The conclusion feels a bit 'obvious'"

While the conclusion that adversarial curation can lead to misalignment may seem intuitive, our work goes further by formally characterizing when and how this misalignment occurs. First, we clarify that such misalignment is not inevitable. According to Lemma 3.3, misalignment arises only under specific conditions: $\operatorname{Cov}\_{p\_{t}}\left[e^{r(x)},e^{\widetilde{r}\_{t}(x)}\right] < 0$. When $\operatorname{Cov}\_{p\_{t}}\left[e^{r(x)},e^{\widetilde{r}\_{t}(x)}\right] \geq 0$, adversarial curation can still allow the model to converge to the optimum that maximizes the reward (at a slower convergence rate); this is verified in Fig. 5 and highlights the inherent robustness of the iterative retraining process. Another less intuitive finding is that, while prior work on self-consuming generative models suggests that adding real data can effectively stabilize training [6, 7], we demonstrate that simply incorporating real data does not mitigate the effects of adversarially curated data.

[1] Ouyang, L., Wu, J., Jiang, X., et al. Training language models to follow instructions with human feedback, 2022.

[2] Touvron, H., Martin, L., Stone, K., et al. Llama 2: Open foundation and fine-tuned chat models, 2023.

[3] Kirstain, Y., Polyak, A., Singer, U., et al. Pick-a-pic: An open dataset of user preferences for text-to-image generation, 2023.

[4] Liang, Y., He, J., Li, G., et al. Rich human feedback for text-to-image generation, 2024.

[5] Carlini, N., Jagielski, M., Choquette-Choo, C. A., et al. Poisoning web-scale training datasets is practical, 2024.

[6] Alemohammad, S., Casco-Rodriguez, J., Luzi, L., et al. Self-consuming generative models go mad, 2023.

[7] Bertrand, Q., Bose, A. J., Duplessis, A., et al. On the stability of iterative retraining of generative models on their own data, 2024.
LLM Enhancers for GNNs: An Analysis from the Perspective of Causal Mechanism Identification
Accept (poster)
Summary: This paper explores the use of large language models (LLMs) as feature enhancers for graph neural networks (GNNs) in graph representation learning, addressing the fundamental properties of this approach using the interchange intervention method from causality theory. To facilitate analysis, the authors construct a synthetic graph dataset with controllable causal relationships, enabling precise manipulation of semantic structures. Through systematic interchange interventions, they investigate the correspondence between the LLM-enhancer-plus-GNN model and a high-order causal model, uncovering the internal logical structure of the black-box neural network. Based on these insights, they propose a plug-and-play optimization module to improve information transfer between the LLM enhancer and the GNN. Experimental validation across multiple datasets and models demonstrates the effectiveness of this module, offering a deeper understanding and improved performance of LLM-enhanced GNNs.

Claims And Evidence: The paper provides clear and structured evidence for its claims through a combination of theoretical analysis, synthetic dataset experiments, and empirical validation across multiple models and datasets. The use of a synthetic graph dataset with controllable causal relationships strengthens the credibility of the analysis by allowing precise manipulation of semantic structures and causal dependencies.

Methods And Evaluation Criteria: The entire analysis is conducted on a specific dataset, namely the Controlled Causal-Semantic Graph. My concern is the difference between a causal-semantic graph and a causal graph. Based on my understanding of this dataset, the causality is still represented in a semantic way rather than reflecting the actual mechanism.

Theoretical Claims: Yes

Experimental Designs Or Analyses: The proposed analysis framework should be more generalizable, and the Controlled Causal-Semantic Graph dataset should be one of its implementations. Alternatively, in my opinion, the Controlled Causal-Semantic Graph dataset should be used to evaluate the proposed analysis framework.

Supplementary Material: Yes, details of the CCSG dataset and extra experiments.

Relation To Broader Scientific Literature: The key contributions of this paper build upon and extend prior research in understanding how LLM enhancers contribute to GNNs through a causal lens.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: The paper presents a novel perspective by applying causal analysis through interchange interventions to investigate the role of LLM-enhanced GNNs. The originality lies in its effort to bridge causality and graph representation learning, particularly in quantifying how LLM-derived features align with causal structures in GNNs. The introduction of the Controlled Causal-Semantic Graph (CCSG) dataset is another notable contribution, as it enables a structured evaluation of causal relationships in graph-based learning. However, one limitation is the generality of the findings, as the analysis is heavily dependent on the CCSG dataset. While synthetic datasets allow for precise control over causal structures, it remains unclear whether the insights generalize to structured causal DAGs rather than a semantic causal graph.

Other Comments Or Suggestions: Line 251: LM-enhancer-plus-GNN -> LLM-enhancer-plus-GNN

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your analysis, your suggestions for revising our paper, and your support. We greatly appreciate your feedback. Below, we address each of the raised concerns.

**Methods And Evaluation Criteria**: In the appendix, we provide a detailed description of the composition of our dataset. In fact, the semantic information in the Controlled Causal-Semantic Graph dataset is primarily reflected in the semantic richness of its node features. The relationships and labels in the graph are determined based on the entry categories and associations provided by Wikipedia, following a deterministic approach to ensure causality.

**Experimental Designs Or Analyses**: Our proposed method can be applied to any graph dataset capable of abstracting higher-order causal models, a point we will clearly mention in the paper. Achieving this currently requires synthetic graph datasets with controllable causal relationships, where node features must contain rich semantic information to demonstrate the effectiveness of LLMs. Due to the limited availability of such datasets, we built our own CCSG dataset. To validate the generalizability of our method, we incorporated node data from the ogbn-arxiv dataset and re-ran the node-level experiments; we also conducted the experiments using random data. The results are as follows:

**Table 1: Results of node-level experiments, where $Z^{h}$ corresponds to the output variables $\Psi_1$, $\Psi_2$, and $\Psi_3$ of $h^{\text{node,1}}(\cdot)$, using OGBN-Arxiv data as node features. These results are consistent with those in the paper. Bold values indicate the minimum $\mathcal{L}_{\text{II}}$.**

| | Layer 0 | Layer 1 | Layer 2 | Layer 3 | Layer 4 |
|--------|---------|---------|---------|---------|---------|
| $\Psi_1$ | **0.123** | 0.223 | 0.143 | 0.134 | 0.136 |
| $\Psi_2$ | **0.366** | 1.337 | 0.956 | 0.998 | 1.332 |
| $\Psi_3$ | **0.612** | 2.747 | 2.131 | 2.324 | 2.653 |

**Table 2: Results of node-level experiments, where $Z^{h}$ corresponds to the output variables $\Phi_1$ and $\Phi_2$ of $h^{\text{node,2}}(\cdot)$, using random data as node features. These results are consistent with those in the paper. Bold values indicate the minimum $\mathcal{L}_{\text{II}}$.**

| | Layer 0 | Layer 1 | Layer 2 | Layer 3 | Layer 4 |
|--------|---------|---------|---------|---------|---------|
| $\Phi_1$ | 1.002 | **1.202** | 1.494 | 1.625 | 2.139 |
| $\Phi_2$ | 0.203 | **0.211** | 0.214 | 0.216 | 0.393 |

We will add these results to the paper as well.
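For readers unfamiliar with the interchange intervention loss $\mathcal{L}_{\text{II}}$ reported in these tables, the sketch below gives a minimal, self-contained illustration of the procedure on a toy NumPy network of our own devising (not the paper's LLM-enhancer-plus-GNN pipeline): activations from a source run are patched into a base run at a given layer, and the patched output is compared against the counterfactual output predicted by the high-level causal model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer network standing in for the low-level model; the real analysis
# patches representations inside the LLM-enhancer-plus-GNN pipeline.
Ws = [rng.normal(size=(8, 8)) * 0.5 for _ in range(3)]

def forward(x, patch_layer=None, patch_value=None):
    """Forward pass; optionally overwrite part of one layer's activation."""
    h, acts = x, []
    for k, W in enumerate(Ws):
        h = np.tanh(W @ h)
        if k == patch_layer:
            h = h.copy()
            h[:4] = patch_value[:4]  # intervene on a sub-block only
        acts.append(h)
    return h, acts

def interchange_loss(base_x, source_x, layer, high_level_cf):
    """L_II at one layer: patch source activations into the base run and
    measure disagreement with the high-level counterfactual output."""
    _, src_acts = forward(source_x)
    patched_out, _ = forward(base_x, patch_layer=layer, patch_value=src_acts[layer])
    return float(np.mean((patched_out - high_level_cf) ** 2))

base_x, source_x = rng.normal(size=8), rng.normal(size=8)
# For illustration we use the source run's own output as the high-level
# counterfactual; in the paper it comes from the abstracted causal model.
high_level_cf, _ = forward(source_x)
losses = [interchange_loss(base_x, source_x, k, high_level_cf) for k in range(3)]
print([round(l, 3) for l in losses])
```

The layer with the smallest loss is the one whose representation best aligns with the targeted high-level variable, which is how the tables above are read (row-wise minima in bold).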
Summary: This paper presents a valuable analysis of the LLM-enhancer-plus-GNN framework, exploring its underlying mechanisms and identifying potential areas for improvement. The use of the CCSG dataset and the interchange intervention method provides a novel approach to understanding the relationship between LLMs and GNNs. The proposed AT module offers a practical solution to enhance the information transfer between these components, leading to improved performance. The paper is well written and the results are presented clearly.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: Overall, the methods and evaluation criteria proposed in this paper are reasonable. Two additional suggestions: 1. Extend the analysis to other benchmarks: evaluate the performance of the method on more datasets, such as ogbn-arxiv. 2. Compare with other methods: conduct a more comprehensive comparison with other LLM-enhancer-plus-GNN methods.

Theoretical Claims: The theoretical claims proposed by the authors are relatively reasonable. In the node-level and graph-level analysis, the authors used the CCSG dataset and causal modeling methods to analyze the LLM-enhancer-plus-GNN framework, revealing that for a fixed-parameter LLM enhancer, the features output by the LLM represent information at both the node level and the raw-data level.

Experimental Designs Or Analyses: Lacks an analysis of the hyperparameter $q$: in Chapter 4, the impact of setting the hyperparameter $q$ on the results is not sufficiently discussed.

Supplementary Material: All.

Relation To Broader Scientific Literature: The authors evaluated the impact of modifying the features transmitted from the LLM enhancer to the GNN on model performance, and discovered that token position selection in the LLM enhancer output has a significant effect on model performance. This finding is novel and can be used to improve the performance of the LLM-enhancer-plus-GNN framework.

Essential References Not Discussed: To my understanding, there are none.

Other Strengths And Weaknesses: None.

Other Comments Or Suggestions: None.

Questions For Authors: None.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your analysis, your suggestions for revising our paper, and your support. We greatly appreciate your feedback. Below, we address each of the raised concerns.

**Methods And Evaluation Criteria:**

1. Thank you for your suggestions. We have incorporated data from the ogbn-arxiv dataset and conducted additional experiments. The results are as follows:

**Table 1: Results of node-level experiments, where $Z^{h}$ corresponds to the output variables $\Psi_1$, $\Psi_2$, and $\Psi_3$ of $h^{\text{node,1}}(\cdot)$, using OGBN-Arxiv data as node features. These results are consistent with those in the paper. Bold values indicate the minimum $\mathcal{L}_{\text{II}}$.**

| | Layer 0 | Layer 1 | Layer 2 | Layer 3 | Layer 4 |
|--------|---------|---------|---------|---------|---------|
| $\Psi_1$ | **0.123** | 0.223 | 0.143 | 0.134 | 0.136 |
| $\Psi_2$ | **0.366** | 1.337 | 0.956 | 0.998 | 1.332 |
| $\Psi_3$ | **0.612** | 2.747 | 2.131 | 2.324 | 2.653 |

We also conducted experiments using random data, and the results are as follows:

**Table 2: Results of node-level experiments, where $Z^{h}$ corresponds to the output variables $\Phi_1$ and $\Phi_2$ of $h^{\text{node,2}}(\cdot)$, using random data as node features. These results are consistent with those in the paper. Bold values indicate the minimum $\mathcal{L}_{\text{II}}$.**

| | Layer 0 | Layer 1 | Layer 2 | Layer 3 | Layer 4 |
|--------|---------|---------|---------|---------|---------|
| $\Phi_1$ | 1.002 | **1.202** | 1.494 | 1.625 | 2.139 |
| $\Phi_2$ | 0.203 | **0.211** | 0.214 | 0.216 | 0.393 |

We will include these results in our paper.

2. We compared our results with TAPE, and the outcomes are as follows:

**Table 3: Results of node-level experiments, where $Z^{h}$ corresponds to the output variables $\Psi_1$, $\Psi_2$, and $\Psi_3$ of $h^{\text{node,1}}(\cdot)$, using TAPE as the baseline. The results show that, due to its complexity, the TAPE method aligns with later layers of the neural network compared to those reported in the paper. We will include the relevant analysis in our paper. Bold values indicate the minimum $\mathcal{L}_{\text{II}}$.**

| | Layer 0 | Layer 1 | Layer 2 | Layer 3 | Layer 4 |
|---------------|---------|---------|---------|---------|---------|
| $\Psi_1$ | 1.620 | **1.454** | 1.468 | 1.544 | 2.911 |
| $\Psi_2$ | **1.276** | 1.387 | 1.389 | 1.312 | 1.511 |
| $\Psi_3$ | **1.380** | 1.614 | 1.621 | 1.399 | 1.946 |

**Experimental Designs Or Analyses:** We have included the experimental results for this hyperparameter and will incorporate them into our paper:

**Table 4: Accuracy for different values of $q$ (the number of prompts), using Llama3 as the enhancer. Results are reported as mean ± standard deviation over 10 trials.**

| Dataset | q=1 | q=2 | q=3 | q=4 |
|---------|----------------|----------------|----------------|----------------|
| Cora | 84.94±0.81 | **86.45±0.49** | 85.79±0.51 | 83.33±0.23 |
| Pubmed | 83.84±0.77 | **84.32±1.06** | 83.98±0.39 | 82.58±0.44 |

We will also provide a detailed analysis of the hyperparameters. Thank you for your suggestion.

---

Rebuttal Comment 1.1: Comment: Thanks for your response, and I have no further questions.
Summary: This paper proposes a new analysis tool for LLM enhancers for GNNs, based on causal theory. The proposed method is evaluated on one synthetic dataset generated by the authors.

Claims And Evidence: N.A.

Methods And Evaluation Criteria: No. The proposed method is only evaluated on one synthetic dataset, which I think is not enough to show its generality and effectiveness.

Theoretical Claims: No. I do not have a background in the related theories.

Experimental Designs Or Analyses: Refer to "Methods And Evaluation Criteria".

Supplementary Material: N.A.

Relation To Broader Scientific Literature: The analysis of the interpretability of LLM enhancers for GNNs is beneficial for this research area.

Essential References Not Discussed: N.A.

Other Strengths And Weaknesses: N.A.

Other Comments Or Suggestions: N.A.

Questions For Authors: N.A.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you very much for your analysis and suggestions for revising our paper. We greatly appreciate your feedback. Below, we address each of the raised concerns.

**Methods And Evaluation Criteria:** Regarding the AT module, we conducted an experimental analysis using several public datasets. To demonstrate the generalizability of the proposed analytical method, we first note that it can be applied to any graph dataset capable of representing higher-order causal models. Currently, validating this requires synthetic graph datasets with controllable causal relationships, where node features are rich in semantic information to effectively showcase the power of large language models. However, due to the limited availability of such datasets, we constructed our own CCSG dataset. To further establish the robustness of our approach, we expanded our evaluation by incorporating node data from the ogbn-arxiv dataset and re-executing the node-level experiments. Additionally, we conducted experiments using random data. The results are as follows:

**Table 1: Results of node-level experiments, where $Z^{h}$ corresponds to the output variables $\Psi_1$, $\Psi_2$, and $\Psi_3$ of $h^{\text{node,1}}(\cdot)$, using OGBN-Arxiv data as node features. These results are consistent with those in the paper. Bold values indicate the minimum $\mathcal{L}_{\text{II}}$.**

| | Layer 0 | Layer 1 | Layer 2 | Layer 3 | Layer 4 |
|--------|---------|---------|---------|---------|---------|
| $\Psi_1$ | **0.123** | 0.223 | 0.143 | 0.134 | 0.136 |
| $\Psi_2$ | **0.366** | 1.337 | 0.956 | 0.998 | 1.332 |
| $\Psi_3$ | **0.612** | 2.747 | 2.131 | 2.324 | 2.653 |

**Table 2: Results of node-level experiments, where $Z^{h}$ corresponds to the output variables $\Phi_1$ and $\Phi_2$ of $h^{\text{node,2}}(\cdot)$, using random data as node features. These results are consistent with those in the paper. Bold values indicate the minimum $\mathcal{L}_{\text{II}}$.**

| | Layer 0 | Layer 1 | Layer 2 | Layer 3 | Layer 4 |
|--------|---------|---------|---------|---------|---------|
| $\Phi_1$ | 1.002 | **1.202** | 1.494 | 1.625 | 2.139 |
| $\Phi_2$ | 0.203 | **0.211** | 0.214 | 0.216 | 0.393 |

We will add these results to the paper as well.
Teaching Language Models to Critique via Reinforcement Learning
Accept (poster)
Summary: This paper introduces CTRL (Critic Training via Reinforcement Learning), a framework designed to train critic models for iterative refinement in code generation tasks. The authors propose a two-stage pipeline: supervised fine-tuning (SFT) using execution-guided critique synthesis and reinforcement learning (RL) with Group Relative Policy Optimization (GRPO). CTRL decouples the critic model from the task-performing model, enabling it to provide actionable feedback that improves solution quality without human supervision. Experimental results on multiple programming benchmarks demonstrate significant improvements in pass rates, reduced compounding errors, and generalization capabilities across both base and stronger generator models.

Claims And Evidence: The claims made in the paper are generally well supported by experimental results. The authors demonstrate that CTRL-trained critics improve pass rates, reduce error compounding, and generalize across different generator models and benchmarks. However, some claims, such as the scalability of CTRL to broader tasks beyond code generation, are only indirectly supported and lack extensive empirical evidence. Additional validation on more diverse domains could strengthen these claims.

Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for the problem. The use of programming benchmarks like CodeContests, LiveCodeBench, and MBPP+ ensures that the evaluation is rigorous and relevant to the task of code generation. The iterative critique-revision process and the reliance on execution feedback are well aligned with the problem's requirements. However, the reliance on sandbox execution environments might limit scalability to tasks without such clear evaluation metrics.

Theoretical Claims: I did not verify all mathematical details rigorously, and some minor steps in the derivations (e.g., the bias analysis in Equation 2) could benefit from additional clarification. While the claims are likely correct, their presentation could be more transparent for broader accessibility.

Experimental Designs Or Analyses: The experimental design is robust, with clear comparisons between CTRL and baseline methods. The use of multiple benchmarks and generator models adds credibility to the results. However, there are some limitations, such as the lack of ablation studies to isolate the contributions of each component (e.g., SFT vs. RL). Additionally, while the authors analyze error compounding and scalability, the experiments are primarily focused on code generation, limiting generalizability to other domains.

Supplementary Material: No supplementary material.

Relation To Broader Scientific Literature: The paper builds upon prior work in self-improvement of large language models, self-critique, and reinforcement learning for feedback generation. It extends these ideas by introducing a scalable framework that leverages execution feedback and RL. The weak-to-strong generalization phenomenon observed in CTRL aligns with findings in scalable oversight and weak supervision. However, the paper could better contextualize its contributions relative to recent advancements in generative reward models and self-correction in LLMs.

Essential References Not Discussed: Some highly related works are not discussed:
1. CodeDPO: Aligning Code Models with Self Generated and Verified Source Code
2. Training Language Model to Critique with Multi-agent Feedback
3. RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement Learning

Other Strengths And Weaknesses:

Strengths: The paper presents a novel combination of supervised fine-tuning and reinforcement learning for training critics. The experimental results demonstrate significant improvements over baselines, especially in reducing error compounding and enabling multi-turn critique-revision. The framework's ability to generalize across generator models and tasks is a notable strength.

Weaknesses: This paper focuses only on the code generation task, leaving aside other tasks, especially open-domain tasks such as summarization, translation, and alignment.

Other Comments Or Suggestions: No

Questions For Authors: No questions

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback! We appreciate your recognition of our paper’s strengths, especially the `novel combination` of supervised fine-tuning and reinforcement learning for training critics and the `robust experimental design` supporting our claims. Below, we address your specific concerns in detail: **Scalability to broader tasks beyond code generation**: We agree that demonstrating generalization beyond code generation would strengthen our paper. We have conducted **additional experiments** on IFEval [1], a benchmark commonly used for evaluating alignment capabilities [2]. Our results show that, while not explicitly trained on instruction-following tasks, CTRL improves performance on alignment tasks through iterative refinement, demonstrating our approach can generalize to tasks beyond code generation. We will incorporate these findings in our final version to better illustrate CTRL's broader applicability. [1] Zhou J, Lu T, Mishra S, et al. Instruction-following evaluation for large language models [2] Yang A, Yang B, Zhang B, et al. Qwen2. 5 technical report **Sandbox reliance of our method**: While we leverage sandbox execution during training, CTRL can be adapted to tasks without such verification tools through 1) learned reward models for tasks like safety; and 2) reference-based evaluation for tasks with reference responses (e.g., translation) using metrics like ROUGE or BLEU. In fact, our framework is agnostic to the specific reward mechanism as long as it can distinguish between successful and unsuccessful revisions. **Ablation of components (e.g., SFT vs. RL)**: Tables 1 and 2 provide an initial component analysis comparing CTRL-SFT and full CTRL performance. 
Together with our **additional experimental results** on Pass@5 performance in the [anonymous link](https://anonymous.4open.science/r/icml25-rebuttal/anonymous.pdf), Table 2, we observe that: - SFT improves discrimination (F1: 61.19% → 68.55%) and establishes the critique format - RL significantly improves single-turn critique-revision (Pass@1: 8.36% → 11.76%) while reducing regression ($\Delta_\downarrow$: 3.03% → 0.85%) - For multi-sample scenarios, SFT shows larger gains in Pass@5, while RL improves single-sample effectiveness This demonstrates that while SFT provides initial improvement in critique format and discrimination, RL training is crucial for generating feedback that leads to successful revisions. **Essential references not discussed**: We thank the reviewer for suggesting highly relevant papers we overlooked. We will include and discuss these works in our final version: - While CodeDPO improves code generation through DPO with self-generated validation, our work differs by specifically training a critic model to provide human-readable feedback. Besides, CodeDPO relies on larger models for data generation, whereas our approach requires only self-generated critiques for critic training. - MultiCritique shares our goal of training critic models but utilizes a multi-agent framework with GPT-4 for meta-critique. In contrast, CTRL achieves strong performance without requiring access to more powerful models during training. Besides, our method enjoys the simplicity of having one critic model instead of four different models. - While RLEF uses execution feedback to improve generator models directly, we focus on enhancing the feedback capability of LLMs. In this regard, our approaches are complementary - RLEF trains better generators, while CTRL trains better critics. **Contributions relative to recent advancements in generative reward models and self-correction**: We discuss our position relative to generative reward models in Appendix B, Table 6. 
Specifically, our approach uniquely **unifies discrimination and refinement** by generating actionable critiques **without direct human supervision**. Our work differs from recent self-correction approaches: - SCoRe [3] employs multi-turn online RL to improve self-correction, but focuses on the generator directly correcting its outputs. CTRL decouples critique and generation, allowing specialized training of the critic and demonstrating signs of scalable oversight. - GLoRe [4] decomposes refinement into when, where, and how to refine. However, they focus on training both global and local refinement models separately, while CTRL trains a specialized critic model that can identify errors and provide actionable feedback to any generator. [3] Kumar A, Zhuang V, Agarwal R, et al. Training language models to self-correct via reinforcement learning [4] Havrilla A, Raparthy S, Nalmpantis C, et al. Glore: When, where, and how to improve llm reasoning via global and local refinements We again appreciate your thoughtful review and will incorporate your suggestions to strengthen the final version of our paper.
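The Pass@1 and Pass@5 figures discussed in this rebuttal are conventionally computed with the unbiased estimator of Chen et al. (2021); a minimal sketch for context (our illustration, not necessarily the exact evaluation code used here):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k: probability that at least one of k samples drawn
    from n generations (c of them correct) passes, i.e. 1 - C(n-c, k)/C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With `n` total samples per problem, Pass@1 and Pass@5 are then averaged over the benchmark's problems.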
Summary: This paper presents the CTRL framework - a two-stage training approach that separates the critique function of a language model from its generative capabilities. The authors first synthesize high-quality critiques using execution feedback (running code trying to go through unit tests), which are then used in a supervised fine-tuning phase of a dedicated critique model. In the second phase, they use Group Relative Policy Optimization (GRPO) to further refine the critics through reinforcement learning so that the generated critiques directly help the fixed generator model to improve its output (measured in terms of test pass rates). Experiments on multiple coding benchmarks (e.g., CodeContests, LiveCodeBench, MBPP+) and JudgeBench have shown dramatic improvements in the performance of the framework, with reported relative improvements of up to 106% on some metrics. The framework not only improves the pass rate of the base generator, but also generalizes well when applying the critic to stronger models (e.g., GPT-4). In summary, this paper presents a novel, mathematically-based, empirically validated method for guiding LMs to provide actionable critiques that significantly improve the correctness of generated code. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The paper makes reasonable empirical claims about critic training but lacks formal theoretical proofs for its Markov chain analysis and optimization properties. The most significant theoretical gaps are:
- Missing justification for Markov assumptions in iterative refinement
- Unquantified variance reduction claims for GRPO
- Unexamined optimization landscape characteristics

These limitations don't invalidate the experimental results but suggest opportunities for deeper theoretical analysis in future work.
The community would benefit from formal proofs of:
- Markov chain convergence properties
- GRPO variance bounds
- Conditions enabling weak critic supervision of strong generators

Experimental Designs Or Analyses: Yes, no further issues Supplementary Material: Yes, I have reviewed all the supplementary material Relation To Broader Scientific Literature: The paper's key contributions extend prior work on LLM self-improvement by addressing critical limitations identified in existing literature. While earlier methods like self-critique (Madaan et al., 2024) showed theoretical promise for iterative refinement, Huang et al. (2023) demonstrated their practical limitations due to compounding errors: a challenge CTRL directly addresses through its specialized critic training. The framework builds on weak-to-strong generalization concepts (Christiano et al., 2018) but extends them to cross-model critique-revision dynamics, showing smaller critics can effectively guide larger generators. This advances beyond traditional reward modeling approaches (Gao et al., 2023) that used scalar feedback, instead aligning with emerging work on generative reward models (Yu et al., 2024) while eliminating their reliance on human annotations. The test-time scaling mechanism through iterative revisions responds to Snell et al.'s (2024) call for compute-efficient inference methods, but introduces novel critique-driven iteration rather than simple sampling. By formalizing the critique process through Markov chain analysis, the work provides theoretical grounding to empirical observations in code generation studies (Zheng et al., 2024) about the importance of actionable feedback, while offering a generalizable framework that could extend beyond programming domains.
Essential References Not Discussed: No Other Strengths And Weaknesses: ## Strengths - Solid mathematical and theoretical foundation: The iterative improvement process is described through Markov chain modeling, which clearly illustrates the impact of criticism and discrimination ability on the final success rate. Normalization of the policy gradient using GRPO significantly reduces the gradient variance, thus making the reinforcement learning process stable and efficient. - Experimental validation is adequate: Significant improvement is demonstrated on several code generation benchmarks (CodeContests, LiveCodeBench, MBPP+), and the critic's discriminative ability is also verified on JudgeBench. Experiments comparing self-criticism, raw generation, and improvements under different generation models show that CTRL effectively reduces error accumulation and dramatically improves pass rates. ## Weaknesses - Methods are not innovative enough: Although CTRL has done some work in combining execution feedback, supervised fine-tuning and reinforcement learning, it has overall borrowed existing ideas and techniques, such as self-criticism, RLHF, and GRPO methods. The core idea of the method is mainly to combine existing techniques, and it does not propose a completely new mechanism in terms of theory or algorithm, so it may not reach the highest level of innovation. - The theoretical analysis is not deep enough: Although relevant RL objectives and GRPO updating strategies are given, there is a lack of theoretical proof of convergence and optimality of the RL part, and the approach relies more on experimental performance. Other Comments Or Suggestions: - Authors should discuss the possibility of applying CTRL to domains where feedback is less binary or difficult to obtain (e.g., essay writing, dialogue generation). Future work could explore how to build alternative feedback mechanisms in these domains.
- It would be useful to discuss the sensitivity of the method to various hyperparameters (e.g., group size in GRPO, strength of KL regularization) and whether any parts of it can be simplified. - Providing more qualitative examples, and perhaps a manual evaluation of the generated critiques, would help provide insight into what strategies the critics have learned and whether these critiques are interpretable. Questions For Authors: - How do you see the CTRL framework being applied to automate the evaluation of tasks that are not as simple as code? - Can you describe the sensitivity of your method to the parameters in GRPO? Do you observe any significant performance degradation if these parameters are changed? - In addition to improving the pass rate, have you performed a qualitative assessment of the quality of the generated critiques? For example, how well do these critiques agree with human debugging strategies? - How robust is your approach if execution feedback (i.e., test cases) is noisy or incomplete? Have you tested situations where the feedback could be misleading? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comprehensive and thoughtful review! We appreciate your recognition of our work's `solid mathematical and theoretical foundation` and `adequate experimental validation.` We would like to address your concerns in detail: **Theoretical analysis**: Our work is primarily empirical. While we use mathematical frameworks to motivate our approach, our main contributions are demonstrating a practical framework for critic training and empirically validating its effectiveness. To strengthen our theoretical analyses, we discuss the following in more detail: - Markov chain analysis: We provide theoretical analyses of Markov chain convergence properties in the [anonymous link](https://anonymous.4open.science/r/icml25-rebuttal/anonymous.pdf), complementing Figure 3's empirical findings. - GRPO properties: We show that GRPO achieves variance reduction by a factor of $G$, explaining its stability with large critique spaces. Our analysis also demonstrates that GRPO ensures monotonic improvement and converges to a local optimum under standard assumptions. - Weak-to-strong generalization: Our findings that weaker critics can guide stronger generators align with recent work (Burns et al., 2023; Kenton et al., 2024), though theoretical conditions for when this works remain an open research question. **CTRL does reach the highest level of innovation**: Thank you for recognizing our `novel, mathematically-based, empirically validated method.` Our primary contributions include: - Problem formulation: We identify and formalize the critique generation problem through a Markov chain lens, distinguishing between discrimination and critiquing abilities. - System design: We propose a decoupled critic-generator architecture and a two-stage training pipeline specifically designed for critique learning. - Robust training: We demonstrate GRPO's suitability for critique training due to reduced variance in policy gradients, and demonstrate its effectiveness empirically.
- Empirical findings: We provide extensive empirical evidence demonstrating that our framework enables test-time scaling, mitigates compounding errors, and facilitates weak-to-strong generalization. **Applying CTRL to domains where feedback is less binary**: For domains where feedback is less binary, CTRL can leverage alternative reward mechanisms: - Learned reward models: Training a reward model from pairwise preference data (as in RLHF approaches) for tasks like safety or preference alignment. - Reference-based evaluation: For tasks with reference responses (e.g., translation, summarization), we can use metrics like log-likelihood on reference outputs or automated evaluation metrics (e.g., ROUGE and BLEU) as reward signals. CTRL is agnostic to the specific reward mechanism as long as it can distinguish between successful and unsuccessful revisions. This flexibility extends CTRL beyond domains with clear verification tools. Notably, we observe positive generalization results on IFEval, a benchmark outside our training distribution, as shown in our anonymous link. **Sensitivity to GRPO parameters**: We relied on prior work in RL training that identified the most sensitive parameters for training stability. Due to computation constraints, we performed a limited grid search (<10 runs), with final parameter selection based on performance on a TACO test subset. We found our method relatively robust to GRPO hyperparameters within reasonable ranges: - Group size: Larger sizes improve performance (we used 8) but increase computation time. - KL coefficient: While not strictly necessary, the KL term improves stability, and KL=0.001 balances exploration and stability in our experiments. - Learning rate: Most sensitive parameter; we found 1e-5 to be optimal via the grid search. **Qualitative assessment of critiques**: To better assess the quality of the generated critiques, we manually evaluated 50 randomly sampled critiques from CTRL against Qwen2.5-Coder's solutions on CodeContests.
Our analysis revealed CTRL employs diverse debugging strategies: algorithm improvement (43 instances), static analysis (38), strategic debugging (11), and dynamic testing (10). This distribution reveals that CTRL primarily focuses on structural code improvements and algorithmic enhancements rather than superficial issues. The prevalence of static analysis and refactoring strategies also suggests CTRL learns debugging approaches that align with human reasoning patterns. **Robustness to noisy feedback**: While we didn't explicitly test noisy feedback scenarios, our low regression rates ($\Delta_\downarrow$) demonstrate CTRL can distinguish correct implementations from problematic code, avoiding misleading critiques that worsen solutions. This inherently suggests robustness to noise, though we agree more systematic evaluation of noise tolerance is an important direction for future work. Thank you again for your detailed review. We hope our response addresses your concerns and questions. --- Rebuttal Comment 1.1: Comment: Thank you for the response. Most of my concerns have been adequately addressed, and I have raised my score. Although I still find the novelty somewhat limited, I will vote for a weak accept. --- Reply to Comment 1.1.1: Comment: Thank you for reconsidering our work! We are glad that our response has adequately addressed your concerns and remain open to any further questions or clarifications you might need.
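For readers unfamiliar with the group-relative normalization that this exchange credits for GRPO's stability, a generic sketch (standard GRPO-style advantage estimation, not the authors' training code):

```python
def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage estimation: each sampled critique's reward is
    normalized against the mean and standard deviation of its own group of
    G samples, so no separate value network is needed."""
    g = len(rewards)
    mean = sum(rewards) / g
    std = (sum((r - mean) ** 2 for r in rewards) / g) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

With binary pass/fail rewards, critiques that lead to successful revisions receive positive advantages and the rest receive negative ones, which is the signal the policy gradient update uses.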
Summary: The paper presents a method to teach an LLM to critique the response of another LLM, specifically in the domain of coding contest problems. The problem is formalized as maximizing the probability that the latter LLM succeeds at providing a correct response after seeing a critique produced by the former, which is an RL problem. The method consists of an initial SFT stage followed by an RL stage. In the SFT stage, the critique model is trained on filtered critiques produced using a method that includes execution of the code produced by the answer model. In the RL stage the critique model is trained using GRPO to optimize the success of the answer model. The results show a significant boost in performance on CodeContests and MBPP, and to a lesser extent on LiveCodeBench. The performance further increases with multiple rounds of critique and revision (even though this was not done during training). Claims And Evidence: The main claim of the paper, that one can train a model to critique using RL (and SFT), is well supported by the experiments. The motivating ideas and claims made in sections 2 and 3 are very well supported by the cited related works. Methods And Evaluation Criteria: The method is rather straightforward and sensible. Similarly the evaluation datasets and benchmarks are standard (though see comments on experimental design below). Theoretical Claims: The claim on page 5 that the variance of the gradient scales with the size of the answer and critique space is not proven. Other than that the paper does not make significant theoretical claims. Experimental Designs Or Analyses: The paper trains on the TACO dataset and evaluates on CodeContests, but these contain many of the same problems. The appendix states that 47 problems were excluded, but the TACO paper explicitly mentions that about half the TACO problems are also in CC, so I consider this a no-go.
Moreover, the results in Table 3 show that although CTRL performs well on CodeContests and MBPP+ (similar contamination risk), on LiveCodeBench it fares worse than using either GPT-4o or Qwen2.5-Coder as critique models. This suggests that the model has not learned generalizable critiquing strategies that work beyond the narrow (and contaminated) distribution of TACO/CC. Although this is a serious issue that must be discussed in more detail in the paper and acknowledged as a weakness, it is still interesting to see that the model succeeds in-domain. It is quite likely that drastically scaling up the data size and diversity could enable learning of generalizing critiquing policies. Supplementary Material: just skimmed Relation To Broader Scientific Literature: The paper very clearly motivates its approach by citing relevant related work. This is a very nice feature of the paper. Essential References Not Discussed: no Other Strengths And Weaknesses: The experimental section of the paper contains a number of interesting analyses. Other Comments Or Suggestions: - I would not call the critic model Q, since this has a well-established meaning in RL - typo on page 5, "we samples", "and computes" - The equation for J on page 5 shows Q(y|x) but I think it should be Q(c | x, y)? Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review and valuable feedback! We appreciate your recognition that our main claim `is well supported by the experiments` with `a number of interesting analyses` and that the paper `very clearly motivates its approach by citing relevant related work.` We address your primary concerns below: **Proof: Variance of the gradient**: We provide a short proof in our [anonymous link](https://anonymous.4open.science/r/icml25-rebuttal/anonymous.pdf) that justifies our claim about gradient variance scaling with $|\mathcal{Y}| \cdot |\mathcal{C}|$. The key insights from this proof are: - When analyzing the variance of our policy gradient estimator, we find that under mild assumptions it scales proportionally with the product of answer and critique space sizes ($|\mathcal{Y}| \cdot |\mathcal{C}|$). - In cases where covariance terms are significant, the scaling becomes even worse than linear, potentially leading to a super-linear increase in the overall variance. This theoretical analysis provides a mathematical foundation for why standard policy gradient approaches struggle with critique-revision optimization, directly motivating our CTRL design choices that address this variance challenge. We will incorporate the proof in our final version. **Dataset overlap between TACO and CodeContests**: Thank you for raising this question! We agree that while we discuss this issue in the paper and propose a decontamination method to mitigate this, it deserves more attention. We would like to clarify a few points: - **Actual overlap**: While the TACO paper mentions a significant overlap with CodeContests overall, the critical distinction is that we evaluate on CodeContests test set (165 problems), while the reported overlap primarily exists between TACO and CodeContests full set. We excluded the 47 problems we identified as direct overlaps between the TACO train set and the CodeContests test set. 
- **Performance on out-of-distribution benchmarks**: We would like to highlight that our method demonstrates significant improvement over the zero-shot baseline on LiveCodeBench with Qwen2.5-Coder as the generator model (30.54% -> 33.21% in Pass@1), outperforming GPT-4o and Qwen2.5-Coder as critic models. This demonstrates that our method does learn useful critiquing abilities that transfer beyond its training distribution. - **Additional experiment**: To further validate CTRL's generalizability, we have conducted **additional experiments** on IFEval [1], a benchmark for instruction-following capabilities. The results are presented in the anonymous link, Table 1. Despite not being trained on instruction-following data, CTRL boosts accuracy by over 2% for single-turn critique-revision with Qwen2.5-Coder as the generator, outperforming all the baselines including GPT-4o. We agree that scaling up the data size and diversity is a promising direction to further improve the results, and consider this an important direction for future work, which we will highlight in the final version. [1] Zhou J, Lu T, Mishra S, et al. Instruction-following evaluation for large language models[J]. arXiv preprint arXiv:2311.07911, 2023. **Typos and suggestions**: Thank you for your careful review! We appreciate your suggestion about the naming convention, and will rename $Q$ and fix the spotted typos in the final version. Thank you again for your thorough review. Your feedback has helped us strengthen our work and identify important areas for improvement.
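The decontamination step described in this exchange (excluding the 47 TACO/CodeContests overlaps) is typically implemented with normalized exact-match fingerprinting; a hypothetical sketch, not necessarily the authors' actual procedure:

```python
import hashlib

def fingerprint(problem_text: str) -> str:
    """Case- and whitespace-normalized hash, so trivially reformatted
    duplicate problem statements still collide."""
    normalized = " ".join(problem_text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def decontaminate(train_set, test_set):
    """Drop training problems whose fingerprint appears in the test set."""
    test_fps = {fingerprint(p) for p in test_set}
    return [p for p in train_set if fingerprint(p) not in test_fps]
```

Exact-match fingerprinting catches verbatim duplicates only; near-duplicate detection (e.g., n-gram overlap) would be needed for paraphrased problems.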
Summary: Teaching large language models (LLMs) to critique and refine their outputs is crucial for building systems that can iteratively improve, yet it is fundamentally limited by the ability to provide accurate judgments and actionable suggestions. In this work, the authors study LLM critics for code generation and propose CTRL, a framework for Critic Training via Reinforcement Learning, which trains a critic model to generate feedback that maximizes correction performance for a fixed generator model without human supervision. Their results demonstrate that critics trained with CTRL significantly enhance pass rates and mitigate compounding errors across both base and stronger generator models. # Update after rebuttal This paper is novel and clear in presentation, and the rebuttal resolves my concerns. I will keep my positive rating as my final score after the rebuttal phase. Claims And Evidence: I summarize the claims made in the paper, which include: - Challenge: Without appropriate external feedback, such self-improvement loops may lead to performance degradation. - Solution 1: Reward models: reward models compress complex evaluation criteria into simplified numerical signals - Solution 2: Automated verification tools: generate low-level execution traces that do not directly translate to high-level fixes - Important: Feedback needs to both accurately discriminate the correctness of solutions and provide informative yet actionable suggestions for improvement. All the claims align with my previous knowledge and have clear evidence in related works. Methods And Evaluation Criteria: Yes, the critique quality is important for the underlying task-performing model.
Through (1) supervised fine-tuning to format the answer and enhance the discrimination ability with hints, and (2) further RL training of the critique model through GRPO with an advantage-normalized reward calculated by majority voting of additional fixed critique evaluation models, they propose the CTRL framework, a Qwen-2.5-based critique model that surpasses multiple competitors like GPT-4o. The experimental results and additional analyses about compounding errors, Test-time Scaling, and Evaluating Critics as Generative Reward Models demonstrate the effectiveness of the approach. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is abundant and comprehensive. But I have a major concern about the cost: 1. For reward evaluation, the authors use majority voting for each step. Specifically, they generate multiple critiques for each solution and aggregate the results through majority voting. What is the base critic? With the count ranging from 2 to $2^7$, does that cost too much? 2. We know that fine-tuning Qwen2.5-32B is costly with SFT and even more costly with RL; how do the authors have the computing resources to afford full fine-tuning with knowledge distillation from self-critique with execution feedback, or RL training to optimize the GRPO goal with multiple critics? Supplementary Material: I read the supplementary material, which is solid and comprehensive. Relation To Broader Scientific Literature: This work relates to LLM critics for code generation; it applies GRPO to reduce variance through advantages and ensembles, plus chain-of-thought-based prompt engineering with external ground-truth assistance (execution feedback), to improve code generation quality. We note that such a critique model can also serve as a generative reward model for other domain tasks in JudgeBench.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength: The idea is novel and useful for improving performance, with solid experimental demonstration and analysis discussion. Motivation is clear; every step has a clear purpose. Weakness: - The implementation of the methodology and baseline comparison requires large computing resources and long RL training with sensitive hyper-parameter tuning; how the authors overcame these difficulties and conducted the hyper-parameter search for such a complex system with many components would be interesting to know. Other Comments Or Suggestions: The reward calculation is too costly; could the authors just use the sandbox outputs as the correctness signal to save time? Questions For Authors: Please see the weakness part in the "Other Strengths And Weaknesses" section. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your comprehensive and well-thought-out review! We appreciate your recognition of our work's novelty, effectiveness, experimental design, clear motivation, and solid supplementary material. Your positive feedback is truly encouraging. Below, we address your concerns about computational costs and system complexity: **Cost for majority voting in JudgeBench evaluation**: The base critic used for majority voting is our CTRL-trained Qwen2.5-Coder-32B model, the one used throughout our evaluation. The majority voting approach is primarily used for our JudgeBench evaluation (Section 4.3) to calculate pairwise accuracy. While generating multiple critiques may seem costly, modern inference engines significantly reduce this overhead through prefix caching (i.e., the same prompt is prefilled for generating multiple critiques). This majority voting approach is also used in recent work on generative verifiers [Zhang et al., 2024], which observe similar results to ours that majority voting boosts accuracy significantly. Our experiments show that performance plateaus after approximately $2^6$ critiques, with additional samples providing diminishing returns (64.3 at Maj@$2^7$ vs 64.0 at Maj@$2^6$). 
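The Maj@$2^k$ aggregation referenced above reduces to a simple vote over independently sampled critique verdicts; a minimal sketch (our illustration of the general idea, omitting JudgeBench's pairwise protocol details):

```python
from collections import Counter

def majority_vote(verdicts):
    """Aggregate k independent critique verdicts (e.g., 'A' vs 'B' for
    which of two responses is better) into a single decision by majority."""
    return Counter(verdicts).most_common(1)[0][0]
```

Sampling more verdicts per problem trades inference cost for accuracy, which matches the plateau the rebuttal reports around $2^6$ samples.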
For cost comparison with Claude 3.5 Sonnet (which performs similarly on JudgeBench), we calculate per-problem inference costs using [Together AI pricing](https://www.together.ai/pricing) for Qwen2.5-Coder-32B and [Anthropic pricing](https://www.anthropic.com/pricing#anthropic-api) for Claude 3.5 Sonnet: CTRL with Maj@$2^6$: - Average prompt length: 2,067.17 tokens - Average response length per critique: 231.72 tokens - Input/output cost: $0.80/1M tokens - **Cost**: (2067.17 + 231.72 * 64) * 2 * 0.8 / 1M = $0.027 per problem Claude 3.5 Sonnet: - Average prompt length: 1,627.74 tokens - Average response length: 652.97 tokens - Input cost: $3/1M tokens - Output cost: $15/1M tokens - **Cost**: 1627.74 * 2 * 3 / 1M + 652.97 * 2 * 15 / 1M = $0.029 per problem Note that the factor of 2 accounts for calculation on both responses in CTRL and removing positional bias for Claude 3.5 Sonnet. This analysis demonstrates that our method achieves comparable performance to state-of-the-art proprietary models at a similar or slightly lower cost. **Cost for fine-tuning**: We acknowledge that RL training is computationally intensive, especially for large models. To manage this, we implemented several optimizations: - SFT stage: Our SFT on self-critique with execution feedback provides a strong initialization, significantly reducing the need for extensive RL exploration. - Optimization techniques: We leveraged fully sharded data parallelism (FSDP), gradient checkpointing, and sequence packing to reduce memory requirements and speed up training. At each step the critic model and the generator model are offloaded to CPU to save memory, allowing us to train 32B models with a minimum of 8 x 80G GPUs. As open-source RL training frameworks [1,2] continue to evolve, we expect these costs to decrease further in the future. [1] Sheng G, Zhang C, Ye Z, et al. Hybridflow: A flexible and efficient rlhf framework[J]. arXiv preprint arXiv:2409.19256, 2024. 
[2] von Werra L, Belkada Y, Tunstall L, et al. TRL: Transformer Reinforcement Learning[CP/OL]. GitHub repository. GitHub, 2020. https://github.com/huggingface/trl. **Clarification on "multiple critics"**: We want to clarify that our approach uses only a single critic model during training, not multiple critics as may have been implied. We sample multiple critiques from this single model to estimate advantages in GRPO. **Hyperparameter tuning**: For hyperparameter selection, we relied on prior work in RL training that identified the most sensitive parameters for training stability: learning rate, KL coefficient, and group size for GRPO. Due to computation constraints, we performed a very limited grid search over these key parameters (< 10 runs), with final parameter selection based on performance on a TACO test subset. We report the final hyperparameters in Table 9. We agree that more extensive hyperparameter search could potentially improve results. We will improve the final version of our paper with a more detailed description of our grid search process. **Sandbox outputs as correctness signal**: Regarding using sandbox outputs as correctness signals, we would like to clarify two contexts of “reward calculation”: - During CTRL training: We do use sandbox outputs (pass/fail) as the reward signal to train our critic model, as described in Section 3.1. - For JudgeBench evaluation: High-quality unit tests are not available for many tasks, especially general-domain questions. This practical limitation is precisely why CTRL is valuable - it internalizes sandbox signals during training, allowing critic models to generalize to new domains where explicit test cases are not available. Thank you again for your insightful review and positive assessment of our work.
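The per-problem cost figures quoted in this rebuttal follow directly from the stated token counts and per-token prices; the arithmetic can be reproduced as:

```python
# CTRL with Maj@2^6, Together AI pricing: $0.80 per 1M tokens (input and output).
# Factor of 2: the critic scores both responses in a pair.
ctrl_tokens = (2067.17 + 231.72 * 64) * 2
ctrl_cost = ctrl_tokens * 0.80 / 1e6   # ~$0.027 per problem

# Claude 3.5 Sonnet: $3 per 1M input tokens, $15 per 1M output tokens.
# Factor of 2: each pair is judged twice to remove positional bias.
claude_cost = (1627.74 * 3 + 652.97 * 15) * 2 / 1e6   # ~$0.029 per problem
```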
A General Graph Spectral Wavelet Convolution via Chebyshev Order Decomposition
Accept (poster)
Summary: The authors propose a novel spectral graph network, WaveGC, inspired by SWGT. WaveGC filters input features with matrix-valued kernels in the spectral domain and utilizes transformer architectures for wavelet transforms. The wavelet functions are learnable and are parameterized by Chebyshev polynomials with learnable spatial scales and polynomial coefficients. The authors provide theoretical results on the short-range and long-range performance of WaveGC and demonstrate its effectiveness on both node-level and graph-level tasks. Claims And Evidence: The authors supported their claims with evidence. Methods And Evaluation Criteria: The methods and evaluation criteria make sense to me. Theoretical Claims: The theoretical results make sense to me. Experimental Designs Or Analyses: The experimental design and analyses make sense to me. Supplementary Material: Some additional results and related works. Relation To Broader Scientific Literature: The authors proposed a new spectral GNN architecture based on graph wavelets. Good enough for an ICML submission. Essential References Not Discussed: While there were concerns about WaveGC missing the prior work (Bastos et al., 2022) that learns spectral wavelets, I see that the authors have included it as a baseline and shown competitive performance against it. I agree the novelty is somewhat limited along this line, but I think it is fine as long as the authors do not overclaim. Other Strengths And Weaknesses: Strengths: The authors analyze WaveGC theoretically. The method is evaluated on large datasets. The authors also provide details that rationalize WaveGC in Section 5. Weaknesses: The quality of Figure 1 is still not good enough. Other Comments Or Suggestions: Interestingly I am assigned the same paper that I reviewed a year ago. Most of my concerns, e.g. missing evaluation on large datasets and using eigenvalue embeddings, were addressed during the rebuttal period last year and in this new submission. 
Thus I maintain the same score as last year. Questions For Authors: The authors now advertise WaveGC satisfying admissibility criteria as the main advantage against (Bastos et al., 2022). I understand this is theoretically more appealing, but what is the practical advantage of being admissible? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the referee for taking the time to review our paper. Please, see below our answer to the raised comments/questions. > Q1: The quality of Figure 1 is still not good enough. We have improved the image resolution of Figure 1 at https://drive.google.com/file/d/1rfuYedo4FkeLg1zClq45ZOHU6rH-p8L4, and will update the figure in the next version of the paper. > Q2: Practical advantage of being admissible. | | Actor | Tolokers | | :---: | :------------: | :------------: | | DEFT | 35.04±1.89 | 84.32±0.45 | | WaveGC | **37.40±1.04** | **85.38±0.52** | Table 1. Comparing WaveGC and DEFT on heterophily datasets. The wavelet admissibility criterion is not just a theoretical preference—it is a fundamental requirement for a function to be considered a valid wavelet. Without satisfying this condition (i.e., g(0)=0), a function labeled as a "wavelet" **loses its essential localization property**. As discussed in our response to Reviewer 4MPc (Q1), this localization enables node-specific filtering, where each node can capture local deviations from the global signal structure. In contrast, non-admissible constructions (e.g., DEFT [Bastos et al., 2022]) lack this guarantee, leading to **less precise node-level modeling**. The **practical advantage** of admissibility becomes evident in tasks that require strong node-wise representation learning. For instance: $\bullet$ In **short-range node-level tasks** (e.g., CS, Photo, Computer, CoraFull, ogbn-arxiv), WaveGC consistently outperforms baselines. $\bullet$ In **long-range node-level tasks** (e.g., VOC and COCO), WaveGC also demonstrates superior performance. $\bullet$ To further validate the node-level expressiveness, we conducted additional experiments on **heterophilous datasets** (e.g., Actor and Tolokers), where accurate node-level modeling is particularly challenging. Results, shown in Table 1, confirm that WaveGC outperforms DEFT. 
Together, these results across **homophily, heterophily, short-range, and long-range settings** support the claim that admissibility leads to more effective and localized node-wise signal processing, providing WaveGC with a substantial practical advantage over prior non-admissible designs. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses and clarifications. I maintain my positive rating. --- Reply to Comment 1.1.1: Comment: Thank you for your positive evaluation and rating. We will continue to improve the paper based on your and the other reviewers' valuable feedback.
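The admissibility condition g(0)=0 discussed in this thread is easy to check numerically. A minimal sketch with illustrative filter functions (chosen for the example, not WaveGC's learned Chebyshev construction): a band-pass candidate vanishes at the origin, while a heat kernel, being low-pass, does not.

```python
import numpy as np

# Illustrative filters over the normalized-Laplacian spectrum [0, 2];
# neither is the paper's learned construction.
band_pass = lambda lam: lam * np.exp(-lam)   # g(0) = 0: admissible
heat      = lambda lam: np.exp(-1.0 * lam)   # g(0) = 1: low-pass, not admissible

lams = np.linspace(0.0, 2.0, 201)
assert np.isclose(band_pass(0.0), 0.0)
assert not np.isclose(heat(0.0), 0.0)
# The band-pass candidate suppresses both spectrum ends and peaks in between:
assert np.isclose(lams[np.argmax(band_pass(lams))], 1.0)
```

The vanishing value at λ=0 is what gives wavelets the localization property argued for above; the heat kernel's g(0)=1 is exactly the point raised later against GWNN.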
Summary: This paper proposes a wavelet-based graph convolution method, WaveGC, which integrates spectral bases and matrix-valued kernels. The authors use the odd and even terms of Chebyshev polynomials to learn graph wavelets that satisfy the necessary conditions. Experimental results demonstrate that the proposed method achieves promising improvements on both short-range and long-range tasks. Claims And Evidence: The authors discuss the limitations of Fourier bases in Section 3.1, arguing that they lack multi-resolution and adaptability. This serves as the main motivation for exploring graph wavelets in this paper. However, to my knowledge, FFT bases are also fixed, much like graph Fourier bases. Additionally, while graph wavelets are theoretically more flexible than graph Fourier bases, they do not seem to offer significant practical advantages. Notably, the authors' experiments do not provide a comparison with graph Fourier base-based methods, which could have helped to highlight the benefits of wavelet-based methods. Methods And Evaluation Criteria: This paper introduces novel wavelet-based graph convolution methods and proposes using odd and even Chebyshev polynomials to learn graph wavelets. This approach offers a new method for graph convolution techniques and holds some implications for the graph learning community. Theoretical Claims: The main theoretical contribution of this paper is Theorem 4.2. I reviewed the corresponding proof and found no issues with it. Experimental Designs Or Analyses: The main issue in the experimental section is the absence of baselines based on Fourier bases, including polynomial approximation methods such as GPR-GNN(https://arxiv.org/abs/2006.07988), BernNet(https://arxiv.org/abs/2106.10994), UniFilter(https://arxiv.org/abs/2405.12474), S²GCN(https://arxiv.org/abs/2405.19121), and others. 
Supplementary Material: I checked the code provided by the authors, and they report the hyperparameters for each dataset, which facilitates reproducing the results of the paper. However, I did not attempt to reproduce the results myself. Relation To Broader Scientific Literature: The discussion in this paper is somewhat limited to the narrow scope of graph wavelets. If the authors intend to explore spectral-based graph convolution methods, they should focus on comparing them with existing state-of-the-art methods based on Fourier bases, as this could lead to meaningful advancements in the field of graph learning. Essential References Not Discussed: There is no lack of essential references, and the authors have considered the key works in the field of graph wavelets. Other Strengths And Weaknesses: The significant computational complexity of eigenvalue decomposition remains a key drawback, although it is present in most graph wavelet methods. The proposed method in this paper faces challenges in scaling to large graphs. In contrast, graph Fourier-basis-based methods can be easily scaled to graphs with hundreds of millions of nodes. Other Comments Or Suggestions: Minor comments: The $\mathbf{I}_n$ on the left of line 108 should be written as $\mathbf{I}_N$. The K on the left in line 285 should be in math font. Questions For Authors: 1. Further discussion is needed on how the graph Fourier-basis-based methods compare with the method proposed in this paper, both theoretically and experimentally. This comparison should especially contain some of the latest approaches in the field. 2. The functions $g$ and $h$ approximated by odd and even Chebyshev polynomials are sufficient but not necessary conditions for any function $g$ and $h$ that satisfy the graph wavelet condition. Does this imply that using odd and even Chebyshev polynomials to approximate $g$ and $h$ does not guarantee that any function meeting the requirements will be learned? 
## update after rebuttal I keep my score. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading and comments regarding our work. Please, see below our answer to the raised comments/questions. The link for the table is available at https://drive.google.com/file/d/146c3rsB_eJLJmk3I21aWg9fzGPpCvd7S. > Q1: Compare with SOTA methods based on Fourier bases with further discussion theoretically and experimentally. Below we address both experimental and theoretical aspects: 1. Experimental Comparison We have conducted additional experiments comparing WaveGC with four representative Fourier basis-based methods—**GPR-GNN, BernNet, UniFilter**, and **S²GCN**—on three long-range and three short-range datasets. As shown in the newly added Figure 1, WaveGC consistently outperforms these baselines, with particularly strong gains on long-range tasks. This empirically supports the practical advantages of our wavelet-based design, especially in modeling multi-scale information. 2. Theoretical Perspective a) The core distinction between WaveGC and Fourier-based spectral methods lies in their **design focus**: $\bullet$ Fourier-based methods typically fix the basis (Laplacian eigenvectors) and focus on designing flexible **filters**. $\bullet$ WaveGC, by contrast, introduces a novel **learnable basis construction** via graph wavelets and Chebyshev decomposition, while using a matrix-valued kernel as the filter. These two directions—basis design and filter design—are **complementary** rather than mutually exclusive. In fact, a hybrid method (e.g., applying Bernstein polynomials as filters on our wavelet bases, akin to a "BernWave" model) could be a promising future direction. b) Localization and Adaptivity Graph wavelets offer **node-level localization** due to the property g(0)=0 and the small values of g(λ) at low frequencies. This enables each node to respond to local deviations from global structure, creating **node-dependent filters**. 
In contrast, Fourier bases are inherently global, with eigenvectors representing community-level structures—leading to **community-dependent filters**. WaveGC benefits from this localized adaptivity, further enhanced by the scaling parameter s, which enables **multi-resolution control**. This is critical for tasks requiring both short- and long-range information modeling. c) Scope of Fourier Methods While many recent Fourier-based methods are effective in scenarios such as homophily/heterophily adaptation (e.g., GPR-GNN), they do not explicitly model spatial scale or receptive field. Even methods like S²GCN, which improve long-range capture via kernel design, may be limited by the fixed Fourier basis. Combining such techniques with our wavelet-based approach could potentially lead to stronger long-range generalization. We will include the new experimental results and expand our discussion in the paper to address these points. > Q2: Scaling to large graphs. 1) Different Designs for Different Scenarios: While Fourier-based methods scale well to massive graphs, they often sacrifice expressiveness—particularly in long-range modeling, where WaveGC consistently outperforms them (Table 1 in the link). We believe model design should align with task demands rather than aim for universal scalability. 2) Applicability to Current Datasets: The eigendecomposition required for WaveGC has O($N^3$) complexity in the worst case and O($N^2$) in training. This is practical for the small-to-medium graphs used in long-range benchmarks, where detailed spectral modeling is critical. 3) Our focus is on constructing **learnable, admissible wavelet bases**—a core challenge in spectral graph learning. Scaling spectral methods efficiently is an important but orthogonal problem that we identify as promising future work. > Q3: Can WaveGC approximate functions that meet the requirements?
We agree that using odd and even Chebyshev polynomials to construct h(λ) and g(λ) provides a **sufficient but not necessary** condition for wavelet admissibility. While our method does not guarantee representation of all admissible function pairs (g,h), it introduces a **strong inductive bias** through the separation of odd and even Chebyshev terms, enabling a principled and controllable wavelet construction. Despite this restriction, our approach offers **strong practical approximation capacity** due to several factors: $\bullet$ Chebyshev polynomials form a complete basis over [0,2], allowing approximation of any continuous function on this interval. $\bullet$ The separation into odd and even terms allows independent control over h and g while preserving admissibility. $\bullet$ Learnable coefficients enhance flexibility in shaping the filter functions. $\bullet$ The scaling parameter s further adapts the spectral response. Together, these elements form a flexible and expressive class of wavelet filters suitable for practical graph learning. We will clarify this approximation perspective in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Do the authors have any improving methods for dealing with WaveGC's high computational complexity, especially compared with graph Fourier-basis-based methods? --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's suggestion. We can consider existing strategies as well as propose newly developed approaches to reduce the computational complexity of our model. Please kindly find tables and figures at https://drive.google.com/file/d/1cLakrmKX0nOlcyv_RC2_e38npfxiyKlJ/view?usp=sharing. **1\) Existing Efficiency Measures** Our current implementation incorporates 1\) efficient eigendecomposition with $O(N^2)$ and 2\) retaining the first k eigenvalues with O(kN) for the following computation. This complexity is feasible for the small- to medium-sized graphs in our experiments. 
**2\) New Approximation Strategy for O(N) Complexity** To further reduce complexity, we propose a fully polynomial-based approximation that removes the need for eigendecomposition, achieving total complexity of O(N), comparable to Fourier-basis-based methods: * **Scaling Function h(λ) and Wavelet Function g(λ):** Given $h(\lambda)=\sum b_iT_i^o(\lambda)$, the transform becomes $\Phi f=Uh(\Lambda)U^\top f=\sum b_iT_i^o(L)f$, a Chebyshev polynomial in the Laplacian L. Similarly, $g(\lambda)=\sum a_iT_i^e(\lambda)$ leads to $\Psi f=Ug(\Lambda)U^\top f=\sum a_iT_i^e(L)f$. Both operations are polynomial and thus O(N). * **Handling the scale s:** The domain λ∈\[0,2\] for g(λ) transforms to λ∈\[0,2/s\] in g(sλ). This raises two scenarios: 1\) If s\<1: the full spectrum \[0,2\] is covered, and g(sλ) remains valid as a polynomial (please refer to Fig. 1(2)). 2\) If s\>1: only the range \[0,2/s\] is valid; we suppress \[2/s, 2\] using a window function w(λ), i.e., $g_{truncated}(s\lambda)=g(s\lambda)\cdot w(\lambda)$, where w(λ)=1 on \[0, 2/s\] and w(λ)=0 on (2/s, 2\]. Both g(sλ) and w(λ) are Chebyshev-approximable, keeping the full operation within O(N) (please refer to Fig. 1(3) and Fig. 2). This method also removes the need for eigenvalue encoding and allows relaxation of the tight frame constraint (i.e., normalization), while preserving the core structure of WaveGC. **3\) Empirical Evaluation of Efficiency** We tested this simplified version on three short-range datasets against GPRGNN, BernNet, and UniFilter: * Runtime: As shown in *Table 1*, the simplified WaveGC achieves comparable training time to Fourier-based methods. * Accuracy: As shown in *Table 2*, it incurs only a small drop in performance, confirming that polynomial approximation remains effective even without eigendecomposition. This efficient variant of WaveGC opens new directions for practical deployment and future research. We will incorporate this approximation strategy into the manuscript and further refine it.
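The eigendecomposition-free application described in this reply relies on the fact that a Chebyshev series in λ becomes the same series in the Laplacian itself, applied via the three-term recurrence. A minimal numpy sketch (dense matrices for brevity; shifting the spectrum from [0,2] to the Chebyshev domain [-1,1] via L−I is the usual convention and an assumption of this sketch, not a detail taken from the paper):

```python
import numpy as np

def cheb_apply(L, f, coeffs):
    """Apply sum_i coeffs[i] * T_i(L - I) to a signal f using the
    three-term Chebyshev recurrence -- no eigendecomposition needed.
    Assumes L is a normalized graph Laplacian (spectrum in [0, 2]);
    the shift L~ = L - I maps the spectrum onto [-1, 1]."""
    L_shift = L - np.eye(L.shape[0])
    t_prev, t_curr = f, L_shift @ f        # T_0(L~) f and T_1(L~) f
    out = coeffs[0] * t_prev
    if len(coeffs) > 1:
        out = out + coeffs[1] * t_curr
    for c in coeffs[2:]:                   # T_k = 2 L~ T_{k-1} - T_{k-2}
        t_prev, t_curr = t_curr, 2 * (L_shift @ t_curr) - t_prev
        out = out + c * t_curr
    return out

# Sanity check on a small graph: matches the explicit U p(Lambda) U^T f.
from numpy.polynomial.chebyshev import chebval
A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])   # triangle graph
L = np.eye(3) - A / 2.0                                    # 2-regular: D = 2I
f = np.array([1.0, -2.0, 0.5])
coeffs = [0.3, 0.5, -0.2, 0.1]
w, U = np.linalg.eigh(L)
assert np.allclose(cheb_apply(L, f, coeffs),
                   U @ (chebval(w - 1.0, coeffs) * (U.T @ f)))
```

Each loop iteration costs one matrix-vector product, which is linear in the number of edges on a sparse graph; this is where the linear scaling claimed in the reply comes from.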
Summary: In this work, the authors introduce WaveGC, an innovative wavelet-based graph convolution approach featuring multi-resolution bases and a dense matrix kernel. They construct the necessary wavelet bases by leveraging Chebyshev polynomials of the first kind. For kernel implementation, they draw inspiration from AFNO, employing MLPs to create an effective kernel structure. The operation of WaveGC closely resembles FFT-based convolution. Theoretical analysis demonstrates that WaveGC effectively captures both short-range and long-range feature information. Comprehensive experimental results convincingly showcase the superior performance and effectiveness of WaveGC across various tasks. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: Yes. I have reviewed the whole supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. The authors prove that WaveGC can effectively capture both short-range and long-range feature information. Weaknesses: 1. Eq. 1 is inappropriate. This is because in this formula, $\lambda$ represents the eigenvalue of the graph Laplacian. Eigenvalues are clearly discrete quantities and cannot be integrated. 2. Whether it's the graph Fourier transform or the graph wavelet transform, both are 1D discrete orthogonal transforms. For a 1D discrete orthogonal transform, a diagonal matrix rather than a full-size dense square matrix is sufficient as the kernel. Other Comments Or Suggestions: Although [1] was accepted by ICLR 2019, GWNN is not actually a GNN based on graph wavelet transform. First, graph wavelet transform requires the filter to be a band-pass filter, while GWNN uses the Heat Kernel, which is a low-pass filter. Second, graph wavelet transform is a discrete orthogonal transform, while $e^{-tL}$ is not an orthogonal matrix.
[1] Bingbing Xu, Huawei Shen, Qi Cao, Yunqi Qiu, & Xueqi Cheng (2019). Graph Wavelet Neural Network. In International Conference on Learning Representations. Questions For Authors: 1. In Eq. 9, what are the specific representation of $\mathbb{S}$ and $\mathbb{W}$? The explanation of different wavelet kernels is vague. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for reviewing our work. Please, see below our answer to the raised comments/questions. > Q1: Eq. (1) is inappropriate. Eq. (1) describes requirements for general wavelets as introduced in Mallat’s book in 1999, not only graph wavelets, where g(λ = 0) = 0 is important for any data format. Thanks for your feedback; we will change Eq. (1) to a discrete version. > Q2: Why use a full matrix-valued kernel when a diagonal kernel is sufficient for 1D orthogonal transforms like the graph Fourier or wavelet transform? It is true that, in the context of classical 1D discrete orthogonal transforms (such as the Fourier or wavelet transform), convolution in the spectral domain can be represented using a **diagonal matrix** where each diagonal entry scales the corresponding frequency component. This is the foundation for many spectral GNNs that use vector-valued kernels (e.g., diag(θλ)). However, our design departs from this classical formulation by introducing a **matrix-valued kernel**, which enables richer transformations. Specifically: 1. **Diagonal kernels perform independent scaling across feature channels** for each frequency mode, limiting the ability to model interactions between features. 2. **Matrix-valued kernels**, in contrast, allow interaction within the feature dimension at each frequency mode. This is particularly useful in deep learning settings where features are multi-dimensional, and learning correlations between them is beneficial. 3. Furthermore, in our implementation, the same transformation (via a shared MLP) is applied across all spectral modes. This enforces a consistent filtering mechanism while keeping the parameter count manageable. Thus, while a diagonal kernel is mathematically sufficient for linear spectral filtering, we adopt a matrix-valued kernel to **increase expressiveness** and enable **learnable feature-channel interactions**, which empirically improves performance (as shown in Section 6.2).
This design is inspired by similar motivations behind matrix-valued kernels in Fourier Neural Operators and Transformer models. We will revise the manuscript to more clearly explain this motivation and distinguish our approach from classical spectral filtering. > Q3: What are S and W in Eq. 9? S and W are the same as M in line 191 in the left column, representing different matrix-valued kernels. We will clarify this point.
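The diagonal-versus-matrix-valued distinction debated here can be made concrete with a toy numpy sketch. A single linear map stands in for the shared MLP; shapes and names are illustrative, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 6, 4                         # spectral components, feature channels
Fx = rng.normal(size=(N, d))        # spectral-domain features F(X)

# Diagonal (vector-valued) kernel: each channel is scaled independently
# per component -- no interaction across feature channels.
theta = rng.normal(size=d)
out_diag = Fx * theta               # broadcasts over rows

# Matrix-valued kernel: a dense d x d map mixes the feature channels of
# every spectral component; sharing it across components keeps the
# parameter count at O(d^2) rather than O(N * d^2).
M = rng.normal(size=(d, d))
out_mat = Fx @ M

assert out_diag.shape == out_mat.shape == (N, d)
# The diagonal kernel is the special case where M is diagonal:
assert np.allclose(Fx @ np.diag(theta), out_diag)
```

The last assertion shows why the matrix-valued form is strictly more expressive for a fixed spectral component: the diagonal filter is one point in the space of dense maps M.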
Summary: This paper introduces WaveGC, a wavelet-based graph convolution network that integrates multi-resolution spectral bases with a matrix-valued filter kernel. It proposes graph wavelets by decomposing Chebyshev polynomials into odd and even terms and combining them with learnable coefficients, ensuring strict admissibility criteria and enhanced flexibility. The authors demonstrate that WaveGC effectively captures both extremely small and large scales, extending beyond previous graph wavelet theory. Experimental results show state-of-the-art performance in both short-range and long-range tasks. ## Update after rebuttal I think this is a nice work but not quite up to my expectation. I will keep my original rating for now. Claims And Evidence: The paper provides theoretical and empirical support for WaveGC’s ability to capture both short-range and long-range information, as well as its superior performance over existing models. However, several claims require further validation. 1. The assertion that WaveGC covers extremely large scales ($s \rightarrow \infty$) is supported by theoretical proof but lacks empirical verification as there is no analysis of the actual learned scale values or their impact. 2. Since only a single scaling term is used for low-frequency information while multiple wavelet bases are employed, the authors justify the addition of an MPNN in parallel to augment low-frequency modeling. However, the scaling function already takes a value of 1 at $\lambda = 0$, and other wavelet bases can also learn low-frequency components near the origin. Fig.3 shows that $g(s_3 \lambda)$ retains a significant amount of low-frequency information, raising the question of whether the additional MPNN is truly necessary for enhancing low-frequency modeling. I think if WaveGC can achieve strong performance without MPNN, it would better demonstrate its adaptability. 
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally well-aligned with the problem of spectral graph learning, as WaveGC is tested on both short-range and long-range benchmark datasets, demonstrating its adaptability. The use of various baselines is also appropriate. However, the rationale behind certain methodological choices is not so clear to me, as: 1. The argument “powerful kernel with more parameters provide enough flexibility to adjust itself” seems to contradict the later claim that parameter sharing is necessary due to excessive parameters. 2. The paper claims that the matrix-valued kernel allows various frequency modes to interact. However, since $F(X)$ has a dimension of $N \times d$ and the kernel M is $d \times d$, the operation MLP($F(X)$) appears to facilitate interaction between node features rather than frequency modes. Is this understanding correct? Theoretical Claims: I have not verified the validity of Theorem 4.2 as it is in the appendix. Experimental Designs Or Analyses: 1. The ablation studies are presented on different datasets across Tab.4, Tab.5, and Tab.6, rather than consistently using the same set of datasets. This raises concerns about whether each component consistently contributes to performance improvement. Additionally, in Appendix D.7, tight frames were not applied to ogbn-arxiv and Peptides-struct, and parameter sharing was not used for most long-range datasets. The paper claims these techniques as contributions for efficient computation and performance improvement. However, since strong performance is achieved without them on certain datasets, I am not so sure how to interpret this. 2. The paper states that, due to memory constraints, only the first 30% of eigenvalues and their corresponding eigenvectors were retained for short-range datasets. This approach likely results in a significant loss of high-frequency information, which is particularly important for short-range datasets.
Given this, I have concerns on whether this strategy is appropriate unless kernels such as diffusion kernels are being tested. Supplementary Material: I reviewed Appendix C to examine the eigenvalue encoding used in Eq. (6) and Eq. (7). To check the experimental details, I referred to Appendix D, which includes the implementation details, settings, dataset descriptions, additional experiments, and model complexity. Relation To Broader Scientific Literature: This paper advances spectral graph convolution and graph wavelet transforms in SGWT (Hammond et al., 2011). It introduces a Chebyshev polynomial decomposition into odd and even terms, ensuring strict wavelet admissibility while enabling learnable wavelet construction. Additionally, it incorporates matrix-valued kernels, inspired by Fourier Neural Operators (Li et al., 2021), to enhance spectral filtering flexibility. The paper also expands on previous graph wavelet theory by providing a theoretical proof that WaveGC effectively captures both short-and-long range information from the perspective of information mixing (Di Giovanni et al., 2023), addressing a limitation in prior work (Hammond et al., 2011) that focused only on small-scale localization. Essential References Not Discussed: The paper is referencing proper and essential papers. Other Strengths And Weaknesses: Strength: - The paper introduces a novel wavelet construction by decomposing Chebyshev polynomials into odd and even terms, ensuring wavelet admissibility. - These wavelets are learnable and, together with the matrix-valued kernel, provide greater flexibility. It also includes a theoretical proof for long-range information capture and demonstrates strong performance through extensive experiments across various datasets and models, achieving state-of-the-art results. Weakness: - However, including the concerns raised above, I see the overall lack of clarity in the writing and the complex notations as weaknesses. 
Other Comments Or Suggestions: Some minor comments would be: 1. In Eq. (3), shouldn't the dimension be Nxd instead of NxN? 2. The paper should state that the eigenvalues are restricted within [0,2] for the normalized Laplacian, so that $g(\lambda)$ is defined as a strict band-pass filter in the range [0,2]. 3. There is no explanation of what $H$ represents in Eq. (8). 4. There is a large blank space above "Other experiments" on page 8, and the conclusion should be expanded. 5. In Table 3, the WaveGC result for Pf is reported as 69.10, but in Table 6, it appears as 69.01. Questions For Authors: - The notation in Eq. (6) seems unclear. Does $d$ here refer to the node feature dimension? According to Appendix C, the dimension of $\hat{Z}$ is given as $N \times (d+1)$, which does not align with the matrix operations. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the review provided. Please, find below our clarifications on the points raised, and the link to the figures & tables: https://drive.google.com/file/d/1DALF7e1t6O4SSMUkvfi2euMufIpNJARv. > Q1: Analysis of learned scales and experimental impacts. 1) Learned Scales and Receptive Fields Figure 1 visualizes the largest learned scales and corresponding receptive fields for Ps and VOC, using the heuristic: Given a wavelet Ψ(sλ), node j lies in the receptive field of node i if |Ψ(sλ)[i, j]| > 0.1 × max(|Ψ(sλ)|). Under this criterion, Ps exhibits a significantly larger receptive field (scale = 9.48), aligning with its inherently longer average shortest-path distances. Table 1 further confirms this through average and max receptive field sizes across all nodes. 2) Behavior at Extremely Large Scales (s → ∞) We increased the predefined scale vector $\bar{s}$ in Eq. (7) for Pf to be (10, 100, 1000). This vector determines the upper bound of the learnable scale range. As shown in Figure 2, the receptive field expands from local (s = 9.17) to global (s = 988.24), empirically supporting our theoretical claim that WaveGC captures long-range dependencies as 𝑠→∞. > Q2: Verify the necessity of MPNN for low-frequency information. As shown in Table 2 (w/o MPNN), WaveGC remains competitive without the MPNN branch, though with some performance drop—demonstrating its adaptability while also showing the benefit of MPNN. Wavelet bases are inherently localized and suppress low-frequency signals, enabling node-specific filtering for local structure modeling. However, global community-level information—captured by low-frequency components—is still important. A single scaling function h(λ) appears insufficient to model this alone, as reflected by the performance drop without MPNN. Therefore, we argue that including an MPNN branch complements the wavelet by restoring low-frequency (community-level) signals. 
Notably, removing the wavelet branch (w/o wavelet) leads to a larger performance drop, confirming that wavelets are the primary contributor, with MPNN as a helpful supplement. > Q3: Description on kernel parameters. Our goal is to balance expressiveness and efficiency. While matrix-valued kernels offer greater capacity than vector-valued ones, assigning separate matrices per frequency (as in FNO) leads to excessive parameters and risk of overfitting. To mitigate this, we use a shared MLP across frequencies, which retains modeling power while keeping parameter count low. As shown in Tables 3 and 4, this strategy improves performance with manageable complexity. We will revise the text to clarify this trade-off. > Q4: Usage of tight frames and parameter sharing. 1) **Parameter sharing**. In Appendix D.7, “parameter sharing” refers to sharing parameters across stacked WaveGC+MPNN blocks (Fig. 2 in the paper). Most datasets use independent parameters per layer, which performed better empirically. This is unrelated to the matrix kernel’s weight-sharing, which is always applied. We will revise the wording to clarify this. 2) **Tight frames**. The tight frame condition simplifies Eq. (9), but requires normalizing the scaling and wavelet bases, which may restrict model expressiveness. For ogbn-arxiv and Peptides-struct, we relaxed this constraint (i.e., omitted normalization) to prioritize performance. Importantly, **all datasets** still used Eq. (9), so the computational efficiency remained unaffected. > Q5: Does MLP(F(X)) facilitate interaction between node features? It is correct that the matrix-valued kernel (via MLP) operates within the feature dimension of each spectral component, not directly across frequency modes. Each row in $F(X)\in \mathbb{R}^{N(J+1) \times d}$ is transformed independently by the shared MLP. However, since all spectral modes share the same MLP, the model learns a **unified transformation** that generalizes across modes. 
This shared design **indirectly couples the modes**, as the MLP must accommodate diverse spectral inputs. We will revise the text to clarify this mechanism. > Q6: Rationality of only keeping the first 30% of the graph spectrum. Following your suggestion, we tested three types of spectral filters: 1) **Low-frequency (diffusion) kernel**: g1(λ)=exp(−β⋅λ), 2) **High-frequency kernel**: g2(λ)=exp(−β⋅(2−λ)) and 3) **Combined kernel**: g3(λ)=g1(λ)+g2(λ). Applied using R=Ug(Λ)U⊤, the results (Table 6) show that g1 consistently outperformed the others across four short-range datasets. In contrast, g2 performed poorly, and g3 showed mixed results. On ogbn-arxiv, the diffusion kernel caused OOM. These findings support our choice to retain only the first 30% of the spectrum: it captures the most relevant low-frequency signals while reducing computational cost and noise from high-frequency parts. > Q7: Use the same datasets across experiments. We have supplemented the experiments in Tab. 3, 4 and 5 in the link. The new results do not overturn the conclusions in the paper. --- Rebuttal Comment 1.1: Comment: I think the authors may have misunderstood my question regarding the use of only the first 30% of the eigenvalues, and I did not ask for an additional experiment. In fact, this new result raises doubts for me, as the 3) combined case is not the best, i.e., combining representations from low and high frequencies does not improve on the result from using the 1) diffusion kernel only. As wavelets are natural band-pass filters, this only means that wavelet band-pass filtering harms the performance and only a scaling function should be adopted. What I wondered in the first place was whether focusing on the small eigenvalues limits the behavior of wavelets as band-pass filters, as the $\lambda$s used in the experiment will be distributed mostly near the origin rather than 2, and thus $g(\lambda) \approx 0$. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our response.
We apologize for our misunderstanding of the question. Below, we clarify why focusing on small eigenvalues—i.e., retaining only the first 30% of the spectrum—does not limit the behavior of wavelets. We present our explanation from both **spectral** and **spatial** perspectives: 1) **Spectral perspective: Truncated eigenvalues still allow wavelets to capture meaningful low-frequency signals.** As shown in Fig. 5 (Appendix D.3), the wavelet function g(sλ) retains non-trivial amplitudes within the first 30% of the domain. While g(λ)≈0 near the origin, the retained spectral range is sufficiently broad to allow the wavelets to operate effectively. This aligns with our experiments showing that low-frequency components are most beneficial for short-range datasets, while high frequencies may harm performance. Therefore, even in the truncated setting, both the scaling function h(λ) and wavelet function g(λ) contribute meaningfully to low-frequency modeling. 2) **Spatial perspective: Spectral truncation mimics larger scales, expanding receptive fields and enabling higher-order aggregation.** In Fig. 5(b) (Appendix D.3), we observe that truncating the spectrum mimics the effect of using a larger wavelet scale s>1 (see also Fig. 1(3) in https://drive.google.com/file/d/1cLakrmKX0nOlcyv_RC2_e38npfxiyKlJ/view?usp=sharing), which reduces the effective spectral range and increases the spatial receptive field. This effect is visually confirmed in Fig. 4(c) and 5(c) (Appendix D.3), where the receptive fields become noticeably larger after truncation. Thus, even on short-range datasets, the wavelet branch captures valuable higher-order information that complements local aggregation from MPNN. This complementary role is further validated by the performance drop observed in the ablation study (Table 2 in https://drive.google.com/file/d/1DALF7e1t6O4SSMUkvfi2euMufIpNJARv.) when wavelets are removed.
Overall, spectral truncation does not impair wavelet behavior; instead, it supports effective low-frequency modeling while also enhancing spatial coverage. We will revise the manuscript to better explain this nuanced interaction.
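The kernels compared in this exchange (e.g., the diffusion filter g1(λ)=exp(−βλ)) all act through the spectral operator R = U g(Λ) U⊤. A minimal sketch, assuming a normalized Laplacian: on a regular graph the constant signal lies in the λ=0 eigenspace, so a diffusion filter passes it through unchanged while a high-frequency filter suppresses it.

```python
import numpy as np

def spectral_filter(L, x, g):
    """Apply R = U g(Lambda) U^T to a signal x (dense eigh for clarity)."""
    w, U = np.linalg.eigh(L)
    return U @ (g(w) * (U.T @ x))

beta = 1.0
g_low  = lambda lam: np.exp(-beta * lam)          # diffusion / low-pass
g_high = lambda lam: np.exp(-beta * (2.0 - lam))  # high-pass

# 4-cycle: 2-regular, so its normalized Laplacian is I - A/2 and the
# constant vector is an eigenvector with eigenvalue 0.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
L = np.eye(4) - A / 2.0
ones = np.ones(4)
assert np.allclose(spectral_filter(L, ones, g_low), ones)    # passed intact
assert np.abs(spectral_filter(L, ones, g_high)).max() < 0.2  # suppressed
```

The same sketch makes the truncation point above concrete: dropping all but the smallest eigenvalues is equivalent to multiplying g by a hard low-frequency window before applying R.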
FOUNDER: Grounding Foundation Models in World Models for Open-Ended Embodied Decision Making
Accept (poster)
Summary: FOUNDER proposes a method that leverages the generalization capability of world model dynamics alongside the prior knowledge embedded in foundation models to improve embodied decision making, and demonstrates its effectiveness through extensive experiments across diverse domains. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence; further details are discussed in the following section. Methods And Evaluation Criteria: The proposed methods are appropriately evaluated using diverse criteria in environments such as Minecraft and DMC. Theoretical Claims: The mathematical formulations are well-structured and clearly presented, making them easy to understand. Experimental Designs Or Analyses: I checked the novelty of this submission and discussed the details in the section addressing its strengths and weaknesses. Supplementary Material: I reviewed the appendix and compiled the derived questions in the ‘question’ section. Relation To Broader Scientific Literature: The paper provides a thorough analysis of the strengths and weaknesses of both existing world models and foundation models, proposing a method that leverages their complementary benefits to enhance embodied decision-making performance. Essential References Not Discussed: This paper appropriately cites and discusses the prior works necessary to understand and explain the proposed method. Other Strengths And Weaknesses: - strength: - The authors conduct extensive experiments across multiple domains, demonstrating the approach’s robustness and scalability. - The selection of baselines appears well-reasoned, and the chosen metrics for comparison seem appropriate for evaluating the method’s improvements. - weakness: - It seems that FOUNDER can be viewed as essentially LEXA [1] plus Foundation Model: LEXA already utilizes a similar world model framework along with temporal distance prediction.
While FOUNDER’s approach and results appear compelling, it is less clear how novel the core idea is compared to existing solutions. - The key hyperparameters (e.g., the frame window size k, the KL weight, etc.) are not clearly summarized, and the paper would benefit from an ablation study detailing how these values affect performance. Other Comments Or Suggestions: No Questions For Authors: - In the behavior learning phase, it’s unclear in the paper how goal sampling is done during policy training. Do you randomly pick a state from the offline dataset and take it as the goal? - If you’re calculating temporal distance with MSE as in Equation (7), wouldn't there be an issue if, for instance, the robot is standing still in the dataset or exhibits periodic behavior, resulting in incorrect learning? - Since you’re using a foundation model, you really need to demonstrate generalization in the experiments. Does it still work if you query out-of-distribution goal images or prompts that weren’t used in training? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful comments and valuable feedback on our paper. We are delighted to receive recognition of our method's effectiveness, robustness, and scalability through extensive experiments, and for acknowledging that our claims are supported by convincing evidence with well-justified evaluation criteria (baselines/metrics). Please find below our detailed responses to the reviewer’s comments. **1. Novelty concerns:** We appreciate the reviewer's comparison between FOUNDER and LEXA. While both methods utilize world models and temporal distance prediction, their technical contributions differ fundamentally. LEXA focuses on discovering diverse goals and learning to achieve them during online exploration, enabling adaptation to image goals for downstream tasks. In contrast, the core idea of FOUNDER is to ground foundation model (FM) representations (from multimodal tasks) into the world model (WM)'s latent space through explicit mapping functions, operating entirely offline. This enables physics-consistent translation of VLM embeddings into actionable WM states. It should be noted that the world model and temporal distance predictor in FOUNDER only serve as tools to implement this paradigm, whereas the core contribution resides in bridging the gap between the FM's general knowledge and the WM's physical dynamics, establishing a new methodology for open-ended task solving and setting FOUNDER apart from prior methods using VLMs in RL. **2. Hyperparameter ablation**: We appreciate the reviewer's feedback regarding the hyperparameter settings. For the frame window size (k=8), this choice strictly aligns with two critical considerations: (1) InternVideo2's pretrained temporal encoder requires fixed 8-frame inputs, as validated in its implementation, and (2) we retained GenRL's 8-frame configuration for a fair comparison. For the KL weight $\beta$, we use $\beta = 1.0$ by default. 
To address your concerns, we conducted an ablation study on the Cheetah and Kitchen domains to assess sensitivity to these hyperparameters, and the results are on [our website](https://sites.google.com/view/founder-rl). We find that FOUNDER is not sensitive to the KL weight, with 1.0 being the best choice. **3. Goal sampling in behavior learning**: In fact, the goal state $z_g$ of a given video/text task during behavior learning is sampled from the goal distribution inferred from the video/text VLM embedding, e.g., using the learned mapping function: $z_g \sim Q_\psi(\cdot \mid e_g)$. This is discussed in Equation (6) and Appendix C.1. **4. Question about temporal distance learning**: We thank the reviewer for raising this valuable point. Learning a temporal distance predictor from quasi-static data may pose risks. However, the world model provides proper regularization to the input states, and such datasets are rare in real applications. To address your concerns, we provide experiments on learning a temporal distance predictor on the Stand datasets, where trajectories are generated by the expert policy in Walker Run, and validate that the predictor assigns near-zero temporal distance to identical WM states. The results on [our website](https://sites.google.com/view/founder-rl) show the robustness of temporal distance prediction in such settings. 
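To make the goal-sampling step concrete — drawing $z_g \sim Q_\psi(\cdot \mid e_g)$ from the learned mapping conditioned on a VLM task embedding — here is a toy NumPy sketch. The linear heads, dimensions, and diagonal-Gaussian parameterization are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: VLM task embedding -> WM latent state.
EMB_DIM, STATE_DIM = 8, 4

# Stand-ins for the learned mapping Q_psi: linear heads producing the
# mean and log-std of a diagonal Gaussian over WM goal states.
W_mu = 0.1 * rng.normal(size=(STATE_DIM, EMB_DIM))
W_logstd = 0.1 * rng.normal(size=(STATE_DIM, EMB_DIM))

def sample_goal_state(e_g):
    """Sample z_g ~ Q_psi(. | e_g) via the reparameterization trick."""
    mu = W_mu @ e_g
    std = np.exp(np.clip(W_logstd @ e_g, -5.0, 2.0))
    return mu + std * rng.normal(size=STATE_DIM)

e_g = rng.normal(size=EMB_DIM)  # VLM embedding of a video/text task prompt
z_g = sample_goal_state(e_g)    # goal state used to condition the policy
```

The key point is that the goal is not picked from the offline dataset; it is sampled from the distribution the mapping function infers from the task embedding.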
To further address your concerns, we present experimental results on real-world video tasks on [our website](https://sites.google.com/view/founder-rl) using videos provided in GenRL's code repository, and FOUNDER also demonstrates solid performance compared to GenRL.
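The grounding objective discussed in this exchange — training a mapping from FM embeddings into the WM latent space with a reconstruction term plus a KL term weighted by $\beta$ — might look roughly like the following. All shapes, names, and the diagonal-Gaussian form are illustrative assumptions, not the authors' code:

```python
import numpy as np

def gaussian_kl(mu_q, std_q, mu_p, std_p):
    """KL( N(mu_q, std_q^2) || N(mu_p, std_p^2) ) for diagonal Gaussians."""
    return float(np.sum(
        np.log(std_p / std_q)
        + (std_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * std_p ** 2)
        - 0.5
    ))

def mapping_loss(e_fm, e_recon, mu_q, std_q, mu_wm, std_wm, beta=1.0):
    """Reconstruct the FM embedding while a KL term keeps the mapped
    distribution close to the WM latent distribution (beta = KL weight)."""
    recon = float(np.mean((e_fm - e_recon) ** 2))
    return recon + beta * gaussian_kl(mu_q, std_q, mu_wm, std_wm)

# Made-up quantities for a single training example.
e_fm = np.array([0.2, -0.1, 0.4])        # target FM embedding
e_recon = np.array([0.1, 0.0, 0.5])      # decoded from the mapped WM state
mu_q, std_q = np.zeros(4), np.ones(4)    # mapped (posterior) distribution
mu_wm, std_wm = np.zeros(4), np.ones(4)  # WM latent (prior) distribution

loss = mapping_loss(e_fm, e_recon, mu_q, std_q, mu_wm, std_wm, beta=1.0)
print(round(loss, 4))  # KL is zero here, so the loss equals the MSE: 0.01
```

With $\beta = 1.0$ (the default mentioned above), the two terms are weighted equally; the rebuttal's ablation reports low sensitivity to this weight.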
Summary: This paper introduces FOUNDER, a framework for grounding foundation models (FMs) in world models (WMs) to enable generalizable representation learning, multi-modal prompting, and dense reward prediction. FOUNDER shares conceptual similarities with GenRL but incorporates temporal information to enhance reward prediction and goal-conditioned reinforcement learning (RL) for more flexible behavior cloning. Experimental results on DMC and Kitchen benchmarks demonstrate that FOUNDER outperforms GenRL baselines. ## update after rebuttal The authors' response addressed my concerns. I would like to maintain my original rating. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: NA. Experimental Designs Or Analyses: The experimental design appears sound. Supplementary Material: Yes, I have reviewed the additional experimental results and analysis presented in the supplementary material. Relation To Broader Scientific Literature: FOUNDER shares similarity with GenRL but introduces temporal distance prediction and a goal-conditioned RL framework, resulting in more robust performance. It also relates to WM-CLIP, which learns a mapping from WM states to FM representations. However, FOUNDER instead grounds FM representations in the WM state space, leading to improved performance. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strength - The paper is clearly written and well-organized. - The experimental results effectively demonstrate the advantages of the proposed FOUNDER framework. Weakness - The proposed temporal distance prediction does not consistently improve performance. - For example, in Kitchen Burner (Table 1), the mean score drops significantly from 1.0 to 0.6. - Similar performance degradation is observed in the Minecraft environment, where Figure 5 shows that FOUNDER w/o TempD consistently outperforms FOUNDER. 
- It remains unclear whether temporal distance prediction is a broadly applicable method or merely a task-specific trick. The paper does not provide sufficient analysis regarding: 1) Which types of tasks benefit from temporal distance prediction? 2) Under what conditions does it fail? 3) What are the underlying reasons for performance degradation in certain tasks? Other Comments Or Suggestions: None. Questions For Authors: Please refer to the Weaknesses section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable feedback on our paper. We are delighted to receive recognition of our method's sound experimental design and solid experimental results. Please find below our detailed responses to the reviewer’s comments. **The influence of temporal distance**: We thank the reviewer for raising these important questions regarding temporal distance (TempD). Generally, the role of TempD-based rewards is to enhance temporal awareness and provide task-completion information to the FOUNDER agent. From Table 1, using cosine similarity between VLM representations of task and observations as rewards, FOUNDER w/o TempD already performs well on static tasks (Stand tasks and Kitchen tasks). However, FOUNDER w/o TempD struggles in dynamic tasks (Walk, Run). We observe that in dynamic tasks, the agent can perform well at the very beginning of the behavior learning stage, but its performance deteriorates as training progresses, eventually reaching very low performance by the end. We also discovered that the real return and pseudo return curves exhibit completely opposite trends during behavior learning. The real performance worsens as the agent maximizes the pseudo reward, causing a reward hacking problem. We then visualize the resulting trajectories of the policy at the early stage and the final policy. Despite performing well at the beginning, we find that at the end, the agent’s behavior becomes more static. The agent may stay at its initial position, appearing as if it is running or walking, but in reality, it is only 'running' or 'walking' in place without progressing forward, or moving forward at an extremely slow pace. This explains the poor final performance of FOUNDER w/o TempD in Table 1. These training curves, trajectory visualizations, and detailed analysis can be found on [our website](https://sites.google.com/view/founder-rl). 
We conclude that reward functions based on cosine similarity or other direct distance metrics may lead the policy to mimic the visual appearance of the goal, while overlooking the underlying task semantics and multi-step movement, particularly in dynamic tasks like running or walking. Since the FOUNDER-based method maps the target sequence to a single goal state in the world model, we hypothesize that using direct distance functions may result in a lack of temporal awareness. Here, the role of TempD-based rewards is to provide temporal information and crucial task-completion information to the agent, with a much lower training cost than GenRL's complex sequence matching. In Table 1, FOUNDER then avoids the reward hacking problem and achieves superior performance on dynamic tasks like Walk and Run, compared to GenRL and FOUNDER w/o TempD. Moreover, Table 1 shows that incorporating TempD will not only benefit temporally dynamic tasks but also match or surpass the performance of FOUNDER w/o TempD on static tasks. Kitchen Burner is the only one of the nine static tasks in Table 1 where FOUNDER is outperformed by FOUNDER w/o TempD, which is not statistically significant. Furthermore, for Minecraft tasks, FOUNDER w/o TempD performs surprisingly well, and incorporating TempD could weaken the performance on several tasks. We hypothesize that the data quality and the stochastic nature of the Minecraft environment may have a negative impact on the learning process of the TempD predictor. When temporal predictions become noisy, falling back to cosine similarity serves as a safer baseline, and using visual similarities as pseudo-rewards in short-horizon Minecraft tasks is beneficial and does not cause the problems observed on Walk or Run tasks in DMC. However, when TempD learns accurate temporal dynamics, its temporal credit assignment mechanism provides provable advantages in reward-free multi-task RL, as proven in prior works [1-4]. 
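The failure mode described above — cosine-similarity rewards favoring states that merely look like the goal over states that are temporally close to it — can be illustrated with a toy sketch. The states, the Euclidean stand-in for the learned temporal distance predictor, and all values are made up for illustration; this is not the paper's code:

```python
import numpy as np

def cosine_reward(z_t, z_g):
    """Visual-similarity pseudo-reward: cosine similarity between
    the current WM state and the goal state."""
    return float(z_t @ z_g / (np.linalg.norm(z_t) * np.linalg.norm(z_g) + 1e-8))

def tempd_reward(z_t, z_g, predict_distance):
    """Temporal-distance pseudo-reward: negated predicted steps-to-goal."""
    return -predict_distance(z_t, z_g)

# Stand-in predictor: Euclidean distance as a proxy for steps-to-goal.
proxy = lambda a, b: float(np.linalg.norm(a - b))

z_g = np.array([1.0, 0.0])
near = np.array([0.9, 0.1])   # genuinely close to the goal
stuck = np.array([2.0, 0.0])  # aligned with the goal direction (cosine
                              # similarity 1.0) but far from reaching it

# Cosine similarity cannot distinguish "pointing at the goal" from
# "being at the goal" -- the reward-hacking pattern described above.
assert cosine_reward(stuck, z_g) > cosine_reward(near, z_g)
# A temporal-distance reward still prefers the truly closer state.
assert tempd_reward(near, z_g, proxy) > tempd_reward(stuck, z_g, proxy)
```

In this caricature, maximizing the cosine reward drives the agent toward the `stuck` state (running in place), while the temporal-distance reward correctly ranks `near` higher.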
We will incorporate approaches discussed in our rebuttal with Reviewer 2yAX, as well as techniques in [1], for better TempD learning under stochastic and complex environments. [1] Myers V, Zheng C, Dragan A, et al. Learning Temporal Distances: Contrastive Successor Features Can Provide a Metric Structure for Decision-Making[C]//International Conference on Machine Learning. PMLR, 2024: 37076-37096. [2] Park S, Kreiman T, Levine S. Foundation Policies with Hilbert Representations[C]//International Conference on Machine Learning. PMLR, 2024: 39737-39761. [3] Park S, Rybkin O, Levine S. Metra: Scalable unsupervised rl with metric-aware abstraction[J]. arXiv preprint arXiv:2310.08887, 2023. [4] Mendonca R, Rybkin O, Daniilidis K, et al. Discovering and achieving goals via world models[J]. Advances in Neural Information Processing Systems, 2021, 34: 24379-24391. --- Rebuttal Comment 1.1: Comment: The authors' response addressed most of my concerns. I will maintain my original score. --- Reply to Comment 1.1.1: Comment: We are glad to hear that our efforts to address your concerns have been well-received. Thank you for your time and consideration.
Summary: This work proposes FOUNDER, a method that leverages Vision-Language Models (VLMs) to extract representations from visual observations and train RL agents in World Model imagination, starting from an offline dataset of trajectories. It proposes a method of aligning the embeddings from the VLM with the latent states learned by the World Model via a learned mapping function. This mapping function is optimized to reconstruct the VLM embeddings while also covering the WM latent distribution (via KL divergence minimization). For behavior learning, FOUNDER first learns a temporal distance predictor model, which is then used as a reward model for the imagined rollouts. The agent is then trained via actor-critic RL on these imagined trajectories. The work presents experiments in DMC/Kitchen/Minecraft environments, claiming improvements over closer baselines (GenRL and WM-CLIP [1]). Claims And Evidence: I found two core claims in the paper: 1) The proposed method improves the alignment between the VLM representations and the WM representations, which allows it to capture the semantics of the tasks better. The evidence would be the downstream results in the DMC/Kitchen/Minecraft environments. While I agree this is a proper form of evidence, I have some concerns on the experimental design (to be described later). 2) The temporal distance predictor provides a more consistent and informative reward signal, while prior methods are prone to reward hacking. The presented evidence is a qualitative analysis of failure cases and a correlation analysis between the proxy reward and the ground truth reward. There is one additional claim, which is not as central as the previous ones, but still debatable: The proposed components of the method are task-agnostic and therefore “universally applicable to any downstream task and effectively facilitating open-ended task solving in embodied environments”. 
While it is clear that the modeling assumptions are task-agnostic, I think it is too strong to claim that they are universally applicable to any downstream task, as there is no evidence these components would actually generalize to/learn any task, and it is also unclear how much this actually relies on the properties of the offline dataset. Methods And Evaluation Criteria: I believe that the proposed method, considered baselines, and evaluation criteria make sense. Nonetheless, I am not sure if the described problem setting and parts of the narrative really align. More concretely, the problem setting describes as a goal “solving open-ended tasks in the context of offline reinforcement learning from visual observations”, and the “open-ended” terminology here is unclear, as it is often related to interestingness (information gain) and learnability [2]. From what was presented in the paper, it is not clear how a method in a pure offline setting, with a clear definition of goals (explicitly modeled as a goal-conditioned policy), would be learning open-ended behavior. Apart from that, there are some design choices which are questionable/unjustified (see weaknesses below). Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: I have some major concerns in terms of the experimental design. - The presented numbers (Table 1) do not reproduce what was claimed in the GenRL paper [1]. In fact, there are substantial changes in some environments. It is unclear why this happens. Is the offline dataset used in this work different? If so, why? Given these discrepancies, it is hard to tell if GenRL was fully reproduced or if these methods are too sensitive to the employed offline dataset. - The paper does not bring any offline RL method as a baseline, arguing that it involves a “cumbersome process for retraining from scratch for every task”. While I understand this, I believe they are necessary here to justify the need for a more complex, model-based approach. 
Furthermore, this is the standard in prior work as well - for instance, GenRL does include these baselines, which clarifies this point. - The Minecraft experiments look very inconclusive. The standard errors are too wide for all methods, making it hard to claim that FOUNDER-based methods consistently outperform GenRL with statistical significance. This is concerning since it is supposed to be the most challenging benchmark evaluated in the work, and the gains are very unclear. Supplementary Material: Yes, the appendices and failure cases on the external website. Relation To Broader Scientific Literature: From my perspective, the main contribution of this work is to identify and highlight that the reward modeling aspect of GenRL leads to reward misspecification and hacking. The proposed method for aligning embeddings also looks relevant, although it is also unclear if this really works better than GenRL's method (from Table 1, GenRL works better than FOUNDER w/o TempD). Essential References Not Discussed: I couldn’t find any essential references not discussed. Other Strengths And Weaknesses: I believe the paper should discuss some potential limitations in more depth: 1) Limitations related to the temporal distance predictor: - The temporal distance predictor relies on a predefined sequence length T, but at test time this is not available (unless the problem setting is limited to fixed-horizon episodic tasks, but this is not specified). - Also, why isn’t the temporal-distance predictor reward model also prone to reward hacking? In a harder environment I would expect it to misgeneralize and present the same effect. - There is also a potential limitation in mining negative examples for the temporal distance predictor training, which the paper does not explore, performing it only randomly. 2) The “+1 reward” is not justified/discussed in proper depth. As a simple reward shaping, I would not expect it to have such a big impact on the final performance as presented in Table 5. 
It would be very important to discuss the effect of this heuristic. 3) One last thing, while not specific to FOUNDER, but more broadly for these reward-free, offline settings: there is a strong reliance on the properties of the employed offline dataset. I believe this should be discussed in a limitations section. Other Comments Or Suggestions: N/A Questions For Authors: Please see my questions in the previous points. **References** [1] Mazzaglia et al. GenRL: Multimodal-foundation world models for generalization in embodied agents. NeurIPS, 2024. [2] Hughes et al. Position: Open-Endedness is Essential for Artificial Superhuman Intelligence. ICML, 2024. **Please refer to my rebuttal comment for the updated score** Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful comments and valuable feedback. Below are our responses to each of their concerns: **Universally applicable claim**: Our claim of task-agnosticism refers to FOUNDER’s architecture—world model, mapping function, and reward generator—enabling deployment across downstream tasks specified via text or video inputs without architectural changes. This highlights its broad applicability, not guaranteed universal performance. We appreciate your feedback and will clarify this in the final version. **Open-endedness**: In our context, “open-endedness” refers to the agent’s ability to ground and solve user-defined tasks from free-form prompts via multimodal interfaces, without predefined reward functions. This differs from definitions emphasizing self-evolving exploration or unbounded learnability. FOUNDER achieves this capability by jointly learning the WM, mapping function, reward generator, and downstream policy from offline data. This definition aligns with similar works [1-3]. We will clarify this in the final version. **Performance of GenRL**: Our reproduction of GenRL follows its official code and datasets. Our reproduced results (GenRL_ours) align with GenRL’s arXiv v1 metrics on in-distribution DMC tasks but underperform on Kitchen tasks, likely due to domain properties. Despite GenRL v2’s improved performance, its implementation was not updated. FOUNDER outperforms GenRL in both cases. | | GenRL_v1 (reported) | GenRL_v2 (reported) | GenRL_ours (reproduced) | FOUNDER | | --- | --- | --- | --- | --- | | **DMC** | 0.74 $\pm$ 0.02 | 0.82 $\pm$ 0.01 | 0.75 $\pm$ 0.08 | 0.87 $\pm$ 0.03 | | **Kitchen** | 0.69 $\pm$ 0.15 | 0.76 $\pm$ 0.05 | 0.50 $\pm$ 0.32 | 0.89 $\pm$ 0.17 | **Effectiveness of FOUNDER's components:** Our key innovation is grounding VLM representations into WM states without GenRL’s costly sequence matching, enhancing task understanding beyond reward correction. 
As shown in Table 1, FOUNDER w/o TempD outperforms GenRL on static tasks (e.g., stand, Kitchen), while it requires TempD for dynamic tasks (e.g., Walk, Run) to improve temporal awareness. TempD efficiently provides temporal structure at a much lower computational cost than GenRL’s sequence matching. TempD’s addition to GenRL does not improve its performance, confirming FOUNDER’s architecture drives success. **The “+1 reward” heuristic**: The +1 operation smooths rewards, improving learning. Experiments show agents can achieve similar results without this shaping given more training steps. Training curves comparing shaped and unshaped rewards are available on [our website](https://sites.google.com/view/founder-rl). **Minecraft Experiment**: High variance in Minecraft experiments is mainly due to the environment’s stochasticity. To better reflect statistical significance, we now report 95% confidence intervals and provide clearer learning curves on our website. FOUNDER matches GenRL on 2/5 tasks and outperforms it on the remaining three. **Comparison with model-free baselines**: We directly compare **HILP**, a multi-task model-free approach, and **TD3**, the strongest single-task model-free baseline per GenRL. As detailed in our response to reviewer e54P, these comparisons justify the need for a model-based approach, with FOUNDER achieving a strong balance between performance and efficiency. **Limitations related to the temporal distance predictor**: 1. The predictor learns with a predefined sequence length during training, but during testing, it outputs the predicted temporal distance independent of sequence length. 2. Temporal distance excels in cross-domain tasks and metric evaluation, where GenRL and cosine similarity methods may fail due to reward hacking. While extreme OOD scenarios may still pose challenges, FOUNDER’s success shows its robustness in capturing deep task semantics rather than relying on brittle visual correlations. 3. 
We appreciate the reviewer’s insight on mining negative samples. We follow LEXA [4], but acknowledge the risk of mislabeling. This can be mitigated by filtering out high-similarity negative pairs or actively selecting challenging cross-sequence pairs via feature-space searching. We will include an ablation on this in the final version. **Limitations from offline data quality**: We agree with the reviewer that dataset properties limit reward-free offline RL methods and have discussed this in Section 6. We welcome further discussion on dataset dependency concerns, as the reviewer did not fully elaborate on the complete questions. [1] Fan L, et al. Minedojo: Building open-ended embodied agents with internet-scale knowledge. NeurIPS 2022. [2] Team O E L, et al. Open-ended learning leads to generally capable agents. arXiv preprint. [3] Qin Y, et al. Mp5: A multi-modal open-ended embodied system in minecraft via active perception. CVPR 2024. [4] Mendonca R, et al. Discovering and achieving goals via world models. NeurIPS 2021. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. After careful examination, I can say that most of my concerns were addressed. I still believe all the limitations I raised are valid and I strongly recommend the work to discuss them. Furthermore, while the "+1 reward" is not necessary to attain the best performance (and brought in the new results), it still does make a big difference, and the justification given is still vague. I also strongly recommend authors to incorporate the feedback on the claims about universally applicability and open-endedness, otherwise the work may sound misleading for some audience. Nonetheless, I don't think these points are grounds for rejection, so I am raising my score to 3. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer's feedback and the updated score. We are glad to hear that our efforts to address most of your concerns have been well-received. 
We appreciate your valuable suggestions, and we will ensure a thorough discussion and clarification of the points you raised, such as task-agnostic applicability and open-ended multi-modal task-solving capability, in the final version of our work. Moreover, regarding the "+1 reward", we have released training curves comparing performance with and without the "+1 reward" on our website. These results confirm that agents can eventually achieve similar performance without the "+1 shaping" over more training steps, demonstrating that the "+1" operation is an optional engineering choice rather than a fundamental component of our method. We acknowledge that the "+1" shaping does improve learning efficiency and stability. Since most of the originally predicted temporal distances are clustered near -1, the "+1" operation shifts the rewards from near -1 to near 0, and it is common for prior works [3-5] to use zero-centering rewards (e.g. clipping or rescaling rewards to be zero-centered) to enhance learning. Additionally, the "+1" operation is a theoretically grounded instance of potential-based reward shaping [6] that preserves policy optimality, where the potential function induces a constant reward shift. Furthermore, directly normalizing the original temporal distance (without "+1") and using the resulting rewards yields similar performance and learning efficiency to the "+1" reward, as shown in the new learning curves results on [our website](https://sites.google.com/view/founder-rl). This indicates that the positive impact of the "+1" operation on the performance is similar to that of reward normalization. In conclusion, the "+1" operation and normalization are merely reward shaping techniques. While these shaping methods may affect learning performance [1-2], they are not central to the core of our approach. Nevertheless, we sincerely thank the reviewer for raising and discussing this issue, as it provides us with valuable insights. 
We deeply appreciate the reviewer’s time, consideration, and constructive feedback. [1] Henderson P, Islam R, Bachman P, et al. Deep reinforcement learning that matters[C]//Proceedings of the AAAI conference on artificial intelligence. 2018, 32(1).\ [2] Van Hasselt H P, Guez A, Hessel M, et al. Learning values across many orders of magnitude[J]. Advances in neural information processing systems, 2016, 29.\ [3] Hessel M, Modayil J, Van Hasselt H, et al. Rainbow: Combining improvements in deep reinforcement learning[C]//Proceedings of the AAAI conference on artificial intelligence. 2018, 32(1).\ [4] Van Hasselt H, Guez A, Silver D. Deep reinforcement learning with double q-learning[C]//Proceedings of the AAAI conference on artificial intelligence. 2016, 30(1).\ [5] Andrychowicz M, Wolski F, Ray A, et al. Hindsight experience replay[J]. Advances in neural information processing systems, 2017, 30.\ [6] Ng A Y, Harada D, Russell S. Policy invariance under reward transformations: Theory and application to reward shaping[C]//Icml. 1999, 99: 278-287.
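The zero-centering argument in this thread — that adding a constant to per-step temporal-distance rewards shifts them from near -1 to near 0 without changing which of two equal-length trajectories has the higher return — can be checked with a toy sketch. The reward values below are made up for illustration and are not taken from the paper:

```python
import numpy as np

# Hypothetical per-step temporal-distance rewards, clustered near -1
# as described in the rebuttal.
traj_a = np.array([-0.98, -0.95, -0.90, -0.80])  # closing in on the goal
traj_b = np.array([-0.99, -0.99, -0.98, -0.97])  # barely progressing

def ret(rewards, gamma=0.99):
    """Discounted return of a reward sequence."""
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(discounts * rewards))

# The "+1" shaping: a constant shift applied to every per-step reward.
shifted_a, shifted_b = traj_a + 1.0, traj_b + 1.0

# The shift recenters rewards from near -1 to near 0 ...
assert abs(shifted_a.mean()) < abs(traj_a.mean())
# ... while the ranking of equal-length trajectories by return is unchanged.
assert (ret(traj_a) > ret(traj_b)) == (ret(shifted_a) > ret(shifted_b))
```

For equal-length trajectories the shift adds the same constant to every return, so the ordering of policies by return is preserved; only learning dynamics (e.g., value scales) are affected, consistent with the rebuttal's observation that agents eventually reach similar performance without it.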
Summary: The paper proposes FOUNDER, a novel framework that integrates Foundation Models (FMs) with World Models (WMs) to enable reward-free, open-ended decision-making in embodied environments. The central idea is to ground FM representations into the WM state space, allowing goal-conditioned RL (GCRL) through imagination. Instead of relying on manually crafted reward functions, FOUNDER estimates the temporal distance to goal states as an intrinsic reward signal, leading to superior task generalization. The proposed method is evaluated on multi-task offline visual control benchmarks, including the DeepMind Control Suite, Franka Kitchen, and Minecraft, demonstrating strong performance in learning task semantics from text or video prompts, especially in challenging cross-domain settings. Empirical results show that FOUNDER significantly outperforms prior methods like GenRL by leveraging deeper semantic understanding rather than relying on step-by-step visual alignment. Claims And Evidence: The paper claims that FOUNDER enhances task generalization by bridging FM knowledge with WM-based decision-making, which is supported by empirical results showing higher success rates across diverse domains. Another claim is that the temporal distance-based reward function provides a more reliable training signal than traditional visual similarity metrics, validated through improved reward consistency analysis. The authors also claim that FOUNDER performs well in cross-domain tasks, where large domain gaps exist between task prompts and the embodied environment. This is demonstrated by its superior performance in generalizing across different camera viewpoints and agent embodiments in tasks like cheetah and Minecraft. Methods And Evaluation Criteria: The evaluation protocol covered both locomotion and manipulation tasks with multiple baselines, including GenRL, WM-CLIP, and ablations of FOUNDER without temporal distance prediction. 
Aside from the absence of real-world applications, the experiments are extensive and detailed. The authors mention efficiency or inefficiency in a few places, so a quantitative result and analysis of efficiency would be more convincing. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experiments are extensive, with evaluations on both text-based and video-based task prompts across multiple simulated environments. Supplementary Material: The Supp. provides additional environment and experiment details. While these analyses make claims more concrete, further exploration into the scalability and computational efficiency of FOUNDER would be beneficial. Relation To Broader Scientific Literature: The paper is well-positioned within the literature on foundation models for reinforcement learning, goal-conditioned RL, and world models. It builds upon prior work such as GenRL, DreamerV3, and Choreographer, differentiating itself by focusing on task grounding through state-space alignment. Essential References Not Discussed: No Other Strengths And Weaknesses: The paper is well-written and straightforward to read, and the experiments are extensive and solid. One major concern is that it is unclear how the proposed method compares with other methods, especially model-free methods, in performance, efficiency, and even real-world applicability. I noticed the authors stated that "Model-free offline RL methods like IQL (Kostrikov et al., 2021b) and TD3+BC (Fujimoto & Gu, 2021) involve the cumbersome process of retraining from scratch for every task, and zero-shot FM-generated rewards using task prompts and observations has been shown to perform poor in (Mazzaglia et al., 2024). Therefore, we compare FOUNDER only with model-based methods." 
I hold a more conservative opinion on this: regarding the "cumbersome process of retraining from scratch for every task", I believe there is plenty of multi-task on-policy work that could also serve as a good and fair comparison, and the cited 2021 methods lag far behind the 2024 state of the art. Nevertheless, since requiring intensive extra experiments for model-free methods may be hard in the rebuttal phase, I would instead expect more analysis on the efficiency part. Other Comments Or Suggestions: No Questions For Authors: All experiments primarily focus on tabletop and 2D short-horizon tasks. Given this, do the authors anticipate that FOUNDER’s integration of WM could be even more beneficial for long-horizon tasks that require memory and temporal abstraction? Specifically, could the WM’s ability to model latent state transitions and predict future states provide an advantage in tasks where agents must recall and utilize past information over extended time horizons? If so, what modifications or enhancements might be necessary for FOUNDER to effectively scale to such settings? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful comments and valuable feedback on our paper. We are delighted to receive recognition of our method's solid performance and strong task generalization demonstrated by extensive and detailed experiments. Please find below our detailed responses to the reviewer’s comments. **Other model-free baselines**: Prior work (e.g., GenRL) has proven that traditional model-free offline RL methods (IQL, TD3+BC, TD3) perform poorly in this setting compared to model-based approaches. These methods rely on zero-shot pseudo-rewards generated by Foundation Models (FM) through cosine similarity between FM representations of task prompts and observations. However, FM rewards are not explicitly grounded in the embodied domain, limiting their effectiveness. Additionally, traditional model-free methods require training from scratch for each task, making them inefficient and impractical for multi-task solving. However, to ensure a comprehensive evaluation and address your concerns, we now include HILP [1], a multi-task model-free method that achieves strong zero-shot RL performance in goal-conditioned RL, and TD3, the best-performing single-task model-free baseline according to GenRL’s experiment. We assess their performance across eight tasks in the Cheetah and Kitchen domains, alongside GenRL for a clearer comparison. The results are on [our website](https://sites.google.com/view/founder-rl). We find that FOUNDER consistently outperforms both single-task and multi-task model-free baselines, and our prior claim is reinforced. We sincerely appreciate the reviewer’s suggestion regarding model-free baseline comparisons. **Efficiency and computational costs**: We appreciate the reviewer's emphasis on computational efficiency and provide a comparison of the four typical approaches mentioned. All experiments were conducted on an RTX 3090 GPU. 
For pretraining, training the MFWM in GenRL for 500k gradient steps requires approximately five days, whereas FOUNDER reduces this overhead to about three days by avoiding the seq2seq-style generative sequence modeling used in GenRL. Pretraining the Hilbert representation and foundation policy in HILP for 500k gradient steps takes around 30 hours. For downstream tasks, training the actor-critic on a given task prompt for 50k gradient steps takes under five hours for both GenRL and FOUNDER. In the case of model-free methods, HILP enables zero-shot adaptation to downstream tasks, whereas single-task model-free methods like TD3 require approximately seven hours to train from scratch for 500k gradient steps. Overall, we find that FOUNDER strikes a strong balance between performance and learning efficiency.

| | TD3 | HILP | GenRL | FOUNDER |
| :--- | :--- | :--- | :--- | :--- |
| **Performance** | Low | Low | Medium | High |
| **Multi-Task Adaptation** | From scratch (~7h) | Zero-shot | Finetune (~5h) | Finetune (~5h) |
| **Pretraining Overhead** | N/A | ~30h | ~120h | ~72h |
| **Overall Computational Cost** | Low | Medium-Low | High | Medium |

**Scalability of FOUNDER** We have included a brief discussion on the scalability of FOUNDER to long-horizon tasks in Section 6. While our current experiments focus on short-horizon tasks, FOUNDER’s architecture is inherently designed to support long-horizon reasoning through two key mechanisms: (1) the world model’s temporal abstraction via latent state transitions and (2) the foundation model’s semantic grounding, which enables hierarchical task decomposition. To fully unlock this potential, three enhancements may help: (i) integrating more powerful sequence modeling architectures (e.g., [2]) or continual learning methods (e.g., [3]) for persistent context tracking, (ii) implementing FM-guided curriculum learning [3] to phase in complex tasks, and (iii) developing a meta-controller for dynamic subgoal generation.
We thank the reviewer for raising this concern, and we will explore it in our future work.

[1] Park S, Kreiman T, Levine S. Foundation Policies with Hilbert Representations[C]//International Conference on Machine Learning. PMLR, 2024: 39737-39761.
[2] Samsami M R, Zholus A, Rajendran J, et al. Mastering memory tasks with world models[J]. arXiv preprint arXiv:2403.04253, 2024.
[3] Feng T, Wang X, Zhou Z, et al. EvoAgent: Agent Autonomous Evolution with Continual World Model for Long-Horizon Tasks[J]. arXiv preprint arXiv:2502.05907, 2025.

---

Rebuttal Comment 1.1: Comment: The authors' response addressed most of my concerns. I will maintain my original score.

---

Reply to Comment 1.1.1: Comment: We are glad to hear that our efforts to address your concerns have been well-received. Thank you for your time and consideration.
Mean-Shift Distillation for Diffusion Mode Seeking
Reject
Summary: The paper systematically explores the behavior of the popular SDS algorithm for sampling from diffusion models and concludes that it does not cover the true modes of the underlying distribution well. Similar observations are made for another recent alternative, SDI. Inspired by the Gaussian paths defining the training of diffusion models, the authors adapt the mean-shift algorithm to express the gradient of the probability function used for optimizing the images. They proceed to demonstrate the efficiency of the algorithm in recovering the modes of the distribution on both small synthetic datasets with known ground truth and a real image denoiser from StableDiffusion. The method produces more stable and cleaner images than the alternatives, which is also useful for tasks such as 3D reconstruction. Finally, the authors explore the parameter space to motivate their design choices.

## update after rebuttal

The rebuttal has addressed my concerns. I recommend acceptance because the paper states a clear, relevant and important problem (finding the modes), proposes a well-motivated and theoretically founded algorithm (MSD), and clearly shows that it achieves an improvement compared to a relatively recent work (SDI). It also provides illustrative low-dimensional examples clearly visualizing the algorithm behavior beyond just showing nice output images. Claims And Evidence: Yes. The authors claim this is a swap-in replacement for SDS, which they demonstrate with their algorithm. They also achieve better distribution coverage in their experiments. Methods And Evaluation Criteria: Yes, the method is well motivated. The design and evaluation of the experiments make sense, and it is a good fit for the problem. Theoretical Claims: The mathematical derivation of the MSD algorithm seems correct as far as I can judge. I cannot see any errors. Experimental Designs Or Analyses: The design is in line with common practice and it is well explained.
Supplementary Material: Yes, the supplementary pages provide useful additional results and algorithms. Relation To Broader Scientific Literature: Yes, the authors discuss relevant literature clearly and efficiently, establishing connections to their own work. Essential References Not Discussed: None. Other Strengths And Weaknesses:

**Strengths**
+ Well written, well explained.
+ Very nice illustration of the problem and achieved results. I like the utilization of the exact denoiser for analysis (even though it is inspired by prior work). The method seems efficient in what it tries to achieve.
+ Good results.

**Weaknesses**
- While the results of SDS are well represented in the paper, SDI is not as widely shown. E.g. it is not displayed in tables 1 and 2 and Fig 3. Edit: Additional values provided in the rebuttal.
- The z_t in eq. 9 is understood to be the noisy version of x_t (or just noise in the limit) but I do not think it is properly defined in the paper. Edit: A correction promised in the rebuttal.

**After rebuttal** The rebuttal has addressed my concerns and I recommend acceptance. Other Comments Or Suggestions: The formulation "gradient of the diffusion output distribution" from the abstract is a bit cumbersome. Typos: - L84: methodß Questions For Authors: 1) The 3D shapes in Figure 5 remain very highly saturated despite the method generally converging to the modes. Does it represent a quality aspect that cannot be well illustrated by the loss visualization in a low-dimensional version of the problem? 2) Is the mode-seeking behavior always desirable? Does it not come at a cost of diversity by avoiding less common but potentially interesting samples? 3) Figure 8 shows that the guidance in a limited interval is important for the results. Would a similar trick also be effective for SDI? 4) How many samples per prompt were used in the experiments? FID computation typically requires thousands of samples to avoid bias. [Bińkowski, Mikołaj, et al. "Demystifying MMD GANs."
International Conference on Learning Representations. 2018.] Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation and for recommending acceptance. We are glad they recognize our contributions. Below, we answer the questions raised in the review:

> While the results of SDS are well represented in the paper, SDI is not as widely shown. E.g. it is not displayed in tables 1 and 2 and Fig 3.

In our rebuttal to Reviewer cVvL we extend Tables 1, 2, and 3 with comparisons to SDI and a new baseline VSD [1], where missing. We qualitatively compare with SDI on the fractal dataset in Fig. 2. We will also extend it, along with VSD, to the spiral and pinwheel datasets shown in Fig. 3 in our revision.

> The z_t in eq. 9 is understood to be the noisy version of x_t (or just noise in the limit) but I do not think it is properly defined in the paper.

Thank you for highlighting this. We will define the term in our revision.

> Is the mode seeking behavior always desirable?

The desirability of mode seeking varies between applications. When trying to directly sample images from the trained model, it is true that we wish to sample from the full variety of the distribution instead of getting only the mode. Methods like DDIM aim for this. On the other hand, when we are optimizing an image (or using the image as a proxy to optimize, e.g., NeRF parameters), any gradient-based optimization will converge to a set of sparse points - local extrema - where the gradients are zero (if it converges at all). This is the intended use case for SDS, SDI, and our method, and in this case, it is not possible in general to have the optimization process converge to a distribution of points. Given that, the best we can guarantee is that the points the process converges to are aligned with the distribution. Mode seeking is our proposed way of achieving that.

> Highly saturated results in Figure 5

We have observed this “washed out” look in the image and 3D results, despite using a low guidance scale (CFG=7.5).
We believe this to be an artifact of using CFG in the pipeline, because these effects appear in both our method and the baseline and are consistent with what has been observed in previous literature [2][3]. The analysis on the fractal toy dataset by Karras et al., which we also use in our paper, suggests that this is the visual effect of “sharpening” the modes of the distribution that can also be observed in these 2D test cases.

> Figure 8 shows that the guidance in limited interval is important for the results. Would a similar trick also be effective for SDI?

SDI only performs one denoising step in each optimization iteration. It is not straightforward to apply guidance in a limited interval without substantial modifications to their algorithm.

> How many samples per prompt were used in the experiments

For each method, we generate 500 samples per prompt. With a total of 10 prompts, this gives us a total of 5,000 images per method.

References:
[1] Wang Z, Lu C, Wang Y, et al. “Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation”. Advances in Neural Information Processing Systems, 2023.
[2] Ho and Salimans, “Classifier-Free Diffusion Guidance”, 2021.
[3] Saharia et al., “Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding”, 2022.
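To make the metric in this exchange concrete: FID fits a Gaussian to the Inception features of each image set and computes the Fréchet distance between the two Gaussians. Below is a minimal pure-numpy sketch of that closed-form distance, for illustration only; it is not the torchmetrics implementation behind the reported numbers:

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(a)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2}).
    Uses Tr((S1 S2)^{1/2}) = Tr((S1^{1/2} S2 S1^{1/2})^{1/2}) so that
    only symmetric matrix square roots are needed."""
    diff = mu1 - mu2
    s1_half = _sqrtm_psd(sigma1)
    covmean = _sqrtm_psd(s1_half @ sigma2 @ s1_half)
    return float(diff @ diff + np.trace(sigma1 + sigma2) - 2.0 * np.trace(covmean))
```

Identical statistics give a distance of zero, and shifting one mean by a vector d with equal covariances gives exactly ||d||^2, which is a quick sanity check for any FID implementation.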
Summary: This paper presents mean-shift distillation, a diffusion distillation technique that provides a provably good proxy for the gradient of the diffusion output distribution. Claims And Evidence: The claims made in the submission are supported by clear evidence. However, the evidence is not convincing enough. For example, only one dataset is evaluated. It will be better if more datasets can be evaluated. In addition, the used prompts are limited. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the problem. Theoretical Claims: 1. In eq. (1), the meaning of $t$ and $\alpha(t)$ are missing. 2. In eq. (2) - eq. (5), it is clear that $p$ is the data density, but the meaning of $p(y)$ is missing. 3. In eq. (7), the meaning of $K_{\lambda}$ is missing. Experimental Designs Or Analyses: Compared methods are not sufficient. Besides FID which was published in 2017, the latest baselines should be compared. The paper on CLIP-based similarity was not cited. Supplementary Material: Yes, I reviewed Appendix B. Relation To Broader Scientific Literature: The key contributions of the paper are relatively related to the broader scientific literature as it is better than FID and CLIP-based similarity. Essential References Not Discussed: The compared methods, including FID and CLIP-based similarity, are not discussed in the related work. Other Strengths And Weaknesses: 1. Writing needs to be improved. 2. Evaluation is not comprehensive. Other Comments Or Suggestions: In "Distilling diffusion priors" of Section 2, it would be better if the limitations of the related works could be discussed. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We hope the following will resolve any confusion regarding our contribution and evaluations:

- We would like to emphasize that the goal of our work is to improve mode seeking and achieve better distillation for diffusion models, not to improve metrics for image generation, such as FID, or metrics for evaluating image-text alignment, such as CLIP similarity. Neither FID nor CLIP-based similarity are baselines for our method, but rather the metrics by which we compare our method to the baselines (SDS, SDI, VSD), as in e.g. Table 3. FID and CLIP-based similarity are widely used evaluation metrics for image generative models in all state-of-the-art research works. We also cite the library we used (torchmetrics from PyTorch Lightning).
- We would also like to point out that we evaluate on four datasets: three synthetic datasets demonstrating that current distillation methods fail even in toy distributions and the pre-trained text-to-image StableDiffusion-XL model for the practical setting. The latter was used in all our baselines (SDS, VSD, and SDI). Our text prompts are borrowed from DreamFusion (Poole et al., 2022).
- Regarding the three points raised in the theoretical claims part of the review, we will clarify in the paper: (1) $t$ is the time step used in denoising diffusion, and $\alpha(t)$ is the time-dependent weighting function. Both these terms are standard notation in the denoising diffusion literature [1]. Eq. 1 is the SDS gradient from DreamFusion, where $t$ is randomly sampled. (2) $p(y)$ is the data density, and $p(x)$ is the smoothed data density after convolving the data density with the Gaussian kernel. (3) $K_\lambda$ is the kernel used in mean-shift clustering, which determines the weight of nearby points for re-estimation of the mean. There have been various choices for kernels in mean-shift clustering, e.g., Gaussian, Epanechnikov, flat kernel [2].
Our analysis focuses on the case where $K_\lambda(x)$ is the Gaussian kernel used in Eq. 2. In Eq. 7, we write the general expression for mean-shift updates for any kernel function.

References:
[1] Denoising Diffusion Probabilistic Models. J. Ho, A. Jain, P. Abbeel. NeurIPS 2020.
[2] https://en.wikipedia.org/wiki/Mean_shift
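To illustrate the kernel-weighted mean re-estimation discussed in this exchange (the mean-shift update of Eq. 7, specialized to a Gaussian kernel), here is a minimal self-contained sketch on raw samples; it is not the paper's diffusion-based estimator:

```python
import numpy as np

def mean_shift_step(y, data, lam):
    """One mean-shift update: a kernel-weighted mean of the data,
    with a Gaussian kernel of bandwidth lam centered at y."""
    w = np.exp(-np.sum((data - y) ** 2, axis=1) / (2.0 * lam ** 2))
    return (w[:, None] * data).sum(axis=0) / w.sum()

def mean_shift(y0, data, lam=0.5, tol=1e-6, max_iter=500):
    """Iterate the update until convergence; the fixed point is a mode
    of the kernel density estimate of `data`."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        y_new = mean_shift_step(y, data, lam)
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y
```

The mean-shift vector `mean_shift_step(y, data, lam) - y` is proportional to the gradient of the smoothed density at `y`, which is the alignment property the rebuttal relies on.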
Summary: This paper proposes a new diffusion distillation technique called mean-shift distillation (MSD), intended to solve the problem of mode-seeking when leveraging pre-trained diffusion models for tasks like text-to-2D or text-to-3D optimization. Existing approaches such as score distillation sampling (SDS) are known to suffer from high gradient variance and bias, often converging to sub-optimal or “out-of-distribution” modes. This paper aims to correct that by explicitly seeking the modes of the learned data distribution in a principled way. The authors derive a gradient estimator based on classical mean-shift mode-seeking. They prove that the estimator is aligned with the gradient of the distribution’s density in an approximate or smoothed sense, which in theory leads to better mode alignment. In addition, a key technical idea is to sample from a product of the data distribution (modeled via diffusion) and a Gaussian kernel centered on the current iterate, thereby estimating the mean-shift vector with minimal extra overhead. The proposed approach can be dropped into existing SDS pipelines without retraining diffusion models. The authors implement a few heuristic stability strategies (e.g. limiting guidance to a restricted interval) to address numeric issues. Claims And Evidence: The main theoretical claim—that mean-shift distillation aligns with the gradient of the smoothed density—is a well-known property of mean-shift, and the authors adapt it cleanly for the diffusion setting. The improved mode alignment claim is supported by 2D toy experiments where the ground truth distribution is known and the authors can directly observe that SDS creates biased maxima whereas MSD recovers actual modes. The real-world text-to-2D and text-to-3D tasks, though more qualitative, support the claim that MSD produces sharper, more mode-aligned results (more faithful images, better 3D shapes, etc.) than SDS.
However, while the authors introduce heuristics to handle numeric instabilities in high dimensions, it remains somewhat ad hoc. The general statement that their approach is “stable with minimal changes” may need more thorough ablation on large-scale tasks to confirm its robustness. Methods And Evaluation Criteria: The mean-shift-based approach is a well-established idea for mode-seeking in density estimation. Extending it into diffusion-based generative modeling by sampling from a product density is a neat conceptual step. The evaluation focuses on synthetic 2D data (where the ground truth distribution is known) to check theoretical correctness, and then transitions to text-to-2D and text-to-3D tasks with a large pre-trained diffusion model (Stable Diffusion). This is a reasonable design to capture both theoretical clarity and real-world practicality. However, in my opinion, although the chosen baselines in this paper (SDS, SDI, plus direct sampling like DDIM) are the most widely recognized, there are other methods, including classifier guidance and variance reduction, according to the authors' reference list. It would be more convincing if the authors could compare their method with these baselines. Theoretical Claims: The authors leverage standard results from kernel density estimation and mean-shift. The proofs or derivations that show how the kernel’s gradient aligns with the data distribution’s gradient are standard but well-structured here. The demonstration that a product density approach (combining kernel $G_{\lambda}$ with $p$) can be sampled through a modified diffusion score is logical and consistent with known properties of diffusion sampling. Experimental Designs Or Analyses: The 2D toy experiments are carefully designed to highlight fundamental differences between SDS and MSD, especially focusing on distribution “phantom modes.” The approach is thorough and compelling for diagnosing method-level phenomena.
The text-to-2D and text-to-3D experiments with Stable Diffusion reflect the standard practice in the field. The paper sketches the pseudo-code for MSD, clarifies some crucial heuristics, and references open-source frameworks (ThreeStudio). This should be sufficient for other researchers to replicate the approach. Some details about integrator stability, partial inversion, or bandwidth scheduling might need more elaboration for a fully plug-and-play experience, but overall the methodological details are clearly stated. Supplementary Material: Yes, I reviewed all the parts in the supplement. Relation To Broader Scientific Literature: Existing “score distillation sampling” approaches like SDS are widely used in text-to-3D tasks, but often criticized for high variance, poor convergence, and potential for spurious modes. The proposed mean-shift view is a novel perspective, linking classical KDE-based ideas to diffusion-based generative modeling. In this paper, the authors situate their work among mainstream diffusion references (Song et al., Karras et al., etc.), and they cite methods for variance reduction or improved multi-step approaches in SDS. Essential References Not Discussed: There is no major omissions. Other Strengths And Weaknesses: Strengths: In this paper, the authors provided a clean, conceptually simple remedy (mean-shift) for improving the alignment of the distillation gradient with the distribution’s modes. The 2D analysis is carefully executed, showing the shortfalls of SDS and the better behavior of MSD. As for the practical implementation, there are minimal code changes and no model retraining required, which are quite impressive. Weaknesses: 1. 
You note that the theoretical foundation for classifier-free guidance (CFG) is less rigorous, and the paper does not provide fully rigorous proofs on the possible side effects of classifier-free guidance combined with mean-shift—though it is well known that classifier-free guidance can produce distributional shifts. Have you tried alternative guidance (e.g., from Karras et al., 2024) to see if it pairs well with MSD? 2. In practice, the authors rely on partial integration heuristics and limiting guidance to certain intervals. These are effective but might appear less principled. I guess additional exploration of robust integrators or dynamic bandwidth strategies could improve this. 3. As I mentioned above, this paper focuses heavily on SDS vs. MSD, plus SDI. While that is a fair baseline set, the discussion might be enriched by direct comparisons with specialized variance-reduction or novel guidance techniques. 4. In text-to-3D tasks, you mention 7k steps. Does the improved stability substantially reduce the required number of iterations or does it primarily improve final fidelity? Other Comments Or Suggestions: Please refer to the "Weaknesses" section. Questions For Authors: Please refer to the "Weaknesses" section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive review. We are glad they recognize our contributions. Below, we answer the questions raised in the review:

> Comparisons with other guidance techniques and variance-reduction methods.

**Guidance**. In low-dimensional settings (e.g., our toy experiments), our method can recover the modes and reconstruct the data distribution well without any guidance (see Fig. 1). This is aided by the fact that the conditional score estimate parameterized as $\epsilon_\theta(z_t, c)$ (the predicted noise from the pre-trained network) is good by itself, i.e., without the guided estimate $\tilde{\epsilon}_\theta(z_t, c)$. Empirically, we observe that without guidance, ancestral sampling techniques like DDIM produce samples that lie on the data manifold, albeit with a few outliers (see Fig. 2). This is not the case in the high-dimensional setting with experiments on Stable Diffusion. Here, $\epsilon_\theta(z_t, c)$ samples are noticeably bad and are predominantly outliers. Currently, the best fix is to augment these noise estimates with guidance to produce $\tilde{\epsilon}_\theta(z_t, c)$, the strategy prevalent in sampling algorithms. We inherit these practices when performing distillation. We recognize the existence of alternative guidance techniques to CFG, like Autoguidance (Karras et al., 2024). As these methods pair well with DDIM (and other ancestral sampling techniques), we believe the benefits will extend to distillation-based methods like ours. Ultimately, from the perspective of distillation, different guidance simply changes the shape of the output distribution but does not fundamentally change the mechanics of diffusion. We used CFG in all our experiments as it is more widely used, has hyperparameters (guidance scale) that have been more rigorously tested by the community, and was used in all our baselines (SDS and SDI).
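For reference, the classifier-free guidance combination discussed above (Ho & Salimans, 2021) is a one-line extrapolation from the unconditional toward the conditional noise estimate. A minimal sketch, with plain arrays standing in for the network outputs:

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate from the unconditional
    noise estimate toward the conditional one with guidance scale w.
    w = 0 recovers the unconditional estimate, w = 1 the conditional one,
    and w > 1 over-emphasizes the conditioning signal."""
    return eps_uncond + w * (eps_cond - eps_uncond)
```

In a sampler or distillation loop, this combined estimate replaces the raw conditional prediction at each denoising step; the guidance scale is the main knob controlling the "sharpening" of the output distribution.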
We extend **Table 2** with comparisons between two guidance techniques, CFG vs Autoguidance (Karras et al., 2024). We add a new baseline VSD [1] (suggested by Reviewer cVvL) and our original baseline SDI. We show just the fractal dataset. Please see the **rebuttal to Reviewer cVvL** for additional comparisons with VSD. Left: *learned denoiser with CFG* / Right: *learned denoiser with Autoguidance*. Best performance among distillation-based methods is highlighted in **bold**.

|Dataset|Method|NLL↓|Precision↑|Recall↑|MMD↓|
|-|-|-|-|-|-|
||DDIM†|-1.59/-1.67|0.97/0.96|0.44/0.79|257.43/0.25|
||SDS|15.96/11.33|0.17/0.04|0.03/0.03|3875.11/71.05|
|Fractal|VSD|18.97/11.52|0.21/0.03|0.05/0.03|3845.41/**70.25**|
||SDI|27.375/0.652|0.30/0.51|**0.48/0.51**|**69.23**/15089.58|
||Ours|**-1.15**/**-1.99**|**0.94**/**0.97**|0.40/0.43|133.41/122.94|

**Variance reduction**. One of the variance reduction methods we cite, SteinDreamer [2], proposes using control variates to minimize the excessive variance present in SDS. This excessive variance comes from the randomly sampled noise added to the target (or rendered images). Unlike SteinDreamer, SDI finds a better approximation of this desired noise term, eliminating one of the root causes of the excessive variance instead of compensating for it later. We believe this makes SDI a better candidate among previous variance reduction methods. Moreover, the official implementation for SteinDreamer is not yet publicly available. We will be happy to make comparisons as soon as an official implementation gets published. We would be happy to add comparisons to any other specific work the reviewer has in mind.

> The impact of integration heuristics and their ad-hoc nature, additional exploration of robust integrators or dynamic bandwidth strategies.
We tried robust integrators in toy settings, and while they do indeed improve numerical stability, the additional score model evaluations they require make them impractical in a higher-dimensional setting. Higher-order numerical solvers like PNDM [3] did not improve numerical stability in our setting. We observed limited guidance to be the simplest to implement and most impactful. Adaptive bandwidth strategies have been proposed for Kernel Density Estimation in the past [4]. We hope to extend these techniques for our use case in the future.

> Does the improved stability substantially reduce the required number of iterations, or does it primarily improve final fidelity?

Yes, the improved stability reduces the number of optimization iterations while maintaining the fidelity of the output.

References:
[1] Wang Z, Lu C, Wang Y, et al. “Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation”, 2023.
[2] P. Wang et al. “Steindreamer: Variance reduction for text-to-3d score distillation via stein identity”, 2023.
[3] L. Liu et al. “Pseudo Numerical Methods for Diffusion Models on Manifolds”, 2022.
[4] D. Comaniciu et al. “The Variable Bandwidth Mean Shift and Data-Driven Scale Selection”, 2001.
Summary: This paper introduces mean-shift distillation, which improves the convergence behavior of the SDS objective. Experiments on synthetic and real datasets demonstrate the effectiveness of the proposed method. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A. Experimental Designs Or Analyses: Yes. I've checked the soundness and validity of all the experiments. Supplementary Material: Yes. I've reviewed the whole supplementary material. Relation To Broader Scientific Literature: The mean-shift distillation developed in this paper may provide a new tool for 3D generation as well as one-step text-to-image generation. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: The proposed gradient proxy improves the SDS objective by reducing variance and enhancing mode alignment. Weaknesses: 1. Writing Clarity: This paper requires significant revision for clarity. The notations are particularly confusing. For example, $y$ represents the text embedding in the right column of line 70, but is also used for data samples in Eq. 2, and has a different meaning in Eq. 14. Additionally, Eq. 9 is unclear as $z_t$ is not defined. There are lots of undefined notations between Lines 39 and 45. 2. Baseline Comparison: The baseline appears relatively weak, as SDS is an early work in this field. Other Comments Or Suggestions: 1. Missing $\sigma_t$ in Eq. 12. 2. Table 2 does not contain a comparison between an ideal and a learned denoiser. 3. There are many typos in Algorithm 2. For example, the indices in line 560 and 568, $z_T$ in line 569. In line 577 to 579, it seems that $y_1 = y_2 = \cdots = y_N$. Questions For Authors: 1. How are the losses in Algorithms 1 and 2 derived? 2. Since the variational score distillation (VSD) loss developed by Prolificdreamer [1] is widely used for distilling diffusion models in image generation [2], how does the proposed method compare to the VSD? 
Reference [1] Wang Z, Lu C, Wang Y, et al. Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation[J]. Advances in Neural Information Processing Systems, 2023, 36: 8406-8441. [2] Yin T, Gharbi M, Zhang R, et al. One-step diffusion with distribution matching distillation[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2024: 6613-6623. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for recognising our contributions. We acknowledge the feedback regarding our notations. ## 1 Writing clarification. As we cannot update the submission, we provide them below. We believe these are minor corrections and will incorporate them in our revision. |Notation|Clarification| |-|-| |$y$ in line 70| In image diffusion models, $y$ is the common convention for text prompt. Alternatively, $c$ is used for any conditioning signal. We will use the latter.| | $y$ in Eq. 2, Eq. 14| In mean-shift and KDE, $y_i$ can denote samples from product density or successive locations of kernel $G_\lambda$ [Comaniciu & Meer, 2002]. In Eq. 2 we use the same name for the integration variable of the convolution so that we would match this convention once we discretize the integral. Karras et al. (2022) use $y_i$ for data samples. We recognize the confusion.| |$z_t$ in Eq. 9| Line 158, right column, we use the product sampling trick from Song et al., 2021b; Dhariwal & Nichol, 2021. Here, $x_t$ or $z_t$ refers to the noisy sample or noisy latent, respectively. We will explicitly define this.| |Algorithm 2, indices (line 560, 568) and $z_T$ (line 569)| We acknowledge this is not standard notation for discretizing $t$ and the typo in line 569. We will change indices to $t = T, …, 1$ and change $z_t$ in line 561, 569 to $z_{t-1}$| |undefined notations between Lines 39 and 45| This paragraph is intended to define those notations, but contains one typo; $g$ and $M$ are supposed to be the same.| |Line 577 to 579, is $y_1=…=y_N$?| While $x^k$ and $\lambda$ are the same, there is stochasticity in the solver. Initial $z_T$ is either randomly initialized (naive) or initialized via inversion (stable). The latter is not deterministic and can have numerical errors. Denoising process is also not deterministic. Thus, each $y_i$ is different, and averaging them gives us unbiased estimate of the gradient. 
We will make stochasticity explicit.|

> How are losses in Algorithms 1 and 2 derived?

Score distillation methods (SDS, SDI, Ours) provide gradients of the loss w.r.t. target parameters but not the loss itself (see Appendix A.4 in DreamFusion (Poole et al., 2022)). Losses are given implicitly. In our toy experiments, we reconstruct the loss function by integrating finite differences and visualize the results in Figs. 1, 2, and 3. Section 4.2 provides more details.

## 2 Additional baselines.

We compare with VSD (ProlificDreamer, 2023) below. Note the following about VSD:
- VSD claims (Appendix C.3) that SDS's mode-seeking causes over-saturated, low-diversity results. We find SDS is not mode-seeking; its high variance and bias away from modes are the reasons for poor results. See our rebuttal to Reviewer 6KVp on why mode-seeking is desirable for distillation.
- VSD needs task-specific fine-tuning (via LoRA or training the U-Net) to estimate the variational score $\nabla_{z_t} \log q^{x}_t$. Our method needs no fine-tuning, and it’s unclear how VSD’s approach generalizes across domains.
- SDI outperforms prior works like VSD in text-to-3D generation. SDI is also easily extensible to any domain, making it a strong baseline.

We extend **Table 1** comparing the ideal and learned denoiser, adding the new baseline VSD alongside the original baseline SDI. Left: *ideal denoiser* / Right: *learned denoiser*. Efficiency ($\uparrow$).

|Dataset|SDS|VSD|SDI|Ours|
|-|-|-|-|-|
|Fractal|-7.4/-6.9|-5.9/-3.8|14.2/14.2|13.4/7.7|
|Spiral|-8.5/-7.6|-6.8/-4.4|13.9/14.2|13.4/6.3|
|Pinwheel|-7.8/-7.0|-6.4/-3.9|14.2/14.2|13.8/7.1|

Similarly, we extend **Table 2**. Best performance among distillation-based methods in **bold**.
|Dataset|Method|NLL↓|Precision↑|Recall↑|MMD↓|
|-|-|-|-|-|-|
||DDIM|-1.85/-1.51|0.97/0.95|0.93/0.96|0.86/0.007|
||SDS|36.15/9.12|0.08/0.01|0.03/0.0|328.03/87.04|
|Fractal|VSD|9.97/9.88|0.05/0.10|0.02/0.05|230.97/94.68|
||SDI|24.28/**-2.87**|0.27/**0.97**|0.01/0.12|**29.927**/459.89|
||Ours|**-1.32**/-2.02|**0.92/0.97**|**0.33/0.42**|30.46/**12.79**|
|||||||
||DDIM|-1.39/-1.32|0.97/0.96|0.93/0.96|0.41/1.16|
||SDS|30.37/8.13|0.02/0.04|0.03/0.11|13.85/274.35|
|Spiral|VSD|10.15/8.90|0.04/0.07|0.09/0.14|23.46/271.84|
||SDI|35.64/19.16|0.1/0.12|**0.9/0.42**|39.51/2008.3|
||Ours|**-1.28/-1.51**|**0.99/0.98**|0.18/0.18|**4.49/18.41**|
|||||||
||DDIM|-1.19/-1.1|0.97/0.97|0.94/0.97|1.05/0.27|
||SDS|2.29/2.00|0.85/0.90|0.03/0.005|**5.18**/36.37|
|Pinwheel|VSD|3.34/2.28|0.65/0.97|**0.04**/0.019|6.78/33.36|
||SDI|28.31/17.33|0.17/0.51|0.001/**0.15**|6.13/98.09|
||Ours|**-1.94/-2.19**|**0.99/0.99**|0.01/0.13|5.83/**7.25**|

Note: With the ideal denoiser, the efficiency of our method is comparable to SDI. With the learned denoiser, SDI has better efficiency. Despite this, we produce better samples.

Similarly, we extend **Table 3**.

|**Method**|**FID↓**|**CLIP-SIM(L/14)↑**|
|-|-|-|
|DDIM|-|44.1±2.8|
|SDS|199|27.7±1.9|
|VSD|158|30.8±1.4|
|SDI|166|31.0±0.7|
|Ours|114|32.6±0.8|

We accompany Table 3 with an additional qualitative comparison for text-to-2D: https://imgur.com/a/BWpHde5.
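The finite-difference loss reconstruction mentioned above can be illustrated with a minimal 1-D sketch (illustrative only, not the actual experiment code; all names here are ours): since score distillation supplies only gradients, the implicit loss along a parameter sweep can be recovered up to an additive constant by trapezoidal integration of gradient samples.

```python
# Illustrative 1-D sketch: recover an implicit loss (up to an additive
# constant) by trapezoidal integration of sampled gradient values.
def reconstruct_loss(xs, grads):
    loss = [0.0]  # the constant of integration is arbitrary
    for i in range(1, len(xs)):
        dx = xs[i] - xs[i - 1]
        loss.append(loss[-1] + 0.5 * (grads[i - 1] + grads[i]) * dx)
    return loss

# Sanity check with a known loss L(x) = x**2, whose gradient is 2x:
xs = [-2.0 + 0.02 * i for i in range(201)]
grads = [2.0 * x for x in xs]
loss = reconstruct_loss(xs, grads)  # recovers x**2, shifted so loss[0] = 0
```

In higher dimensions the same idea applies along a chosen slice of the parameter space, which is how a gradient-only method can still be visualized as a loss landscape.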
Gradient Boosting Reinforcement Learning
Accept (poster)
Summary: This paper presents **Gradient Boosting Reinforcement Learning (GBRL)**, an attempt to integrate **Gradient Boosting Trees (GBT)** into reinforcement learning (RL). GBTs are known for their performance in structured data tasks, but RL has long relied on deep neural networks. The authors propose modifications that allow GBTs to function effectively in RL. The main idea is to adjust standard GBT updates so they can handle the **dynamic and non-stationary nature** of RL. Instead of training separate networks for policy and value functions, GBRL uses a **single decision tree ensemble** for both, which reduces memory consumption. Experiments show that GBRL **performs well in structured environments like MiniGrid**, is **competitive in continuous control tasks**, but **struggles with high-dimensional unstructured data** such as Atari-RAM. The authors also implement **GPU acceleration**, making large-scale training feasible. Claims And Evidence: The claims regarding GBRL’s superior performance in structured RL tasks are well-supported by experiments. However, the claim that GBRL can be a general alternative to deep RL models is not fully justified, as results on high-dimensional tasks like Atari-RAM show significant performance gaps. ## update after rebuttal Thanks for the response. I will keep my score. Methods And Evaluation Criteria: The methods and evaluation criteria are generally appropriate. The selection of structured and continuous control tasks aligns well with the goal of testing GBTs in RL. However, including more comparisons against structured-data-oriented models could improve the fairness of the evaluation. Theoretical Claims: The paper lacks formal theoretical guarantees. There is no proof of convergence, and the theoretical discussion on stability in non-stationary RL environments is minimal. Experimental Designs Or Analyses: The experimental setup is well-structured and diverse, covering different RL environments. 
However, failure cases, particularly in high-dimensional settings, are not deeply analyzed.
Supplementary Material: Yes
Relation To Broader Scientific Literature: N/A.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses:

**Strengths**

One clear strength is **novelty**—very few works have tried scaling gradient boosting for RL. The **shared tree-based actor-critic model** is an interesting way to cut down memory requirements. The paper’s **experimental coverage** is also broad, evaluating performance across structured, continuous control, and high-dimensional tasks. GBRL also **enhances interpretability in RL**. Unlike deep networks, decision trees provide a transparent way to understand decision-making, which could be crucial for real-world deployment. Additionally, **practicality** is a strong point. GBRL integrates with existing RL frameworks like **Stable Baselines3**, lowering the barrier for adoption.

**Weaknesses**

However, there are some issues. **Theoretical justification is lacking**—there’s no proof of convergence, and the paper does not fully explore how GBTs handle non-stationary RL data. While GBRL does well in structured environments, **its performance in unstructured tasks is poor**, especially on **Atari-RAM**, and the authors don’t offer a deep explanation of these failures. Another limitation is **baseline comparisons**. The experiments mostly compare GBRL to standard MLP-based RL models but **ignore specialized architectures** like TabNet or SAINT that are designed for structured data. Lastly, **computational cost is high**. GBRL requires **significantly more GPU memory** than MLPs (~24GB vs ~3GB in some cases), making it less practical for resource-constrained environments.

Other Comments Or Suggestions: The paper would benefit from a more detailed discussion on why GBRL struggles in high-dimensional settings and an analysis of computational trade-offs in real-world deployment.
Questions For Authors: Please refer to the weaknesses section for key concerns regarding theoretical justification, baseline comparisons, and failure case analysis. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review and valuable feedback. Below we address each of your concerns.

1. **Lack of theoretical proof of convergence & handling non-stationarity**: While comprehensive theoretical analysis is beyond the current scope, GBRL builds upon established foundations from both gradient boosting and policy gradient theory. Each tree in our approach minimizes a functional gradient step, making GBRL's convergence analysis potentially related to known policy gradient convergence results. The recent work on softmax policy gradient convergence rates [1] shows O(1/t) convergence for tabular settings with true gradients. We believe similar analysis could potentially apply to GBRL, as our tree-based approximation maintains the key properties that enable convergence -- specifically, GBRL preserves the policy gradient's directional information while providing a piecewise approximation of the advantage landscape. Furthermore, GBT methods have convergence guarantees in various settings. Recent theoretical works have analyzed convergence for GBT on convex losses [2], and online gradient boosting methods have known convergence guarantees under certain conditions [3,4]. The work on non-parametric policy gradients [5] also provides a foundation for our approach. A full theoretical analysis would require examining how the tree-based approximation interacts with policy improvement dynamics and whether it preserves the Łojasiewicz condition identified in [1]. We believe our empirical evidence of convergence across diverse environments provides strong practical support for GBRL while setting the stage for future theoretical analysis.

2. **Poor performance on high-dimensional unstructured tasks (Atari-RAM)**: We agree with your observation. GBRL's weaker performance in high-dimensional raw-feature spaces (such as Atari-RAM) arises from a fundamental difference in inductive biases between decision trees and NNs [6, 7].
Decision trees partition the input space according to single-feature thresholds, whereas NNs learn complex feature compositions via continuous transformations. This makes NNs more suited for environments where useful information is embedded in combinations of features. This highlights the complementary nature of GBRL to existing NN-based methods rather than positioning it as a replacement. We will expand our discussion in the revised paper to explicitly highlight this limitation and its implications.

3. **Lack of comparisons to specialized architectures (TabNet, SAINT)**: The goal of this work is to combine the popular GBT tool with RL, analyze its performance, and overcome the challenges that this combination introduces. One major advantage that we found is GBT's robustness to OOD scenarios compared to NNs, not only its handling of structured data. While specialized architectures like TabNet/SAINT excel at purely tabular data, our environments span a broader range of settings including those with mixed features, temporal dependencies, and various action spaces. Our primary focus was therefore on comparing with the standard MLP-based approaches most commonly used in RL rather than models optimized solely for tabular data. This allows us to assess GBRL's performance in the broader context of RL applications.

4. **Computational cost**: You raise a valid concern about the scalability of our approach. Please see our detailed response to **Reviewer KuLP** regarding the unbounded growth of the ensemble as the policy improves.

[1] Mei, Jincheng, et al. "On the global convergence rates of softmax policy gradient methods." International conference on machine learning. PMLR, 2020.
[2] Cortes, Corinna, Mehryar Mohri, and Dmitry Storcheus. "Regularized gradient boosting." Advances in neural information processing systems 32 (2019).
[3] Beygelzimer, Alina, et al. "Online gradient boosting." Advances in neural information processing systems 28 (2015).
[4] Hu, Hanzhang, et al.
"Gradient boosting on stochastic data streams." Artificial Intelligence and Statistics. PMLR, 2017. [5] Kersting, Kristian, and Kurt Driessens. "Non-parametric policy gradients: A unified treatment of propositional and relational domains." Proceedings of the 25th international conference on Machine learning. 2008. [6] Grinsztajn, Léo, Edouard Oyallon, and Gaël Varoquaux. "Why do tree-based models still outperform deep learning on typical tabular data?." Advances in neural information processing systems 35 (2022): 507-520. [7] Rahaman, Nasim, et al. "On the spectral bias of neural networks." International conference on machine learning. PMLR, 2019.
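For intuition, the functional-gradient view referenced in point 1 can be illustrated with a minimal sketch (plain Python, illustrative only, not the paper's GPU implementation): each boosting round fits a depth-1 tree (a stump) to the negative gradient of a squared loss, so each tree realizes one functional gradient step.

```python
# Illustrative sketch: gradient boosting as repeated functional gradient
# steps. Each round fits a stump to the negative gradient of a squared
# loss, mirroring how one tree approximates one gradient step.
def fit_stump(xs, targets):
    """Find the 1-D threshold split minimizing squared error against targets."""
    best = None
    for thr in xs:
        left = [t for x, t in zip(xs, targets) if x <= thr]
        right = [t for x, t in zip(xs, targets) if x > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((t - (lm if x <= thr else rm)) ** 2
                  for x, t in zip(xs, targets))
        if best is None or err < best[0]:
            best = (err, thr, lm, rm)
    return best[1], best[2], best[3]

def boost(xs, ys, n_trees=50, lr=0.5):
    pred = [0.0] * len(xs)
    for _ in range(n_trees):
        # Negative gradient of 0.5 * (y - pred)**2 w.r.t. pred is (y - pred).
        grads = [y - p for y, p in zip(ys, pred)]
        thr, lm, rm = fit_stump(xs, grads)
        pred = [p + lr * (lm if x <= thr else rm)
                for p, x in zip(pred, xs)]
    return pred

# A step-function target is fit almost exactly after enough boosting rounds:
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]
pred = boost(xs, ys)
```

Because each tree only corrects the current residual, the split structure captures abrupt transitions (like the step above) that smooth function approximators interpolate; this is the inductive-bias difference discussed in point 2.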
Summary: This paper presents a GPU-accelerated method for using gradient-boosting trees in RL. The contribution lies in the fact that existing methods for boosting trees are tailored to offline learning, whereas the authors' method is fully incremental. They then empirically show many benefits of using tree-based function approximation in RL settings, such as robustness to spurious correlations and input perturbations. The algorithm is RL-loss agnostic, can use 'weight sharing' (or more accurately '(sub-)tree sharing'), and compares well against a neural network baseline. Claims And Evidence: The authors describe a variety of benefits of using tree-based function approximation, all of which they address in their experiments. Limitations of their method are also briefly discussed. So the claims match well with what is shown. However, I do think that e.g. lines 71-72, column 1 deserve a more nuanced discussion. The results are simply not decisive enough to conclude superior performance on structured tasks. Methods And Evaluation Criteria: No complaints. Theoretical Claims: NA Experimental Designs Or Analyses: The experimental setup is exhaustive, explained well, and aligns with the claims that the authors want to make. However, much of the discussion of the results could be presented more scientifically rather than as the current sales pitch, especially in Section 5.1. **For example:** in line 240, column 2, the authors praise results from the appendix on PutNear, Four Rooms, and Fetch. Figure 14, however, shows that this is not consistent over RL algorithms, and note that only 5 seeds are shown. The GBRL method fails on RedBlueDoors and KeyCorridor with A2C, and is slower to learn with AWR compared to the NN. But it does seem to often outperform the NN when paired with PPO. Why not summarize much of the current discussion and shed some light on why some RL losses yield strong differences in performance?
This would improve impact so practitioners understand how GBRL interacts with such choices. Although, I doubt that this question can be answered with the currently low number of seeds... Supplementary Material: Please tidy up the figures in the supplementary material to fit within the margins, pages 17-20. Also, sort or split/group the methods for better clarity and keep it consistent across figures. Typo in L667 section title. Unclear what the $\pm$ intervals mean in the result tables from the captions, best to repeat this from the main text. The text doesn't say how many seeds were run for the learning curves in C., neither does it say what the confidence bands show in figures 11-14. Repeat this from or refer to the main text. Relation To Broader Scientific Literature: I am not very familiar with prior work that uses gradient boosted trees in RL. The authors already cite the work by Abel D., et al. (2016) and Brukhim N. et al. (2023). However, there have also been approaches for decision tree learning within RL settings, see Silva et al. (2020) or Vos D. (2023). These are not mentioned in the paper right now, but could strengthen the 'tabular data' paragraph in the related work for concurrent tree-based approaches within RL.

---

Silva, A., Gombolay, M., Killian, T., Jimenez, I. & Son, S. (2020). Optimization Methods for Interpretable Differentiable Decision Trees Applied to Reinforcement Learning. _Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics_. PMLR. 108:1855-1865.
Vos, D., & Verwer, S. (2023). _Optimal decision tree policies for Markov decision processes_. In _Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence_ (pp. 5457–5465). IJCAI.
[https://doi.org/10.24963/ijcai.2023/606](https://doi.org/10.24963/ijcai.2023/606) Essential References Not Discussed: NA Other Strengths And Weaknesses: ## Strengths Language, wording, structure, style, and coherence are excellent. This is a well-written paper that should be understandable for a broad audience. Limitations of the method and implementation are clearly formulated. In general, there is not too much to critique about this paper. There are indeed still limitations; however, the aim, motivation, and approach are all very clear. The authors also provide a codebase that is implemented in CUDA and compatible with commonplace baseline repositories. This paper should be accepted as I think it establishes a valuable starting point for future work. --- ## Weaknesses The large GPU memory requirements and inefficient compression of old trees do not make this algorithm super competitive, although it does have redeeming features that future work should directly explore. The last sentence of the conclusion is too salesman-like and can be more nuanced. GBRL is a nice alternative to existing solutions. However, I don't see it as a 'step toward' real-world tasks. It rather feels like a sidestep. The critique from 350-353 on NNs is a bit arbitrary; you can always come up with examples that break particular models. If I use LayerNorm within my NN, then the output will always be in a reasonable range even if OOD. This can be formulated better. Other Comments Or Suggestions: Line 85 column 2, use \citet instead of \citep for 'Kersting & Driessens (2008) ...'. Section 3, why are the variables bold-face? To indicate a vector? This confused me for a moment since the reward $r$ is also bold-face, which made me think we were dealing with a multiobjective problem. Also, in Figure 1 the state $s_t$ is not bold while in the main text it is. So, I suggest a slight revision for consistency and clarity. Remove 'superior' from line 312 column 2.
The font size of the plot ticks and labels in Figure 5 is too small. The same goes for the ticks in Figures 3, 8 and 9. The title/y-labels in Figure 6 are a bit confusing; I suggest right-aligning the current y-labels and changing the ysuplabel to 'episodic reward'. Lines 272-273 column 1 are difficult to read. Lines 380-384 column 2 contain a broken sentence. The summary in lines 425-428 column 1 belongs in the conclusion (obsolete). At times there are single words split off from their text due to the format, which obstructs reading. Please revise the following:
- Line 290 column 2, 'dilution'
- Line 365 column 2, '$s_t = s_t + \epsilon |s_t|$'
- Line 416 column 1, 'missing.'

Many references are formatted inconsistently or incorrectly, or do not have proper journals or venues listed. Examples:
- Line 482 column 1, the URL uses a DOI link.
- Line 458 column 2, Duan T. (2020) is an ICML 2020 paper.
- Line 471 column 2, Gorishniy Y. (2023) is a NeurIPS 2021 paper.

Questions For Authors: Figure 5; which RL algorithm was used for GBRL and NN, PPO? What neural network architecture was used? Figure 11; how come AWR-GBRL performs so poorly on Pendulum-v1? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your detailed review and constructive feedback. We address your main points below and will incorporate the suggested revisions in the final version.

**Regarding line 71-72 and performance on structured tasks**: We agree with this point. We will revise the contribution statement (lines 71-72) to: "In popular environments, our framework demonstrates competitive performance against NNs while showing particular strengths on structured tasks."

**Regarding the critique on NNs (lines 350-353)**: We agree that any model class—including NNs and GBTs—can be made to fail or succeed depending on the setup. Our intent is to highlight a key difference in inductive biases. As discussed in Jeffares et al. (2024) [1], the kernel representations induced by GBTs are bounded and often behave more predictably on irregular or out-of-distribution inputs, while NN-tangent kernels can be unbounded and vary significantly on test points far from the training distribution. While techniques like LayerNorm can help stabilize neural network activations, they don't fundamentally change the difference in inductive biases and they often don't fully solve the extrapolation issues in OOD scenarios. We will revise the relevant paragraph in the paper to reflect this more clearly and avoid overgeneralization.

**Why do certain algorithms perform differently than others?** Apart from inherent algorithmic differences, the main difference for GBRL is the number of fitted trees per update.
- **A2C** - performs a single gradient step per rollout using the entire batch. In GBRL, this results in a single constructed tree per rollout. Since GBTs update the value and policy functions directly, such large, coarse updates can destabilize learning.
- **PPO** - in contrast, uses minibatches and multiple epochs per rollout. This allows GBRL to fit several small trees per rollout, yielding more stable and incremental updates.
- **AWR** - is an off-policy algorithm.
Therefore, GBRL could potentially build many trees without adding sample complexity. However, we noticed that our best performing models incurred a huge computational cost to finish training. As a result, we resorted to reducing the number of gradient steps per update at the expense of performance. Future work could investigate methods for optimizing GBRL with a fixed tree budget (such as pruning and distillation), which may enable the applicability of more demanding RL algorithms, such as AWR, using GBRL. In general, as each tree approximates a gradient step, GBRL benefits from frequent updates to refine its ensemble gradually. This makes it particularly well-suited to algorithms that support minibatching and multiple epochs per rollout. We will include a targeted PPO-based ablation study in the appendix to demonstrate the impact of minibatching on performance. Results can be found in the following link: https://pasteboard.co/ENlUg1ffghZU.png.

**Other improvements**: We greatly appreciate your detailed comments on presentation and formatting issues. We will implement all of your suggested improvements to enhance the clarity, readability, and technical accuracy of our paper.

**Regarding specific questions**:
1. **Figure 5; which RL algorithm was used for GBRL and NN, PPO? What neural network architecture was used?** We compared GBRL to NNs using PPO and a standard MLP with two hidden layers, each containing 64 units and tanh activations. This architecture is the default in Stable Baselines3, and a common choice in the RL literature, for these environments. As such, we used the same NN architecture for the Equation environment. We used PPO for all experiments in Section 5.2.
2. **Figure 11; how come AWR-GBRL performs so poorly on Pendulum-v1?** AWR-GBRL performs poorly on Pendulum-v1, despite working well on many other environments. Since PPO-GBRL and AWR-NN succeed in this task, we believe the issue lies in the optimization process specific to AWR.
Although we did try to tune hyperparameters for this setup, it's likely that the configuration remains suboptimal.

[1] Jeffares, A., Curth, A., and van der Schaar, M. "Deep Learning Through a Telescoping Lens: A Simple Model Provides Empirical Insights on Grokking, Gradient Boosting & Beyond." Advances in Neural Information Processing Systems, 37, 2024.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response and commend them for adding additional results. I have also read the comments by the other reviewers, and the authors' rebuttals. My score and impression of the paper remain the same: **4-accept**. This is a good paper, which I believe can form a strong basis for future work. I have left a few last comments on your rebuttal and paper, and one question.

---

**Question on extra PPO results batch-sizes**

This result is interesting, but a bit counterintuitive. Typically performance tends to increase with large batch sizes (lower variance), but in your method larger batches decrease performance. I read your response to Reviewer KuLP, and see that $B=64, 128$ perform best due to a trade-off in learning speed and variance. Do you have additional thoughts on how to deal with this? Can you support reasonable values for $B$ in environments other than CartPole from this?

---

> Regarding the critique on NNs (lines 350-353)

I agree with your reasoning on how NNs and GBTs inherently induce different inductive biases. I see that you cite Jeffares' work in the main paper, but the point you sketch here is not obvious in the main paper; it's practically buried. I think the paper will benefit from including this discussion. I believe this also runs tangent to the critiques listed by all other reviewers on the theoretical grounding of GBTs for RL. I'd suggest putting this in the related work, in the tabular data paragraph. At the moment, this paragraph only states that GBT can perform better than NNs, but not why.
My example of LayerNorm was only an attempt at a minimal counterexample to invalidate your point in Section 5.2.3. So, explain to the reader why GBTs can guarantee more predictable OOD behaviour, which can be supported by the architecture bias guaranteeing kernel boundedness.

---

> Why do certain algorithms perform differently than others?

Thanks for the clear explanation. I could not find this in the paper (maybe I read over it); please make sure to add this difference in RL algorithms and their nuances on your tree construction somewhere with a separate section heading (appendix is fine).

---

> Regarding specific questions

Please add your answer to 1) to the supplementary, even if it uses the stable-baselines3 defaults, or otherwise cite a source that exactly details this. For 2), Figure 11 bottom-center, I was wondering if you had specific thoughts on this, since it was the only outlier in this figure. But I understand that this can be due to a tuning problem.

---

Last points of improvement:
- Run more than 5 seeds and report proper confidence intervals
- Appendix A title has a spelling error 'Implementaion'

---

Reply to Comment 1.1.1: Comment: We address your main points below:

**Question on extra PPO results batch-sizes**: We hypothesize that the drop in performance with large batch sizes stems from two factors:
- Tree-level variance trade-off: Larger batch sizes mean more diverse data per tree, which can challenge split quality at fixed depths (we used depth = 4 for all experiments).
- Fewer trees per rollout: Since each batch creates a single tree, larger batches result in fewer added trees per rollout.

To further investigate this, we ran an additional ablation study, where we fixed the batch size and increased the number of epochs per rollout. This increases the number of trees (updates) without changing the batch size.
Results can be found in the following link: https://pasteboard.co/VBqZXlhOScsg.png (the runtime axis has been zoomed to focus on initial convergence). We observe that when keeping the total number of updates per rollout constant:
- **More PPO epochs lead to faster convergence**. The policy reaches a higher score with the same number of environment interactions (higher sample efficiency).
- **Increasing the number of epochs at larger batch sizes mitigates the low sample efficiency seen before**. For example, the following three experiments construct the same number of trees per rollout (32) and exhibit roughly the same convergence rates in terms of sample efficiency: $\frac{2048}{64} \times 1$ (blue), $\frac{2048}{512} \times 8$ (red), $\frac{2048}{1024} \times 16$ (purple). However, larger batches increase runtime due to greater per-tree sample complexity.

**Regarding the critique on NNs (lines 350-353)**: We agree and will implement this suggestion.

**Why do certain algorithms perform differently than others?** We’ll add a new section in the appendix summarizing the differences between RL algorithms and how these affect tree construction.

**Regarding specific questions**: We will include our response to question (1) in the supplementary, along with either a citation or a note about the stable-baselines3 defaults.
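For reference, the tree-count accounting used in the comparison above can be written as a one-line helper (illustrative only; the parameter names are ours, styled after stable-baselines3 hyperparameters):

```python
# Illustrative helper: GBRL fits one tree per minibatch gradient step, so
# the number of trees added per rollout follows directly from the rollout
# length, the batch size, and the number of PPO epochs.
def trees_per_rollout(rollout_len, batch_size, epochs):
    return (rollout_len // batch_size) * epochs

# The three configurations discussed above each add 32 trees per rollout:
configs = [(2048, 64, 1), (2048, 512, 8), (2048, 1024, 16)]
counts = [trees_per_rollout(*c) for c in configs]  # [32, 32, 32]
```

Holding this product fixed while varying batch size isolates the per-tree sample complexity, which is the runtime trade-off noted above.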
Summary: The paper proposes GBRL, a framework that adapts gradient boosting trees (GBTs) to reinforcement learning (RL) tasks. Recognizing that neural networks (NNs) often struggle with structured and categorical features, the authors leverage the natural strengths of GBTs—namely, their ability to handle heterogeneous data and capture abrupt transitions in decision boundaries. Furthermore, to adapt GBTs to the dynamic nature of reinforcement learning, the paper designs the GBRL framework to continuously interleave tree construction with environment interaction. The paper presents extensive experiments on a range of tasks, demonstrating that GBRL can outperform or match NN-based approaches in settings where the input features are naturally structured, while also showing increased robustness to out-of-distribution states, signal dilution, and spurious correlations. Claims And Evidence: yes. Methods And Evaluation Criteria: yes. Theoretical Claims: yes (no proofs). Experimental Designs Or Analyses: yes, the experiments are sound. Supplementary Material: yes, extensive experimental results. Relation To Broader Scientific Literature: The paper encourages the exploration and use of gradient boosting tree algorithms in the domain of deep reinforcement learning. Essential References Not Discussed: no. Other Strengths And Weaknesses:

**Strengths**:
* The proposed GBRL framework was shown to outperform traditional PPO methods implemented in Stable Baselines in many tasks.
* Comprehensive experiments are conducted to support the claims of the authors, providing many valuable insights.
* The paper is well-written and sound.

**Weaknesses**:
* A significant limitation is the unbounded growth of the ensemble as the policy improves. While the paper discusses potential solutions (e.g., tree pruning or ensemble compression) for future work, this issue might affect long-term or real-time deployment and is not resolved in this paper.
* Although the experiments are comprehensive, additional ablations on hyperparameter sensitivity and design choices (such as the impact of learning rates for policy vs. value functions) would strengthen the empirical claims.
* Though comprehensive experiments are conducted, the paper lacks in-depth theoretical analysis to further support the strength of GBT concretely.

Other Comments Or Suggestions: . Questions For Authors: . Ethical Review Concerns: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive assessment of our paper, recognizing the comprehensive experiments and the value of our approach for structured data in RL. We address each of your concerns below and will revise the paper accordingly.

**Regarding the "unbounded growth of the ensemble as the policy improves"**: You raise a valid concern about the scalability of our approach. In our current implementation, ensemble size does increase with training steps. There are several possible approaches to address this limitation:
- **Tree Pruning**: One can reduce memory requirements by pruning redundant or low-importance splits after training.
- **Ensemble Compression**: One can distill knowledge from large ensembles into smaller, more efficient tree structures.
- **Adaptive Growing Strategy**: Rather than adding trees at fixed intervals, one can add them selectively based on performance plateaus or gradient magnitudes.

While we briefly mentioned these approaches in the paper, we will expand the limitations section to further explain them as future research directions.

**Regarding "additional ablations on hyperparameter sensitivity"**: This is an excellent suggestion. We performed these experiments, provide them below, and will add them to the paper. Specifically, we propose to add an ablation study of GBRL-based PPO in the appendix examining:
- **Learning rate ablation for policy and value function components**: Here, we keep one fixed while changing the other. Similar to training with neural network estimators, we observe that large learning rates harm stability, whereas low learning rates result in very slow convergence. The results can be seen in the following links: https://pasteboard.co/PxZtndOMFuzZ.png, https://pasteboard.co/2W5UT93RBNrn.png.
- **Tree depth limitations**: This ablation shows that tree depth has a large impact on convergence; however, deeper trees lead to increased runtime and compute cost.
A larger tree depth corresponds to a better gradient approximation, which may explain the faster convergence shown in these results. In the paper, we chose a maximal tree depth of 4, a value that balances performance (the ability to solve environments) with maintaining a reasonable wallclock time-to-converge. The results can be found in the following link: https://imgur.com/iu0IUsT. - **Batch size**. Using a rollout length of 2048 samples, we analyze the impact of PPO batch size on performance. We observe that batch size significantly impacts convergence. Specifically, smaller batches result in GBRL building more trees per rollout, improving adaptability. However, smaller batches also lead to noisier gradient estimates due to limited samples per constructed tree (blue, 16 samples per batch, starts much faster, but fails to converge to a stable solution). Conversely, larger batches stabilize training by reducing variance through averaging within leaves (more samples utilized per constructed tree), but build fewer trees per rollout (gray, 2048 samples per batch, is very stable but also very slow to converge). Hence, both excessively small and large batch sizes negatively impact performance. Results can be found in the following link: https://pasteboard.co/ENlUg1ffghZU.png. **Regarding the lack of "in-depth theoretical analysis"**: Please see our detailed response to **Reviewer 6bS7** regarding the lack of theoretical proof of convergence and handling non-stationarity. [1] Mei, Jincheng, et al. "On the global convergence rates of softmax policy gradient methods." International conference on machine learning. PMLR, 2020. [2] Cortes, Corinna, Mehryar Mohri, and Dmitry Storcheus. "Regularized gradient boosting." Advances in neural information processing systems 32 (2019). [3] Beygelzimer, Alina, et al. "Online gradient boosting." Advances in neural information processing systems 28 (2015). [4] Hu, Hanzhang, et al. "Gradient boosting on stochastic data streams." 
Artificial Intelligence and Statistics. PMLR, 2017. [5] Kersting, Kristian, and Kurt Driessens. "Non-parametric policy gradients: A unified treatment of propositional and relational domains." Proceedings of the 25th international conference on Machine learning. 2008.
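The batch-size and ensemble-growth discussion above can be illustrated with a toy sketch (all names are hypothetical and trees are stubbed as constant predictors; this is not the authors' implementation): with a fixed rollout length and one boosting tree fit per minibatch, smaller batches build more trees per rollout, which also makes concrete why the ensemble size grows with training steps.

```python
def fit_stub_tree(gradients):
    """Stand-in for fitting one boosting tree to per-sample functional gradients."""
    mean_grad = sum(gradients) / len(gradients)
    return lambda x: mean_grad  # a constant 'tree'


def train(num_rollouts, rollout_len, batch_size):
    ensemble = []
    for _ in range(num_rollouts):
        for _ in range(rollout_len // batch_size):  # one tree per minibatch
            grads = [0.1] * batch_size              # placeholder gradients
            ensemble.append(fit_stub_tree(grads))
    return ensemble


# With a 2048-sample rollout, batch size controls trees built per rollout:
small = train(num_rollouts=4, rollout_len=2048, batch_size=16)    # 512 trees
large = train(num_rollouts=4, rollout_len=2048, batch_size=2048)  # 4 trees
```

Pruning, compression, or a plateau-based adaptive growing rule would all act on the `ensemble` list that this loop grows at a fixed rate.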
Summary: The authors introduce a reinforcement learning method based on gradient-boosted trees (Gradient Boosting Reinforcement Learning, GBRL). The method consists of gradually training an additive ensemble of decision trees to output both a policy and Q-value estimates at the leaves. The loss function has two terms: a cumulative reward term for supervising the actor (i.e. policy outputs of the trees) and an L2 term for supervising the critic (which the authors don’t specify). The gradient of the cumulative reward with respect to the policy outputs can be computed using the policy gradient theorem, with the Q-value predictions being used to estimate the advantage function. The critic uses an L2 loss, whose gradient is the difference between the critic’s prediction and the target. Hence, the functional gradient can be computed from the tree’s outputs, enabling the application of the usual gradient-boosted trees approach (Section 4.1). The authors then experimentally validate GBRL, comparing it to neural network methods on: - Standard RL environments: GBRL exhibits equivalent or superior performance on grid environments, a football environment, and continuous control environments. It is inferior in the Atari RAM domain, which exposes raw system memory states in Atari games. The authors attribute this underperformance to the fact that decision trees rely on single-feature splits, while NNs can compose raw features into more informative representations. - A custom environment for assessing robustness to signal dilution: the environment consists of simple algebraic manipulations on equations whose complexity can be tuned. The rewards in this environment are sparse. GBRL converges significantly faster than NN based methods, and achieves higher reward. - A custom environment for assessing spurious correlations: the environment consists of a grid world where the agent is meant to pick up a red ball. 
The authors experiment with placing a red box at various locations near the target ball to distract the agent. They find that GBRL exhibits greater out-of-distribution performance when training on one box location and testing on other box locations. - Variations of the aforementioned environments to assess robustness to state perturbations: the authors consider (a) introducing an additional confounder object in the grid world mentioned above, to assess robustness to irrelevant information, (b) introducing Gaussian noise in classical control tasks to assess robustness to corruption/noise, and (c) varying the number of players in the football environment mentioned above, to assess robustness to missing features. In all cases, they find GBRL generalizes better than the NN baseline. The authors provide a CUDA-based implementation of their method. Claims And Evidence: I found the major claims of the paper (especially w.r.t. GBRL’s robustness) to be well-justified by the experiments. Methods And Evaluation Criteria: Yes, the proposed method makes sense for online RL. Theoretical Claims: N/A (there are no theorems, propositions, lemmas or proofs in the paper). Experimental Designs Or Analyses: Experiments are extensive, covering both standard RL environments and tasks designed specifically to assess various types of robustness. This paints a much clearer picture of GBRL’s strengths than if the authors had only evaluated it in standard benchmarks. Supplementary Material: I skimmed appendix A to better understand experimental details. Relation To Broader Scientific Literature: Prior work of Brukhim et al. (2022) (A Boosting Approach to Reinforcement Learning) had considered the application of boosting to RL, but from a theoretical perspective. In addition, Kersting & Driessens (2008) introduced Non-Parametric Policy Gradients (NNPG), which serve as a starting point for GBRL. GBRL also builds on foundational RL methodologies by using an Actor-Critic training setup. 
Essential References Not Discussed: I am not aware of any glaring omissions of references to prior work. Other Strengths And Weaknesses: Strengths: - Clear presentation of key motivations and results in the experiments section. - Extensive experiments, covering both standard RL environments and tasks designed specifically to assess various types of robustness. This paints a much clearer picture of GBRL’s strengths than if the authors had only evaluated it in standard benchmarks. - Provision of a CUDA-based library for GBRL (at least as far as the authors claimed), which should enable practitioners and researchers to more easily test GBRL in their domains of interest. - More generally, the experiments and running time considerations (appendix A.3) indicate GBRL is competitive with baseline RL methods in terms of performance and training cost, while showcasing certain robustness benefits. This expands the toolbox of practitioners when doing RL in e.g. noisy or nonstationary environments. Weaknesses: - It was not immediately clear from reading the paper what architectures are used for the baseline neural networks in the experiments section. It would help contextualize the results if the authors could be clearer about this in the main paper. - Lack of clarity in the methods section regarding (a) how the advantage in Eq. 3 is computed and (b) how exactly are the Q-value prediction heads supervised. - Assuming e.g. a temporal difference loss for the critic, I would imagine the final loss used for obtaining the tree would be different from the one presented in Eq. (4). More generally, it would be good to contextualize how multi-objective training works when dealing with trees, and any additional constraints relative to working with neural networks. 
Other Comments Or Suggestions: While there is already an inherent amount of non-stationarity in the training data when doing online RL (as changes in the policy produce changes in the training data), it would have been good to see an experiment directly comparing GBRL’s behavior under non-stationarity relative to baseline NNs. Questions For Authors: 1. What NNs are used as a baseline in each experiment? 2. Would it be possible to give more detail (e.g. via pseudocode) on (a) how exactly the advantage is computed in the policy gradient and (b) what loss is used to train the critic? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. Below we address your points, which we will also clarify in the revised paper. **Stable Baselines3 and RL Zoo3**: Our GBRL method was implemented within the Stable Baselines3 framework, a widely-used reinforcement learning library providing standard implementations for RL algorithms [1]. We compared GBRL against baseline neural networks (NNs) provided by RL Zoo3 [2], which include tuned hyperparameters for each environment, ensuring a fair and standardized evaluation. **NN architecture** : For all experiments, we used a standard MLP with two hidden layers, each containing 64 units and tanh activations. This architecture is the default in Stable Baselines3 for the environments we tested and is widely used as a baseline in RL literature. We maintained this consistent architecture across all environments to ensure fair comparison. - **Advantage computation**: We use Generalized Advantage Estimation (GAE) as proposed by Schulman et al. (2018) [3] to compute the advantage: $A_t = \delta_t + (\gamma\lambda)\delta_{t+1} + \ldots + (\gamma\lambda)^{T-t-1}\delta_{T-1}$, where $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$. The GAE parameters $\gamma$ and $\lambda$ for each environment are provided in Table 2 of the appendix. - **Critic**: We use on-policy actor-critic algorithms, where the critic predicts the value function (not action-value Q). We use a standard L2 loss between the critic's prediction and the target return estimate: $L_\text{critic}(θ) = ||V_\theta(s_t) - G_t||^2$, where $G_t$ is either a Monte Carlo return estimate or a bootstrapped n-step return, depending on the specific algorithm (PPO, A2C, or AWR). Both the GAE and target return calculations were done using built-in functions in Stable_baselines3. **Multi-objective training with GBT**: In GBRL, we compute gradients for both the actor and critic objectives per timestep. 
We then concatenate these gradients: $g_t := [g_{t, \text{actor}}, g_{t, \text{critic}}]$ and use them to build a decision tree. At each candidate node split, we evaluate a score function to decide the best split. However, since the actor and critic gradients can differ in magnitude, this score can become biased toward one objective. To address this, we explored two strategies: - Gradient normalization per output dimension with L2-based split scoring. - Cosine similarity, which emphasizes gradient direction over magnitude. Both approaches are supported in our codebase and produce good results, while the cosine similarity seems to perform slightly better. The results are visualized for the Cartpole environment in the following link: https://pasteboard.co/qXY0zekTq7uM.png. We will add this result as part of an ablation study to the appendix of the final version. [1] Raffin, A., Hill, A., Gleave, A., Kanervisto, A., Ernestus, M., and Dormann, N. "Stable-Baselines3: Reliable Reinforcement Learning Implementations." Journal of Machine Learning Research 22 (2021): 1–8. [2] Raffin, A. "RL Baselines3 Zoo." GitHub repository, 2020. https://github.com/DLR-RM/rl-baselines3-zoo [3] Schulman, J., Moritz, P., Levine, S., Jordan, M., and Abbeel, P. "High-Dimensional Continuous Control Using Generalized Advantage Estimation." arXiv preprint arXiv:1506.02438 (2015).
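For concreteness, the GAE recursion described in the rebuttal above can be sketched in a few lines (an illustrative, mask-free sketch assuming a single trajectory with a bootstrap value appended; this is not the Stable Baselines3 implementation):

```python
def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation:
        delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        A_t     = delta_t + (gamma * lam) * A_{t+1}
    `values` must have length len(rewards) + 1 (it includes the bootstrap
    V(s_T)). Episode-termination masks are omitted for brevity."""
    T = len(rewards)
    adv = [0.0] * T
    last = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        last = delta + gamma * lam * last
        adv[t] = last
    return adv


# Critic regression targets are then G_t = A_t + V(s_t), matching the
# L2 critic loss ||V(s_t) - G_t||^2 described above.
```

With `gamma = lam = 1.0` the recursion reduces to a plain sum of TD residuals, which is an easy way to sanity-check an implementation by hand.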
The Generalized Skew Spectrum of Graphs
Accept (poster)
Summary: The paper generalizes the Skew Spectrum to obtain permutation-invariant and isomorphism-invariant graph embeddings. In detail, the authors propose multi-orbit spectra that can handle attributed graphs, multi-layer graphs and hypergraphs. Then $k$-Correlation Spectra are introduced to theoretically characterize the trade-off between complexity and expressivity, which is further extended to Doubly-Reduced $k$-Spectra. The theoretical results are supported by simulation and experiments. Claims And Evidence: The claims are well supported by evidence including theoretical analysis and experiments. Methods And Evaluation Criteria: The simulation part is convincing. However, experiments on the real-world benchmarks can be strengthened. Particularly, the only real-world dataset is QM7, and evaluation on more standard benchmarks, including Zinc, QM9, ogbg-molhiv/molpcba, and the Long Range Graph Benchmark, is strongly encouraged to validate the practical performance and scalability of the method. Moreover, the base models (e.g., XGB and Linear) do not include state-of-the-art GNNs, which is questionable. How to integrate (doubly-reduced) $k$-spectra into existing GNNs, and what is the effectiveness? Theoretical Claims: I checked the correctness and did not find major flaws. Experimental Designs Or Analyses: See Methods And Evaluation Criteria part. Supplementary Material: I reviewed the whole supplementary material. Relation To Broader Scientific Literature: The paper is related to literature in mathematics, including group theory, harmonic analysis, and the graph isomorphism problem. Essential References Not Discussed: The paper lacks discussions on established work for graph invariants/expressivity [1, 2, 3]. It would also be better (but not necessary) to include some machine learning papers based on group theory [4, 5]. [1] Zhang, B., Zhao, L., & Maron, H. (2024). On the expressive power of spectral invariant graph neural networks. *arXiv preprint arXiv:2406.04336*. 
[2] Gai, J., Du, Y., Zhang, B., Maron, H., & Wang, L. (2025). Homomorphism Expressivity of Spectral Invariant Graph Neural Networks. *arXiv preprint arXiv:2503.00485*. [3] Zhou, C., Wang, X., & Zhang, M. (2023, July). From relational pooling to subgraph gnns: A universal framework for more expressive graph neural networks. In *International Conference on Machine Learning* (pp. 42742-42768). PMLR. [4] Batatia, I., Geiger, M., Munoz, J., Smidt, T., Silberman, L., & Ortner, C. (2023). A general framework for equivariant neural networks on reductive Lie groups. *Advances in Neural Information Processing Systems*, *36*, 55260-55284. [5] Dehmamy, N., Walters, R., Liu, Y., Wang, D., & Yu, R. (2021). Automatic symmetry discovery with lie algebra convolutional network. *Advances in Neural Information Processing Systems*, *34*, 2503-2515. Other Strengths And Weaknesses: This paper is valid in theory and is well written. It might be better for the authors to include a more detailed introduction to group theory, graph invariants, etc., to improve readability for readers less familiar with this specific topic. Other Comments Or Suggestions: N/A Questions For Authors: * Can you connect the framework with some existing GNN expressivity framework, such as the WL hierarchy? * Can you elaborate on more large-scale/real-world datasets and more state-of-the-art models? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **References:** Thank you for the references, we will include them in our manuscript. **Introduction:** More introductory material would help reach a broader audience, but the page limit presents a substantial challenge. Upon acceptance, we will consider both leveraging the extra page to revise the introductory material and extending the appendices to include a broader discussion on graph invariants, GNN expressivity, and group theory. **WL hierarchy:** We thank the reviewer for the great suggestion. Relating our (general non-reduced) k-Spectra (kSP) to the WL hierarchy is very interesting, since understanding this relationship helps clarify the use of kSP in more advanced learning algorithms like GNNs. Preliminary analysis shows that kSP provides an alternative expressivity framework, since k-WL tests look at nodes and distinguish them based on having certain neighbors; kSP counts different edge structures (see Intuition in Sec. 4). To investigate the relationship further, we extended our experiments on Doubly-reduced k-Spectra (DRkSP) to show that they are neither strictly less powerful than 1-WL nor strictly more powerful. Indeed, for undirected graphs of 7 nodes, we have that the concatenation of DRkSP for different values of k forms a complete invariant, while 1-WL tests, even with 50 iterations, report 22 collisions. On the other hand, on a dataset made of chordal graphs, 1-WL tests make 2 collisions and DRkSP 10. We updated our Figure 5 to report these results: https://anonfile.io/f/LRH87kqC **GNNs:** Most Message Passing Neural Networks (MPNN) architectures are at most as expressive as 1-WL tests [1, 2], while our previous experiment shows that DRkSP can be more powerful than the 1-WL test. Corollary 3.14 of your second reference states that Spectral invariant GNNs can count cycles and paths up to 7 vertices, but non-reduced kSP, for high enough k, also counts paths and cycles on more than 7 vertices, see Intuition in Sec. 4. 
This suggests that DRkSP and further heuristics on kSP could be used to improve the expressivity of GNNs. We came up with three ways to embed kSP into GNNs: - Precompute the whole-graph embedding with kSP and concatenate it to the GNN layer that outputs the graph embedding, just before the fully connected layer that computes the prediction. - Compute subgraph invariants in the node aggregation step using kSP. - Compute subgraph invariants in the pooling layer using kSP, instead of other invariants like sums or max of all node labels. In all the three cases, we could introduce some learnable parameters, such as weights in the direct sum terms of our Eq. 8 to tune the relevance of the orbits. We will include this discussion in the manuscript. To prove the point, we extend our experiments and report preliminary results. Following the first approach, we extend the HGP-SL architecture [3] and test on PROTEINS, using their code and filtering for graphs having between 3 and 30 vertices. We try a dropout rate of 0.5 and 0.2, use a learning rate of 0.001 and a batch size of 32, training the model for a maximum of 1000 epochs and using early stopping on validation loss. The results show that the DRkSP improves the learning capability and that both higher correlations and multiple orbits help: https://anonfile.io/f/yqPzRNyf **Real-world datasets and scalability:** To address the concern on scalability and practicality, in addition to our previous experiment on QM7, we added some preliminary experiments on two suggested real-world datasets: QM9 and ZINC. These two datasets contain 130,831 and 249,456 molecules, respectively. We processed these datasets on a desktop, using our non-optimized Python implementation. This experiment extends Table 1 to the two datasets: https://anonfile.io/f/Y7HOKwqq For QM9 we used the first 100,000 molecules for training and the remaining 30,831 for testing. 
The 3-orbits include the adjacency matrix, the sum of the 4 edge attributes, and the 6th node feature. For ZINC, we used 220,011 molecules for training and 5000 for testing. The 3-orbits include the adj. matrix, the edge attributes, and the node features. The multi-orbit 3SP reports comparable results to the baselines, improving over the single-orbit on QM9. These experiments help prove the scalability of the approach. We are positive that a more thoughtful use of our kSP (including higher correlations, non-reduced versions, and further tuning) will lead to more competitive results. We remain available for further discussions/clarification. [1] Morris, Christopher, et al. "Weisfeiler and leman go neural: Higher-order graph neural networks." Proceedings of the AAAI conference on artificial intelligence. Vol. 33. No. 01. 2019. [2] Xu, Keyulu, et al. "How Powerful are Graph Neural Networks?." International Conference on Learning Representations. [3] Zhang, Zhen, et al. "Hierarchical graph pooling with structure learning." arXiv preprint arXiv:1911.05954 (2019). --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. The theoretical results that DRkSP is incomparable with 1-WL is interesting. The additional experiments are appreciated, and it would be better if the authors can try to integrate DRkSP with existing GNNs for all datasets in their camera-ready version. I have raised my scores. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their thoughtful comments and for recognizing the theoretical contribution regarding the incomparability of DRkSP with 1-WL. We appreciate their suggestion and are grateful for the improved scores.
Summary: This paper proposes a family of permutation-invariant graph embeddings, which generalizes the Skew Spectrum of graphs introduced by Kondor & Borgwardt (2008). Grounded in group theory and harmonic analysis, the method introduces a new class of isomorphism-invariant graph invariants that can embed more complex graph structures, including attributed graphs, multilayer graphs, and hypergraphs, which the original Skew Spectrum could not accommodate. The paper further defines a family of functions offering a tradeoff between computational complexity and expressivity, and demonstrates an improvement in expressiveness without additional computational cost through generalization-preserving heuristics. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: yes Essential References Not Discussed: No Other Strengths And Weaknesses: 1. The author claims that there is no increase in computational cost and has conducted an analysis on computation and complexity. However, there is no visualization comparing the complexity with existing algorithms. 2. The structure of the article is confusing. It spends a large amount of space introducing basic concepts, such as self-loops, node features, and edge features, etc. However, the experiments are extremely insufficient, with almost no experimental results presented except for Figure 5. Other Comments Or Suggestions: None Questions For Authors: 1. The author claims that there is no increase in computational cost and has conducted an analysis on computation and complexity. However, there is no visualization comparing the complexity with existing algorithms. 2. The structure of the article is confusing. It spends a large amount of space introducing basic concepts, such as self-loops, node features, and edge features, etc. 
However, the experiments are extremely insufficient, with almost no experimental results presented except for Figure 5. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Computational cost:** Benchmarking the k-Spectra (kSP) and their Doubly-Reduced (DRkSP) version would be extremely interesting. However, a fair comparison requires a substantial amount of work, which has to be covered in future research. Fair benchmarking requires using the same programming language, amount of preprocessing, and hardware. While many existing algorithms leverage C implementations and GPUs or TPUs, our current implementation of kSP is in Python, deserves further optimization, and does not fully leverage the high parallelization opportunities of kSP, given by matrix sums, products and tensors. To perform the benchmarking, one would need to optimize for proper scaling and performance. While a full benchmarking is beyond the scope of this paper, our claim that the single-orbit DRkSP fully generalizes the Skew Spectrum (SkSP) without increasing its asymptotic computational cost is supported by an extremely detailed analysis and proof, which continues in Appendix C. The algorithm’s complexity is O(n^3). To get an idea of the scaling, this is the same as a full eigenvalue decomposition of an nxn adjacency matrix. More concretely, in the rebuttal period, even with a non-optimized implementation, we computed 3-orbit kSP of QM9 (130,831 molecules) and ZINC (249,456 molecules) on a desktop PC, confirming scalability. **Paper structure:** We appreciate the feedback on structure and we are sorry it can seem confusing. We structured our paper this way: Introduction + SkSP Background (3 pages), Multi-orbit + Higher correlations + heuristics (2.5), Computation and complexity (1 + Appendix), Experiments and Conclusions (1.5). We want to stress that from page 3 onwards, the article only presents our own contributions. Sec. 2 Background discusses the minimum group theoretic background to understand the original SkSP [1]. 
We believed that introducing these concepts was necessary for a broader audience, who might not necessarily be familiar with graph invariants, harmonic analysis on the symmetric group, and the Skew Spectrum. Indeed, Reviewer 3 (VVhY) suggests expanding the introductory material even further. About self-loops, node features, and edge features, we believe there is a misunderstanding. We assume that the reader is familiar with these concepts and never introduce them. The homonymous paragraphs in Section 3 describe, extremely concisely, how to encode these structures in our multi-orbit extension, which is part of the contribution. Our understanding is that the reviewer expected more pages for the experimental evaluation, as is common in many ML papers. However, our structure is that of a theoretical paper, with several formal statements, analysis, proofs, and experiments that showcase the theoretical results. We believe that similar papers are usually welcome at ICML. In any case, upon acceptance, we could expand the experimental section. **Experiments:** We acknowledge that the experiments can be improved (and we are currently working on this), but respectfully disagree with characterizing them as extremely insufficient for our purpose. The current experiments confirm and illustrate our theoretical findings: - Eigenvalue collision shows the problem of multiple orbits and how simple eigenvalue invariants fail to address them. - The synthetic-graphs experiment shows how multi-orbit improves over the previous SkSP. - Table 1 shows that multi-orbits can offer more expressive representations than eigenvalue invariants and the SkSP on real-world datasets like QM7. - Figure 5 shows the expressive power of DRkSP, illustrating advantages over SkSP and graph spectral-eigenvalue methods. We further improved our experiments in response to feedback. 
In particular, we investigated the relationship between DRkSP and 1-WL tests, showing that DRkSP is neither strictly less powerful nor strictly more powerful than 1-WL (see https://anonfile.io/f/LRH87kqC). This suggests that kSP could improve the expressivity of standard GNNs, which are known to be at most as powerful as 1-WL [1, 2]. Indeed, a preliminary experiment on PROTEINS shows this is the case: https://anonfile.io/f/yqPzRNyf Further experiments on QM9 and ZINC in the setting of Table 1 validate the scalability of the proposed method, with the multi-orbit representation offering comparable results to other representations: https://anonfile.io/f/Y7HOKwqq More thoughtful experimental settings could show further advantage of multi-orbits on real-world datasets. Please check the replies to other reviewers for more information. We hope this rebuttal addresses the reviewer’s concerns and are happy to contribute with further information otherwise. [1] Morris, Christopher, et al. "Weisfeiler and leman go neural: Higher-order graph neural networks." Proceedings of the AAAI conference on artificial intelligence. Vol. 33. No. 01. 2019. [2] Xu, Keyulu, et al. "How Powerful are Graph Neural Networks?." International Conference on Learning Representations.
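For readers less familiar with the baseline in this comparison, the 1-WL test is just iterated colour refinement; a minimal sketch (hypothetical helper, unrelated to the authors' codebase) is:

```python
def wl_colors(adj, iters=50):
    """1-WL colour refinement on an adjacency list. Returns the sorted
    multiset of stable colours, usable as an (incomplete) graph invariant:
    equal outputs mean 1-WL cannot distinguish the graphs."""
    n = len(adj)
    colors = [0] * n
    for _ in range(iters):
        # Each node's signature: own colour + multiset of neighbour colours.
        sigs = [(colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in range(n)]
        relabel = {s: i for i, s in enumerate(sorted(set(sigs)))}
        new = [relabel[s] for s in sigs]
        if new == colors:  # stable colouring reached
            break
        colors = new
    return tuple(sorted(colors))


# Classic 1-WL collision: the 6-cycle and two disjoint triangles are both
# 2-regular, so colour refinement never separates them.
C6 = [[1, 5], [0, 2], [1, 3], [2, 4], [3, 5], [4, 0]]
two_C3 = [[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]]
```

Here `wl_colors(C6) == wl_colors(two_C3)` even though the graphs are non-isomorphic, which is the kind of collision counted in the 7-node experiment above.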
Summary: In this paper, the authors extend the Skew Spectrum-based graph representation method to handle rich graph structures and achieve preferable computational efficiency. The idea is interesting. Claims And Evidence: A substantial amount of theoretical analysis is provided. The exposition of the paper is easy to follow. Methods And Evaluation Criteria: The setting is rational to me. Theoretical Claims: I am not familiar with the theoretical analysis in the paper, so I am sorry that I am not able to appreciate the analysis in this part. Experimental Designs Or Analyses: In the experiments, the authors demonstrate the effectiveness of the Multi-Orbit Embeddings and the Higher-Order Correlations, and their advantage over the classic Skew Spectrum. More experiments are encouraged to illustrate the advantage of the proposed algorithm over message-passing neural networks. Supplementary Material: Yes, I have read the supplementary material. Relation To Broader Scientific Literature: See summary. Essential References Not Discussed: See summary. Other Strengths And Weaknesses: I have some concern over the advantage of giving each node in a graph an index. In this paper, the authors claim that they have the advantage of “expressivity theoretical guarantees” and better generalization ability to richer graph structures like attributed graphs, multilayer graphs, and hypergraphs. What can the theoretical guarantee be used for? Also, there are many advanced algorithms that are proposed to handle attributed graphs, multilayer graphs, and hypergraphs; can you discuss further how the proposed method compares with those kinds of algorithms? Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Message-Passing Neural Networks:** We stress that our main claim is not to improve message-passing neural networks (MPNN), but rather the Skew Spectrum of graphs. However, we think these methods can improve MPNNs and report evidence in the answers. Most MPNN architectures aren’t more powerful than Weisfeiler-Lehman (WL) tests, specifically 1-WL [1, 2]. We start from this and include two other experiments: - We investigate the relationship between k-Spectra (kSP) and WL tests. On a theoretical level, the two appear to be different: k-WL tests look at nodes and distinguish them based on having certain neighbors; k-correlations count different edge structures (see Intuition in Sec. 4). This suggests that kSP provides an alternative expressivity evaluation framework. We extended the experiments in Fig. 5 to show that Doubly-Reduced kSP (DRkSP) can be more and/or less expressive than 1-WL: DRkSP is complete on graphs of 7 vertices, while 1-WL is not, but performs slightly worse on chordal graphs. New Fig: https://anonfile.io/f/LRH87kqC - Since DRkSP can be more expressive than 1-WL, it can be combined with GNNs to enhance their expressivity. We integrate DRkSP by concatenating the embedding with the GNN output, before a fully connected prediction layer. We use the HGP-SL model of [3], trained on PROTEINS with molecules between 3 and 30 nodes. The results show that the DRkSP improves the learning capability and that both higher correlations and multiple orbits help: https://anonfile.io/f/yqPzRNyf **Theoretical Guarantees:** We are unsure of what you mean by “giving each node of the graph an index”. We extend the Skew Spectrum in two ways: enabling more complex graph structures through multi-orbits; increasing the expressivity with higher correlations. An explanation of why the extensions improve expressivity is in both Section 3 and 4. The theoretical guarantees and the mathematical structure of the invariant guarantee interpretability. 
The invariant is not a black box. Having an explicit mathematical representation of the model allows determining why an algorithm made a specific choice, which is crucial, for instance, in cyber-physical and medical applications. The theoretical guarantees set expectations on the framework. For instance, the embedding invariance (Theorem 4.3) implies that the invariant will not assign distinct representations to isomorphic graphs. Theorems 4.5 and 4.6 clarify which input adds information about graph isomorphism to the embedding. The intuitions in Sec. 3 and 4 explain how the kSP extends expressivity and help reason about WL tests and GNNs before experimenting. Other theorems detail the running time, guaranteeing a bounded execution time and setting expectations on the scalability of this method. If you consider it necessary, we can clarify this further in the revised manuscript. **Other algorithms:** Most of the modern approaches are either based on MPNNs [1] or on Spectral invariant GNNs [2]. We already discussed the relationship between MPNNs, WL and kSP above. About Spectral invariant GNNs, Corollary 3.14 of Reviewer 3’s second reference states that Spectral invariant GNNs can count cycles and paths up to 7 vertices, but non-reduced kSP, for high enough k, can count paths and cycles on more than 7 vertices, see Intuition in Sec. 4. This suggests that DRkSP and further heuristics on kSP could be used to improve the expressivity of such alternative methods. We can come up with three ways of integrating the kSP with GNNs: - Precompute the kSP and concatenate it to the GNN layer that outputs the graph embedding, just before the fully connected layer that computes the prediction (see our HGP-SL experiment). - Compute subgraph invariants in the GNN node aggregation step using kSP. - Compute subgraph invariants in the GNN pooling layer using kSP, instead of other invariants like sums or max of all node labels. 
In all three cases, we could introduce learnable parameters, such as weights in the direct-sum terms of our Eq. 8, to tune the relevance of the orbits. We will include this discussion in the manuscript. We thank the reviewer for their kind feedback and hope that our answer addresses their concern. We remain available for further questions. [1] Morris, Christopher, et al. "Weisfeiler and Leman go neural: Higher-order graph neural networks." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019. [2] Xu, Keyulu, et al. "How Powerful are Graph Neural Networks?" International Conference on Learning Representations. 2019. [3] Zhang, Zhen, et al. "Hierarchical graph pooling with structure learning." arXiv preprint arXiv:1911.05954 (2019). [4] Lu, Q., Zhou, Z., & Wang, Q. (2024). Multi-layer graph attention neural networks for accurate drug-target interaction mapping. Scientific Reports, 14(1), 26119. [5] Feng, Yifan, et al. "Hypergraph neural networks." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019.
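Of the three integration routes listed in this rebuttal, the first (precompute the invariant and concatenate it with the graph embedding before the prediction layer) is simple enough to sketch. The toy example below is our illustration, not the authors' code: the kSP embedding and the GNN's pooled graph embedding are stand-in random vectors, and all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a precomputed kSP invariant of size 16 and a
# GNN graph embedding of size 32 (both stand-in random vectors here).
ksp_embedding = rng.standard_normal(16)   # precomputed, stays fixed
gnn_embedding = rng.standard_normal(32)   # output of the GNN's pooling layer

# Concatenate the two representations before the final prediction layer.
joint = np.concatenate([gnn_embedding, ksp_embedding])

# Fully connected prediction layer (binary graph classification).
W = rng.standard_normal((2, joint.shape[0]))
b = np.zeros(2)
logits = W @ joint + b
prediction = int(np.argmax(logits))
print(joint.shape, prediction)
```

In a real model, `W` and `b` would be the learnable weights of the final fully connected layer, trained jointly with the GNN while the precomputed kSP features stay fixed.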
RWKVQuant: Quantizing the RWKV Family with Proxy Guided Hybrid of Scalar and Vector Quantization
Accept (poster)
Summary: This paper proposes RWKVQuant, a method for post-training quantization (PTQ) of RWKV. They propose a hybrid method combining both scalar and vector quantization, and a decision rule for assigning different layers to these two methods. For appropriate values of hyperparameters (such as $\tau_c$ and $\tau_f$), they demonstrate strong performance on a wide range of benchmarks. Claims And Evidence: The claims are supported by clear evidence, but the setting of hyperparameters needs to be addressed more carefully. Methods And Evaluation Criteria: The benchmark datasets make sense for the problem at hand. However, many more sensitivity analyses are needed for the setting of hyperparameters. Theoretical Claims: Section 3 is not so much a theoretical claim, but rather an explanation for the development of a heuristic that the authors then use. Experimental Designs Or Analyses: I did not check the experimental designs. Supplementary Material: I reviewed Appendix A.5, which discussed the limitations of the work, in particular the difficulty of setting the hyperparameters. Relation To Broader Scientific Literature: PTQ of recurrent models has been much more difficult than that of Transformers, likely because recurrent computation allows errors to compound. It is very impressive that RWKVQuant is able to attain such strong performance in light of the known difficulty of this problem in the literature. Essential References Not Discussed: I am not aware of essential references not discussed. Other Strengths And Weaknesses: **I am very concerned about the setting of hyperparameters in this paper**. The hyperparameters I'm most concerned about are $\tau_c$ and $\tau_f$, as well as $K$ from equation 17.
I would like to see a detailed section in the Appendix discussing in detail all decisions about how to set these hyperparameters in the different experiments, and detailed ablation studies / sensitivity analyses showing what happens to the performance of RWKVQuant as different values for these hyperparameters ($\tau_c$, $\tau_f$, and $K$) are used. Appendix A.5 is an OK start, but it should have been explicitly pointed to in the main section. There needs to be a sentence somewhere in Section 3 saying that "the performance of RWKVQuant can vary depending on the setting of $\tau_f$ and $\tau_c$, see Appendix A.5". I am also concerned by the admission of the following: > Therefore, in future work, we will further explore how to determine appropriate values for $\tau_c$ and $\tau_f$. In my opinion, the derivations in Section 3 (Method) are not sufficiently principled to warrant publication on their own, and so the merit of this work is based on the strong practical and empirical performance of RWKVQuant in Section 4 (Experiments). However, **a practical method is only useful if its hyperparameters can be set robustly.** I cannot recommend publication of this work until the bullet points laid out by the authors at the end of Appendix A.5 are satisfactorily addressed. I believe the answers are vital to widespread use of this method. Other Comments Or Suggestions: * The short title has not been changed from "Submission and Formatting Instructions for ICML 2025" * typo: line 097: "hight-order" * typo: line 066 (second column): "we propose RWKVQuant, which **enhance** VQ..." * typo: line 088 (second column): "the current word $x_t$ **and** can be derived by..." * typo: line 202: "dose" should be "does" Questions For Authors: 1. > For both vision and language tasks, we select 128 samples from the corresponding test datasets for calibration 1a. Do you think that choosing the samples from the **test** dataset corresponds to a slight form of test set leakage?
I would like to see an ablation where the samples are chosen from a different dataset (or at least where those 128 samples that are used for calibration are not used to monitor performance). 1b. To clarify: does "calibration" here mean the procedure for setting $\tau_c$ and $\tau_f$ such that 90% of the layers use SQ and 10% of the layers use VQ? I would like to see an ablation where this percentage of layers using SQ vs VQ is changed and the performance of RWKVQuant is tracked. 2. Lines 316-7 (second column) say > For example, in RWKV7, $\tau_c$ is set to 1.54, while $\tau_f$ is set to 30. However, Figure 5 is constructed > Under the settings of $\tau_c=1.5$ and $\tau_f=50$. Why is there this discrepancy? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable time and efforts in reviewing our manuscript. We have addressed each comment and made the necessary revisions to improve the quality and clarity of our manuscript. >Concerned about the setting of hyperparameters. 1. For hyperparameters $\tau_c$ and $\tau_f$: We have explicitly added the sentence "the performance of RWKVQuant can vary depending on the setting of $\tau_c$ and $\tau_f$, see Appendix A.5" to our latest manuscript after line 274 (first column). Actually, $\tau_c$ and $\tau_f$ are automatically set for each individual model and do not require adaptation. Specifically, we obtain their values in the following steps: - Compute the coarse-grained proxy $P_c$ (Eq. 15) for each layer to be quantized. - Set $\tau_c$ to the value at the 50th percentile of all $P_c$. - Compute the fine-grained proxy $P_f$ (Eq. 17) for each layer whose $P_c$ < $\tau_c$. - Set $\tau_f$ to the value at the 20th percentile of all $P_f$. Although fine-tuning the percentile values for each network may further improve accuracy, the percentile values (i.e., 20% and 50%) used in all our experiments already deliver strong performance across all RWKV networks, as shown in Table 2 of the manuscript. We have also included more results in various configurations in our latest manuscript. Here is an example of RWKV-6-1B in [Table R1](https://anonymous.4open.science/r/RWKVQuant-E375/Table_for_ablation_study.MD). 2. K represents the order of the Taylor expansion; we set it to 4 for all models. A larger K gives a result closer to the actual value (i.e., K=+∞). Experimentally, we find that K=4 is an ideal trade-off between accuracy and quantization cost under various $\tau_c$ and $\tau_f$. For instance, see RWKV-6-1B in [Table R2](https://anonymous.4open.science/r/RWKVQuant-E375/Table_for_ablation_study.MD). >Writing Errors. Thanks for pointing them out. We have fixed them in our latest manuscript.
>Concerns about calibrating from the test dataset. 1. RWKVQuant does not include any training process. All samples that are used for calibration are not used to monitor performance. They are only used to generate activations, rather than for supervision. 2. For a fair comparison, we randomly sample 128 sequences from LAMBADA, following previous work like GPTQ [1] and GPTVQ [2]. In addition, we evaluate on various datasets that are not used for calibration, as shown in Table 2 in our manuscript. To further address your concern, we provide [Table R3](https://anonymous.4open.science/r/RWKVQuant-E375/Table_for_ablation_study.MD), where calibration is performed using only a small subset (128 samples) of the training dataset under the same conditions. The results indicate that our approach still maintains an advantage. >Does "calibration" here mean the procedure for setting $\tau_c$ and $\tau_f$ such that 90% of the layers use SQ and 10% of the layers use VQ? Calibration and proportion are two distinct concepts. Calibration is a common practice in PTQ, which samples data and feeds it to the model to generate activations. Making use of such activations, PTQ can optimize the weight quantization [1,2]. As for the proportion of SQ and VQ, it is fixed to our default configuration in this work, as described in the first reply. >I would like to see an ablation where this percentage of layers using SQ vs VQ is changed and the performance of RWKVQuant is tracked. Please refer to [Table R1](https://anonymous.4open.science/r/RWKVQuant-E375/Table_for_ablation_study.MD). >Why are the settings of $\tau_c$ and $\tau_f$ not the same? The values of $\tau_c$ and $\tau_f$ can vary across different models. This variation occurs because these values are dictated by the percentile of the proxy specific to each individual model. References: [1] GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers. [2] GPTVQ: The Blessing of Dimensionality for LLM Quantization.
--- Rebuttal Comment 1.1: Comment: Thank you for your additional results on setting hyperparameters. I am satisfied that the method is reasonably robust. Please include all tables from rebuttals in the final manuscript. I am glad that you are also including an explicit pointer to Appendix A.5. I am raising my score to a 3. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your constructive comments and valuable suggestions. We will include all the tables from rebuttals in the final manuscript. We are glad to address all your questions and sincerely appreciate your recognition. Best, Authors
Summary: This paper aims at reducing the memory usage and inference latency of RWKV through post-training quantization (PTQ) techniques. However, authors find that non-linear operators and a larger amount of uniformly distributed weights hinder the effectiveness of previous PTQ methods. Therefore, authors propose RWKVQuant, which quantizes the weights of RWKV models to extremely low bit-widths with less than 1% accuracy loss, 2.83$\times$ memory saving, and a 2.14$\times$ acceleration rate; it includes a coarse-to-fine proxy to adaptively select weight outliers and a codebook optimization algorithm to further enhance the performance of quantized RWKV models. Claims And Evidence: Claims in the "introduction" section are supported by Figure 1&5 and Table 1. Methods And Evaluation Criteria: I've checked all theoretical and qualitative analysis and claims in this paper. See "Other Strengths And Weaknesses" part of this review for my major & minor concerns about the methodology and equation derivation. Theoretical Claims: I've checked all theoretical and qualitative analysis and claims in this paper. See "Other Strengths And Weaknesses" part of this review for my major & minor concerns about the methodology and equation derivation. Experimental Designs Or Analyses: I've checked all experimental settings, comparisons, and results in this paper. See "Other Strengths And Weaknesses" part of this review for my major & minor concerns about the experimental part. Supplementary Material: Any possible details in the supplementary material are checked. Relation To Broader Scientific Literature: All contributions are technical and all datasets used for experiments are open-sourced. Thus no key contributions of this paper relate to the broader scientific literature. Essential References Not Discussed: All necessary references are discussed. Other Strengths And Weaknesses: ## Strengths: 1.
The bottleneck of quantizing RWKV and the differences between RWKV models and Transformer-based LLMs are both clearly claimed and explained. Thus the motivation of this paper is clear and well-explained. 2. Figures are fancy and easy to follow, all legends in figures are clear, and captions provide detailed information about the corresponding figure. 3. The preliminary is concise. The process of the proposed method is clearly described, although the notation is a little confusing. 4. Experiments are extensive and prove the effectiveness of the methods proposed in this paper. ## Major weaknesses: 1. In section 2.2, authors introduce SQ and VQ, but fail to state the pros and cons of these two techniques. Therefore, I'm a little confused about why authors choose to use hybrid quantization instead of only VQ, since the outlier problem when applying SQ to RWKV is unsolvable. 2. Eq. 11 is weird, since $\sum_{i=1}^n G'_i = 1$ and $\sum_{i=1}^n \frac{1}{n} = 1$, which means Eq. 11 is always satisfied. Maybe authors mean $\sum_{i=1}^n \delta = \sum_{i=1}^n (G'_i - \frac{1}{n}) = 0$, i.e., conducting the element-wise subtraction first and then the sum. 3. Why can the $s_k$ in Eq. 15 be omitted? If so, the Taylor expansion won't be satisfied, and then $P_f(G')$ does not equal the k-th expansion of $P_c(G')$. Maybe authors can add an experimental comparison of w/ & w/o $s_k$ for double-checking and further clarification, since $s_k$ is easy to calculate. 4. It seems the fine-grained proxy is just a Taylor-expansion version of the coarse-grained one. What if we just set a smaller $\tau_c$? As shown in Figure 3 (b) & (c), if $\tau_c$ is set as 0.95 (0.94 (Fig. 3c) < 0.95 < 0.96 (Fig. 3b)) and w/o $\tau_f$, then it will also utilize VQ when the condition depicted in (b) happens. Authors should add more comparisons of different settings of $\tau_c$ and w/ & w/o $\tau_f$. 5.
The section on "codebook optimization for element-wise multiplication" seems to just apply a percentile-based clipping operation to a traditional VQ, instead of simply averaging all samples, thus addressing the outlier phenomenon. This part lacks novelty. It would be better to further explain the novelty of, and differences between, the authors' method and previous art. 6. (1) Why are the bpw settings of previous methods 3.25/3.5, while the authors' is 3.275? Is it just to ensure that SQ is used for 9/10 of the layers and VQ for 1/10? How does the performance change with different settings of the proportion of SQ and VQ? Relevant experiments and analysis are lacking. (2) Meanwhile, results in Tables 2&3 both show that the performance of the VQ method GPTVQ is much better than all other previous work; is it because VQ methods often provide better performance at the cost of latency overheads? Then authors should also compare the FLOPs and performance with different settings of the proportion of SQ and VQ. 7. Authors only provide results with bpw around 3-bit; how about other bit-widths, e.g., 2-bit & 4-bit? 8. As shown in Table 11, it seems the effectiveness of codebook optimization is much weaker with larger-scale RWKV on Wiki2. I'm curious about this phenomenon: is it because, as the model scale becomes larger, the outliers affect the performance less significantly? Maybe authors can explain this part briefly. ## Minor weaknesses: 1. Symbol conflict in Eq. 6: since subscript $i$ is used in the numerator $G_i$, the subscript in the denominator $\sum_{i=1}^n G_i$ should be a new one, which can be formulated as $\sum_{j=1}^n G_j$. 2. The explanation of superscript $k$ in Eq. 12 is missing. If I'm not missing anything, does it represent the k-th order partial derivative? 3. Symbol reuse: $m$ in section 3.1 represents the index over the number of weights, while in sections 3.2 & 2.2 it represents the shape of a tensor or the weight $\mu$. Other Comments Or Suggestions: See "Minor weaknesses" part above. Questions For Authors: See above.
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable time and efforts in reviewing our manuscript. We have addressed each comment and made the necessary revisions to improve the quality and clarity of our manuscript. >Why do the authors choose hybrid quantization instead of only VQ, since the outlier problem in applying SQ to RWKV is unsolvable? VQ and SQ are suitable for differently distributed weights. VQ excels in managing outliers but underperforms when dealing with uniform distributions. For weights without outliers, SQ can achieve better results. Thus, we are motivated to hybridize them. >Concerns of Eq. 11. Thanks for pointing it out. We have fixed this problem in our latest manuscript. >Why can the $s_k$ in Eq. 15 be omitted? Eq. 15 is an equivalent expression of the Taylor expansion, but Eq. 17 is obtained by applying an absolute-value operation to Eq. 15. Thereby $s_k$ can be omitted. We explain the reason for this operation in step 5 of the fine-grained proxy (lines 241-245). >Concerns on the fine-grained proxy. The fine-grained proxy is not just a Taylor-expansion version of the coarse-grained one. It extracts outlier features by applying an absolute-value function to specific terms of the Taylor expansion, as stated in step 5 of the fine-grained proxy (lines 241-245). >Authors should add more comparisons of different settings of $\tau_c$ and w/ & w/o $\tau_f$. Actually, $\tau_c$ and $\tau_f$ are automatically set for each individual model and do not require adaptation. Specifically, we obtain their values in the following steps: - Compute the coarse-grained proxy $P_c$ (Eq. 15) for each layer to be quantized. - Set $\tau_c$ to the value at the 50th percentile of all $P_c$. - Compute the fine-grained proxy $P_f$ (Eq. 17) for each layer whose $P_c$ < $\tau_c$. - Set $\tau_f$ to the value at the 20th percentile of all $P_f$.
Although fine-tuning the percentile values for each network may further improve accuracy, the percentile values (i.e., 20% and 50%) used in all our experiments already deliver strong performance across all RWKV networks, as shown in Table 2 of the manuscript. We have also included more results in various configurations in our latest manuscript. Here is an example of RWKV-6-1B in [Table R1](https://anonymous.4open.science/r/RWKVQuant-E375/Table_for_Tr_Tf_wo_f.MD). >It would be better to further explain the novelty and differences between the codebook optimization and prior methods. Firstly, our approach is designed for element-wise multiplication operations (widely applied in RWKV), instead of matrix multiplication (widely applied in Transformer-based LLMs). Compared to the matrix level, the element-wise case poses a challenge of batch integration for calibration. Secondly, we discover that the activations of RWKV follow a Gaussian distribution, as shown in Figure 4. Thus, our codebook optimization can be well applied. Thirdly, directly applying prior methods to RWKV leads to a significant accuracy decline, as shown in Table 7 of the manuscript. Our method can effectively enhance performance. >How does the performance change with different settings of the proportion of SQ and VQ? 1. VQ and SQ are suitable for differently distributed weights. VQ excels in managing outliers but underperforms when dealing with uniform distributions. For weights without outliers, SQ can achieve better results. Thus, we are motivated to hybridize them. 2. It is possible to improve accuracy by adjusting the percentiles for each network. However, this default configuration can already achieve effective results across all RWKV networks and typical tasks, as shown in Table 2 of the manuscript. >Why are VQ methods much better than all other previous work? Generally, SQ is not good at handling outliers due to its scaling characteristic, especially at aggressive bit-widths (e.g., lower than 4 bits).
In contrast, VQ has the ability to capture the distribution of the original sequence and offers an advantage in terms of compression ratio [1]. >Results on other bit-width settings. Please refer to Table R1 in the above response. >The effectiveness of codebook optimization is much weaker with larger-scale RWKV. Generally, larger-scale models are less sensitive to quantization, primarily due to their larger number of redundant parameters, as mentioned in [2,3]. Thus, the optimal quantization strategy may be less obvious for these models. >Symbol conflict in Eq. 6. Thanks for pointing it out. We have fixed this problem in our latest manuscript. >The explanation of superscript k in Eq. 12 is missing. We explain it next to Eq. 12 in lines 188-190. >Symbol reuse: m. Thanks for pointing it out. We have fixed this problem in our latest manuscript. References: [1] QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks. [2] A Survey of Quantization Methods for Efficient Neural Network Inference. [3] LLM Inference Unveiled: Survey and Roofline Model Insights.
Summary: RWKV is a modern RNN architecture that faces deployment challenges on resource-constrained devices. RWKVQuant, a post-training quantization (PTQ) framework, is proposed to address the limitations of applying existing quantization methods to RWKV models. RWKVQuant uses a coarse-to-fine proxy to adaptively select quantization approaches and optimizes codebook performance for element-wise multiplication. Specifically, RWKVQuant can quantize weights to approximately 3 bits with less than 1% accuracy loss, while providing up to a 2.14× speedup in inference. Claims And Evidence: This paper presents strong claims about RWKVQuant’s effectiveness in improving post-training quantization for RWKV models, backed by experimental results. It demonstrates weight quantization to 3 bits with less than 1% accuracy loss and a 2.14× speedup, using a hybrid SQ-VQ approach guided by Information Entropy and higher-order moments. The method shows promising performance across language and vision tasks, with reduced memory and faster inference, supported by ablation studies. However, clearer comparisons with other leading quantization frameworks and more detailed performance metrics would further strengthen the claims. Methods And Evaluation Criteria: ### Proposed Methods: 1. RWKVQuant Framework: Combines Scalar Quantization (SQ) and Vector Quantization (VQ), leveraging RWKV’s architecture for better performance. 2. Coarse-to-Fine Proxy: Uses Information Entropy to handle weight uniformity and outliers, adapting the quantization approach to the weight characteristics. 3. Codebook Optimization: Tailors codebook generation to RWKV’s element-wise multiplication modules for improved efficiency. ### Evaluation Criteria: 1. Benchmark Datasets: Uses standard datasets like LAMBADA (language) and ImageNet, COCO, ADE20K (vision) for fair comparisons. 2.
Performance Metrics: Measures perplexity (PPL) for language and Top-1 Accuracy, AP, and MIoU for vision tasks — suitable for assessing quantization impact. 3. Comparative Analysis: Benchmarks against existing SQ, VQ methods, and strong baselines to validate performance improvements. Theoretical Claims: This paper does not explicitly present formal proofs for theoretical claims; rather, it primarily focuses on empirical results and the introduction of the RWKVQuant framework. However, it does make several theoretical assertions and claims, particularly about the quantization methods and their effects on the RWKV models. Experimental Designs Or Analyses: 1. Choice of Datasets and Tasks: Uses LAMBADA (language) and ImageNet, Coco, ADE20K (vision), covering both NLP and CV tasks. 2. Quantization Methods and Baselines: Compares RWKVQuant against SQ methods (RTN, GPTQ) and VQ methods (K-Means, GPTVQ). Comparisons are broad, but ensuring equivalent configurations (e.g., bit representation, group sizes) across methods is essential for fairness. More transparency on parameter choices (e.g., bpw = 3.25/3.5) would improve clarity. 3. Performance Metrics: Uses perplexity (PPL) for language and Top-1, Box AP, MIoU for vision tasks. Discussion: Metrics are appropriate, but reporting should include mean performance across runs, standard deviation, or confidence intervals to account for random variations. Supplementary Material: 1. Structure of Time- and Channel-Mixing: RWKV’s Time and Channel Mixing mechanisms model token relationships efficiently while reducing computation and memory overhead. 2. RWKV Weight Distribution: Analyzes uniform vs. non-uniform weight distributions across layers, guiding the selection between Scalar Quantization (SQ) and Vector Quantization (VQ). 3. Compute-to-Memory Ratio: Highlights RWKV’s high memory-access reliance compared to other models, suggesting greater inference speed gains on memory-constrained devices. 4. 
Additional Results: Includes extensive experiments showing RWKV’s competitive accuracy and efficiency under different quantization settings. 5. Limitations and Future Work: Acknowledges limitations and proposes future research directions, particularly for improving quantization across diverse model configurations. Relation To Broader Scientific Literature: Post-Training Quantization (PTQ): Builds on prior PTQ methods, highlighting the poor performance of traditional SQ and VQ on RWKV, similar to challenges in other architectures. Hybrid Quantization Strategies: Introduces a hybrid SQ-VQ approach, aligning with prior research suggesting hybrid methods improve quantization performance. Model Efficiency: Focuses on efficiency for resource-constrained devices, achieving ~3-bit quantization with high accuracy, supporting findings on T-LLM deployment. Essential References Not Discussed: None Other Strengths And Weaknesses: ### Strengths: 1. Novel Hybrid Quantization Framework: RWKVQuant introduces a significant advancement by combining Scalar Quantization (SQ) and Vector Quantization (VQ) tailored specifically for RWKV models. This hybrid approach effectively addresses inefficiencies in previous methods, optimizing quantization for non-linear operators in RWKV. 2. Empirical Results: The experiments show impressive outcomes, such as less than 1% accuracy loss while quantizing RWKV-6-14B to about 3 bits and achieving a 2.14x speedup. These benchmarks validate the framework’s real-world applicability, making it highly relevant for deploying large models on resource-constrained devices. 3. Thorough Understanding of RWKV Architecture: The paper demonstrates a deep understanding of RWKV’s unique characteristics, such as Time Mixing and Channel Mixing, and tailors the quantization strategy accordingly. This approach is more effective than generic quantization methods, ensuring optimal performance. 4.
Clarity and Structure: The paper is well-organized, with a clear flow from the problem statement to the methodology, experiments, and discussions. The inclusion of figures like the accuracy-model size curve effectively communicates complex concepts, making it easy to follow for researchers and practitioners. ### Weaknesses: 1. Limited Comparison with State-of-the-Art: While RWKVQuant shows promising results, the paper would benefit from a broader comparative analysis with other advanced quantization methods, such as Adaptive Weight Quantization or mixed precision methods, to establish its relative strengths more clearly. 2. Concerns of Overfitting: The favorable experimental results raise concerns about overfitting, particularly with the LAMBADA dataset. Acknowledging this limitation and conducting experiments on more diverse datasets or employing cross-validation would provide a clearer picture of the method's generalization capability. 3. Practical Implementation Guidance: Although the methodology is clear, the paper lacks practical implementation details. Providing examples, pseudo-code, or insights into hyperparameter tuning and the application of the coarse-to-fine proxy would make the framework more accessible for practitioners. Other Comments Or Suggestions: Please refer to the weakness part. Questions For Authors: 1. Can this method be applied to activation quantization as well? 2. In Table 4, why does the speedup increase with model size? With weight quantization alone, what measures should be taken to achieve actual speedup? 3. What is the actual cost of the codebook optimization step? 4. Regarding practical implementation, could you provide specific strategies for tuning the parameters τ_c and τ_f? Were any heuristic approaches or guidelines identified during your experiments that could assist other researchers in achieving optimal performance? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable time and efforts in reviewing our manuscript. We have addressed each comment and made the necessary revisions to improve the quality and clarity of our manuscript. > The paper would benefit from a broader comparative analysis with other advanced quantization methods. 1. Directly applying SOTA methods designed for Transformer-based LLMs to RWKV models can lead to significant accuracy degradation. For example, applying QuaRot [5] to the LLaMA series leads to only a 1% accuracy drop, while that for the RWKV series is over 8%. 2. There are few works that apply quantization to RWKV models. We are the first to combine SQ and VQ, achieving effective performance on the RWKV family. >Concerns of Overfitting: The characteristic of PTQ is that it enables rapid quantization deployment with a small calibration dataset. Current SOTA methods for Transformer-based LLMs do not encounter this issue. We also care about the overfitting challenges of PTQ for RWKV. We perform experiments on nine datasets, **which are not used for calibration (i.e., zero-shot)**. As shown in Table 2 of the manuscript, RWKVQuant does not overfit. >Lack of pseudo-code: Thanks for your advice; we've added an [Algorithm](https://anonymous.4open.science/r/RWKVQuant-E375/Table_for_New_Method.MD) to our latest manuscript. >Can this method be applied to activation quantization as well? Considering that bandwidth is the main bottleneck of LLMs' inference speed, this work is put forward to tackle the challenges associated with weight quantization. Moreover, applying this quantization approach to activations represents a highly promising and valuable area of research. We intend to conduct in-depth explorations of this aspect in our forthcoming research endeavors. >In Table 4, why does the speedup increase with model size? With weight quantization alone, what measures should be taken to achieve actual speedup?
The actual speedup can be affected by various factors, such as instruction latency, memory access, computational time, optimization of the dequantization kernel, and so on. Generally, memory access is the most important of these factors, since the decode phase is typically memory-bound. As the model size increases, memory access accounts for more of the latency. Considering that weight quantization can reduce this cost, the speedup should also increase with the model size. >What is the actual cost of the codebook optimization step? The codebook optimization is a lightweight **offline** process; for instance, it only takes about 15 minutes for a 70-parameter model. >Could you provide specific strategies for tuning the parameters $\tau_c$ and $\tau_f$? Actually, $\tau_c$ and $\tau_f$ are automatically set for each individual model and do not require adaptation. Specifically, we obtain their values in the following steps: - Compute the coarse-grained proxy $P_c$ (Eq. 15) for each layer to be quantized. - Set $\tau_c$ to the value at the 50th percentile of all $P_c$. - Compute the fine-grained proxy $P_f$ (Eq. 17) for each layer whose $P_c$ < $\tau_c$. - Set $\tau_f$ to the value at the 20th percentile of all $P_f$. Although fine-tuning the percentile values for each network may further improve accuracy, the percentile values (i.e., 20% and 50%) used in all our experiments already deliver strong performance across all RWKV networks, as shown in Table 2 of the manuscript. We have also included more results in various configurations in our latest manuscript. Here is an example of RWKV-6-1B in [Table R1](https://anonymous.4open.science/r/RWKVQuant-E375/Table_for_New_Method.MD). References: [1] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. [2] GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers. [3] GPTVQ: The Blessing of Dimensionality for LLM Quantization.
[4] VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models. [5] QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs
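The automatic threshold-selection procedure described in the rebuttal above can be sketched in a few lines. This is a minimal illustration with NumPy; `proxy_coarse` and `proxy_fine` are hypothetical stand-ins for the per-layer proxies of Eq. 15 and Eq. 17, and the 50%/20% percentiles match the values quoted in the rebuttal:

```python
import numpy as np

def select_thresholds(layers, proxy_coarse, proxy_fine,
                      coarse_pct=50, fine_pct=20):
    """Automatically derive tau_c and tau_f from per-layer proxy scores."""
    # Step 1: coarse-grained proxy P_c for every layer to be quantized.
    p_c = np.array([proxy_coarse(layer) for layer in layers])
    # Step 2: tau_c is the 50th percentile of all P_c values.
    tau_c = np.percentile(p_c, coarse_pct)
    # Step 3: fine-grained proxy P_f only for layers with P_c < tau_c.
    passed = [layer for layer, p in zip(layers, p_c) if p < tau_c]
    p_f = np.array([proxy_fine(layer) for layer in passed])
    # Step 4: tau_f is the 20th percentile of the surviving P_f values.
    tau_f = np.percentile(p_f, fine_pct)
    return tau_c, tau_f
```

Since both thresholds are percentiles of model-specific score distributions, no per-model hand-tuning is needed, which is the point the rebuttal makes.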
Summary: This paper introduces RWKVQuant, a post-training quantization framework for the RWKV model family. The main contributions are: 1) revealing the limitations of existing Scalar Quantization (SQ) and Vector Quantization (VQ) methods on RWKV; 2) proposing a coarse-to-fine proxy strategy to guide the hybrid use of SQ and VQ; 3) optimizing the codebook for RWKV's unique element-wise multiplication operation. Extensive experiments show that the method can quantize RWKV-6-14B to about 3-bit with less than 1% accuracy loss while achieving 2.14x speedup. ## update after rebuttal Thank you for the author's response. After referring to the author's response and the comments of other reviewers, I decided to keep my score. Claims And Evidence: The paper's main claims are well supported by both experimental and theoretical evidence: 1. Table 1 verifies the more uniform weight distribution characteristic of RWKV through clustering loss comparison 2. Figure 3 validates the effectiveness of the proxy strategy through visualization analysis 3. Tables 2-3 demonstrate the method's superiority through comprehensive comparative experiments Methods And Evaluation Criteria: The methodology is sound and evaluation criteria are comprehensive: 1. Method design progressively develops solutions based on RWKV characteristics 2. Evaluation metrics include model performance indicators such as accuracy and perplexity 3. Considers practical metrics like memory usage and inference speed 4. Thorough validation on multiple benchmark datasets Theoretical Claims: The theoretical analysis is complete: 1. Detailed analysis of SQ and VQ limitations in RWKV 2. Complete mathematical derivation for the coarse-to-fine proxy strategy 3. Codebook optimization method designed based on element-wise multiplication characteristics Experimental Designs Or Analyses: The experimental design is comprehensive and well-structured: 1. Validation across multiple RWKV model scales 2. 
Comparison with existing mainstream quantization methods 3. Rich ablation studies 4. Both qualitative and quantitative analyses provided Supplementary Material: The supplementary materials are complete. Relation To Broader Scientific Literature: Clear articulation of relationships with existing work. Essential References Not Discussed: All major related works are discussed, with no significant omissions noted. Other Strengths And Weaknesses: **Strengths:** 1. First systematic study of RWKV model quantization. First paragraph of Introduction explicitly states this is the first comprehensive quantization framework for the RWKV family. 2. Novel and effective method design. - Table 2 shows superior performance across multiple models. - Ablation studies validate the necessity of each component 3. Rigorous theoretical analysis. - Complete mathematical derivation in Section 3 - Intuitive visualization explanation in Figure 3 4. Comprehensive experimental validation. - Validation on 7 different scale models - Comparison of multiple evaluation metrics in Tables 2-3 **Weaknesses:** 1. Lack of theoretical guidance for threshold selection in proxy strategy. Only empirical setting of $\tau_c$ and $\tau_f$ mentioned in experimental section. 2. Insufficient analysis of optimal quantization strategies across different model scales. No related discussion in experimental section. Other Comments Or Suggestions: None Questions For Authors: Please refer to Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable time and efforts in reviewing our manuscript. We have addressed each comment and made the necessary revisions to improve the quality and clarity of our manuscript. >Only empirical setting of $\tau_c$ and $\tau_f$ mentioned in experimental section. Sorry for causing this confusion. Actually, $\tau_c$ and $\tau_f$ are automatically set for each individual model and do not require adaptation. Specifically, we obtain their values in the following steps: - Compute the coarse-grained proxy $P_c$ (Eq.15) for each layer to be quantized. - Set $\tau_c$ to the value at the 50th percentile of all $P_c$. - Compute the fine-grained proxy $P_f$ (Eq.17) for each layer whose $P_c$ < $\tau_c$. - Set $\tau_f$ to the value at the 20th percentile of all $P_f$. Although fine-tuning the percentile values for each network may further improve accuracy, the percentile values (i.e., 20% and 50%) used in all our experiments already deliver strong performance across all RWKV networks, as shown in Table 2 of the manuscript. We have also included more results under various configurations in our latest manuscript. Here is an example of RWKV-6-1B in [Table R1](https://anonymous.4open.science/r/RWKVQuant-E375/Table_for_Tr_Tf.MD). >Insufficient analysis of optimal quantization strategies across different model scales. No related discussion in experimental section. In lines 347-353 of the manuscript, we have discussed this point. Here is a takeaway for your convenience. As shown in Table 2 of the original paper, on small-scale models, our method reduces the error by 14.5% compared to the SOTA; on larger-scale models, our method performs almost consistently with floating-point precision and reduces the error by 62.34% compared to the SOTA. The following table compares the average zero-shot accuracy between the original models and our quantized models. 
It can be observed that our method incurs only a slight accuracy decline across different model sizes. |Model |RWKV-6-1B|RWKV-6-3B|RWKV-6-7B|RWKV-6-14B| |:----:|:----:|:----:|:----:|:----:| | FloatingPoint|54.39%|58.32%|61.69%|63.65%| | RWKVQuant|51.69%|55.79%|60.19%|62.69%|
Accelerating Linear Recurrent Neural Networks for the Edge with Unstructured Sparsity
Accept (poster)
Summary: This paper explores compression techniques for linear RNNs, including unstructured sparsity and fixed-point quantization, and evaluates their acceleration on neuromorphic hardware. The study investigates the trade-offs between latency, energy, and accuracy compared to dense RNNs by introducing these compression techniques. The results demonstrate that highly sparse linear RNNs achieve improved efficiency-performance trade-offs, with 2× less compute and 36% less memory at iso-accuracy. Additionally, quantizing the sparse models and deploying them on the Intel Loihi 2 neuromorphic chip yields significant speed and energy improvements over edge GPUs. ## Update after rebuttal (Was previously in an official comment not visible to the authors) Thanks for the clarification. It would be beneficial to see the extended results and ablation study in the appendix. For now I will keep the score as 3. Claims And Evidence: The paper claims that sparse linear RNNs provide better efficiency-performance trade-offs than dense baselines, with notable reductions in computation and memory usage. Furthermore, it states that quantized sparse models on Intel Loihi 2 outperform edge GPUs in terms of speed and energy efficiency. These claims are supported by experimental results, which show improvements in denoising quality while maintaining computational efficiency. The evidence presented in the paper generally aligns with these claims. Methods And Evaluation Criteria: The proposed method is evaluated using the Intel Neuromorphic Deep Noise Suppression Challenge, which focuses on human speech denoising. The dataset is derived from the Microsoft DNS Challenge, containing clean human speech and noise source samples. The denoising quality is assessed using the scale-invariant signal-to-noise ratio (SI-SNR), which is an appropriate metric for this application. Given the embedded system context, the chosen evaluation settings are relevant and reasonable. 
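For context, SI-SNR is the standard scale-invariant variant of SNR used in speech denoising benchmarks: the (mean-removed) estimate is projected onto the clean target, and the projection's energy is compared to the residual's. A minimal NumPy sketch of this textbook definition (not code from the paper under review):

```python
import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant signal-to-noise ratio in dB (higher is better)."""
    estimate = estimate - estimate.mean()   # remove DC offset
    target = target - target.mean()
    # Project the estimate onto the clean target signal.
    s_target = (np.dot(estimate, target) / (np.dot(target, target) + eps)) * target
    e_noise = estimate - s_target
    return 10.0 * np.log10(np.dot(s_target, s_target) /
                           (np.dot(e_noise, e_noise) + eps))
```

Because of the projection, rescaling the estimate leaves the score essentially unchanged, which is what makes the metric scale-invariant and hence robust to gain differences between a model's output and the reference.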
Theoretical Claims: The paper does not make any theoretical claims. Experimental Designs Or Analyses: The experimental design is based on the S5 model built with JAX. JAXPruner is used for pruning the network to introduce unstructured sparsity, while the AQT library is used for quantization-aware training. The final network is deployed and evaluated on the Intel Loihi 2 neuromorphic chip. The experimental design appears valid, leveraging appropriate tools and methodologies to assess the impact of sparsity and quantization. Supplementary Material: I have reviewed the supplementary material, particularly the execution mode of Loihi chip. Relation To Broader Scientific Literature: The paper provides a sufficient literature review covering key topics such as linear RNNs, sparsity, model compression, and neuromorphic computing. The citations are relevant and provide necessary context for understanding the contributions of the work. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: * The paper successfully demonstrates an improved Pareto optimal front, achieving better denoising quality for the same multiply-accumulate operations (MAC) or reducing MAC while maintaining denoising quality. * The results show competitive performance compared to an edge GPU (Jetson Orin Nano) in terms of latency and energy efficiency. Weaknesses: * The study focuses primarily on a single design point: the S5 model on the Intel Loihi 2 chip. It does not explore how different variants of linear RNNs are affected by sparsity and quantization. * The paper examines model dimension versus denoising quality but does not sufficiently discuss other critical design parameters such as sparsity ratio and quantization format (beyond W8A16). Other Comments Or Suggestions: * It would be beneficial to evaluate how different linear RNN variants respond to sparsity and quantization to generalize the findings further. 
* Additional analysis of various sparse ratios and quantization formats could provide deeper insights into trade-offs and optimal configurations. Questions For Authors: * Have you considered evaluating the impact of sparsity on other linear RNN architectures beyond the S5 model? * Can you provide insights into how different quantization formats (other than W8A16) might affect the performance and efficiency of the model? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We gratefully thank the reviewer for their effort and feedback. We would like to address their questions, and we’d appreciate any raise in score if our arguments convince them. We will be happy to hear and address any further feedback. ### Generalization to different architectures: preliminary results on HGRN To expand our results to different linear RNN architectures beyond S5, we can present some preliminary results on training a 370M parameter language model based on the HGRN architecture [1]. Using the same iterative magnitude pruning setup as described in our paper, we trained the HGRN model on the FineWeb dataset for ~5B tokens with different target sparsity ratios from 0% to 90%. The training loss for the 90% sparse model is ~9% higher than for the dense model, see the Table below. Additional details are available in the Figure available at https://figshare.com/s/a8cd30c07af56f4ddbbf. These findings confirm a similar trend as what we reported in our paper on real-time sequence modeling tasks. More work is underway to scale up the language modeling experiments, and verify their validity on real-world datasets and benchmarks. | | Dense | 50% | 80% | 90% | |------------------------|-------|-------|-------|-------| | Training loss | 2.68 | 2.72 | 2.82 | 2.90 | | Relative loss change | 0% | +1.5% | +5.3% | +8.3% | | MACs (10^6) | 341 | 187 | 95 | 64 | | Relative MAC change | 0% | -45% | -72% | -81% | ### Ablation study on quantization schemes and sparsity levels We agree with the reviewer that exploring the large design space of quantization schemes and sparsity distributions would provide additional insights in this domain. On quantization, previous work on S5 [2] showed that, compared to W8A8, lower precision significantly impacted accuracy. Since Loihi’s message format doesn’t provide any performance advantage for 8-bit versus 16-bit activations, we selected W8A16 which preserves higher accuracy. 
Regarding sparsity, we selected a 90% target after some initial experimentation, as it resulted in a good trade-off between impact on task accuracy and MACs reduction. We could add an ablation study on sparsity levels to Appendix A, if the Reviewer finds it helpful for readers. We would also acknowledge these directions for future work by updating the last sentences of Section 4 with the following: *Finally, further exploration of quantization schemes and sparsification techniques could offer deeper insights into optimal model design for different hardware platforms. In particular, leveraging advanced data types for quantization (e.g., FP8) and adopting a more fine-grained selection of sparsity levels, potentially guided by iterative hardware profiling, are promising directions for future research.* #### References 1. Zhen Qin, Songlin Yang, Yiran Zhong, “Hierarchically Gated Recurrent Neural Network for Sequence Modeling”, NeurIPS 2023 Spotlight. 2. Abreu, Steven, et al. "Q-S5: Towards quantized state space models." arXiv preprint arXiv:2406.09477 (2024).
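The core operation behind the magnitude-pruning setup discussed above can be illustrated in a few lines. This is a generic single-step sketch with NumPy, not the authors' JAXPruner-based iterative pipeline: the smallest-magnitude fraction of weights is zeroed out to reach a target sparsity.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)           # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # The k-th smallest magnitude defines the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold      # keep strictly larger magnitudes
    return weights * mask, mask
```

In iterative magnitude pruning, a step like this is interleaved with further training while `sparsity` is gradually raised toward the final target (e.g., the 90% level selected in the rebuttal).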
Summary: This paper explores the efficiency-performance trade-offs of sparse linear RNNs through a scaling study. The models achieve SOTA results in real-time audio denoising. By quantizing and deploying them on the Intel Loihi 2 neuromorphic chip, the work significantly reduces latency and energy consumption compared to a dense model on an edge GPU. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, they are correct. Experimental Designs Or Analyses: The presentation of Figure 6 lacks clarity. The experimental conditions for Base and QAT are not very clear. Please provide a detailed explanation of what Base represents in each experiment. Supplementary Material: Yes. I reviewed A.1 and A.2. Relation To Broader Scientific Literature: This work emphasizes the suitability of linear RNNs for real-time, long-range sequence modeling on edge devices, while noting that their compression and acceleration remain underexplored. Additionally, the paper notes that linear RNNs are a promising match for neuromorphic processors. The study explores sparsification and quantization of linear RNNs and deploys the compressed models on the Intel Loihi 2 neuromorphic chip for real-time processing. Essential References Not Discussed: No missing essential references were identified. Other Strengths And Weaknesses: I think the paper is reasonably clear and well-supported by sufficient evidence. However, its organizational structure could be improved. For example, the section on Compressing Linear RNNs includes a significant amount of related work and background information, which could be streamlined without disrupting the logical flow. Additionally, the discussion of contributions is relatively weak and could be expanded. Furthermore, the section on hardware deployment could provide more details to enhance clarity. 
Other Comments Or Suggestions: Is FPX in Figure 6 actually intended to be FXP, meaning fixed-point? Questions For Authors: Could you provide additional related works on deploying linear RNNs to neuromorphic devices, along with performance comparisons? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We gratefully thank the reviewer for their effort and feedback. We would like to address their questions, and we’d appreciate any raise in score if our arguments convince them. We will be happy to hear and address any further feedback. ### Clarification on Figure 6 We regret the suboptimal presentation of Figure 6 and appreciate the reviewer’s nudge to clarify. We confirm that FPX is a typo and should read FXP, for fixed-point. We will replace the caption with the following in the camera-ready version: *Base: trained a 32-bit floating-point model and applied post-training quantization without additional quantization-aware fine-tuning. QAT: trained the model with quantization-aware training using W8A16 quantization for the forward pass and straight-through estimators for the backward pass. The results show that the Base model without QAT performs slightly better in FP32 than the QAT model, but significantly worse in static quantization and fixed-point precision. The model shown here is the sparse-6 variant, see Figure 4.* ### Previous work on deployment of linear RNNs to neuromorphic devices Since broad interest in linear RNNs is relatively recent, there is limited previous work on deployment of this architecture to neuromorphic devices, and, to the best of our knowledge, no previous work on hardware-aware model compression in this context. Previous work on Loihi 2 demonstrated the implementation of S4D [1], a state space model based on the Legendre Delay Network [2], and a MatMul-free LLM [3]. While S4D could potentially target the same application domains as S5, Ref. [1] only reports benchmarks on simple synthetic tasks (sMNIST and sCIFAR) and without model compression. For this reason, a direct performance comparison is not possible. Ref. [2] implemented a state space model with a spiking neural network on a neuromorphic processor – also without model compression techniques. The MatMul-free LLM in Ref. 
[3] uses ternary weight matrices for model compression but doesn’t leverage sparsity advantages from the neuromorphic chip on which it is deployed, thereby making the comparison to our present work difficult. We are not aware of any further deployment of linear RNNs on neuromorphic hardware. #### References 1. Meyer, Svea Marie, et al. "A Diagonal Structured State Space Model on Loihi 2 for Efficient Streaming Sequence Processing." arXiv preprint arXiv:2409.15022 (2024). 2. Gaurav, Ramashish, Terrence C. Stewart, and Yang Yi. "Legendre-SNN on Loihi-2: Evaluation and Insights." NeurIPS 2024 Workshop Machine Learning with new Compute Paradigms. 3. Abreu, Steven, et al. "Neuromorphic Principles for Efficient Large Language Models on Intel Loihi 2." arXiv preprint arXiv:2503.18002 (2025). --- Rebuttal Comment 1.1: Comment: Thank the authors for the response. I would like to keep my score.
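As background on the quantization-aware training scheme clarified in the rebuttal above (quantized forward pass, straight-through estimator in the backward pass), the forward "fake-quantize" step can be sketched as follows. This is a minimal symmetric per-tensor illustration in NumPy, not the authors' AQT configuration:

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Quantize to signed integers, then dequantize back to float."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8-bit weights
    scale = np.max(np.abs(x)) / qmax
    if scale == 0:
        return x.copy()                     # all-zero tensor: nothing to do
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

# Rounding has zero gradient almost everywhere, so QAT frameworks use a
# straight-through estimator in the backward pass: the quantizer is treated
# as the identity and gradients flow through unchanged.
```

Training with such a forward pass lets the weights adapt to the rounding error, which is why the rebuttal's QAT model degrades far less than the Base model once deployed in fixed-point precision.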
Summary: This paper explores unstructured sparsity in linear recurrent neural networks (RNNs) to improve efficiency in edge AI applications, particularly when deployed on neuromorphic hardware (Intel Loihi 2). The authors examine various model compression techniques and conduct a scaling study to determine the Pareto front of performance vs. efficiency. The study highlights unstructured sparsity as a viable strategy for real-time processing in low-power edge devices. Claims And Evidence: The claim of the findings in this work can generalize to edge AI beyond neuromorphic hardware is questionable. Only one neuromorphic chip (Loihi 2) and one task (audio denoising) was tested. Methods And Evaluation Criteria: It makes sense that the performance is measured using Scale-Invariant Signal-to-Noise Ratio (SI-SNR), compute (MACs), memory usage, latency, and energy consumption. Results are compared with dense models running FP32 on a Jetson Orin Nano. However, no evaluation on other neuromorphic processors (e.g., SpiNNaker 2, IBM NorthPole). Only one downstream task is evaluated. Theoretical Claims: No theoretical claims are described in the paper. Experimental Designs Or Analyses: See "Methods And Evaluation Criteria". Supplementary Material: Additional implementation and experiments are described in the supplementary material, which provide extra soundness to the reproducibility. Relation To Broader Scientific Literature: The paper builds on prior work in structured state-space models (S5), model compression, and neuromorphic hardware. The experimental results in this work can be good references for the deployment of linear RNN models on edge devices. Essential References Not Discussed: I am not an expert in either linear recurrent neural networks or neuromorphic computing, so I cannot assess whether any relevant prior findings are missing from the paper. Other Strengths And Weaknesses: Strengths: 1. 
The investigation of deep learning model compression in neuromorphic computing has substantial real-world impact. 2. The on-device experiment results are very beneficial for the community regarding edge AI deployment. Weaknesses: 1. This work reads more like an experimental report than an academic paper with a solid intellectual contribution. The model and the compression methods are all off-the-shelf. 2. The profiling of model compression on neuromorphic hardware is only conducted on a single device model (Intel Loihi 2), which weakly supports the conclusions drawn from the experimental results. 3. Only a single task (audio denoising) is evaluated. A broader range of tasks needs to be assessed to sufficiently support the conclusions for general edge AI applications. Other Comments Or Suggestions: In conclusion, I suggest the authors take a further step in designing an original model compression methodology for neuromorphic hardware at the edge. The generalizability of the current results requires more comprehensive experiments. The authors should also reconsider whether they want to position this work specifically for "neuromorphic hardware" or for more general edge computing, which should be reflected correctly in the paper title. Questions For Authors: 1. Why was only Intel Loihi 2 used? Would the results generalize to other neuromorphic chips like SpiNNaker 2 or IBM NorthPole? 2. Would the observed efficiency trends hold for other real-time tasks (e.g., NLP, speech recognition, or perception)? 3. How does batch size impact efficiency? Could Loihi 2 be optimized for batch processing? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We gratefully thank the reviewer for their effort and feedback. We would like to address their questions, and we’d appreciate any raise in score if our arguments convince them. We will be happy to hear and address any further feedback. ### Generalization to different accelerators Please refer to our response to Reviewer VTiJ. ### Generalization to different tasks Please refer to our response to Reviewer VTiJ. ### Impact of batch processing We appreciate the reviewer’s question regarding the impact of batching on our benchmarking results. We address it in the following paragraph, which we plan to include in Section 3.3 of the camera-ready version. *Although edge applications are typically thought of as single-batch applications, some workloads at the edge require small-batch inference, e.g., de-noising audio streams from multiple on-device microphones. For this reason, it is important to investigate how batch processing affects latency and energy efficiency for the two hardware architectures. While Intel Loihi 2 doesn’t natively support batching in the sense of processing multiple independent samples through the same model instantiation, the parallel inference of independent sequences can be obtained by replicating the model on the chip as many times as required by batch size. We extended the results in Table 1 to compare the effect of this implementation of batching to the usual batch processing of the Jetson Orin GPU. Figure X (available at https://figshare.com/s/4d6035c9a2c4739d7201) shows total latency and energy per sample across batch sizes, from 1 to 32. The results demonstrate that our approach on Loihi 2 maintains its large latency advantage while the energy efficiency gain is only maintained for the small-batch regime (below 8 samples), which is typical of edge inference applications. 
As expected, batch processing improves energy efficiency for the GPU, since the cost of data movement associated with loading the model is offset by the parallelized evaluation of multiple samples.* ### Novelty and originality of the work We thank the reviewer for pointing out that we did not sufficiently convey the novelty and originality of our work. We also appreciate their statement that our work “can be a good reference for the deployment of linear RNN models on edge devices”. In addition, we would argue that our paper has two further main original/novel contributions. First, this paper spearheads the idea that neuromorphic processors are an ideal platform for the emerging class of linear RNNs. This is particularly due to the tight integration of massively parallel compute and memory in neuromorphic hardware, which can efficiently update stateful recurrent neurons. Neuromorphic processors are typically designed for low-latency processing of sequentially incoming sensory signals, and can thus particularly benefit from the advantageous scaling trends of linear RNNs with long sequences. Our present paper is the first peer-reviewed publication that explores the combination of model compression techniques required to exploit the synergy between neuromorphic processors and linear RNNs. Second, while we agree that we applied off-the-shelf compression techniques, we would argue that our paper combines them into a novel training recipe necessary to leverage the specific features typical for neuromorphic computing for an optimized deployment of linear RNNs. We would like to kindly point out that, according to the 2025 ICML reviewer guidelines, “Originality need not mean wholly novel methods. 
It may mean a novel combination of existing methods to solve the task at hand, a novel dataset, or a new way of framing tasks or evaluating performance so as to match the needs of the user.” Our new training and deployment recipe has proven to bring tangible performance and energy advantages on real hardware for a real-world task that requires low-latency, low-power execution: audio denoising. We will release this new training pipeline upon acceptance so that the community can extend it or apply it to additional applications. We will update the introduction in the camera-ready version accordingly. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response. My previous concerns are mostly addressed. I would like to raise my rating to a 3.
Summary: The paper presents a method to accelerate the computations of linear Recurrent Neural Networks (RNNs) using unstructured sparsity for edge computing applications. This work is motivated by a case study showing that highly sparse linear RNNs achieve superior efficiency-performance trade-offs compared to dense baselines. The paper particularly highlights the deployment of sparse linear RNNs on the Intel Loihi 2 neuromorphic processor, where quantized models demonstrate significant reductions in latency (~42×) and energy consumption (~149×) compared to edge GPUs. Claims And Evidence: The paper supports its claims about the efficiency and performance benefits of unstructured sparsity in linear RNNs by reporting its implementation results from the Intel Loihi 2 chip. The results demonstrate significant reductions in latency (42×) and energy consumption (149×) compared to an edge GPU, supported by benchmarks on real-world tasks like audio denoising. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria align well with the problem of accelerating linear RNNs for edge applications. The Intel N-DNS Challenge benchmark is a good test case for audio denoising, and the comparison of sparse vs. dense models on Loihi 2 vs. Jetson Orin Nano is interesting. Theoretical Claims: There is not much theory in this work. Experimental Designs Or Analyses: Yes, the experimental design appears sound, especially in evaluating sparsity effects, quantization, and hardware acceleration on Loihi 2 vs. Jetson Orin Nano. The Pareto front analysis shows interesting trade-offs between accuracy, compute efficiency, and memory. Supplementary Material: Yes, all parts have been checked. Relation To Broader Scientific Literature: The paper builds on prior works in sparse models and neuromorphic computing. 
The main broader impact of this work is its validation on neuromorphic hardware for streaming tasks, pushing the field toward practical edge AI deployment. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: -- Low-latency, memory-efficient implementation of AI models is highly important for real-time streaming applications, which motivates the research conducted in this paper. -- The authors empirically validate their claims by studying Pareto efficiency trade-offs across different compute budgets. -- The integration of neuromorphic processors for accelerating sparse models is very interesting. It provides practical results that support their simulation results. Weaknesses: -- While Loihi 2 is well-suited for sparse and event-driven computations, it remains unclear how these optimizations would translate to conventional customized hardware. How does it compare to customized accelerators for sparse or quantized models? -- The focus of this work is on audio denoising. How could the proposed method generalize to other domains (e.g., NLP and vision applications)? Other Comments Or Suggestions: N/A Questions For Authors: See my points listed as weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We gratefully thank the reviewer for their effort and feedback, and we would like to address their questions. We will be happy to hear and address any further feedback. ### Generalization to different accelerators Reviewers VTiJ and jRFp raised the need for a discussion about the generalizability of our methodology and benchmarking to other hardware platforms (e.g., SpiNNaker 2 and IBM NorthPole). While we adopted a hardware-aware approach focused on Loihi 2, we believe that platforms with similar feature sets can benefit from it. However, since most neuromorphic processors are currently research prototypes, they don’t provide public access (like IBM NorthPole) and/or lack a high-level programming framework that would allow the transfer of models from one platform to another. Following the approach proposed in NeuroBench [1], we will publicly release our training pipeline and checkpoints to enable the implementation on other neuromorphic processors by the relevant experts. We further address the generalizability discussion in the following paragraph, which we plan to add to Section 4 in the camera-ready version. *While the proposed hardware-aware methodology is tailored to leverage the Loihi-specific feature set, we believe that platforms with similar characteristics could potentially benefit from the results we presented. Neuromorphic processors such as SpiNNaker 2 [2] and IBM NorthPole [3], which present similar architectural patterns to Loihi 2, would greatly benefit from our methodology on activation and weight sparsity. 
In addition, other platforms with tight compute-memory integration, such as Cerebras Wafer-Scale Engine (WSE-3) [4], provide support for unstructured sparsity, even if targeting datacenter-scale applications.* ### Generalization to different tasks Reviewers VTiJ and jRFp noted that extending our benchmarking to other datasets would strengthen our claims and the relevance of our work for the broader edge inference community. Starting from the S5 baseline [5] and following our methodology, we are running experiments on the keyword spotting SpeechCommands V2-35 (SC35) dataset [6], which provides a common real-time use case for the edge. The preliminary results, available at https://figshare.com/s/3d0ba8e3f515535871ba, compare the training curves for a narrow dense and wider sparse model (90% sparse weights and ReLU activations) with similar inference MACs. The plot shows that the sparse model (still under training) is on track to match or exceed the accuracy of the dense counterpart. Similar experiments are underway across inference compute budgets, and we plan to add such results in the same form as Figure 4 to the camera-ready version. Since our methodology seems to generalize without modification to SC35, we expect it to also generalize to other sequence modeling applications commonly solved with linear RNNs, such as bio-marker signal monitoring [7], time series forecasting [8], or action recognition [9]. #### References 1. Yik, Jason, et al. "The neurobench framework for benchmarking neuromorphic computing algorithms and systems." Nature Communications 16.1 (2025): 1545. 2. Mayr, Christian, Sebastian Hoeppner, and Steve Furber. "SpiNNaker 2: A 10 million core processor system for brain simulation and machine learning-keynote presentation." Communicating Process Architectures 2017 & 2018. IOS Press, 2019. 277-280. 3. Modha, Dharmendra S., et al. "Neural inference at the frontier of energy, space, and time." Science 382.6668 (2023): 329-335. 4. Lie, Sean. 
"Cerebras architecture deep dive: First look inside the hardware/software co-design for deep learning." IEEE Micro 43.3 (2023): 18-30. 5. Smith, Jimmy TH, Andrew Warrington, and Scott W. Linderman. "Simplified state space layers for sequence modeling." arXiv preprint arXiv:2208.04933 (2022). 6. Warden, Pete. "Speech commands: A dataset for limited-vocabulary speech recognition." arXiv preprint arXiv:1804.03209 (2018). 7. Pimentel, Marco AF, et al. "Toward a robust estimation of respiratory rate from pulse oximeters." IEEE Transactions on Biomedical Engineering 64.8 (2016): 1914-1923. 8. Schirmer, Mona, et al. "Modeling irregular time series with continuous recurrent units." International conference on machine learning. PMLR, 2022. 9. Kuehne, Hildegard, et al. "HMDB: a large video database for human motion recognition." 2011 International conference on computer vision. IEEE, 2011.
Steer LLM Latents for Hallucination Detection
Accept (poster)
Summary: The paper proposes a method named Truthfulness Separator Vector (TSV) to detect hallucinations in LLMs. The TSV is a lightweight vector that reshapes the LLM's latent space during inference without altering model parameters. A two-stage training framework is used to learn the TSV in a semi-supervised manner, which makes sense to me. Extensive experiments demonstrate that TSV performs well with minimal labeled data, exhibiting strong generalization across datasets. ------------------------After rebuttal-------------- Most of my concerns are well addressed, and therefore, I change my score to 4. However, I still think the steering method mentioned in A5 is worth trying, as it would further strengthen this paper's contributions to the community. Claims And Evidence: Supported by the previous study HaloScope [NeurIPS'24], I think the two-stage method to learn the truthfulness separator vector can work in hallucination detection tasks. Methods And Evaluation Criteria: Overall, the paper is well-written and easy to follow. The proposed hallucination detection pipeline makes sense. In all the evaluated datasets, the truthful and untruthful latent representations can all be separated well. Are there failure cases when the latent representations overlap? In other words, does the separation assumption for truthful and untruthful features always hold? Theoretical Claims: The $\epsilon$ in (7) does not seem to be defined. Experimental Designs Or Analyses: I think most of the experimental designs are sound and valid. While I think it will be better to discuss and briefly evaluate different methods for pseudo-label assignments. The current experiments seem to lack a proper baseline model. Supplementary Material: I have reviewed some parts of the supplementary material. Relation To Broader Scientific Literature: This research intersects with the broader scientific literature on LLM reliability. 
It builds on existing hallucination detection methods by addressing the limitation of default embeddings. Demonstrating TSV's effectiveness across datasets validates the approach of steering latent spaces, providing a new direction for improving LLM trustworthiness in real-world applications. Essential References Not Discussed: I think most of the listed related works are quite essential for understanding the contributions of this paper. While there are some other works on large vision-language models (LVLMs) that seem to use a similar approach, studying latent features between positive and negative samples to adjust the model's behavior: e.g. [1] Reducing hallucinations in vision-language models via latent space steering, 2024. [2] Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection, 2024. These papers are only on arXiv, from what it seems, but I still suggest searching for related papers using feature steering in hallucination-related issues in LVLM fields. Other Strengths And Weaknesses: I suggest that the authors further strengthen the argument that using label assignment via optimal transport is necessary for the proposed method, which is a key contribution of this paper. Generally, there can be a series of techniques that can achieve the label assignment for TSV. Why would using other methods hinder the overall detection performance? Providing a brief discussion will make it easier for readers to grasp the main contribution. Other Comments Or Suggestions: See previous parts. Questions For Authors: Can the learned TSV be used to steer activations for hallucination mitigation, as in [1]? [1] Inference-time intervention: Eliciting truthful answers from a language model. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate your thoughtful and valuable comments. Below, we provide detailed responses to each of your questions and comments. > A1. Optimal separability of the representation Thank you for your insightful question! We acknowledge that there can indeed be failure cases where the representations of truthful and hallucinated contents overlap. For instance, in Table 1, the performance on the NQ Open dataset (76.1%) is relatively lower compared to other datasets, suggesting weaker separability. This is likely due to the inherently challenging QA task, which can make perfect separation harder. **We'd like to emphasize that our goal is not to guarantee perfect separability, but rather to improve the separability relative to the pre-trained model.** _This improvement holds consistently across all datasets_, even if the final representation space is not perfectly disentangled. As a result, our method enables more reliable deployment compared to prior approaches that rely on fixed pre-trained embeddings. > A2. $\epsilon$ in Equation (7) The value of $\epsilon$ is defined in **Appendix L602**, where it is set to 0.05, following [1], which demonstrated its effectiveness. For improved clarity, we will move this detail to the main paper. Thank you for pointing this out! [1] Caron et al., "Unsupervised Learning of Visual Features by Contrasting Cluster Assignments," NeurIPS 2020 > A3. Different pseudo-label assignment strategies We appreciate the reviewer for highlighting this important point. Our motivation for introducing optimal transport (OT) is that we aim to generate pseudo-labels that align with the class distribution of unlabeled LLM generations, which is inherently **imbalanced**. Traditional pseudo-labeling methods [2,3] can suffer from class imbalance, leading to biased predictions—either toward majority classes or, in some cases, unstable predictions favoring minority classes. 
This leads to a **mismatch between the class distribution of pseudo-labels and the true underlying distribution**, which can degrade both pseudo-label quality and overall detection performance. In contrast, our OT-based clustering approach introduces cluster-wise regularization through a constrained optimization framework, **ensuring that cluster sizes align with the underlying or estimated class distribution**. In comparison, we evaluate two pseudo-labeling baselines—nearest centroid [2] and confidence thresholding [3] on LLaMA-3.1-8b using AUROC. We will include these comparisons and discussions in the main paper. | Method | TruthfulQA | SciQ | |----|------------|------------| |Nearest centroid [2]| 82.2 | 81.8 | | Confidence thresholding [3]| 83.1 | 81.6 | **Ours** | **84.2** | **85.8** | [2] Rebuffi et al., "iCaRL: Incremental Classifier and Representation Learning," CVPR 2017 [3] Sohn et al., "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence," NeurIPS 2020 > A4. References on LVLM latent space steering Thank you for highlighting these interesting works in the LVLM field! We agree that comprehensively studying latent feature steering in LVLMs is also crucial, offering valuable insights that can inform both communities and help shape future directions. We are happy to include the following relevant works from the LVLM literature. We will also ensure a more thorough literature review to identify any additional related studies. [4] Liu et al., "Reducing Hallucinations in Vision-Language Models via Latent Space Steering," ICLR 2025 [5] Yang et al., "Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection," CVPR 2025 [6] Chen et al., "ICT: Image-Object Cross-Level Trusted Intervention for Mitigating Object Hallucination in Large Vision-Language Models," CVPR 2025 [7] Duan et al., "TruthPrInt: Mitigating LVLM Object Hallucination Via Latent Truthful-Guided Pre-Intervention," ArXiv 2025 > A5. 
Can TSV mitigate hallucination? Thank you for the interesting question. TSV is designed to learn a representation space specifically for detection (i.e., classification), which is fundamentally different from mitigation (i.e., generation). As a result, it is not straightforward to directly apply a detection-trained TSV to a mitigation task. However, TSV offers a plug-and-play mechanism for pre-trained LLMs. In practice, one can first apply TSV to classify truthful vs. hallucinated outputs from unlabeled generations in the wild, and then leverage these predictions to learn a steering vector for the mitigation task [8]. **This perspective opens up a promising direction for hallucination mitigation under unsupervised/limited-label settings, where extensive supervision is often impractical**. [8] Li et al., "Inference-Time Intervention: Eliciting Truthful Answers from a Language Model," NeurIPS 2023
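As a rough illustration of the second-pass scoring flow described in this rebuttal (add a steering vector to an intermediate hidden state, then score the steered representation), here is a hedged sketch. All dimensions, vectors, and the specific scoring rule are illustrative placeholders, not the paper's exact formulation.

```python
import numpy as np

def truthfulness_score(hidden, tsv, mu_true, mu_hall, alpha=1.0):
    """Illustrative second-pass scoring: add a steering vector to an
    intermediate hidden state, normalize, and compare the steered
    representation against the two class mean directions."""
    steered = hidden + alpha * tsv
    z = steered / np.linalg.norm(steered)
    # Higher score -> steered state lies closer to the "truthful" direction.
    return float(z @ mu_true - z @ mu_hall)

d = 16
rng = np.random.default_rng(0)
mu_true = rng.normal(size=d)
mu_true /= np.linalg.norm(mu_true)
mu_hall = -mu_true                 # toy setup: opposite mean directions
tsv = 0.5 * mu_true                # a vector nudging states toward "truthful"

hidden = rng.normal(size=d)
print(truthfulness_score(hidden, tsv, mu_true, mu_hall))
```

Since the vector is only added during this second pass, toggling it off recovers the original model behavior, matching the plug-and-play deployment described above.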
Summary: This paper introduces the Truthfulness Separator Vector (TSV), a lightweight approach for hallucination detection in LLMs that reshapes the model's latent space during inference without modifying its parameters. The method employs a single trainable vector added to an intermediate layer, trained through a two-stage framework that first uses a small labeled exemplar set (as few as 32 examples) and then incorporates pseudo-labeled unlabeled data using an optimal transport algorithm with confidence-based filtering. Experiments across multiple datasets (TruthfulQA, TriviaQA, SciQ, NQ Open) demonstrate state-of-the-art performance, achieving +12.8% AUROC improvement on TruthfulQA while requiring only 0.00005% of model parameters (4K for LLaMA-3.1-8B). The approach shows strong generalization across datasets and model families (LLaMA and Qwen) at various scales (7B-70B parameters), performing comparably to fully supervised methods with significantly less labeled data and 8-800× fewer parameters than parameter-efficient fine-tuning alternatives. ## update after rebuttal Most of my concerns have been addressed. I will maintain my positive score. Claims And Evidence: The primary claims are well-supported by empirical evidence: - Superior performance claim: The authors demonstrate substantial improvements over state-of-the-art methods across multiple datasets. - The reported AUROC improvements (e.g., +12.8% on TruthfulQA compared to previous methods) are significant and convincing. - Minimal labeling requirement claim: The paper shows that their approach performs nearly as well with just 32 labeled examples as with full supervision. The ablation studies in Figure 3c provide sufficient evidence for this claim. - Lightweight and flexible design claim: The comparison with PEFT methods in Table 4 clearly demonstrates TSV's parameter efficiency, using 8-800× fewer parameters than alternatives while achieving superior performance. 
Methods And Evaluation Criteria: The proposed methods are appropriate for the hallucination detection problem. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental designs are generally sound. Supplementary Material: Yes, I briefly reviewed all parts. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The paper addresses a critical problem in LLM deployment with a pragmatic approach that doesn't require model fine-tuning. - The component analysis provides valuable insights into the contribution of each part of the system. Weaknesses: - The paper doesn't fully explore potential limitations of the approach, such as its behavior with longer-form content beyond QA pairs. - While the method shows good generalization across datasets, there's limited discussion of how it might behave across different model architectures or scales beyond the tested models. Other Comments Or Suggestions: Consider explaining in more detail how the method might be deployed in practice alongside a production LLM system. Questions For Authors: 1. How might the TSV approach scale to much longer generations beyond QA pairs? Many real-world hallucinations occur in longer contexts where there might be a mix of truthful and hallucinated content within a single generation. 2. Have you explored whether the TSV learned from one model architecture (e.g., LLaMA) transfers to a different architecture (e.g., Qwen)? This would strengthen the claim about generalization capabilities. 3. The optimal transport algorithm assumes a class distribution based on the exemplar set. How sensitive is the method to mismatches between the assumed and actual class distributions in real-world data? 4. Could the TSV approach be extended to finer-grained hallucination detection (e.g., identifying specific hallucinated spans within otherwise truthful text)? This would significantly increase its practical utility. 5. 
The results in Table 4 show TSV outperforming LoRA despite using far fewer parameters. Is this advantage primarily due to the low-data setting, or is there something inherently more effective about the TSV approach for hallucination detection? Would LoRA perform better if updates were restricted to specific components (e.g., only the o_proj weight in the 4th layer) rather than applied broadly across the model? 6. The ablation studies examine different locations for applying TSV, but have you explored applying TSV to multiple locations simultaneously? For example, can the vector be added to multiple positions within the MHA architecture or to multiple layers? This might further enhance the separation capability. 7. How is performance affected when using quantized models (e.g., 4-bit or 8-bit quantization) compared to fp16 models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and insightful questions. Below, we provide detailed responses to each of your points. All the added experiments are performed with LLaMA-3.1-8b. >A1. Longer generations Great point. In this work, we focus on short-form QA (phrase and sentence-level) which remains challenging, as evidenced by suboptimal performance reported in prior literature. We adopt this setting to ensure fair comparison with existing benchmark studies. That said, we fully agree that extending to long-form generation is a critical direction for real-world applications. We view our current work as a necessary step toward this goal. One natural solution is to decompose the long-form generation into multiple QA pairs and verify each pair individually. This reframes the problem as hallucination detection over a set of QA pairs, where TSV can be applied. We believe this warrants a deeper investigation on its own, and we appreciate you highlighting it! > A2. Deployment in practice TSV is a lightweight, plug-and-play hallucination detection module that can be integrated with any production LLMs. We first perform a standard forward pass using the original model parameters (without TSV) to generate a response. Then, during a second pass, we apply TSV at an intermediate layer and compute a truthfulness score using the steered representation. It can be flexibly toggled on/off, preserving the original model behavior when hallucination detection is not required. We will clarify these deployment steps more explicitly in the revision. > A3. Class distribution mismatch Thank you for bringing up this point! We manually construct exemplar sets under three different scenarios: (1) distribution aligned with the unlabeled generations, (2) uniform distribution, and (3) distribution reversed from the unlabeled generations. 
We observe a slight performance decline in scenarios with distribution mismatches, but the overall performance remains competitive. |Setting|TruthfulQA|SciQ| |-|-|-| |Aligned (Ours)|84.2|85.8| |Uniform|82.8|82.5| |Reversed|82.1|82.7| > A4. Extension to the span-level hallucination detection Great question! We agree this is an exciting direction, which requires localizing exactly which tokens or phrases are hallucinated. This is particularly challenging, as it requires adapting TSV to separate hidden states at different token positions. One promising direction is to identify salient entities—often the primary source of hallucinations—and apply TSV before and after each entity span, then measure the resulting shift in hidden states or detection scores to infer potential hallucinations at the corresponding span. These challenges require deeper investigation. But again, we agree this could significantly increase practical utility and will consider it in future work. > A5. Ablation on LoRA TSV is more effective for hallucination detection as it directly shapes the representation space while being more parameter-efficient. Additionally, we applied selective LoRA updates to specific model components within the same layer as TSV, and found these targeted adjustments still underperformed TSV, as shown below. |Method|TruthfulQA|SciQ| |-|-|-| |LoRA (q_proj) |72.0|72.2| |LoRA (k_proj) |71.6|74.1| |LoRA (v_proj) |76.7|76.8| |LoRA (o_proj) |75.2|72.4| |**Ours**|**84.2**|**85.8**| > A6. Applying TSV to multiple locations We appreciate your insights. We applied TSV to multiple layers or MHA components of LLaMA-3.1-8b on TruthfulQA and observed slight performance gains. However, a single-layer TSV already performs strongly, suggesting it is sufficient for effective hallucination detection and highlights the efficiency of our approach. 
|Layer|AUROC|MHA Components|AUROC| |-|-|-|-| |7, 13|84.2|Res, MLP |82.1| |8, 14|84.9|Res, Attn |83.3| |9, 12|84.7|MLP, Attn |85.2| |Ours (9th layer)|84.2|Ours (Residual)|84.2| > A7. Performance on the quantized model Thank you for the great question. We experiment with an 8-bit quantized LLaMA-3.1-8b model$^1$ and observe a slight drop in performance. However, the overall results remain strong, indicating that our method is robust to quantization. |Model|TruthfulQA|SciQ| |-|-|-| |8-bit|84.1|84.9| |16-bit|84.2|85.8| $^1$ https://huggingface.co/docs/transformers/main/en/quantization/bitsandbytes > A8. Transferability for different models Thank you for the suggestion! While we demonstrate that TSV performs well across various models and scales, we believe full transferability is unlikely. This is because differences in architecture, training framework, and pretraining data across models (e.g., LLaMA vs. Qwen) lead to **fundamentally different representation spaces**, making direct application of TSV challenging. However, a promising future direction could be to train an adapter that aligns the representations between models of different architectures or scales.
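The class-distribution discussion in A3 above rests on the optimal-transport pseudo-labeling step (the $\epsilon$-regularized balanced assignment, with $\epsilon = 0.05$ following SwAV, per the first rebuttal). Below is a minimal Sinkhorn-style sketch of that balancing idea; the scores, sample count, and target marginals are illustrative, not the paper's implementation.

```python
import numpy as np

def balanced_pseudo_labels(scores, class_prior, eps=0.05, n_iter=200):
    """Sinkhorn-style normalization: convert raw class scores into soft
    assignments whose per-class totals match the target class_prior,
    while each sample's assignment still sums to one."""
    Q = np.exp(scores / eps)                  # (n_samples, n_classes)
    n = Q.shape[0]
    for _ in range(n_iter):
        Q = Q / Q.sum(axis=0, keepdims=True) * class_prior  # class marginals
        Q = Q / Q.sum(axis=1, keepdims=True) / n            # sample marginals
    return Q * n                              # rows sum to ~1

rng = np.random.default_rng(0)
scores = 0.1 * rng.normal(size=(8, 2))        # illustrative logit-like scores
Q = balanced_pseudo_labels(scores, class_prior=np.array([0.5, 0.5]))
print(Q.sum(axis=0))  # per-class mass ~[4, 4]: balanced assignment
```

Passing an estimated (imbalanced) `class_prior` instead of the uniform one is what lets the pseudo-label distribution track the true class distribution, which is the property the baselines above (nearest centroid, confidence thresholding) lack.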
Summary: The paper introduces the Truthfulness Separator Vector (TSV), a lightweight and flexible steering vector designed to reshape the latent space of Large Language Models (LLMs) during inference to enhance the separation between truthful and hallucinated outputs without modifying model parameters. TSV is trained using a two-stage framework that first learns from a small set of labeled exemplars and then augments this set with pseudo-labeled LLM generations using an optimal transport-based algorithm. Claims And Evidence: N/A Methods And Evaluation Criteria: Strength: 1. The paper presents a novel and lightweight steering vector method that enhances hallucination detection in LLMs without modifying model parameters. 2. Unlike prior work that relies on costly fine-tuning or extensive labeled datasets, TSV reshapes the latent space dynamically during inference while maintaining model flexibility. 3. The paper provides rigorous experimental validation, demonstrating state-of-the-art hallucination detection on multiple benchmark datasets. Weakness: 1. A trainable steering vector is not a novel concept for LLMs. The main idea of TSV is to model the binary classification problem of hallucination detection using the von Mises-Fisher distribution and then treat the mean direction of each cluster as a trainable steering vector. While the overall performance is promising, this approach essentially combines existing methods in a new problem setting rather than introducing a fundamentally new technique. 2. The paper primarily uses AUROC as the evaluation metric for hallucination detection. Could you also provide precision (both positive and negative classes), recall, and F1 scores at either the generation level (non-hallucination vs. hallucination) or the token level? 3. TSV heavily relies on the quality of pseudo labels. It would be helpful to include a failure case analysis and assess its robustness to label noise. 
Additionally, how is the quality of the exemplar set evaluated, and how does the content of exemplars impact the final results? 4. Although TSV avoids fine-tuning, it still requires hyperparameter tuning, which to some extent reduces its practicality in plug-and-play scenarios across diverse models. Theoretical Claims: N/A Experimental Designs Or Analyses: See above. Supplementary Material: I have gone through the supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: ## update after rebuttal Most of my concerns have been addressed. I would like to keep the current positive score for this work. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > A1. Steering vector for LLMs Thank you for your insights! While it is true that the steering vectors have been explored in LLMs, our key contribution lies in how we specifically design them for hallucination detection, as recognized by you and Reviewers iT9L and tv5M. Sections 4.2 and 4.3 illustrate the novel algorithm that leverages both a few labeled and unlabeled LLM generations with an optimal transport-based pseudo-labeling framework for training. To our knowledge, this is the **first work** to formulate hallucination detection in this way. Moreover, we emphasize that **our contribution lies not only in the specific method but also in the perspective shift we offer to the hallucination detection community**, which has largely relied on fixed pre-trained embeddings—despite their limitations. Our work challenges this status quo by demonstrating that actively steering the latent space—even with a lightweight vector—can substantially improve detection performance. We believe this perspective shift is **fundamentally significant, as it opens up a new design space** and encourages the community to look beyond pre-trained embeddings toward representation shaping as a core principle. > A2. Evaluation with other metrics Thank you for the suggestion! In addition to AUROC, we report AUPRC, F1 score, precision, and recall at the generation level, consistent with prior works. We include results on TruthfulQA with LLaMA-3.1-8b, where we found that **our method maintains a consistent advantage across all metrics**. | Method | AUROC | AUPRC| F1 | Precision (Pos) | Precision (Neg) | Recall | |-|--|-|-|-|-|-| | HaloScope | 70.2 | 56.2 |64.0 | 55.6| 78.2| 75.3| | SAPLMA | 78.2 | 63.3 |64.7 | 57.7| 85.8| 73.4| | **Ours** | **84.2** | **76.2** | **70.9** | **64.5**|**88.5**| **78.9** > A3-1. Robustness to pseudo-label noise Thank you for bringing this up! 
As requested, we include the robustness analysis of pseudo-label noise for the selected unlabeled data ($K=128$). Specifically, with the same selected dataset on TruthfulQA, we gradually add more noise to their pseudo labels by manually flipping some of the correct labels. The resulting relationship between the pseudo-label noise ratio and the hallucination detection AUROC on LLaMA-3.1-8b is shown as follows, where our method remains relatively robust under increasing noise conditions. | Pseudo-label Noise Ratio (%) |5 (no flipping, original) |10 |15 | 20 | 25| |-|-|-|-|-|-| | Hallucination Detection AUROC (%)| 84.2 |83.8| 82.6 | 82.2 | 81.3| > A3-2. Quality of exemplar set, and how does the content of exemplars impact the results? Another insightful point! To understand the quality, we randomly sample the exemplar set using different seeds and annotate them w.r.t. factuality. Manual inspection confirms that the exemplars are high-quality and cover diverse content. The TruthfulQA dataset contains 817 questions across 38 categories with an imbalanced distribution, making it a meaningful testbed for examining how content affects performance. To further verify, we construct exemplar sets on LLaMA-3.1-8b using three strategies, with imbalance ratio $\gamma$ defined as the ratio of the most to least frequent category counts. | Setting | AUROC | AUPRC| F1 | |-|-|-|-| | Uniform ($\gamma=1$) | 84.9 | 77.0 | 71.0 | | Random (Ours, $\gamma=5$) | 84.2 | 76.2 | 70.9 | | Biased ($\gamma=10$)| 83.7 | 75.0 | 69.3 | **Even in cases where the selected exemplars are biased toward a particular content, the test performance remains consistent.** This reinforces the robustness of our approach. We emphasize that our approach achieves strong performance with a very small labeled exemplar set (e.g., 32), **which makes manual curation and quality control highly feasible in practice**. 
Since the cost of labeling such a small set is minimal, our work offers a reliable and efficient solution for hallucination detection in real-world applications. >A4. Hyperparameters We emphasize that TSV requires minimal hyperparameter tuning, making it practical and easy to deploy. Key hyperparameters (e.g., steering strength) are **lightweight to tune** and we provide default settings (see Table 6 in Appendix) that **generalize well across all models and datasets**. In fact, all experiments in our paper use hyperparameters selected using the validation set of TruthfulQA with LLaMA-3.1-8b, and these settings are **applied uniformly without re-tuning to all experiments**, while still achieving state-of-the-art performance. Moreover, our ablation in Figure 3b demonstrates that TSV’s performance is **not overly sensitive** to the choice of hyperparameters. The robustness of TSV is further supported by our generalization results in Figure 4, which show consistent performance across diverse data distributions, even with fixed hyperparameters. This significantly reduces the burden of hyperparameter tuning in real-world use.
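One reviewer summary above describes the method as modeling the two clusters with von Mises-Fisher (vMF) distributions whose mean directions act as trainable vectors. Under a shared concentration $\kappa$, the class posterior for a unit-normalized representation reduces to a softmax over scaled cosine similarities with the class mean directions; a minimal sketch with illustrative vectors and $\kappa$:

```python
import math

def vmf_class_posterior(z, mus, kappa=10.0):
    """P(class | z) for a unit vector z under vMF components with shared
    concentration kappa: softmax over kappa * <z, mu_c>."""
    logits = [kappa * sum(zi * mi for zi, mi in zip(z, mu)) for mu in mus]
    m = max(logits)                      # stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

mu_true, mu_hall = [1.0, 0.0], [0.0, 1.0]
p = vmf_class_posterior([1.0, 0.0], [mu_true, mu_hall])
print(p)  # strongly favors the first ("truthful") component
```

The normalization constants of the vMF densities cancel when $\kappa$ is shared, which is why only the inner products with the mean directions matter here.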
Privacy-Preserving Federated Convex Optimization: Balancing Partial-Participation and Efficiency via Noise Cancellation
Accept (poster)
Summary: The paper proposes DP federated learning algorithms that allow partial participation based on the DP $\mu^2$ SGD algorithms from (Reshef & Levy, 2024). The objective is to minimize the average of the population risks across all clients. The authors consider both cases of trusted and untrusted servers and achieve optimal convergence rate and linear computation efficiency. The case of trusted server is just a straightforward extension to (Reshef & Levy, 2024) where the queries are updated on the server side. Meanwhile, in the case of an untrusted server, the authors propose a technique of noise cancellation such that the injected noise at the server will be i.i.d. and enjoy some concentration properties. Claims And Evidence: All the claims make sense to me and are supported by convincing proofs. Methods And Evaluation Criteria: There are no empirical demonstrations in the paper. It will be greatly appreciated if the authors can include some experiments and comparisons to other DP FL methods. Theoretical Claims: I took a quick look at the proofs but didn't go through them thoroughly. The proofs look reasonable and seem correct to me. Experimental Designs Or Analyses: See my reviews in the section "Methods And Evaluation Criteria". Supplementary Material: I checked the supplementary material for the proofs for the main theorem. Relation To Broader Scientific Literature: The contributions of this paper, especially compared with prior works, are not clear to me. The contribution of the trusted server case is trivial given the extension from (Reshef & Levy, 2024) is straightforward. For the case of non-trusted server, the authors mention in the paper that (Lowy & Razaviyayn, 2023) achieves the same rate but with $|S|^{3/2}$ computations. However, given the problem of LDP SCO is a well-studied area, I am wondering if there are any other related works (even in the centralized setting) that achieve the optimal rate and linear computational complexity. 
If so, how does this work compare with them? Essential References Not Discussed: Please see my comments in section "Relation To Broader Scientific Literature" Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive review and for your comments. We address all of your concerns and kindly ask you to update your score accordingly. --- **Q: Adding experiments** **A:** We have now added experiments that demonstrate the applicability of our approach and corroborate our theoretical guarantees (logistic regression on MNIST). Please see our response to Reviewer **kvrv**. --- **Q: Novelty over the full-participation paper [Reshef and Levy, 2024]** **A:** The **trusted server setting** is indeed quite simple — as explicitly mentioned in our paper. But this is **not the case** for the **untrusted-server setting**, which is our **main contribution**. We understand that our approach seems simple in hindsight. However: - In **Section 4.1** of our paper, we depict **two natural extensions** of [Reshef and Levy, 2024] that completely **fail to achieve optimal guarantees**. - As shown in our analysis in **Appendices D and E**, the combination of **partial participation** and **correlated noise**: - substantially complicates the analysis - introduces challenges well beyond those in [Reshef and Levy, 2024] - Moreover, our approach is: (i) the **first to achieve optimal generalization guarantees** in the partial-participation setting (ii) the **first to achieve linear computational complexity** in this setting - Finally, we believe that: (i) a **simple algorithm is an advantage**, not a disadvantage (ii) while our approach is simple in hindsight, it was **very challenging to develop and analyze this non-trivial idea** --- **Q: Other related works** **A:** Let us re-emphasize the relation to previous works: - For the case **where machines trust each other**: - The work of [Feldman et al. 2020] achieves **optimal guarantees** with **linear computational complexity**, but **only for the last iterate**. 
- Their technique **does not extend** to the case where machines do not trust each other, since they **only maintain privacy of the last iterate**, while all intermediate iterates are **non-private**. - Conversely, our approach works when machines **do not trust each other** (regardless of whether they trust the server), and still obtains optimal guarantees and complexity. - For the case **where machines do not trust each other**, beyond the work of [Lowy & Razaviyayn, 2023] that requires $O(|S|^{3/2})$ complexity, there is a recent work of [Gao et al. 2024] brought to our attention by Reviewer *kvrv*. However: - In the **full-participation** case, [Gao et al. 2024] achieves **optimal generalization** with a complexity of $O(|S|^{9/8})$, which improves over [Lowy & Razaviyayn, 2023], but is **still suboptimal** compared to our work. - In the **partial-participation** case, [Gao et al. 2024] achieves **suboptimal guarantees** (see below), and still requires **superlinear computation** in $|S|$. - Conversely, **we obtain optimal performance for the partial-participation case**, and enjoy **linear computational complexity** in $|S|$. **Elaborate Comparison to [Gao et al. 2024]** The work of [Gao et al. 2024] indeed provides guarantees for DP-FL learning. **Regarding guarantees:** - In the **full-participation** setting: - [Gao et al. 2024] achieves **optimal DP generalization** (i.e., population loss guarantees) - However, their approach requires **super-linear computational complexity**, i.e., $O(|S|^{9/8})$, where $|S|$ is the total number of samples used (see Eq. 
(7) in their paper) - Note that in their notation: $|S| = nN$, where: - $N$ = total number of machines - $n$ = number of samples per machine - **Comparably, in this setting** we: - achieve **optimal DP generalization** - require **only linear computational complexity** in $|S|$ - In our notation, $|S| = n$ denotes the **total number of datapoints used by all machines** during training - In the **partial-participation** setting: - [Gao et al. 2024] still uses $|S| = nN$ (even when only $M \leq N$ machines are used per round; see Alg. 1 and Alg. 2) - Theorem C.1 (Eq. (17)) in their paper shows a convergence rate of: $$ O\left(\frac{1}{\sqrt{Mn}} + \frac{\sqrt{d}}{\epsilon \sqrt{M}n}\right) $$ - Translating to total data $|S| = nN$, this becomes: $$ O\left(\frac{\sqrt{N/M}}{\sqrt{|S|}} + \frac{\sqrt{d}N/\sqrt{M}}{\epsilon |S|}\right) $$ which is **suboptimal** - Moreover, their approach still requires **superlinear computation** in $|S|$ - **Conversely, in our partial-participation setting**, we: - obtain **optimal guarantees** of: $$ O\left(\frac{1}{\sqrt{|S|}} + \frac{\sqrt{d}\sqrt{M}}{|S|}\right) $$ - achieve **linear computational complexity** in $|S|$ - Note: in our notation, $M$ = total number of machines We shall add this comparison to the paper.
Summary: The paper addresses the challenge of applying DP in FL when only a subset of clients is active per iteration. It mainly improves on a previous paper, 'Private and Federated Stochastic Convex Optimization: Efficient Strategies for Centralized Systems', in which all settings are the same but all clients are active. The authors argue that the previously proposed full-participation approach does not extend well to the partial-participation case. The main challenge is that partial participation accumulates noise in the global estimate. To overcome this, the authors propose a novel noise-cancellation mechanism. Specifically, they introduce a method where the noise added at each round is adjusted by subtracting the noise from the previous round (i.e., the injected noise is defined as $Y^{(t+1)} - Y^{(t)}$). This approach prevents cumulative noise growth, which can otherwise degrade model accuracy and convergence. Claims And Evidence: The claim is supported by rigorous theoretical proofs. Methods And Evaluation Criteria: While it makes sense to address the challenge of cumulative noise by introducing a noise-cancellation mechanism, this is not a technique first proposed here for federated convex optimization; it has been applied in other applications such as 'DPNCT: A Differential Private Noise Cancellation Scheme for Load Monitoring and Billing for Smart Meters'. Theoretical Claims: They appear correct, with explicit assumptions such as bounded heterogeneity, smoothness, etc. Experimental Designs Or Analyses: No, there is no experimental validation. Supplementary Material: No. Relation To Broader Scientific Literature: In terms of the broader scientific literature, the main idea of noise cancellation in differential privacy (DP) is not entirely new. Similar concepts have been applied in other domains, such as in "DPNCT: A Differential Private Noise Cancellation Scheme for Load Monitoring and Billing for Smart Meters."
Beyond the prior work "Private and Federated Stochastic Convex Optimization: Efficient Strategies for Centralized Systems," the primary contribution of this paper lies in extending the analysis from full client participation to partial client participation. However, this incremental advancement offers limited novelty, particularly given that the core technique, noise cancellation, has already been explored in other applications. Essential References Not Discussed: Noise cancellation in the DP-related literature is not discussed. E.g., 'DPNCT: A Differential Private Noise Cancellation Scheme for Load Monitoring and Billing for Smart Meters'. Other Strengths And Weaknesses: No experimental validation is provided in the paper. As a result, it is difficult to assess how the proposed approach enhances privacy in practical settings or how it performs compared to existing methods. Without empirical evidence, the practical effectiveness and advantages of the method remain unclear. Additionally, the paper offers limited novelty. The core idea of noise cancellation in differential privacy has been previously explored in other contexts, and the transition from full to partial client participation represents only an incremental extension of existing work. Other Comments Or Suggestions: I recommend conducting a more comprehensive review of the broader literature, particularly focusing on related work in FL and decentralized computation that addresses DP and noise design techniques. This would help contextualize the proposed approach within existing research and clarify its unique contributions. In addition, it is important to provide experimental validation to demonstrate the effectiveness of the proposed method. Comparative evaluations against established approaches in FL or decentralized learning would offer valuable insights into its practical benefits. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 1
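The telescoping effect at the heart of the noise-cancellation mechanism summarized in this review can be illustrated with a small simulation. This is a toy sketch of the general idea only, not the paper's algorithm or its $\mu^2$-SGD estimates: injecting $\xi_t = Y^{(t)} - Y^{(t-1)}$ at each round makes the cumulative injected noise collapse to $Y^{(T)} - Y^{(0)}$, instead of growing like $\sqrt{T}$.

```python
import math
import random

random.seed(0)

T = 10_000       # number of rounds
sigma = 1.0      # per-round noise scale

# Fresh Gaussian noise drawn once per round.
Y = [random.gauss(0.0, sigma) for _ in range(T + 1)]

# Naive scheme: inject independent noise Y_t at every round t.
# The cumulative injected noise is a random walk of typical size sigma*sqrt(T).
indep_sum = sum(Y[1:])

# Noise-cancellation scheme: inject xi_t = Y_t - Y_{t-1} at round t.
# The cumulative injected noise telescopes to Y_T - Y_0, whatever T is.
cancel_sum = sum(Y[t] - Y[t - 1] for t in range(1, T + 1))

assert abs(cancel_sum - (Y[T] - Y[0])) < 1e-9  # exact telescoping identity

print(f"cumulative independent noise: {indep_sum:+.2f}")
print(f"cumulative cancelled noise  : {cancel_sum:+.2f}")
print(f"sqrt(T) for reference       : {math.sqrt(T):.1f}")
```

The independent sum has standard deviation $\sigma\sqrt{T}$, while the cancelled sum is distributed as $N(0, 2\sigma^2)$ regardless of $T$, which is the "prevents cumulative noise growth" property the summary describes.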
Rebuttal 1: Rebuttal: Thank you for the review. We address all of your concerns and kindly ask you to increase your score accordingly. --- **Q: Novelty over the full-participation paper [Reshef and Levy, 2024]** **A:** We understand that our approach seems simple in hindsight. However, let us highlight that coming up with this idea and analyzing it was *not trivial at all*. Concretely: - In **Section 4.1** of our paper, we depict **two natural extensions** of [Reshef and Levy, 2024] that completely fail to achieve optimal guarantees. - As shown in our analysis in **Appendices D and E**, the incorporation of **partial participation** and **correlated noise**: - substantially complicates the analysis - introduces challenges far beyond those in [Reshef and Levy, 2024] - Moreover, our approach is: - the **first to achieve optimal generalization guarantees** in the partial-participation setting - the **first to achieve linear computational complexity** in this setting - Finally, we believe that: - a **simple algorithm is an advantage** rather than a disadvantage - while our approach is simple in hindsight, it was **very challenging to develop and analyze this non-trivial algorithmic idea** --- **Q: Novelty over previous work with noise-cancellation, [DPNCT: A Differential Private Noise Cancellation Scheme for Load Monitoring and Billing for Smart Meters]** **A:** Please note again that our work is the **first approach to achieve optimal guarantees for DP-FL with partial participation**. The work you referenced indeed discusses noise cancellation in the context of privacy, and we will gladly cite it in the final version of the paper. However, that paper: - does **not consider machine learning training!** - does **not provide any guarantees!** — not even privacy guarantees! 
Conversely, our work: - discusses **machine learning**, specifically **DP training** in the **federated partial-participation** setting - provides **optimal guarantees** for: - population risk - computational complexity - differential privacy - proposes the use of **noise-cancellation in conjunction with the $\mu^2$-SGD approach**, which is crucial to achieving our guarantees (using noise-cancellation with standard SGD estimates would **fail** to provide optimal guarantees) --- **Q: Adding experiments** **A:** We have now added experiments that demonstrate the applicability of our approach and corroborate our theoretical guarantees (Logistic regression over MNIST). Please see our response to Reviewer **kvrv**.
Summary: This paper proposes a differentially private algorithm for federated learning with an untrusted server and partial participation. The approach is an extension of the previous work "Private and Federated Stochastic Convex Optimization: Efficient Strategies for Centralized Systems", Roie Reshef, Kfir Y. Levy, ICML 2024, which requires full participation. Claims And Evidence: The claims are supported by the proofs. Methods And Evaluation Criteria: The approach is only theoretical, and proofs are included, so the methodology is correct. Theoretical Claims: I was not able to check the soundness of the proofs. Experimental Designs Or Analyses: This is a theoretical paper. Supplementary Material: I read the beginning of the appendices. Relation To Broader Scientific Literature: It seems coherent. Essential References Not Discussed: none Other Strengths And Weaknesses: The main weakness of this paper is its significant similarity with the previous paper of ICML 2024. The extension to partial participation does not seem very challenging, and it is not clear that this should deserve a full ICML paper. The overlap is a bit concerning. For instance, pages 12 to the beginning of 16 are nearly identical in both papers, without clear mention of it. It would ease the reader's understanding to clearly present what is new and what is inherited from previous work. Other Comments Or Suggestions: none Questions For Authors: Please comment on the novelty with respect to the ICML 2024 paper. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the review. We address all of your concerns and kindly ask you to increase your score accordingly. Regarding your comment: --- **Q: Novelty over full-participation paper [Reshef and Levy, 2024]** **A:** We understand that our approach seems simple in hindsight. However, let us highlight that coming up with this idea and analyzing it was *not trivial at all*. Concretely, - In **Section 4.1** of our paper, we depict **two natural extensions** of [Reshef and Levy, 2024] that **completely fail to achieve optimal guarantees**! - Moreover, as you can see in our analysis appearing in **Appendices D and E**, the partial participation and correlated noise: - substantially complicate the analysis - are substantially more challenging compared to the analysis of [Reshef and Levy, 2024] - In pages 12–16 of the appendix we do have some overlap with [Reshef and Levy, 2024], but these appendices are provided **for completeness**. Actually, [Reshef and Levy, 2024] themselves also provide this for completeness, and this part is **not a novelty of their work**. We shall highlight this in the final version of the paper. - Moreover, our approach is: - the **first to achieve optimal generalization guarantees** in the partial-participation setting - the **first to achieve linear computational complexity** in this setting - Finally, we believe that: - a **simple algorithm is an advantage rather than a disadvantage** - while our approach is simple in hindsight, it was **very challenging to come up with this non-trivial algorithmic idea and analyze it**
Summary: The paper addresses the challenge of ensuring Differential Privacy (DP) in Federated Learning (FL) under partial participation, where only a subset of devices engage in each training round. Existing approaches struggle to extend DP guarantees from full-participation settings to practical FL scenarios with inconsistent availability. The authors propose a novel noise-cancellation mechanism that preserves privacy without degrading convergence rates or computational efficiency. Their method, analyzed in the SCO framework, achieves optimal performance for both homogeneous and heterogeneous data distributions. The results provide a scalable and practical solution for privacy-preserving FL, particularly in large-scale environments. Claims And Evidence: I think the main theoretical claims are well-supported by the proofs. Methods And Evaluation Criteria: Due to the theoretical nature of this paper, there are no experiments. The authors did compare their bounds with previous works. Theoretical Claims: I have mainly checked the privacy and convergences of Theorem 5.1 and Theorem 5.2, which appear to be correct to me. Experimental Designs Or Analyses: NA. Supplementary Material: I have checked the proofs of Theorems 5.1 and 5.2. Relation To Broader Scientific Literature: FL with formal DP guarantees is important, and improving the computational complexity is also meaningful. Essential References Not Discussed: I think the authors may consider discussing the improvement over the following paper [1], which seems to improve the computation complexity (as well as communication complexity) compared to (Lowy & Razaviyayn,2023) [1] Gao, C., Lowy, A., Zhou, X., & Wright, S. J. (2024). Private heterogeneous federated learning without a trusted server revisited: Error-optimal and communication-efficient algorithms for convex losses. arXiv preprint arXiv:2407.09690. 
Other Strengths And Weaknesses: Strengths: + A nice extension of the previous framework to handle the partial-participation scenario, which somehow demonstrates the power of the $\mu^2$-SGD approach. + The paper is also well-written and easy to follow. Weaknesses: - Maybe some experiments could be added, though I understand that this is mainly a theory paper. Other Comments Or Suggestions: It seems that the notation $|\mathcal{S}|$ is not defined in the related work section, though it can be inferred from the context. Questions For Authors: I have two main comments. 1. It seems that in the lower bound sections, the authors mainly use the results for empirical loss, while later upper bounds are for population loss. I understand one can do a quick reduction from the lower bound for the empirical one to the population one. It would be better to explicitly say it to avoid any confusion. 2. Another comment is about the relationship between the noise cancellation in this paper and the one in the standard binary-tree mechanism (see [1]). This is mainly from my curiosity. Are they related or simply not related at all? [1] Koloskova, Anastasiia, et al. "Gradient descent with linearly correlated noise: Theory and applications to differential privacy." Advances in Neural Information Processing Systems 36 (2023): 35761-35773. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive review and for your comments. We address all of your comments and kindly ask you to update your score accordingly. Regarding your comments: --- **Q: Comparison to [Gao et al. 2024]** **A:** The work of [Gao et al. 2024] indeed provides guarantees for DP-FL learning. **Regarding guarantees:** - In the **Full-Participation** setting, [Gao et al. 2024] enjoy *optimal DP generalization* (i.e., population loss guarantees), while requiring a computational complexity which is super-linear in the size of the dataset, i.e., $O(|S|^{9/8})$, where we define $|S|$ to be the overall number of samples used (see Eq. (7) therein). Note that in the notation of [Gao et al. 2024], $|S| = nN$, where they use $N$ to denote the total number of machines, and $n$ to be the size of the data on each machine. - Comparably, **in this setting we achieve optimal DP generalization while requiring only linear computational complexity in** $|S|$. Note that in the notation of our paper, $|S| = n$, since we denote $n$ as the total number of datapoints used by all machines throughout the training process. - In the **Partial-Participation** setting, [Gao et al. 2024] obtain *sub-optimal* DP generalization guarantees. Concretely, the total data used in their work is still $|S| = nN$, where $N$ is the total number of machines and $n$ is the number of data points on each machine. This applies even when they are using $M \leq N$ machines in every round (see Alg. 1 and Alg. 2 in their paper). The bound they establish in Theorem C.1 (Eq. (17)) shows a convergence rate of: $$O\left(\frac{1}{\sqrt{Mn}} + \frac{\sqrt{d}}{\epsilon \sqrt{M}n}\right).$$ Translating this into the size of the dataset $|S| = nN$ implies a bound of: $$O\left(\frac{\sqrt{N/M}}{\sqrt{|S|}} + \frac{\sqrt{d}N/\sqrt{M}}{\epsilon |S|}\right),$$ which is suboptimal. Moreover, their approach requires a computation which is superlinear in $|S|$. 
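For completeness, the substitution behind this translation (their $|S| = nN$, hence $n = |S|/N$) is:

```latex
\frac{1}{\sqrt{Mn}}
  = \frac{1}{\sqrt{M\,|S|/N}}
  = \frac{\sqrt{N/M}}{\sqrt{|S|}},
\qquad
\frac{\sqrt{d}}{\epsilon\,\sqrt{M}\,n}
  = \frac{\sqrt{d}\,N}{\epsilon\,\sqrt{M}\,|S|}
  = \frac{\sqrt{d}\,N/\sqrt{M}}{\epsilon\,|S|}.
```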
Conversely, in the partial participation case **we obtain optimal guarantees** of: $$O\left(\frac{1}{\sqrt{|S|}} + \frac{\sqrt{d}\sqrt{M}}{|S|}\right),$$ and our computational complexity scales **linearly** with $|S|$. Note that in our notation $M$ is the total number of machines. We shall add this comparison to the paper. --- **Q: Stating overall lower bounds, w.r.t. Population loss** **A:** Thank you for this suggestion, we will do so. --- **Q: Relation of our noise-cancellation mechanism to [Koloskova et al. 2023]** **A:** [Koloskova et al. 2023] discusses the idea of adding general correlated noise patterns during DP training. But they: - only discuss ERM problems - suggest adding correlated noise to FTRL or SGD which use standard gradient estimates - do not provide explicit guarantees, and do not discuss partial-participation Conversely in our work we: - discuss and provide population-loss guarantees - suggest adding noise-cancellation (and therefore correlated noise) to $\mu^2$-SGD estimates, which is crucial to our guarantees - provide explicit bounds for partial-participation We shall add this discussion to our work. --- **Q: Experiments.** **A:** We performed experiments that illustrate the benefit of our approach and corroborate our theoretical findings. See below. **Experimental Results (MNIST, Logistic Regression)** We compare our method (“Our Work”) to Noisy SGD [DL DP] and [Lowy & Razaviyayn, 2023]. All use n=60,000 samples. In Our Work and Noisy SGD, each sample is used only once (single pass). The low accuracy (<70%) is due to privacy noise and single-pass training. 
**Privacy-Level Comparison** (fixed m=50, M=100):

| ρ | Our Work | Time | Noisy SGD | Time | [Lowy & Razaviyayn, 2023] | Time |
|-----|----------|------|-----------|------|---------------------------|------|
| 4 | 53.8% | 13s | 45.1% | 9s | 47.6% | 64s |
| 8 | 63.7% | 13s | 58.9% | 9s | 63.3% | 282s |
| 12 | 66.5% | 13s | 63.7% | 9s | 66.7% | 730s |

**Participation-Level Comparison** (fixed ρ=8, M=100):

| m | Our Work | Time | Noisy SGD | Time | [Lowy & Razaviyayn, 2023] | Time |
|----|----------|------|-----------|------|---------------------------|------|
| 20 | 60.8% | 13s | 54.9% | 9s | 59.7% | 114s |
| 50 | 63.7% | 13s | 58.9% | 9s | 63.3% | 282s |
| 80 | 63.8% | 13s | 57.0% | 9s | 65.8% | 452s |

**Key Takeaways:** - For a strong DP requirement (low $\rho$) and partial participation (low $m$), we consistently achieve better performance with runtimes comparable to noisy-SGD. The runtime of [Lowy & Razaviyayn] is substantially higher! And in this regime they obtain substantially worse performance. - For a weak DP requirement (high $\rho$) and when approaching full participation (high $m$), the accuracy of [Lowy & Razaviyayn] improves, but the runtime is very high.
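To make the baseline concrete, here is a minimal sketch of a single-pass noisy-SGD loop of the kind compared above, run on synthetic logistic-regression data rather than MNIST. The data generator, clipping threshold `C`, noise multiplier `sigma`, and step size `lr` are illustrative placeholders, not the settings used in the experiments:

```python
import math
import random

random.seed(0)
d, n = 5, 2000                # dimension, dataset size (single pass: one step per sample)
C, sigma, lr = 1.0, 0.5, 0.2  # clip norm, noise multiplier, step size -- placeholders

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic data: label = sign(<w_star, x> + noise), encoded as 0/1.
w_star = [1.0] * d
data = []
for _ in range(n):
    x = [random.gauss(0.0, 1.0) for _ in range(d)]
    y = 1 if sum(a * b for a, b in zip(w_star, x)) + random.gauss(0.0, 0.5) > 0 else 0
    data.append((x, y))

w = [0.0] * d
for x, y in data:  # single pass: each sample is used exactly once
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    g = [(p - y) * xi for xi in x]                    # per-sample logistic-loss gradient
    norm = math.sqrt(sum(gi * gi for gi in g))
    scale = min(1.0, C / max(norm, 1e-12))            # clip to norm at most C
    g = [gi * scale + random.gauss(0.0, sigma * C) for gi in g]  # add Gaussian privacy noise
    w = [wi - lr * gi for wi, gi in zip(w, g)]

acc = sum((sigmoid(sum(wi * xi for wi, xi in zip(w, x))) > 0.5) == (y == 1)
          for x, y in data) / n
print(f"train accuracy after one noisy pass: {acc:.2f}")
```

Clipping bounds the per-sample sensitivity before noise is added, and each sample is visited exactly once, matching the single-pass protocol described in the rebuttal.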
MM-RLHF: The Next Step Forward in Multimodal LLM Alignment
Accept (poster)
Summary: - The paper introduces the MM-RLHF dataset containing 120k preference pairs aimed at improving MLLMs at image/video understanding and safety. They employ a human-assisted pipeline to ensure the high quality of the dataset annotations, with available MLLMs generating initial responses. - The authors also introduce a critique-based reward modeling framework that incorporates a critique head into a standard reward model architecture with a score head, enabling interpretability of the reward scores. - They also introduce a Dynamic Reward Scaling technique that integrates sample-wise weighting into DPO training to increase the effect of pairs with wide reward margins. - The authors introduce two benchmarks: MM-RLHF-RewardBench (sampled from MM-RLHF) to evaluate reward models and MM-RLHF-SafetyBench (sampled from existing benchmarks) to evaluate performance on safety-related tasks. Claims And Evidence: - The authors claim that MM-RLHF improves performance, which is visible in the reported scores, but there are a few issues with the reported setup: - I have no clue what training their MM-RLHF-Reward model looks like: was any pretraining used? What's the base MLLM? How long did the training take? Was MM-DPO used or just DPO? - They also do not finetune the LLaVA-Critic model on their dataset, which is an important baseline number. - It is expected that their model would perform the best (Tab. 2) on their MM-RLHF-RewardBench since the benchmark is sampled from their model's training set! - The authors claim that MM-DPO is useful, but I cannot find any evidence of that in the main text. I also cannot find a comparison to commonly used alignment techniques, including but not limited to DPO and SimPO. Similarly, the authors do not compare MM-DPO to DPO head-to-head, either on their MM-RLHF or on any other dataset (for example, LLaVA-Critic's data). ## update after rebuttal I am raising my score to accept **CONTINGENT** on the author's promise to update the main text.
In its current stage, the main text is sub-optimal for understanding the method without the appendix. Methods And Evaluation Criteria: - Yes, the methods and benchmark used are appropriate. Theoretical Claims: - No theoretical claims made in the paper. Experimental Designs Or Analyses: - The data annotation pipeline looks valid. - The experiment comparisons are a bit unfair; as mentioned before, there's no comparison of MM-DPO to DPO/SimPO and using any other training set except MM-RLHF. This would not be required if the authors' only contribution was the MM-RLHF dataset, but the model/MM-DPO discussed as main contributions covering ~2 pages of main text makes it necessary. Supplementary Material: - No supp material provided. Relation To Broader Scientific Literature: - Relevant to the community with their findings that critique-based reward modeling improves performance and it would be useful if the authors release their dataset for the community too. Essential References Not Discussed: - The authors do not include the related work section in the main text and only discuss those in the appendix for >1 page. It is crucial to include related works in the main text. The authors should think about this. No article should be published unless there’s a rel works section in the main text. Other Strengths And Weaknesses: - The authors should consider restructuring their paper, and not putting crucial experiments and sections in the appendix instead of the main text because of the space issues. I am specifically annoyed by the absence of rel works and clear ablations in the main text. It is also unfair to report all important details in the appendix. - The results are strong. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
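To make the Dynamic Reward Scaling idea from the summary concrete: below is a scalar sketch of a DPO loss whose effective $\beta$ is scaled per sample by the reward-model margin. The scaling function $\beta(1 + k\tanh(\text{margin}))$ and its parameters are my illustrative guesses, not the paper's exact formulation:

```python
import math

def _logsigmoid(z):
    # Numerically stable log(sigmoid(z)).
    return -math.log1p(math.exp(-z)) if z >= 0 else z - math.log1p(math.exp(z))

def dpo_loss(logp_w, logp_l, ref_w, ref_l, beta=0.1):
    """Vanilla DPO loss for one preference pair (scalar sequence log-probs)."""
    return -_logsigmoid(beta * ((logp_w - ref_w) - (logp_l - ref_l)))

def mm_dpo_loss(logp_w, logp_l, ref_w, ref_l, reward_margin, beta=0.1, k=1.0):
    """DPO with a sample-wise effective beta that grows with the reward-model
    margin, so confidently separated pairs exert more influence per step.
    The form beta * (1 + k * tanh(margin)) is an illustrative guess."""
    scaled_beta = beta * (1.0 + k * math.tanh(max(reward_margin, 0.0)))
    return -_logsigmoid(scaled_beta * ((logp_w - ref_w) - (logp_l - ref_l)))

# Same preference pair, evaluated with a small and a large reward margin:
base = dpo_loss(-5.0, -7.0, -6.0, -6.0)
low  = mm_dpo_loss(-5.0, -7.0, -6.0, -6.0, reward_margin=0.1)
high = mm_dpo_loss(-5.0, -7.0, -6.0, -6.0, reward_margin=3.0)
print(f"vanilla: {base:.3f}  small margin: {low:.3f}  large margin: {high:.3f}")
```

For a pair the policy already orders correctly, a larger reward margin yields a larger effective $\beta$, i.e. a stronger per-pair training signal, which is the "increase the effect of pairs with wide reward margins" behavior the summary describes.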
Rebuttal 1: Rebuttal: **Concern 1: MM-RLHF-Reward Model Training Details** We sincerely apologize for any lack of clarity regarding our reward model training process. Here are the key details: 1. Base Model: We initialized our reward model from LLaVA-OV-7B, following common practice in both LLM and MLLM research where reward models are typically derived from capable base models. 2. Training Setup: - Hardware: 32× A800 GPUs - Training Time: 8 hours - Dataset Size: 120K samples 3. Training Objective: The reward model was trained exclusively using: - The critique loss (Eq. 2 in our paper) - Standard binary classification loss for reward modeling (Eq. 3 in our paper) - Neither DPO nor MM-DPO was involved in reward model training We would be happy to clarify any additional aspects of this process that may need further explanation. ---- **Concern 2: LLaVA-Critic Fine-tuning Baseline** We appreciate this suggestion and have conducted additional experiments: 1. **Data Adaptation**: We reformatted our MM-RLHF data to match LLaVA-Critic's input requirements (pairwise responses with GPT-4o evaluations). 2. **Training Variants**: - Basic LLaVA-Critic-MM-RLHF (direct GPT-4o critiques) - Enhanced version with human-annotated critique expansions The final results are as follows. We observe that fine-tuning LLaVA-Critic with our data yields a significant improvement, whereas directly using GPT-4o’s critic as a training objective shows limited gains. Even with the expansion of generated critiques based on human annotations, the performance does not surpass that of GPT-4o, though it does serve as a strong baseline. Additionally, we found that this training strategy is highly dependent on the model’s instruction-following capability, and at times, it fails to produce the expected comparative results during evaluation, requiring complex regularization matching. 
| Method | MCQ | Long | Short | Safety | Video | Overall | |--------|-----|------|-------|--------|-------|---------| | LLaVA-OV-7B | 0.14 | 0.11 | 0.29 | 0.41 | 0.32 | 0.24 | | LLaVA-Critic (Pairwise) | 0.23 | 0.54 | 0.24 | 0.28 | 0.52 | 0.35 | | LLaVA-Critic-MM-RLHF | 0.55 | 0.85 | 0.56 | 0.60 | 0.75 | 0.65 | | +Enhanced Annotations | 0.65 | 0.90 | 0.58 | 0.61 | 0.85 | 0.72 | | GPT-4o | 0.69 | 0.95 | 0.56 | **0.72** | 0.80 | 0.74 | | MM-RLHF-Reward | **0.93** | **1.00** | **0.71** | 0.66 | **0.92** | **0.85** | ---- **Concern 3: It is expected that their model would perform (Tab. 2) the best on their MM-RLHF-RewardBench since the benchmark is sampled from their model’s training set!** First, the data used in MM-RLHF-RewardBench and our model’s training set do not overlap. As a standard practice in deep learning, we ensure by default that the training and test datasets are kept separate. To further clarify this, we will explicitly mention this in the main text. To further prevent overfitting to the MM-RLHF dataset, we also evaluated our model on VL Reward Bench, as shown in Table 3. In this comparison, MM-RLHF-Reward-7B performs similarly to Claude-3.5-Sonnet, significantly outperforming the LLaVA-OV-7B baseline. This demonstrates the strong generalization capability of our critic-reward model. Finally, in Table 2, when directly fitting the model to our training data, the results on the test samples are not as strong, at best matching GPT-4o. What we want to emphasize is the potential of the critic-based reward model; with optimal critic generation, the reward model could achieve up to 93% average accuracy. ---- **Concern 4: The authors claim that the MM-DPO is useful but I cannot find any evidence of that in the main text.** Actually, we compare DPO with MM-DPO in Figures 1 and 11, where MM-RLHF refers to training with the MM-RLHF dataset using DPO loss, not SFT on high-rated examples. 
For a more detailed answer to the reviewer’s question, please refer to this link (https://anonymous.4open.science/r/mm-rlhf-rebuttal-BE17). We compared our approach to various baselines, including LLaVA-Critic, beta-DPO, SIMPO, and MPO, and found that existing methods showed limited gains on our high-quality preference dataset. Additionally, we compared DPO training results across multiple datasets (e.g., VL Feedback, RLAIF, LLaVA-Critic, MPO-Data), and the results demonstrated that MM-RLHF provides more comprehensive and significant performance improvements compared to existing datasets. ---- **Concern 5 The authors should consider restructuring their paper.** Given the large amount of content, it's challenging to fit everything within the limited page count. We have decided to follow the reviewer’s advice. First, we will condense the related work section, highlighting key content in the main text and providing comparisons. Second, we will include important experimental settings—such as baselines, model implementation details, and computational overhead—in the main text, while moving less critical experiments to the appendix.
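As a side note for readers, the "standard binary classification loss for reward modeling" cited above (Eq. 3 in the paper) is conventionally the Bradley-Terry pairwise objective over the score head's outputs. A minimal, numerically stable sketch of that standard form (my illustration, not the authors' exact Eq. 3):

```python
import math

def pairwise_reward_loss(r_chosen, r_rejected):
    """Bradley-Terry objective: -log sigmoid(r_chosen - r_rejected).
    Minimized as the score head ranks the chosen response far above the
    rejected one; equals log(2) when the two scores tie."""
    margin = r_chosen - r_rejected
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# The loss shrinks monotonically as the score margin grows:
assert pairwise_reward_loss(2.0, 0.0) < pairwise_reward_loss(1.0, 0.0)
assert abs(pairwise_reward_loss(0.0, 0.0) - math.log(2.0)) < 1e-12
print("ok")
```

In the critique-based setup described above, this pairwise term would be trained jointly with the critique loss (Eq. 2), so the score head learns rankings while the critique head keeps the scores interpretable.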
Summary: This paper introduces MM-RLHF, a multimodal alignment pipeline combining a large preference dataset, a critique-based reward model, and MM-DPO, an enhanced DPO algorithm with dynamic reward scaling. The proposed approach is evaluated on 10 tasks across 20+ benchmarks, showing consistent gains in conversational ability, safety, hallucination control, and reasoning. Claims And Evidence: The paper claims that the proposed critique-based reward modeling and dynamic reward scaling (MM-DPO) substantially improve multimodal large language model (MLLM) alignment. However, some of the evidence is incomplete and partially overstated. Most observed gains appear to stem from better data quality and more human annotations rather than from the proposed techniques themselves. Methods And Evaluation Criteria: The overall pipeline (data curation, reward modeling, preference optimization) is reasonable for multimodal alignment. However, the evaluation framework is heavily dependent on a custom-built benchmark (MM-RLHF-RewardBench), which is derived from the same data sources used in training, raising serious concerns about train-test leakage. Additionally, the paper evaluates only MM-DPO, without comparing it to standard DPO, making it unclear whether dynamic reward scaling is necessary. Theoretical Claims: No significant theoretical claims requiring proof verification. Experimental Designs Or Analyses: - Critical Baseline Missing: There is no direct comparison between MM-DPO and standard DPO, making it impossible to quantify the benefit of dynamic reward scaling itself. - Reward Model vs Base Model Gap: The reward model (MM-RLHF-Reward-7B) is separately trained and may have different capability and bias compared to the base MLLMs, raising concerns about preference mismatch during alignment. 
- Modest Gains: Reported improvements over SFT baselines are relatively small, especially on tasks like mathematical reasoning and video understanding, calling into question the necessity of the proposed techniques. Supplementary Material: Reviewed Appendix B (annotation standards), Appendix C (safety data process), and Appendix G (additional results). These sections provide useful context and support key claims. Relation To Broader Scientific Literature: The work fits into ongoing efforts around RLHF for multimodal models, especially building on: Direct Preference Optimization (DPO); Critique-based reward modeling; Multimodal alignment datasets (LLaVA, VLFeedback) It extends these ideas with dynamic reward scaling and a more structured critique pipeline tailored to MLLMs. Essential References Not Discussed: One possible omission is more explicit discussion of: - Early text-based RLHF pipelines (e.g., InstructGPT) - Broader literature on safety alignment for vision-language models, especially recent work on adversarial robustness and hallucination mitigation. These are minor and do not undermine the paper’s contributions. Other Strengths And Weaknesses: ### Strengths - Introducing critique-based reward modeling is interesting for improving transparency. - Evaluation spans diverse multimodal capabilities, covering hallucination, reasoning, and safety. ### Weaknesses - Gains are incremental, with no evidence the proposed techniques are strictly necessary. - Evaluation benchmarks are custom and potentially biased, reducing credibility. - Lack of analysis on failure cases, especially in safety-critical scenarios. Other Comments Or Suggestions: No additional comments beyond the points discussed above. Questions For Authors: No more questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Concern 1: Critical Baseline Missing: There is no direct comparison between MM-DPO and standard DPO, making it impossible to quantify the benefit of dynamic reward scaling itself.** Actually, we compare DPO with MM-DPO in Figures 1 and 11, where MM-RLHF refers to training with the MM-RLHF dataset using DPO loss, not SFT on high-rated examples. For a more detailed answer to the reviewer’s question, please refer to this link (https://anonymous.4open.science/r/mm-rlhf-rebuttal-BE17). We compared our approach to various baselines, including LLaVA-Critic, beta-DPO, SIMPO, and MPO, and found that existing methods showed limited gains on our high-quality preference dataset. Additionally, we compared DPO training results across multiple datasets (e.g., VL Feedback, RLAIF, LLaVA-Critic, MPO-Data), and the results demonstrated that MM-RLHF provides more comprehensive and significant performance improvements compared to existing datasets. ---- **Concern 2 Reward Model vs Base Model Gap:** First, using an untrained model directly as a reward model performs poorly (see LLaVA-OV-7B in Tables 2 and 3) and is practically unusable for our training objectives. This necessitates training a dedicated reward model rather than relying on raw MLLMs. Numerous studies have explored specialized reward model training rather than naively repurposing MLLMs as reward models [1,2,3]. Furthermore, to mitigate overfitting to the MM-RLHF dataset, we evaluated our model on an independent reward model benchmark (Table 3). Results show that MM-RLHF-Reward-7B performs comparably to Claude-3.5-Sonnet on the VL Reward benchmark and significantly outperforms the LLaVA-OV-7B baseline. This demonstrates strong generalization of our critic-reward model across diverse datasets. 
[1] Aligning large multimodal models with factually augmented RLHF [2] LLaVA-Critic: Learning to evaluate multimodal models [3] Self-generated critiques boost reward modeling for language models ---- **Concern 3: Modest Gains: Reported improvements over SFT baselines are relatively small** We emphasize that "MM-RLHF" specifically refers to training with the MM-RLHF dataset using DPO loss; we did not include an SFT-only baseline. Please refer to this link (https://anonymous.4open.science/r/mm-rlhf-rebuttal-BE17) for General Response 2 (Comparison of SFT Baselines and DPO Sample Selection Strategies). The results show that preference-based training (DPO/MM-DPO) is essential for robust performance, and that MM-DPO further outperforms standard DPO, highlighting its effectiveness. ---- **Concern 4: Contextualization with Early RLHF Pipelines and Safety Alignment** We appreciate the reviewer’s constructive feedback on relating our work to broader literature. 1. Early Text-Based RLHF (e.g., InstructGPT): A key limitation of traditional RLHF methods lies in their sensitivity to hyperparameters and dependence on base model capabilities. For instance, when testing PPO with LLaVA-OV-7B (actor) and our MM-RLHF-Reward-7B (critic), we observed only marginal improvements in dialogue tasks, alongside the need for meticulous tuning to avoid training instability. In contrast, MM-RLHF’s high-quality responses (e.g., from Qwen2-VL-72B) make DPO-based methods more intuitive and stable for achieving consistent gains. 2. Safety Alignment in Vision-Language Models: While prior work [1] focuses on trade-offs between safety and model capability (often at the cost of performance), our pipeline demonstrates that proper data construction can simultaneously enhance both safety and general capabilities. 
[1] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models ---- **Concern 5: Evaluation Benchmark Credibility** We acknowledge concerns about potential bias in our custom benchmarks (MM-RLHF-RewardBench and MM-RLHF-SafetyBench). To address this: 1. Data Leakage Prevention: Strict separation between training and test data was enforced. 2. Reward Model Generalization: As shown in Table 3, MM-RLHF-Reward-7B matches Claude-3.5-Sonnet on the VL Reward benchmark and vastly outperforms LLaVA-OV-7B, confirming cross-dataset robustness. 3. SafetyBench Provenance: MM-RLHF-SafetyBench was curated from existing benchmarks with no overlap with training data. 4. General Knowledge Evaluation: We tested on diverse, independent benchmarks (e.g., LLaVA-Wild, OCRBench) to ensure reliability. We welcome further discussion on benchmark design if needed. ---- **Concern 6: Failure Case Analysis in Safety-Critical Scenarios** We agree that safety failure analysis is crucial. While visual examples cannot be included here, please refer to (https://anonymous.4open.science/r/mm-rlhf-rebuttal-BE17) Reviewer xZJU: Analysis on failure cases, especially in safety-critical scenarios for a detailed failure case.
Summary: This paper introduces MM-RLHF, an approach for aligning multimodal large language models (MLLMs) with human preferences using thousands of human-annotated preference pairs and ratings. It is shown that training on the MM-RLHF dataset, followed by DPO on the preference pairs, can make the model safer. In summary, the three main contributions are: 1. A large-scale, high-quality dataset containing 120K human-annotated preference comparison pairs across image understanding, video understanding, and MLLM safety domains. 2. A Critique-Based Reward Model that generates detailed critiques before assigning scores, enhancing interpretability. 3. Dynamic Reward Scaling, which adjusts the loss weight during training based on reward margins to optimize the use of high-quality comparison pairs. The authors evaluate their approach across 10 dimensions encompassing 27 benchmarks, demonstrating significant improvements in visual perception, reasoning, dialogue, and trustworthiness. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. The rationale for the MM-RLHF dataset construction is well justified. The evaluation also makes sense and covers a lot of areas. Theoretical Claims: Yes. There are no significant theoretical contributions in this paper. The math notations and loss functions in Sections 3 and 4 are used to explain the basic reward model training and DPO algorithms, which previous papers have already covered. Experimental Designs Or Analyses: Yes. The authors evaluate their approach across 10 dimensions encompassing 27 benchmarks, demonstrating significant improvements in visual perception, reasoning, dialogue, and trustworthiness. Supplementary Material: Yes. I checked the examples of MM-RLHF in the appendix, as well as the annotation guidelines the authors designed. Relation To Broader Scientific Literature: 1. 
The paper positions itself within the broader scientific literature on MLLM alignment, identifying that most current MLLMs undergo only supervised fine-tuning without comprehensive alignment. 2. They attribute this to the lack of high-quality human-annotated datasets, and thus present MM-RLHF and show that it is effective for aligning the model with human preferences. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper presents a large-scale, high-quality dataset containing 120K human-annotated preference comparison pairs across image understanding, video understanding, and MLLM safety domains. 2. The finding that small-scale MLLMs struggle with self-improvement is an important insight that challenges some existing assumptions. 3. The paper's thorough evaluation across diverse metrics is commendable and provides a more holistic view of model alignment. Weaknesses: 1. The improvement in some tasks (e.g., high-resolution benchmarks) is limited, and the paper acknowledges this limitation but doesn't provide strong solutions. 2. The Dynamic Reward Scaling technique, while effective, is a relatively incremental advance over existing DPO methods. Similar ideas have already been proposed for reward model training [1]. 3. This paper does not compare itself with, or discuss, some previous RLHF methods or datasets for multimodal LLM training, like MIA-DPO, PPLLaVA, LLaVA-OneVision-Chat, InternVL2.5-MPO, etc. Although they are mentioned in the related work, they do not appear in the baseline comparisons. Other Comments Or Suggestions: 1. The figures showing performance improvements would benefit from error bars or statistical significance indicators. 2. The proposed MM-DPO requires training on all possible pairs in the dataset, which can significantly increase the dataset size. Whether this is necessary should be discussed as an ablation study. Questions For Authors: 1. 
In Figure 1, what does "MM-RLHF" refer to: SFT on the high-rated examples of the MM-RLHF dataset, or just the MM-RLHF reward model's results? The setting seems a bit unclear to me. 2. Since you already have the MM-RLHF reward model, have you tried using it to conduct PPO-based RL? It seems the reward model has no further use after training; MM-DPO also does not use it, but instead directly uses the original MM-RLHF dataset. So what is the purpose of training a reward model? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Concern 1: The paper acknowledges this limitation but doesn't provide strong solutions.** It's important to note that not all models show minimal improvements on these benchmarks. For example, InternVL2 performs better on RealWorld tasks (from 43.1 to 44.9), which may relate to its image segmentation strategy. We recently discovered that the limited number of high-resolution images in the training data may be a key reason for this. Most public datasets have few high-resolution images, and our use of CLIP features for clustering reduced the number further. To address this, we sorted the dataset by resolution and re-annotated the top 1k highest-resolution samples. This led to significant improvements, with LLaVA-OV-7B increasing from 55.3 to 56.8. Based on these results, we believe increasing the number of high-resolution samples will improve performance on such tasks. We plan to continue adding more high-resolution data in the future. ---- **Concern 2: Dynamic Reward Scaling technique is a relatively incremental advancement over DPO methods. There are already similar ideas proposed for reward model training [1].** First, note that the reference [1] in the reviewer's question is missing. In fact, we discuss the differences between Dynamic Reward Scaling, DPO, and its improved versions in line 304 of the original paper. Overall, there are three key distinctions: 1. Novelty: We are the first to propose the dynamic beta adjustment framework for MLLMs. 2. Methodological Advance: We demonstrate that instance-level beta tuning is viable with a robust reward model, contrary to prior beliefs [1]. 3. Empirical Gains: Our approach outperforms existing methods (Figure 11(a)). [1] Beta-DPO: Direct preference optimization with dynamic beta. Additionally, we have already elaborated on the differences between similar reward model training methods and our approach in line 261. 
We conclude the main content as follows: In the MLLM community, there is no unified framework for designing reward models. Some approaches use traditional reward models with limited interpretability, while others rely on LLMs for ranking, which often leads to high variance. Additionally, other works focus on improving the reliability of model-generated critiques, but with a different goal. Our study is the first to explore how MLLMs can effectively leverage human annotations to enhance both interpretability and scoring ability. For a detailed discussion, please refer to Appendix F (Comparison to Existing Methods on Beta Adjustment in LLMs and MLLMs) and Appendix E (Discussion of MM-RLHF-Reward Model). ---- **Concern 3: This paper did not compare itself or discussed some previous RLHF methods or datasets for multimodal LLM training, like MIA-DPO and PPLLaVA, LLaVA-OneVision-Chat** PPLLaVA directly uses DPO loss without any improvements, so a direct comparison is not feasible. For a more detailed answer to the reviewer’s question, please refer to this link (https://anonymous.4open.science/r/mm-rlhf-rebuttal-BE17). We compared our approach to various baselines, including LLaVA-Critic, beta-DPO, SIMPO, and MPO, and found that existing methods showed limited gains on our high-quality preference dataset. Additionally, we compared DPO training results across multiple datasets (e.g., VL Feedback, RLAIF, LLaVA-Critic, MPO-Data), and the results demonstrated that MM-RLHF provides more comprehensive and significant performance improvements compared to existing datasets. --- **Concern 4: Is all possible pairs necessary? and What does “MM-RLHF” refer to—SFT on high-rated examples or the reward model’s output?** Please refer to this link (https://anonymous.4open.science/r/mm-rlhf-rebuttal-BE17) for **General Response 2. 
Comparison of SFT Baselines and DPO Sample Selection Strategies.** --- **Concern 5: Figures would benefit from error bars or statistical significance indicators.** Thank you for the suggestion. Since we use fixed seeds during training and temperature=0 for deterministic generation during evaluation, performance variance is generally small. However, we agree this is a valuable improvement. Due to the time limit, we will report the mean and standard deviation across three runs with different seeds in the final version. --- **Concern 6: Why train the MM-RLHF reward model if it’s not used for PPO or other RL methods?** The reward model is essential in our MM-DPO method for computing reward margins. We did experiment with PPO as well, but similar to the observations in Concern 3 about multi-stage DPO, PPO requires online sampling and fine-grained hyperparameter tuning. We only observed improvements in relatively simple dialogue tasks with a 7B LLaVA-ov model, and training stability was a challenge. In contrast, DPO-based methods offer more robust performance, especially given the high-quality responses (e.g., from Qwen2-VL-72B) in the MM-RLHF dataset, making them a more practical and effective choice.
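The per-instance dynamic beta at the center of this discussion can be sketched compactly. The following is an illustrative reconstruction, not the authors' implementation: the function name, the sigmoid-based scaling form, and the hyperparameters `beta0` and `k` are assumptions; only the idea of scaling the DPO temperature by the reward-model margin is taken from the paper and rebuttal.

```python
import math

def dynamic_beta_dpo_loss(logp_chosen, logp_rejected,
                          ref_logp_chosen, ref_logp_rejected,
                          reward_margin, beta0=0.1, k=1.0):
    """Per-pair DPO loss with a beta scaled by the reward-model margin.

    All inputs are scalar log-probabilities, except `reward_margin`, the
    reward model's score gap between the chosen and rejected responses.
    """
    # Dynamic reward scaling (illustrative form): higher-confidence
    # preference pairs (larger margins) get a larger effective beta.
    beta = beta0 * (1.0 + k / (1.0 + math.exp(-reward_margin)))
    # Standard DPO logits: implicit reward difference vs. the reference policy.
    logits = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Standard DPO objective: -log sigmoid(logits)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

On a pair the policy already ranks correctly, a larger reward margin yields a larger beta and therefore amplifies the preference signal, which matches the stated goal of making better use of high-quality comparison pairs.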
Summary: This work introduces MM-RLHF, a new dataset for fine-tuning multimodal large language models (MLLMs) with human preference. The data samples are collected from diverse sources and carefully annotated by expert human annotators. Based on this new dataset, a reward model training framework is developed, which generates text critiques before scoring. Along with the newly proposed dynamic reward scaling technique, the method improves several MLLMs' capabilities in a wide range of benchmarks. Claims And Evidence: No concerns. Methods And Evaluation Criteria: - When constructing the dataset, this work does not seem to explicitly incorporate a mechanism that prevents test data leakage/contamination. In other words, some test questions and/or images may be included in the training data of MM-RLHF, and model performance is improved on such test samples. - The human preference annotations (as detailed in Appendix B) can be subjective and vary among annotators. It is unclear how to improve the annotation consistency and reduce the biases. This work does not include an evaluation of annotation consistency of different human annotators. - Although the data samples are re-sampled to create a balanced mixture of different topics, there seems no clear evidence that the dataset has sufficient geographical diversity. Theoretical Claims: This work does not include theoretical claims. Experimental Designs Or Analyses: No concerns. Supplementary Material: The reviewer has briefly checked the annotation guidelines and other statements. Relation To Broader Scientific Literature: This work introduces a new dataset for training MLLMs with human preference, which is a great contribution to the MLLM community. Empirical results show improved performance brought by this dataset. However, there are a few remaining concerns regarding the dataset construction, which should be addressed before publication. Essential References Not Discussed: No concerns. 
Other Strengths And Weaknesses: No more concerns. Other Comments Or Suggestions: - Please improve the resolution of Figure 1. Questions For Authors: No more questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time and for acknowledging our work. We will address each of your concerns below: ---- **Concern 1: This work does not seem to explicitly incorporate a mechanism that prevents test data leakage/contamination** Thank you for raising this important issue. In our work, we have implemented several measures to minimize the risk of leakage between the training and test datasets: 1. **Strict separation between training and test data**: During the data sampling process, we manually ensure that the selected samples are from the training set and have no overlap with the evaluation subset. 2. **Experimental validation**: We tested our results on more than twenty benchmarks, and observed consistent improvements across domains, with no signs of overfitting in any specific domain. The safety-related data showed the largest improvements because the model had minimal exposure to safety-related issues during pretraining. However, our safety data was newly constructed, clearly distinct from the benchmarks used. ----- **Concern 2: This work does not include an evaluation of the annotation consistency of different human annotators.** In fact, our annotation process involves multiple rounds of interactive validation (at least two rounds). We will include the following details in the main text: 1. **Clear annotation guidelines and training**: As shown in Appendix B, we provided detailed annotation guidelines and training for annotators, ensuring that they could consistently understand and execute the annotation tasks. This helps reduce annotation inconsistency among different annotators. 2. **Annotation review and iteration**: To further improve consistency, we implemented a multi-step review process. The first annotator performs the annotations, then another annotator reviews them to ensure agreement. In cases of inconsistency, a third annotator is introduced, and the final decision is made by selecting the most suitable annotations. 
---- **Concern 3: There seems to be no clear evidence that the dataset has sufficient geographical diversity.** This is an interesting point. In Figure 3, we show the task richness of the dataset, and the images involved may contain buildings or natural landscapes from various geographical locations around the world. However, since most existing public datasets are in English, our MM-RLHF model is also affected by this issue. The majority of the scenarios are still based on English-language contexts. As the reviewer pointed out, there may be a lack of sufficient geographical diversity. We have recognized the importance of geographical diversity and are actively working on collecting multimodal data from diverse geographical locations and languages, including Chinese (Mandarin), French, and other commonly spoken languages, to further enhance the geographical diversity of our dataset. ---- **Concern 4: Please improve the resolution of Figure 1.** Thank you for the suggestion! We have updated the original image to a PDF format to avoid the issue of low resolution. --- Rebuttal Comment 1.1: Comment: The authors' response is greatly appreciated. I will adjust the rating to "weak accept." Regarding the mechanism for preventing data leakage, a better strategy could be, for example, measuring the similarity between images in the evaluation benchmarks and the training samples (in some embedding space), and investigating the ones that are very similar. Some concerns raised by other reviewers are also valid. --- Reply to Comment 1.1.1: Comment: Thank you for your prompt response! We fully agree with your suggestion that measuring the similarity between images in the evaluation benchmark and the training samples could be an effective strategy to prevent potential data leakage. However, with our initial sample of 100k image samples and over 20 evaluation benchmarks, computing the similarity across all images becomes extremely challenging. 
Additionally, due to the diversity of the benchmarks, it is difficult to pre-identify which benchmarks might be susceptible to data leakage. Therefore, for large-scale datasets such as LLaVA-OV-Image (with 3.5 million samples), there is currently no highly efficient strategy to prevent data leakage. We are actively conducting related experiments to further filter the training data, but it is unlikely that we will be able to complete these before the response deadline. In the case of the MM-RLHF-reward benchmark, we quickly conducted a relevant experiment where we removed the training images most similar to the top 100 images in the benchmark, along with their corresponding questions. The performance change was negligible, with an observed difference of less than 1%, as this subset of data only accounted for a very small portion of the training set. No evidence of data leakage was found during this process. If you have any further questions, please feel free to ask.
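The embedding-similarity filter discussed in this exchange can be sketched generically. This is not the authors' pipeline: the function name and the 0.95 threshold are assumptions, and an embedding model (e.g., CLIP) is presumed to have been run beforehand to produce the vectors.

```python
import numpy as np

def flag_near_duplicates(train_emb, test_emb, threshold=0.95):
    """Indices of training embeddings too similar to any test embedding.

    `train_emb` and `test_emb` are (n, d) arrays of image embeddings;
    rows flagged here are leakage candidates to inspect or remove.
    """
    a = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    b = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    sims = a @ b.T  # (n_train, n_test) cosine similarities
    return np.where(sims.max(axis=1) > threshold)[0]
```

For large corpora, the dominant cost is extracting the embeddings; the comparison itself is a single matrix product per benchmark.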
Natural Perturbations for Black-box Training of Neural Networks by Zeroth-Order Optimization
Accept (poster)
Summary: This paper extends the idea of natural gradient descent from back-propagation-based training (first-order) to zeroth-order-based training of neural networks. Specific care is taken to enable efficient approximations of the Fisher information matrix (FIM) for deep neural networks by using a block-wise FIM. Several experiments across a variety of settings demonstrate the convergence benefits of the method compared to standard sampling from an isotropic Gaussian. Claims And Evidence: The authors claim to have developed a new method through the concept of natural perturbations. Even though this paper shares common themes with yet-unpublished work provided in the appendix, the present paper provides sufficient new elements to be a convincing contribution. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense. Specifically, studying the convergence behavior normalized by compute time (e.g., Figure 6) captures an important aspect of ZO optimization. What I am missing is a study of the memory usage of this method. While this paper explicitly targets black-box functions as a motivating argument for ZO, the reality of many ZO applications is that they are applied because of their reduced memory consumption compared to back-propagation-based methods. While this paper explicitly doesn't target memory reduction, such a study would have been a great addition to make this method appealing to a wider audience. In terms of datasets, many papers in this literature study ZO for LLM fine-tuning on datasets mentioned here: https://github.com/ZO-Bench/ZO-LLM It would be great to see Natural Perturbations ZO applied to those. Theoretical Claims: I checked all derivations in the paper excluding Section 3.3 and its corresponding appendix. Experimental Designs Or Analyses: I checked the experiments and analyses I am familiar with, which excludes the MZI-related experiments, about which I understand very little. There are no issues related to these experiments. 
As stated above, I would appreciate an extension of the experiment section to include tasks from the ZO-Bench, which would appeal to a wider audience. Supplementary Material: I did not check the code, but appreciate its inclusion. I scanned the unpublished related paper. Relation To Broader Scientific Literature: The broader scientific literature in this domain tries to accelerate convergence of ZO methods. There are many orthogonal ideas to achieve this goal, which have not been explicitly discussed, which I think is fine. There seems to be some related work in the optical NNs literature, which I was not aware of - and I'm glad for the references. Essential References Not Discussed: No Other Strengths And Weaknesses: The paper is very well written, the math is laid-out in an extremely intuitive manner in terms of notation and accompanying text. Other Comments Or Suggestions: NA Questions For Authors: If the authors include a discussion on the memory-costs of their algorithm and provide experimental results on (a subsection of) the ZO-Bench datasets with LLM fine-tuning, it would widen the audience of their method - and I will raise my score. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and evaluating that the contribution is sufficient and the paper is very well written. > If the authors include a discussion on the memory-costs of their algorithm and provide experimental results on (a subsection of) the ZO-Bench datasets with LLM fine-tuning, it would widen the audience of their method Below, we report and discuss the memory costs of our algorithm not only for already reported neural networks but also for a newly constructed larger-size neural network with one million parameters. However, it is difficult for our current algorithm to perform experiments on the ZO-Bench datasets with LLMs that have much larger number of parameters (perhaps hundreds of millions at the smallest). We feel that we need a different strategy, e.g., a much stronger approximation of the FIM than making it block diagonal, and would like to contribute as future work in another paper. > What I am missing is a study for the memory-usage of this method. ... While this paper explicitly doesn't target memory-reductions, it would have been a great addition to make this method appealing to a wider audience. The table below shows the memory cost corresponding to the settings of Table 1. The difference between ZO-I and ZO-NP roughly corresponds to the additional memory cost of introducing natural perturbations. ### Memory footprint (GB) corresponding to the settings of Table 1 | dataset/task | architecture | $N_\mathrm{max}$ | $B$ | ZO-I | ZO-co | ZO-NP | |---|---|:---:|:---:|:---:|:---:|:---:| | MNIST, FashionMNIST | CNN (matrix) | 431 | 6 | 1.08 | 1.08 | 3.79 | | Equalization | FeedForward (MZI) | 280 | 2 | 0.39 | 0.39 | 0.42 | | Copying | RNN (MZI) | 453 | 7 | 0.75 | 0.75 | 1.91 | | CIFAR10 | MLP-mixer (matrix) | 510 | 66 | 3.95 | 3.95 | 7.36 | We can reduce the memory cost by decreasing the maximum block size $N_\mathrm{max}$, as the following table shows. 
Together with Figure 7 in the submitted main text, we see that ZO-NP trades off test accuracy against memory overhead. ### Memory footprint (GB) for FashionMNIST corresponding to the setting of Figure 7 | $N_\mathrm{max}$ | 1 | 2 | 4 | 8 | 16 | 32 | 64 | 124 | 236 | 431 | |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | ZO-NP | 1.11 | 1.11 | 1.11 | 1.11 | 1.11 | 1.15 | 1.48 | 1.90 | 3.07 | 3.79 | We also measured the memory footprints of a larger MLP-mixer for CIFAR10, as shown in the table below. The number of mixers has increased from 3 to 12 and the channel width has increased from 32 to 256, increasing the number of parameters from 33,642 to 1,706,762. Again, the difference between ZO-I and ZO-NP roughly corresponds to the additional memory cost of introducing natural perturbations, and we can reduce the memory cost by decreasing the maximum block size $N_\mathrm{max}$. ### Memory footprint for CIFAR10 with a larger MLP-mixer | method | $N_\mathrm{max}$ | $B$ | memory footprint (GB) | |---|---|---:|---:| | ZO-I | 512 | 3,334 | 15.68 | | ZO-co | 512 | 3,334 | 14.35 | | ZO-NP | 512 | 3,334 | 40.87 | | ZO-NP | 256 | 6,668 | 23.77 | | ZO-NP | 128 | 13,335 | 21.80 | We will include these results in the revised manuscript to clarify the memory cost of the proposed algorithm for a wide variety of neural networks. > In terms of datasets, many papers in this literature study ZO for LLM fine-tuning on datasets mentioned here: https://github.com/ZO-Bench/ZO-LLM It would be great to see Natural Perturbations ZO applied to those. We will include the references to ZO-LLM and the corresponding paper [1] in the revised manuscript, and state that applying the idea of natural perturbations to ZO-based LLM fine-tuning is future work. [1] Zhang, Yihua, et al. "Revisiting zeroth-order optimization for memory-efficient LLM fine-tuning: A benchmark." arXiv preprint arXiv:2402.11592 (2024). 
Thanks for the nice suggestions to widen the audience of our method.
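The memory tradeoff reported in these tables has a simple back-of-the-envelope component: a block-diagonal covariance needs roughly $N \cdot N_\mathrm{max}$ entries instead of $N^2$ for a full matrix. A minimal illustrative calculation follows (assuming float32 and counting covariance storage only, not the full training footprints reported above):

```python
def cov_memory_bytes(num_params, max_block, bytes_per_elem=4):
    """Bytes needed to store a block-diagonal covariance with blocks
    of size at most `max_block` (a last block holds any remainder)."""
    full_blocks, rem = divmod(num_params, max_block)
    return bytes_per_elem * (full_blocks * max_block ** 2 + rem ** 2)

# For the ~1.7M-parameter MLP-mixer above, a full covariance would need
# num_params**2 * 4 bytes (over 10 TB in float32), while block-diagonal
# storage scales roughly linearly with the block size.
```

This also makes explicit why decreasing $N_\mathrm{max}$ shrinks the covariance-related overhead at the cost of a larger number $B$ of blocks.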
Summary: This paper proposes a novel sampling strategy for zeroth-order optimization for training neural networks. Specifically, the authors propose natural perturbations that incorporate not only the parameter space discrepancy but also a function space discrepancy. To make the approach practical for networks with large-scale parameters, the paper proposes a block coordinate method that partitions the parameters into smaller groups, thereby enabling efficient computation of an approximate block-diagonal Fisher information matrix. Experiments on diverse tasks demonstrate that the proposed method outperforms baseline methods. ## update after rebuttal Most of my questions and concerns were resolved during the author rebuttal and I have no other questions. Claims And Evidence: The paper claims that the standard ZO perturbation sampling strategy is suboptimal for deep neural networks because it ignores the correlation among parameters. The claims are supported by both mathematical derivations and comprehensive empirical results. One minor potential concern is the additional query cost for computing the FIM, which is addressed via block partitioning, but its impact on extremely large-scale networks remains to be further validated. Methods And Evaluation Criteria: These methods and evaluation criteria are appropriate for the target application: black-box training. Theoretical Claims: The paper’s primary theoretical contribution is Theorem 3.2, which establishes that the error between the approximate ZO gradient with natural perturbations and the natural gradient is bounded under an L-smooth assumption. I have checked the proof of this theorem. Experimental Designs Or Analyses: The experimental design appears sound. A potential area for further investigation is the scalability of the method to very large-scale networks. Supplementary Material: I have reviewed the proof of Theorem 3.2 and additional experimental results. 
Relation To Broader Scientific Literature: This paper successfully integrates and extends ideas from both natural gradient literature and black-box optimization to address a practical problem in hardware-based neural networks. Essential References Not Discussed: Wierstra D, Schaul T, Glasmachers T, et al. Natural evolution strategies[J]. The Journal of Machine Learning Research, 2014, 15(1): 949-980. Other Strengths And Weaknesses: Strengths: 1) The idea of designing the sampling distribution by regularizing both PSD and FSD is innovative. The derivation of the optimal covariance and the accompanying error bound lend strong theoretical support. 2) Evaluations across multiple datasets and architectures, along with sensitivity analyses, provide robust empirical evidence. Weakness: Although the block-coordinate method mitigates query costs, it remains to be seen how the approach scales to very large-scale networks (e.g., millions of parameters). Other Comments Or Suggestions: N/A Questions For Authors: Can the authors elaborate on how the method might be adapted or scaled for neural networks with orders of magnitude more parameters (e.g., in the millions), especially regarding the computational cost of Jacobian and FIM estimation? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and evaluating the idea of designing the sampling distribution as innovative. > Can the authors elaborate on how the method might be adapted or scaled for neural networks with orders of magnitude more parameters (e.g., in the millions), especially regarding the computational cost of Jacobian and FIM estimation? Yes, stemming from the experimental condition of the last row (CIFAR10) of Table 1, we additionally performed experiments with a larger MLP-mixer. The number of mixers has increased from 3 to 12 and the channel width has increased from 32 to 256, increasing the number of parameters from 33,642 to 1,706,762. The following table shows the computational cost for this million-parameter model. The difference between ZO-I and ZO-NP roughly corresponds to the cost of Jacobian and FIM estimation. While the elapsed time (reported as seconds/epoch) overhead is less than 3%, the memory footprint cost is significant. We can reduce the memory footprint cost by decreasing the maximum block size $N_\mathrm{max}$, as shown in the last two rows. As a tradeoff, however, this increases the number $B$ of blocks and consequently the elapsed time in seconds per epoch. ### Elapsed time in seconds per epoch and memory footprint for CIFAR10 with a larger MLP-mixer | method | $N_\mathrm{max}$ | $B$ | seconds/epoch | memory footprint (GB) | |---|---|---|---:|---:| | ZO-I | 512 | 3,334 | 638.8 | 15.68 | | ZO-co | 512 | 3,334 | 588.0 | 14.35 | | ZO-NP | 512 | 3,334 | 656.5 | 40.87 | | ZO-NP | 256 | 6,668 | 698.2 | 23.77 | | ZO-NP | 128 | 13,335 | 790.8 | 21.80 | > One minor potential concern is the additional query cost for computing the FIM, which is addressed via block partitioning—but its impact on extremely large-scale networks remains to be further validated. ... 
Although the block-coordinate method mitigates query costs, it remains to be seen how the approach scales to very large-scale networks (e.g., millions of parameters). According to our current procedure for computing the FIM shown in Algorithm 2, the additional query cost is not affected by the scale of a network (the number $N$ of parameters) as long as the block size $N_b\leq N_\mathrm{max}$ is the same. This is because only one block $b$ is sampled in line 9. The problem for a large-scale network instead is that the number $B$ of blocks is large and it takes many epochs to compute the FIMs for all blocks. In other words, the FIMs of many blocks remain as the initialized identity matrix $\mathbf{I}$ until many epochs have passed. The following table shows such situations. The second row corresponds to the last row in Table 1, where the number $B$ is not so large. The third row corresponds to the above introduced larger MLP-mixer with million parameters. Since the number $B$ is large in this larger network, some blocks remained as initialized even after 1000 epochs with the update frequency hyperparameter $T_\mathrm{ud}=100$. We feel that we can show the scalability and also some limitations of our current method/algorithm by additionally reporting the results like these two tables in the revised manuscript. Thanks for raising these issues. ### The number of epochs (1st row) and the number of blocks (the remaining rows) whose FIM are computed. | | $N$ | $B$ | 1 | 2 | 5 | 10 | 20 | 50 | 100 | 200 | 500 | 1000 | |---|---:|---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | MLP-mixer in Table 1 | 33,642 | 66 | 5 | 10 | 21 | 36 | 51 | 63 | 66 | 66 | 66 | 66 | | Larger MLP-mixer | 1,706,762 | 3,334 | 5 | 10 | 25 | 50 | 98 | 240 | 469 | 878 | 1767 | 2594 | > Essential References Not Discussed: Wierstra D, Schaul T, Glasmachers T, et al. Natural evolution strategies[J]. The Journal of Machine Learning Research, 2014, 15(1): 949-980. 
We will refer to this paper in the revised manuscript and discuss the differences described in the following table. It shows that the proposed method of natural perturbations can be applied to larger problems, with a larger number $N$ of parameters, than natural evolution strategies, owing to the smaller size of the FIM involved. Thanks for suggesting the reference.

### Differences between natural perturbations and natural evolution strategies

| | Natural perturbations | Natural evolution strategies |
|---|---|---|
| FIM is computed for | distributions that the neural network expresses | sampling distribution |
| FIM is used | directly as the covariance matrix of the sampling distribution | for iteratively updating the parameters of the sampling distribution with a small learning rate in a natural-gradient manner |
| FIM of | neural network parameters, whose number is $N$, to be optimized | all kinds of parameters of the sampling distribution, e.g., mean and covariance matrix for a multivariate normal distribution |
| size of FIM | $N\times N$ | $(N+N^2)\times (N+N^2)$ for a multivariate normal distribution |
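For readers comparing the two approaches, the first two rows of the table can be made concrete with a small numpy sketch. It assumes the sampling covariance is proportional to the inverse of a ridge-regularized (block) FIM, which is one reading of the entropy/FSD trade-off in Section 3.1; the function name, the ridge term, and the scale are our illustrative choices, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_natural_perturbation(fim_block, scale=1.0):
    """Sample one perturbation for a parameter block from
    N(0, scale * (F + ridge*I)^{-1}).

    Deriving the covariance from the FIM in one shot keeps the
    function-space discrepancy of the perturbed network small, with no
    iterative update of the sampling-distribution parameters (unlike NES).
    """
    n = fim_block.shape[0]
    cov = scale * np.linalg.inv(fim_block + 1e-6 * np.eye(n))
    return rng.multivariate_normal(np.zeros(n), cov)

# An identity FIM recovers the standard isotropic sampling N(0, I).
eps_iso = sample_natural_perturbation(np.eye(3))

# An anisotropic FIM shrinks perturbations along "sensitive" directions.
fim = np.diag([100.0, 1.0, 0.01])
eps_nat = sample_natural_perturbation(fim)
```

With the diagonal FIM above, perturbations along the first coordinate (large Fisher information, hence high output sensitivity) have a standard deviation roughly 100 times smaller than along the third.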
Summary: This paper introduces the concept of natural perturbations for black-box training of neural networks using zeroth-order (ZO) optimization. The authors propose a novel sampling strategy for parameter perturbations that maximizes entropy while regularizing the distribution to prevent drastic changes in the neural network's conditional probability distribution, considering both parameter-space discrepancy (PSD) and function-space discrepancy (FSD), inspired by the natural gradient method. The approach involves partitioning parameters into blocks to efficiently compute the Fisher information matrix (FIM) and update the covariance matrix for perturbation sampling. Experimental results demonstrate the superiority of this method over existing zeroth-order optimization techniques across various datasets and architectures.

Claims And Evidence: The paper makes several claims regarding the effectiveness of natural perturbations for ZO optimization. These claims are supported by experimental results on various datasets and tasks, including MNIST, FashionMNIST, CIFAR10, Equalization, and Copying memory tasks. The authors show that their method outperforms existing ZO optimization techniques such as ZO-I and ZO-co, as well as other black-box optimization methods like CMA-ES. The results are statistically significant and consistent across different experimental settings.

Methods And Evaluation Criteria: The proposed methods for natural perturbations and block coordinate perturbations make sense for the problem of black-box training of neural networks. The sampling strategy and block-diagonal covariance matrix approach are logical extensions of existing ZO optimization techniques, designed to address specific challenges such as parameter correlation and computational efficiency. The evaluation criteria, including test accuracy and training loss, are appropriate for assessing the performance of optimization algorithms in this context.
Theoretical Claims: The paper provides theoretical analysis for the approximation error bound of the zeroth-order gradient approximation by natural perturbations. The proof appears sound and is based on standard assumptions from the non-convex optimization literature.

Experimental Designs Or Analyses: The experimental design involves training the proposed method alongside baseline methods like ZO-I and ZO-co, using identical configurations to ensure a fair comparison. However, the experiments primarily focus on small datasets, and it would be beneficial to explore performance on larger datasets and more diverse tasks, including NLP.

Supplementary Material: The supplementary material includes detailed configurations and additional experimental results.

Relation To Broader Scientific Literature: The paper contributes to the broader scientific literature by addressing the challenge of efficient zeroth-order optimization for neural networks implemented in hardware. This work aligns with recent efforts in black-box optimization techniques and natural gradient methods.

Essential References Not Discussed: N/A.

Other Strengths And Weaknesses:

Strengths:
++ The paper addresses an important practical problem in neural network training for hardware implementations.
++ The proposed method shows consistent improvement over existing techniques across multiple benchmarks.
++ The theoretical analysis provides a solid foundation for the proposed sampling strategy.

Weaknesses:
-- The paper could benefit from more detailed ablation studies to disentangle the effects of the sample number.
-- The experiments primarily focus on small datasets, and it would be beneficial to explore performance on larger datasets and more diverse tasks, including NLP.

Other Comments Or Suggestions: I suggest the authors conduct more comprehensive experiments across diverse scenarios, including NLP tasks. This would help validate the effectiveness of natural perturbations in real-world applications.
Questions For Authors: How does the proposed natural perturbations method scale to larger neural network architectures, particularly in terms of computational resources and training time?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for reviewing our paper and evaluating that the proposed method makes sense, the experimental improvements are consistent, and the theoretical analysis provides a solid foundation.

> How does the proposed natural perturbations method scale to larger neural network architectures, particularly in terms of computational resources and training time?

Stemming from the experimental condition of the last row (CIFAR10) of Table 1, we additionally performed experiments with a larger MLP-mixer. The number of mixers has increased from 3 to 12 and the channel width has increased from 32 to 256, increasing the number of parameters from 33,642 to 1,706,762. The following table shows how the computational resources, in terms of memory footprint and training time (seconds per epoch), have increased with the 50-times-larger number of parameters.

### Memory footprint and elapsed time in seconds per epoch for CIFAR10 with MLP-mixers

| method | #mixers | channel width | $N$ | $N_\mathrm{max}$ | $B$ | memory footprint (GB) | seconds/epoch |
|---|---:|---:|---:|---:|---:|---:|---:|
| ZO-I | 3 | 32 | 33,642 | 510 | 66 | 3.95 | 42.1 |
| ZO-co | 3 | 32 | 33,642 | 510 | 66 | 3.95 | 40.0 |
| ZO-NP | 3 | 32 | 33,642 | 510 | 66 | 7.36 | 45.4 |
| ZO-I | 12 | 256 | 1,706,762 | 512 | 3,334 | 15.68 | 638.8 |
| ZO-co | 12 | 256 | 1,706,762 | 512 | 3,334 | 14.35 | 588.0 |
| ZO-NP | 12 | 256 | 1,706,762 | 512 | 3,334 | 40.87 | 656.5 |

We observe that the memory footprint and the training time have increased by 5.5 and 14.5 times, respectively, which is rather mild considering the 50-times-larger number of parameters. More critical is the linearly increasing (50 times) number $B=3334$ of blocks. This is due to the nature of the block-coordinate approach with the same $N_\mathrm{max}$, and it takes many epochs to compute the FIMs for all $B=3334$ blocks. See the second table in the reply to Reviewer 73sf for a related experimental result.
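The linear growth of $B$ that drives this limitation can be checked with a two-line sketch (assuming, as our reading of the tables suggests, that the $N$ coordinates are split nearly evenly into blocks of at most $N_\mathrm{max}$ coordinates):

```python
import math

def num_blocks(n_params, n_max):
    """B = ceil(N / N_max): blocks needed to cover all N coordinates
    when each block holds at most N_max of them (even-split assumption)."""
    return math.ceil(n_params / n_max)

print(num_blocks(33_642, 510))      # MLP-mixer in Table 1 -> 66
print(num_blocks(1_706_762, 512))   # larger MLP-mixer -> 3334
```

The two values reproduce the $B$ column of the table above.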
We think that the proposed natural perturbations method scales to larger networks to some extent, but a network with one million parameters might be a limitation. We will include such a discussion in the revised manuscript. Thanks for raising this issue.

> -- The experiments primarily focus on small datasets, and it would be beneficial to explore performance on larger datasets and more diverse tasks including NLP.

As reported above, we have tested a larger neural network with one million parameters. However, it is difficult for our current algorithm to perform experiments on larger models such as LLMs, which have a much larger number of parameters (perhaps hundreds of millions at the smallest). We feel that we need a different strategy, e.g., a much stronger approximation of the FIM than making it block diagonal, and would like to address this as future work in another paper.

> -- The paper could benefit from more detailed ablation studies to disentangle the effects of sample number.

We may have misunderstood this comment, but if "sample number" means the number $Q$ of perturbations in Eq. (10), we agree that we should additionally report some results with varying $Q$. We have therefore obtained the following results. The query budget for each $Q$ was determined by the number of queries that ZO-I and ZO-co consumed over 100 epochs. We observe that the proposed method ZO-NP consistently outperformed the other two methods, except in the extreme cases ($Q=1,2$), where the number of epochs allowed was small since the relative query-consumption cost of the black-box Jacobian computation becomes large. We will include these results in the revised manuscript. Thanks for the comment.
### Test accuracies of FashionMNIST with varying the number $Q$ of perturbations in (10)

| $Q$ | 1 | 2 | 5 | 10 | 20 | 50 | 100 |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| ZO-I | 0.710 | 0.744 | 0.775 | 0.788 | 0.794 | 0.818 | 0.832 |
| ZO-co | 0.730 | 0.760 | 0.778 | 0.803 | 0.820 | 0.841 | 0.847 |
| ZO-NP | 0.716 | 0.763 | 0.822 | 0.840 | 0.858 | 0.869 | 0.873 |
| query budget ($\times 10^6$) | 0.12 | 0.18 | 0.36 | 0.66 | 1.26 | 3.06 | 6.06 |
| #epochs allowed for ZO-NP | 31 | 41 | 58 | 71 | 82 | 92 | 95 |
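For reference, the role of $Q$ in these results can be illustrated with a generic multi-point zeroth-order estimator. This is a textbook forward-difference form with isotropic perturbations; Eq. (10) in the paper may differ in normalization and uses natural rather than isotropic perturbations.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient(f, theta, q=10, mu=1e-3):
    """Q-sample forward-difference ZO estimate:
    g ~= (1/Q) * sum_q (f(theta + mu*eps_q) - f(theta)) / mu * eps_q,
    with eps_q ~ N(0, I).  Larger Q lowers the estimator's variance at
    the cost of more function queries (Q + 1 per estimate)."""
    base = f(theta)
    grad = np.zeros_like(theta)
    for _ in range(q):
        eps = rng.standard_normal(theta.shape)
        grad += (f(theta + mu * eps) - base) / mu * eps
    return grad / q

# On the quadratic f(theta) = ||theta||^2 the true gradient is 2*theta,
# so for large Q the estimate should be close to (2, -4, 1).
theta = np.array([1.0, -2.0, 0.5])
g = zo_gradient(lambda t: float(t @ t), theta, q=500)
```

The variance of the estimate shrinks roughly as $1/Q$, which is consistent with the accuracy gains from $Q=1$ to $Q=100$ in the table above.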
Summary: The paper proposes a sampling strategy for ZO optimization, where the perturbation is sampled from a multivariate Gaussian distribution with a designed covariance matrix. The covariance matrix is designed to minimize not only the expected PSD but also the FSD. Adopting the concept of the natural gradient, this perturbation is named the natural perturbation. To work around the large cost of computing the covariance matrix explicitly, the authors propose a block-coordinate approach for more efficient computation. Experiments show that the proposed method is superior in convergence speed and final test accuracy compared to existing methods.

## Update after rebuttal

The authors have addressed most of my questions and I have thus updated the score.

Claims And Evidence: The concept of the natural perturbation (the motivation, intuition, and derivation) is well presented, and I am convinced that this concept is well established. The efficient black-box method of computing the FIM (and then the covariance matrix) is also well presented. The algorithm and the motivation and intuition behind it are made clear. As to the empirical results showing that the proposed method is superior, overall the setup of the experiments is solid, but I have some questions which I will elaborate in the "Methods And Evaluation Criteria" section. The main concern is that since the proposed method introduces a large computation overhead (even after applying the block-coordinate approach), the comparison is only fair when the computation cost is the same. It would be interesting to see if the proposed method outperforms the baselines with the same computation budget, or whether the proposed method gives an interesting trade-off. I don't feel the discussion around the computation overhead is sufficient, nor that the comparison is "fair". If the authors can address that in the revisions I would be happy to update my review score.

Methods And Evaluation Criteria: Overall it is good, but I have some questions/concerns.
Besides the major concern already mentioned in "Claims And Evidence", my questions are:

1. Why does ZO-I also use blocking in Table 1? Can we add a comparison to the most naive baseline, i.e., ZO-I without blocking?
2. For ZO-co and ZO-NP, are they using the same Q? If so, I wonder if using such a small Q causes the perturbation to be quite sparse, resulting in slow convergence. Since ZO-NP has a higher computation overhead, it seems to make sense to compare it with ZO-co with a larger Q (which has a similar computation cost to ZO-NP) for a fair comparison.
3. In Figure 6, it looks like training has not fully converged, and I wonder what happens if we increase the epochs until it fully converges. I wonder whether ZO-NP has the advantage just in convergence speed, or also in the final test accuracy.

Theoretical Claims: Yes, and I don't find any major issue. There are a few questions though:

1. In line 167, left column, it is mentioned that a good sampling strategy should maximize the entropy of the distribution. Why is that? Is there theoretical or empirical evidence supporting it? Actually, in the experiments, can you add another baseline that samples from a distribution with a smaller entropy?
2. In line 321, left column, it is mentioned that the FIM also becomes block diagonal. Why?
3. In line 216, right column, it is mentioned that a too-large FSD leads to unstable training. Is there evidence to support this?
4. In line 87, right column, what is z, and what does p(z|y) mean?

Experimental Designs Or Analyses: Overall it is solid; for questions please refer to "Methods And Evaluation Criteria".

Supplementary Material: No

Relation To Broader Scientific Literature: Zero-order optimization has broad applications in many fields. Proposed in this paper is a general approach to achieve better performance in zero-order optimization, which I believe has a large impact on the broader literature.

Essential References Not Discussed: References are sufficiently discussed in the paper.
Other Strengths And Weaknesses: I am not sure what point the authors are trying to make in Section 3.2 and Figure 3. As (13) already expresses the trade-off between entropy, PSD, and FSD well, seeing the results in Figure 3 is not surprising but almost trivial.

Other Comments Or Suggestions: None.

## Update after rebuttal

The authors have addressed most of my questions and I have thus updated my score accordingly.

Questions For Authors: I have already listed my questions in "Claims And Evidence", "Methods And Evaluation Criteria", and "Theoretical Claims". I hope the authors can help address these questions, and I would be happy to update my score accordingly.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for reviewing our paper and evaluating that the concept of the natural perturbation is well established.

> It would be interesting to see if the proposed method outperforms the baseline with the same computation budget, or the proposed method gives an interesting trade-off.

The computation cost can be characterized by the elapsed time and the memory footprint. For the former, we already show the corresponding results in the right plot of Figure 6 and the bottom plots of Figure 12. These show that the proposed method ZO-NP outperformed the baselines with the same elapsed time. For the latter, we newly report the results in the reply to Reviewer 2tFg (please see the first two tables). With these tables and Figure 7, we understand that ZO-NP trades off test accuracy and memory overhead. Thanks for raising this issue. We will include these memory footprint results in the revised manuscript.

> Why in Table 1, ZO-I also uses blocking. Can we add the comparison to the most naive baseline i.e. ZO-I without blocking?

The reason to use blocking also for ZO-I is that the performance gets better with an appropriate block size $N_\mathrm{max}$, as Figure 7 shows. And we already have some results without blocking in Figure 11 in Appendix D.2. While the main intention there was to compare with CMA-ES, we can also compare those results with ZO-I.

> For ZO-co and ZO-NP, are they using the same Q? ... Since ZO-NP has a higher computation overhead, it seems to make sense to compare it with ZO-co with a larger Q (which has a similar computation cost to ZO-NP) for a fair comparison.

We used the same Q for ZO-co and ZO-NP, and ZO-co results in a sparse perturbation (a sparse estimated gradient as a weighted sum of perturbations, to be exact), as you pointed out. For more related information, see the discussion on _the number of changing parameters_ in Section 5.3.2.
Regarding the computational overhead of ZO-NP, we have already made a fair comparison in terms of elapsed time (the right plot of Figure 6 and the bottom plots of Figure 12).

> In Figure 6, it looks like training has not fully converged, and I wonder what happens if we increase the epochs until it fully converges. I wonder whether ZO-NP has the advantage just in convergence speed, or also in the final test accuracy.

The two tables below show the situation with more epochs (three times more). The epochs allowed for ZO-NP, which consumed extra queries for the Jacobian computation, are shown in parentheses. The training losses continued to decrease, but the speed efficiency was ZO-I < ZO-co < ZO-NP. The test accuracies converged for ZO-co and ZO-NP but not for ZO-I. It seems that ZO-NP entered the overfitting regime.

### Training loss

| epochs | 1000 (975) | 2000 (1950) | 3000 (2925) | 4000 (3900) |
|---|:---:|:---:|:---:|:---:|
| ZO-I | 1.039 | 0.962 | 0.919 | 0.889 |
| ZO-co | 0.966 | 0.881 | 0.826 | 0.787 |
| ZO-NP | 0.824 | 0.731 | 0.672 | 0.631 |

### Test accuracy

| epochs | 1000 (975) | 2000 (1950) | 3000 (2925) | 4000 (3900) |
|---|:---:|:---:|:---:|:---:|
| ZO-I | 0.592 | 0.614 | 0.614 | 0.621 |
| ZO-co | 0.604 | 0.623 | 0.633 | 0.633 |
| ZO-NP | 0.622 | 0.632 | 0.636 | 0.632 |

> In line 167, left column, it is mentioned that a good sampling strategy should maximize the entropy of the distribution. Why is that? Is there theoretical or empirical evidence supporting it? Actually, in the experiments, can you add another baseline that samples from a distribution with a smaller entropy?

The reason is to make the sampled perturbations explore as widely as possible. This statement is our new way of understanding the sampling in ZO optimization. Section 3.1, especially the paragraph starting from line 213, left column, provides theoretical support, as the existing sampling strategy from $\mathcal{N}(\mathbf{0}, \mathbf{I})$ can be derived from this statement.
We could perform some experiments where the entropy of a sampling distribution is randomly reduced independently of the FIM. Please let us know in a further reply if you would like to see the results.

> In line 321, left column, it is mentioned that the FIM also becomes block diagonal. Why?

This is because the block matrix size is the same for $\Sigma_b$ and $\mathbf{F}_{\theta^{(b)}}$ by (34).

> In line 216, right column, it is mentioned that a too-large FSD leads to unstable training. Is there evidence to support this?

A good lecture note can be found at https://www.cs.toronto.edu/~rgrosse/courses/csc2541_2022/readings/L03_metrics.pdf, and Figure 1 there on the Rosenbrock function is very intuitive.

> In line 87, right column, what is z, and what does p(z|y) mean?

The specific form of p(z|y) depends on the type of task the neural network performs. For example, if the neural network performs a classification task, p(z|y) is a multinomial distribution, as explained in Section 4.1.1. And z is a random vector that is marginalized in the FIM computation (28) and specified by a target vector in the loss computation (1).

---

Rebuttal Comment 1.1:

Comment: Thanks for the rebuttal. I have updated my review score accordingly.
Can DBNNs Robust to Environmental Noise for Resource-constrained Scenarios?
Accept (poster)
Summary: This paper investigates the robustness of DBNNs under environmental noise in resource-constrained scenarios. The authors identify that the vulnerability of DBNNs stems from binary weights and scaling factors and propose an $L_{1,\infty}$-norm constraint to improve robustness. The proposed method introduces an auxiliary robustness loss function to balance classification and robustness objectives. Experiments on CIFAR-10, CIFAR-100, Brain Tumor MRI, and bio-electrical signal datasets show that the proposed method improves model accuracy and reduces accuracy degradation under noise compared to SOTA methods.

## Update after Rebuttal

The authors' rebuttal resolved my concerns, so I raised my score to 4.

Claims And Evidence: The paper claims that $L_{1,\infty}$-norm constraints enhance DBNN robustness by improving the stability of binary weights and scaling factors under noise. However, the evidence is limited to specific tasks and datasets, primarily focused on vision and medical diagnostics. Furthermore, while the paper reports lower computational overhead, it does not analyze real-time inference latency, which is important in resource-constrained scenarios.

Methods And Evaluation Criteria: The proposed method is clearly described. The authors provide a structured explanation of the $L_{1,\infty}$-norm constraint and its impact on DBNN robustness. The evaluation is based on widely used datasets and includes a reasonable selection of models (e.g., CNNs and vision transformers).

Theoretical Claims: The paper provides a theoretical analysis of the robustness bounds derived from the $L_{1,\infty}$-norm constraints. The derivation appears sound, and the theoretical insights align with the empirical findings.

Experimental Designs Or Analyses: The experimental setup is well-organized and follows standard practices.

Supplementary Material: Yes. The supporting material contains code for testing on two datasets.
I didn't try to run it, but the content of the code seems reasonable.

Relation To Broader Scientific Literature: The paper builds on previous work in fault tolerance, particularly in binary neural networks and hardware-aware ML. It references relevant prior work and positions itself well within the context of DBNN optimization and robustness.

Essential References Not Discussed: No

Other Strengths And Weaknesses: The paper presents an interesting and potentially impactful method for improving DBNN robustness. However, the lack of generalization beyond vision and medical tasks reduces the scope of the contribution.

Other Comments Or Suggestions: Figure 3 needs a clearer description of what w11 and w12 represent, respectively.

Questions For Authors:

1. How does the proposed method generalize to non-vision tasks such as NLP?
2. What is the real-time inference impact of the $L_{1,\infty}$-norm constraints, particularly in resource-limited environments?
3. Can you provide an ablation study to isolate the contribution of binary weights versus scaling factors in improving robustness?
4. How does the proposed method compare to more recent adaptive fault-tolerance methods beyond traditional redundancy-based techniques?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal:

Q1: How does the proposed method generalize to non-vision tasks such as NLP?

A1: Thank you for your insightful question. From a theoretical perspective, extending to NLP tasks such as text classification or machine reading requires first constructing BERT models that binarize weights and activations, working out their iterative and expanded forms, and, most critically, analyzing the formal upper bound on environmental noise perturbations for each layer. Because the structure of BERT differs from that of the DBNNs with a ResNet backbone, the Lipschitz constant of each part needs to be calculated carefully. This suggests that the upper bound of the binarized BERT model exhibits a slight deviation from Theorem 3; with a moderate adjustment, the theoretical analysis can be effectively applied to NLP tasks. On the other hand, the bio-electrical signal classification task and NLP tasks both belong to sequence modeling. In particular, the data for the bio-electrical signal classification task are closely associated with time-dimensional information, and the proposed constraints have been effectively validated on BNN models (e.g., a binary convolution layer as a replacement for the full-precision convolution layer).

Q2: What is the real-time inference impact of the $L_{1,\infty}$-norm constraints, particularly in resource-limited environments?

A2: Thank you for your nice question. In fact, the proposed constraints are incorporated into the training stage in the form of an objective-function penalty term. This means that the proposed method solely impacts the training cost, and no additional operations are introduced to the layers within the DBNNs. Consequently, it does not introduce extra time overhead during the model inference phase, thereby eliminating any inference concerns. Finally, DBNNs on the server side can achieve inference times at the second level.
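To make the "training-only" point concrete, here is a minimal sketch of how an $L_{1,\infty}$ penalty can be added to a training objective. We assume the common reading of the mixed norm, $\|W\|_{1,\infty}=\max_i \sum_j |w_{ij}|$; the paper's exact definition, the layers it is applied to, and the coefficient `lam` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l1_inf_norm(w):
    """L_{1,inf} mixed norm of a weight matrix: the largest row-wise
    L1 norm, max_i sum_j |w_ij| (our assumed definition)."""
    return np.max(np.sum(np.abs(w), axis=1))

def penalised_loss(task_loss, weight_mats, scales, lam=1e-3):
    """Training objective: task loss plus an L_{1,inf} penalty on the
    (binary) weight matrices and an L1 penalty on the scaling factors.
    The penalty exists only at training time; the deployed DBNN's layers,
    and hence its inference latency, are unchanged."""
    penalty = sum(l1_inf_norm(w) for w in weight_mats)
    penalty += sum(abs(a) for a in scales)
    return task_loss + lam * penalty

# A 2x3 binary weight matrix: every row has L1 norm 3, so the norm is 3.
w_bin = np.array([[1.0, -1.0, 1.0], [-1.0, 1.0, -1.0]])
```

Since the penalty touches only the loss value, the inference graph is byte-for-byte the original network, matching the claim in A2.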
Q3: Can you provide an ablation study to isolate the contribution of binary weights versus scaling factors in improving robustness?

A3: Thank you for your constructive suggestion. We found that the constraints on both the binary weights and the scaling factors are indispensable for improving the robustness of DBNNs.

Table 1. Ablation study 1: Binary weights vs. scaling factors for improving robustness on the CIFAR-100 dataset. The w/o indicates that the constraint on the scaling factors has been removed.

| **Backbone** | **Dorefa+Our** | **Dorefa+Our+w/o** | **React+Our** | **React+Our+w/o** | **ABC+Our** | **ABC+Our+w/o** | **IRNet+Our** | **IRNet+Our+w/o** | **Bireal+Our** | **Bireal+Our+w/o** | **CycleBNN+Our** | **CycleBNN+Our+w/o** |
|:------------:|:--------------:|:------------------:|:--------------:|:-----------------:|:--------------:|:---------------:|:--------------:|:-----------------:|:--------------:|:------------------:|:----------------:|:--------------------:|
| ResNet18 | **67.21** | 66.90 | **68.26** | 67.26 | **69.23** | 68.57 | **70.04** | 68.74 | **68.84** | 67.79 | **67.29** | 66.51 |

Table 2. Ablation study 2: Binary weights vs. scaling factors for improving the robustness of DBNNs on the Brain Tumor dataset.

| **Backbone** | **Dorefa+Our** | **Dorefa+Our+w/o** | **React+Our** | **React+Our+w/o** | **CycleBNN+Our** | **CycleBNN+Our+w/o** |
|:------------:|:--------------:|:------------------:|:--------------:|:-----------------:|:----------------:|:--------------------:|
| ResNet18 | **86.91** | 85.62 | **83.87** | 82.39 | **85.46** | 84.17 |

Q4: How does the proposed method compare to more recent adaptive fault-tolerance methods beyond traditional redundancy-based techniques?

A4: Thank you very much! The description in the manuscript may have led to some misunderstanding. We would like to clarify the distinction between the environmental-noise robustness of DBNNs and adaptive fault-tolerance methods.
Based on two recent works [1-2], adaptive fault-tolerance methods focus on ensuring that hardware devices can continue performing tasks even after a fault occurs (e.g., a faulty communication node). In contrast, the DBNN model exhibits approximately a 10-20% decrease in performance under environmental noise perturbations, rather than encountering a fault. Therefore, we believe that it is not suitable to compare adaptive fault-tolerance methods for hardware devices with a robustness-improvement algorithm for DL models (DBNNs). Another reason is that the tasks targeted by these fault-tolerance algorithms differ significantly from the classification tasks of DBNNs, and their portability remains to be validated. We plan to investigate the effectiveness of adaptive fault-tolerance methods when the DBNN encounters a fault.

[1] Ada-FA: A Comprehensive Framework for Adaptive Fault Tolerance and Aging Mitigation in FPGAs. IEEE Internet Things J. 11(10): 17688-17699 (2024)

[2] A Dynamic Adaptive Framework for Practical Byzantine Fault Tolerance Consensus Protocol in the Internet of Things. IEEE Trans. Computers 73(7): 1669-1682 (2024)
Summary: This paper addresses the robustness of deep binary neural networks (DBNNs) under environmental noise perturbations in resource-constrained scenarios. The authors propose an $L_{1,\infty}$-norm constraint on binary weights and scaling factors to derive a tighter robustness upper bound compared to existing methods. Experiments on image classification (CIFAR, ImageNet) and medical tasks (bio-electrical signals, brain tumor MRI) demonstrate improved robustness with minimal computational overhead. While the work is well-motivated and technically sound, several aspects require strengthening for impact.

Claims And Evidence: The claims in the paper are mostly well-supported. Specifically, the authors clearly demonstrate empirical gains compared to competitive baselines (ResNet 18/34, multiple BNN variants like BiRealNet, IR-Net, CycleBNN, etc.) on multiple image classification (CIFAR, ImageNet) and medical (bio-electrical signals, brain tumor MRI) tasks. The theoretical claims regarding the provided tightness bounds are sound and clearly explained.

Methods And Evaluation Criteria: The proposed robust training algorithm and the chosen evaluation criteria (five diverse image and medical classification tasks) are highly relevant and sensible for the problem studied.

Theoretical Claims: The proofs and theoretical arguments appear technically correct and rigorous.

Experimental Designs Or Analyses: The experimental design is sound and thorough. Results across multiple DBNN-based models and five real-world datasets clearly illustrate the advantage of the proposed method. However, one minor limitation is that the paper does not provide a residual block analysis, or explicitly show how sensitive the residual block is to environmental noise.

Supplementary Material: I carefully reviewed the supplementary material. The supplementary material is comprehensive and effectively supports reproducibility.
Relation To Broader Scientific Literature: The paper appropriately relates its contributions to the broader literature on DBNN-based methods under environmental noise perturbations in resource-constrained scenarios. It extends the current state-of-the-art by clearly showing the theoretical benefit of noise robustness, and also alleviates the problem of the additional expensive costs introduced by the SOTA approach.

Essential References Not Discussed: The paper adequately discusses relevant literature.

Other Strengths And Weaknesses:

Strengths

1. The paper identifies a critical gap in existing research: the lack of robustness analysis for binary neural networks (BNNs) under unpredictable environmental noise (e.g., patient movement artifacts). The proposed $L_{1,\infty}$-norm constraint offers a theoretically grounded solution, explicitly linking DBNN vulnerability to scaling factors and binary weights. The closed-form robustness bound (Theorem 4.2) is a key contribution, as it provides a quantifiable metric for evaluating DBNN robustness, surpassing the heuristic approaches of prior work (e.g., Shang et al., 2022).

2. The experiments span diverse domains (i.e., image classification, medical diagnostics) and architectures (ResNet 18/34, multiple BNN variants like BiRealNet, IR-Net, CycleBNN). Results show consistent robustness improvements (e.g., +5.4% on Brain Tumor MRI) while maintaining low computational overhead (16% faster training than LCR). The inclusion of real-world noise models (e.g., SNR=50% for bio-electrical signals) strengthens practical relevance.

3. The framework's compatibility with various BNN backbones (e.g., ResNet34) and automated training pipeline (Fig. 2) enhances applicability. The ablation study on constraint coefficients (also in the Appendix) and the visualization of feature-map perturbations (Fig. 4) provide actionable insights for practitioners.

Weaknesses

1.
This manuscript designs a large number of experiments to verify the validity of the proposed method on different types of datasets. The reviewer wants to know how robust the proposed algorithm is against adversarial attacks with the latest BNNs (e.g., CycleBNN, Fontana et al., 2024).

2. The theoretical analysis assumes Lipschitz continuity of the binarized activation but does not provide a residual block analysis. It is suggested that the authors include this part of the analysis after the main theorem.

Other Comments Or Suggestions:

1. Increase the font size in Fig. 4 for clearer visualization of the feature maps.
2. Discuss the potential risks of deploying noise-robust DBNNs in safety-critical medical applications (e.g., false negatives under extreme noise).

Questions For Authors:

1. The reviewer wants to know how robust the proposed algorithm is against adversarial attacks with the latest BNNs (e.g., CycleBNN, Fontana et al., 2024).
2. The theoretical analysis assumes Lipschitz continuity of the binarized activation but does not provide a residual block analysis.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Q1. The reviewer wants to know how robust the proposed algorithm is against adversarial attacks with the latest BNNs (e.g., CycleBNN, Fontana et al., 2024).

A1. Thanks for the reviewer's nice suggestion. We opted for a standard training and testing phase and introduced PGD attacks, thereby evaluating the effectiveness of the proposed approach in defending against adversarial attacks. This mode aligns more closely with the hypothesis considered in our manuscript, which posits that noise is encountered exclusively during the inference phase. Specifically, we use the LinfPGDAttack method from the advertorch library, with the step size set to 7 and the perturbation value set to $\epsilon=1/255$. Then, we added such adversarial perturbations to the test set of the CIFAR-100 task and measured CycleBNN with a ResNet18 backbone, with and without our proposed method. In the table below, experimental results demonstrate that DBNN models that have not been trained with adversarial samples are highly susceptible to strong PGD attacks (similar issues have been reported in full-precision models during both standard training and adversarial attack testing [A1]). The proposed method still yields an improvement despite the serious performance degradation of unconstrained DBNN models.

Table 1. PGD performance comparison between CycleBNN and ours on the CIFAR-100 dataset.

| **Models** | **Test PGD** |
|:------------:|:------------:|
| CycleBNN | 17.78 |
| CycleBNN+Our | **18.38** |

[A1] Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR (Poster) 2018.

Q2. The theoretical analysis assumes Lipschitz continuity of the binarized activation but does not provide the residual block analysis.

A2. Thank you very much for your comment; we also think that the Lipschitz continuity analysis of the residual structure is very important. Due to space constraints, the corollary and relevant proof of this part are located in the Appendix of the manuscript.
Please refer to Corollary A.2 (Appendix) on page 12. For the camera-ready version, we will move it into the main paper.

Q3. Increase the font size in Fig.4 for clearer visualization of feature maps.

A3. Please forgive our mistake; we have set the font size to 24 to make the text description in Figure 4 clearer.

Q4. Discuss potential risks of deploying noise-robust DBNNs in safety-critical medical applications (e.g., false negatives under extreme noise).

A4. Thank you very much for your constructive suggestions. First, false negatives represent a significant challenge in the field of medical imaging. However, if imaging alone cannot provide a definitive determination, a biopsy can be conducted, and its subsequent pathological analysis serves as the gold standard for diagnosis. In practical MRI, extreme noise perturbations caused by patient movement can occur. In such cases, experienced senior physicians on the medical team will implement appropriate corrective measures. This indicates that our proposed method is unlikely to encounter such issues under normal environmental noise conditions.

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed feedback. I have read your response. You have provided good feedback which addresses most of my concerns. I have raised the score to 4 and hope that the authors include these proposed revisions in the final version.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer FmSJ, Thank you very much! We are pleased to have addressed your main concerns and will incorporate the aforementioned four revisions into the final version. Best regards, All authors
Summary: The paper investigates whether Deep Binary Neural Networks (DBNNs) can be robust to environmental noise, particularly in resource-constrained scenarios such as bio-electrical signal classification and medical imaging. The authors identify that DBNNs' robustness vulnerabilities stem from binary weights and scaling factors. To address this, they propose an L1,∞-norm constraint for binary weights and scaling factors, which they claim provides a tighter upper bound on noise perturbations compared to state-of-the-art (SOTA) methods. Their approach involves:
1. Theoretical Analysis: They derive a formal noise perturbation upper bound for DBNNs using L1,∞-norm constraints.
2. Robust Training Framework: The proposed L1,∞-norm constraint is incorporated into the training process to enhance DBNN robustness.
3. Experimental Validation: Their method is tested on five classification datasets, including CIFAR-10, CIFAR-100, Brain Tumor MRI, and bio-electrical signal datasets. Results show improved robustness, with up to 4.8% and 5.4% improvements on CIFAR-100 and Brain Tumor MRI, respectively.
4. Computational Efficiency: Their method reduces additional training overhead compared to previous methods.

The study concludes that L1,∞-norm constraints effectively mitigate the impact of environmental noise on DBNNs while maintaining efficiency, making the method suitable for edge devices in safety-critical tasks.

Claims And Evidence:
1. Claim: DBNNs' robustness vulnerability comes from binary weights and scaling factors.
Evidence: The authors provide a theoretical analysis showing that binary weights and scaling factors contribute to noise sensitivity. They derive an upper bound on noise perturbations using L1,∞-norm constraints, which is compared to prior approaches.
2. Claim: The proposed L1,∞-norm constraint provides a tighter upper bound than existing methods.
Evidence: Theoretical derivations show that the L1,∞-norm constraint offers a more restrictive bound on noise effects compared to spectral norm-based constraints. This is further supported by quantitative comparisons.
3. Claim: The proposed method improves robustness across multiple DBNN architectures.
Evidence: The authors conduct experiments on five classification datasets (CIFAR-10, CIFAR-100, Brain Tumor MRI, bio-electrical signals). The results show robustness improvements (up to 4.8% on CIFAR-100 and 5.4% on Brain Tumor MRI), validating their claim.
4. Claim: The method reduces additional computational overhead compared to previous robustness approaches.
Evidence: They compare training/testing time with the LCR (Shang et al., 2022) method and show a 16% reduction in training time while maintaining performance.

Problematic Claims:
*** Claim: The method is broadly applicable to real-world safety-critical tasks.
Issue: The experiments are conducted only on standardized datasets, without real-world deployment on actual medical or edge devices. Practical applicability remains untested. Also, the CIFAR datasets are considered too small-scale for present-day studies.
*** Claim: The proposed method is universally effective across DBNN architectures.
Issue: The study focuses mainly on ResNet-based DBNNs. It is unclear how well it generalizes to non-ResNet architectures (for example, transformer architectures) or to tasks beyond classification.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem. The L1,∞-norm constraint is a reasonable approach to enhance robustness in Deep Binary Neural Networks (DBNNs), and the authors justify its effectiveness through theoretical derivations and empirical validation. For evaluation, the authors use five benchmark datasets (CIFAR-10, CIFAR-100, Brain Tumor MRI, bio-electrical signals, and ImageNet), which are commonly used in robustness studies, but are fairly small scale.
They introduce environmental noise perturbations and measure test accuracy under noise, which is an appropriate metric for assessing robustness.

Theoretical Claims: The proofs seem mathematically valid and logically structured. However, the assumptions about scaling factors and generalization to different architectures require additional empirical verification. A formal empirical validation comparing actual perturbation bounds across architectures would strengthen confidence in the claims.

Experimental Designs Or Analyses: The experimental design is generally well-structured for evaluating the robustness of DBNNs to environmental noise, using five datasets (CIFAR-10, CIFAR-100, Brain Tumor MRI, bio-electrical signals, and ImageNet) and measuring classification accuracy under noise perturbations. However, the study lacks real-world deployment tests on edge devices, making it unclear how well the method generalizes to practical applications. The noise perturbation strategy is reasonable but does not appear to be based on real-world noise distributions, which could affect its applicability. While the comparison against four BNN-based models and the LCR method (Shang et al., 2022) is fair, the study does not include adversarial robustness methods, which would provide a more complete assessment. Additionally, while computational efficiency is analyzed in terms of training time, there is no analysis of inference speed and memory usage on actual resource-limited devices, which is crucial for assessing its practical feasibility. Overall, the experiments demonstrate improvements in robustness but leave important questions about real-world applicability unanswered.

Supplementary Material: Yes, the authors provided the CIFAR and brain tumor experiment code.
Relation To Broader Scientific Literature: The paper builds on prior work in BNNs and robustness methods by addressing DBNNs' vulnerability to environmental noise, extending Lipschitz-based constraints (Gouk et al., 2021; Miyato et al., 2018) with an L1,∞-norm constraint that is more efficient than the LCR method (Shang et al., 2022). It improves robustness for resource-constrained applications like medical signal processing, complementing adversarial robustness studies (Gowal et al., 2018; Balunovic & Vechev, 2020) by focusing on random noise instead of attacks. Essential References Not Discussed: I'm not familiar with this field Other Strengths And Weaknesses: The paper presents an original approach by adapting Lipschitz-based robustness constraints to DBNNs, offering a novel L1,∞-norm constraint that improves robustness with lower computational cost, which is a meaningful contribution to lightweight model research. It effectively highlights a practical issue—environmental noise affecting DBNNs in resource-constrained scenarios—that has been underexplored compared to adversarial robustness. However, its significance is somewhat limited by the lack of real-world deployment or validation on actual edge devices, making it unclear how well the method translates to practical applications. The clarity of theoretical explanations and experimental results is generally strong, but some assumptions, such as the applicability of the bound across architectures, lack thorough empirical verification. While the work is an important step toward improving DBNN robustness, further validation in real-world conditions would enhance its impact. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Many thanks to the reviewers for their positive and constructive comments.

Q1. However, its significance is somewhat limited by the lack of real-world deployment or validation on actual edge devices.

A1. We apologize that inference experiments on actual edge devices were not conducted. However, the PyTorch framework can integrate ONNX third-party libraries to support DBNN model transformation and facilitate deployment on a simulation platform of edge devices (i.e., Raspberry Pi Debian 12). Specifically, the CycleBNN method with a ResNet18 backbone, after being trained with our $L_{1,\infty}$-norm constraint loss, resulted in a checkpoint file size of 45.34MB. This is significantly smaller than the 8GB storage capacity of the Raspberry Pi 4B+ device. The inference of a single image under environmental noise takes only 1.6 seconds, which is deemed acceptable given that brain tumor-related tasks typically do not demand real-time processing capabilities. In particular, the performance of the Raspberry Pi simulation environment provided by Docker is lower than that of the actual device. This indicates substantial potential for deploying our proposed method on the actual edge device.

Q2. The clarity of theoretical explanations and experimental results is generally strong, but some assumptions, such as the applicability of the bound across architectures, lack thorough empirical verification.

A2. Thank you for the reviewer's valuable suggestions. We agree that the applicability of the bound across architectures plays an important role in ensuring the theoretical robustness analysis of DBNNs under environmental noise perturbations. We clarify that the DBNN-based model discussed primarily focuses on the binarization of weights and activations of CNN-based models. However, the multi-head self-attention mechanism in transformer-based models differs significantly from the convolution operations in CNNs, and ResNet-based backbones also lack a positional embedding layer.
Thus, potential issues in applying the bound across architectures are more likely to stem from substantial architectural differences rather than from the assumptions of the theoretical analysis. Nevertheless, we will conduct a rigorous analysis of the upper bound of environmental noise perturbations in both the positional embedding layer and the multi-head self-attention layer after binarization, thereby extending the existing conclusions to the Transformer model.
Summary: In this work, the authors investigate the robustness of deep binary neural networks (DBNNs) under environmental noise perturbations in resource-constrained scenarios. Specifically, the authors propose an $L_{1,\infty}$-norm constraint on the objective function for binary weights to derive a tighter robustness upper bound and a low computational overhead training algorithm compared to existing methods. Then, the authors conduct extensive experiments on three benchmark image datasets (i.e., CIFAR-10, 100 & ImageNet) as well as medical datasets (i.e., bio-electrical signals, brain tumor MRI), validating the proposed algorithm by enhancing the robustness of major BNN algorithms.

## Rebuttal summary

This paper studies an interesting problem about the robustness of deep binary neural networks under environmental noise perturbations in resource-constrained scenarios. My initial concerns have been resolved, so I keep a positive score.

Claims And Evidence: In this work, the claims are well-supported. Firstly, the theoretical perspective establishes a tighter upper bound on environmental noise robustness compared to prior studies and further elucidates the quantitative relationships, thereby offering a clear explanation of the robustness of DBNN-based models during inference. Finally, the authors conduct extensive experiments on three benchmark image datasets (i.e., CIFAR-10, 100 & ImageNet) as well as medical datasets (i.e., bio-electrical signals, brain tumor MRI), validating the proposed algorithm by enhancing the robustness of major BNN algorithms.

Methods And Evaluation Criteria: The proposed method targets the interference of environmental noise during the inference stage. Specifically, the authors use test accuracy in a noisy environment as the evaluation metric, which is highly relevant to the research area.

Theoretical Claims: The theoretical analysis of this manuscript is accurate and rigorous.
Experimental Designs Or Analyses: The experimental setup of this manuscript is accurate and comprehensive. The experimental results are well verified on three image classification datasets. In addition, the advantages of the algorithm in medical time series and image tasks are fully visualized. It would be better if the authors could provide more analysis of the computational overhead. Supplementary Material: The supplementary material provides the core code of the proposed method to support the reader in reproducing the experimental results. Relation To Broader Scientific Literature: The paper effectively situates its contributions within the broader scientific literature of DBNN-based methods, specifically addressing environmental noise perturbations in resource-constrained scenarios. It mainly explains the robustness of DBNNs in specific scenarios, and also relieves the problem of high robust training cost faced by existing methods. Essential References Not Discussed: This manuscript discusses the relevant literature. Other Strengths And Weaknesses: Strengths 1. The background of the issue examined in this manuscript is both critical and intriguing, particularly considering the paucity of theoretical studies that have explored the robustness of DBNN in environmental noise scenarios. 2. In this paper, a low-cost and robust training algorithm is developed from the perspective of dual norm constraints, and a more compact and noise-robust upper bound is presented. 3. The visualization results for both medical image and standard image classification tasks demonstrate the significant effectiveness of the proposed algorithm. Specifically, it elucidates the extent to which ambient noise perturbation interferes with the binarized convolution component of the model. Weaknesses 1. This manuscript conducts several comprehensive experiments to verify the advantage of the proposed method on different types of datasets. 
It seems that model overhead analysis is only available on the ImageNet dataset. It would be nice to analyze the overhead of more BNN algorithms under noise perturbations on other datasets.

Other Comments Or Suggestions:
1. The authors should give priority to the visual results of medical image classification in the main text. After all, results on real tasks convey the advantages of the proposed approach better than tables.
2. It would be better if the authors could provide the overhead of more BNN-based algorithms under environmental noise on the medical dataset.

Questions For Authors:
1. The reviewer wants to know more about the cost of the BNN algorithm under environmental noise perturbations.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Q1: Authors should give priority to the visual results of medical image classification in the main text.

A1: We extend our gratitude to the reviewer for providing this constructive suggestion. We fully concur with the suggestion to include visual results in the main text to effectively demonstrate the efficacy of the proposed method. The visualized results of brain tumors have been relocated to the experiment section.

Q2: The reviewer wants to know more about the cost of the BNN algorithm under environmental noise perturbations.

A2: Thank you very much for the valuable comment. We concur that, in addition to comparing the computational overhead with the LCR method (SOTA), which investigates BNN robustness, it is also essential to explore how the proposed $L_{1,\infty}$-norm constraint loss function influences the computational overhead of various popular BNN algorithms. To expedite the acquisition of experimental results, we measured the actual computation time on an A800 GPU. Specifically, we have evaluated the training cost of a series of BNN-based methods on the CIFAR-100 and Brain Tumor datasets. We report the average training time per epoch and the total test time. According to the results presented in the main paper and the two tables below, it can be observed that an appropriately designed penalty term not only enhances model robustness but also significantly improves training efficiency. In addition, the overhead in the test phase also encompasses the time required for adding environmental noise. The detailed computational overhead is presented below:

Table 1. Computational cost comparison between popular BNN-based methods and ours on the CIFAR-100 dataset.
| **Methods (ResNet18 backbone)** | **Training Time/Epochs (s)** | **Total Test Time (s)** |
|:-------------------------------:|:----------------------------:|:-----------------------:|
| CycleBNN | 24 | 43 |
| CycleBNN+Our | **21** | **42** |
| IRNet | 13 | 32 |
| IRNet+Our | **12** | **31** |
| Dorefa | 11 | 33 |
| Dorefa+Our | 11 | 33 |
| React | 14 | 33 |
| React+Our | **13** | 33 |
| Bireal | 13 | 30 |
| Bireal+Our | **12** | 30 |
| ABCNet | 13 | 41 |
| ABCNet+Our | **12** | 41 |

Table 2. Computational cost comparison between BNNs and ours on the Brain Tumor dataset.

| **Methods (ResNet18 backbone)** | **Training Time/Epochs (s)** | **Total Test Time (s)** |
|:-------------------------------:|:----------------------------:|:-----------------------:|
| CycleBNN | 13 | 8 |
| CycleBNN+Our | **11** | 8 |
| Dorefa | 11 | 3 |
| Dorefa+Our | **10** | 3 |
| React | 6 | 4 |
| React+Our | **5** | 4 |
Achieving Linear Speedup and Near-Optimal Complexity for Decentralized Optimization over Row-stochastic Networks
Accept (spotlight poster)
Summary: This paper studied the decentralized optimization problem where the mixing matrix is row-stochastic. This paper first derived the lower bound. Then, this paper analyzed PULL-DIAG-GT, showing that PULL-DIAG-GT requires an additional assumption, Assumption 4, to converge to a stationary point. Finally, this paper proposed a novel method, MG-PULL-DIAG-GT, which uses gradient accumulation and multiple gossip averaging. Then, this paper showed that MG-PULL-DIAG-GT can achieve almost the same convergence rate as the lower bound.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: No

Experimental Designs Or Analyses: See the comments in the suggestion.

Supplementary Material: No

Relation To Broader Scientific Literature: The proposed method seems to be a combination of existing techniques: gradient accumulation, multiple gossip averaging, and gradient tracking. However, deriving the lower bound of the convergence rate and showing that the proposed method can achieve almost the same rate as the lower bound are solid contributions to the research community.

Essential References Not Discussed: The related papers seem to be well discussed in this paper.

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions:
* Could you please describe how the authors tuned hyperparameters, e.g., the learning rate, in Sec. 6? This information is very important for reproducing the experimental results.
* The authors used the exponential graph, grid, and ring as the network topology, while all of these graphs have a doubly stochastic mixing matrix. Could you please show some additional experimental results with a graph whose mixing matrix is not doubly stochastic?
* If the mixing matrix is doubly-stochastic, we need to use accelerated gossip averaging to achieve the optimal convergence rate [1,2]. The proposed method seems to achieve the lower bound without using this acceleration.
Can the authors explain why acceleration is impossible (or unnecessary) when the mixing matrix is row-stochastic? ### Reference [1] Tian, Y., Scutari, G., Cao, T., and Gasnikov, A. Acceleration in distributed optimization under similarity. In ICML 2022. [2] Yuan, K., Huang, X., Chen, Y., Zhang, X., Zhang, Y., and Pan, P. Revisiting optimal convergence rate for smooth and non-convex stochastic decentralized optimization. In NeurIPS 2022 Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful feedback and constructive comments. Below, we address each point in detail.

**Q1:** Could you please describe how the authors tuned hyperparameters, e.g., learning rate, in Sec. 6? This information is very important for reproducing the experimental results.

**A1:** Thank you for your reminder. We will briefly describe how we tuned the learning rates and the logic behind it.

- Figure 2: For the exponential graph of $n$ nodes, the learning rate $\alpha_n$ satisfies $$\alpha_n\times n=0.0512.$$ For the ring graph of $n$ nodes, the learning rate $\alpha_n$ satisfies $$\alpha_n\times n=0.002.$$ We set $n\times \alpha$ as a constant because, as shown in the Appendix (lines 1268-1271), we have the bound $$\frac{1}{K+1}\sum_{k=0}^K \mathbb{E}[\|\nabla f(w^{(k)})\|^2] \le \frac{12\Delta}{ n\alpha(K+1)}+16\alpha L\sigma^2+ \frac{L\Delta}{K+1}.$$ This implies that $n\alpha$ effectively controls the _descent rate_. By fixing $n\alpha$, we ensure that all curves decay at the same rate and only differ in their final noise-dominated error level.
- Figure 3: For the ring graph with $m$ rounds of multiple gossip, the learning rate $\alpha_m$ satisfies: $$\alpha_1=0.005,\alpha_5=0.01,\alpha_{10}=0.02.$$ For the grid graph with $m$ rounds of multiple gossip, the learning rate $\alpha_m$ satisfies: $$\alpha_1=0.02,\alpha_5=0.03,\alpha_{10}=0.03.$$ For the geometric graph with $m$ rounds of multiple gossip, the learning rate $\alpha_m$ satisfies: $$\alpha_1=0.02,\alpha_5=0.02,\alpha_{10}=0.03.$$ For the nearest neighbor graph with $m$ rounds of multiple gossip, the learning rate $\alpha_m$ satisfies: $$\alpha_1=0.02,\alpha_5=0.02,\alpha_{10}=0.02.$$ Here, the learning rates are chosen in a non-decreasing order, as the MG mechanism typically permits a wider range of stable learning rates.

We will include these in Appendix E in a future version.
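The tuning logic in A1 can be sanity-checked numerically; a minimal sketch of the stated bound's first two terms, where the constants $\Delta$, $L$, $\sigma$, and $K$ are illustrative toy values rather than quantities from our experiments:

```python
# With n * alpha held fixed, the descent term 12*Delta/(n*alpha*(K+1)) of the
# bound is identical across network sizes n, so curves decay at the same rate
# and differ only in the noise floor 16*alpha*L*sigma^2.
# Delta, L, sigma, K are toy values for illustration only.
Delta, L, sigma, K = 1.0, 1.0, 0.1, 1000

def bound_terms(n, alpha):
    descent = 12 * Delta / (n * alpha * (K + 1))
    noise_floor = 16 * alpha * L * sigma ** 2
    return descent, noise_floor

# keep n * alpha = 0.0512, as in the exponential-graph runs above
settings = [(n, 0.0512 / n) for n in (4, 8, 16, 32)]
descents = [bound_terms(n, a)[0] for n, a in settings]
floors = [bound_terms(n, a)[1] for n, a in settings]
```

With $n\alpha$ fixed, `descents` is constant in $n$ while `floors` shrinks like $1/n$, which matches the observation that the curves in Figure 2 differ only in their final noise-dominated error level.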
**Q2:** The authors used exponential graph, grid, and ring as the network topology, all of these graphs have a doubly stochastic mixing matrix. Could you please show some additional experimental results with a graph whose mixing matrix is not doubly stochastic? **A2:** Thank you for the question. While these graphs can support doubly stochastic matrices, we generate **row-stochastic** matrices over them, not doubly stochastic. We construct these matrices using the Metropolis rule (Nedic et al., 2018), where $$ a_{ij} = \begin{cases} \frac{1}{1 + d_j^{\text{in}}}, & \text{if } (i \to j) \text{ exists} \\\\ 0, & \text{otherwise} \end{cases} $$ This typically yields a row-stochastic matrix. Only in the special case of regular graphs does the Metropolis rule produce a doubly stochastic matrix. We will emphasize that we used row-stochastic matrices in a future version. On the other hand, the row-stochasticity of these matrices can be verified by examining $\kappa_A$ listed in Appendix E (lines 1413–1421). Recall that $$\kappa_A = 1, \quad \text{if and only if } A \text{ is doubly stochastic.}$$ For graphs ring, grid, geometric, and nearest neighbor, we have $\kappa_A > 1$, confirming that the mixing matrices are not doubly stochastic. **Q3:** Can the authors explain why acceleration is impossible (or unnecessary) when the mixing matrix is row-stochastic? **A3:** Thank you for your insightful question. We will cite [1,2] and carefully discuss them in related works. - In [1,2], accelerated gossip relies on the mixing matrix being **doubly-stochastic and symmetric (or having real spectrum)**. This structure is crucial: after each power iteration (e.g., $A \to A^2$), the eigenvalues remain real and lie on the positive real axis. This allows a spectral shift (e.g., via Chebyshev polynomials) to center the spectrum around zero, reducing the spectral radius and enabling acceleration. 
In contrast, our setting involves **row-stochastic and generally non-symmetric** mixing matrices, for which the eigenvalues are complex and not aligned along the real axis. Without symmetry or spectral knowledge, Chebyshev-type acceleration becomes ineffective or even impossible. - Another reason lies in the network dependence on $1 - \beta$. In doubly-stochastic scenarios, the optimal dependence is $1/\sqrt{1 - \beta}$, which requires Chebyshev acceleration to achieve. In contrast, for row-stochastic matrices, the optimal dependence is $1/(1 - \beta)$, which is worse than in the doubly-stochastic case. Such $1/(1 - \beta)$ dependence can be attained without Chebyshev acceleration. **Other Comments:** While our proposed optimal algorithm combines existing algorithmic components, the **analysis technique is original** and departs from prior work in several key aspects. In particular, our work is the **first to handle "inexact descent"** in the nonconvex setting, whereas existing analyses focus on exact descent methods. We provide a detailed explanation of this distinction in our response to Reviewer zni5.
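As a concrete companion to the row-stochastic construction discussed in A2, a minimal numpy sketch. The directed graph here is a toy example, and the indexing convention (node $i$ weighting itself and each in-neighbor by $1/(1+d_i^{\text{in}})$) is one common variant, not necessarily the exact convention used in our experiments:

```python
import numpy as np

# Toy directed ring (i receives from i-1) plus one extra edge, so in-degrees
# are unequal and the resulting mixing matrix is row- but not doubly
# stochastic. Node i assigns weight 1/(1 + d_i^in) to itself and to each of
# its in-neighbors.
n = 5
in_neighbors = {i: {(i - 1) % n} for i in range(n)}
in_neighbors[0].add(2)  # extra edge 2 -> 0 makes the graph irregular

A = np.zeros((n, n))
for i in range(n):
    w = 1.0 / (1 + len(in_neighbors[i]))
    A[i, i] = w
    for j in in_neighbors[i]:
        A[i, j] = w

row_sums = A.sum(axis=1)  # all exactly 1: row-stochastic
col_sums = A.sum(axis=0)  # not all 1: not doubly stochastic
```

Because the extra edge makes the in-degrees unequal, the column sums deviate from 1, mirroring the $\kappa_A > 1$ values reported in Appendix E.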
Summary: A key challenge in decentralized optimization is determining the optimal convergence rate and designing algorithms to achieve it. While this problem has been extensively addressed for doubly-stochastic and column-stochastic mixing matrices, the row-stochastic scenario remains unexplored. This paper bridges this gap by introducing effective metrics to capture the influence of row-stochastic mixing matrices and establishing the first convergence lower bound for decentralized learning over row-stochastic networks. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: A good extension of previous work. Essential References Not Discussed: No Other Strengths And Weaknesses: Strength: This paper introduces effective metrics to capture the influence of row-stochastic mixing matrices and establishing the first convergence lower bound for decentralized learning over row-stochastic networks. Weakness: The techniques used in this paper are mainly from ``Towards better understanding the influence of directed networks on decentralized stochastic optimization''. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Reviewer's Comment:** Weakness: The techniques used in this paper are mainly from "Towards better understanding the influence of directed networks on decentralized stochastic optimization" (Liang et al. (2023)).

**Authors' Response:** We sincerely thank the reviewer for their insightful feedback and constructive comments. While our work is inspired by Liang et al. (2023), it differs significantly in both **problem setting** and **analytical techniques**. We encountered many new challenges, and we developed original techniques for each of them, listed as follows:

**1. Challenges in Consensus Protocols.**
- Liang's work studied the Push-Sum protocol. In the Push-Sum protocol, for a column-stochastic matrix $B$, its Perron vector $\pi_B$ is estimated by $v^{(k)}:=n^{-1}B^k 1_n$. Due to the **linear** relationship $v^{(k+1)}=Bv^{(k)}$, the estimate from Push-Sum is consistent and enjoys good monotone properties (Lemma 2.1 in Liang et al. (2023)).
- **Our work** studied the Pull-Diag protocol. In the Pull-Diag protocol, for a row-stochastic matrix $A$, the Perron vector $\pi_A$ can only be estimated by $d^{(k)}:=\rm{Diag}(A^k)$. Note that $d^{(k+1)}$ depends **nonlinearly** on $d^{(k)}$, so we need a novel analysis to study the properties of Pull-Diag. See our Lemma 12.

**2. Challenges in Obtaining Linear Speedup (Inexact Descent).**
- Liang's work studied the Push-DIGing algorithm, which is an **exact** descent algorithm because the update follows (after proper projection): $$\hat{x}^{(k+1)}=\hat{x}^{(k)}-\gamma n\bar{g}^{(k)},$$ where $\hat{x}^{(k)}$ is the projected parameter and $\bar{g}^{(k)}$ is the average of local gradients. This resembles **centralized SGD**, making the linear speedup result easy to obtain.
- **Our work** studied Pull-Diag-GT, which is an **inexact** descent algorithm because the update follows $$\hat{x}^{(k+1)}=\hat{x}^{(k)}-\gamma \pi_A^\top y^{(k)}.$$ Compared to exact descent, we have an extra gap: $\pi_A^\top y^{(k)}-n\bar{g}^{(k)}$. This term, called **descent deviation**, typically prevents us from obtaining a linear speedup. Prior to our work, this term could only be handled under the strongly-convex and deterministic setting. Our work is the **first** to rigorously handle such descent deviation in the nonconvex, $L$-smooth, and stochastic setting. Lemmas 3, 4, 6, and 13 provide the tools for bounding and controlling this term. These lemmas can also be applied to similar inexact descent problems.

**3. Challenges in Adapted Gradient Tracking.**
- Liang's work studied standard gradient tracking, which is $$y^{(k+1)}=By^{(k)}+g^{(k+1)}-g^{(k)}.$$ In standard gradient tracking, $g^{(k+1)}-g^{(k)}$ can be easily bounded using the $L$-smoothness property. This also enables a straightforward estimate for $y^{(k)}$.
- **Our work** studied adapted gradient tracking, which is $$y^{(k+1)}=Ay^{(k)}+D_{k+1}^{-1}g^{(k+1)}-D_k^{-1}g^{(k)}.$$ The different coefficients $D_{k+1}$ and $D_k$ make it highly nontrivial to apply the $L$-smoothness property. Our Lemma 4 provides valuable insights on how to address this problem.

**4. Challenges Raised from Inversion of Small Values.**
- In the Push-Sum protocol studied by Liang et al. (2023), the estimate of the Perron vector $v^{(k)}$ is naturally lower bounded by a $\kappa$-related constant. They do not need to worry about numerical instability.
- **Our work** studies the Pull-Diag protocol, which uses the inversion of diagonal entries for corrections. However, these diagonal entries can be arbitrarily small or zero, even for fixed $\kappa_A$ and $\beta_A$. To avoid division by zero, we must rely on Assumption 4 to provide a lower bound on the diagonal entries.
Although this is a weak assumption (see our response to Reviewer MwZe, 3rd problem in 'Other Comments Or Suggestions'), it prevents us from attaining the lower bound. Fortunately, in Lemma 7, we show that MG guarantees a positive lower bound on the diagonal entries. This allows us to remove Assumption 4 and to derive a clear convergence rate using only $\kappa_A$ and $\beta_A$. While the final conclusion matches that of Liang et al. (2023), the underlying logic is quite different. In summary, our analysis provides a more general framework capable of handling richer sources of inexactness in decentralized optimization. We developed original techniques that differ from those of Liang et al. (2023). The challenges and distinctions above are also discussed in our paper, pages 5 and 6. A further clarification will be included in the Appendix due to page limits.
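The contrast between the two Perron-vector estimates can be checked numerically. Below is a minimal sketch (our illustration, not code from either paper) on a hypothetical 3x3 mixing matrix: it compares the Push-Sum estimate $v^{(k)}=n^{-1}B^k 1_n$ for the column-stochastic $B=A^\top$ against the Pull-Diag estimate $d^{(k)}=\mathrm{Diag}(A^k)$ for the row-stochastic $A$. The matrix values and the helper names `matmul` and `matpow` are our own assumptions.

```python
# Minimal numeric sketch (illustrative only): Push-Sum vs. Pull-Diag
# Perron-vector estimates on a small hypothetical mixing matrix.

def matmul(X, Y):
    """Multiply two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(M, p):
    """Compute M^p by repeated multiplication."""
    n = len(M)
    R = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(p):
        R = matmul(R, M)
    return R

# A hypothetical row-stochastic matrix A (each row sums to 1);
# its transpose B is then column-stochastic.
A = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
n = len(A)
B = [[A[j][i] for j in range(n)] for i in range(n)]  # B = A^T

k = 60
Ak, Bk = matpow(A, k), matpow(B, k)

# Pull-Diag estimate: the diagonal of A^k converges to the Perron vector.
d = [Ak[i][i] for i in range(n)]
# Push-Sum estimate: n^{-1} B^k 1_n converges to the same Perron vector.
v = [sum(Bk[i][j] for j in range(n)) / n for i in range(n)]

print("Pull-Diag estimate:", d)
print("Push-Sum estimate: ", v)
```

For a primitive matrix the two estimates agree in the limit; the point of the rebuttal is that the Pull-Diag recursion for $d^{(k)}$ is nonlinear, so the monotonicity arguments available for the linear Push-Sum recursion do not transfer.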
Summary: This paper presents a theoretical analysis of decentralized optimization with row-stochastic mixing matrices. It is the first to establish a lower bound for convergence. Gradient tracking-based algorithms are shown to achieve linear speedup in convergence under an additional assumption. To overcome this limitation and attain a near-optimal convergence rate, the authors propose a new algorithm that incorporates multiple gossips. Experimental results validate the theoretical findings. Claims And Evidence: The claims in the theory part are clear, with a step-by-step sketch of the proofs, which is easy to understand. However, the experiments are not as convincing as the mathematical derivations. Methods And Evaluation Criteria: Yes, the techniques used in this paper, as well as the performance evaluation, are widely accepted in this field. Theoretical Claims: Yes, the theoretical claims look good to me. Experimental Designs Or Analyses: The experiments look traditional. I have a few questions on the design: 1. Is it possible to showcase the MG component on a larger real dataset? 2. The graph structure is either undirected or directed with good structure. How will the graph topology change the results? For example, a random directed graph, and/or even one with high $\kappa_A$? 3. How is the MNIST data distributed? Is it homogeneous (uniform)? 4. Is there any explanation for the plateau-like stage of the accuracy in the neural net experiments? Supplementary Material: I went over the proofs and experimental settings, though I did not read every line of the proofs. Relation To Broader Scientific Literature: This paper inherits ideas from Liang et al. (2023), with the same metric definitions, a similar MG remedy for convergence, and analogous bounds. Therefore, the results in this paper sound reasonable, as everything looks like a transpose (to me). It would be better if the authors emphasized the technical differences between their results and that prior work.
Essential References Not Discussed: I suggest adding more recent works. For example, in Section 1, most cited papers predate the pandemic. In addition, I would like to see more practical evidence for analyzing directed graphs. E.g., there is one paper on differences in node power ranges; what about channel disruptions? Other Strengths And Weaknesses: No more comments here. Important questions or suggestions are **numbered** in the corresponding parts. Other Comments Or Suggestions: 1. In Section 2.3, right before Section 3, the referenced Figure 1 shows an exponential rate of convergence. I would guess this is because it works on a strongly convex problem. However, I did not find where the setting is described (or I missed it). This might cause misunderstanding of the claimed convergence rate, as your assumptions are much milder than strong convexity. 2. I suggest explaining the abbreviation GT before using it in the abstract, similar to MG. 3. It would be better to discuss the limitations of Assumption 4, as it looks unnatural and it is hard to tell which scenarios satisfy or violate this assumption. 4. In Section 2, Assumption 1, the meaning of $(i,j)\in\mathcal{E}$ needs clarification, as this is a directed graph. Questions For Authors: 1. This paper closely follows the approach of Liang et al. (2023), including the use of a specific metric, a similar multi-gossip (MG) strategy to address optimal convergence, and analogous forms of the resulting bounds. As a result, the theoretical originality here seems unclear to me. To strengthen the contribution, it is important for the authors to clearly articulate the technical novelties and distinctions from the literature, beyond applying known ideas in a slightly different setting. 2. Would it be more accurate to describe the result as near-optimal, given the logarithmic gap between Theorem 3 and Theorem 1?
I find the phrasing in the title of Section 5—“near-optimal rate”—both appropriate and reflective of the paper’s contributions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful feedback and constructive comments. Below, we address each point in detail. **All newly added experiments can be found in the Rebuttal Experiment Sheet (RES)** https://anonymous.4open.science/r/ICML-2025-Rebuttal-Experiment-Sheet-B6C0/ **Experimental Designs Or Analyses** 1. We add an experiment training ResNet-18 on CIFAR-10, using both vanilla Pull-Diag and its MG versions. The data is uniformly distributed. See Figure 1 in the RES. 2. Both $\beta_A$ and $\kappa_A$ affect the results. For most topologies (including random graphs), $\kappa_A$ is typically a small number and the convergence rate is mainly affected by $\beta_A$. For special mixing matrices, as shown in our Proposition 8, $\kappa_A$ can be exponentially large. In this case, vanilla Pull-Diag-GT suffers a lot, while its MG version can achieve quick convergence and higher precision. We present experiments for large $\kappa_A$ in Figure 2 in the RES. 3. Yes, the MNIST data is uniformly distributed. For heterogeneous results, see Figures 3 and 4 in the RES. 4. Since we use a constant learning rate and a simple 4-layer neural network, the test accuracy will not reach 100\% in the end. **Relation To Broader Scientific Literature** Our work addresses a series of new challenges with original techniques. **These challenges and techniques are detailed in our response to Reviewer zni5.** For the reviewer's convenience, we also provide a brief summary as follows: i. **Consensus protocol: Pull-Diag is fundamentally different from Push-Sum.** Pull-Diag introduces stronger nonlinearities and disrupts the linear structure exploited in Push-Sum analysis. It is not a dual or transpose of Push-Sum, so new analytical tools are needed. See Lemma 12. ii. **New error terms.** Liang et al. only handle consensus error. In our case, we must additionally control descent deviation, which arises from the interaction between Pull-Diag and gradient tracking.
This term is new in nonconvex settings and is analyzed via Lemmas 2 and 3. iii. **Non-uniform gradient tracking**: In Push-DiGing, all gradients carry the same weight, allowing standard $L$-smoothness arguments. In our method (Eq. 5b), the gradient coefficients vary, breaking this symmetry. We introduce Lemma 4 to handle this non-uniformity. iv. **Inversion of small values.** Pull-Diag involves inverting diagonal entries that can be arbitrarily small, making it hard to control bounds. In Lemma 7, we show that MG guarantees a positive lower bound on these values, allowing us to derive clear convergence rates in terms of $\kappa$ and $\beta$. While this matches a conclusion in Liang et al., the underlying logic is different. In a nutshell, our work provides new analytical tools to handle the unique challenges introduced by Pull-Diag and Row-Only settings. These tools can be extended to analyze other algorithms in similar settings. **Essential References Not Discussed** We will discuss connection failure (Yemini et al., 2022; Li et al., 2024) and include more recent literature on Row-Only algorithms (Jeong and Kountouris, 2024; Nguyen et al., 2023; Xing et al., 2024). **Other Comments Or Suggestions** 1. The problem in Figure 1 is ''achieving average consensus''. Initially, every agent $i$ has a vector $z_i$. Agents communicate with their neighbors and all want $\frac{1}{n}\sum_{i=1}^n z_i$ in the end. The $y$-axis denotes the distance between their current state and $\frac{1}{n}\sum_{i=1}^n z_i$. This can be seen as a strongly convex optimization problem. We plot Figure 1 to show the impact of $\beta_A$ and $\kappa_A$ on Pull-Diag, thereby justifying their use in characterizing row-stochastic matrices. However, we do not use Figure 1 to support our theoretical results in non-convex settings. 2. Our abstract will be modified as: ... deviation in the descent direction caused by the adapted gradient tracking (GT) and instability introduced by the PULL-DIAG protocol. 3.
Under Assumption 1, an equivalent statement of Assumption 4 is: $A$ has a positive diagonal (every node has a self-loop). This equivalence can be derived using Lemma 7. We will include this in a future version. 4. $(i,j) \in E$ represents an edge from $i$ to $j$. **Questions For Authors** For Question 1, - Our work differs from Liang et al. (2023) in both setting and analysis. Please see our response under **Relation To Broader Scientific Literature**; a more detailed version can be found in our response to Reviewer zni5. - The key novelty is that our analysis framework is more general. Liang et al. analyze an **exact** descent method, while our method addresses **inexact** descent with disturbances. A detailed explanation of this distinction can also be found in our response to Reviewer zni5. - We have discussed this distinction in our paper, pages 5 and 6, and a further clarification will be included in the Appendix due to page limits. For Question 2, - Yes. We will use ''near-optimal'' in the title in a future version.
Probably Approximately Global Robustness Certification
Accept (poster)
Summary: The authors propose an algorithm that extends local robustness certification techniques to the entire input space with probabilistic guarantees. To achieve this, they introduce a novel approach for quantitatively characterizing a Deep Neural Network's (DNN's) robustness across the input space. Specifically, their method establishes high-probability lower bounds on robustness for each input point using only i.i.d. samples and access to a local robustness oracle. The authors leverage existing local robustness testing oracles, such as adversarial attacks like PGD and formal local robustness verifiers, to determine the robust radius around a single sampled data point. ## Update after rebuttal I thank the authors for their response. I have decided to maintain my scores and remain on the fence about this manuscript, as I see a mismatch between the theory and the proposed algorithm. The test dataset—or the samples obtained by perturbing it—may not truly represent i.i.d. samples from the global input distribution. Furthermore, relying on the test dataset inherits the same criticism faced by local robustness verification: it does not account for valid inputs (e.g., valid images of an airplane) that differ from those in the test set. Since the test dataset may not be a representative sample of all valid inputs for a given class, global robustness remains an important and unsolved problem. Claims And Evidence: **Strengths** - The overall approach is mathematically well-motivated, and the use of $\epsilon$-nets to probabilistically characterize the DNN's robustness across the input space is an interesting theoretical idea. **Weaknesses** - The theoretical results and the proposed approach assume access to a sampling technique that can generate i.i.d. samples from the data distribution (e.g., MNIST/CIFAR10 images).
However, the authors do not clarify how they ensure that their sampling method truly provides i.i.d. samples from the entire MNIST or CIFAR10 distribution. Instead, they rely on dataset samples (or dataset samples perturbed with Gaussian noise), meaning the approach inherits the same fundamental limitation as local robustness methods (**dependence on the dataset**). Consequently, the guarantees remain dataset-dependent, which undermines the paper’s motivation of generalizing robustness guarantees to the entire input distribution. - Building on the previous point, the proposed method appears to be a modification of **robust accuracy**, which measures the percentage of $L_{\infty}$ regions around test data points that are verified as robust in local robustness studies. - In Section 3.1, the class of the sample $x$ is defined as $\text{class}(x) = \arg\max_{i \in [1, n]} f(x)_i$. Please correct me if I am wrong, but under this definition, even if the classifier $f$ consistently predicts the wrong class (one that does not match the ground truth) with high confidence, it would still be considered robust. Is this intentional? If so, does this mean that a classifier with very poor accuracy (making incorrect predictions that do not match the ground truth with high confidence) could still achieve a high robustness measure according to the proposed method? Methods And Evaluation Criteria: See the questions from the previous section and additionally: - Exactly computing Eq. (30) is an NP-hard problem. Formal verifiers can compute a valid lower bound, while adversarial attacks provide an upper bound. However, both bounds can be imprecise. The authors should report both bounds to present a clearer picture. - In lines 40–43, the authors claim: *"That is, for each point in the input space we are able to give a high-probability lower bound for its robustness."* However, in the CIFAR10 experiments, they use only adversarial attacks, which compute an upper bound for Eq.
(30)—a bound that can be significantly far from the optimal. Doesn't this contradict the claim made in lines 40–43? Please let me know if I am misunderstanding something. Theoretical Claims: I have not carefully verified the detailed proofs in the Appendix. However, I have reviewed the proofs in the main paper and skimmed through those in the Appendix. They appear to be correct to me. Experimental Designs Or Analyses: The experiments are limited to a single standard and adversarially trained MNIST and CIFAR10 network. Additionally, I have some concerns regarding the robustness oracles used—see `Methods and Evaluation Criteria`. Once the authors clarify these doubts, I may have follow-up questions regarding the experimental design and analysis. Supplementary Material: I have skimmed through the proofs and additional experimental design discussed in the Appendix. I believe the MNIST DNN used in the experiments is relatively small. I recommend that the authors use standard benchmarks and networks from the most recent works [1, 2] to ensure a more robust evaluation. [1] *"Scalable Neural Network Verification with Branch-and-Bound Inferred Cutting Planes,"* NeurIPS, 2024. [2] *"Relational Verification Leaps Forward with RABBit,"* NeurIPS, 2024. Relation To Broader Scientific Literature: I believe global robustness is a crucial problem for ensuring the safety of DNNs across the entire input distribution. However, my primary concern is that the guarantees obtained in this work, like those in local robustness studies, are essentially tied to the test dataset. Essential References Not Discussed: I believe the authors have overlooked some of the most recent works on local robustness and local hyper-property verification (I referenced two from NeurIPS 2024 in the previous sections). There is also minimal discussion on GPU-accelerated Branch and Bound (BaB) based complete verifiers. 
I think the authors should consider using them for the CIFAR10 DNN experiments, which would help them apply formal methods to CIFAR10 networks as well. Other Strengths And Weaknesses: Please refer to the `Claims And Evidence` and `Methods And Evaluation Criteria` sections. Other Comments Or Suggestions: I would suggest that the authors clarify the theoretical results in Sections 4 and 5, distinguishing between their original contributions and results derived from existing works or modifications of well-known results. This will make the paper much easier to read. For instance, Proposition 4.2 follows directly from Definition 3.5, and (I may be wrong, but) Lemma 4.3 appears to stem from existing results. Please explicitly mention these sources and highlight any specific extensions or modifications made to the existing results. Questions For Authors: Please refer to the `Claims and Evidence` and `Methods and Evaluation Criteria` sections. I am open to increasing my evaluation if my concerns regarding dataset dependency and the similarity to local robustness are adequately addressed. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you very much for your thorough and insightful review. We address your comments in the following. **Dependence on the dataset**. We lift local robustness statements about a finite dataset to a (probabilistic) global robustness guarantee over the data distribution, i.e., to a robustness guarantee that generalizes with high probability. Hence, our approach directly addresses the dataset dependence of local robustness methods. For our theoretical statements, we indeed rely on the common assumption that we can get *unlabelled* i.i.d. samples from the data distribution. In our experiments, our sampling procedure may only approximate the data distribution. However, our evaluation shows that this approximation is sufficient to let our guarantee generalize to unseen data. This setup does not undermine our theoretical findings but rather demonstrates that our theory works in practice even in imperfect settings. This is also supported by Theorem 4.6. **Robust accuracy** is an empirical measure of model accuracy and local robustness. Given a set of points, robust accuracy quantifies, for these points *only*, how often a perturbation leads to misclassification. Our method is fundamentally different. Our approach does not aim to measure robustness directly but rather formalizes the conditions for generalization of robustness from a sample to the distribution. For this, we can provide guarantees for *any* given local robustness oracle, including one based on robust accuracy. **Robustness vs Accuracy.** We do not consider robustness and accuracy to be necessarily related. This is consistent with the literature on robustness verification (e.g. [a] from `gsc4`). If ground truth labels are available to the local robustness oracle, then our theory can accommodate a notion of robustness that considers accuracy as well. **Robustness lower-bound (Eq. 30)**.
We use the expression “robustness lower-bound” to refer to the smallest distance to a counter-example reported by a chosen oracle $\mathbf{rob}_f$ on our sample $N$. For each point in the sample, the oracle reports a local robustness radius and in Eq. 30 we take the minimum of these reported values. Our global robustness guarantees are provided with respect to the attack scenario (e.g., a PGD attack) specified by the chosen robustness oracle, and Eq. 30 provides a valid lower-bound for this chosen scenario. While PGD may over-approximate the distance to the class boundary, if a PGD oracle is used then our method provides *valid guarantees* for adversarial robustness against PGD attacks (Fig. 4). We report lower bounds for both an adversarial and a formal oracle on the same data in Fig. 4 and 5. Compared to PGD, a formal verifier can provide more conservative bounds (Fig. 5). **Adversarial Attacks & Lower bounds (Line 40-43)** Our guarantees are provided with respect to a specific oracle and not necessarily with respect to the exact distance to the class boundary. Hence, our results for PGD are high probability robustness lower bounds for the distance required to reach an adversarial example using the PGD attack. We obtain these lower bounds by *exact computation of Eq. 30* for our sample $N$ with our chosen robustness oracle. **Experiments.** We have performed new experiments with larger architectures, including `convBig` [2], and with a new robustness oracle using the LiRPA library adopted by the CROWN verifiers. We provide details in the response to reviewer `md7J`. Both CIFAR and MNIST are commonly used benchmarks in the robustness literature [1, 2]. The additional formal verification benchmarks in Zhou et al. [1] are out of the scope of our work as they do not consider a notion of robustness comparable to ours. In our experiments, we investigate networks of comparable size to [1,2] and often achieve better accuracy. 
**Other tools.** We run more experiments using recent libraries from the $\alpha$-$\beta$ CROWN tool (please see the reply to `md7J`). **Theoretical results**. Thank you for the pointers; we will revise the section accordingly to highlight our contributions. Proposition 4.2 does follow from Definition 3.5. Lemma 4.3 is a small technical result stemming from rank statistics that, to the best of our knowledge, does not directly follow from existing results. However, we would be happy to cite any other relevant literature you are aware of. We hope we have addressed your concerns and thank you again for your detailed review. We are very happy to engage in further discussion if you have more questions. --- Rebuttal Comment 1.1: Comment: I will maintain my grades due to the following concerns and questions: - Assuming the samples generated by the authors are i.i.d. from the global data distribution, can the average robust accuracy (i.e., the sample mean) reported in local robustness studies serve as an approximation of the global average robust accuracy, provided the sample size is sufficiently large? - My main concern is that there is a mismatch between the theory and the actual experiments—specifically, that the test dataset, or the samples obtained by perturbing it, may not truly be i.i.d. samples from the global input distribution. Isn't this the key distinction between local and global robustness—where the goal is to assess robustness on entirely unseen data, which could be significantly different from the test dataset? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading our rebuttal and engaging with us on the paper's methodology. In our experiments, we split the datasets into three parts: training, sampling and test. The *training* split is only seen during the NN training. We sample from the sampling split with Gaussian noise to obtain our guarantees. The sampling split is the only split where Gaussian noise is used.
**The test split is the official test dataset for MNIST and CIFAR10 respectively, without any perturbation or modification.** The test split is **unseen** by the training procedure and the procedure to obtain guarantees. The test split is used only to assess how our guarantees transfer on entirely unseen data. > “Isn't this the key distinction between local and global robustness—where the goal is to assess robustness on entirely unseen data, which could be significantly different from the test dataset?” Our experiments assess the robustness on test data that is **entirely unseen** and **adheres to the input data distribution**. The key idea of “global robustness guarantee” is that the guarantee generalizes to unseen/test data. Our method formalizes how many local assessments need to be performed to infer such a global robustness guarantee for a **fixed input data distribution**. If test data were, instead, sampled from a **different distribution** with known total variation distance from the one used to obtain our guarantees, then Theorem 4.6 captures this scenario. If the test data is sampled from an ***arbitrarily different* distribution**, and we therefore have data which is arbitrarily different from the data we used to obtain our guarantees, then no statistical statement can be made, neither by our approach nor by any other learning theory or statistics based methodology. Our notion of generalization from a sample to its fixed but unknown distribution is similar to the setting in PAC-learning [1,2] and also is similar to the intuition of your following question > “can the average robust accuracy (i.e., the sample mean) reported in local robustness studies serve as an approximation of the global average robust accuracy, provided the sample size is sufficiently large”, This is indeed possible. And our approach gives a bound on “sufficiently large sample size” for our setting. 
However, the nature of the guarantee we introduce is completely different from (and more useful than) a global average. First, robust accuracy is just one choice of a local robustness metric; **our guarantee works with any local notion of robustness.** Second, **the average robust accuracy (or the average of any local metric) cannot be used to distinguish between the robustness of two points with different confidence values at test time.** We provide a high-probability guarantee of a point’s robustness **given its prediction confidence**, which holds for the entire distribution and is much more nuanced than a global average. We thank the reviewer again for their analysis of our contribution, and we will be happy to further engage or clarify any queries. [1] L. G. Valiant. A Theory of the Learnable. 1984. [2] Mitzenmacher, M. and Upfal, E. Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis, Second Edition, 2017 [Chapter 14, Sec. 14.4].
Summary: The authors propose a probabilistic framework to evaluate global robustness in neural networks. Global robustness for a neural network is defined such that a neural network is globally robust if it is robust at all confident predictions. The approach relies on $\epsilon$-nets and is evaluated on the MNIST and CIFAR-10 datasets. Claims And Evidence: Some of the claims are not accurate. For instance: - The authors claim that verification-based techniques for neural networks are limited to small networks, by which I assume they mean networks with a few hundred parameters. However, techniques such as CROWN have already scaled to larger networks than those considered in this paper [Wang, Shiqi, et al. "Beta-crown: Efficient bound propagation with per-neuron split constraints for neural network robustness verification." Advances in neural information processing systems 34 (2021): 29909-29921.]. Also, techniques like randomized smoothing [Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. "Certified adversarial robustness via randomized smoothing." international conference on machine learning. PMLR, 2019.], which could also provide guarantees (even if with some confidence), have already scaled to IMAGENET. Methods And Evaluation Criteria: The evaluation performed by the authors makes sense. However, I miss an empirical comparison, for example, in terms of the number of required samples, with comparable methods, such as [Webb, S., et al. "A statistical approach to assessing neural network robustness." Seventh International Conference on Learning Representations (ICLR 2019). International Conferences on Learning Representations, 2019], which seems to be applicable to the same specification. Theoretical Claims: Theoretical claims appear correct to me. Experimental Designs Or Analyses: While the experimental analysis seems sound, as mentioned above, I miss a comparison with comparable methods.
Also, it would be important to see how the number of required samples changes with the confidence required. Supplementary Material: I did not check the Supplementary Material in detail. Relation To Broader Scientific Literature: Adversarial verification of neural networks is an important problem, and the probabilistic global notion of robustness considered here is surely important. However, as similar statistical approaches based on samples from the data distribution have already been used for similar problems, it would be important to have at least an empirical comparison with the state of the art to be able to judge the importance of the results. Essential References Not Discussed: Most of the key references are mentioned, but, as I mentioned above for the Webb, S., et al. case or the literature in formal verification, some of them are not discussed properly. Other Strengths And Weaknesses: As mentioned above, it is difficult to evaluate the strengths of the paper without a proper comparison with the literature or more extensive experiments to see how the required number of samples changes. Other Comments Or Suggestions: N/A Questions For Authors: Apart from the ones mentioned above in terms of comparison with Webb, I have the following ones: - How do you compute the VC dimension d for your problem with the various neural networks? - Can your result also be applied if the data distribution support is unbounded? - As some of your results are based on the fact that all samples are robust, how does your complexity in terms of the number of required samples change if the true robustness probability is not 100%, but say 99%? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you very much for your review and comments. We address them below. **Size of the certified NNs.** Techniques like CROWN and randomized smoothing do indeed scale up to large networks for *local robustness* certification. However, our work is concerned with using local robustness checks to provide *global robustness guarantees*. The per-example verification time reported by Wang et al. is large for our application, as meaningful global guarantees require 10K-100K local robustness checks. Approaches that explicitly target global robustness (such as the ones referred to by reviewer `gsc4`, [a, b, c]) consider significantly smaller networks, with a number of parameters in the order of a few hundreds to a few thousands. Our approach scales significantly better than these approaches. **Randomized smoothing** does not address global robustness but provides certifications for a smoothed version of the original classifier around a given point. For certification, randomized smoothing can only provide local robustness guarantees, whereas we are interested in global robustness guarantees conditioned on the prediction confidence, that provably generalize to unseen data. **Comparison to Webb et al.** Our objective is fundamentally different from that of Webb et al. They provide a *statistical estimate* for the probability of failure with respect to a given robustness property (as a boolean criterion, see eq. (2) in Webb et al.), with no guarantee on the soundness of the estimate. In contrast, we provide a high-probability *global guarantee* on robustness (as a metric) conditioned on the prediction confidence (eq. 24). Furthermore, the sample complexity of our approach does not depend on the dimensionality of the problem, and can be computed a priori (eq. 7). In contrast, Webb et al. provide no formal sample complexity bounds.
For any point they consider, they require a number of Metropolis-Hastings transitions that may increase with the dimensionality of the problem. Moreover, Webb et al. rely on a problem-dependent termination condition. We will add this discussion in the related work section. **Number of required samples.** The sample size required for our guarantees does not depend on the prediction confidence of the classifier and is instead fully captured by Equation 6 and 7. That is, the sample size depends only on the choice of parameters $\epsilon$ and $\delta$, and scales as $\mathcal O(\frac{1}{\epsilon}(\ln\frac{1}{\epsilon}+\ln\frac{1}{\delta}))$. This is one of the strengths of our approach, as the sample size does not depend on the dimensionality of the problem (see the remarks on the VC dimension below). **VC Dimension.** The VC dimension $d$ of the quality space is always equal to 2, for any given neural network, as it only depends on the property (i.e., robustness) we investigate. This is one of the main strengths of our approach, as the VC dimension of our problem does not depend on the choice of the learning algorithm, or on the specifics of the input data. The range space we consider is always constituted by the intersection of two axis-aligned half spaces in the 2-dimensional quality space, that is, it is constituted by the shaded regions in Figure 1. **Unbounded distribution support.** Yes, our method does not rely on the assumption that the data distribution is bounded. All our statements are non-parametric in nature. We do not rely on any specific notion of locality in the input space, besides the ones required by the local robustness oracle. **100% vs 99% robustness.** If we understand your question correctly, we want to clarify that the method does not depend on the fact that 100% of the observed samples are robust, and the sample complexity is not affected by this (see also above). 
Our method relies on the fact that if enough points ($\epsilon$-net many) are sampled and the minimum robustness value among them, for a given confidence $\kappa$, is found to be $\rho$, then we can say that with high probability a new point with confidence at least $\kappa$ will be at least $\rho$-robust. Our approach guarantees that such a statement indeed generalizes to test data. We hope our comments addressed the questions you raised. We are happy to engage in further discussion.
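The mechanism the rebuttal describes (the certified radius at confidence κ is the minimum robustness among sampled points with at least that confidence) can be mimicked in a few lines. This is our reading of the mechanism with made-up sample values, not the authors' implementation:

```python
# Sketch of the conditional guarantee: the certified radius for a query
# with confidence >= kappa is the minimum robustness value observed among
# epsilon-net samples whose confidence is at least kappa.
# The (confidence, robustness) pairs below are invented for illustration.
samples = [(0.60, 0.01), (0.75, 0.03), (0.90, 0.05), (0.99, 0.08)]

def certified_radius(kappa, samples):
    eligible = [rho for conf, rho in samples if conf >= kappa]
    return min(eligible) if eligible else 0.0

# More confident predictions receive a (weakly) larger certified radius.
assert certified_radius(0.95, samples) >= certified_radius(0.70, samples)
print(certified_radius(0.80, samples))  # -> 0.05
```

Once the mapping is built from the sample, answering a query is a lookup; no further sampling or verification is needed, which is what makes the guarantee "global".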
Summary: This paper introduces a novel method for certifying the global robustness of neural networks (NNs). The method employs a sampling procedure to create an $\epsilon$-net, which is used along with a local robustness verifier to provide probabilistic guarantees on the robustness of the model, depending on its prediction confidence at a given point. Once the sampling procedure is performed, the approach can provide robustness guarantees for every point in the feature space without requiring the sampling procedure to be run again; in this sense, it provides global robustness guarantees. The approach assumes access to the true data distribution of instances in the feature space or a proxy of it (e.g., approximated with Gaussian noise). The experimental evaluation considers a feed-forward NN on MNIST and a ResNet-20 on CIFAR-10, demonstrating that the theoretical guarantees provided by the approach are also confirmed in practice. In particular, the probabilistic robustness guarantees hold with high probability in practice. Claims And Evidence: The claims of the paper are sufficiently supported by mathematical proofs and experimental evidence. The experimental evaluation convincingly shows that the theoretical bounds provided by the approach are reflected in practice. However, there are a few small points that the authors should discuss more in depth, in particular the approximation of the data distribution used by their approach and the influence of the sampling procedure on the total runtime of the method. **Update after the authors' response**: the authors have satisfactorily answered my questions and clarified the minor points. Methods And Evaluation Criteria: The considered models (a feed-forward neural network and ResNet-20) and datasets (MNIST and CIFAR-10) are reasonable and in line with or larger than the ones considered in previous related work (like [a]). 

Moreover, the chosen evaluation criteria are comprehensive for assessing that the provided guarantees are satisfied, and they describe the behavior of the networks well. [a] Athavale et al., Verifying global two-safety properties in neural networks with confidence, CAV, 2024. Theoretical Claims: I checked the proof of Proposition 4.2, which serves as a building block for bounding the joint probability that the classifier provides a prediction above a certain threshold without being robust on the sample. I also checked the proofs of Lemma 4.3, which establishes a lower bound on the probability of obtaining a given (or bigger) prediction confidence by the classifier, and Lemma 4.5. Experimental Designs Or Analyses: The experimental results presented in the paper support the claim regarding the validity of the guarantees on global robustness provided by the proposed approach. Figure 3 shows that the test data closely follows the mapping used by the approach to relate confidence values to lower bounds on robustness. Additionally, Table 1 demonstrates that, in the majority of runs, the global robustness guarantees provided by the approach hold even for unseen test data. However, there are some exceptions where this may not hold, and the authors have detailed the reasons behind these cases. Supplementary Material: The sections of the Appendix have not been checked, with the exception of Section D (Additional Results). Relation To Broader Scientific Literature: This work is closely related to [a], since both address the global robustness property while considering the confidence of the verified model, i.e., global robustness is verified for inputs where the model exhibits a certain level of confidence in its predictions. While the approach in [a] proposes a method that provides deterministic guarantees on global robustness, the method presented in this paper offers probabilistic guarantees. 
Although probabilistic guarantees are looser than deterministic ones, this choice enables the proposed method to scale to larger models and leverage standard verifiers for local robustness. Other definitions of global robustness and verification methods have been proposed in [b], [c], and [d], since there is not a one-size-fits-all definition of global robustness in the literature at the moment. The authors should also discuss these approaches in relation to the ones they consider. [a] Athavale et al., Verifying global two-safety properties in neural networks with confidence, CAV, 2024. [b] Kabaha and Cohen, Verification of Neural Networks’ Global Robustness, Proc. ACM Program. Lang., 2024. [c] Wang et al., Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding, IJCAI, 2023. [d] Calzavara et al., Beyond Robustness: Resilience Verification of Tree-Based Classifiers, Computers & Security, 2022. **Update after the authors' response**: the authors have satisfactorily answered my questions and discussed the suggested references. Essential References Not Discussed: There are no essential references omitted by the authors. However, they could make their discussion of related work more comprehensive by considering additional relevant approaches (see the _Relation To Broader Scientific Literature_ section). **Update after the authors' response**: the authors have satisfactorily answered my questions and discussed the suggested references. Other Strengths And Weaknesses: ## Strengths - This is the first approach that provides probabilistic approximate guarantees on the global robustness of neural networks, which can also scale to larger networks than those considered in previous work. - The adopted notion of global robustness is intuitive and reasonable. - Good theoretical foundation, including proofs of the statements that support the approach. - The experimental evaluation is convincing. 
## Weaknesses - The approach works with both the original data distribution of the samples in the feature space and an approximation of it. However, the quality of the approximation used in the experimental evaluation is not clearly discussed. - Missing details on how the sampling procedure influences the runtime of the approach. - The related work discussion can be improved. **Update after the authors' response**: the authors have satisfactorily addressed these weaknesses. Thus, I will raise my score. Other Comments Or Suggestions: The authors should consider strengthening the discussion of related work by including works [a], [b] and [c] and discussing how the verification approach and/or the global robustness property considered in the current paper differ from those proposed in these three works. [a] Kabaha and Cohen, Verification of Neural Networks’ Global Robustness, Proc. ACM Program. Lang., 2024. [b] Wang et al., Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding, IJCAI, 2023. [c] Calzavara et al., Beyond Robustness: Resilience Verification of Tree-Based Classifiers, Computers & Security, 2022. **Update after the authors' response**: the authors have satisfactorily discussed the suggested references. Questions For Authors: - In Section 6 (Experimental evaluation), the authors state that `we discuss how the data distribution $D$ can be approximated sufficiently well for our purposes with Gaussian noise.` This point is important for the inner workings of the proposed certification approach, but I do not see any comments or results highlighting that the data distribution is well approximated. Could you please provide a comment on this and offer evidence? Have I missed something? - The proposed approach requires sampling a large number of instances to obtain global robustness guarantees, e.g., 21,892 instances for the MNIST dataset. How does this step impact the total runtime of your approach? 
For which specific steps is the runtime reported in Tables 2, 3, and 4 computed? **Update after the authors' response**: the authors have satisfactorily answered these questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough review and for the interesting and relevant references. **Quality of distribution approximation.** In the paper, "sufficiently well for our purpose" is intended to reflect that such an approximation is able to provide guarantees that generalize to unseen data. To avoid any misunderstanding, we will further elaborate on this in the paper. Getting enough representative samples from the true data distribution can be challenging in the real-world. However, our experiments demonstrate that the simple sampling procedure we use allows us to obtain global guarantees that consistently generalize to unseen data, despite possible deviations introduced by the sampling procedure itself. The fact that our (Gaussian-noise-based) sampling procedure is sufficient substantiates the practical effectiveness of our approach. We also refer you to our answer to a similar question of reviewer `8m6m`. **Impact of sampling on runtime.** The influence of the sampling procedure itself on the runtime is negligible compared to the time spent performing the local robustness checks with the oracle. Overall, the runtime of the approach is linear in the sampling complexity in Equation 7 which, in turn, only depends on the chosen parameters $\epsilon$ and $\delta$, and not on the characteristics of the learning algorithm used or on the input dimensionality. **Runtimes in Tables.** Thank you for pointing this out, we will make it more explicit in the tables. The runtime reported is the runtime to produce our guarantees from the set of already sampled points, and thus includes evaluating their class, confidence and robustness according to the local robustness oracle. The sampling procedure itself is fast (in total around 10s for the CIFAR10 samples containing ~700k data points), as the vast majority of the runtime is spent in checking robustness. The runtime of this local robustness check heavily depends on the specific oracle chosen. 
The large runtime in Table 3 is determined, for instance, by the large computational requirements of the Marabou verifier. We have performed additional experiments on MNIST using the LiRPA library used by $\alpha$-$\beta$ CROWN as a local robustness oracle, which significantly improves the total runtime (see reply to `md7J`). **Different definitions of robustness.** The definition in Kabaha and Cohen [b] essentially corresponds to the definition we use, as global robustness is defined as robustness for those points which are classified with large enough (margin-based) confidence. The notion discussed by Wang et al. [c] is a sensitivity-based notion of global robustness. Their definition is not specifically tailored to a classification task, as it does not consider a notion of class explicitly. We think our notion is more convenient for a classification task, as our method allows us to provide statements about the robustness conditioned on a specific confidence value. Rather, the notion in Wang et al. [c] captures a notion of function smoothness over the whole input space. Somewhat similarly, the definition in Calzavara et al. [d] considers global robustness as stability of the output on a subset of the input features, together with a label-based notion of robustness. We think these definitions are equally viable for their respective tasks, with our specific choice simply being a natural definition for the (classification) problem at hand. Our definition only imposes constraints on the behavior of the model for changes that affect the output class, and it is thus a more interesting notion, compared to function smoothness, for classification tasks. This class-based setting also gives us access to prediction confidence as a natural indicator of robustness, as both softmax and margin-based confidence are proportional to the required changes in the output space to change the class. 
We thank you again for the additional references that we will discuss and compare in more detail in a revised version of the manuscript. Please let us know if we can provide any further details.
Summary: This paper tackles the problem of constructing a method for estimating the global robustness of an NN (or other function) that comes with a formal guarantee. The authors take the approach of probabilistically relaxing the definition of robustness and producing a probabilistic certificate of global robustness. The size of the sample required for the certificate is independent of the input dimension, number of classification classes, etc., which is particularly important for scaling to e.g. deep NNs. Claims And Evidence: Claims of robustness of the certificate are supported by theoretical analysis, see Theorems 4.4 / 4.6. Methods And Evaluation Criteria: Experiments make sense for the problem at hand. There are ablation studies showing that more confident predictions tend to be more robust under the method proposed in the paper, and that there is a strong dependence between prediction confidence and robustness. Theoretical Claims: Yes, checked up to Theorem 4.6. Experimental Designs Or Analyses: Experiments are sound but could be more impressive and demonstrate scaling, even with a single consumer-grade GPU. MNIST and CIFAR-10 are quite small for 2025 (although no doubt useful for the computationally intensive experiments using formal evaluation). Supplementary Material: I read Section A containing the main proofs. Relation To Broader Scientific Literature: This work is particularly connected to prior work that defines NN robustness as a probability, rather than work relating it to formal verification or distance to adversarial examples. Prior work has shown methods that can estimate probabilistic robustness either at a given point or globally, but these methods have not had a formal guarantee of robustness. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Typo in Line 162, right column: "For this, we devise on a" => remove "on". Questions For Authors: Could you please provide additional results showing how the method scales to larger input dimensions and model sizes? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your comments. As you suggested we added additional experiments to demonstrate the potential of our approach. In our reply, we also address the questions about our experimental evaluation from other reviewers. We first point out that **the goal of our experiments is to show that our robustness guarantees indeed generalize to unseen data**. Our approach’s complexity scales linearly with the complexity of local robustness checks. We investigated the additional oracles and models proposed by the reviewers. ## New experiments with larger architectures We focus on MNIST and CIFAR-10, as these are commonly used in related verification literature (as in [1, 2] referred by `8m6m`). To address scaling, we have performed additional preliminary experiments with bigger architectures for adversarial as well as formal verification. We run additional experiments with: * a VGG network (vgg11_bn [e]), a convolutional architecture with ~10 million parameters, on CIFAR10 with a PGD oracle; * ConvBig, a convolutional network used in the references provided by reviewer `8m6m`, on MNIST with a LiRPA (from αβ-CROWN) oracle; * the feed forward (FF) network on MNIST (as previously used in the manuscript), with a LiRPA (from αβ-CROWN) oracle. 
| Architecture | Training | Oracle | $\epsilon$ | $p_{\min}$| $\delta$ | $\lvert N\rvert$ | $\lvert M\rvert$ | $\lvert\mathbf x_{\kappa\leq\kappa_{\max}}\rvert$ | $n_c$ | $\hat p_\kappa$ | verification runtime (s) | accuracy | |--|--|--|--|--|--|--|--|--|--|--|--|--| | vgg11_bn | Standard | PGD | $10^{-4}$ |$0.01$| $0.01$ | $685044$ | 10 | 9559 | 0 | 0 | 731 | 0.9174 | | ConvBig | DIFF_AI | LiRPA | $2.5\cdot 10^{-3}$|$0.05$ | $0.01$ | $21892$ |12 | 9667 | 4 | 0.000309 | 1838 | 0.9320 | | MNIST-FF | AT | LiRPA | $2.5\cdot 10^{-3}$ |$0.05$ |$0.01$ | $21892$ | 7 | 9726 | 1 | 0.000851 | 508 | 0.9328 | We report architecture, training procedure, the robustness oracle, the size of the mapping $\lvert M \rvert$, the number of predictions for which $\kappa$ is smaller equal than $\kappa_{\max}$ denoted by $\lvert\mathbf x_{\kappa\leq\kappa_{\max}}\rvert$, the estimators $n_c$ and $\hat p_\kappa$ from Sec 6, the runtime of the verification, and the accuracy of the classifier. In all our additional experiments, our approach obtains global robustness guarantees that generalize to unseen data, as in all cases $\hat p_{\kappa}<\epsilon/p_{\min}$ and $n_c \leq \lvert D_{\text{test}}\rvert\lvert M\rvert \epsilon$. These results demonstrate that our approach works with a variety of datasets, models, and training procedures. We will include these results in the revised manuscript. ## Scalability The scalability of our method is closely tied to the scalability of the chosen local robustness oracle. If results for a single local robustness check can be computed in a time $t$, then we can expect a total runtime of $t\cdot s$, with $s$ the sample complexity obtained from Equation 7. PGD scales well in practice: the verification runtime for VGG (~12mins) does not significantly increase in comparison to our results on the different Resnet20 networks (5-20mins, depending on the average robustness of the specific network), even though vgg11_bn has about 40 times as many parameters. 
For some of our experiments with the bigger models, we cannot yet provide results for formal verification with LiRPA, as the GPU hardware requirements of the tool surpass the capabilities of our consumer GPU. With reference to the discussion with reviewer `gttu`, we also want to point out that while CROWN verifiers can verify properties on larger networks, the per-sample verification time as reported, for instance, in Wang et al., is still prohibitive to provide global robustness guarantees. Our experiments show that we obtain good global guarantees that generalize to unseen data for relatively smaller models and input dimensions (28x28x1 grayscale images for MNIST on a smaller network) as well as for larger models and input dimensions (32x32x3 color images on a larger Resnet or VGG network). References: [e] https://github.com/chenyaofo/pytorch-cifar-models
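The two generalization criteria the rebuttal applies to its table ($\hat p_\kappa < \epsilon/p_{\min}$ and $n_c \leq \lvert D_{\text{test}}\rvert\lvert M\rvert\epsilon$) can be checked mechanically. In the sketch below, the test-set size of 10000 is our assumption (the standard MNIST test split), not a value stated in the table:

```python
# Hypothetical check of the generalization criteria from the rebuttal:
#   hat_p_kappa < eps / p_min   and   n_c <= |D_test| * |M| * eps.
def guarantees_generalize(hat_p_kappa, n_c, eps, p_min, n_test, m_size):
    return hat_p_kappa < eps / p_min and n_c <= n_test * m_size * eps

# Values from the ConvBig row of the table above; n_test=10000 is our
# assumption for the size of the MNIST test split.
print(guarantees_generalize(hat_p_kappa=0.000309, n_c=4, eps=2.5e-3,
                            p_min=0.05, n_test=10000, m_size=12))  # -> True
```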
Hyperbolic-PDE GNN: Spectral Graph Neural Networks in the Perspective of A System of Hyperbolic Partial Differential Equations
Accept (poster)
Summary: The paper introduces Hyperbolic GNN, a novel method that models a graph as a system of differential equations by leveraging the good properties of hyperbolic differential equations. It incorporates the topological structure characteristics of the graph into the message passing by modeling the differential equations, and provides a complete theoretical proof for the solution space of the constructed system. To address the computational complexity caused by Laplacian, polynomial theory is introduced to improve the original Laplacian and enhance the nonlinearity of the filter. Compared to state-of-the-art methods, the approach exhibits excellent performance on a large number of graph datasets and image datasets. This paper demonstrates a well-organized logical structure and rigorous theoretical proofs, making a significant contribution to the theoretical research on GNN within the paradigm of differential equations. Claims And Evidence: This paper presents the whole process of the proposed Hyperbolic GNN through a specific hyperbolic equation. The graph is modeled as a system of differential equations, and the coefficient matrix is completely determined by the topological structure of the graph itself. Based on the ordinary differential equation theory, the existence and spatial structure of the solution of the system are completely proved, and the original Laplacian of the system is approximated by the polynomial theory. Methods And Evaluation Criteria: The proposed approach models the graph as a system of differential equations. The coefficient matrix of this system depends entirely on the graph's properties, and then induces the corresponding solution space. Based on the solution space, the method offers a clear description of node feature changes or feature directions during message passing, significantly enhancing the interpretability of graph neural networks. 
Compared to traditional GNNs, it offers a more accurate depiction of graph node embedding spaces during message passing. Additionally, the incorporation of polynomial approximation enhances flexibility, reduces computational complexity, and ensures the provision of more adaptive nonlinear filters in practical applications. Theoretical Claims: In this paper, Theorems 3.1 and 3.3 are the core parts to prove the existence of solutions and the structure of the solution space. For the process of proof, the key is to transform the original partial differential equations into a system of homogeneous linear ordinary differential equations with constant coefficients by variable substitution, and then utilize the solution theory of ordinary differential equations. The proof is relatively clear and rigorous. Experimental Designs Or Analyses: The experiments are mostly convincible, which mainly consist of three parts: 1. The performance of Hyperbolic GNN is compared with common spectral graph networks on graph datasets. 2. Based on hyperbolic partial differential equations, it is analyzed that the base enhances the performance of traditional spectral GNNs, and the effectiveness of polynomial approximation is verified. 3. Interestingly, the designed Hyperbolic GNN is applied to image filtering experiment, and the effect is clear. The results also seem to verify that the polynomial approximation can achieve more flexible filters. Supplementary Material: I primarily focus on the proof process concerning the solution space in Appendices A.1, A.2, and A.3. The proof is mostly clear. Relation To Broader Scientific Literature: Compared to traditional spectral GNNs, this paper proposes a PDE-based model to achieve message passing mechanism. Besides, compared with the current differential equation-based paradigm, it provides better scalability and interpretability. 
Essential References Not Discussed: The work highly related to this paper is mainly spectral GNNs based on the differential equation (ODEs, PDEs) paradigm. The necessary articles have been cited in this paper. Other Strengths And Weaknesses: Strengths: - This paper is mostly clear with a well-organized structure and mathematical proofs. - The proposed method models the graph as a system of hyperbolic partial differential equations, constructing an embedding variation space for graph nodes. This effectively captures the direction of node feature changes or significant feature directions during message passing, offering enhanced scalability and interpretability. - The designed approach incorporates polynomial approximation into the Laplacian within the framework of differential equations, achieving significantly enhanced flexibility. - The experimental results across diverse graph datasets demonstrate the effectiveness of the proposed method. Furthermore, extensive experiments conducted on image datasets confirm its operational flexibility and practical applicability. Weaknesses: - The possible computational costs under different modules are not analyzed. - The performance of the constructed GNN may vary across different hyperbolic equations. - Although the forward Euler method is simple, the implicit method may work better. Other Comments Or Suggestions: Please see pros & cons. Besides, there are suggestions: - In Table 1, the unit basis vectors in Euclidean space are typically formatted as bold italics (e.g., $\bm{e}_i$). The current writing may require further consideration. - In Appendix A.2, the notation C in Equation (27) of Theorem A.2 denotes the equation system's coefficient matrix. This differs mathematically from the C in Equation (28), suggesting appropriate notational distinction. Questions For Authors: - Considering the computational cost of the different components of the analysis approach may further strengthen the paper's point of view. 
- Will different polynomials affect the performance of the proposed GNN? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **[Cons 1/Q1]**: *Computational costs under different modules.* **[Answer]**: The efficiency of the proposed method primarily depends on the choice of the spectral GNN filter. A more efficient filter leads to higher efficiency. Additionally, hyperparameters ($i.e.$, termination time $T$ and time step $\tau$) also affect the efficiency of the model. A smaller $T/\tau$ results in fewer iteration steps and higher efficiency. **[Cons 2]**: *The impact of different hyperbolic equations.* **[Answer]**: In essence, this work proposes a spectral GNN framework applicable to a broad class of hyperbolic PDEs. The specific GNN architecture for particular equations can be derived following this framework. In dynamical system modeling, second-order hyperbolic equations are most commonly encountered, characterized by real-valued eigenvalues and wave-like solution behavior. The computational performance of the constructed GNN primarily depends on the numerical methods employed. The explicit scheme induced by the forward Euler method, when combined with initial conditions, enables rapid iterative solving, whereas implicit backward schemes may result in slower computation speeds. Different hyperbolic equations may exhibit variations in numerical stability. We will further investigate potential subtle distinctions arising from varying forms of hyperbolic equations in future works. **[Cons 3]**: *The implicit method may work better.* **[Answer]**: The advantage of the forward Euler method lies in its **simplicity and efficiency**. The implicit method offers better performance but involves solving linear equation systems, incurring higher costs. Literature [1] demonstrates that the performance improvement brought by the implicit method is small, yet it triggers significant performance costs. [1] Ben Chamberlain, James Rowbottom, Maria I. Gorinova, Michael M. Bronstein, Stefan Webb, Emanuele Rossi: GRAND: Graph Neural Diffusion. ICML 2021: 1407-1418. 
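The explicit scheme discussed in the answer to Cons 3 can be sketched in pure Python (toy values, not the authors' code): a second-order system $\mathbf{X}'' = a^2 \widehat{\mathbf{L}}\mathbf{X}$ is rewritten as first order in $(\mathbf{X}, \mathbf{V})$ and iterated with step $\tau$:

```python
# Forward-Euler sketch for x'' = a^2 * L x, written first order as
# x' = v, v' = a^2 * L x. Toy 2-node operator; illustrative only.
def matvec(L, x):
    return [sum(L[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def forward_euler(L, x0, v0, a=1.0, tau=0.01, steps=200):
    x, v = list(x0), list(v0)
    for _ in range(steps):
        lx = matvec(L, x)                                # evaluate at old x
        x = [x[i] + tau * v[i] for i in range(len(x))]
        v = [v[i] + tau * a * a * lx[i] for i in range(len(v))]
    return x

# A Laplacian-like operator with zero row sums: the explicit scheme
# conserves the feature mean across iterations.
L = [[-1.0, 1.0], [1.0, -1.0]]
x = forward_euler(L, x0=[1.0, 0.0], v0=[0.0, 0.0])
print(sum(x))
```

Each step costs one sparse matrix-vector product, which is why the explicit scheme is cheap; an implicit scheme would instead solve a linear system per step.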
**[Q2]**: *The effect of different polynomials.* **[Answer]**: As shown in Tables 4 and 5, polynomials such as Bernstein and Jacobi can fit arbitrary filters, making them superior GNNs. Chebyshev polynomials exhibit the Runge phenomenon, leading to the fitting of suboptimal filters. Since our method enhances the capability of polynomial fitting for filters, it results in significant performance improvements for Chebyshev polynomials. **[Sug]**: *Improvement of writing.* **[Answer]**: We will consider and incorporate your suggestions into the revised version. Thanks for your comments.
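The polynomial-fitting point in [Q2] can be made concrete with a hedged illustration (not the paper's implementation): a Bernstein approximation of a simple low-pass filter $h(\lambda) = (1-\lambda)^2$ on $[0, 1]$:

```python
import math

# Bernstein approximation of a filter h on [0, 1]: sampling h at i/K
# gives coefficients whose induced polynomial converges uniformly to h
# as K grows, with no Runge-type oscillation.
def bernstein_eval(coeffs, x):
    k = len(coeffs) - 1
    return sum(c * math.comb(k, i) * x ** i * (1 - x) ** (k - i)
               for i, c in enumerate(coeffs))

K = 10
h = lambda lam: (1 - lam) ** 2          # simple low-pass filter
coeffs = [h(i / K) for i in range(K + 1)]
# The pointwise approximation error is already small at K = 10.
print(abs(bernstein_eval(coeffs, 0.3) - h(0.3)))
```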
Summary: This paper introduces "Hyperbolic GNNs" (Hyperbolic Graph Neural Networks), a type of spectral graph neural network where the message passing is implemented through hyperbolic partial differential equations. The network can thus be viewed as a kind of dynamical system. The authors provide several experiments demonstrating the superior performance of the hyperbolic GNNs compared to other spectral graph networks. Claims And Evidence: Yes, it is my opinion that the claims are backed up by both theoretical and experimental evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: I have checked parts of the proofs of the Theorems and they seem logically sound and correct to me. Experimental Designs Or Analyses: No Supplementary Material: Yes, I have reviewed the proofs of the theorems which are relegated to the appendix. They seem correct to me. Relation To Broader Scientific Literature: See below Essential References Not Discussed: Here comes my main criticism of this paper. The paper introduces a type of network that they refer to as "Hyperbolic GNN". But hyperbolic graph neural networks already exist in the literature. As far as I know they were first introduced in the paper "Hyperbolic Graph Neural Networks" by Liu, Nickel, Kiela (NeurIPS 2019), arXiv:1910.12892. After reading the paper under review as well as the paper by Liu et al., I have come to the conclusion that these papers are not referring to the same kind of networks at all. As mentioned above, in the paper under review the term "hyperbolic" enters because the message passing is implemented using hyperbolic PDEs. However, in the paper by Liu et al., the term "hyperbolic" refers to the underlying hyperbolic geometry, i.e. the signature of the manifold on which the network is defined. According to my understanding these are fundamentally different. 
Moreover, there is also the paper "Hyperbolic Graph Convolutional Neural Networks" by Chami et al (Neurips 2019) which introduces a similar structure for convolutional networks. So, first of all, I think it is very strange that the authors of the present paper would introduce the name "Hyperbolic GNNs" without any type of mentioning or comparison with the previous papers on hyperbolic graph neural networks. If indeed my understanding is correct, and these are inherently different structures, then I think it would be appropriate for the authors of the present paper to change the name of their networks, and the title of their paper. The name "hyperbolic graph neural networks" is taken and they have to come up with something else. So until this issue has been resolved I cannot recommend publication of the present paper. Update after rebuttal: I approve of the authors changes and I will modify my overall assessment accordingly. Other Strengths And Weaknesses: See my comments above. Other Comments Or Suggestions: See above Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **[Cons]**: *The question about the concept of hyperbolic PDE.* **[Answer]**: We appreciate your theoretical endorsement of our approach. As you pointed out, the hyperbolic PDE in this paper differs fundamentally from the hyperbolic geometry mentioned in the literature [1,2]. The focus of hyperbolic PDEs lies in constructing the **message passing process** of GNNs ($i.e.$, $\frac{\partial^2 \mathbf{X}}{\partial t^2} = a^2 \widehat{\mathbf{L}} \mathbf{X}$) based on the theory of partial differential equations. This design can naturally map nodes into a solution space spanned by a set of eigenvectors of the graph, which improves the capability of learning complex graph filters, such as handling challenging heterophilic graphs. On the other hand, hyperbolic geometry focuses more on the **hierarchical structure** of graphs. It typically involves mapping nodes to hyperbolic space using models such as the Poincaré Ball Model ($i.e.$, $d(\mathbf{x}, \mathbf{y})=\mathrm{arcosh}(1+2\frac{||\mathbf{x}-\mathbf{y}||^2}{(1-||\mathbf{x}||^2)(1-||\mathbf{y}||^2)})$) or the Lorentz Model ($i.e.$, $\langle\mathbf{x}, \mathbf{y}\rangle=-x_0y_0+\sum^n_{i=1}x_iy_i$) to extract hierarchical relationships between nodes. The essence of the two concepts mentioned above is fundamentally different, with their focus being distinct. To avoid ambiguity for the readers, we will incorporate your valuable suggestions and modify the title as well as the method names in the next version. We will also discuss the distinctions from hyperbolic geometry in related works to avoid misunderstanding. Below are our new title and method name for modification: ***"Hyperbolic-PDE GNN: Spectral Graph Neural Networks in the Perspective of A System of Hyperbolic Partial Differential Equations"*** [1] Qi Liu, Maximilian Nickel, Douwe Kiela: Hyperbolic Graph Neural Networks. NeurIPS 2019. [2] Ines Chami, Zhitao Ying, Christopher Ré, Jure Leskovec: Hyperbolic Graph Convolutional Neural Networks. 
NeurIPS 2019. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. I approve of your changes and will update my review accordingly.
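The two formulas quoted in the rebuttal can be contrasted numerically. Below is a minimal NumPy sketch (our own illustration, not code from either paper): `poincare_distance` implements the Poincaré ball metric from hyperbolic geometry, while `wave_step` takes one central-difference step of the hyperbolic PDE $\frac{\partial^2 \mathbf{X}}{\partial t^2} = a^2 \widehat{\mathbf{L}} \mathbf{X}$ used for message passing.

```python
import numpy as np

def poincare_distance(x, y):
    """Poincare ball distance d(x, y) = arcosh(1 + 2||x-y||^2 / ((1-||x||^2)(1-||y||^2))).
    Assumes ||x||, ||y|| < 1 (points inside the unit ball)."""
    sq = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

def wave_step(X, X_prev, L_hat, a=1.0, tau=0.1):
    """One explicit central-difference step of X'' = a^2 * L_hat @ X:
    X_next = 2*X - X_prev + (tau*a)^2 * L_hat @ X."""
    return 2.0 * X - X_prev + (tau * a) ** 2 * (L_hat @ X)

# The distance depends only on the two embeddings; the PDE step
# depends on the graph through L_hat.
x, y = np.array([0.1, 0.2]), np.array([0.3, -0.1])
print(poincare_distance(x, y))
```

Note that `poincare_distance` sees only node embeddings, whereas `wave_step` propagates features through $\widehat{\mathbf{L}}$; this is exactly the distinction the rebuttal draws between the two notions of "hyperbolic".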
Summary: This paper proposes a Hyperbolic Graph Neural Network (GNN) framework based on a system of hyperbolic partial differential equations (PDEs), establishing a novel message-passing paradigm that derives topology-aware node representations by solving these equations. Supported by a solid theoretical foundation and comprehensive empirical validation, the method demonstrates both effectiveness and flexibility. This work makes a significant contribution to redefining GNN architectures and advancing the field. Thanks to the authors for the reply; this resolved most of my concerns, and I will keep my score. Claims And Evidence: The authors provide thorough theoretical proofs and experimental evidence to support their claims. Methods And Evaluation Criteria: Extensive experiments on node classification and image signal filtering tasks validate the effectiveness and robustness. Theoretical Claims: The appendix includes informative theoretical proofs. The existence of solutions to the hyperbolic PDE system is formally established (Appendix A.2), and the simplified solution space linking node features to topology is derived and analyzed (Appendix A.3). The mathematical foundations are robust and align with established principles. Experimental Designs Or Analyses: The experiments on node classification and image filtering tasks are well-designed and adhere to established practices in spectral GNN research. The selection of datasets, baselines, and evaluation protocols is appropriate, ensuring reliability and reproducibility. Supplementary Material: The Supplementary Material is reviewed. Appendix A (Theoretical Claims) and Appendix C (Experimental Analyses) are addressed in the main text. Appendix B provides relevant discussions on related work. Relation To Broader Scientific Literature: This work leverages mathematical-physical methods to address AI challenges. 
While existing graph studies often employ diffusion equations empirically, this paper particularly indicates the effectiveness of PDE-based approaches through topology-aware solution spaces, providing theoretical improvements to existing works. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. Novelty: The framework introduces a principled connection between node representations and topology via hyperbolic PDEs, providing a theoretical and novel message-passing paradigm. 2. Theoretical Interpretability: The derivation of solution spaces and their decomposition into topology-dependent components is mathematically sound and clearly articulated. 3. Significance: Beyond proposing a practical GNN, the work provides theoretical insights into PDE-based methods, facilitating deeper understanding of related approaches. Weaknesses: 1. Some of results on the node classification task remains slightly below state-of-the-art benchmarks (in Table 3). 2. A limited discussion of complexity and parameter efficiency may hinder practical performance. It would be better to conduct more analysis. Other Comments Or Suggestions: The theoretical derivations (e.g., Appendices A.2–A.3) enhance interdisciplinary accessibility. While the framework excels in spectral GNNs, its applicability to non-spectral architectures (e.g., spatial GNNs) warrants further exploration. Questions For Authors: 1. What are the practical and theoretical distinctions between different polynomial families (e.g., Chebyshev vs. Bernstein) in approximating the solution space? Which family yields optimal empirical results? 2. Could the hyperbolic framework be adapted to non-spectral GNNs (e.g., attention-based models)? What challenges might arise? 3. The framework introduces additional hyperparameters (e.g., polynomial order). How should practitioners balance expressivity and computational overhead? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **[Cons 1]**: *Slightly lower performance on some datasets.* **[Answer]**: In this paper, we aim to propose a general framework for GNNs, with the advantage of being applicable to spectral GNNs. The hyperbolic PDE-based paradigm allows the model to learn complex graph filters, generating better results on heterophilic graphs. Extensive results demonstrate the superiority, and reflect a new research direction for GNNs. We leave further improvements to future work. **[Cons 2/Q3]**: *Complexity and parameter efficiency.* **[Answer]**: Converting spectral GNNs into a dynamical system may increase the computational costs, but these can also be controlled by hyperparameters (termination time $T$ and time step $\tau$). For instance, when $T=1$ and $\tau=1$, the efficiency is comparable to the original method while performing better. Furthermore, this method introduces no additional parameters apart from the MLP parameters for initializing $\varphi(\cdot)$. Therefore, the cost and efficiency are acceptable and worthwhile. **[Q1]**: *The theoretical distinctions between different polynomial families.* **[Answer]**: Different polynomials, such as Chebyshev and Bernstein polynomials, fit filters **based on their mathematical properties**. For example, Chebyshev polynomials exhibit Runge's phenomenon, while Bernstein polynomials can fit any function. Experimental results indicate that in graph tasks, learnable GPR performs the best, whereas ChebNetII may excel in image tasks. **[Q2]**: *Applicability of the hyperbolic framework to non-spectral GNNs.* **[Answer]**: Our method is applicable to non-spectral GNNs. We can simply replace the function $P(\cdot)$ in Equation 17 with any feasible convolution operation such as attention mechanisms, as demonstrated in our enhancement of GAT on image tasks. The effectiveness of this framework depends on whether the base model can effectively adapt to the task. 
Attention mechanisms can effectively capture the relative importance between pixels; thus, Hyperbolic-GAT further improves the performance of GAT.
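The hyperparameter trade-off described above ($T$ and $\tau$, with $P(\cdot)$ swappable for any propagation operator) can be sketched as an explicit time integrator. This is our own illustrative sketch, not the authors' implementation; the function names and the toy operator are assumptions:

```python
import numpy as np

def hyperbolic_rollout(X0, propagate, T=1.0, tau=1.0):
    """Explicitly integrate X'' = propagate(X) from t=0 to t=T with step tau,
    using central differences and zero initial velocity (X(-tau) := X(0)).
    `propagate` stands in for P(.), e.g. a spectral filter L_hat @ X or an
    attention-based aggregation."""
    n_steps = int(round(T / tau))
    X_prev, X = X0, X0.copy()
    for _ in range(n_steps):
        X_next = 2.0 * X - X_prev + tau ** 2 * propagate(X)
        X_prev, X = X, X_next
    return X

# Example with a toy 2-node Laplacian-style operator.
L_hat = np.array([[-1.0, 1.0], [1.0, -1.0]])
X0 = np.array([[1.0], [0.0]])
out = hyperbolic_rollout(X0, lambda X: L_hat @ X, T=1.0, tau=1.0)
```

With $T=1$ and $\tau=1$ the loop runs exactly once, which matches the rebuttal's claim that this setting costs about the same as the base propagation operator.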
Summary: This paper proposes to formulate message passing in spectral GNNs as a system of hyperbolic PDEs by extending the concepts of gradient and divergence on manifolds to graphs. Based on this formulation, node features are shown to propagate messages along specific directions of eigenvectors and therefore better capture the topology of graphs. To improve the efficiency of the model, polynomials are used to approximate the solution. Experimental results on some graph tasks demonstrate the effectiveness of the proposed method. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I read all the proofs to get the general ideas but could not follow all the details Experimental Designs Or Analyses: Yes Supplementary Material: I reviewed the supplementary material Relation To Broader Scientific Literature: The proposed idea seems novel. It generally improves existing spectral GNNs and the interpretability of the message passing scheme Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: - The exposition is smooth and easy to follow - The proposed idea of formulating message passing as a system of hyperbolic PDEs seems novel - Various spectral GNNs are considered to demonstrate the practical benefits of the proposed method Weaknesses: - Experimental evaluation for a variety of graph tasks is somewhat limited - The proposed method only shows marginal improvements over state-of-the-art methods on the node classification task - The mathematical proof of Theorem 3.5 is not provided Other Comments Or Suggestions: Please see the questions. Questions For Authors: - The proof of Theorem 3.5 is not provided, or am I missing something? - Why are other popular graph learning tasks like graph classification or link prediction not considered? 
- Since the message passing scheme is formulated as a system of hyperbolic PDEs, I am wondering if the proposed model is advantageous over existing ones on graph datasets with strong hierarchical structures (e.g., Disease, Airport)? If that is not the case, could the authors elaborate? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **[Cons 1/2]**: *Somewhat limited experimental evaluation, and some marginal improvements on a few datasets.* **[Answer]**: The proposed method is essentially a general framework that enhances the capability of spectral GNNs, as shown in Tables 4 and 5. The results on both homophilic and heterophilic graphs, and the extensive experiments on the low-pass filter and arbitrary filter methods, consistently achieve competitive results. All the analyses comprehensively demonstrate the effectiveness of our method. On the other hand, our method also demonstrates high interpretability in connecting spectral GNNs. It provides high flexibility to implement different spectral graph filters for further improvements, and reflects a new direction of GNN studies. Thanks for the comments. We will conduct further exploration in our future works. **[Cons 3/Q1]**: *The lack of the proof of Theorem 3.5.* **[Answer]**: Theorem 3.5 is the well-known Weierstrass approximation theorem [1,2], and we will introduce more details and references in the next version. [1] Stone, M. H. (1937), "Applications of the Theory of Boolean Rings to General Topology", Transactions of the American Mathematical Society, 41 (3): 375–481, doi:10.2307/1989788, JSTOR 1989788. [2] https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem **[Q2]**: *Evaluation without graph classification or link prediction.* **[Answer]**: Actually, our method is a foundational and general framework, which is a **task-agnostic** graph learning method. The node classification task is a widely-adopted setting for evaluating the effectiveness of GNNs [1,2,3], and thus we follow this setting. The extensive results also reflect the superiority. Thanks for the suggestion; we would like to conduct more explorations on link prediction and graph classification in our future works. [1] Wang, X. and Zhang, M. How powerful are spectral graph neural networks. ICML 2022. 
[2] Geng, H., Chen, C., He, Y., Zeng, G., Han, Z., Chai, H., and Yan, J. Pyramid graph neural network: A graph sampling and filtering approach for multi-scale disentangled representations. KDD 2023. [3] Zheng, S., Zhu, Z., Liu, Z., Li, Y., and Zhao, Y. Node-oriented spectral filtering for graph neural networks. IEEE Trans. Pattern Anal. Mach. Intell., 46(1):388–402. **[Q3]**: *Comparison with methods for hierarchical structures.* **[Answer]**: Actually, our hyperbolic PDE-based GNNs and traditional hyperbolic GNNs are quite different paradigms. - Traditional hyperbolic GNNs map nodes to a hyperbolic embedding space, which is capable of capturing hierarchical structures. - This paper proposes a new paradigm (Equations 9 and 24) of message passing through **a system of hyperbolic PDEs**. This paradigm can naturally map nodes into a solution space spanned by a set of eigenvectors of the graph, which improves the capability of learning complex graph filters, such as handling challenging heterophilic graphs. Therefore, the motivations, modeling paradigms and properties are different. Please refer to #R3 for more details. Thanks for your suggestions. We will include more discussion in the next version. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their clarification that addresses my concerns. I will keep my initial score.
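The Weierstrass approximation point invoked in the rebuttal above is easy to check numerically: any continuous graph filter $h(\lambda)$ on a bounded eigenvalue interval can be approximated uniformly by polynomials, with the error shrinking as the degree grows. A minimal sketch (our own illustration; the target filter $h(\lambda)=e^{-\lambda}$ is a hypothetical choice, not one from the paper):

```python
import numpy as np

# Eigenvalues of a normalized graph Laplacian lie in [0, 2].
lam = np.linspace(0.0, 2.0, 200)
h = np.exp(-lam)  # a hypothetical continuous low-pass filter response

def poly_fit_error(degree):
    """Max uniform error of a degree-`degree` least-squares polynomial fit to h."""
    coeffs = np.polyfit(lam, h, degree)
    return float(np.max(np.abs(np.polyval(coeffs, lam) - h)))

errors = {k: poly_fit_error(k) for k in (1, 3, 5, 8)}
```

In practice, spectral GNNs parameterize such polynomials in a chosen basis (GPR, Chebyshev, Bernstein, ...); as the rebuttal notes, the basis affects optimization behavior, not the existence of a good approximation.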
Introducing 3D Representation for Dense Volume-to-Volume Translation via Score Fusion
Accept (poster)
Summary: The authors present Score-Fusion, a volumetric translation model that learns 3D representations by assembling perpendicularly trained 2D diffusion models in score function space. It can reduce the 3D training computational cost and data demand, and the results are comparable across different downstream tasks. However, limitations exist. Claims And Evidence: I have concerns about the claims, since the authors make very broad claims about “Medical Image” in general. 1. A concern is that the experiments were done on the brain only. The brain is well known to be relatively the best structured and cleanest among all organs, compared to the breast, liver, prostate, etc. 2. The experiments were done on a single modality of medical image - MRI only, without any evidence from CT, ultrasound, mammogram, etc. 3. The authors ignore the physical limitations of medical imaging. It seems like the authors are treating all medical imaging as isotropic or nearly isotropic. However, many MRI sequences do not generate results with such isotropic properties. For example, prostate MRI. T2 TSE prostate MRI suffers from 320x320x20 dimensionality, which means higher resolution in the x-y plane (320x320), and much lower resolution (20x320) in both the x-z and y-z planes. In this case, I would doubt whether the authors' methodology still works or not. Therefore, although the authors have done experiments on two different downstream tasks, the limitations regarding organ, image modality, and imaging physics leave me concerned about the generalizability of the proposed method, and the validity of the claims made by the authors. Methods And Evaluation Criteria: The concerns are stated under “Claims and Evidence” together. Theoretical Claims: Yes. About diffusion models and model fusion. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, technical details like ablation studies, etc. 
Relation To Broader Scientific Literature: A more efficient way of using 2D diffusion models to mimic 3D diffusion models' results, in the brain and in MRI specifically. Essential References Not Discussed: No Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. About generalizability, I have mentioned some of my concerns under “Claims And Evidence”; please respond to them 2. I'm not clear on the network architecture, although you mentioned it is UNet-like. I don't think you need to describe everything about the parameters you set in this network, but apparently the current description is not enough. 3. “To further enhance the speed of multi-modality fusion, we employ a smaller variant of our model, adjusting the number of channels in each layer.” I would like to learn more about this. 4. “For instance, DDMM-Synth (Li et al., 2023) suggested using both MRI and low-resolution CT scans to produce high resolution CT images. Training a separate model for each possible combination of input conditions would result in exponential time complexity, making it generally impractical …” I hold my opinion regarding the discussion here. T2 MRI -> T1 MRI image translation and T2 MRI -> CT image translation are VERY different, as MRI and CT have different physical properties. T2->T1 is within the same image modality; however, MRI->CT involves two different image modalities. From my understanding, the statement made by Li et al. 2023 was mainly because - at least you have to get some physics information from CT so that you can create an “accurate” high-resolution CT image that won't impact the diagnosis. For example, it is possible that some tiny lesions could be identified by the CT but cannot be identified by the MRI. If no CT information is provided, the translation results are useless in terms of diagnosis. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer XUPB, Thank you for your detailed and constructive feedback. Here is our response to the raised questions and concerns: ## Weakness (1-2): Generalizability beyond brain MRI: Thank you for your insightful review. We highly agree that results on datasets beyond brain MRI can significantly strengthen our paper. We evaluated our method on sparse-view reconstruction with CT data. Here are the quantitative results for sparse-view CT: | Model | PSNR | SSIM | MMD | |:---:|:---:|:---:|:---:| | TPDM | 33.78 | 0.922 | 58.2 | | Ours-TPDM| 34.53 | 0.931 | 20.78 | We perform the sparse-view reconstruction task on CHAOS[1] with 36 views. The CHAOS dataset contains CT/MRI images of the liver. Each CT volume has 78-289 slices of resolution 512x512. During training, we cropped the third axis to a common shape of 64. During inference, we adopted a simple sliding window strategy along the third axis. We resized each CT slice to 256x256, following TPDM[2]. These results on the CT data demonstrate our method's effectiveness beyond MRI, showing the generalizability of Score-Fusion to other modalities (CT). They also demonstrate our method's effectiveness on a different organ (liver). The results help support the claim of “medical image” in the title. We acknowledge that there are many other modalities, such as ultrasound, and other organs, such as the breast. Results on these datasets would also enhance our claims. Unfortunately, given the limited time during the rebuttal, we are not able to include more dataset results, and we leave these applications to future work. However, we believe it is common practice to use two modalities, especially CT and MRI, for papers targeting volumetric medical image translation, according to multiple previous works, including TPDM[2], ScoreSDE[3], DiffusionMBIR[4], etc. [1] CHAOS - Combined (CT-MR) Healthy Abdominal Organ Segmentation Challenge Data, A.E. Kavur et al [2] TPDM. 
[3] Solving Inverse Problems in Medical Imaging with Score-Based Generative Models, Song et al. [4] Solving 3D Inverse Problems using Pre-trained 2D Diffusion Models, Chung et al. ## Weakness 3: Non-isotropic medical imaging: As mentioned above, our new experiments on CT data are trained on 256x256x64 data due to the nature of the CHAOS dataset. Due to the time limit of the rebuttal, we did not get results on the fastMRI dataset yet, where the resolution is 320x320x20. __We plan to include the fastMRI results in the second round of rebuttal. We hope this can still be considered.__ As an extension of score-fusion, we tried to extend Score-Fusion for video super-resolution. We used MGLD-VSR[5] as the 3D model and used two perpendicular models on the time-space slices (i.e., (x-t) slices and (y-t) slices). Due to the capacity requirement of the video model, we are only using 5 consecutive slices in this problem, resulting in a 1280 x 960 x 5 or 2560 x 1920 x 5 resolution for the volume. This can potentially help address the concern of extreme resolution. Here are the quantitative results on the VideoLQ dataset[6]. We use DOVER[7] as a metric, which is a spatial-temporal metric for video quality assessment. | Model | DOVER($\uparrow$) | |:---:|:---:| | MGLD| 0.748 | | Ours-MGLD| 0.755 | Although the video results might be beyond the scope of the rebuttal, we would be glad if these preliminary results could help address the concern and could be included in the current paper. [5] Motion-Guided Latent Diffusion for Temporally Consistent Real-world Video Super-resolution [6] Investigating tradeoffs in real-world video super-resolution [7] Exploring Video Quality Assessment on User-Generated Contents from Aesthetic and Technical Perspectives ## Question 1: See weakness 1-2 ## Question 2-3: Network architecture: We have already included a detailed model architecture in Supplementary Material section F and Tables 10, 11, 12. 
We will emphasize this by pointing to the Supplementary Material section in the experiment section of the main paper for clarity. ## Question 4: We acknowledge that many modality translation settings are not directly useful in the medical imaging domain, such as MRI to CT, where MRI lacks certain information due to physical properties, as the reviewer mentioned. However, from a multi-modal learning perspective, efficiently training a multi-modal model by fusing knowledge from multiple single-modal models can be useful due to the difficulty of training a foundation model. We will modify the discussion on this part in the final version of our paper. Also, we want to emphasize that our problem setting (i.e., low resolution flair + T1ce => high resolution flair) does not have the issue that the reviewer mentioned. We hope the above rebuttal addresses your concerns. Again, we would like to express our sincere gratitude for taking the time to review our paper and providing insightful questions.
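The "simple sliding window strategy along the third axis" mentioned in this rebuttal is a standard inference trick for volumes longer than the training crop. A minimal sketch of one plausible variant (our assumption: uniform averaging of overlapping windows; the window and stride values are illustrative, not the authors' settings):

```python
import numpy as np

def sliding_window_infer(volume, model, win=64, stride=32):
    """Run `model` on overlapping crops along the last axis of `volume`
    and average the overlapping predictions. Assumes depth >= win."""
    depth = volume.shape[-1]
    out = np.zeros(volume.shape, dtype=float)
    count = np.zeros(depth)
    starts = list(range(0, depth - win + 1, stride))
    if starts[-1] != depth - win:  # ensure the tail slices are covered
        starts.append(depth - win)
    for s in starts:
        out[..., s:s + win] += model(volume[..., s:s + win])
        count[s:s + win] += 1
    return out / count  # broadcast over the last axis

# Sanity check with an identity "model": reconstruction is exact.
vol = np.arange(2 * 2 * 100, dtype=float).reshape(2, 2, 100)
rec = sliding_window_infer(vol, lambda v: v)
```

Uniform averaging is the simplest choice; weighted blending (e.g. Gaussian window weights) is a common refinement to suppress seam artifacts at window boundaries.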
Summary: In this work, the authors focus on using diffusion models for 3D volume-to-volume translation tasks such as super resolution and modality translation. Since 3D volumes (characteristic of medical data) are too big to run a diffusion model on computationally, the authors first train two diffusion models on perpendicular axes. Then, they train a 3D diffusion model which is able to fuse both views to reconstruct the desired volume. Claims And Evidence: Yes Methods And Evaluation Criteria: - Yes, the authors conduct experiments on tasks like super resolution and modality translation, which are common tasks in medical volume-to-volume translation. - They evaluate on metrics like PSNR, SSIM, and FID, which are standard metrics. - They evaluate both tasks on two datasets: BraTS and HCP - The authors also conduct a downstream task (medical segmentation) to justify the superior quality of their generated volumes Theoretical Claims: N/A Experimental Designs Or Analyses: - Good experiments. See the Methods section above for more details. - Is there a reason why the authors did not consider an LDM for the diffusion framework? DDPM-based models have image size constraints in 2D (the authors had to use 192x192 sizes, which is small). - The standard deviation and t-test of the results were not provided. Hence, I am unsure if the quantitative performance is statistically significant or not. Supplementary Material: Yes, reviewed all. Relation To Broader Scientific Literature: This work relates to practical medical volume generation. Existing works in the literature focus on 2D slices, which has limited applicability in real settings. Since the authors' work is in 3D, it has potential for real-world usage. Essential References Not Discussed: N/A Other Strengths And Weaknesses: My score is reduced mainly due to the comments in the Experiments section. If the authors can address this, then I can consider increasing the score after discussing with the other reviewers as well. 
Other Comments Or Suggestions: N/A Questions For Authors: none. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer vsot, Thank you for your insightful review of our work. Here is our response: ## Experimental Question 1, LDM for diffusion framework: The reason why we are not using an LDM is related to our task setting. __Intuitively,__ the main aim of super-resolution is to recover/generate the high-frequency detail from a lower resolution image. In LDMs, the high-frequency details are mainly generated by the autoencoder, which may result in worse performance in super-resolution. Therefore, we chose DDPM over LDM, in which the high-frequency detail is carefully modelled with a diffusion process, although LDM could potentially improve computational efficiency. __Empirically,__ we find that the 3D LDM's autoencoder is usually unreliable. At an early stage of this project, we tried a 3D LDM using medical diffusion[1]. Although the model can reconstruct the image with semantically correct and realistic results, the reconstruction error is already quite large, resulting in a __~25__ reconstruction PSNR. (We get a reconstruction PSNR of __26.23__ on a subset of the validation set of the BraTS dataset.) We do acknowledge there are more advanced 3D LDMs recently, e.g. MAISI[3], but they rely on much more data (more than 20 datasets combined for MAISI) to train a reliable autoencoder model, which would make an unfair comparison between models. In addition, if we used perpendicular 2D LDMs, their diffusion processes would be in different perpendicular 2D latent spaces, which makes the fusion of 2D scores difficult. Therefore, we do not use 2D LDMs such as Make-A-Volume[2]. Another reason is that Score-Fusion is the first work to explore a learning-based method for score function fusion, to the best of our knowledge. We believe that using a more standard and vanilla version of the diffusion model can provide more solid and generalizable insights in terms of score-function fusion. 
In future work, we may consider extending score fusion to latent diffusion models in a text-to-image setting, where semantic information is better supported by the LDM. We will also include the negative results for the 3D LDM and the discussion in the future version of our paper. Thank you for your insightful review. [1] Medical Diffusion: Denoising Diffusion Probabilistic Models for 3D Medical Image Synthesis. [2] Make-A-Volume: Leveraging Latent Diffusion Models for Cross-Modality 3D Brain MRI Synthesis. [3] MAISI: Medical AI for Synthetic Imaging ## Experimental Question 2, Statistical significance: We have already included the std values in the Supplementary Material in Table 5, confirming that our improvements are statistically significant. In the future version of our paper, we will show the results more directly in the experiment section for clarity. Again, we appreciate your recognition of our work's potential for real-world applications. We would like to express our sincere gratitude for taking the time to review our paper.
Summary: The authors study medical volume-to-volume translation, presenting Score-Fusion, a 3D volumetric translation model. The model is based on a fine-tuning process, which starts from an average of 2D models. The method is tested on multiple tasks on two medical datasets, being compared with a number of approaches. Claims And Evidence: The claims are supported by empirical evidence. Methods And Evaluation Criteria: The proposed solution is reasonable and efficient. Theoretical Claims: N/A. Experimental Designs Or Analyses: It is typical to perform super-resolution at multiple scales. This should be performed by the authors. Aside from this, I did not find any flaws in the experiments. Supplementary Material: Yes, read it all. Relation To Broader Scientific Literature: The topic is interesting and the method is timely. Essential References Not Discussed: There are some relevant references on medical image-to-image translation that are not acknowledged, e.g. [A, B]. [A] Haimour, Fatima, Rizik Al-Sayyed, Waleed Mahafza, and Omar S. Al-Kadi. "Bidirectional brain image translation using transfer learning from generic pre-trained models." Computer Vision and Image Understanding 248 (2024): 104100. [B] Ristea, Nicolae-Cătălin, Andreea-Iuliana Miron, Olivian Savencu, Mariana-Iuliana Georgescu, Nicolae Verga, Fahad Shahbaz Khan, and Radu Tudor Ionescu. "CyTran: A cycle-consistent transformer with multi-level consistency for non-contrast to contrast CT translation." Neurocomputing 538 (2023): 126211. Other Strengths And Weaknesses: The paper is mostly easy to follow. Other Comments Or Suggestions: Section titles are not consistently capitalised. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer AnYS, We want to thank you for your feedback noting that "the topic is interesting and method is timely" and for providing constructive suggestions for our paper. ## Experimental Question 1, multi-scale SR: Thanks for raising this point. We agree that multi-scale super-resolution is common practice. To address your concern about performing super-resolution at multiple scales and to demonstrate generalizability across super-resolution scales, we performed experiments on 2x and 8x SR. Here are the quantitative metrics. (The 4x results are the original results presented in the paper.) Quantitative Results for 2x Super-resolution: | Model | PSNR | SSIM | MMD | |:---:|:---:|:---:|:---:| | TPDM | 33.22 | 0.929 | 23.48 | | Ours-TPDM| 34.61 | 0.947 | 10.67 | Quantitative Results for 8x Super-resolution: | Model | PSNR | SSIM | MMD | |:---:|:---:|:---:|:---:| | TPDM | 28.45 | 0.903 | 69.17 | | Ours-TPDM| 29.38 | 0.924 | 46.44 | We can see that our model consistently outperforms the baselines across multiple super-resolution scales on all three metrics, showing generalizability to different super-resolution scales. ## Question 2, Missing references: We appreciate your suggestions [A, B] and will incorporate them in our final version. ## Question 3, Typos: Thank you for pointing this out. We will ensure consistency in section title capitalization in the final version and also check for other typos. Again, we greatly appreciate your time and effort in reviewing our work and providing positive feedback and constructive advice. --- Rebuttal Comment 1.1: Comment: The authors have addressed the raised concerns. --- Reply to Comment 1.1.1: Comment: Dear Reviewers, Thank you again for your time, constructive feedback, and thoughtful evaluations. We truly appreciate your acknowledgment that our rebuttal and updates have addressed the concerns raised. 
We're also grateful for your positive assessments and support of our work throughout the review process. Warm regards, Authors
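For reference, the PSNR values reported in the tables above follow the standard definition, $\mathrm{PSNR} = 10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$. A minimal sketch (our own illustration; the `data_range` of 1.0 assumes intensity-normalized volumes):

```python
import numpy as np

def psnr(ref, est, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    `ref` and `est` must have the same shape; MSE must be nonzero."""
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] intensity range gives ~20 dB.
ref = np.zeros((4, 4))
est = np.full((4, 4), 0.1)
print(psnr(ref, est))
```

Because PSNR is a monotone transform of MSE, the roughly 1 dB gains in the tables correspond to a consistent reduction in mean squared reconstruction error at every scale.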
Summary: The paper proposes to improve 3D representation learning of medical image volumes in diffusion models. Unlike earlier methods that ensemble 2D models by averaging their weights, the proposed method performs “fusion in score function space”, which is shown to improve performance. They show improved performance in downstream tasks such as segmentation. Additionally, “ensembling in score function space” is shown to make training large 3D diffusion models feasible by initializing with pretrained 2D models and then fine-tuning the 3D model. The synthetic generation quality is also shown to be better, both via qualitative examples and quantitative image quality metrics. Claims And Evidence: Yes, compared to multiple baselines, improved representation learning and subsequent improvement on the downstream tumor segmentation task are shown. Methods And Evaluation Criteria: Yes Theoretical Claims: n/a Experimental Designs Or Analyses: Yes, the experiment design and analysis look sound. Supplementary Material: n/a Relation To Broader Scientific Literature: The paper seems to have gone a step further in merging diffusion models to obtain better 3D representation learning from pretrained 2D models. Whereas earlier methods simplify the problem of learning the 3D data distribution into a product of 2D distributions, the proposed method ensembles the estimations from the 2D models, with a weight vector and residual terms as learnable parameters for fine-tuning. Essential References Not Discussed: n/a Other Strengths And Weaknesses: The paper is very well written and easy to follow, with a good synthesis of the literature. The contribution is well motivated and shows good synthesis performance and improvement on downstream tasks. The possibility of training a full-capacity 3D diffusion model without having to resort to compromises such as diffusion in latent space rather than full 3D volume space is very enticing. 
In the downstream BraTS tumor segmentation task, it is advisable to report challenge-specific metrics such as the Lesion-wise Dice Score (rather than the Dice Score over the entire image), which penalizes not only poor overlap but also heavily penalizes missing lesions. Additionally, the 95% Hausdorff Distance may also be reported. Additional downstream tasks, such as sparse-view reconstruction and other volume-to-volume translation tasks, could have been added to substantially add weight to the claim of better representation learning. Other Comments Or Suggestions: n/a Questions For Authors: Again, to reiterate, additional downstream tasks such as sparse-view reconstruction and other volume-to-volume translation tasks could have been added to substantially add weight to the claim of better representation learning. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer ELBW, We sincerely thank you for taking the time to review our paper and for your insightful and positive assessment, noting that "the paper is very well written and easy to follow with good synthesis of literature" and "the contribution is well motivated." Here is our response to your questions and comments.

## Weakness 1: challenge-specific metrics for tumor segmentation:

We have added lesion-wise Dice scores and 95% Hausdorff Distance (HD95) metrics below. Following [1], we set thresholds (150, 250, and 100 for TC, WT, and ET) to remove small predicted segmentations across all the evaluations.

Both Condition Results:

| Model | Dice (TC) | Dice (WT) | Dice (ET) | HD95 (TC) | HD95 (WT) | HD95 (ET) |
|:--:|:---:|:---:|:---:|:---:|:---:|:---:|
| TPDM | 0.779 | 0.689 | 0.742 | 18.107 | 18.937 | 32.011 |
| Ours-TPDM | 0.783 | 0.692 | 0.745 | 17.864 | 18.790 | 32.046 |
| TOSM | 0.779 | 0.677 | 0.745 | 18.287 | 20.223 | 32.287 |
| Ours-TOSM | 0.781 | 0.698 | 0.748 | 17.428 | 17.982 | 30.638 |

Modality Translation Results:

| Model | Dice (TC) | Dice (WT) | Dice (ET) | HD95 (TC) | HD95 (WT) | HD95 (ET) |
|:--:|:---:|:---:|:---:|:---:|:---:|:---:|
| TPDM | 0.756 | 0.653 | 0.736 | 25.085 | 23.854 | 39.001 |
| Ours-TPDM | 0.760 | 0.685 | 0.751 | 19.193 | 21.788 | 36.925 |
| TOSM | 0.751 | 0.624 | 0.734 | 21.254 | 21.227 | 37.359 |
| Ours-TOSM | 0.758 | 0.671 | 0.744 | 20.837 | 22.406 | 35.755 |

Super Resolution Results:

| Model | Dice (TC) | Dice (WT) | Dice (ET) | HD95 (TC) | HD95 (WT) | HD95 (ET) |
|:--:|:---:|:---:|:---:|:---:|:---:|:---:|
| TPDM | 0.778 | 0.688 | 0.746 | 18.079 | 18.894 | 31.793 |
| Ours-TPDM | 0.780 | 0.692 | 0.745 | 18.389 | 18.157 | 33.842 |
| TOSM | 0.777 | 0.660 | 0.747 | 18.180 | 21.043 | 32.052 |
| Ours-TOSM | 0.778 | 0.690 | 0.746 | 17.953 | 18.589 | 31.888 |

(Dice values are lesion-wise Dice scores.) Our method shows improvements over the baselines in most cases, with a particularly notable gain for WT lesion segmentation under the Modality Translation task.
Moreover, the lowered Hausdorff distances suggest that our method helps the segmentation model capture subtle lesion boundaries more accurately, indicating more precise morphological alignment. This further demonstrates that our model's generated results are more friendly for downstream segmentation models.

[1] Ferreira, A., et al. How we won BraTS 2023 adult glioma challenge? Just faking it! Enhanced synthetic data augmentation and model ensemble for brain tumour segmentation. arXiv preprint arXiv:2402.17317.

## Weakness 2: sparse-view reconstruction in CT:

Thanks for raising this. We highly agree that sparse-view reconstruction in CT can substantially add weight to the claim of better representation learning. We evaluated our method on sparse-view reconstruction with CT data.

Quantitative results for sparse-view CT:

| Model | PSNR | SSIM | MMD |
|:---:|:---:|:---:|:---:|
| TPDM | 33.78 | 0.922 | 58.2 |
| Ours-TPDM | 34.53 | 0.931 | 20.78 |

We perform a sparse-view reconstruction task on the CHAOS dataset [1] with 36 views. The CHAOS dataset contains CT/MRI images of the liver. Each CT volume has 78-289 slices, each slice 512x512. During training, we cropped the third axis to a common shape of 64. During inference, we adopted a simple sliding-window strategy along the third axis. We resized each CT slice to 256x256, following TPDM [2]. These results on CT data demonstrate our method's effectiveness beyond MRI, show the generalizability of Score-Fusion to other modalities, and help support the claim of “medical image” in the title. Again, we sincerely appreciate your thorough review and constructive suggestions, which have helped us improve our work.

[1] CHAOS - Combined (CT-MR) Healthy Abdominal Organ Segmentation Challenge Data, A.E. Kavur et al.
[2] TPDM.

---

Rebuttal Comment 1.1: Comment: Thank you for addressing the suggestions. The earlier score is already high, so I am keeping the score as is.
--- Reply to Comment 1.1.1: Comment: Dear Reviewers, We sincerely appreciate the time and effort you’ve dedicated to reviewing our paper. Thank you for recognizing that our responses have addressed your earlier concerns. We're also grateful for your positive evaluations and for maintaining your supportive scores. Thank you again for your thoughtful feedback and contributions to improving our work. Best regards, Authors
OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction
Accept (poster)
Summary: The authors propose a novel Vision-Language-Action robotic policy called OTTER. Motivated by the computational burden that fine-tuning VLMs for robot policy use entails, as well as by observations that fine-tuning VLMs for action generalization risks weakening the pre-trained vision-language alignment, the authors propose freezing the vision and language encoders and extracting task-relevant visual features guided by language instructions. They compare OTTER to the previous methods Octo and OpenVLA through experiments on Libero and in a real-world setting, and ablate several aspects of OTTER in the real-world setting. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: OTTER offers an alternative to heavy fine-tuning of VLMs/LLMs for robotic policy learning. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: S1: Technical Novelty: The idea of reducing the number of visual tokens input to the model is important for computational efficiency, and OTTER’s use of existing information from CLIP’s features to extract the most important visual tokens is elegant. S2: Real-world experiments: The real-world experiments are well-designed and thorough. It is often difficult to get robot policies to work in real-world settings, as shown by Rows 1-2 in Table 2, so OTTER’s strong performance is very promising. Weaknesses: W1: Generalization: OTTER vastly outperforms Octo and OpenVLA in the real-world experiments but only performs on par with them on the synthetic Libero benchmark. My concern here is that OTTER is tailored to the real-world experimental setup and will not see the same boosts in performance in other environments.
W2: Ablations: Several of OTTER’s design decisions are not ablated, namely: whether to use $X_{attn}$ vs $X_{out}$ from the CLIP vision encoder, $N_{q} = 4$, and the context length set to 12. My concern is that these design decisions/hyperparameters may be tailored to the real-world setting or to Libero, and that it may be necessary to tune them to make OTTER generalize well to a different environment. Other Comments Or Suggestions: N/A Questions For Authors: 1. l. 288 col. 2: What is a “primitive”? 2. What are the computational savings of using a fraction of the visual patch tokens? 3. Why is there a difference between the action horizon length and the number of predicted actions? 4. Can you describe the time limit for task termination? 5. Can you give a comparison of the number of model parameters, training time, and inference time for OTTER, Octo and OpenVLA? This is important for assessing usability. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *W1: OTTER .. performs on par with them on the synthetic Libero benchmark...*

The Libero benchmark consists of simple tasks with fixed scenes, distractors, and minimal object variation. As a result, all methods can achieve comparable performance by fitting well to the benchmark’s constraints. Since CLIP is predominantly trained on real-world data, its vision encoder is more effective in real settings than in synthetic ones. In our real-world experiments, OTTER was evaluated on four primitives, unseen objects, and novel distractors. Experiment videos are available at https://ottericml.github.io/, showcasing OTTER's ability to generalize across diverse environments.

*W2: Several of OTTER’s design decisions are not ablated, namely: whether to use Xattn vs Xout from the CLIP vision encoder, Nq=4, and context length set to 12.*

We have included an ablation experiment on using X_out. Specifically, we trained OTTER using the X_out from CLIP on DS-PnP and evaluated it on the pick-and-place tasks. The results are shown in the table below:

| Method | Unseen Pick and Place Task Mean ± Std. Err. |
|--------------|--------------------------------------------|
| OTTER X_out | 18.6% ± 4.3% |
| OTTER | 62% ± 4.2% |

These results demonstrate that using X_attn significantly improves real-world performance, aligning with findings from ClearCLIP and Appendix Figure 6. Leveraging X_attn enhances OTTER’s generalization in real-world settings compared to using X_out. We also conducted ablations in simulation to assess the impact of context length and N_q on performance in Libero-object tasks.
The results are shown below:

| Context Length | Success Rate |
|----------------|--------------------|
| 4 | 90% ± 1.2% |
| 8 | 90% ± 1.4% |
| 12 | 89% ± 1.2% |
| 16 | 87% ± 1.3% |

| N_q | Success Rate |
|-----|--------------------|
| 2 | 87% ± 1.4% |
| 4 | 89% ± 1.2% |
| 8 | 88% ± 1.3% |

Overall, our ablation studies support the idea that OTTER’s success is not solely dependent on hyperparameter tuning for a specific setting. Instead, key design choices, such as using X_attn, play a more significant role in improving generalization to real-world tasks.

*Q1*: We define a primitive as a fundamental, reusable action or skill that can be applied across multiple tasks when combined with different objects. For example, picking up and pouring are two different primitives; picking up an apple and picking up a banana are two different tasks but the same primitive.

*Q2*: For an image input of size `224 × 224`, the CLIP vision encoder initially generates **257 tokens** (256 from a `16 × 16` patch grid plus 1 CLS token). We extract `N_q = 4` text-relevant visual features from `X_attn` over the first 15 text tokens, each of dimension `d` (the latent dimension), and concatenate them into a single token. The computational complexity of processing these features is approximately `O(4 × 15d)`. A standard Transformer’s attention mechanism has a complexity of `O(T²d)`, where `T` is the number of input tokens. For full-resolution image tokens (`T = 257`), the computational cost is `O(257²d)`. Thus, the ratio of computational cost when using compressed visual tokens compared to processing all 257 tokens is `(O(4 × 15d) + O(d)) / O(257²d) ≈ 0.001`. This demonstrates that our approach reduces the computational cost to approximately **0.1%** of the full-token processing cost, leading to significant efficiency gains in both training and inference.

*Q3*: The action horizon length H=8 is the parameter used in receding horizon control [1] for action smoothness, as in [2].
Of the 12 predicted actions, only the first 8 are used in the temporal ensemble to compute the action to execute.

[1] Mayne, D. Q., & Michalska, H. (1988, December). Receding horizon control of nonlinear systems.
[2] Chi, Cheng, et al. "Diffusion policy: Visuomotor policy learning via action diffusion."

*Q4*: As in A.2, all models except OpenVLA are given a maximum of 30 seconds per trial. Due to OpenVLA's larger size and slower inference speed, it is allotted 60 seconds.

*Q5*: Octo has 93 million parameters and an inference speed of 33 Hz. It was trained for 300k steps in 14 hours using a TPU v4-128 pod. Fine-tuning Octo on the DS-ALL dataset for 40k steps takes 6 hours using 8 A100 GPUs. OpenVLA, with 7 billion parameters and a much lower inference speed of 0.5 Hz, was trained for 14 days (21,500 A100-hours) using a cluster of 64 A100 GPUs. Fine-tuning OpenVLA on DS-ALL for 15k steps takes 15 hours on 8 A100 GPUs. OTTER, the smallest with 25.5 million parameters and the highest inference speed of 50 Hz, was pre-trained on the OXE dataset for 40k steps in 12 hours using 4 A100 GPUs. Fine-tuning on DS-ALL for 40k steps takes 6 hours on 8 A100 GPUs.
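The attention-cost ratio claimed in Q2 of the rebuttal above can be sanity-checked with a short calculation (an illustrative sketch only; the constants follow the rebuttal, and the latent dimension `d` cancels out of the ratio, so costs are expressed in units of `d`):

```python
# Sanity check of the attention-cost ratio from Q2 (sketch; units of d).
n_q = 4            # text-relevant visual query features
n_text = 15        # text tokens kept from X_attn
full_tokens = 257  # 256 patch tokens + 1 CLS token

compressed_cost = n_q * n_text + 1  # O(4 * 15 d) + O(d)
full_cost = full_tokens ** 2        # O(257^2 d) for full self-attention

ratio = compressed_cost / full_cost
print(ratio)  # ~0.0009, i.e. about 0.1% of the full-token cost
```

This matches the ≈0.1% figure quoted in the rebuttal.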
Summary: This paper aims to address a critical issue wherein the simultaneous fine-tuning of visual and language encoders during VLA training significantly exacerbates the risk of overfitting and impairs their original perceptual capabilities. To overcome this, the authors propose an explicit alignment strategy, which utilizes language features to guide the extraction of visual features, thereby obviating the need for encoder fine-tuning. Experiments on downstream tasks demonstrate the superiority and effectiveness of this proposed method. Claims And Evidence: The experimental results presented in the paper somewhat support its claims. Methods And Evaluation Criteria: Intuitively, I also believe this straightforward approach can indeed yield performance improvements. Theoretical Claims: NA Experimental Designs Or Analyses: In Table 2, several baseline methods, such as OpenVLA and Octo, exhibit near-zero success rates. The authors, however, did not compare their approach with more recent baseline methods like Pi-0. Thus, although the proposed method achieves noticeable improvements, the poor performance of baselines may stem from architectural limitations rather than solely from encoder-related issues. Supplementary Material: The authors provide several task settings and implementation details, but I noticed that the tasks evaluated are predominantly simple pick-and-place scenarios. In this context, the demonstrated generalization appears primarily visual rather than task-level. I am curious whether the proposed method contributes significantly to task-level generalization, since visual generalization alone seems more pertinent to the Visual-Language Model (VLM) rather than the Visual-Language Alignment (VLA) itself. Relation To Broader Scientific Literature: I believe the primary contribution of this paper is highlighting the importance of freezing the original modality encoders, rather than jointly fine-tuning them, as many concurrent VLA studies have done. 
Joint fine-tuning of encoders tends to induce severe overfitting, which explains why such VLA methods typically exhibit poor generalization performance. This paper introduces and empirically validates an approach that avoids fine-tuning encoders, effectively demonstrating its benefit in improving generalization. Essential References Not Discussed: The authors could include a discussion on related VLA works that do not freeze encoders, such as CogACT, to illustrate the potential limitations of encoder fine-tuning approaches. Specifically, methods like CogACT jointly fine-tune visual and linguistic encoders, which could lead to severe overfitting risks. This encoder fine-tuning can cause the models to lose their original perceptual and semantic generalization capabilities, making them overly specialized to specific training tasks or datasets. By analyzing such limitations, the authors could clearly demonstrate why freezing modality encoders is crucial to preserving their intrinsic representational abilities and achieving better generalization across tasks. Other Strengths And Weaknesses: Overall, the authors present a simple yet effective method that emphasizes the necessity of preserving original encoder capabilities. This insight is valuable to the community, as it highlights a critical consideration often overlooked in current VLA research. Although the proposed method itself is straightforward and additional baseline comparisons would further strengthen the findings, I am currently inclined toward acceptance given its conceptual clarity and practical contribution. Other Comments Or Suggestions: NA Questions For Authors: Have the authors observed task-level generalization of their proposed method, and how does it compare with contemporary approaches? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *The authors, however, did not compare their approach with more recent baseline methods like Pi-0.*

We thank the reviewer for pointing this out. We have added experiments comparing OTTER with Pi-0 below.

| Method | Pouring | Drawer | Poking | Pick and Place | Mean ± Std. Err. |
|-----------|-----------|----------|----------|----------|----------|
| π0-Fast-Droid | 0% | 0% | 0% | 61% | 29% ± 3.5% |
| Finetuned π0-Fast-Droid | 0% | 45% | 27% | 51% | 35% ± 3.8% |
| OTTER-OXE-L | 77% | 75% | 93% | 75% | 77% ± 3.3% |

Specifically, we consider the Pi-0 Fast [1] model trained on Droid. Since our experiment setup is the Droid setup, we first evaluate Pi-0's performance directly on our setup, denoted as $\pi_0$-Fast-Droid. As shown in the table, while $\pi_0$-Fast-Droid achieves decent performance on the pick-and-place primitive, it fails on all the other three primitives, as the majority of the Droid dataset consists of the pick-and-place primitive. Therefore, we also consider fine-tuning $\pi_0$-Fast-Droid on DS-ALL, denoted as Finetuned $\pi_0$-Fast-Droid. Finetuned $\pi_0$-Fast-Droid achieves a non-zero success rate on the Drawer and Poking primitives, but still fails on the Pouring primitive. There is also degraded performance on the Pick and Place primitive for Finetuned $\pi_0$-Fast-Droid. This highlights the difficulty of our experiment setup. OTTER-OXE-L achieves a high success rate on all four primitives with the same amount of demonstrations, indicating that using text-aware visual features extracted from a pre-trained VLM can increase data efficiency and enhance generalization ability.

[1] Pertsch, Karl, Kyle Stachowicz, Brian Ichter, Danny Driess, Suraj Nair, Quan Vuong, Oier Mees, Chelsea Finn, and Sergey Levine. "Fast: Efficient action tokenization for vision-language-action models." arXiv preprint arXiv:2501.09747 (2025).
*I noticed that the tasks evaluated are predominantly simple pick-and-place scenarios.*

We have primitives other than pick-and-place, as shown in Table 2, where we consider four primitives: pouring, drawer, poking, and pick-and-place. We have provided videos of the experiments at https://ottericml.github.io/ to show the diversity of the evaluation tasks.

*The authors could include a discussion on related VLA works that do not freeze encoders, such as CogACT.*

We thank the reviewer for pointing out this reference. We appreciate and agree with the reviewer’s discussion on CogACT and will include it in the related work of the revised draft.

*Have the authors observed task-level generalization of their proposed method, and how does it compare with contemporary approaches?*

In OTTER, we define a primitive as a fundamental, reusable action or skill that can be applied across multiple tasks when combined with different objects. For example, picking up and pouring are two different primitives; picking up an apple and picking up a banana are two different tasks but the same primitive. OTTER has shown strong task-level generalization, as shown in Tables 1 and 2. We assume the reviewer is referring to primitive-level generalization. As the reviewer pointed out earlier, keeping a frozen pre-trained Vision-Language Model (VLM) primarily enhances visual generalization. In OTTER, our focus is on leveraging this strength to improve object-level generalization. While OTTER does not explicitly target primitive-level generalization, it can bring certain benefits in enabling it. By reducing the burden of generalizing to unseen objects, the model can allocate more capacity to learning task primitives and compositional behaviors, which enables more efficient adaptation to novel primitives. We agree that exploring primitive-level generalization is a compelling future direction, and we explicitly encourage work in this area.
Summary: OTTER is a Vision-Language-Action (VLA) model that enhances robotic task execution by leveraging the semantic alignment capabilities of pre-trained Vision-Language Models (VLMs) without fine-tuning. By extracting text-aware visual features aligned with task instructions, OTTER preserves the rich visual-language understanding from pre-training, enabling effective handling of novel objects and environments. It demonstrates strong zero-shot generalization, outperforming state-of-the-art VLA models on unseen robot manipulation tasks. Empirical results show that OTTER's performance scales with larger pre-trained vision-language encoders, increased policy network capacity, and pre-training on larger robot datasets, making it a robust solution for diverse and unseen robotic scenarios.

## Update after rebuttal

After reading all reviewers' comments and the authors' responses, some of my confusion about technical details has been resolved. However, in the comparison with previous LLaVA-style work, the authors select only one token, such as the CLS token, to match OTTER's setting, but that is not the setting of OpenVLA, so I think there is a problem with the comparison to OpenVLA. OpenVLA builds its framework on a large language model such as LLaMA2, whereas OTTER is based on a Transformer-based policy model. Although OTTER outperforms OpenVLA in experiments, OTTER's capacity for dealing with in-the-wild instructions is questionable, and OpenVLA may not be a suitable baseline for OTTER. Overall, OTTER designs a novel framework to efficiently extract and merge multi-modal features in VLA tasks and obtains impressive performance on several tasks. Therefore, I will keep my rating unchanged.

Claims And Evidence: OTTER uses a novel parameter-efficient text-aware vision feature extraction scheme in a VLA model, rather than the commonly used fine-tuning or projection. By selectively using visual features, visual information can be extracted and used efficiently.
However, the experiments and ablation studies mainly focus on the effects of the embodied encoder or the pre-trained CLIP encoder. The effectiveness of the text-aware visual feature extraction is not well explored. Methods And Evaluation Criteria: The method part mainly focuses on visual feature processing to solve problems in utilizing CLIP features in existing VLMs. Although the improvement in visual features is reasonable, the results in Table 1 show that embodied features have a large effect on overall performance. Thus, it is unclear whether the visual features are the key in this kind of task. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments presented in this work are quite impressive, covering both simulation and real-world environments. Generalization ability on unseen tasks is also well tested. But in my opinion, the ablation study lacks an analysis of the claimed text-aware visual feature extraction. Details of DFP-OTTER could be shown and compared in the supplementary. Meanwhile, mainstream VLMs, such as LLaVA, usually freeze the CLIP encoder during training rather than fine-tuning it. Another baseline using linearly projected frozen CLIP features should be included. Supplementary Material: The supplementary provides details of tasks, training settings, and visualization of visual features. Relation To Broader Scientific Literature: OTTER achieves impressive performance in both simulation and real-world environments, on trained and unseen tasks. The benchmarking setting is quite important in the field, and it highlights the importance of a well-pretrained vision encoder in VLA models. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength: OTTER adopts the current VLM paradigm and explores VLA models. By designing a selective visual feature extraction scheme, OTTER achieves impressive experimental results. Weakness: 1. A baseline of linearly projected CLIP features should be compared. 2.
Beyond text-aware visual feature extraction, visual features are further compressed before being input into the policy network. Details of this step are not well presented or analyzed. Other Comments Or Suggestions: See above. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Q1: "Effectiveness of text-aware visual feature extraction are not well explored." We agree that explicitly isolating the impact of our proposed text-aware visual feature extraction is important. The ablation study (DFP-OTTER baseline in Tables 1 and 3 and Appendix D, Table 9) was designed to illustrate the benefit of explicitly selecting text-aware visual tokens compared to directly passing visual features to the policy transformer. Specifically, the results demonstrate that selective text-aware feature extraction substantially outperforms this direct-feature-passing baseline on unseen scenarios. Q2: "Although the improvement in visual features are reasonable, results in Table.1 shows that embodied features have a large effect on overall performance. Thus, it's quite confusing whether the visual features are the key in this kind of tasks." While embodied features (proprioceptive information such as robot end-effector pose) significantly contribute to overall performance—given that robotic tasks inherently require spatial grounding—the visual features remain fundamentally critical. If the policy is provided with only embodied features without meaningful visual inputs, the robot cannot perceive objects at all, yielding a zero success rate. Furthermore, pi0 also takes the embodied features. As shown in the additional results in response to reviewer Yr2M, OTTER outperforms pi0, indicating the significance of text-aware visual features. Q4/W1: "Mainstream VLMs, such as LLaVA, usually freeze CLIP encoder during training rather than finetuning. Another baseline using linearly projected frozen CLIP features shall be included." Thank you for raising this important point. We address this from two angles: 1. To investigate the effectiveness of simply using frozen CLIP features with linear projection, we conducted a specific ablation using the CLS token from the CLIP encoder as the only visual representation (see Appendix D, Table 9). 
For convenience, we list the relevant results here:

| Method | Unseen Tasks |
|-----------------------------|--------------|
| OTTER (CLS token only DFP) | 6% ± 0.8% |
| OTTER (Ours) | 62% ± 4.2% |

These results show that using just the CLS token with a linear projection under the OTTER formulation—similar to mainstream frozen-encoder approaches like LLaVA—is insufficient for achieving generalization in robotic manipulation tasks.

2. Furthermore, unlike mainstream VLM architectures such as LLaVA, prior state-of-the-art VLA models (e.g., OpenVLA) explicitly demonstrate that fine-tuning the visual encoder significantly improves robotic performance. As reported by OpenVLA (Section 3.4 and Table 1), fine-tuning the vision encoder leads to notably higher success rates compared to using a fully frozen vision encoder. This suggests that directly adopting the mainstream frozen-encoder approach from the general VLM literature is not optimal for robotic applications, and our explicit text-aware feature extraction provides an effective alternative solution. We will clearly articulate both points in our revised manuscript to thoroughly address this comparison.

W2: "Beyond text-aware visual feature extraction, visual features are further compressed before input into policy network. Details of this step are not well presented and analyzed."

Thank you for pointing this out. We compress visual features using a learnable attention pooling operation to produce a compact representation, as briefly described in Section 3.2. Specifically, the text-aware visual feature extraction step outputs multiple (m) text-aligned visual tokens (empirically, we use the first 15 text tokens, which sufficiently cover the instructions), each of dimension 768. Directly concatenating these tokens would result in a prohibitively high-dimensional input (15 × 768) for the policy transformer, significantly increasing computational complexity.
Therefore, to mitigate this issue and maintain computational efficiency, we introduce a learnable attention pooling layer to aggregate these visual tokens to reduce the dimension before passing them into the policy. This attention pooling layer has N_q = 4 query tokens and a latent dimension of 64. We further concatenated these 4 tokens into 1 token. We will clarify and expand this step in the revised manuscript, explicitly discussing both the motivation behind the dimensionality reduction and the details of the attention pooling mechanism.
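A minimal numpy sketch may make the shapes of this pooling step concrete. This is an illustration under the stated assumptions only (m = 15 tokens of dimension 768, N_q = 4 query tokens, latent dimension 64), not the authors' exact implementation; the random matrices stand in for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d_in, n_q, d_lat = 15, 768, 4, 64  # 15 text-aware tokens -> 4 query slots

tokens = rng.standard_normal((m, d_in))          # text-aware visual tokens
W_k = rng.standard_normal((d_in, d_lat)) * 0.02  # stand-in for learned key projection
W_v = rng.standard_normal((d_in, d_lat)) * 0.02  # stand-in for learned value projection
queries = rng.standard_normal((n_q, d_lat))      # stand-in for learnable query tokens

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Cross-attention from the n_q queries to the m tokens.
keys, values = tokens @ W_k, tokens @ W_v
attn = softmax(queries @ keys.T / np.sqrt(d_lat))  # (n_q, m) attention weights
pooled = attn @ values                             # (n_q, d_lat) pooled features

# Concatenate the 4 pooled slots into one compact visual token for the policy.
visual_token = pooled.reshape(-1)
print(visual_token.shape)  # (256,) instead of 15 * 768 = 11520 dimensions
```

The key point is the dimensionality reduction: the policy transformer receives one 256-dimensional token rather than fifteen 768-dimensional ones.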
Summary: OTTER keeps pre-trained VLMs fixed to preserve the rich semantic understanding acquired during pretraining, enabling strong zero-shot generalization to novel objects and environments, as demonstrated in both simulation and real-world experiments. This text-aware visual feature is extracted using pre-trained CLIP on top of the VLM. The experiments are conducted on a real robot and Libero, with comparison to Octo and OpenVLA. Claims And Evidence: Although OTTER claims to "preserve the generalizability of VLMs for effective performance in unseen scenarios," the definition of "unseen scenarios" remains unclear. Methods And Evaluation Criteria: The method makes sense, but the experiments are problematic; further details are provided in the following section. Theoretical Claims: N/A Experimental Designs Or Analyses: 1) The baseline DFP-OTTER is not a direct comparison with the base VLA. Typical VLA models use a pre-trained VLM, such as OpenVLA, and do not employ an attention pooling layer to obtain independent tokens. A fairer comparison would involve using the same OpenVLA architecture and simply adding text-aware visual features to demonstrate effectiveness, rather than building an entirely different model. 2) The DFP does not appear to be a plausible architecture for passing visual features to a transformer. Other common methods in VLMs—such as using an MLP (LLaVA), Q-Former (BLIP), C-Abstractor (CVPR'24), or feature pyramid extractor (DVCP, CVPR'24)—should be considered. 3) While OTTER is claimed to handle many "unseen tasks," the tasks presented in the appendix (real-world tasks) mainly involve attributes like object color or objects already seen during training. Since CLIP can effectively handle these cases, it would be more informative to assess its performance on truly unseen objects—for example, whether it can preserve correct visual tokens.
4) The supplementary material does not include video, which leaves it unclear how challenging the tasks are and whether the evaluation is robust. 5) Although CLIP is used as a prior for selecting text-aware visual tokens, other methods, such as a VLM, could serve the same function. A comparison with a VLM would add value to the study. 6) Libero represents a relatively simple simulation task. Given the emphasis on "unseen tasks" in this work, alternative simulations—such as Language Table—might be more suitable for evaluation. Supplementary Material: Yes, the appendix. Relation To Broader Scientific Literature: Vision-language-action models have emerged as a popular domain in imitation learning for robotics. This paper presents a straightforward technique to enhance VLA models. Essential References Not Discussed: The central idea behind OTTER is to leverage text-aware visual features to enhance the effectiveness of VLA. However, several existing VLA models—for example, pi0—have not been discussed. Additionally, pertinent works that employ a chain-of-thought approach in VLA, such as embodied CoT and CoT-VLA, are also omitted. Given that using text-aware visual features contrasts with these methods, it is crucial to include them in the discussion. Other Strengths And Weaknesses: Other Strengths: The paper is well-written, and its idea is both straightforward and intuitively effective. Weaknesses: The experimental results are underwhelming. As mentioned earlier, OTTER is highly dependent on the reliability of CLIP, and it would be beneficial to see experiments conducted in more complex real-world scenarios. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Claims and Evidence** Q1: Clarification on "unseen scenarios" Thank you for raising this point. In our paper, "unseen scenarios" explicitly refer to tasks involving entirely novel objects or novel combinations of objects and spatial configurations not encountered during training. Crucially, our evaluation—both in real-world (Section 4.1, Appendix A.2) and LIBERO simulation benchmarks—requires the model to distinguish the correct target object from visually similar distractors. Our tasks indeed evaluate visual grounding generalization based on language instructions. **Experimental Designs and Analyses** Q1: Regarding the fairness of baseline comparisons (DFP-OTTER vs. OpenVLA) We appreciate this concern and clarify our intentions: 1. Purpose of DFP-OTTER Baseline: it demonstrates explicitly that directly passing visual/textual features into a vanilla transformer is insufficient, underscoring the necessity of selective text-aware visual feature extraction for generalization. 2. Compatibility with existing VLM-based VLAs: Our paper introduces a novel VLA architecture (OTTER) specifically designed around text-aware visual feature extraction. Adding such features directly into existing VLM-based models like OpenVLA would necessitate substantial re-training of the base VLM (e.g., Prismatic VLM [1]), exceeding our available computational resources. Exploring this integration is indeed a compelling future research direction, which we explicitly encourage. [1] Karamcheti, Siddharth, et. al. "Prismatic VLMs: Investigating the design space of visually-conditioned language models." ICML 2024. Q2: Regarding plausibility of Direct Feature Passing (DFP) We included the Direct Feature Passing (DFP) baseline specifically as a minimal ablation to clearly demonstrate the benefit of selective, text-aware visual feature extraction. 
Introducing additional intermediate methods like MLPs or Q-formers within the VLM would divert focus from our primary contribution, which is explicitly leveraging pre-trained CLIP for improved generalization. Q3: Clarification on experiments involving truly unseen objects Our experimental setup explicitly uses entirely novel objects, as detailed in Appendix A.2, Table 6. These unseen objects differ not only in color but also in geometry and texture. Each evaluation includes explicit distractors, ensuring robust assessment of the model’s visual-language grounding capability. OTTER's superior performance demonstrates effective grounding on genuinely novel objects. Q4: Availability of supplementary video material Supplementary videos demonstrating task complexity and model performance are now available at our anonymous website: https://ottericml.github.io/ Q5: CLIP vs. other VLM alignment methods While existing VLM-based approaches (e.g., OpenVLA, Pi0-Fast) **implicitly** learn alignments between text and visual tokens through fine-tuning their respective large-scale pre-trained models (such as Prismatic or PaliGemma), our method **explicitly** formulates text-aware visual token extraction as a separate, parameter-free process leveraging the frozen pre-trained CLIP model. Thus, the comparison requested by the reviewer is already covered by our existing baseline evaluations against state-of-the-art methods such as OpenVLA (Tables 1–3) and Pi0-Fast (added in this rebuttal). These comparisons illustrate the advantage of the explicit, selective token extraction approach compared to implicit token alignment methods embedded in other VLM-based approaches. Q6: Suitability of LIBERO for simulation evaluation We selected LIBERO explicitly due to its usage in recent state-of-the-art VLA works (OpenVLA, π0-Fast-Droid), facilitating fair and direct comparisons. 
LIBERO supports evaluations involving explicit distractors under varied language instructions, thus providing meaningful assessments of visual-language generalization. **Essential References Not Discussed** Q1: Discussion of embodied CoT and CoT-VLA works We appreciate this suggestion and will explicitly include and contrast these methods in the updated related work section of the revised manuscript. Q2: Additional comparison with π0 / its extensions We include additional comparisons and discussions with π0-Fast-Droid in response to reviewer Yr2M, the current state-of-the-art VLA model, leveraging extensive pre-training on proprietary robot data. **Weaknesses** W1: Dependence on CLIP and complexity of real-world scenarios We agree that evaluating more complex real-world scenarios is valuable. However, current evaluations (please see updated tables on the website) already demonstrate significant challenges for state-of-the-art models (OpenVLA, Octo, π0-Fast-Droid), validating our approach’s robustness. Exploring tasks with more complex interactions remains an important future direction, explicitly mentioned in the revision.
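To make the contrast drawn in Q5 concrete, here is a minimal sketch of parameter-free, text-aware token selection with a frozen CLIP-style encoder pair. The function name, the top-k selection rule, and the array shapes are our illustrative assumptions, not OTTER's actual interface:

```python
import numpy as np

def select_text_aware_tokens(patch_feats, text_feat, k):
    """Pick the k visual patch tokens most similar to the instruction text.

    patch_feats: (n_patches, d) patch embeddings from a frozen CLIP-style
                 vision encoder.
    text_feat:   (d,) embedding of the language instruction from the
                 matching frozen text encoder.
    Hypothetical sketch: the names and the top-k rule are illustrative only.
    """
    # Cosine similarity between each patch token and the text embedding;
    # no parameters are learned in this step (the encoders stay frozen).
    p = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    t = text_feat / np.linalg.norm(text_feat)
    scores = p @ t
    keep = np.argsort(scores)[-k:][::-1]  # indices of the top-k tokens, best first
    return keep, scores[keep]
```

Because the similarity scores come from frozen encoders and the selection itself has no weights, the text-visual alignment is explicit by construction, which is the property the rebuttal contrasts with the implicit alignment learned inside fine-tuned VLM backbones.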
Graph Adaptive Autoregressive Moving Average Models
Accept (spotlight poster)
Summary: In this paper, the authors propose addressing the computational complexity issues in traditional graph transformers and the over-squashing problem in GNNs by converting the input graph into a sequential representation and incorporating an autoregressive moving average model with an attention selection mechanism. Claims And Evidence: The authors argue that the biggest challenge in applying sequence models to graph-structured data is `how to transform a graph into a sequential representation`. However, their discussion is limited to the scope of SSM. In fact, research on converting graph data into regular or Euclidean-structured data has long been explored in the graph community, as seen in studies R1–R6. Additionally, the authors employ `recurrent layers and dynamical system` designs in their method. Similar approaches have also been investigated in the community, such as recurrent layer methods (R5, R7, R8) and implicit model approaches (R9, R10). The authors should consider incorporating these works into the discussion. R1. Learning convolutional neural networks for graphs. ICML 2016 R2. Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs. ICLR 2019 R3. Relational Pooling for Graph Representations. ICML 2019 R4. pathGCN: Learning General Graph Spatial Operators from Paths. ICML 2022 R5. Going Deeper into Permutation-Sensitive Graph Neural Networks. ICML 2022 R6. All in a row: Compressed Convolution Networks for Graphs. ICML 2023 R7. The Graph Neural Network Model. Trans. Neural Networks 2009 R8. Towards Dynamic Message Passing on Graphs. NeurIPS 2024 R9. Implicit Graph Neural Networks. NeurIPS 2020 R10. Optimization-Induced Graph Implicit Nonlinear Diffusion. ICML 2022 Methods And Evaluation Criteria: - The proposed method can essentially be seen as MixHop (R11) with layer-selective attention. 
The authors should consider comparing a simple MixHop approach with GRAMA in the ablation study to demonstrate the effectiveness of the more complex strategy proposed in this paper. - Since GRAMA needs to maintain a feature matrix of size $L\times n\times d$, its scalability may be affected. For example, MixHop is prone to OOM issues when applied to large-scale graphs. The authors should consider incorporating more large-scale graphs as benchmarks (e.g., R12). R11. MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing. ICML 2019 R12. Open Graph Benchmark: Datasets for Machine Learning on Graphs. NeurIPS 2020 Theoretical Claims: There are no issues in this regard for now. Experimental Designs Or Analyses: The experiments in this paper are relatively comprehensive. In addition to the main content, the appendix provides a substantial amount of supplementary experimental results. However, there are still some issues with the experimental designs and analyses. - One of the motivations of this paper is to address the time consumption issue caused by global attention in graph transformers. Many subsequent GT methods have already attempted to tackle this problem (R13–R15). The authors should consider thoroughly comparing GRAMA with these methods to further substantiate its advantages. - GRAMA relies on multiple time steps and blocks to extract features from the input graph. However, the authors only report the time performance of GRAMA+GCN. According to the experimental results, GRAMA combined with GatedGCN or GPS often achieves better results. How do these two models perform in terms of time consumption? R13. GOAT: A Global Transformer on Large-scale Graphs. ICML 2023 R14. Polynormer: Polynomial-Expressive Graph Transformer in Linear Time. ICLR 2024 R15. Exphormer: Sparse Transformers for Graphs. ICML 2023 Supplementary Material: Yes. Especially supplementary A, D, E, and F.
Relation To Broader Scientific Literature: N/A Essential References Not Discussed: Please refer to the claims and evidence. Other Strengths And Weaknesses: The paper aims to improve the model's ability to capture long-range information with reduced computational cost. This issue has been a long-standing focus of research in the graph community. Other Comments Or Suggestions: **Post Rebuttal**: My concerns have been addressed. Based on the rebuttal, I am changing my score to 3. I hope the authors will incorporate the discussions from the rebuttal phase into the revised version of the manuscript. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for the thorough and constructive feedback. We have carefully addressed all concerns and believe the revisions have strengthened our paper. We hope our responses are satisfactory and that you will consider updating your score. --- **Regarding R1-R6:** The Reviewer is correct that prior works have studied converting graphs into sequences, which we now cite in the revised paper. However, as noted, our focus is on Graph State Space Models, making those works only tangentially related. Our aim is to design a graph SSM that enables long-range processing while preserving permutation equivariance. For instance, methods like R3–R4 are only equivariant in expectation, and works like R4–R6 treat graphs strictly as sequences. In contrast, we highlight that a key challenge in applying SSMs to graphs is that graphs are not inherently sequential. To clarify this distinction, we have revised our paper to include this discussion. Thank you. **Regarding R5-R10:** We refer the Reviewer to Appendix A, where we discuss GNNs as dynamical systems and recent graph recurrent methods. Following your suggestion, we now cite and highlight the differences with R5–R10 in the revised paper. **Regarding MixHop:** We refer the Reviewer to the related works section, where we discuss multi-hop architectures like JKNet and DRew. In the revised paper, we added a discussion on MixHop. As shown in Tables 2 and 8, GRAMA outperforms MixHop, and our ablation studies (Tables 9–11) show that even without the selective mechanism, GRAMA still performs better. This highlights that the gain comes not only from selectiveness but also from GRAMA’s overall design: unlike MixHop’s dense, non-recurrent projection, GRAMA uses a recurrent, non-dense aggregation inspired by dynamical systems and modern RNNs. We have added this discussion and further comparisons, including non-selective GRAMA, to the revised paper. The original results are also summarized in the table below. 
|Method|Peptides Func|Peptides Struct|Roman-Empire|Amazon-Ratings|OGBN-Arxiv| |---|---|---|---|---|---| |MixHop-GCN|68.43±0.49|0.2614±0.0023|79.16±0.70|47.95±0.65|71.29±0.29| |GRAMA$_{GCN}$|70.93±0.78|0.2439±0.0017|88.61±0.43|53.48±0.62|73.86±0.21| **Regarding complexity:** The Reviewer is correct that GRAMA maintains a feature matrix of size $R \times n \times d$, with $R = L$. However, unlike MixHop—which assigns learnable weights per layer and uses a dense layer with space complexity $L \cdot d^2$—GRAMA is more efficient. In the non-selective (naive) variant, GRAMA learns only $2L$ parameters per block, which is highly scalable since $L \ll d$ and $L \ll n$. In the selective variant, parameter count depends only on $d$, with space complexity $d^2$; the $L$ dependence appears only in the $L^2$ complexity of the sequence input, as discussed in Appendix E.3. We have clarified this distinction in the revised paper and, following your suggestion, added a MixHop comparison on OGBN-Arxiv, further demonstrating GRAMA’s effectiveness. **Regarding R13-R15:** In the original paper, we compare GRAMA with GOAT and Exphormer in Tables 5 and 6, where GRAMA outperforms both. We appreciate the reviewer's suggestion and now include additional comparisons with Polynormer. Together, these results highlight GRAMA’s strong performance relative to other leading methods.
|Method|Roman-empire|Amazon-ratings|Minesweeper|Tolokers|Questions| |---|---|---|---|---|---| |GOAT|71.59±1.25|44.61±0.50|81.09±1.02|83.11±1.04|75.76±1.66| |Polynormer|92.55±0.3|54.81±0.49|97.46±0.36|85.91±0.74|78.92±0.89| |Exphormer|89.03±0.37|53.51±0.46|90.74±0.53|83.77±0.78|73.94±1.06| |GRAMA$_{GCN}$ (Ours)|88.61±0.43|53.48±0.62|95.27±0.71|86.23±1.10|79.23±1.16| |GRAMA$_{GatedGCN}$ (Ours)|91.82±0.39|53.71±0.57|98.19±0.58|85.42±0.95|80.47±1.09| |GRAMA$_{GPS}$ (Ours)|91.73±0.59|53.36±0.38|98.33±0.55|85.71±0.98|79.11±1.19| **Regarding runtimes:** As shown in Table 9, GRAMA’s runtime scales proportionally with its backbone. Designed to enhance various GNNs, GRAMA achieves strong performance, especially with backbones like GatedGCN or GPS. We recognize that in some cases, lower computational cost may be important. To address this, one can use the naive ARMA coefficient learning (Section 3.2) or a lighter backbone such as GCN—both of which reduce overhead while still benefiting from GRAMA. Our experiments show these variants maintain strong improvements over their backbones and remain competitive with other methods at lower runtime. We also report GRAMA runtimes with GatedGCN and GPS in the table below and in the revised paper. The key takeaway is that GRAMA offers a flexible and scalable trade-off between efficiency and performance. |Method|Depth 4|Depth 8|Depth 16|Depth 32| |---|---|---|---|---| |GatedGCN|27.57|47.98|85.36|171.27| |GPS|1139.05|2286.96|4545.46|OOM| |GRAMA$_{GatedGCN}$ (Ours)|51.49|117.01|270.64|792.32| |GRAMA$_{GPS}$ (Ours)|1162.13|2346.94|4642.19|OOM| --- Rebuttal Comment 1.1: Comment: I sincerely appreciate the authors’ detailed response, and most of my concerns have been adequately addressed. However, I find that GRAMA performs on par with, or even worse than, attention-based methods on several datasets.
This raises an important question: recent graph Transformer variants have demonstrated the ability to model global dependencies within a single layer, often with *linear time and space complexity*. In this context, what is the motivation for adopting a layer-stacking approach (or a step-stacking approach for GRAMA) to expand the receptive field, especially when it introduces *additional computational cost without significant performance gains*—and in some cases, even results in degradation? I suggest the authors include further discussion on this design choice to strengthen the methodological justification of the work. In addition, although GRAMA reduces memory consumption by using fewer parameters compared to MixHop, does the parameter reuse across layers lead to out-of-memory (OOM) issues during backpropagation, thereby affecting the scalability of the model? How do the authors address this potential problem? --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for responding to our rebuttal, and for acknowledging that our responses address most of your concerns. Below, we address your questions. We hope that you find them satisfactory and that you will consider revising your score. --- **Regarding GRAMA vs. attention layers:** Our GRAMA offers a general framework that can be coupled with different GNNs (including MPNNs and Graph Transformers), as shown in our experiments. In particular, this framework provides a state space model (SSM)-equivalent approach for graph learning tasks that, compared with previous graph SSMs, (i) is permutation-equivariant, and (ii) allows working with longer sequences and modeling interactions beyond pairwise interactions at each block. Therefore, one can also use GRAMA with other backbones, such as the examples suggested by the Reviewer. If the Reviewer has a specific suggestion for such backbones, we are happy to include it in our evaluations.
**Regarding complexity and performance improvement with GRAMA:** We would like to kindly note again that as shown in our analysis and provided runtimes, while GRAMA adds more computations, which are thoroughly discussed in the paper, it still remains within the magnitude of its backbone. Moreover, we would like to note that **our GRAMA consistently, and often significantly, improves the performance of its backbone, in all of the experiments provided in our paper and rebuttal**. In particular, the design of GRAMA can learn to default to the original backbone. Hence, theoretically (and practically based on our experiments), GRAMA only extends and improves its backbone. This is also reflected in our discussion with Reviewer c8Wb, and our paper was revised to better reflect that, in addition to other discussions and reasonings on the design of GRAMA. **Regarding "GRAMA performs on par with, or even worse than, attention-based methods on several datasets:"** We kindly note that in our paper and rebuttal, we **consider more than 20 graph attention methods and variants on more than 20 datasets**. In total, we have **132 experimental comparisons with attention-based architectures**. Overall, besides Polynormer which outperforms GRAMA in 3 cases, **we found our GRAMA to outperform attention-based methods in 129/132 of the comparisons.** Moreover, as discussed above, our method can also be coupled with Polynormer and extend it. We think that, therefore, our GRAMA offers a valuable framework to the graph learning community. **For your convenience, we also provide a head-to-head comparison summary of GRAMA with its backbones, showing consistent improvement of the backbones. In any case where GRAMA+Backbone is better than the corresponding Backbone, we mark the result in bold. 
A broader comparison also exists in our paper in Table 15:** | Model | Diameter ($\log_{10}(MSE)\downarrow$) | SSSP ($\log_{10}(MSE)\downarrow$) | Eccentricity ($\log_{10}(MSE)\downarrow$) | Peptides-func (AP $\uparrow$) | Peptides-struct (MAE$\downarrow$) | MalNet-Tiny (Acc $\uparrow$) | Roman-empire (Acc $\uparrow$) | Amazon-ratings (Acc $\uparrow$) | Minesweeper (AUC $\uparrow$) | Tolokers (AUC $\uparrow$) | Questions (AUC $\uparrow$) | |---|---|---|---|---|---|---|---|---|---|---|---| | GCN | 0.7424±0.0466 | 0.9499±9.18·10−5 | 0.8468±0.0028 | 59.30±0.23 | 0.3496±0.0013 | 81.00 | 73.69±0.74 | 48.70±0.63 | 89.75±0.52 | 83.64±0.67 | 76.09±1.27 | | GatedGCN | 0.1348±0.0397 | -3.261±0.0514 | 0.6995±0.0302 | 58.64±0.77 | 0.3420±0.0013 | 92.23±0.65 | 74.46±0.54 | 43.00±0.32 | 87.54±1.22 | 77.31±1.14 | 76.61±1.13 | | GPS | -0.5121±0.0426 | -3.599±0.1949 | 0.6077±0.0282 | 65.35±0.41 | 0.2500±0.0005 | 92.64±0.78 | 82.00±0.61 | 53.10±0.42 | 90.63±0.67 | 83.71±0.48 | 71.73±1.47 | | GRAMA$_{GCN}$ (Ours) | **0.2577±0.0368** | **0.0095±0.0877** | **0.6193±0.0441** | **70.93±0.78** | **0.2439±0.0017** | **93.43±0.29** | **88.61±0.43** | **53.48±0.62** | **95.27±0.71** | **86.23±1.10** | **79.23±1.16** | | GRAMA$_{GatedGCN}$ (Ours) | **-0.5485±0.1489** | **-4.1289±0.0988** | **0.5523±0.0511** | **70.49±0.51** | **0.2459±0.0020** | **93.66±0.40** | **91.82±0.39** | **53.71±0.57** | **98.19±0.58** | **85.42±0.95** | **80.47±1.09** | | GRAMA$_{GPS}$ (Ours) | **-0.8663±0.0514** | **-3.9349±0.0699** | **-1.3012±0.1258** | **69.83±0.83** | **0.2436±0.0022** | **94.37±0.36** | **91.73±0.59** | **53.36±0.38** | **98.33±0.55** | **85.71±0.98** | **79.11±1.19** | **Regarding memory:** Thank you for the question. In our experiments, we have not encountered OOM issues *if the backbone itself does not yield OOM error*. That is, given an architecture and its configuration that can fit in memory, we are able to use it as a backbone for GRAMA. 
This is also reflected in our additional runtimes comparison in our responses and in the original paper - **our GRAMA can also be used with deep models without memory issues, given that the backbone model can fit in memory.**
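For readers following this thread, the naive (non-selective) ARMA coefficient learning referred to in the rebuttal can be pictured as a length-L window of past node-feature states mixed by 2L learned scalars around an arbitrary GNN backbone. This is our own illustrative reading of the rebuttal's description, not the authors' implementation; all names and signatures are assumptions:

```python
import numpy as np

def grama_block_naive(states, residuals, theta, phi, backbone):
    """One naive ARMA-style update (illustrative sketch, not the paper's code).

    states, residuals: lists of L arrays of shape (n, d) holding past node
        features and past backbone outputs.
    theta, phi: L autoregressive / L moving-average scalars — the "2L
        parameters per block" of the non-selective variant.
    backbone: any GNN layer mapping (n, d) -> (n, d), e.g. one GCN step.
    """
    new_residual = backbone(states[-1])  # current backbone (MA) term
    new_state = (new_residual
                 + sum(a * s for a, s in zip(theta, states))
                 + sum(b * r for b, r in zip(phi, residuals)))
    # Slide the length-L window: a recurrent, non-dense aggregation,
    # in contrast to MixHop's dense per-hop projection.
    return states[1:] + [new_state], residuals[1:] + [new_residual]
```

Setting all coefficients to zero reduces the update to the backbone alone, consistent with the rebuttal's point that GRAMA "can learn to default to the original backbone."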
Summary: This paper introduces a Graph Adaptive method GRAMA based on a learnable ARMA framework to address limitations in existing Graph State Space Models. GRAMA preserves permutation equivariance while enabling efficient long-range information propagation via a selective attention mechanism. Theoretical connections to Selective SSMs highlight its ability to capture long-range dependencies. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: no Experimental Designs Or Analyses: yes Supplementary Material: no Relation To Broader Scientific Literature: yes Essential References Not Discussed: no Other Strengths And Weaknesses: ### Strengths: The paper has a strong theoretical foundation with detailed proofs supporting the proposed method. The experimental section is exceptionally thorough, covering a wide range of benchmarks and providing comprehensive comparisons with existing methods. ### Weaknesses: 1. The proposed method relies on three specific GNN backbones: GCN, GatedGCN, and GPS. However, the rationale behind this selection is not clearly discussed. Additionally, the paper does not explore the potential advantages and disadvantages of using other backbones. 2. The comparison with existing methods primarily focuses on performance improvements, but there is limited discussion on the architectural and design differences, especially regarding how the method integrates with graph neural networks. 3. The choice of benchmark tasks may introduce bias, and there is a lack of comparison between graph datasets with varying structural irregularities and heterogeneity, which could affect the generalizability of the results. 4. The paper lacks clear and intuitive visualizations. The method diagram is not straightforward, and there are no direct experimental visuals to illustrate the claimed oversquashing problem, making it harder to connect theory with empirical results. 5. While the appendix is thorough, it is overly lengthy. 
The core experimental results and conclusions could be distilled more concisely for better clarity and focus. Other Comments Or Suggestions: see weakness. Questions For Authors: see weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for acknowledging the **“strong theoretical foundation”** with **“detailed proofs”** and the **“exceptionally thorough”** experimental section. We would also like to express our gratitude for the thoughtful comments and feedback, to which we have made our best effort to respond below. We hope that you find our responses satisfactory and that you will consider revising your score. --- **Regarding GNN backbones:** Thank you for the comment. We selected these backbones because they are widely used and represent both linear MPNNs and graph transformers. Our aim was to demonstrate GRAMA's versatility across backbone types, which is reflected in the consistent improvements over both the GNN backbones and other methods. While linear MPNNs and graph transformers differ in computational complexity, our results show that strong performance is achievable with any of the selected backbones. Overall, our experiments with three popular backbones on 22 datasets—plus 4 more added in the rebuttal in response to Reviewers mg6D and c8Wb—highlight the effectiveness and generality of GRAMA. **Regarding architectural differences:** We thank you for the comment. In our paper, we compare both the quantitative results (i.e., downstream performance) of GRAMA and existing methods, primarily focusing on graph SSMs, as well as the qualitative (i.e., architectural and theoretical) differences. We kindly note that in Sections 1, 2, and 3, as well as Appendix A, we state the main differences between GRAMA and other graph SSMs: (i) GRAMA can process sequences that go beyond pairwise interactions, different from the graph SSM in Huang et al. (2024); and (ii) GRAMA is permutation-equivariant, while other graph SSMs, such as those in Behrouz & Hashemi (2024) and Wang et al. (2024a), are not permutation-equivariant and are based on heuristics that order the graph nodes to obtain a sequence.
Nonetheless, following your suggestion, we have revised our paper to better reflect and highlight these differences. We also highlighted differences between other methods like GRIT and GRED, in our response to Reviewer mg6D. Regarding the integration with GNNs, we kindly note that Figure 1 illustrates our method and its ability to integrate with potentially any GNN backbone. This is also reflected by our experiments with different types of backbones such as GCN, GatedGCN, GPS. To fully accommodate your comment, we revised our paper to include and expand the discussions in our response. **Regarding datasets:** Thank you for the comment. Our main goal was to evaluate GRAMA’s long-range effectiveness across multiple benchmarks, including Graph Property Prediction (GPP) and LRGB. As noted in Appendix D, GPP graphs are drawn from diverse distributions (e.g., Barabasi-Albert, tree, caveman, line), while LRGB graphs represent molecules with inherently different structures. Additionally, the five heterophilic tasks involve nodes of the same class being distantly connected. These choices ensure benchmark diversity beyond a single application domain. Following suggestions from Reviewers mg6D and c8Wb, and inspired by your comment, we also added experiments on four additional benchmarks (beyond the original 22) to further demonstrate GRAMA’s effectiveness. These results appear in our response to Reviewer c8Wb and have been added to the revised paper. **Regarding method diagram:** Thank you for the comment. Figure 1 illustrates GRAMA’s overall approach, showing how a learned selective mechanism aggregates information from previous states alongside a chosen backbone GNN. This process is described in the caption and detailed in Section 3. Following your suggestion, we revised the caption for clarity. Additionally, we refer the Reviewer to Figure 2, which highlights GRAMA’s strong performance on long-range tasks. 
Inspired by your comment, we now include visualizations of the learned coefficients across datasets to offer further insight into the selective mechanism. Regarding the link between theory and empirical results: (i) Figure 2 shows GRAMA maintains performance as task range increases—consistent with Theorem 4.4; (ii) across long-range benchmarks, GRAMA's competitive results align with the theoretical insights discussed in Section 4. We added this discussion in the revised paper and thank you for helping us improve its clarity. **Regarding Appendix:** Thank you for thoroughly reviewing our appendix and for your comment. Our goal was to provide complete details, including: (i) distinguishing GRAMA from existing methods (Appendix A, related to your comment 2); (ii) theoretical proofs; (iii) implementation details; (iv) experimental settings; (v) additional results and complexity discussion; and (vi) a summary of all results. We appreciate your suggestion and have revised the appendix to present this information more clearly. Thank you.
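As a companion to the coefficient visualizations mentioned above, one way the selective, attention-style coefficients over the L past states could be computed is sketched below. The pooling choice, projections, and all names are our assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def selective_arma_coefficients(states, W_q, W_k):
    """Attention-style mixing weights over L past states (illustrative only).

    states: list of L arrays of shape (n, d) — past node-feature states.
    W_q, W_k: (d, d) learned projections, so the parameter count depends
        on d only, echoing the complexity note in the rebuttals.
    Returns one softmax-normalized weight per past state.
    """
    # Pool each state to a single d-vector so the scores do not depend on
    # node ordering (mean pooling is one simple permutation-invariant choice).
    summaries = np.stack([s.mean(axis=0) for s in states])   # (L, d)
    q = summaries[-1] @ W_q                                  # query: latest state
    k = summaries @ W_k                                      # keys: all L states
    logits = k @ q / np.sqrt(q.shape[0])
    w = np.exp(logits - logits.max())                        # stable softmax
    return w / w.sum()                                       # (L,)
```

Because each state is pooled before scoring, the resulting coefficients are invariant to node order, in line with the permutation-equivariance emphasized in the rebuttals, while the pairwise scoring over the L-state sequence reflects the quoted $L^2$ sequence cost.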
Summary: This paper introduces a novel GNN architecture based on a state-space model, proposing a new method to transform graph data into a sequence. Unlike previous approaches with similar goals, this work presents a principled approach to sequence generation, ensuring a provably permutation-invariant framework. The proposed model is designed to mitigate oversquashing. Additionally, the paper provides insightful theoretical analysis of the presented architecture. Claims And Evidence: The claimed contributions of this paper are supported by theoretical results and extensive empirical experiments. Methods And Evaluation Criteria: The chosen synthetic dataset and real dataset are appropriate and sufficient in number to support the claims. They are selected to adequately demonstrate the claimed benefit of the method (ability to model long-range dependencies). Some datasets that are usually reported (e.g., Cluster, CIFAR-10, MNIST) might be missing, but sufficient results are presented. Theoretical Claims: I have not examined the proof of the theoretical results in detail. However, Theorem 4.1 appears to restate known results, and additional citations may be necessary. For example, see [1]. [1] Aoki, M. (1990). State Space and ARMA Models. In M. Aoki (Ed.), State Space Modeling of Time Series (pp. 21–38) Experimental Designs Or Analyses: I have checked the experimental design in the main text. There are no apparent soundness or validity issues. The statistical significance of the results should be reported when possible. Supplementary Material: I reviewed the additional results, as well as the complexity analysis included in the supplementary material. Relation To Broader Scientific Literature: The main contribution of this work is to introduce a principled approach to integrating state-space models into a GNN framework. 
This is important as state-space models are gaining interest due to their attractive properties, and their integration into graph learning is meaningful, given the importance of modeling long-range dependencies in graph settings. Essential References Not Discussed: The essential references seem to be included, although I am not working in the field of state-space-based GNN models, so I do not have a deep understanding of the existing work. Other Strengths And Weaknesses: **Strengths** - The proposed architecture is novel and principled. - The work is overall well-written and has great clarity. The experimental section is detailed enough to be reproducible. - The experimental section is extensive and supports the stated claims. **Weaknesses** - Limitations of the proposed architecture are not discussed. One point that could be addressed is the scalability to larger graphs. Other Comments Or Suggestions: - Mentioning guarantees or results related to the Weisfeiler-Lehman test could strengthen this paper. I suspect that it likely retains any guarantees provided by the base GNN layer. - I would recommend expanding the discussion on complexity, particularly regarding the number of parameters, in the main text rather than relegating it to the supplementary material. While reading, I got the impression that this model would be much more expensive and slower than other GNN architectures, but this is not the case. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer for the positive feedback and assessment of our paper. We are also grateful for the actionable feedback to which we respond below. We have made our best efforts to accommodate each of your comments, and we hope you find our responses satisfactory. --- **Regarding benchmarks:** We thank the reviewer for the suggestions for additional datasets. We followed your suggestion (as well as Reviewer mg6D) and now added more results on the mentioned datasets (Cluster, CIFAR10, MNIST, PATTERN). The results, provided in the Table below, show that our GRAMA continues to consistently offer strong performance compared to its backbone GNN as well as other leading methods. We added the results and the discussion to our revised paper. | Model | ZINC (exists in paper) | MNIST | CIFAR10 | PATTERN | CLUSTER | |---|:---:|:---:|:---:|:---:|:---:| | GCN | 0.367±0.011 | 90.705±0.218 | 55.710±0.381 | 71.892±0.334 | 68.498±0.976 | | GatedGCN | 0.282±0.015 | 97.340±0.143 | 67.312±0.311 | 85.568±0.088 | 73.840±0.326 | | GPS | 0.070±0.004 | 98.051±0.126 | 72.298±0.356 | 86.685±0.059 | 78.016±0.180 | | EGT | 0.108±0.009 | 98.173±0.087 | 68.702±0.409 | 86.821±0.020 | 79.232±0.348 | | GRIT | 0.059±0.002 | 98.108±0.111 | 76.468±0.881 | 87.196±0.076 | 80.026±0.277 | | GRAMA$_{GCN}$ (Ours) | 0.142±0.010 | 97.871±0.188 | 70.283±0.417 | 82.660±0.183 | 74.294±0.595 | | GRAMA$_{GatedGCN}$ (Ours) | 0.140±0.008 | 98.119±0.104 | 74.612±0.450 | 86.715±0.099 | 76.883±0.317 | | GRAMA$_{GPS}$ (Ours) | 0.061±0.003 | 98.292±0.135 | 75.917±0.408 | 87.406±0.067 | 79.659±0.194 | **Regarding Aoki (1990):** We thank you for the careful reading of our theoretical sections of the paper. In our paper, we cite Aoki (2013) as a reference to background works on SSMs, and in our Theoretical Properties of GRAMA (Section 4), we state that our goal is to adapt findings in the world of SSMs and ARMA models into a graph-learning framework. 
Our intention is to credit the findings of previous works, as cited and discussed in Section 4 and Appendix B. We thank you for the reference, which we have now added and discussed in our revised paper and appendix. In particular, we have clarified that the results were developed in previous studies for non-graph models, and here we extend them so that they fit into a graph-learning framework. **Regarding statistical significance:** We thank the Reviewer for acknowledging the soundness and validity of our experiments. Throughout our experiments, we consistently included the standard deviation of our results. In some of the ablation studies, we reported the average performance using the same evaluation settings as in other experiments. In our final version, we will also include the already computed standard deviation of these experiments. **Regarding limitations:** Thank you for the suggestion. In our paper, we have discussed the complexity as well as the runtimes of GRAMA, showing that compared with other methods like graph transformers, it retains better scalability. Following your suggestion below, as well as Reviewer mg6D's comment, we have revised our paper to better discuss this aspect in the main paper. **Regarding WL test:** We welcome your suggestion. Because our model learns to selectively attend to different states, it can also learn to attend to the current state, which means it can default to the backbone GNN. Hence, it can retain at least the expressiveness of the backbone GNN. Due to space limitations, we will include a formal proof in the revised paper. Additionally, we think that an interesting direction for future work will be to understand the exact expressiveness of methods like GRAMA and other graph SSMs in terms of the WL test. **Regarding complexity discussion:** We thank the Reviewer for the thoughtful suggestion. We agree that one of the benefits of GRAMA is its reduced computational complexity compared with other models, e.g., transformers.
Following your advice, as well as Reviewer mg6D's question, we revised our paper to include a broader complexity discussion in the main paper and a clearer reference to Appendix E.3, where we provide a full analysis and runtime comparison.
Summary: This paper introduces GRAMA, which utilizes ARMA (autoregressive moving average) to design graph state space models. The paper claims this design can preserve permutation equivariance and enable long-range message passing with good empirical accuracy across multiple datasets. Claims And Evidence: - The paper needs to better discuss and compare with existing graph Mamba works. For example, some claims may not be convincing: - "limit their focus to pairwise interactions rather than sequences" (L16) & "however, this design choice may not fully exploit the sequence-handling capacity of SSMs" (L104) - Why is it so important to model the input as sequences in graph learning, where things are just not natural sequences? - There are multiple standard benchmarks for evaluating the capability of capturing long-range dependencies of graph transformers, including datasets such as MNIST, CIFAR10, PATTERN, CLUSTER, MalNet-Tiny, PascalVOC-SP, Peptides-Func, Peptides-Struct, ZINC, ZINC-full, etc. Even though the paper seems to include many benchmarks, many of them are not that widely used for evaluating graph transformer-like models. It would be better if the authors could include more widely used datasets in this domain. - The presentation could be improved to better show why GRAMA is a better graph SSM compared with existing methods. What makes ARMA so special? And more baselines should be included to justify the superiority of GRAMA, e.g., [1, 2, 3] [1] Ma, Liheng, et al. "Graph inductive biases in transformers without message passing." International Conference on Machine Learning. PMLR, 2023. [2] Ding, Yuhui, et al. "Recurrent distance filtering for graph representation learning." arXiv preprint arXiv:2312.01538 (2023). [3] Huang, Yinan, Siqi Miao, and Pan Li. "What Can We Learn from State Space Models for Machine Learning on Graphs?." arXiv preprint arXiv:2406.05815 (2024). Methods And Evaluation Criteria: See above. 
Theoretical Claims: I did not go over the appendix, but the theoretical claims in the main text look reasonable, as many of them carry over from SSMs. Experimental Designs Or Analyses: See above. Supplementary Material: No. Relation To Broader Scientific Literature: This paper is related to graph state space models and graph transformers, which may better capture long-range dependencies in graphs. This paper is especially related to graph Mamba, which finds ways to translate graphs into sequences and then apply SSMs. Essential References Not Discussed: See above. Other Strengths And Weaknesses: Even though some included datasets are less popular, it's good to see GRAMA can perform well on those datasets. Other Comments Or Suggestions: See above. Questions For Authors: Another benefit that SSMs can bring compared with transformers is efficiency. There is some preliminary timing in the appendix, but how does GRAMA scale to larger graphs? Can it inherit the efficiency of SSMs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for the thoughtful comments and the actionable feedback. We have taken significant measures to accommodate each of your comments. We hope that you will find our responses satisfactory, and that you will consider raising your score. We are also happy to read that the Reviewer acknowledges that **“GRAMA can perform well”**. --- **Regarding sequences in graph learning:** Thank you for the question. Indeed, graphs are by design not sequences, as discussed in the Introduction section. However, in order to utilize mechanisms such as SSMs or ARMA methods, to capture long-range dependencies, one needs to process sequences that are longer than the length of 2 (i.e., pairwise interactions). This is also reflected in existing literature such as Behrouz & Hashemi (2024) and Wang et al. (2024a). However, as we discuss throughout the paper, their shortcoming is that they rely on heuristics of ordering the nodes, which is not data-driven, and also not permutation equivariant. Following your question, we refined the discussion to better reflect this. **Regarding benchmarks:** We thank the reviewer for recognizing the large number of datasets and benchmarks considered in our paper, which covers 22 synthetic and real-world datasets. Moreover, following your suggestion, we now added results on the datasets suggested by the reviewer, from [1]. Below, we provide a table with these results, as well as a comparison with other methods. As can be seen, our GRAMA offers competitive performance, compared with other methods, while maintaining the complexity of the backbone GNN. 
| Model | ZINC (our results exist in paper) | MNIST | CIFAR10 | PATTERN | CLUSTER | |---|:---:|:---:|:---:|:---:|:---:| | GCN | 0.367±0.011 | 90.705±0.218 | 55.710±0.381 | 71.892±0.334 | 68.498±0.976 | | GatedGCN | 0.282±0.015 | 97.340±0.143 | 67.312±0.311 | 85.568±0.088 | 73.840±0.326 | | GPS | 0.070±0.004 | 98.051±0.126 | 72.298±0.356 | 86.685±0.059 | 78.016±0.180 | | EGT | 0.108±0.009 | 98.173±0.087 | 68.702±0.409 | 86.821±0.020 | 79.232±0.348 | | GRIT | 0.059±0.002 | 98.108±0.111 | 76.468±0.881 | 87.196±0.076 | 80.026±0.277 | | GRAMA$_{GCN}$ (Ours) | 0.142±0.010 | 97.871±0.188 | 70.283±0.417 | 82.660±0.183 | 74.294±0.595 | | GRAMA$_{GatedGCN}$ (Ours) | 0.140±0.008 | 98.119±0.104 | 74.612±0.450 | 86.715±0.099 | 76.883±0.317 | | GRAMA$_{GPS}$ (Ours) | 0.061±0.003 | 98.292±0.135 | 75.917±0.408 | 87.406±0.067 | 79.659±0.194 | **Regarding GRAMA and other methods:** Thank you for the question. In our paper, we attribute GRAMA’s improvements over existing graph SSMs and related methods to three key differences: (i) GRAMA supports beyond-pairwise interactions while preserving permutation equivariance—unlike [3], Behrouz & Hashemi (2024), and Wang et al. (2024); (ii) [2] employs LRU without a selective mechanism, which our results (Tables 9–11) show to be impactful; and (iii) GRIT [1], a graph transformer, differs fundamentally from graph SSMs in both computational cost and operation. GRIT emphasizes expressive positional encodings and is more resource-intensive than GraphGPS, whereas GRAMA is efficient, permutation-equivariant, and designed for long-range propagation. **Regarding more baselines:** We kindly note that in our comparisons, we have considered 22 datasets, as well as more than 30 baselines (i.e., different methods). In all cases, we see that our GRAMA offers similar or better performance compared with other methods, while maintaining the complexity of the backbone GNN. 
Nonetheless, we welcome your suggestion, and we now compare, discuss, and cite the works mentioned by the Reviewer in our revised paper. We also provide the results in the Table below, showing the competitive performance offered by our GRAMA:

| Model | Peptide Func ($\uparrow$) | Peptide Struct ($\downarrow$) |
|----------------|:---------------------:|:----------------------:|
| GRIT [1] | 0.6988 ± 0.0082 | 0.2460 ± 0.0012 |
| GRED [2] | 0.7085 ± 0.0027 | 0.2503 ± 0.0019 |
| GRED + LapPE [2] | 0.7133 ± 0.0011 | 0.2455 ± 0.0013 |
| GSSC [3] | 0.7081 ± 0.0062 | 0.2459 ± 0.0020 |
| GRAMA (Ours, best variant) | 0.7093 ± 0.0078 | 0.2436 ± 0.0022 |

**Regarding efficiency:** We thank you for the important question. The Reviewer is correct that one of the advantages of SSMs is their efficiency compared with transformers. In our paper, we provided the training and inference runtimes of GRAMA, including a comparison with other types of methods, from linear MPNNs (GCN and GatedGCN) to graph transformers (GPS). These results are shown in Table 9 and Table 10 in Appendix E.3. Furthermore, we have also discussed the theoretical complexity of GRAMA in Appendix E.3. Following your question, as well as Reviewer c8Wb's suggestion, we moved the main discussion to the main paper and now refer to Appendix E.3 more clearly. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses, which addressed most of my concerns, and these results should be included in the revised manuscript. I find that the proposed method can still be interesting for people studying sequence models on graphs. Therefore, I will increase my rating. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for the response to our rebuttal and for increasing their rating. We added the results and discussions provided in our responses to the revised paper, and we think that the constructive feedback helped us to improve the paper. Thank you. With warm regards, Authors.
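As a quick reference for the ARMA machinery discussed in this thread, the classical ARMA(p, q) recurrence in its standard time-series form (GRAMA's graph adaptation is given in the paper itself) is:

```latex
x_t \;=\; \sum_{i=1}^{p} \phi_i\, x_{t-i} \;+\; \varepsilon_t \;+\; \sum_{j=1}^{q} \theta_j\, \varepsilon_{t-j},
```

where $\phi_i$ are the autoregressive coefficients, $\theta_j$ the moving-average coefficients, and $\varepsilon_t$ the innovation at step $t$.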
Deep Positive-Unlabeled Anomaly Detection for Contaminated Unlabeled Data
Reject
Summary: This paper proposes the deep positive-unlabeled anomaly detection framework to address the contaminated training samples problem for semi-supervised anomaly detection. Several anomaly detection datasets, including MNIST, CIFAR10, CIFAR100, etc., are utilized to evaluate the effectiveness of the proposed method. The results show that the proposed method outperforms some alternatives. ## Updates after rebuttal After carefully reviewing all the comments and responses, I have decided to maintain my original scores. The primary reason is the lack of a comprehensive literature review. As I pointed out in my previous comments, there have been several notable works in the field of industrial anomaly detection. For instance, SoftPatch has made significant contributions to noisy anomaly detection, which is not an unsupervised method as claimed by the authors. Given this, it is essential to compare the proposed method with more up-to-date techniques. However, the most recent method cited in this paper is SOEL, which was published in 2023. Claims And Evidence: N/A Methods And Evaluation Criteria: The compared methods are not up-to-date, with the most advanced method, SOEL, published in 2023. In other anomaly detection fields like industrial anomaly detection, the authors may find more advanced methods for comparison. Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weaknesses: 1. The setting and motivation do not appear to be novel in the context of anomaly detection. For example, the noisy-AD (SoftPatch) setting in industrial anomaly detection has been previously explored. Therefore, the authors should provide a detailed comparison and discussion with such related work. 2. 
The proposed method seems to be a combination of Positive-Unlabeled (PU) learning and existing anomaly detectors, such as Variational Autoencoders (VAE). As a result, it lacks significant novelty. 3. The writing of the paper requires substantial improvement. For instance, the explanations of unsupervised anomaly detection and semi-supervised anomaly detection are unclear and need to be more precise and comprehensive. 4. The experiments are insufficient. The paper only reports the performance using two classical anomaly detectors: VAE and SVDD. Moreover, the chosen baselines may not represent the state-of-the-art (SOTA) methods in this field. A more extensive evaluation with additional SOTA methods would strengthen the validation of the proposed approach. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your feedback, which we shall address below. > Weakness 1: The setting and motivation do not appear to be novel in the context of anomaly detection. For example, the noisy-AD (SoftPatch) setting in industrial anomaly detection has been previously explored. Therefore, the authors should provide a detailed comparison and discussion with such related work. We would like to emphasize that the problem setting we address is a well-established one in the field of semi-supervised anomaly detection. Our contribution lies in proposing a novel framework that integrates PU learning with deep anomaly detectors under this setting. Thank you very much for pointing us to SoftPatch. Upon reviewing the paper, we found that SoftPatch [1] is an unsupervised anomaly detection method, and therefore not directly comparable to our semi-supervised approach. Nevertheless, we agree that it is a relevant and interesting work, and we will include it in the related work section of the revised paper to provide a more comprehensive discussion. [1] Jiang, Xi, et al. "Softpatch: Unsupervised anomaly detection with noisy data." *Advances in Neural Information Processing Systems* 35 (2022): 15433–15445. > Weakness 2: The proposed method seems to be a combination of Positive-Unlabeled (PU) learning and existing anomaly detectors, such as Variational Autoencoders (VAE). As a result, it lacks significant novelty. First, we would like to clarify a minor misunderstanding: our method is based on Autoencoders (AE), not Variational Autoencoders (VAE). More importantly, existing anomaly detectors such as AE or DeepSVDD cannot be directly combined with PU learning, as PU learning is formulated for binary classification, while anomaly detectors typically follow a one-class formulation. Our contribution lies in bridging this methodological gap. 
As detailed in Section 3, we design a tailored risk estimator and training objective to effectively integrate PU learning with deep anomaly detection. We believe this design represents a key aspect of our novelty. > Weakness 3: The writing of the paper requires substantial improvement. For instance, the explanations of unsupervised anomaly detection and semi-supervised anomaly detection are unclear and need to be more precise and comprehensive. We provided explanations of the problem setting in Section 2.1, unsupervised anomaly detection in Section 2.2, and semi-supervised anomaly detection in Section 2.3. We sincerely apologize if any parts were unclear. To help us improve the clarity of the revised version, could you kindly let us know which aspects were difficult to follow? We would greatly appreciate your feedback and will revise the manuscript accordingly. > Weakness 4: The experiments are insufficient. The paper only reports the performance using two classical anomaly detectors: VAE and SVDD. Moreover, the chosen baselines may not represent the state-of-the-art (SOTA) methods in this field. A more extensive evaluation with additional SOTA methods would strengthen the validation of the proposed approach. > Methods And Evaluation Criteria: The compared methods are not up-to-date, with the most advanced method SOEL published in 2023. In other anomaly detection fields like industrial anomaly detection, the author may find more advanced methods for comparisons. We would like to clarify that SOEL (Li et al., 2023) is currently one of the state-of-the-art methods for semi-supervised anomaly detection under label contamination. SOEL is based on DeepSVDD (Ruff et al., 2018), a representative unsupervised anomaly detector. Accordingly, we include DeepSVDD and its extensions as baselines, including DeepSAD (Ruff et al., 2019), LOE (Qiu et al., 2022), and SOEL itself. 
In addition, AE (Hinton & Salakhutdinov, 2006) remains a widely used baseline in the literature, and we also compare it with its semi-supervised extension, ABC (Yamanaka et al., 2019). For completeness, we further include Isolation Forest (Liu et al., 2008) as a representative shallow detector, along with a standard PU learning-based binary classifier (Kiryo et al., 2017). We believe that our evaluation is both up-to-date and comprehensive, covering state-of-the-art methods and their fundamental models. Our framework is compatible with various base detectors as long as their loss functions are non-negative and differentiable, and has shown empirical improvements when applied to AE and DeepSVDD. If there are particular methods you believe should be included in our comparison, we would be happy to take them into consideration.
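The bridge between PU learning and one-class anomaly scores discussed in this exchange can be sketched as follows. This is a hedged illustration only — the function name and the exact clamping are our assumptions, in the spirit of the non-negative risk estimator of Kiryo et al. (2017) — not the paper's actual objective:

```python
def pu_anomaly_risk(scores_unlabeled, scores_anomaly, alpha):
    """Illustrative PU-style objective over anomaly scores (not the paper's exact loss).

    Assumes p_u = alpha * p_a + (1 - alpha) * p_n, so the normal-data term
    E_n[score] can be estimated from an unlabeled batch and a labeled-anomaly
    batch. The max(0, .) clamp mirrors non-negative PU risk estimation.
    """
    mean = lambda xs: sum(xs) / len(xs)
    # (1 - alpha) * E_n[score], estimated from the two batches and clamped
    # so the estimated normal term never goes negative.
    normal_term = max(0.0, mean(scores_unlabeled) - alpha * mean(scores_anomaly))
    # Push scores up on labeled anomalies by minimizing their negative mean.
    anomaly_term = -mean(scores_anomaly)
    return normal_term + anomaly_term
```

Minimizing this drives scores down on the (estimated) normal portion of the unlabeled data and up on labeled anomalies; the clamp prevents the estimate from becoming negative when the unlabeled batch is heavily contaminated.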
Summary: This paper presents a deep positive-unlabeled anomaly detection framework designed to address the issue of contaminated unlabeled data in anomaly detection. The framework integrates PU learning with deep anomaly detection models such as autoencoders and deep support vector data descriptions. It enables the approximation of anomaly scores for normal data using both unlabeled and labeled anomaly data, allowing the training of anomaly detectors without labeled normal data by minimizing anomaly scores for normal data and maximizing them for labeled anomalies. The main contributions of the paper are: 1. The introduction of the deep PU anomaly detection framework, which effectively handles unlabeled data contaminated with anomalies. 2. Experimental results across various datasets demonstrating that the proposed approach achieves better detection performance compared to existing methods. ## Update after rebuttal The method's applicability to various data types and its generalization to unseen anomalies are now clearer. However, its limitations in handling distribution shifts remain a concern, as this issue is not adequately addressed in the paper. Therefore, I maintain my score of weak accept and encourage further exploration of this challenge in future work. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. The authors conduct experiments across various datasets to demonstrate that the proposed deep positive-unlabeled anomaly detection framework achieves better detection performance compared to existing approaches. The results show the framework's effectiveness in handling contaminated unlabeled data and its strong performance in detecting both seen and unseen anomalies. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem or application at hand. 
The authors introduce a deep positive-unlabeled (PU) anomaly detection framework that integrates PU learning with deep anomaly detection models to handle contaminated unlabeled data. They demonstrate the effectiveness and superiority of the proposed approach by using the AUROC as a reasonable evaluation metric across various standard datasets Theoretical Claims: The theoretical claims in the paper are reasonable. The authors propose the deep positive-unlabeled anomaly detection framework based on the assumption of PU learning, where the unlabeled data distribution is a mixture of normal and anomaly data distributions. By expressing the normal data distribution as a combination of the unlabeled and anomaly data distributions, the authors derive a new training objective function that minimizes the anomaly scores for normal data and maximizes them for labeled anomaly data. This theoretical framework is logically consistent and aligns with the foundations of PU learning. Experimental Designs Or Analyses: The experimental designs in the paper are generally reasonable but have room for improvement. The authors conduct experiments across multiple standard image datasets, use the AUROC as an evaluation metric, and compare with various baseline methods. However, the paper lacks sufficient details in the experimental settings, such as data preprocessing steps, specific parameters of the model architecture, and hyperparameter settings during training. These details are crucial for other researchers to reproduce the experiments and validate the results. It is recommended to provide more detailed experimental settings in the paper to ensure the reproducibility of the results and facilitate further research. Supplementary Material: I reviewed the supplementary material, particularly the dataset example, the anomaly detection performance with various numbers of unlabeled anomalies, and the anomaly detection performance for seen and unseen anomalies. 
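The mixture identity underlying this reasoning — standard in PU learning, with $\alpha$ the anomaly fraction of the unlabeled data and $s(x)$ an anomaly score — can be written as:

```latex
p_u(x) = \alpha\, p_a(x) + (1 - \alpha)\, p_n(x)
\quad\Longrightarrow\quad
\mathbb{E}_{p_n}[s(x)] = \frac{\mathbb{E}_{p_u}[s(x)] - \alpha\, \mathbb{E}_{p_a}[s(x)]}{1 - \alpha},
```

so the expected anomaly score of normal data is estimable from unlabeled and labeled-anomaly samples alone.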
Relation To Broader Scientific Literature: The proposed deep positive-unlabeled (PU) anomaly detection framework aligns with the broader field of semi-supervised learning, where a small amount of labeled data is used in conjunction with a large amount of unlabeled data to improve model performance. Existing semi-supervised approaches, such as those using autoencoders and deep support vector data descriptions, have laid the groundwork for integrating labeled anomaly data with unlabeled data. The PU framework extends these methods by specifically addressing the issue of contaminated unlabeled data, a common problem in real-world scenarios. Essential References Not Discussed: No Other Strengths And Weaknesses: The paper is well-structured with clear logic. The methodology and experimental sections are described in detail, making it easy for readers to understand and reproduce the results.However the paper introduces a new framework, the theoretical analysis could be more in-depth. Other Comments Or Suggestions: No more Questions For Authors: 1. The paper's core idea is to use PU learning to handle unlabeled data contaminated with anomalies, but can this method adapt to the data characteristics and anomaly detection needs of different fields in practical applications? For example, can the framework remain robust when there's a certain distribution shift in the overall data? 2. Has the paper fully considered the issue of unknown anomaly detection? Ideologically, can it further expand the modeling and recognition strategies for unknown anomalies to enhance the model's generalization when facing new anomalies? Ethical Review Concerns: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback, which we shall address below. > Experimental Designs Or Analyses: However, the paper lacks sufficient details in the experimental settings, such as data preprocessing steps, specific parameters of the model architecture, and hyperparameter settings during training. These details are crucial for other researchers to reproduce the experiments and validate the results. It is recommended to provide more detailed experimental settings in the paper to ensure the reproducibility of the results and facilitate further research. Thank you for the suggestion. While the current version includes many experimental details such as network architecture (following Ruff et al., 2018), optimizer settings, training schedule, and dataset splits, we agree that providing further clarification would improve reproducibility. We will revise the paper to include more complete descriptions of experimental settings, possibly in tabular form to enhance readability. > Question 1: The paper's core idea is to use PU learning to handle unlabeled data contaminated with anomalies, but can this method adapt to the data characteristics and anomaly detection needs of different fields in practical applications? For example, can the framework remain robust when there's a certain distribution shift in the overall data? Similar to other semi-supervised anomaly detection approaches such as SOEL, our current approach is not designed to explicitly handle distribution shifts. We agree that distribution shift is a critical real-world challenge, and we plan to address it in future work. Fortunately, several approaches have been presented for adapting PU learning under distribution shift [1, 2], and we will explore incorporating such approaches into our semi-supervised anomaly detection framework. [1] Hammoudeh, Zayd, and Daniel Lowd. "Learning from positive and unlabeled data with arbitrary positive shift." NeurIPS 2020. [2] Kumagai, Atsutoshi, et al. 
"AUC Maximization under Positive Distribution Shift." NeurIPS 2024. > Question 2: Has the paper fully considered the issue of unknown anomaly detection? Ideologically, can it further expand the modeling and recognition strategies for unknown anomalies to enhance the model's generalization when facing new anomalies? Although it is important to provide theoretical guarantees for detecting unseen anomalies, our current approach offers only empirical and qualitative support for detecting unseen anomalies. What our approach guarantees is that, when labeled anomalies are available, the anomaly detector can be trained robustly even if similar anomalies are also present in the unlabeled data. As for detecting unseen anomalies, the performance mainly depends on the base anomaly detector. Anomaly detectors can treat data far from normal data as anomalies. Therefore, as long as unseen anomalies lie far from the normal data, they can be detected. Empirically, we have also observed in Appendix C that we may be able to improve the detection performance for unseen anomalies by using seen anomalies. Providing theoretical guarantees for this is an important direction for our future work. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. The method's applicability to various data types and its generalization to unseen anomalies are now clearer. However, its limitations in handling distribution shifts remain a concern, as this issue is not adequately addressed in the paper. Therefore , I maintain my score of weak accept and encourage further exploration of this challenge in future work --- Reply to Comment 1.1.1: Comment: Thank you for your response. As you pointed out, distribution shift is an important challenge, and we would like to address it in our future work.
Summary: The paper presents a novel semi-supervised anomaly detection method to improve anomaly detection performance in handling contaminated unlabeled data. It integrates PU learning with deep anomaly detection models such as AE and DeepSVDD. The proposed approach outperforms existing anomaly detection methods. Claims And Evidence: Most claims of this paper are well-supported, particularly in handling contaminated unlabeled data, improving detection accuracy, and building on a solid theoretical foundation in PU learning. Methods And Evaluation Criteria: The methods and evaluation criteria make sense for the problem being addressed, but the compared methods are outdated. Theoretical Claims: The paper provides a theoretical justification for the proposed method. Experimental Designs Or Analyses: The experimental design is well-structured and follows standard best practices. However, 1) the authors do not analyze how performance degrades when only a few labeled anomalies are available. 2) The compared methods are relatively outdated. Both the semi-supervised and unsupervised approaches used for comparison are from earlier works. Supplementary Material: Yes, I reviewed the supplementary material included in the paper. Relation To Broader Scientific Literature: The paper presents a novel semi-supervised method to improve anomaly detection performance in handling contaminated unlabeled data. Essential References Not Discussed: The paper has discussed most relevant references. Other Strengths And Weaknesses: Strengths: 1. The paper presents a novel semi-supervised anomaly detection method for handling contaminated unlabeled data. 2. The writing structure of the paper is very clear. Weaknesses: 1. The compared methods are relatively outdated. Both the semi-supervised and unsupervised approaches used for comparison are from earlier works. Many new models may outperform those based on AE and DeepSVDD, so it is necessary to include newer baselines. 2. 
The generalization of the method has not been validated on commonly used industrial anomaly detection datasets, such as MVTec and VisA datasets. 3. The paper only evaluates AUROC at the image level. However, pixel-level AUROC and other evaluation metrics, such as PR-AUC, have not been validated. Other Comments Or Suggestions: See questions. Questions For Authors: 1. The compared methods are relatively outdated. Both the semi-supervised and unsupervised approaches used for comparison are from earlier works. Many new models may outperform those based on AE and DeepSVDD, so it is necessary to include newer baselines. 2. The generalization of the method has not been validated on commonly used industrial anomaly detection datasets, such as MVTec and VisA datasets. 3. The paper only evaluates AUROC at the image level. However, pixel-level AUROC and other evaluation metrics, such as PR-AUC, have not been validated. 4. What happens if fewer labeled anomalies are available? In many real-world tasks, labeled anomalies are extremely scarce. The authors do not analyze how performance degrades when only a few labeled anomalies are available. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your feedback, which we shall address below. > Question 1: The compared methods are relatively outdated. Both the semi-supervised and unsupervised approaches used for comparison are from earlier works. Many new models may outperform those based on AE and DeepSVDD, so it is necessary to include newer baselines. We would like to clarify that our experimental setup includes SOEL (Li et al., 2023), which is currently one of the state-of-the-art methods for semi-supervised anomaly detection under label contamination. SOEL is based on DeepSVDD (Ruff et al., 2018), a representative unsupervised anomaly detector. Accordingly, we include DeepSVDD and its extensions as baselines, including DeepSAD (Ruff et al., 2019), LOE (Qiu et al., 2022), and SOEL itself. In addition, AE (Hinton & Salakhutdinov, 2006) remains a widely used baseline in the literature, and we also compare it with its semi-supervised extension, ABC (Yamanaka et al., 2019). For completeness, we further include Isolation Forest (Liu et al., 2008) as a representative shallow detector, along with a standard PU learning-based binary classifier (Kiryo et al., 2017). We believe that our evaluation is both up-to-date and comprehensive, covering state-of-the-art methods and their fundamental models. Our framework is compatible with various base detectors as long as their loss functions are non-negative and differentiable, and has shown empirical improvements when applied to AE and DeepSVDD. If there are particular methods you believe should be included in our comparison, we would be happy to take them into consideration. > Question 2: The generalization of the method has not been validated on commonly used industrial anomaly detection datasets, such as MVTec and VisA datasets. As practical benchmarks, we chose PathMNIST, OCTMNIST, and TissueMNIST from the MedMNIST datasets (https://medmnist.com/), which was also used in the SOEL paper. 
Although the name "MNIST" may be misleading, we emphasize that these datasets are real medical datasets. Our intention was to provide a fair comparison under the same conditions as SOEL, and our approach consistently outperforms SOEL on these datasets. In future work, we plan to include experiments on MVTec and VisA. > Question 3: The paper only evaluates AUROC at the image level. However, pixel-level AUROC and other evaluation metrics, such as PR-AUC, have not been validated. Our approach is designed for image-level anomaly detection, where the goal is to determine whether an entire image contains anomalies, rather than localizing them at the pixel level. Therefore, pixel-level AUROC is not applicable to our method, as it does not produce pixel-wise anomaly scores. We followed the SOEL paper in using image-level AUROC, which we believe is the most appropriate metric for evaluating this setting. In future work, we will consider including additional evaluation metrics suitable for image-level detection, such as image-level PR-AUC. > Question 4: What happens if fewer labeled anomalies are available? In many real-world tasks, labeled anomalies are extremely scarce. The authors do not analyze how performance degrades when only a few labeled anomalies are available. In our experiments, we used 250 labeled anomalies. For example, even when this number is reduced to 50, the superiority of our proposed method remains. When evaluated on MNIST, our PUSVDD achieved an AUROC of 0.994, while SOEL obtained 0.963.
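As a side note on the metric used throughout this exchange, image-level AUROC has a simple rank-based (Mann–Whitney) interpretation: the probability that a randomly chosen anomalous image receives a higher score than a randomly chosen normal one. A minimal pure-Python sketch — illustrative only, not the authors' evaluation code:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney statistic: P(score_anomaly > score_normal),
    counting ties as 1/2. labels: 1 = anomalous image, 0 = normal."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75: of the four anomaly/normal pairs, three are ranked correctly.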
Summary: This paper tackles the task of semi-supervised anomaly detection when unlabeled training data is contaminated with anomalies. Specifically, this paper leverages positive-unlabeled learning to estimate anomaly scores for normal and anomaly data for anomaly detection. The quantitative results demonstrate the superiority of the proposed method over competing approaches. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: There seems to be no theoretical claim in this paper. Experimental Designs Or Analyses: Yes Supplementary Material: All supplementary materials are reviewed. Relation To Broader Scientific Literature: To the reviewer, the main contribution of this paper is leveraging the existing positive-unlabeled learning for tackling the task of semi-supervised anomaly detection when training unlabeled data is contaminated with anomalies. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1) This paper presents detailed preliminaries, aiding readers in understanding the proposed method. 2) This paper provides a detailed discussion of related work. 3) The authors provide a detailed experimental setup. 4) I appreciate that the authors conduct an analysis on various contamination rate. Weaknesses: 1) I appreciate that the authors provide a qualitative experimental comparison on a toy dataset in Figure 1. However, I believe that conducting experiments on real datasets may be necessary and be possible which could enhance the credibility of the evidence. Furthermore, why does PU learning fail to detect unseen anomalies, while integrating PU learning with deep anomaly detectors enables the detection of such anomalies? 2) As a key contribution, why does integrating positive-unlabeled learning with deep anomaly detectors help the detection model effectively estimate anomaly scores for test data and detect anomalies that are unseen during training? 
The authors are expected to provide a comprehensive explanation of the proposed method’s effectiveness from both empirical and theoretical perspectives. 3) In the problem setup, the authors mention that unlabeled anomalies are similar to labeled anomalies. However, in practice, unlabeled anomalies and labeled anomalies are likely to originate from different distributions. How does the proposed method perform in this scenario? 4) The assumption that hyperparameter α is known during training is not appropriate, as anomaly contamination rate for unlabeled data is difficult to obtain in practical scenarios. 5) The authors mention that the proposed can be applied to self-supervised methods, how does it perform? 6) Can the proposed method be applied to tabular data? How is the performance? Other Comments Or Suggestions: No Questions For Authors: See the weakness part. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for your feedback, which we shall address below. > Weakness 1: ... conducting experiments on real datasets ... We provide a qualitative evaluation using a toy dataset in Figure 1, and quantitative evaluations using real datasets such as SVHN, CIFAR10/100, and Path/OCT/Tissue-MNIST in Section 5. Although the name "MNIST" may be misleading, these are real medical datasets, and were also used in the SOEL paper, which represents the state-of-the-art in this field. To better clarify behavior on real data, we will include qualitative evaluations on these datasets in the revised paper. > Weakness 1: Furthermore, why does PU learning fail to detect unseen anomalies, while integrating PU learning with deep anomaly detectors enables the detection of such anomalies? > Weakness 2: As a key contribution, why does integrating positive-unlabeled learning with deep anomaly detectors help the detection model effectively estimate anomaly scores for test data and detect anomalies that are unseen during training? ... As shown in Figure 1a, since conventional PU learning is designed for binary classification, its decision boundary lies between the seen anomalies and normal data. As a result, even if unseen anomalies are far from the normal data, they cannot be detected. In contrast, our approach integrates PU learning with deep anomaly detectors, combining (1) PU learning’s ability to approximate the normal distribution using unlabeled and anomaly data, and (2) the anomaly detector’s ability to treat data far from this normal data distribution as anomalies. As a result, our approach can detect unseen anomalies as long as they deviate from normal data distribution. Tables 4 and 6 demonstrate that our approach outperforms conventional PU learning in detecting unseen anomalies. > Weakness 3: ... unlabeled anomalies and labeled anomalies are likely to originate from different distributions ... 
When labeled and unlabeled anomalies originate from different distributions, our approach treats all unlabeled data as normal, similarly to ABC and DeepSAD. We emphasize that even in such cases, if the unlabeled anomalies are few in number and lie far from the unlabeled normal data, they can be detected due to the nature of the base anomaly detector. Our target scenario assumes human-in-the-loop feedback, where labeling a few clear anomalies in unlabeled data is feasible. Our goal is to use such limited supervision to improve anomaly detection performance in this practical and cost-effective setting. To handle stronger distribution shifts of anomalies, we plan to incorporate PU methods designed for selection bias [1, 2]. [1] Kato, Masahiro, Takeshi Teshima, and Junya Honda. "Learning from positive and unlabeled data with a selection bias." International Conference on Learning Representations. 2019. [2] Wang, Xutao, et al. "PUE: Biased positive-unlabeled learning enhancement by causal inference." Advances in Neural Information Processing Systems 36 (2023): 19783–19798. > Weakness 4: The assumption that hyperparameter $\alpha$ is known during training is not appropriate ... We assume that the labeled and unlabeled anomalies come from the same distribution. Under this assumption, as described in Section 3.1, $\alpha$ can be estimated using existing PU learning methods (e.g., Menon et al., 2015). In practice, $\alpha$ reflects the proportion of unlabeled data that resembles the labeled anomalies. Therefore, in the scenario described in Weakness 3, where labeled and unlabeled anomalies come from different distributions, the estimated value of $\alpha$ would be small. In such cases, our approach behaves similarly to existing semi-supervised approaches such as ABC and DeepSAD. Moreover, even if $\alpha$ is not estimated accurately, our approach is empirically more robust to its value than existing baselines, as demonstrated in Figure 2. 
> Weakness 5: The authors mention that the proposed can be applied to self-supervised methods, how does it perform? Our main contribution lies in using PU learning to approximate the loss for normal data using only unlabeled data and labeled anomalies. Hence, as long as the loss function for normal data is defined, our approach can be applied to any anomaly detection models, as described in Section 3.3. For example, it can be applied to self-supervised methods such as MHRot (Hendrycks et al., 2019), NTL (Qiu et al., 2021), and ICL (Shenkar \& Wolf, 2021), which are mentioned in the LOE paper (Qiu et al., 2022). We plan to evaluate the performance of our approach when applied to self-supervised anomaly detection and include the results in the revised paper. > Weakness 6: Can the proposed method be applied to tabular data? How is the performance? Yes. We have confirmed that our approach performs well on KDD99, a well-known tabular dataset for network anomaly detection. We will include additional tabular experiments in the revised version. --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks for your rebuttal. However, I still have major concerns about the novelty of this paper. As the authors claimed, the main contribution lies in utilizing PU learning to estimate the normal distribution, but the PU learning is an existing method. Additionally, the ability to detect unseen anomalies is derived from existing anomaly detection techniques. Furthermore, the authors stated that the proposed method performed well on self-supervised methods and tabular data, but this is neither reflected in the manuscript nor adequately addressed in the rebuttal. Even if it does, the performance still stems from existing methods. Based on these points, I am afraid that I would keep my previous rating. --- Reply to Comment 1.1.1: Comment: Thank you for your continued feedback and the opportunity to clarify our contributions. 
We would like to respectfully address your concerns regarding the novelty and scope of our contributions. > However, I still have major concerns about the novelty of this paper. As the authors claimed, the main contribution lies in utilizing PU learning to estimate the normal distribution, but the PU learning is an existing method. Additionally, the ability to detect unseen anomalies is derived from existing anomaly detection techniques. While it is true that PU learning is a well-established technique, the novelty of our work lies in bridging PU learning and deep anomaly detection in the context of semi-supervised anomaly detection under label contamination, a setting that has not been sufficiently explored. As described in Section 3 of the paper, we propose a new training objective that allows deep anomaly detectors (which typically rely on one-class learning) to benefit from PU learning (which is designed for binary classification). This integration is technically non-trivial, as it requires aligning the assumptions and training dynamics of PU learning and one-class anomaly detection. Regarding the ability to detect unseen anomalies, we fully agree that it stems from the underlying anomaly detector's design. However, we demonstrate empirically in Tables 4 and 6 (and Appendix C) that the integration with PU learning improves the generalization performance to unseen anomalies compared to using PU or the anomaly detector alone. Our contribution is not to invent an entirely new detection mechanism, but rather to develop a robust training framework that works effectively under realistic conditions of contamination and limited supervision. > Furthermore, the authors stated that the proposed method performed well on self-supervised methods and tabular data, but this is neither reflected in the manuscript nor adequately addressed in the rebuttal. Even if it does, the performance still stems from existing methods. We would like to respectfully clarify a misunderstanding. 
In the manuscript and rebuttal, we did not state that our method "performed well" on self-supervised methods. Rather, we stated that our framework can be applied to self-supervised anomaly detection methods, as long as the loss over normal data is defined. This was motivated by the SOEL (Li et al., 2023) paper, which also discusses compatibility with methods such as MHRot, NTL, and ICL. We plan to evaluate our method on these self-supervised baselines in future work and include such results in an extended version of the paper. Regarding tabular data, we did conduct a preliminary experiment using the KDD99 dataset and confirmed that our approach achieves strong performance. We will include these results in the revised version of the paper. Our evaluation strategy is designed to ensure a fair and consistent comparison with SOEL, which also builds on existing anomaly detectors by applying semi-supervised learning. In particular, SOEL is based on DeepSVDD, a widely adopted unsupervised method. Accordingly, we include DeepSVDD and its extensions (DeepSAD, LOE, and SOEL) as baselines. We also evaluate against AE and its semi-supervised variant ABC, as well as Isolation Forest and a standard PU learning method. We believe that this constitutes a comprehensive and up-to-date set of comparisons, including both state-of-the-art methods and foundational models. We hope this clarifies that while our work builds upon existing components, its novelty lies in the unified framework and practical relevance, not in isolated algorithmic innovations. We respectfully ask that these contributions be considered in the final assessment.
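The PU-based approximation of the normal-data loss described in this thread can be sketched from the standard mixture decomposition $p_{\text{unlabeled}} = (1-\alpha)\,p_{\text{normal}} + \alpha\,p_{\text{anomaly}}$. Below is a minimal illustration under that assumption; function and variable names are ours, not the authors':

```python
def estimated_normal_risk(unlabeled_losses, anomaly_losses, alpha):
    """Estimate the mean loss over normal data using only unlabeled data
    and labeled anomalies, via
        E_normal[L] = (E_unlabeled[L] - alpha * E_anomaly[L]) / (1 - alpha).
    The clipping to zero mirrors non-negative PU risk correction
    (Kiryo et al., 2017); the base detector's loss must be non-negative."""
    mean_unlabeled = sum(unlabeled_losses) / len(unlabeled_losses)
    mean_anomaly = sum(anomaly_losses) / len(anomaly_losses)
    estimate = (mean_unlabeled - alpha * mean_anomaly) / (1.0 - alpha)
    return max(estimate, 0.0)
```

This is why the framework is compatible with any base detector whose loss is non-negative and differentiable: the estimator only needs per-sample losses, not labels for the unlabeled normal data.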
Super Deep Contrastive Information Bottleneck for Multi-modal Clustering
Accept (poster)
Summary: To fully explore the complex latent information and interdependencies among multi-modal data, this paper proposes a super deep contrastive information bottleneck method for multi-modal clustering. It incorporates the rich information from the hidden layers of the encoder into the clustering process to comprehensively capture modality features and their associations. Furthermore, a dual contrastive learning strategy is designed to ensure more precise and stable clustering performance. Experimental results show the promising performance of the method, fully validating its advantages and well-designed framework. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence in the experimental parts in Section 3. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem in this paper. This paper proposes a super deep contrastive information bottleneck method for multi-modal clustering, and the experimental results show the promising performance of the method, fully validating its advantages and well-designed framework. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: I have checked the soundness/validity of the experimental designs and analyses in the experimental sections. Supplementary Material: There is no supplementary material for this paper. Relation To Broader Scientific Literature: This paper proposes a novel super deep contrastive information bottleneck method for multi-modal clustering, and it is different from prior related findings. Essential References Not Discussed: There are no essential references missing from the discussion in this paper. Other Strengths And Weaknesses: This paper proposes a novel super deep contrastive information bottleneck for multi-modal clustering, which is new and well-organized. The experiments are well-designed and have shown its advantages over existing methods. 
However, there are still some concerns shown below: 1. Hidden layer information integration mechanism: Is it a shared encoder or an independent encoder? How is the hidden layer information of different modalities fused? 2. Improper language expression: The lengthy sentence structure affects readability. 3. The experimental data is not public: it is recommended to provide a data download link to improve the reproducibility of the paper. 4. Shortcomings are missing: The shortcomings of the proposed method are not given in the paper. Generally, for a conference paper, advantage and limitation analyses are major parts. Other Comments Or Suggestions: No other comments or suggestions. Questions For Authors: Please see the above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the insightful comments and constructive suggestions. We have carefully revised the whole manuscript and provided detailed responses to each point below. **Q1: Hidden layer information integration mechanism: Is it a shared encoder or an independent encoder? How to fuse the hidden layer information of different modalities?** ***Response:*** Thank you for the insightful question. Below, we clarify the integration mechanism of hidden layer information in our approach. Instead of a shared encoder, we adopt an independent encoder for each modality. Different modalities usually have different statistical properties, making it difficult to directly fuse their hidden layer information at an early stage. This design allows each modality to learn modality-specific feature representations, which is beneficial given the different statistical properties of various modalities. Instead of performing direct fusion at the hidden layer level, we compare the hidden layer representations of different modalities to extract shared and complementary information. Specifically, each encoder learns to align its representations by evaluating the relationship between its hidden layer features and the hidden layer features of other modalities. We achieve this by maximizing the mutual information of hidden layer features between different modalities, which encourages aligned representations while preserving modality-specific features. By adopting this strategy, we ensure that the final multimodal representations are both consistent and rich, leading to a more efficient feature learning process. We hope this explanation clarifies our integration mechanism. **Q2: Improper language expression: The lengthy sentence structure affects readability.** ***Response:*** Thank you for the constructive comments on sentence structure. To enhance the readability of the manuscript, we have made extensive revisions. 
We focused on checking redundant sentences and reducing them through segmentation or conciseness to improve clarity. For example, the sentence in the *'Introduction'* section, *'The proposed SDCIB can efficiently and meticulously mine the latent information between modalities through the hidden layers, and simultaneously focus on both feature distribution and clustering assignment to better capture the inherent structure of the data,'* has been revised to: *'The proposed SDCIB efficiently mines the latent information between modalities through the hidden layers. Meanwhile, it simultaneously considers both feature distribution and clustering assignment to better capture the inherent structure of the data.'* In the final version, we will adjust lengthy expressions into clearer and more readable sentences. **Q3: The experimental data is not public: it is recommended to provide a data download link to improve the reproducibility of the paper.** ***Response:*** We appreciate your suggestion and realize the importance of reproducibility in scientific research. Currently, our experimental section already cites the original source of the dataset to provide transparency of its origin. However, we also understand that providing a direct download link can further improve the accessibility of readers. We will revise the manuscript to add specific download links to ensure that it is publicly available and reproducible. **Q4: Shortcoming is missing: The shortcomings of the proposed method are not given in the paper. Generally, for a conference paper, advantage and limitation analysis are the major parts.** ***Response:*** Thank you for the insightful comment. Based on your suggestion, we have updated the conclusion to include the following limitations of the proposed SDCIB: * The proposed SDCIB demonstrates limited effectiveness when dealing with incomplete data, particularly when certain modalities or features are missing. 
In these cases, the model may face difficulties in accurately capturing the relationships between modalities, which could result in less optimal clustering outcomes. * The method requires the number of clusters to be predetermined, which may be a limitation as it assumes prior knowledge of the data's structure, making it less flexible in scenarios where this information is unavailable. * The proposed SDCIB primarily relies on batch learning, which may not be suitable for certain applications, such as streamed multi-modal data. While we recognize that these limitations could impact certain scenarios, we believe that future research can explore ways to address them, with the goal of improving the method’s robustness and expanding its range of applications. Thanks again for the valuable suggestions provided by the reviewer. The modifications will be added to the final version.
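For reference, the classical information bottleneck objective that SDCIB-style methods build on can be written generically as follows (the standard Tishby-style formulation, not the paper's exact loss):

$$\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta\, I(Z; Y),$$

where $Z$ is the compressed representation of the input $X$, $Y$ is the clustering-relevant variable, and $\beta$ trades off compression of $X$ against retention of information about $Y$.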
Summary: This paper proposes a Super Deep Contrastive Information Bottleneck (SDCIB) for multi-modal clustering, designed to fully exploit the latent information in multi-modal data. SDCIB integrates the rich information from the hidden layers of the encoder into the clustering process, optimizing both feature distribution and clustering assignments through contrastive learning. Experimental results on four multi-modal datasets demonstrate that SDCIB outperforms existing approaches. Claims And Evidence: The claims made in the submission are well-supported by clear and convincing evidence. The experimental results on multiple multi-modal datasets consistently demonstrate that the SDCIB method outperforms existing approaches. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the multi-modal clustering problem. SDCIB effectively leverages the rich information from the encoder's hidden layers, optimizing both feature distribution and clustering assignments through contrastive learning, which aligns to improve clustering performance in multi-modal settings. Theoretical Claims: The paper does not present any formal proofs or theoretical claims. Experimental Designs Or Analyses: I have reviewed the soundness and validity of the experimental designs and analyses in Section 3, including subsections 3.1 to 3.9. The experimental setup, dataset selection, evaluation metrics, and comparative analysis are appropriately designed to support the paper's claims. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper proposes a Super Deep Contrastive Information Bottleneck (SDCIB) method for multi-modal clustering in the broader scientific literature. It improves upon existing ideas in multi-modal clustering, contrastive learning, and information bottleneck, offering more efficient and powerful clustering solutions for multi-modal data. 
Essential References Not Discussed: Critical references have been included. Other Strengths And Weaknesses: The paper presents a well-structured argument and is written with clarity. Below are my detailed comments: 1. The paper discusses the use of deeper hidden layers to explore relationships, but the specific advantages of this approach are not explicitly outlined. I recommend the authors provide a more detailed explanation of the benefits of using deeper layers in Section 2. 2. The authors utilize MINE to estimate mutual information. It would be helpful if the paper included a discussion of why MINE was chosen over other potential estimation methods. Were alternative methods considered and, if so, why were they not chosen? 3. There are minor formatting and writing inconsistencies that could be addressed to improve the paper. For instance, ensure consistency in the usage of terms like "Information" on Page 3, and correct the reference to "IJCAL" in the bibliography. Other Comments Or Suggestions: I have one suggestion regarding the abstract: the authors are encouraged to provide quantitative descriptions of the results. For example, "The experiment shows that the method is effective" should be changed to "The accuracy of the method on the X dataset is improved by 5%". Questions For Authors: Please refer to the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for the insightful comments and constructive suggestions. **Q1: The paper discusses the use of deeper hidden layers to explore relationships, but the specific advantages of this approach are not explicitly outlined.** ***Response:*** Thank you for the insightful comment. The use of hidden-layer information provides the following advantages: * Richer Feature Representations: As the depth of the network increases, each hidden layer progressively refines the feature representations, capturing both low-level and high-level structures. This enables a more informative and structured representation, which is particularly beneficial for clustering tasks. * Improved Feature Compression: Deeper layers allow for a more effective compression process, where redundant information is filtered out while preserving essential details. By leveraging intermediate hidden-layer information, the model learns a more compact and meaningful feature representation. * Optimization Benefits in the Information Bottleneck Framework: Introducing hidden-layer information in the IB framework helps regulate the trade-off between compression and retention of relevant information, leading to more discriminative and generalizable clustering results. We will incorporate a more detailed discussion of these aspects in Section 2 to clarify the advantages of using deeper hidden layers. **Q2: The authors utilize MINE to estimate mutual information. It would be helpful if the paper included a discussion of why MINE was chosen over other potential estimation methods. Were alternative methods considered and, if so, why were they not chosen?** ***Response:*** Thank you for the suggestion. During the experiment, we did consider other methods such as variational mutual information estimation and InfoNCE. However, for mutual information $I(A;B)$, $A$ is a feature and $B$ is the label of the cluster, which results in the dimension of $A$ being much larger than that of $B$. 
We found that: * The variational method requires manual alignment of the dimensions of $A$ and $B$, which may cause information loss during the alignment process. * The InfoNCE method requires the construction of a large number of negative samples, and due to the small dimension of $B$, we found that this may lead to insufficient discrimination between positive and negative samples, thus affecting the accuracy of the estimation. In contrast, MINE provides a more robust mutual information estimation method, especially when there is a significant dimensional difference between the two variables. This method can estimate more efficiently and accurately without the need for dimensional alignment or worrying about insufficient discrimination of negative samples. Therefore, we chose MINE. **Q3: There are minor formatting and writing inconsistencies. For instance, ensure consistency in the usage of terms like "Information" on Page 3, and correct the reference to "IJCAL" in the bibliography.** ***Response:*** Thank you for the careful review. We have carefully reviewed the manuscript and made the necessary revisions. For example, the two instances of *'Information'* on page 3 had different formatting, and we have standardized them to title case as *'Hidden-layer Information'* and *'Consistency Information.'* Additionally, regarding the reference to *'IJCAL'* that you mentioned, we have corrected it in the references to: Nie, F., Li, J., and Li, X. Self-weighted multiview clustering with multiple graphs. In *Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI2017)*, pp. 2564–2570, 2017. Furthermore, we have checked all other references to enhance its overall precision. **Q4:The authors are encouraged to provide quantitative descriptions of the results. 
For example, "The experiment shows that the method is effective" should be changed to "The accuracy of the method on the X dataset is improved by 5%"** ***Response:*** Thank you for the detailed feedback on the content of the abstract. We have revised the abstract accordingly. Specifically, we changed the sentence *'We conduct experiments on 4 multi-modal datasets.'* to *'We conducted experiments on 4 multi-modal datasets and the accuracy of the method on the ESP dataset improved by 9.3%. The results demonstrate the superiority and clever design of the proposed SDCIB.'* This adjustment enhances the precision of the expression and more intuitively highlights the effectiveness of the proposed SDCIB. Thanks again. The modifications will be added to the final version.
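The Donsker–Varadhan bound that MINE optimizes (the estimator discussed in Q2 above) can be illustrated with a toy numeric sketch. Here the statistics network $T$ is replaced by a fixed hand-picked function, so this shows the bound itself rather than MINE's training loop; all names are illustrative:

```python
import math

def dv_lower_bound(joint_samples, marginal_samples, T):
    """Donsker-Varadhan lower bound on mutual information:
    I(A;B) >= E_{p(a,b)}[T(a,b)] - log E_{p(a)p(b)}[exp(T(a,b))].
    MINE parameterizes T with a neural network trained to tighten this
    bound; here T is a fixed toy function for illustration."""
    e_joint = sum(T(a, b) for a, b in joint_samples) / len(joint_samples)
    e_marginal = sum(math.exp(T(a, b)) for a, b in marginal_samples) / len(marginal_samples)
    return e_joint - math.log(e_marginal)

# Toy example: B is a copy of a fair binary A, so the true MI is log 2.
joint = [(0, 0), (1, 1)]                    # samples from p(a, b)
product = [(0, 0), (0, 1), (1, 0), (1, 1)]  # samples from p(a) p(b)
bound = dv_lower_bound(joint, product, lambda a, b: 2.0 if a == b else 0.0)
```

With exact expectations over this toy distribution, the bound (about 0.57 nats) stays below the true mutual information of $\log 2 \approx 0.69$ nats; MINE trains $T$ to push the bound as close to the true value as possible.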
Summary: This paper proposes an information bottleneck based method named SDCIB for addressing the multi-modal clustering problem, which aims to mine the complex correlations and interdependencies among modalities. It mainly contains two aspects: first, it incorporates the different hidden layers into loss functions to fully mine the relationships among modalities. Then, it also explores the consistency information among clustering assignments of modalities. The experimental results show the superiority of the proposed method. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No proofs or theoretical claims in this paper. Experimental Designs Or Analyses: I have carefully checked the soundness and validity of the experimental designs and their analysis, including all the subsections in Sec. 3. Supplementary Material: No supplementary material for this paper. Relation To Broader Scientific Literature: This paper proposes an information bottleneck based method named SDCIB for addressing the multi-modal clustering problem. I find the above to be the key contribution, which has not been proposed in prior findings or results. Essential References Not Discussed: The related works are essential to understand the key contributions, and no important ones are missing from the paper. Other Strengths And Weaknesses: Strengths: 1. The paper presents a well-articulated motivation, with its effectiveness rigorously validated through multiple experiments. 2. This paper exhibits a notable advancement in the multi-modal clustering field, demonstrating superior performance over existing methods. 3. The authors offer a comprehensive and well-structured explanation of the method, effectively highlighting its significance. Weaknesses: 1. The full names of some abbreviations are missing, such as KL, which may influence the readability and the understanding of the paper. 2. It is seen that the improvement of the proposed method over other methods is significant. 
Will the source code be released to the public to enhance the development of the multi-modal clustering community? 3. Are there any limitations of the proposed method? The authors are encouraged to give them in the conclusion. Other Comments Or Suggestions: I have no other comments or suggestions. Please see my comments above. Questions For Authors: I have no other questions for authors, all my questions are shown Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the insightful comments and constructive suggestions. We have carefully revised the whole manuscript and provided detailed responses to each point below. **Q1: The full name of some abbreviations is missing, such as KL, which may influence the readability and the understanding on the paper.** ***Response:*** Thank you for pointing out this issue. We have verified the full names and abbreviations you mentioned one by one and made the following improvements. For the term *'KL divergence'* you pointed out, we have provided its full form as *'Kullback-Leibler divergence'* when it first appeared to ensure the professionalism and accuracy. In addition, we conducted a thorough check of other related terms and discovered that the abbreviation *'MINE'* was also not expanded the first time it appeared. Its full name is *'Mutual Information Neural Estimation'* and we have added this clarification in the manuscript. We will ensure that all key terms in the manuscript provide their full names when they first appear and are clearly presented in the final version. **Q2: It is seen that the improvement of the proposed method over other methods is significant. Will the source code be released to the public to enhance the development of the multi-modal clustering community?** ***Response:*** We sincerely appreciate your recognition of the effectiveness of the proposed SDCIB. We fully understand the importance of code availability in promoting research transparency and advancing the field of multi-modal clustering. Currently, we have organized and optimized the code to ensure its clarity and readability. We plan to release the source code after the official publication of the paper, hoping that it will contribute to the research community and support further advancements in this field. **Q3: Are there any limitations of the proposed method? 
The authors are encouraged to give them in the conclusion.** ***Response:*** We sincerely appreciate your valuable suggestion. In response, we have revised the conclusion to include the following limitations of the proposed SDCIB: * The proposed SDCIB shows limited performance when handling incomplete data samples, particularly when certain modalities or features are missing. In such cases, the model may struggle to accurately learn the relationships between the modalities, which could lead to suboptimal clustering results. * The number of clusters must be known in advance, as in most existing multi-modal clustering methods. This requirement can be restrictive, as it assumes prior knowledge of the data's underlying structure. * The proposed SDCIB primarily relies on batch learning, which may not be suitable for certain applications, such as streamed multi-modal data. Additionally, we acknowledge that these limitations may affect certain scenarios, and we will explore solutions to address them, with the aim of further enhancing the method’s robustness and applicability. Thanks again for the valuable suggestions provided by the reviewer. The modifications will be added to the final version.
Summary: In multi-modal clustering, effectively capturing the complex relationships between modalities remains a challenge. To solve this, this paper proposes a new super deep contrastive information bottleneck method to maximize the utilization of latent information in multi-modal data. It first introduces hidden layer information from the encoder into the clustering process to enhance modality feature representation; then, it proposes a dual contrastive learning optimization strategy. The experimental results demonstrate that the method not only enhances clustering performance but also exhibits strong applicability in multi-modal data modeling. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: No theoretical claims here. Experimental Designs Or Analyses: Yes, I have checked. By conducting experiments on four multi-modal datasets, the method significantly outperforms existing methods. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: This paper proposes a new super deep contrastive information bottleneck method to solve the multi-modal clustering problem. Essential References Not Discussed: No essential related works are missing in the paper. Other Strengths And Weaknesses: The proposed method, named SDCIB, is based on an information-theoretic approach called the information bottleneck and is organized in a rigorous, theoretically sound way. The description is clear and well written, and the method is novel enough for this conference. I have my comments below: 1) It is good to see that the authors give some recent IB works on the MMC problem; it is suggested to give a deeper analysis of their limitations. Although the differences from the proposed method are given now, some more analysis is also needed. 2) Some equation references are not proper, such as Eq. 2, Eq. 8. A bracket is missing throughout the whole manuscript.
3) Some English usage about the writing details is suggested to be improved, such as 'cluster number K'. Other Comments Or Suggestions: I have no other comments and suggestions. Questions For Authors: I have no other questions for this paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the insightful comments and constructive suggestions. We have carefully revised the whole manuscript and provided detailed responses to each point below. **Q1: It is good to see that the authors give some recent IB works on the MMC problem; it is suggested to give a deeper analysis of their limitations. Although the differences from the proposed method are given now, some more analysis is also needed.** ***Response:*** Thank you for the insightful comments on the information bottleneck work in this paper. Based on your suggestions, we conducted a more in-depth analysis of the limitations of recent IB works on MMC problems. These analyses were added to Section 2.1 of the manuscript as follows: Federici et al. [1] proposed a multi-modal IB method that can identify non-shared information between two modalities, but it only explores the correlation of different modalities through feature distribution, ignoring the consistency of cluster assignment, making the learned feature representation unfriendly to downstream clustering tasks. Yan et al. [2] proposed a multi-modal IB method that uses shared representations of multiple modalities to eliminate private information of a single modality. Yan et al. [3] further proposed an incremental IB method that builds a knowledge base to solve the clustering problem of incremental modalities. Both of the above works considered the consistency of feature distribution and cluster assignment at the same time, but they failed to consider the correlation between feature distribution and clustering results. All the above MMC IB methods ignore the rich information contained in the hidden layers of the encoder and fail to explicitly utilize it. The above limitations motivate the proposed SDCIB. **References** [1]: Federici, M., Dutta, A., Forré, P., Kushman, N., and Akata, Z. Learning robust representations via multi-view information bottleneck. arXiv preprint arXiv:2002.07017, 2020.
[2]: Yan, X., Mao, Y., Ye, Y., and Yu, H. Cross-modal clustering with deep correlated information bottleneck method. IEEE Transactions on Neural Networks and Learning Systems, 2023. [3]: Yan, X., Mao, Y., Ye, Y., and Yu, H. Incremental multiview clustering with continual information bottleneck method. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024. **Q2: Some equation references are not proper, such as Eq. 2, Eq. 8. A bracket is missing throughout the whole manuscript.** ***Response:*** Thank you for the detailed review. In response to your specific suggestion regarding the equation reference format, we have made thorough revisions to the whole manuscript. First, based on your feedback, we corrected Eq. 2 and Eq. 8 to the standardized format, Eq. (2) and Eq. (8), respectively. Additionally, we conducted a comprehensive review of the whole manuscript to ensure consistency and accuracy in all equation references. Furthermore, in order to further improve the quality of the manuscript, we not only addressed the equation reference issue but also meticulously checked and adjusted all formatting details throughout the whole manuscript (including formulas, symbols, references, etc.) to ensure greater precision and rigor. **Q3: Some English usage about the writing details is suggested to be improved, such as 'cluster number K'.** ***Response:*** Thank you for your attention to the language details in our manuscript and for the helpful suggestions. We have carefully reviewed the whole manuscript for language issues and made detailed corrections where needed. As you suggested with *'cluster number $K$'*, we have corrected it to *'the number of clusters $K$'*, making it more standard.
Additionally, we have checked for similar language issues throughout the manuscript and have made the necessary adjustments, such as *'the parameter $α$, and the parameter $β$'*, which has now been adjusted to a more fluent *'the parameters $α$ and $β$'* to improve the accuracy and naturalness of the expression. To ensure the overall language quality, we have conducted an in-depth review of the whole manuscript's English expressions, optimizing wording and grammar to make the overall presentation more standardized and clear. Thanks again for the valuable suggestions provided by the reviewer. The modifications will be added to the final version.
Inverse Bridge Matching Distillation
Accept (poster)
Summary: This work adapts the technique of score distillation from diffusion models to diffusion bridges for accelerated generation. The empirical performance demonstrates that the proposed approach is superior to existing baselines on multiple image-to-image tasks. Claims And Evidence: Most of the claims are well supported, though some comments on related work need more careful examination. Methods And Evaluation Criteria: The proposed method and evaluation make sense for accelerating the sampling of diffusion bridges. Theoretical Claims: The theoretical claims are checked in detail and are correct. However, the equivalent derivation has already been proposed in previous work, and its close connection with the proposed method in this work should at least be discussed (see weaknesses). Experimental Designs Or Analyses: The experiments are well-executed. Supplementary Material: I reviewed all the supplementary materials. Relation To Broader Scientific Literature: This research is related to the broader literature on solving inverse problems. Essential References Not Discussed: All key papers are cited, but they are not discussed thoroughly (which they should be) in my opinion (see weaknesses). Other Strengths And Weaknesses: ## Strengths * This work demonstrates enough empirical significance in accelerating the sampling of diffusion bridges and designing some key techniques (e.g., a noise-conditioned one-step generator) tailored for distilling from diffusion bridges. ## Weaknesses * In my opinion, although the authors mentioned some related works on distillation for diffusion models [1, 2], these works deserve a much more detailed discussion given their strong relevance — or even the use of essentially identical distillation loss functions — to the method proposed in this paper.
For example, the high-level objective function of this paper is essentially the Fisher divergence used in [1], and the tractable training objective derived here is mathematically equivalent to those in [1, 2] (e.g., the loss function in this paper corresponds to the SiD ($\alpha=0.5$) loss in [1] and to the combined $\mathcal{L}_1 + \mathcal{L}_2$ loss in [2]). The derivation should be essentially the same as what has been done in [2] (same high-level objective and same final loss function). Although the focus shifts from diffusion models to diffusion bridges, and the motivation for adopting a Fisher divergence-like objective is different (which I think is a positive contribution if you can elaborate more on the connection between the KL for path measures and Fisher divergence), I still believe that better contextualization of these related works is necessary and would greatly benefit the community’s understanding of this research direction. * I feel the claim that "previously proposed samplers and consistency models can not work with unconditional bridges" needs more justification than provided and should be re-examined. For example, [3] develops DDIM-like samplers for I2SB, which is an unconditional bridge. For consistency models, it only requires that we can simulate the PF-ODE, which I believe is plausible once we know the drift of the unconditional bridge, given it is a Markovian process. I am happy to raise my score if these are properly addressed. [1] Zhou, Mingyuan, et al. "Score identity distillation: Exponentially fast distillation of pretrained diffusion models for one-step generation." (ICML 2024). [2] Huang, Zemin, et al. "Flow generator matching." arXiv:2410.19310 (2024). [3] Wang, Yuang, et al. "Implicit Image-to-Image Schrodinger Bridge for Image Restoration." arXiv:2403.06069 (2024). Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer kKRF, thank you for your comments. **(1) Relation to SiD [1] and FGM [2].** Following your suggestion, we will extend the discussion of the related work in the main text. Below, we present the preliminary version of the extension: Unlike SiD [1] and FGM [2], we focus on diffusion-bridge models used for data-to-data translation and not for generation from noise. Furthermore, our high-level objective is the KL divergence $\text{KL}(\text{BM}(\Pi_{\theta})||M^*)$ between path measures of teacher model $M^*$ and path measure $\text{BM}(\Pi_{\theta})$ given by the generator $G_{\theta}$, which differs from Fisher Divergence used in SiD [1]. The motivation of our high-level objective is to restore alignment between data pairs $(x_0, x_T)$. We derive our final tractable objective using different techniques (see Appendix A), i.e., we do not use flow product or score product identities as in [1, 2] but hypothesize that analogous identities can be derived for diffusion bridge models. Our final objective can be rewritten similarly to the final objective used in SiD [1, Eq. 23] ($\alpha=0.5$) and FGM [2, Eqs. 4.11, 4.12]: $$ \mathbb{E}\_{t, p\_{\theta}(x\_0, x\_t, x\_T)} \Big[ \underbrace{\|\widehat{x}\_0^*(x\_t, t, x\_T) - \widehat{x}\_0^{\phi}(x\_t, t, x\_T) \|^2}\_{\text{quadratic term}} + 2 \left< \widehat{x}\_0^{*}(x\_t, t, x\_T) - \widehat{x}\_0^{\phi}(x\_t, t, x\_T), \widehat{x}\_0^{\phi}(x\_t, t, x\_T) - x\_0 \right> \Big] $$ However, in both SiD [1] (for $\alpha=1.0, 1.2$ used in experiments) and FGM [2], the authors either omitted the quadratic term or even used a negative coefficient for it in image experiments since it introduced instability. Unlike SiD [1] and FGM [2], we do not omit any parts of the original loss function and use it as the theory provides it.
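As a quick sanity check of the displayed objective, the quadratic plus inner-product form collapses algebraically to a difference of squared errors, i.e., "teacher error minus auxiliary-model error". A minimal numpy sketch (illustrative tensors and names, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
# illustrative stand-ins: teacher prediction, auxiliary ("fake") model
# prediction, and the generator's own sample x_0 (shapes are arbitrary)
x0_teacher = rng.normal(size=(16, 8))  # \hat{x}_0^*(x_t, t, x_T)
x0_phi = rng.normal(size=(16, 8))      # \hat{x}_0^{\phi}(x_t, t, x_T)
x0 = rng.normal(size=(16, 8))          # x_0 produced by the generator

# per-sample loss exactly as displayed: quadratic term + inner-product term
quad = np.sum((x0_teacher - x0_phi) ** 2, axis=1)
inner = 2.0 * np.sum((x0_teacher - x0_phi) * (x0_phi - x0), axis=1)
loss = quad + inner

# expanding the square shows this equals ||teacher - x0||^2 - ||phi - x0||^2
diff = np.sum((x0_teacher - x0) ** 2, axis=1) - np.sum((x0_phi - x0) ** 2, axis=1)
assert np.allclose(loss, diff)
```

This identity also makes concrete why dropping the quadratic term (as in some SiD/FGM configurations mentioned above) changes the objective rather than merely rescaling it.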
**Relation between KL divergence of path measures and Fisher Divergence.** To highlight the difference between KL divergence of path measures (which we use) and Fisher Divergence (which is used in SiD [1]), consider two reverse-time diffusions $D_1$ and $D_2$ given by the same starting distribution $p(x_T)$ and SDEs: $$ D_1: dx\_t = v(x\_t, t)dt + g(t)d\bar{w}\_t, \quad D_2: dx\_t = \widehat{v}(x\_t, t)dt + g(t)d\bar{w}\_t. $$ Denote marginal densities as $p(x\_t)$ for $D\_1$ and $\widehat{p}(x\_t)$ for $D\_2$; then the KL divergence and Fisher divergence are given as: $$ \text{KL}(D\_1||D\_2) = \mathbb{E}\_{t, p\_t(x\_t) } \Big[\frac{1}{2g^2(t)}\|v(x\_t, t) - \widehat{v}(x\_t, t) \|^2 \Big] + \underbrace{\text{KL}(p(x\_T)||\widehat{p}(x\_T))}\_{= 0 \text{ if } p(x\_T) = \widehat{p}(x\_T)} $$ $$ \mathcal{L}\_{\text{SiD}}(D\_1, D\_2) = \mathbb{E}\_{t} \Big[\underbrace{\mathbb{E}\_{p(x\_t)}\|\nabla\_{x\_t}\log p(x\_t) - \nabla\_{x\_t}\log \widehat{p}(x\_t)\|^2}\_{\text{Fisher divergence between $p(x\_t)$ and $\widehat{p}(x\_t)$}}\Big] $$ Note that in SiD [1], the authors use the time average of Fisher Divergence, which compares only marginal distributions $p(x_t)$ and $\widehat{p}(x_t)$. However, two path measures with the same marginal distributions might not be equal. As a result, the minimization of Fisher Divergence with the teacher does not guarantee that the learned model will transform data in the same way as the teacher model. Nevertheless, Fisher Divergence allows one to build a Generator $x_0 = G_{\theta}(z)$ to produce data $p_{\theta}(x_0) \approx p_{\text{data}}(x_0)$, by matching marginals of data and generated samples. In contrast, we use KL-divergence between two path measures, which is zero if and only if two path measures are identical, since we need to get generations aligned to data coupling.
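The Girsanov-type KL formula above can be checked numerically on a toy example. Below is a minimal Monte Carlo sketch (illustrative drifts chosen for this demo, not from the paper; both processes are treated as plain forward-time SDEs for simplicity) comparing the simulated KL integral against its closed form for a pair of Ornstein-Uhlenbeck drifts:

```python
import numpy as np

# D1: dx = -x dt + dw,   D2: dx = -2x dt + dw,   x_0 = 1, g = 1, T = 1.
# KL(D1||D2) = E_{D1}[ int_0^1 ||v - v_hat||^2 / (2 g^2) dt ]
#            = E_{D1}[ int_0^1 x_t^2 / 2 dt ]
rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 400
dt = 1.0 / n_steps

x = np.ones(n_paths)
kl_mc = 0.0
for _ in range(n_steps):
    kl_mc += np.mean(x ** 2) / 2.0 * dt  # accumulate the KL integrand
    x += -x * dt + np.sqrt(dt) * rng.normal(size=n_paths)  # Euler step of D1

# closed form: E[x_t^2] = (1 + e^{-2t}) / 2 under D1, hence
# KL = int_0^1 (1 + e^{-2t}) / 4 dt = 1/4 + (1 - e^{-2}) / 8
kl_exact = 0.25 + (1.0 - np.exp(-2.0)) / 8.0
assert abs(kl_mc - kl_exact) < 0.02
```

The estimate agrees with the closed form up to Monte Carlo and Euler discretization error, illustrating that the path-measure KL is an expectation of the squared drift mismatch along teacher trajectories.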
**(2) Acceleration of the unconditional diffusion bridge models.** To obtain PF-ODE for the diffusion bridge model, one needs to subtract forward and reverse SDE drifts [4, end of Sec. 4]. For the conditional case, the drift of a forward process $p(x_0|x_T) \rightarrow \delta(x_T)$ is known analytically from Doob h-transform. In the unconditional case, the drift of a forward unconditional diffusion $p_0(x_0) \rightarrow p_T(x_T)$ is unknown. Hence, to restore PF-ODE in the unconditional case, one must learn an additional teacher model for forward-time translation, which is time-consuming. We will add that I3SB [3] is used to accelerate the unconditional bridge model. Their sampler coincides with the DBIM sampler but replaces the conditional model with an unconditional one to sample $\widehat{x}_0$. Their sampler provides good quality for moderate NFE (25+) but performs worse than distillation methods in a single NFE regime, e.g., for JPEG-10, their FID is 17, while ours is 3.8 (we will add it to the text). **Concluding remarks**. We would be grateful if you could let us know if our explanations have been satisfactory. If so, we kindly ask that you consider increasing your rating. We are also open to discussing any other questions you may have. References: [1,2,3] the same. [4] Shi Y. et al. Diffusion Schrödinger Bridge Matching. --- Rebuttal Comment 1.1: Comment: Thank the authors for their rebuttal. Please make sure to add the discussion in the main text. I have a few follow-up comments: * I agree with your arguments about the difference between KL w.r.t. path measures and Fisher divergence, but I also think that their connection under the setting of diffusion models/diffusion bridges should be explicitly discussed. For diffusion models, the reverse drift is fully characterized by the marginal score function $\nabla_{x_t} \log p_t(x_t) = \mathbb E_{x_0 | x_t, t}[\nabla_{x_t} \log p(x_t | x_0)]$ , and thus, they are equivalent. 
For diffusion bridges, one can draw similar connections as the optimal drift is also given by $\mathbb E_{x_0 | x_t, t}[\nabla_{x_t} \log q(x_t | x_0)]$. The only difference I have seen here is that $\mathbb E_{x_0 | x_t, t}[\nabla_{x_t} \log q(x_t | x_0)]$ is no longer equal to the marginal score as in the case of diffusion models. Given that, the derivation of IBMD wouldn't be fundamentally different from the diffusion model case. * Besides, I am hesitant to accept the argument "We derive our final tractable objective using different techniques (see Appendix A), i.e., we do not use flow product or score product identities as in [1, 2]". To me, the core technique used for the derivations in Appendix A is also the score identity used in [1, 2], i.e., $\mathbb E_{x_0, x_t} [\langle f(x_t), \nabla_{x_t} \log p_{t|0}(x_t | x_0) \rangle] = \mathbb E_{x_t} [\langle f(x_t), \mathbb E_{x_0 | x_t}[\nabla_{x_t} \log p_{t|0}(x_t | x_0)] \rangle]$. In conclusion, my point is that what matters here is the connection (or the technical equivalence) between them under this specific setup rather than their difference, which is valid in general but does not apply much in this setup. That said, I appreciate your arguments about the differences between them, and I think this is a nice contribution to the paper. Nevertheless, I have adjusted my rating, and I encourage the authors to discuss the relationship between their methods and the mentioned score distillation techniques for diffusion models more thoroughly in the revised version. (minor) * I agree with your argument that learning the PF-ODE that characterizes the marginal distribution of the unconditional bridge is demanding. But I am wondering whether, for generation purposes, one may establish the ODE w.r.t. the conditional distribution of the unconditional bridge. For example, the deterministic sampler in [3] should also correspond to an ODE trajectory? --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback.
**(1) New version of the relation.** We extended the part about the relation between KL divergence of path measures and Fisher Divergence based on your feedback. We included more details specific to the diffusion and diffusion bridge models. **Relation between KL divergence of path measures and Fisher Divergence.** To highlight the difference between KL divergence of path measures (which we use) and Fisher Divergence (which is used in SiD [1]), consider two reverse-time diffusions $D_1$ and $D_2$ given by the same starting distribution $p(x_T)$ and SDEs: $$ D\_1: dx\_t = v(x\_t, t)dt + g(t)d\bar{w}\_t, \quad D\_2: dx\_t = \widehat{v}(x\_t, t)dt + g(t)d\bar{w}\_t.$$ Let $p(x_t)$ and $\widehat{p}(x_t)$ be the corresponding marginals. Then the KL divergence and Fisher divergence are given by: $$ \text{KL}(D\_1||D\_2) = \mathbb{E}\_{t, p\_t(x\_t)} \left[\frac{1}{2g^2(t)}\|v(x\_t, t) - \widehat{v}(x\_t, t)\|^2 \right] + \underbrace{\text{KL}(p(x\_T)||\widehat{p}(x\_T))}_{= 0 \text{ if } p(x\_T) = \widehat{p}(x\_T)}, $$ $$ \mathcal{L}\_{\text{SiD}}(D_1, D_2) = \mathbb{E}\_t \left[\mathbb{E}\_{p(x_t)}\|\nabla\_{x_t} \log p(x\_t) - \nabla\_{x\_t} \log \widehat{p}(x_t)\|^2 \right]. $$ In SiD [1], Fisher divergence is averaged over time and compares only marginal distributions $p(x_t)$ and $\widehat{p}(x_t)$ of two path measures. However, two path measures with the same marginal distributions might not be equal — thus, in general, minimizing Fisher divergence does not guarantee that $D_1 \approx D_2$ as stochastic processes. In the case of classical diffusion models where the forward drift $f(x_t, t)$ is fixed, reverse drifts are fully determined by marginal score functions: $$ \widehat{v}(x_t, t) = f(x_t, t) - g^2(t)\nabla_{x_t} \log \widehat{p}(x_t), \quad v(x_t, t) = f(x_t, t) - g^2(t)\nabla_{x_t} \log p(x_t).
$$ Substituting these into the KL expression shows that in this specific setting — with a fixed forward SDE — KL divergence between path measures becomes equivalent (up to a constant) to the time-averaged Fisher divergence between the marginals: since $v - \widehat{v} = -g^2(t)\big(\nabla_{x_t}\log p(x_t) - \nabla_{x_t}\log \widehat{p}(x_t)\big)$, the KL reduces to $\mathbb{E}_{t, p(x_t)}\big[\frac{g^2(t)}{2}\|\nabla_{x_t}\log p(x_t) - \nabla_{x_t}\log \widehat{p}(x_t)\|^2\big]$. This explains why Fisher-based methods like SiD [1] may succeed in this context. However, this equivalence breaks down in the case of unconditional bridge matching. Here, the forward drift $f(x_t, t)$ is not fixed and depends on the data coupling $p(x_0, x_T)$. In turn, the forward drift $f_\theta(x_t, t)$ for the generated coupling $p_\theta(x_0, x_T)$ also depends on $\theta$. As a result, $f(x_t, t) \neq f_\theta(x_t, t)$, and the reverse drifts cannot be expressed solely in terms of marginal scores. Hence, KL divergence in the case of unconditional bridge matching is not equivalent to Fisher divergence between marginals. This difference is expected since, in the case of an unconditional diffusion bridge, one does not have a fixed forward process, which specifies the "dynamic part" of the measure. This highlights the importance of using KL divergence between path measures as a high-level objective instead of the previously used Fisher Divergence. **(2) Regarding score and flow product identities.** We agree that the used property is similar to score identity. We will remove this sentence in the extension of related work. **(3) PF-ODE.** In I3SB [3], the authors state in Theorem 1 that their sampler coincides with the PF-ODE of the Variance Exploding fixed Schrödinger Bridge (SB). By fixed, the authors assume that we consider SB between some distribution $p(x_0|x_T)$ and $\delta(x_T)$ for a fixed $x_T$. **This SB coincides with the forward diffusion given by the Doob h-transform obtained for a VE SDE variance process, i.e., the one considered in the DDBM/DBIM papers.** It follows from the result that the Schrödinger Bridge for a VE SDE is the unique process that is Markovian and is a mixture of VE SDE bridges.
Both conditions are satisfied since the authors of DDBM/DBIM use Doob h-transform for VE SDE. This result for a general Markovian SDE (not only VE SDE) can be found in [5, Theorem 2.12]. The authors of I3SB in Theorem 1 omitted that $\widehat{X}_0(X_t)$ should also depend on $X_T$, i.e., one should use $\widehat{X}_0(X_t, X_T)$ since this PF-ODE is derived for the SB with a fixed $x_T$. If we use this PF-ODE for a general case of bridge diffusion from $p(x_0) \rightarrow p(x_T)$, but with $\widehat{X}_0(X_t, X_T) \approx \widehat{X}_0(X_t)$ approximated by the unconditional model, then we will indeed obtain some ODE trajectories, but we do not have any theoretical guarantees on what this ODE will produce. **Concluding remarks**. We would be grateful if you could let us know if our explanations have been satisfactory. If so, we kindly ask that you consider increasing your rating. We are also open to discussing any other questions you may have. New reference: [5] Léonard, C. (2013). A survey of the Schrödinger problem and some of its connections with optimal transport. arXiv preprint arXiv:1308.0215.
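For background on the PF-ODE construction discussed in this thread, the standard score-based relations (textbook identities for diffusion models, not claims from the paper) can be written compactly: the probability-flow ODE drift is the half-sum of the forward and reverse SDE drifts.

```latex
% forward SDE with marginal densities p_t(x_t):
dx_t = f(x_t, t)\,dt + g(t)\,dw_t,
% reverse-time SDE sharing the same marginals p_t:
dx_t = \big[\, f(x_t, t) - g^2(t)\,\nabla_{x_t}\log p_t(x_t) \,\big]\,dt + g(t)\,d\bar{w}_t,
% probability-flow ODE: its drift is the average of the two drifts above
\frac{dx_t}{dt}
  = f(x_t, t) - \tfrac{1}{2}\,g^2(t)\,\nabla_{x_t}\log p_t(x_t)
  = \tfrac{1}{2}\Big[\underbrace{f(x_t, t)}_{\text{forward drift}}
      + \underbrace{f(x_t, t) - g^2(t)\,\nabla_{x_t}\log p_t(x_t)}_{\text{reverse drift}}\Big].
```

Constructing the PF-ODE therefore requires knowing both drifts, which is why the unconditional-bridge case discussed above needs an additional forward-time model when the forward drift is not available analytically.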
Summary: The paper introduces IBMD, an inverse bridge matching distillation method for inverse problems. The key idea is to consider bridge matching distillation as an inverse problem and convert the constrained problem into an unconstrained one using the reparameterization trick. Based on the teacher models DDBM and $I^2SB$, distilled models with IBMD achieve low FID with fewer NFEs in super-resolution, image restoration, image inpainting and image-to-image translation compared to other baselines. ## Update after rebuttal I have no concerns regarding the submission, therefore, I will maintain my original rating of 4. Claims And Evidence: 1. The proposed distillation method reduces NFEs in the inverse problem while preserving the teacher model’s performance. As far as I know, this is the first distillation approach for both unconditional and conditional inverse problems. The equations and derivations are solid. 2. It would be better to clarify the reasoning behind the statement in Section 3.2, line 240: 'The key difference in the reformulated problem is that it admits clear gradients of the generator $G_θ$.' For example, explaining that all parts are differentiable would help. Methods And Evaluation Criteria: 1. The authors follow the typical setup for inverse problems (super-resolution, image restoration, translation, and inpainting) and select appropriate teacher models (unconditional for $I^2SB$ and conditional for DDBM) along with suitable baselines. 2. One curious point is that multi-step distillation is applied to the distillation of DDBM but not to CBD and CBT. What if single-step distillation is applied to the proposed approach? How would the results change in the metric? Theoretical Claims: I have checked Proposition and Theorems 3.1–3.4, along with their derivations in the Appendix. The proofs seem correct. Experimental Designs Or Analyses: The experimental designs are valid. 
Supplementary Material: I have checked the Appendix material and the code in the supplementary material. Relation To Broader Scientific Literature: Due to the reduced NFEs achieved by the proposed method, it will be more applicable to real-world inverse problem applications. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths** 1. Treating bridge matching distillation as an inverse problem is novel. 2. The derivation for reparameterization is solid. 3. It enables one-step inference. **Weaknesses** 1. The qualitative results for inpainting are not satisfactory. Other Comments Or Suggestions: It would be better to use the same notation for 'single-step' and 'multi-step' in line 99. Questions For Authors: How much time does it take to distill each model ($I^2SB$ and DDBM)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer bxNd, thank you for your comments. Here are the answers to your questions and comments. **(1) It would be better to clarify the reasoning behind the statement in Section 3.2, line 240: 'The key difference in the reformulated problem is that it admits clear gradients of the generator $G_\theta$.' For example, explaining that all parts are differentiable would help.** Thank you for this suggestion. In the final version, we will clarify the differentiability of all parts of the final objective. **(2) One curious point is that multi-step distillation is applied to the distillation of DDBM but not to CBD and CBT. What if single-step distillation is applied to the proposed approach? How would the results change in the metric?** If we correctly understand, you asked about single-step distillation applied to the result of the multi-step distillation. Indeed, we applied multi-step distillation to the original teacher models, e.g., I2SB and DDBM, but not to the already distilled models like CBD or CBT. We did so since our method is designed for distillation of diffusion-bridge models (like I2SB and DDBM), while CBD and CBT are consistency models obtained from DDBM. The result of multi-step IBMD (ours) distillation is also not a diffusion bridge model. Due to that, we do not consider one-step distillation of multi-step distilled models. **(3) It would be better to use the same notation for 'single-step' and 'multi-step' in line 99.** Thank you for this suggestion. We will change it in the final version.
**(4) How much time does it take to distill each model (I2SB and DDBM)?** We present the training time of each model below:

| Task | Teacher | Dataset | Approximate time on 8×A100 | NFE |
|---|---|---|---|---|
| $4 \times$ super-resolution (bicubic) | I2SB | Imagenet | 40 hours | 1 |
| $4 \times$ super-resolution (pool) | I2SB | Imagenet | 40 hours | 1 |
| JPEG restoration, QF $=5$ | I2SB | Imagenet | 24 hours | 1 |
| JPEG restoration, QF $=10$ | I2SB | Imagenet | 40 hours | 1 |
| Center-inpainting ($128 × 128$) | I2SB | Imagenet | 24 hours | 4 |
| Center-inpainting ($128 × 128$) | DDBM | Imagenet | 12 hours | 4 |
| Sketch to Image | DDBM | Edges/Handbags | 40 hours | 1 |
| Sketch to Image | DDBM | Edges/Handbags | 1 hour | 2 |
| Normal to Image | DDBM | DIODE-Outdoor | 48 hours | 1 |
| Normal to Image | DDBM | DIODE-Outdoor | 7 hours | 2 |

About 75% of this training time is used to get the last 10-20% decrease of FID (e.g., the drop from 3.6 to 2.5 FID in the pooling SR setup or from 4.3 to 3.8 FID in JPEG with $QF=5$), while training for the first 25% of the time already provides a good-quality model. On Sketch-to-image and Normal-to-image in the multi-step regime with 2 NFEs, convergence appears faster than in the corresponding single-step version. We will add the approximate time used for training to Table 7 of Appendix B (the table with all hyperparameters). **Concluding remarks**. We would be grateful if you could let us know if the explanations we gave have been satisfactory in addressing your concerns and questions about our work. We are also open to discussing any other questions you may have. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I have read it carefully, along with the other reviews. I have no concerns regarding the submission, therefore, I will maintain my original rating of 4.
Summary: This paper proposes a new distillation scheme for diffusion bridge models. The main idea is to parameterize the entire formulation based on a stochastic generator G. The student must follow input-output pairs produced by G, which constrains the path of the diffusion bridge to coincide with the teacher path. By optimizing G, one can find a viable student. To make the formulation tractable, the constrained problem is reformulated into an unconstrained problem, resulting in a bilevel optimization problem that is somewhat GAN-like. The generator G becomes the resulting one-step generator, and the difference between the teacher error and the "student error" acts as a discriminator. G can also be designed in a multi-step fashion. The entire formulation can be applied to both unconditional bridge matching and conditional bridge matching. Experiments show that the proposed method provides state-of-the-art results with much fewer steps than existing methods. ## Update after rebuttal All the reviewers have rated the paper positively. The additional results provided in the rebuttal are also convincing. I maintain my original score. Claims And Evidence: The proposed formulation based on the stochastic generator G, as well as the bilevel reformulation, is sound and novel. I checked the proofs and they are correct in my opinion. Methods And Evaluation Criteria: As mentioned above, the method and the proofs are sound. The method was evaluated on popular benchmark datasets. Theoretical Claims: As mentioned above, the proofs ((a) the parametrized matching problem becomes constrained optimization (9), (b) it can be reformulated as an unconstrained bilevel optimization, and (c) it can be reparametrized based on the denoisers/samplers) are sound. Experimental Designs Or Analyses: The method was evaluated on popular benchmark datasets, and the experiments show the most important metrics (NFE, FID, CA). The proposed method shows state-of-the-art performance with much fewer sampling steps.
Supplementary Material: I focused on the proof part. I briefly checked the rest. Relation To Broader Scientific Literature: Diffusion bridge models are an important topic in diffusion models, and they can be used in many data-to-data translation problems. Providing a faster sampling method for diffusion bridge models can benefit many related areas. Essential References Not Discussed: I believe the bibliography is thorough. Other Strengths And Weaknesses: The proposed distillation technique based on the G formulation is sound and novel. This particular formulation allows for the handling of both unconstrained and constrained diffusion bridge models. One downside is that the learning procedure can be quite complicated and heavy, as also pointed out in the Discussion section. Other Comments Or Suggestions: Please see the below question. Questions For Authors: The main question I have is about the meaning of the bilevel formulation. In my understanding, the second term with \phi here acts as an "expander," which means, G can become trivial (or can collapse) without this term. Initially, I thought that \phi is the student and is the resulting model. However, after reading the whole paper, I realized that G is the final model, and even if \phi looks like a student, it is actually an auxiliary component. In other words, the proposed method trains another DBM just to help train G. (This makes the whole formulation quite heavy because now we have two additional models G and \phi. How long does it take for training?) I'd like to ask whether this interpretation is right, and I'd like to see a deeper discussion in the paper regarding the role of \phi. Currently, there is not much of a discussion about the meaning of the final formulation. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer AN8r, Thank you for your comments. Here are the answers to your questions and comments. **(1) In my understanding, the second term with \phi here acts as an "expander," which means, G can become trivial (or can collapse) without this term ... In other words, the proposed method trains another DBM just to help train $G$. I'd like to ask whether this interpretation is right, and I'd like to see a deeper discussion in the paper regarding the role of $\phi$.** Yes, this interpretation is correct. To show it more formally, note that the minimal value of the inner problem is the averaged variance of $x_0 \sim p_{\theta}(x_0|x_t, x_T)$:
$$
\min_{\phi} \mathbb{E}_{p_{\theta}(x_t, t, x_0, x_T)} \big[\lambda(t) \| \widehat{x}_0^{\phi}(x_t, t, x_T) - x_0 \|^2 \big] = \mathbb{E}_{p_{\theta}(x_t, t, x_0, x_T)} \big[\lambda(t) \| \mathbb{E}_{p_{\theta}(x_0|x_t, x_T)}[x_0] - x_0 \|^2 \big] =
$$
$$
= \mathbb{E}_{p_{\theta}(t, x_t, x_T)} \Big[ \lambda(t) \underbrace{\mathbb{E}_{p_{\theta}(x_0|x_t, x_T)} \big[ \| \mathbb{E}_{p_{\theta}(x_0|x_t, x_T)}[x_0] - x_0 \|^2 \big]}_{\text{Variance of } p_{\theta}(x_0|x_t, x_T)} \Big].
$$
For $t=T$, this is directly the variance of the generator $x_0 \sim p_{\theta}(x_0|x_T)$. Since we are maximizing this part over $\theta$, it enforces the generator to produce more diverse outputs and avoid collapsing. Following your advice, we will add this discussion on the interpretation of how the auxiliary model $\phi$ helps to train the Generator $\theta$ to the final version of the paper.
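As a quick numerical sanity check of the identity above (illustrative only, not from the paper): for a fixed conditioning pair $(x_t, x_T)$, the MSE-optimal predictor is the conditional mean, and the minimal MSE equals the conditional variance. Hypothetical Gaussian draws stand in for samples from $p_{\theta}(x_0|x_t, x_T)$:

```python
import random
import statistics

random.seed(0)
# Hypothetical draws standing in for x_0 ~ p_theta(x_0 | x_t, x_T)
# at one fixed conditioning pair (x_t, x_T).
x0 = [random.gauss(2.0, 1.5) for _ in range(10_000)]

mean = statistics.fmean(x0)
var = statistics.pvariance(x0)

def mse(c):
    # mean squared error of the constant predictor c
    return sum((c - x) ** 2 for x in x0) / len(x0)

# Scan constant predictors on a grid: the optimum sits at the sample
# mean, and the minimal MSE matches the sample variance.
best = min((c / 100 for c in range(0, 401)), key=mse)

assert abs(best - mean) < 0.02
assert abs(mse(best) - var) < 1e-3
```

This is the standard bias-variance fact the rebuttal invokes; maximizing this inner minimum over $\theta$ therefore pushes the generator toward higher output variance.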
**(2) How long does it take for training?** We present the training time of each model below: | Task | Teacher | Dataset | Approximate time on 8A100 | NFE | |---------------------------------------|---------|----------------|---------------|-----| | $4 \times$ super-resolution (bicubic) | I2SB | Imagenet | 40 hours | 1 | | $4 \times$ super-resolution (pool) | I2SB | Imagenet | 40 hours | 1 | | JPEG restoration, QF $=5$ | I2SB | Imagenet | 24 hours | 1 | | JPEG restoration, QF $=10$ | I2SB | Imagenet | 40 hours | 1 | | Center-inpainting ($128 × 128$) | I2SB | Imagenet | 24 hours | 4 | | Center-inpainting ($128 × 128$) | DDBM | Imagenet | 12 hours | 4 | | Sketch to Image | DDBM | Edges/Handbags | 40 hours | 1 | | Sketch to Image | DDBM | Edges/Handbags | 1 hour | 2 | | Normal to Image | DDBM | DIODE-Outdoor | 48 hours | 1 | | Normal to Image | DDBM | DIODE-Outdoor | 7 hours | 2 | About 75\% of this training time is used to get the last 10-20\% decrease of FID (e.g., the drop from 3.6 to 2.5 FID in the pooling SR setup or from 4.3 to 3.8 FID in JPEG with $QF=5$), while training for the first 25\% of the time already provides a good-quality model. On Sketch-to-Image and Normal-to-Image in the multistep regime with 2 NFEs, convergence appears faster than in the corresponding single-step version. We will add the approximate time used for training to Table 7 of Appendix B (the table with all hyperparameters). **Concluding remarks**. We would be grateful if you could let us know if the explanations we gave have been satisfactory in addressing your concerns and questions about our work. We are also open to discussing any other questions you may have.
Preference learning made easy: Everything should be understood through win rate
Accept (poster)
Summary: The paper first introduces the concepts of preference consistency and prevalence consistency, then proves that the only form of loss function satisfying both is a type of win rate, proposing the h-win rate. The paper then argues the benefits of the h-win rate and analyzes DPO and SFT through this lens, showing that they do not satisfy the proposed consistencies. Empirical results show a correspondence between win rate and the proposed loss. Claims And Evidence: yes. proofs look correct and empirical results indicate relevance. Methods And Evaluation Criteria: The paper mostly proposes two criteria to look at when analyzing alignment algorithms. While in the experiments the algorithm resulting from the theory does not seem to outperform existing methods, the analysis does help elucidate why DPO often does not achieve SOTA results in practice. Theoretical Claims: mostly checked prop 3.3 and skimmed the others. Looks correct. Experimental Designs Or Analyses: The experiments, while well intended, look a bit small in scale and not very strong. In particular, there are three sets of experiments. Figure 2 compares expected and actual win rates of different methods, but the datasets and the model are not clearly specified (I assume it's Pythia 2.8B and Anthropic HH, but it is unclear); Table 1 shows different h-win rate methods but does not show a clear winner, so it's hard to draw a useful conclusion there; Figure 3 shows the correspondence between train loss and win rate, but I'm curious about the sudden drop of loss in the left subfigure. In general, while the idea is interesting and useful, the claims are not well supported by the experiments; it would be helpful to conduct larger-scale experiments and show a better correspondence between win rates and train loss, which is the main benefit of the methodology part. Supplementary Material: yes, the proofs.
Relation To Broader Scientific Literature: The paper helps elucidate some of the alignment research problems in practice. Essential References Not Discussed: no. Other Strengths And Weaknesses: no Other Comments Or Suggestions: no Questions For Authors: Is it possible to extend the proposed pairwise metrics to listwise? Practitioners often have a list of N_i responses to the i-th prompt, and preference scores assigned to each of the responses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. We answer your questions (paraphrased for brevity) below: 1. [datasets and model of Figure 2?] - The reviewer is correct that the model is Pythia 2.8B. The dataset is Open Assistant. 2. [Table 1 shows different h win rate methods, but did not show a clear winner, and thus it's hard to draw a useful conclusion there] - We'd like to emphasize that the lack of a clear winner is the primary conclusion of Table 1. This result suggests that design choices for WRO which affect the optimal solution do not seem to have a first-order systematic effect on performance in practice; optimization success does though, as we show later in Figure 3. 3. [Figure 3 shows the correspondence between train loss and win rate, but I'm curious about the sudden drop of loss on the left subfigure.] - Great question. Each point on either plot in Figure 3 is a different final model from a training run of a given WRO variant. The sudden drop of loss denotes a subset of models which achieve a final loss past a certain threshold (around -0.35). These models achieve much better win rates than models which are not able to reach such a loss value, suggesting that simply being able to optimize past a certain threshold is strongly indicative of test win rate performance. 4. [The experiments, while well intended, look a bit small in scale and not very strong. It would be helpful to conduct higher scale experiments, and show better correspondence between win rates and train loss, which is the main benefit of the methodology part.] - We'd like to emphasize that the current experiments consist of 72 distinct RL fine-tuning runs on a several-billion-parameter model. Unfortunately, limited resources prevent us from significantly scaling up past the current experiments during the response period, but if the reviewer has specific suggestions, we'd be happy to incorporate them into the final version of the paper.
- It is also worth noting that the level of correspondence we are already seeing between train loss and test win rate is extremely striking, especially given the fact that different WRO variants do not necessarily even share the same scale for loss (due to the choice of $\beta$ and $h$). In other words, we would not expect, a priori, such significant rank correlations, much less a better correspondence. Instead, the fact that the correlation exists with train loss, even when it doesn't with other design choices we hypothesized a priori to be relevant, suggests that optimization success is the first-order consideration for WRO. 5. [Is it possible to extend the proposed pairwise metrics to listwise?] - Yes! In the case of preference over a set of $k$ responses, the binary preference classifier would become a $k$-class preference classifier, and the h-win rate would become a multi-way win rate over $k-1$ anchor distributions, which could optionally be the same distribution. Additional assumptions also generalize, such as Plackett-Luce in place of Bradley-Terry. We've added this discussion to the appendix. Thank you for your review. Please let us know if you have any additional questions or concerns; otherwise, we would greatly appreciate it if you would consider raising your score.
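To make the pairwise quantity being generalized concrete, here is a minimal Monte Carlo sketch of a win rate against an anchor distribution. The Bradley-Terry oracle, the scalar "reward" responses, and the Gaussian policy/anchor are illustrative assumptions for the toy, not the paper's actual setup; one plausible reading of the h-win rate is $\mathbb{E}[h(p(y \succ y'))]$, which the `h` argument below follows:

```python
import math
import random

random.seed(0)

def bt_pref(y, y_prime):
    # Bradley-Terry preference probability from scalar "rewards"
    return 1.0 / (1.0 + math.exp(-(y - y_prime)))

def win_rate(policy, anchor, pref, h=lambda p: p, n=50_000):
    """Monte Carlo estimate of E[h(p(y beats y'))] for
    y ~ policy and y' ~ anchor (h = identity gives the plain win rate)."""
    return sum(h(pref(policy(), anchor())) for _ in range(n)) / n

policy = lambda: random.gauss(1.0, 1.0)  # hypothetical trained model
anchor = lambda: random.gauss(0.0, 1.0)  # hypothetical reference model

wr = win_rate(policy, anchor, bt_pref)
assert 0.55 < wr < 0.75  # higher-reward policy wins more than half the time
```

The listwise generalization described above would replace `bt_pref` with a $k$-class classifier and average over $k-1$ anchor draws per policy sample.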
Summary: The paper examines what constitutes a "grounded" evaluation of a policy or language model's alignment with (human) preferences. Assuming the evaluation function is both preference-consistent and prevalence-consistent, meaning it is linear in the distribution of contexts, the distribution of alternatives, and the policy under evaluation, the paper establishes that the only grounded evaluation metric is the $h$-win rate. Building on this, the paper argues that maximizing the $h$-win rate (for some $h$) is a natural objective for aligning generative models. This approach offers two key benefits: (1) win rate correspondence, meaning that optimizing the objective directly improves the $h$-win rate, and (2) win rate consistency, ensuring that the optimal policy obtained from the optimization also maximizes the $h$-win rate. RLHF is one such win rate optimization (WRO) algorithm. However, methods like DPO and SFT do not optimize win rates. In particular, DPO lacks win rate correspondence. The paper then presents an empirical study to test whether theoretical insights translate to practice and whether WRO methods outperform others. Interestingly, the empirical results contradict theoretical expectations, highlighting the importance of optimization and other influencing factors. Claims And Evidence: The paper's main claims are theoretical. Given that the proposed properties of the evaluation function, namely preference-consistency and prevalence-consistency, are desirable, the paper then provides a sound theoretical foundation. Additionally, the authors are transparent about the empirical evaluations, which I appreciate. There is a subtlety in interpreting the results that I wish the authors had discussed further. Specifically, the dependence of the optimal solution on the anchor distribution is, by design, undesirable. This issue affects all $h$-win rate methods except those using BT with $h = \text{logit}$ (i.e., RLHF).
It is counterintuitive that what makes a generative model optimal depends on the anchor distribution used for comparison. Compare this to reward maximization, where the optimal policy does not depend on how the preference dataset is sampled. For further discussion, see [2] in the additional related work. Methods And Evaluation Criteria: Again, assuming that the evaluation function should be preference-consistent and prevalence-consistent, WRO makes sense. However, I wish there were better motivations for these properties. In particular, as I understand it, prevalence-consistency for the generator basically rules out non-RL methods, including all direct methods (that's my read of Eq. 2, where I don't see any way other than RL to maximize w.r.t. $\theta$). Theoretical Claims: As far as I checked, the theory is sound. I think you don't need BT to prove Prop. 4.1. I suspect any increasing symmetric $\sigma$ works here. Another suggestion is that having two anchor/competitor distributions was confusing. Since you will assume these are the same, maybe do not introduce them differently. The analysis of DPO was interesting. First, I think DPO is win rate-consistent, which could be better highlighted ([1] and [2] have similar observations, I believe). Second, this analysis made me wonder whether win rate correspondence is a big deal. Doing a little exploration, I came across this intuition: calculating the gradient of DPO's objective, we can see that DPO works to make the win rate induced by a policy get closer to the true win rate under the true reward. When these two completely match, DPO's policy internally represents the true reward, which means it is the optimal policy. However, this does not mean that we are getting closer to the optimal reward at every step. This is only my intuition, but it would help if the authors clarify/elaborate further on their intuition in Sec 5.1.
I think this will be very helpful, as readers generally don't expect an MLE estimator such as DPO to have an undesirable property. Experimental Designs Or Analyses: I appreciate the transparency of the results. I think the experiments were insightful and interesting. A few minor questions: Please further elaborate on how the three cases for $\hat{p}_l$ are obtained in Table 1. How did you conclude from experiments that SFT > DPO? My read of Fig. 2 is that DPO is superior to all methods. Another room for improvement is Fig. 1. I suspect some of the win rate correspondence violations of DPO are due to its KL-regularization. I think Fig. 6 partly confirms this, though this cannot completely explain why DPO is not win rate correspondent. Compared to Fig. 6, I think Fig. 1 is a very extreme case. I'd choose a smaller $\beta$. Also, it helps to have the same plot for logit-WRO-KL so we can see how many of these violations are due to regularization. Supplementary Material: No. Relation To Broader Scientific Literature: The paper mostly reminds me of $\psi$-PO (Azar et al.) but still has new insights. It is also very related to GPO (Tang et al.). All of these works try to understand preference optimization in its most general form. I like how this work further categorizes these methods in terms of WRO and non-WRO. Essential References Not Discussed: The paper has done a good job of covering relevant literature. When I heard "win rate," I immediately thought of two very recent works from social choice theory and alignment [1, 2]. These two also discuss how RLHF and DPO are related to win rate, or in other words, Borda count. Their setting is slightly different from the current work as they consider the possibility of different reward functions for individuals, but I thought they could still be interesting and relevant for the authors.
[1] Distributional preference learning: Understanding and accounting for hidden context in RLHF [2] Direct Alignment with Heterogeneous Preferences Other Strengths And Weaknesses: The paper is well-written. Just as a suggestion, maybe use consistent capitalization in the title and sections. Other Comments Or Suggestions: N/A Questions For Authors: Please refer to the above points. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful review! Questions (paraphrased) and responses below: 1. [Undesirable that optimal solution depends on anchor, not true for reward maximization] - This dependence is a limitation of the information contained in the distribution of pairwise comparisons itself. As mentioned in Prop 4.1, we agree that the issue can be bypassed with additional assumptions (e.g., under BT and finite rewards, optimizing any ℎ-win rate over any anchor optimizes for all ℎ-win rates over all anchors). We have added more discussion. 2. [Better motivations for properties, prevalence-consistency for generator rules out non-RL] - We want preference-consistency or else a model can be deemed good even if it generates dispreferred responses. We want prevalence-consistency wrt generator or else we are not evaluating the model’s generation behavior. Enforcing prevalence-consistency means objectives include an expectation over the model, but this could be approximated with off-policy samples and importance weighting for instance. 3. [I don’t think you need BT to prove Prop 4.1] - Good point. Prop 4.1 is meant to show what BT gives us (see 1), but we have generalized the proof. 4. [Two anchor/competitor distributions is confusing] - Could you clarify what you mean? If you mean $p_{anchor}$ vs. $p_{ref}$, we distinguish them to highlight that the anchor we are optimizing against can be different from the reference we are regularizing to (but please let us know if we are misunderstanding!) 5. [Highlight that DPO is win rate-consistent] - DPO’s optimal solution matches that of a regularized WRO objective; thus we can say that it satisfies regularized win rate consistency. We mention this in appendix A and have added a line in the main paper, but it is also the motivation for DPO, i.e., an objective whose solution matches that of RLHF. 6. [Is win-rate correspondence a big deal? Can you clarify the intuition for DPO?] 
- Failing win rate correspondence makes training and model selection difficult (i.e., why reduce the loss if it doesn't help with the goal, and how to select a checkpoint when the best loss is not the best win rate). - Intuition: DPO performs MLE of the preference classifier, where this classifier is a function of the policy model. If the loss goes down, this implicit preference classifier is getting closer to the true preference classifier (in expectation over the offline distribution). However, this does not mean that the corresponding policy model is getting better at win rate under the true preference classifier. First, the prevalence mismatch between the offline data and the model means the preference classifier could get worse for responses that are relevant under the model even as it gets closer to ground truth for the offline data. Second, DPO breaks preference consistency, as the implicit preference classifier is a function of the policy model itself and thus changes throughout training as the policy model changes. 7. [Elaborate on the three cases for $p_{\ell}$] - Oracle is a model used to label the preference data and evaluate the win rate of the trained models. BT = True is a reward model trained on the oracle-labeled preference data. BT = False is a preference classifier trained on the oracle-judge-labeled preference data without BT (lines 377-384). 8. [SFT > DPO?] - Summarizing footnote 4, we conclude SFT > DPO with respect to ease of optimization, based on works which show optimization difficulties of DPO, e.g., failure to increase the probability of the chosen response or improve the rankings. 9. [I suspect some win rate correspondence violations of DPO are due to KL, Fig 1 is extreme] - To formalize, we add a new property called regularized win rate correspondence, i.e., improvement in loss implies improvement in win rate or divergence to the reference.
This is very inclusive, as even a change that results in a large decrease in win rate will still meet regularized win rate correspondence if the divergence to the reference decreases. Any regularized WRO objective satisfies this. DPO (off- or online) does not: the loss can improve even though the win rate and divergence get worse (as DPO does not directly optimize for either). - Note that Fig 1 is less extreme than DPO, as the setting is online with infinite data. Fig 5 is even less extreme, with a uniform starting model and equally spaced rewards, and Fig 6 further decreases $\beta$ in the least extreme setting of Fig 5. These figures show that DPO still does not satisfy win rate correspondence even as we continue to make the setting more and more favorable (and at some point, smaller $\beta$ doesn't help). Even so, given that Fig 1 is quite specific, we've moved it to the appendix and have replaced it with a discussion on regularized win rate correspondence. 10. [Additional related work] - Thank you, they were a pleasure to read, and we have added them to the related work. Thank you again for your questions. If we have answered them to your satisfaction, we hope you will consider raising your score.
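For readers following the discussion of DPO's implicit preference classifier, it may help to recall the standard DPO objective (the usual form from the DPO paper, restated here for reference rather than introduced by this thread):

$$
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[\log \sigma\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right].
$$

The argument of the sigmoid is the logit of the implicit preference classifier: driving the loss down fits this classifier to the offline preference labels, which is a different target from the policy's win rate under the true preference classifier.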
Summary: This paper introduces two consistency measures to study preference models. The paper proves that the only evaluation criterion that respects both is the win-rate. This finding is generalized into the h-WinRate -- the win rate under a monotonically non-decreasing transformation $h$. This measure is used as an optimization objective, called win rate optimization (WRO), which is then used to study preference learning algorithms. The paper shows that while RLHF respects this objective (under KL-regularization), DPO and SFT do not. Empirical analysis of DPO exhibits that improving the DPO loss does not necessarily improve the win-rate. Further experiments with varying $h$, $\beta$, and preference model suggest there is no one winning setting across different benchmarks. Claims And Evidence: The paper claims that win-rate is the unique measure that should be used to evaluate preference models. This is also supported by other work in the literature, including IPO or BonBon. The paper proposes two consistency measures and shows that win-rate is the only evaluation that respects both. These claims are supported by theoretical analysis of RLHF, DPO, and SFT. The paper empirically shows that the DPO loss is not necessarily correlated with win-rate, which is suggested by the theory. But the remaining experimental results are mixed. - One main criticism is that the theory in the paper is similar to IPO, especially the optimization and analysis of preference learning. Similar to this paper, IPO uses a non-decreasing function with the win-rate. Proposition 4.1 can be proven using Eq (7) in IPO with $\tau$ going to zero. I think more empirical analysis would distinguish the paper better from related work. - Expected win-rates suggest a global ordering amongst RLHF, DPO, and SFT, with a monotonically non-decreasing relationship between win-rate and $\beta$. But empirical results suggest otherwise; DPO performs the best and RLHF is only slightly better than SFT.
While the authors suggest there might be other factors at play, such as optimization, these are not supported by any empirical evidence. - It is interesting to me that using the oracle $\hat{p}_l$ doesn't exhibit the best performance. Given that the true objective is to improve the win-rate under the oracle preferences, this is counter-intuitive. - The paper uses only the Pythia-2.8b model to train preference models as well as to judge results. I think this limits the analysis. Using the same base model for both preference learning and judging can introduce biases. It would help if other similarly sized models, such as Gemma2-2b or Qwen2-1.5b, were used as well. Methods And Evaluation Criteria: The paper studies win-rate through newly-proposed measures and introduces a new objective for preference learning. It compares different methods under win-rate, which is the main metric of study. There are two datasets that are commonly studied in the literature. Theoretical Claims: I checked the correctness of proofs in the main text. Experimental Designs Or Analyses: As I explained above, experimental results are limited. I think using other similarly sized LLMs and analyzing the results in more detail would help the paper. Supplementary Material: I reviewed the appendix. In particular, proofs of theorems in the main text, additional figures and related text, and experimental details. Relation To Broader Scientific Literature: The paper is broadly relevant to preference learning. Essential References Not Discussed: Related work sufficiently covers the literature. Other Strengths And Weaknesses: I think Proposition 3.3 and Definition 3.2 can be practically useful to check whether a preference learning method respects the win-rate objective. The alternative, comparing the objective directly to the win-rate objective, can be more challenging. Other Comments Or Suggestions: One minor comment is about the notation. Functional composition in Section 4.2 is different from the rest.
Please use a consistent notation. Questions For Authors: Please see above respective sections for related questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your review. We answer your questions (paraphrased for brevity) below: 1. [Theory in the paper is similar to IPO. Proposition 4.1 can be proven using equation 7 in IPO with $\tau$ going to zero. More empirical analysis can distinguish the paper.] - We agree that one can use equation 7 in the IPO paper to gain intuition for Proposition 4.1, but equation 7 in the IPO paper is the target distribution specifically for KL-regularized WRO objectives and does not include the additional analysis required to take the limit of $\tau$ to zero (e.g., proving the existence of a limit, and the connection between the limit of the solution and the solution of the limit) nor the specific implication of the Bradley-Terry assumption. In contrast, Proposition 4.1 in this work is meant to emphasize a consequence of the Bradley-Terry assumption on the dependence of WRO on the choice of anchor distribution: namely, it proves that under the Bradley-Terry assumption and finite rewards, the resulting solution optimizes win rate over all possible anchors regardless of the choice of anchor in the objective. In other words, the Bradley-Terry assumption effectively gives us the ability to optimize over all anchors, whereas without that assumption we choose the anchor. We have updated the text to emphasize this point. 2. [Expected versus observed win rates; the authors suggest there might be other factors at play, such as optimization etc., but these are not supported by any empirical evidence.] - Section 6.3 / Figure 3 provides empirical evidence of optimization being an important factor for the misalignment between expected versus observed win rate patterns. Namely, the train loss achieved by a given run is significantly predictive of the test win rate, even as other factors which dictate the target distribution of the objective are not (Table 1). This result is especially striking given that the losses for different objectives do not necessarily share the same scale. 3.
[It is interesting to me that using oracle $\hat{p}_\ell$ doesn't exhibit the best performance] - We agree. This result suggests that there is a more important first-order factor; Figure 3 (a scatterplot of train loss versus test win rate for selected models across different WRO runs) and the corresponding significance test suggest that this factor is optimization success. 4. [The paper uses only Pythia-2.8b model to train preference models as well as to judge results. I think this limits the analysis. Using the same base model for both preference learning and judging can introduce biases. It would help if other similar sized models, such as Gemma2-2b, Qwen2-1.5b, are used as well.] - Due to resource constraints, we are unfortunately unable to run the same experiments on additional models at this time (it requires 7 offline finetuning jobs and 36 online finetuning jobs for just a single dataset with no hyperparameter sweep), but we take the reviewer's point that it would be helpful to add more models to the final paper and are working to do so. And just to clarify, we are using separately trained models for the judge and policy model even if they are based on the same base model. 5. [I think proposition 3.3 and definition 3.2 can be practically useful to check whether a preference learning method respects the win-rate objective. The alternative, comparing the objective directly to win-rate objective, can be more challenging.] - Thank you for highlighting the practical usefulness of Proposition 3.3 and Definition 3.2. We agree! 6. [Section 4.2 notation] - Thanks for the feedback, fixed! We hope we have addressed your concerns, and we hope you will consider raising your score. Thank you! --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. 1.
Regarding Eq 7 in the IPO paper, it is under the assumption of a BT model: "Applying this proposition to the objective function of Equation (6), for which there exists an analytical solution, reveals that under the BT assumption the closed form solution to DPO and RLHF can be written as". For $\tau$, I agree with you; it seems the authors of IPO assumed it is strictly positive for their analysis. 2. For Figure 3 to support your claim, worse training loss should indicate almost an ordering amongst different methods, like: SFT > RLHF > DPO. Can you add colors to Figure 3 to understand model-based correlations? Can you explain if these losses are comparable across models? --- Reply to Comment 1.1.1: Comment: Thanks for the comments! 1. We agree that the IPO paper mentions the BT assumption to connect Eq 7 with RLHF & DPO. The additional implication of the Bradley-Terry assumption highlighted in Proposition 4.1 of our work is that under the assumption and finite rewards, all WRO methods share the same optimal solution, which is also optimal over all anchor distributions. 2. Thanks for the question. To clarify, Figure 3 only plots WRO runs (which include RLHF), not any SFT or DPO runs. The figure is meant to showcase that optimization success is a first-order consideration for WRO. We would not expect SFT & DPO losses to be directly comparable to each other or to RLHF (for instance, neither SFT nor DPO satisfies win rate correspondence); in fact, we would not necessarily expect WRO losses across different values of $h$ or $\beta$ to be comparable either, making the fact that there is a trend between loss and win rate across these runs all the more notable. See [here](https://imgur.com/a/Ntu6Her) for Figure 3 stratified into the different choices of $h$, $\beta$, $\hat{p}_\ell$.
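A one-line sketch of the anchor-independence point, under the assumption that the $h$-win rate has the form $\mathbb{E}_{y \sim \pi_\theta,\, y' \sim q}[h(p_\ell(y \succ y'))]$ and that preferences are Bradley-Terry, $p_\ell(y \succ y') = \sigma(r(y) - r(y'))$: taking $h = \mathrm{logit} = \sigma^{-1}$ gives

$$
\mathbb{E}_{y \sim \pi_\theta,\; y' \sim q}\big[\mathrm{logit}\,\sigma(r(y) - r(y'))\big] = \mathbb{E}_{y \sim \pi_\theta}[r(y)] - \mathbb{E}_{y' \sim q}[r(y')],
$$

so, assuming finite rewards, the objective differs from expected reward only by a constant in $\pi_\theta$, and its maximizer does not depend on the anchor $q$.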
Summary: The paper argues that win rate should be the primary evaluation metric in preference learning, as it is the only measure that respects both preferences and prevalences in pairwise comparison data. The authors introduce a win rate-centric framework and classify preference learning methods into WRO and non-WRO approaches. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: The authors' analysis is well-reasoned; however, I believe their theoretical framework relies on overly idealized assumptions, which diminishes the practical significance of their conclusions. I agree that win rate is the metric most aligned with preferences, but the analysis overlooks key challenges: 1. In-Distribution Assumption: The authors assume that training and evaluation data are drawn from the same distribution. In practice, training data cannot fully cover real-world applications, necessitating regularization to improve OOD generalization. 2. Unreliable Preference Models: The preference model may be biased or unreliable, which is why additional optimization objectives, such as length regularization, are incorporated into preference optimization. While these objectives break WRO, they often lead to better empirical results. Given these limitations, I disagree with the claim that "everything should be understood through win rate," and WRO methods, as acknowledged by the authors, do not necessarily outperform non-WRO ones. While there are some theoretical insights, the conclusions provide limited practical guidance for real-world preference learning. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the review. Responding to your concerns below: 1. [Overly idealized assumptions (in-distribution assumption, unreliable preference models) diminish practical significance of conclusion] - Our framework is not focused on idealized assumptions per se but rather on what can be learned from the preference data distribution alone, specifically to clarify the role of assumptions vs. information inherent in preference data itself. In fact, our analysis is meant to make it easier to consider challenges such as those posed by the reviewer: Namely, our work clarifies what is optimal when nothing else is considered besides the information contained in preference data alone. This understanding of what is in preference data makes it possible to disentangle anything extra in a given method as an additional assumption or strategy. We are not arguing that anything extra is bad if the resulting objective is not pure WRO; instead, win rate optimization should be seen as the starting point, with any addition or modification understood modularly based on the underlying assumption being encoded. - To drive that point home, we've added the following lines to the discussion: - …this work offers a simplifying insight: win rate is the only evaluation that can matter based on preference data alone, and thus all of preference learning should be understood in relation to it—both how well a given method optimizes for win rate as well as the role of additional assumptions that move a method beyond pure WRO. 2. [Limited practical guidance for real-world preference learning] - We respectfully disagree; some immediately implementable practical guidance that comes out of our analysis includes 1. given how important and finicky optimization success is for WRO objectives (including RLHF), it could be helpful to kick off multiple seeds to find one that optimizes best; 2. 
given that the DPO loss fails win rate correspondence on multiple fronts, one should consider performing model selection with a metric other than validation loss; 3. greater generation diversity as well as alternative filtering strategies can improve the win rate limits of supervised fine-tuning on preferred samples; 4. studying optimization strategies (e.g., contemporaneous paper [1]) would be a high-leverage direction of inquiry, given the importance of optimization success in the practical performance of preference learning algorithms. Based on our response, we hope you will consider raising your score. Thank you! [1] https://arxiv.org/pdf/2503.14286
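For concreteness, the win rate at the center of this discussion can be estimated by Monte Carlo under a preference model. The sketch below is ours, not from the paper: the function names and the simplification to scalar rewards with a Bradley-Terry preference are illustrative assumptions.

```python
import numpy as np

def win_rate(model_rewards, ref_rewards, pref):
    """Monte-Carlo estimate of the probability that a sample from the model
    is preferred over a paired sample from the reference policy.

    Illustrative sketch; `pref(a, b)` returns the probability that a sample
    with reward `a` is preferred to one with reward `b`.
    """
    return float(np.mean([pref(a, b) for a, b in zip(model_rewards, ref_rewards)]))

# A Bradley-Terry preference model over scalar rewards (our toy assumption)
def bradley_terry(a, b):
    return 1.0 / (1.0 + np.exp(b - a))
```

Under this toy model, equal rewards give a win rate of 0.5, and any uniform improvement in the model's rewards pushes it above 0.5.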
UnHiPPO: Uncertainty-aware Initialization for State Space Models
Accept (poster)
Summary: This paper studies the HiPPO framework with noisy data. While the original HiPPO(-LegS) framework is based on projecting a function onto Legendre polynomials, it assumes that the input function is noise-free. The paper proposes an alternative way of formulating the HiPPO framework, called UnHiPPO, which is based on modeling the posterior of the states given the noisy measurements. The UnHiPPO framework introduces a new hyperparameter $\sigma$ that is related to the noise level and ablation studies have been carried out to show its necessity. ## update after rebuttal I thank the author(s) for their response and would maintain my score. Claims And Evidence: I find the claims made in the submission clear and convincing, modulo the first point I raised in the weaknesses section. Methods And Evaluation Criteria: The proposed method makes sense for the problem of interest. Theoretical Claims: I checked the derivations of the formulas in the main text. I did not check the mathematics in the supplementary material. Experimental Designs Or Analyses: I agree with the soundness of the experimental designs. I do have some questions about the analyses, which I raise in the questions section. Supplementary Material: I scanned through the entire supplementary material without carefully checking its correctness. Relation To Broader Scientific Literature: This paper studies the noise handling of a specific initialization scheme of LSSL/SSM. Both the objective (making the model noise-aware) and the class of models are of interest to many sequential and time series problems. Essential References Not Discussed: NA Other Strengths And Weaknesses: I am ambivalent about my recommendation of this manuscript. 
On the one hand, the derivation of the UnHiPPO framework in this paper is beautiful and principled; on the other hand, I am not totally persuaded by the significance of the contribution to the application of LSSLs/SSMs, as there is a gap between the theory of the (Un)HiPPO framework and its practical success in such sequence models (as outlined in the questions section). In the end, I vote for weak acceptance, keeping in mind that such a gap is not an intrinsic flaw of the manuscript itself and that the UnHiPPO framework is potentially useful in fields other than LSSLs/SSMs or even deep learning. Strengths: * Noise-aware modeling is an important topic and has not been considered in the HiPPO literature. * The derivation of the UnHiPPO framework in this paper is beautiful and principled. Weaknesses: * While the UnHiPPO framework is derived from mathematical principles, it lacks a theoretical comparison of how UnHiPPO and HiPPO handle noise. That is, I would like to see a clear and convincing theorem that shows UnHiPPO is more robust to noise than HiPPO. * The UnHiPPO state transition matrix $\mathbf{A}$ lacks a simplified representation (e.g., diagonal-plus-low-rank or diagonal). This makes it harder to apply this initialization in the more efficient S4/S4D/S5 models. * The presentation of the UnHiPPO framework can potentially be improved. See the comments section. Other Comments Or Suggestions: 1. I suggest clearly stating the issue with a noisy function $f$ and the objective of designing a noise-aware system before starting to derive it. 2. The manuscript is very well-written and easy to follow up to the end of section 3. In section 4, the discussion is a bit confusing and required several reads for me to figure out the core idea. That is, (7)-(8) requires more explanation of how it connects to (4). 
The UnHiPPO systems are not derived in the way an ML researcher would typically imagine, where one seeks the parameters of an autoregressive system with given inputs. The idea is more of "input matching given the guessed states." This can be made more straightforward at the beginning of section 4 to avoid confusion. 3. I cannot understand the following sentence in section 4: "In contrast to what one might expect, the observations $y_{k_t}$ do not exist in HiPPO and, instead, the signal $f(t)$ corresponds to the control signal $u_t$." Please consider rephrasing or further explaining it. 4. At the end of section 2, I suggest expanding "best possible compression of $f$" to "best possible compression of $f$ in the $L^2$ space" to make the statement more precise. Questions For Authors: 1. My main question is about the application of HiPPO in LSSL. As mentioned earlier, I think there is a gap between theory and practice. The following subquestions may be useful to consider: 1. In Figure 7, you showed an ablation study demonstrating that changing $\sigma^2$ can vary the performance of the model. If you look at the model more carefully, then only a small proportion of $\sigma^2$ values leads to better performance than the model initialized by HiPPO. This made me wonder: is it the noise-aware mechanism or something else that determines the performance of the model as the hyperparameter varies? By "something else", I mean things like the magnitudes of the eigenvalues of the discrete-time matrix $\mathbf{A}$ that control the system's memory (e.g., [1] and [2]), the imaginary parts of the eigenvalues of the continuous-time matrix $\mathbf{A}$ that control the frequency bias and the approximation-estimation tradeoff (e.g., [3] and [4]), and the stability of the parameterization of the system (e.g., [5]). Can the authors show more empirical studies of whether the noise-awareness is really *the* thing? 
For example, it is useful to show Figure 7 multiple times given different $\rho^2$ and see how the optimal $\sigma^2$ changes. 2. As mentioned in the manuscript, what is used in theory (a time-varying system with a $1/t$ scaling) is different from what is used in practice (a time-invariant system that drops $1/t$). How does the theory account for this change and what is the noise-awareness of the time-invariant system? 3. HiPPO and UnHiPPO are only initializations. You have to train the systems eventually. How stable is UnHiPPO when trained? 2. On page 6, you mentioned that "An unfortunate side effect is that $\sigma^2$ cannot be interpreted as the noise variance of the data directly." While I understand that one does not have $\sigma^2 = \rho^2$, I wonder if there is a connection between them. That is, if I know the noise level in the input, is there a principled way for me to select or scale the hyperparameter $\sigma^2$? 3. Can UnHiPPO be thought of as a regularization scheme, where $\sigma^2$ controls the magnitude of regularization? This seems pretty clear in Figure 3-4. I wonder if some theory can be derived in this direction. 4. The UnHiPPO discussed in this paper is only for HiPPO-LegS, while there are also many variants of HiPPO (e.g. HiPPO-LegT). Can the work in this paper be extended to those? To be clear, I do not look for detailed derivations of each of them. I only wonder if the analysis in this paper is generic or ad hoc to HiPPO-LegS. [1] Antonio Orvieto et al., Resurrecting recurrent neural networks for long sequences, International Conference on Machine Learning, 2023. [2] Naman Agarwal et al., Spectral state space models, arXiv preprint arXiv:2312.06837, 2023. [3] Annan Yu et al., Tuning frequency bias of state space models, International Conference on Learning Representations, 2025. 
[4] Fusheng Liu and Qianxiao Li, Autocorrelation Matters: Understanding the Role of Initialization Schemes for State Space Models, International Conference on Learning Representations, 2025. [5] Shida Wang and Qianxiao Li, StableSSM: Alleviating the Curse of Memory in State-Space Models through Stable Reparameterization, International Conference on Machine Learning, 2024. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review and careful reading. **Phrasing** We will clarify Section 4 based on your feedback. The phrase "observations do not exist in HiPPO" refers to the fact that the data does not take the role of an observation in HiPPO, but rather of a control signal. We will update the section to make this clear. We have also adopted your clarification "best possible compression of $f$ in the $L^2$ space". **Effect of noise level** https://figshare.com/s/132436b00e91612513fd We ran two experiments similar to Figure 7, but this time the noise level $\rho^2$ is set to a lower and a higher value. The best results were obtained for $\sigma^2 = 10^{10}$. However, as we increase the noise level $\rho^2$, we notice that higher $\sigma^2$ values perform comparatively well, as opposed to what happens in Figure 7, when $\sigma^2 = 10^{14}$. **What is the effect of ignoring the time-variance of the true dynamics?** We examined the effect of this in Figure 5 and the surrounding text in the paragraph "Time-invariant Dynamics" empirically, but we did not analyze it theoretically. **How can we select $\sigma^2$?** At the moment, our best strategy for choosing $\sigma^2$ is empirical. The reason that $\sigma^2$ cannot be chosen as the noise in the data comes down to $\Sigma = I$ in the Kalman filter. Because this adds uncertainty to all degrees of the polynomial representation at each step, it also increases the uncertainty in the high-degree components. Even though the dynamics are regularized, the highest degrees still grow quickly, so $\sigma^2$ needs to be of a similar order of magnitude as $B_H^T P_k^{-}B_H$ in $s_k$ to have an effect. Choosing the diagonal of $\Sigma$ to fall quickly should be able to counteract this effect, though it is unclear how quickly it would need to fall exactly. 
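The interplay between $\Sigma$ and $\sigma^2$ described above can be made concrete with a generic one-step Kalman filter. The sketch below is ours (shapes, names, and the scalar-observation setup are illustrative assumptions, not the paper's exact UnHiPPO system); it shows where $\Sigma = I$ enters the predict step and why the observation-noise scale must be comparable to the projected prior variance to matter.

```python
import numpy as np

def kalman_step(x, P, y, A, H, sigma2, Sigma):
    """One predict/update step of a Kalman filter with a scalar observation.

    Generic illustrative sketch. With process noise Sigma = I, the predict
    step inflates the prior variance of *every* state component, which is
    why sigma2 must be of a comparable order of magnitude to H @ P_pred @ H
    to influence the update.
    """
    # Predict: propagate mean and covariance, adding process noise Sigma
    x_pred = A @ x
    P_pred = A @ P @ A.T + Sigma
    # Update with scalar observation y = H @ x + noise (variance sigma2)
    S = H @ P_pred @ H + sigma2          # innovation variance
    K = P_pred @ H / S                   # Kalman gain
    x_new = x_pred + K * (y - H @ x_pred)
    P_new = (np.eye(len(x)) - np.outer(K, H)) @ P_pred
    return x_new, P_new
```

If `sigma2` is tiny relative to `H @ P_pred @ H`, the gain saturates and the filter trusts each observation almost completely, mirroring the rebuttal's point that $\sigma^2$ cannot be read off directly as the data noise variance.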
**Can UnHiPPO be thought of as a regularization scheme, where $\sigma^2$ controls the magnitude of regularization?** Yes, as we have shown in Figure 4, $\sigma^2$ directly controls how sensitive the system is to high-frequency components / noise in the data. **Can UnHiPPO be extended to HiPPO variants other than LegS?** We decided to focus on the LegS variant of HiPPO because it is the most relevant one in the literature. Extending the approach to other polynomial bases should be possible, because we only use properties of Legendre polynomials explicitly in Section 4.1 to derive the regularized HiPPO matrix. The same derivation should be theoretically possible in other bases, though it might either turn out to be unnecessary if the basis behaves well under extrapolation or fail to produce a closed form of $Q_i$.
Summary: The paper extends HiPPO by incorporating uncertainty-awareness, enhancing the robustness of state space models (SSMs) to noise. - The study is limited to SC10; testing on additional datasets and tasks would improve generalizability, a limitation acknowledged in the paper. - The choice of regularization method is not thoroughly analyzed, though the proposed approach is discussed in detail. Exploring alternative strategies would strengthen the contribution. - The $\sigma^2$ hyperparameter is crucial but lacks a systematic selection method across different datasets, despite some analysis of its impact. - Claims of negligible runtime cost are supported by experiments, but further evaluation on larger datasets would provide a clearer comparison with standard HiPPO. - No structured formulation for the UnHiPPO matrix is provided, which may impact computational efficiency—an issue noted by the authors. - The closed-form solution is preferred over the trapezoidal rule, and while its instability in UnHiPPO is noted, further explanation could clarify this choice. - More analysis on the impact of different $\Sigma$ structures on performance would be beneficial. - Empirical comparisons with existing methods would help contextualize UnHiPPO’s advantages and limitations beyond theoretical discussion. Claims And Evidence: See summary Methods And Evaluation Criteria: See summary Theoretical Claims: See summary Experimental Designs Or Analyses: See summary Supplementary Material: See summary Relation To Broader Scientific Literature: See summary Essential References Not Discussed: See summary Other Strengths And Weaknesses: See summary Other Comments Or Suggestions: See summary Questions For Authors: See summary Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. Please see Figure 9 in Appendix B for a visualization of the effect of different discretizations. It demonstrates the instability of some methods in both HiPPO and UnHiPPO and in particular the remarkable stability of the closed-form solution.
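The kind of discretization instability referred to above can be reproduced on a scalar toy system. The sketch below is our illustration of the general phenomenon (not the paper's Figure 9): for a stiff, fast-decaying mode, forward Euler blows up, the trapezoidal rule stays stable but oscillates, and only the closed-form solution matches the true decay.

```python
import numpy as np

# Toy scalar system x'(t) = a * x(t) with a stiff (fast-decaying) mode.
# Compare the one-step growth factor of three discretizations.
a, dt = -100.0, 0.1

exact = np.exp(a * dt)                             # closed-form solution
euler = 1 + a * dt                                 # forward Euler
trapezoidal = (1 + a * dt / 2) / (1 - a * dt / 2)  # trapezoidal / bilinear

assert abs(exact) < 1        # stable and accurate
assert abs(trapezoidal) < 1  # stable, but negative: oscillates each step
assert abs(euler) > 1        # unstable: |1 - 10| = 9 grows every step
```

The same contrast carries over to matrix-valued dynamics, where stiff eigenvalues of the transition matrix play the role of `a`.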
Summary: The paper proposes to extend high-order polynomial projection operators (HiPPO) that are used to initialise the dynamics of recent state space models. HiPPO theory is agnostic to measurement noise. The paper extends HiPPO operators to capture uncertainty arising from measurement noise. Specifically, the paper proposes to infer the posterior pdf of the HiPPO coefficients conditional on the noisy observations. The parameters of the pdf are estimated using a Kalman filter. The updated Kalman state (i.e., the mean of the posterior) itself is a first-order difference equation, where the transition of states from time step t-1 to t is captured by a transition matrix, and where the excitation is projected into state-space by an input vector. The transition matrix and input vector are therefore synonymous to the HiPPO matrices, but – owing to their Bayesian formulation – capture uncertainty in the measurement noise. The paper embeds the proposed matrices within LSSL (Gu & Dao 2021). The results of the resulting uncertainty-aware LSSL are compared against LSSL for a 10-class subset of the Speech Commands dataset. Results indicate improvements of up to approximately 1.5 percentage points of the proposed model over the baseline. ## Update after rebuttal: I thank the authors for their rebuttal. I particularly appreciate their effort in providing new results for different datasets to address my concerns. In my opinion, the idea of capturing uncertainty to improve robustness to noise is interesting and very promising. Unfortunately, similar to the results in the paper, the new results provided during the rebuttal also demonstrate only small improvements (< ~3%) compared to the LSSL baseline. In my opinion, more convincing results are required to evidence the paper's claim, i.e., that the proposed initialisation improves robustness against noise. I therefore maintain my score. 
Again, I think this is a promising approach and I would encourage the authors to undertake a more thorough experimental investigation with the aim of identifying scenarios in which the proposed approach clearly outperforms LSSL. Claims And Evidence: The main claim of the paper is that the performance and noise robustness of models initialised with HiPPO can be improved by explicitly capturing uncertainty in the measurement noise. The mathematical formulation is presented in a clear and concise manner. Convincing examples are provided in Section 5 to illustrate the benefits of the proposed method. Methods And Evaluation Criteria: In my opinion, the proposed method provides an elegant and simple approach to embed measurement uncertainty in HiPPO. The dataset that was selected is appropriate to validate the proposed methodology, given the inherent challenges arising from speech in noise. While the results presented in the paper are sufficient for *validation*, the results showcase only small improvements in accuracy. To help identify the benefits (and potential limitations) of the proposed model, I would have expected a more thorough study involving different tasks and datasets for *evaluation*. For example, most SSMs since LSSL are evaluated on the LRA benchmark. Theoretical Claims: See “Claims and Evidence” Experimental Designs Or Analyses: In general, the experimental design provides the necessary information about the models to reproduce the studies. However, there is some information missing about the distortion of the speech signals: 1) Based on the information provided in Section 6, I gather that the authors added noise to Speech Commands to investigate performance under varying conditions. Is this correct? If so, my main concern is that Speech Commands was recorded in varying acoustic conditions, i.e., the clips already include background noise at varying noise levels. 
The results in Figure 6 seem to indicate that LSSL actually performs equally well as UnLSSL at low noise levels (i.e., when no additional noise was added to the noisy Speech Commands utterances). This seems contradictory to the main claim of the paper. 2) It is unclear how noise was added. Were the signals normalised to a particular level prior to adding noise? What is the target SNR? How is this handled considering that clips often contain long periods of silence / background noise with very short speech utterances? 3) It is unclear what type of noise was added. The type of noise will have a significant impact on the results. For example, white noise is relatively easy to remove. However, realistic noise sources are rarely white. Have the signals been distorted with realistic noise sources, such as speech-like noise or music? Which noise dataset was used? Provided that my understanding of the noise experiments is correct, I recommend repeating the experiments with a dataset that is recorded in anechoic or studio conditions (e.g., TIMIT, VCTK). It would also be possible to estimate the noise levels in Speech Commands from periods of speech inactivity. Supplementary Material: Supplementary Material contains code Relation To Broader Scientific Literature: A timely paper that fits well within recent work on the initialisation of SSMs, including, e.g., [1] below. In my opinion, the novel contribution of the paper is the extension of HiPPO to a framework that embeds uncertainty. [1] Liu & Li, “Autocorrelation Matters: Understanding the Role of Initialization Schemes for State Space Models”, ICLR 2025 Essential References Not Discussed: N/A Other Strengths And Weaknesses: *Strengths:* This is a very well-written paper that provides clear motivation and justification for assumptions, coherent and linear explanations, and helpful illustrative examples. I particularly appreciated the concise summary of HiPPO in Section 3. 
*Weaknesses:* The Figures in Section 6 provide nice visualisations of the general trend of accuracy and error curves. However, considering the small differences in performance between UnLSSL and LSSL, Tables are required to provide a precise comparison in performance. For example, in Figure 7 at \sigma^2 = 10^{10}, I struggle to determine if the difference in accuracy is 1 percentage point, more or less. Other Comments Or Suggestions: N/A Questions For Authors: 1. P. 6, paragraph following (29): “In contrast to LSSL where we can get the discretized dynamics at any t directly, for UnLSSL we compute them for all integer steps t ∈ [tmax] and then select a subset.” – I assume that this is necessary since the uncertainty-aware transition matrix and input vector are obtained from the discrete-time update equation in (24). Please can you clarify this point? 2. Same paragraph: “Instead, we also compute all intermediate steps, which mirrors the more realistic setting where we also observe data at 1, 2, . . . , t − 1.” – Is the initialization of UnLSSL therefore data dependent? 3. How were the noise experiments conducted? What type of noise was added? At what SNR? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. **How was the noise added? How were the signals normalized?** We follow the LSSL implementation and the code from the authors' repository. After the audio files are loaded, signals are divided by 32k to be normalized (see [here](https://github.com/state-spaces/s4/blob/e757cef57d89e448c413de7325ed5601aceaac13/src/dataloaders/datasets/sc.py#L407)). After the noise is added, we apply z-score normalization. **What type of noise was added?** The two noise sources in equations (7) and (8), namely $\mathrm{d}\mathbf{\beta}$ and $\varepsilon_{t_k}$, are Gaussian. In line with the theory, we sampled Gaussian noise with 0 mean and fixed variance. We have not experimented with other noise sources. **Computation of parameters**: > P. 6, paragraph following (29): “In contrast to LSSL where we can get the discretized dynamics at any t directly, for UnLSSL we compute them for all integer steps $t ∈ [t_{max}]$ and then select a subset.” – I assume that this is necessary since the uncertainty-aware transition matrix and input vector are obtained from the discrete-time update equation in (24). Please can you clarify this point? That is correct. In the end, the parameters are tied to Kalman filtering in Equation (23), which is a linear dynamical system (LDS) that evolves with the transition matrix computed for each time step. We briefly mentioned this in the next sentence: "In theory, we could jump to any $t$ directly in the Kalman update in Equation (23), but that would increase the uncertainty as if there was no data before, changing the dynamics." > Same paragraph: “Instead, we also compute all intermediate steps, which mirrors the more realistic setting where we also observe data at 1, 2, . . . , t − 1.” – Is the initialization of UnLSSL therefore data dependent? No, it just means that we compute the parameters for the case where we observe data at each integer time step. 
Otherwise, the parameters would be computed as if we had not observed any data for longer stretches of time, which would increase the uncertainty estimate and the smoothing. **How were the noise experiments conducted? What type of noise was added? At what SNR?** For each audio signal, we compute the standard deviation of the signal and add random Gaussian noise scaled by a multiple of the computed standard deviation. We used 10 different values for this multiple: 0.0, 1e-7, 1e-6, 3.16e-6, 1e-5, 1.77e-5, 3.16e-5, 5.62e-5, 1e-4. We observed that values larger than 1e-4 suppress most of the audio signals in SC10 and leave nothing but noise for this dataset. Since we add the noise based on the standard deviation rather than on amplitudes, we do not have a constant SNR value. For example, an audio signal with constant value would never be affected by noise even when the noise multiple is non-zero, as the standard deviation of the signal itself is 0. We ran the LSSL model with HiPPO and UnHiPPO initialization on the same data and report averages over three different seeds. The complete configuration for the noise experiments is available in the supplementary material in `config/experiment/sc-raw-noise.yaml`. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I understand that the evaluation is based on an existing paper. However, this does not resolve my concerns regarding the experimental setup. I am concerned that the results are inconclusive considering that white Gaussian noise is added to signals that are already distorted by varying levels of background noise. --- Reply to Comment 1.1.1: Comment: Thank you for your response. Based on your initial review, we evaluated our model on [RWCP-SSD](https://openslr.org/13/), a dataset of non-speech, dry sounds recorded in a professional anechoic studio with a reported signal-to-noise ratio of 50dB. We used two subsets of the data for two separate classification tasks. 
On the first subset of about 3500 recordings, we try to detect the material of colliding objects (wood, metal, plastic, or ceramic). The second subset of about 4200 recordings contains characteristic sounds of various objects, e.g. metal articles (coin, bell), paper (tearing, dropping book), musical instruments (drum, bugle), electronic sound (phone, toy), mechanical sound (spring, stapler), which we try to distinguish. https://figshare.com/s/132436b00e91612513fd As before, we normalize the data, add white noise w.r.t. the standard deviation of the audio signal, and apply z-score normalization. We ran the same noise experiment that we showed in Figure 6 and 8 in the paper. The results show that the UnHiPPO initialization also improves the robustness to noise on these two new datasets.
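The noise pipeline described in this thread (normalize the raw waveform, add Gaussian noise scaled by the per-signal standard deviation, then z-score normalize) could be sketched as follows. This is our reconstruction: the function name and the 32768 int16 constant (for the "divided by 32k" step) are assumptions.

```python
import numpy as np

def preprocess_with_noise(signal, noise_scale, rng):
    """Reconstruction of the rebuttal's noise pipeline (names are ours):
    1) normalize the raw waveform ("divided by 32k" in the LSSL loader),
    2) add Gaussian noise scaled by the per-signal standard deviation,
       so there is no fixed SNR across clips,
    3) apply z-score normalization.
    """
    x = np.asarray(signal, dtype=float) / 32768.0
    x = x + rng.standard_normal(x.shape) * noise_scale * x.std()
    return (x - x.mean()) / x.std()
```

As the rebuttal notes, a constant signal has zero standard deviation, so step 2 would add no noise to it regardless of `noise_scale` (and the final z-score would be undefined for such a signal).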
Summary: This paper investigates state space models (SSMs) through the lens of linear stochastic control theory and proposes a novel initialization method to enhance robustness against input noise. The authors first reformulate the linear recurrence in SSMs as a homogeneous linear dynamical system with noise, replacing the input signals with their closed-form reconstructions. Next, they derive a regularized HiPPO formulation by enforcing the online approximator to extrapolate linearly and maintain a consistent time derivative with the closed-form approximation at the boundary. Building on this new dynamic system, the authors compute the posterior mean estimate under Gaussian noise, which ultimately yields an improved initialization for the state transition matrix in SSMs. Claims And Evidence: Most claims in the paper are clear and well-supported. However, I have some reservations regarding certain steps in the derivation of the UnHiPPO initialization: To obtain data-free dynamics, the authors substitute $f(t)$ with $\hat{f}_{\le t}(t)$. While this choice eases the derivation, it introduces a discrepancy between the dynamics used for initialization and those used during actual signal processing. Further justification would strengthen this step. Additionally, as far as I understand, models such as LSSL and S4 do not learn time-dependent matrices $A$ and $B$ directly. Instead, they learn static matrices $A$ and $B$ which are then converted into time-varying forms during training or inference. In contrast, the proposed initialization appears to generate a separate $A_k$ and $B_k$ for each timestep. It remains unclear whether the proposed method requires time-wise parameterization in practice, and if not, how the initialization is reconciled with the time-independent parameterization commonly used in these models. Methods And Evaluation Criteria: The primary goal of this paper is to enhance the robustness of state space models (SSMs) against input noise. 
The proposed method is an initialization scheme that provides a posterior estimate of the signal under Gaussian noise at the initialization stage. The derivation appears rigorous and well-grounded. However, when integrated into an SSM, the scheme seems to instantiate time-dependent parameters $A_k$ and $B_k$, which may diverge from the common practice of using time-invariant parameters $A$ and $B$ that are converted to $A_k$ and $B_k$ on the fly during training or inference. The experiments focus primarily on evaluating the robustness of the proposed approach under noise perturbations, which is well-aligned with the stated objective. Theoretical Claims: All derivations look correct to me. Experimental Designs Or Analyses: The experimental setup feels somewhat limited. Unlike many recent works on SSMs that evaluate on the Long Range Arena benchmark [1], this paper validates the proposed approach on a relatively small-scale dataset. Additionally, the comparisons are restricted to LSSL with HiPPO initialization, without including other competitive baselines. It would strengthen the empirical evaluation to include comparisons with more recent and widely adopted models such as S4 [2], DSS [3], or other state-of-the-art SSM variants. [1] Tay et al., Long Range Arena: A Benchmark for Efficient Transformers [2] Gu et al., Efficiently Modeling Long Sequences with Structured State Spaces [3] Gupta et al., Diagonal State Spaces are as Effective as Structured State Spaces Supplementary Material: The supplementary materials were not reviewed in detail. However, based on the structure of the main paper, it appears that no critical components of the proposed method are deferred to the appendix. Relation To Broader Scientific Literature: The proposed method is highly relevant to recent advancements in sub-quadratic sequence modeling, which aim to improve the efficiency and effectiveness of long-sequence processing. 
The underlying dynamical system studied in this work forms the core foundation for this class of models. Essential References Not Discussed: Two core works on the initialization and parameterization of SSMs appear to be missing: [1] Gupta et al., Diagonal State Spaces are as Effective as Structured State Spaces [2] Gu et al., How to Train Your HiPPO: State Space Models with Generalized Orthogonal Basis Projections Other Strengths And Weaknesses: Strengths: + Analyzing SSMs through the lens of stochastic linear control theory provides a novel perspective and a powerful theoretical tool. I found Section 4.1 particularly insightful, especially the idea of regularizing the extrapolation behavior of SSMs using boundary conditions. The authors may also want to emphasize that this regularized formulation contributes directly to improving robustness against noise. Weaknesses: - The writing could benefit from greater clarity. For example, it took me some time to fully understand how $\hat{f}_{\le t}$ depends on $\tau$. If I understand correctly, $\hat{f}$ should be expressed as two functions of $t$ and $\tau$ separately rather than ambiguously as a function of one variable. - While the derived initialization is theoretically interesting, the empirical analysis could be strengthened. I recommend evaluating the method on more diverse tasks, such as language modeling, to demonstrate broader applicability. Additionally, it would be valuable to discuss how this initialization connects with recent advances in SSMs, such as Mamba [1], and whether it can be integrated into these newer architectures. [1] Gu et al. 
Mamba: Linear-Time Sequence Modeling with Selective State Spaces Other Comments Or Suggestions: I caught one typo: Ln 279-280, a GELU nonlinearity followed a linear layer -> a GELU nonlinearity following a linear layer Questions For Authors: A question is whether the proposed initialization remains compatible with computationally efficient parameterizations, which serve as a core advantage of SSMs. The original HiPPO matrix has been shown to admit a normal-plus-low-rank decomposition, enabling fast computation. Subsequent work further simplifies this structure by approximating HiPPO with diagonal matrices. It remains unclear whether the UnHiPPO formulation retains or admits similar structural properties. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your detailed review. **Time-dependence of matrices** It is correct that SSMs learn $A$ and $B$ and discretize them on the fly. However, at least for LSSL, the time step at which the learned matrices are discretized is fixed per feature to make the model adapt to multiple timescales and does not actually vary with time. Therefore, learning $A_k$ and $B_k$ directly is effectively equivalent. **Experiments** We did not evaluate on LRA because its synthetic tasks have discrete input data, which does not fit the assumptions of our derivation: there, we assume that the noise is Gaussian and therefore continuous. Adapting the derivation to discrete data and discrete noise would be nontrivial. **Typo and related work** Thank you for your careful reading. We have added the missing "by" to clarify that the GELU nonlinearity comes first and included the two works you mentioned in our related work section. **Compatibility with efficient parametrizations** As we described under Limitations (Section 8), we were unfortunately not able to derive a similarly structured representation of our initialization, because the pseudo-inverse and the Kalman filter equations pose a significant challenge, which we elaborate in the following. $\mathbf{A}_{\mathrm{R}}$ is not a normal matrix; therefore, there is no trivial diagonalization of it. Previously, Gu et al. (S4) used correction matrices to obtain skew-symmetric matrices, which are in turn a special case of normal matrices. Here, though, finding a correction matrix is not trivial due to the pseudo-inverse. Consider the definition of the pseudo-inverse for a matrix with linearly independent columns,
\begin{equation}
\mathbf{M}^{\dagger} = (\mathbf{M}^{\ast}\mathbf{M})^{-1}\mathbf{M}^{\ast},
\end{equation}
where $(\cdot)^{\ast}$ denotes the conjugate transpose, which is equivalent to the transpose for a real matrix.
Since $\mathbf{A}_{\mathrm{R}}$ is real, we can safely use
\begin{equation}
\mathbf{M}^{\dagger} = (\mathbf{M}^{\mathsf{T}}\mathbf{M})^{-1}\mathbf{M}^{\mathsf{T}}.
\end{equation}
Then, the pseudo-inverse of (19) becomes
\begin{equation}
\begin{pmatrix}\mathbf{I}\\ \mathbf{B}_{\text{H}}^{\mathsf{T}}\\ \mathbf{Q}^{\mathsf{T}}\end{pmatrix}^{\dagger} = \begin{pmatrix}\begin{pmatrix}\mathbf{I}\\ \mathbf{B}_{\text{H}}^{\mathsf{T}}\\ \mathbf{Q}^{\mathsf{T}}\end{pmatrix}^{\mathsf{T}}\begin{pmatrix}\mathbf{I}\\ \mathbf{B}_{\text{H}}^{\mathsf{T}}\\ \mathbf{Q}^{\mathsf{T}}\end{pmatrix}\end{pmatrix}^{-1}\begin{pmatrix}\mathbf{I}\\ \mathbf{B}_{\text{H}}^{\mathsf{T}}\\ \mathbf{Q}^{\mathsf{T}}\end{pmatrix}^{\mathsf{T}}
\end{equation}
and $\mathbf{A}_{\mathrm{R}}$ can be written explicitly as
\begin{equation}
\mathbf{A}_{\mathrm{R}} = \begin{pmatrix}\underbrace{\begin{pmatrix}\mathbf{I}\\ \mathbf{B}_{\text{H}}^{\mathsf{T}}\\ \mathbf{Q}^{\mathsf{T}}\end{pmatrix}^{\mathsf{T}}\begin{pmatrix}\mathbf{I}\\ \mathbf{B}_{\text{H}}^{\mathsf{T}}\\ \mathbf{Q}^{\mathsf{T}}\end{pmatrix}}_{\text{First part}}\end{pmatrix}^{-1}\underbrace{\begin{pmatrix}\mathbf{I}\\ \mathbf{B}_{\text{H}}^{\mathsf{T}}\\ \mathbf{Q}^{\mathsf{T}}\end{pmatrix}^{\mathsf{T}}\begin{pmatrix}\mathbf{A}_{\text{H}}^{\mathsf{T}} - \mathbf{I}\\ 2\mathbf{Q}^{\mathsf{T}}\\ \mathbf{0}\end{pmatrix}}_{\text{Second part}}.
\end{equation}
The first part is a symmetric, square matrix, so its inverse is also symmetric; it would pose no problem for diagonalization if it stood by itself or alongside another symmetric matrix. The second part, however, is neither symmetric, skew-symmetric, nor, in general, normal. Therefore, the overall result needs a correction matrix to become diagonalizable.
However, hand calculation requires taking the inverse of the first part, and even then the product with a non-normal matrix must be computed; together, this makes it too complicated to reveal a structure of the matrix that could be manipulated into a normal form.
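As a quick numerical sanity check of the pseudo-inverse identity above (with an illustrative random matrix, not the actual $\mathbf{A}_{\mathrm{R}}$ blocks), the formula $(\mathbf{M}^{\mathsf{T}}\mathbf{M})^{-1}\mathbf{M}^{\mathsf{T}}$ can be verified against the SVD-based Moore-Penrose pseudo-inverse for a real matrix with linearly independent columns:

```python
import numpy as np

rng = np.random.default_rng(0)
# Tall random matrix: its columns are linearly independent almost surely.
M = rng.standard_normal((6, 3))

# Pseudo-inverse via the explicit full-column-rank formula.
M_dagger = np.linalg.inv(M.T @ M) @ M.T

# Agrees with the Moore-Penrose pseudo-inverse computed via SVD...
assert np.allclose(M_dagger, np.linalg.pinv(M))
# ...and acts as a left inverse: M† M = I.
assert np.allclose(M_dagger @ M, np.eye(3))
```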
Scalable Equilibrium Sampling with Sequential Boltzmann Generators
Accept (poster)
Summary: This paper proposes Sequential Boltzmann Generators, consisting of two conceptual ingredients: first, that invertible normalizing flows operating on Cartesian coordinates can scale to molecules as large as hexapeptides by leveraging non-equivariant transformers and the recent TarFlow framework; second, that annealing from the flow likelihood to the target with AIS and SMC can dramatically improve sampling efficiency. The authors show across-the-board improvements over CNFs on ALDP and scale these comparisons up to chignolin. Claims And Evidence: The main claim of the work is that the proposed collection of strategies enables scaling of Boltzmann generators to systems of unprecedented size without the use of coordinate transformations. This claim is well supported by experiments — this is the first demonstration of out-of-the-box Boltzmann generators on hexapeptides or molecules of similar complexity. However, there are significant caveats to some details, discussed below. Methods And Evaluation Criteria: Yes, the choice of model system and evaluation criteria is reasonable and effective at demonstrating the paper’s key claims. Theoretical Claims: I did not carefully check the proofs of theoretical claims. Experimental Designs Or Analyses: There are deficiencies with some of the experimental designs or analyses * ***[Severe]*** For the proposed method, the authors report ESS after SMC resampling, which is a completely meaningless metric as by definition the ESS can be maintained to be arbitrarily high via resampling. ***The authors must address this point (ideally via the next suggestion); otherwise I will change the recommendation to Reject.*** * [Moderate] The authors should separately report the performance of SBG with and without resampling (i.e., only with AIS) to allow a more direct comparison of I.I.D. proposal quality and ESS. 
* [Moderate] The authors should clarify (or even better, report both) the Wasserstein distances of the proposal w/o reweighting vs the samples with reweighting (which should presumably approach 0 without finite sample effects). * [Minor] It would be great if the authors reported more fine-grained Wasserstein metrics, for example in TICA space for the larger molecules. * [Minor] It would be great if ESS was also reported for Chignolin. * [Minor] The proposal appears to still contain very high-energy structures. It could be interesting to analyze the types of errors exhibited in these structures. Supplementary Material: Yes, I reviewed parts of the additional discussion in the supplementary material. Relation To Broader Scientific Literature: The paper contributes to the literature on training Boltzmann generators with access to data, where the main technical challenge is in the parameterization of the learned distribution in a way that permits exact likelihoods. The choice adopted by this paper is quite novel compared to previous works, which have generally used continuous flows or flows over internal coordinates. In particular, this work opens up the long-sought possibility of scalable, transferable, exact likelihood flows. The contribution made by this paper should significantly change the course of future work in this area. Essential References Not Discussed: All essential works were discussed. Other Strengths And Weaknesses: Since the model permits fast likelihood evaluation, there is a missed opportunity to explore a mix of data-based and energy-based training as done in the original Boltzmann generator paper. Also, with the freedom from internal coordinates, there is also a missed opportunity to explore the training of transferable models. Other Comments Or Suggestions: The definition of T-W2 distance is not clear. Could the authors clarify? Is it a multidimensional W2 distance or an average of one-dimensional W2 distances? 
The distinction between 100k vs 10k samples is not clear — could the authors clarify when this downsampling happens? I would also note that the discussion in Appendix A.1 and Appendix C seems unnecessary and might be confusing. Most readers will understand that AIS cannot be accomplished without fast likelihoods, and it would be strange to propose an additional control so as to exactly cancel out the annealing terms. The Ito filtering paper is also very new and this paper should not feel obliged to guard against misunderstandings of their key result. Also, the writing of Proposition 1 seems to be obfuscating the fact that the additional term is just a Gaussian on the CoM and the partition function a power of $2\pi$. Actually, could the authors clarify where the $\log ||c^2||/\sigma^3$ comes from in the log density of the $\chi(3)$ distribution? Questions For Authors: I have no important questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Rebuttal Reviewer S3Vr We would like to thank the reviewer for their time, feedback, and positive appraisal of our work. We are heartened to hear that the reviewer feels that the “contribution made by this paper should significantly change the course of future work in this area.” We also thank the reviewer for acknowledging that the main claims of our work are “well supported by experiments”. We now address the questions and suggestions raised by the reviewer in order, and note that additional results are included in this link: https://anonymous.4open.science/api/repo/sbg/zip ## ESS after SMC and SBG without resampling The reviewer makes an astute observation that, using SMC with adaptive resampling, ESS can be maintained to an artificially high value. We agree that such a metric in this case is not meaningful, and we had opted to include this to be in line with the broader literature; e.g., in pure sampling, NETS includes ESS after SMC. We will update the paper to remove ESS after SMC and, as the reviewer suggested, add SBG without resampling (SBG-AIS), now included as another row in our rebuttal document Tables 1 and 2. We find that SBG with AIS outperforms both the previous SOTA, ECNF, and our introduced ECNF++ for AL3-AL6 across the sample-based metrics $\mathcal{T}-\mathcal{W}_2$ and $\mathcal{E}-\mathcal{W}_1$, and is slightly worse than ECNF++ in terms of ESS. We thank the reviewer for allowing us to strengthen and clarify our empirical results, and we hope this new SBG-AIS variant alleviates this particular concern. ## Wasserstein distances of proposal w/o reweighting We thank the reviewer for this insightful idea. We have included Table 3 in the rebuttal link to quantify the performance of our proposal without reweighting, with importance sampling (i.e., just BG), and with SBG. We observe a drastic improvement in $\mathcal{E}-\mathcal{W}_1$ for IS over the proposal, and an even greater improvement with SMC. 
We will include these results in our updated draft. ## Wasserstein in TICA space Thank you for the suggestion! We have now included in the rebuttal experiments (Tables 1, 2 and 3) TICA Wasserstein metrics for all AL3+. We find SBG outperforms all baselines (ECNF/ENCF++) in this metric. We will include these results in our updated draft. ## High energy structures That is a very interesting question. Upon further investigation we found that the highest energy AL6 samples result from steric clashes, as visualized in Fig 4. In Fig 5 we show the histogram and log histogram of shortest non-bonded interatomic distance between MD (ground truth) and SBG samples. An additional source of high energy samples is covalent bonds of insufficient length (Fig 6). We will include these results in our new appendix. ## Mixing energy-based training We thank the reviewer for this great suggestion. We have experimented with mixing energy-based training with normal data-based training and find preliminary evidence that this leads to better performance on $T-\mathcal{W}_2$ and the newly introduced TICA metric but marginally worse $\mathcal{E}-\mathcal{W}_1$ vs normal SBG (Tables 1-2). Given these promising initial results we will run a full set of experiments using this method and report these in the updated paper. ## Transferable BG We appreciate the reviewers' comments that operating in Cartesian coordinates more easily leads to training transferable models. In this work, our primary focus was on proving the scalability of SBG as it relied on normalizing flows and inference scaling through SMC. Consequently, we believe that extending this to transferable systems is a natural direction of future work, but out of the scope of the current paper. ## Other questions > The definition of T-W2 distance is not clear. This is a multi-dimensional Wasserstein distance on torsion angles, accounting for the torus geometry of the angle space. 
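As an illustration of what such a torus-aware metric might look like (a minimal sketch under our own assumptions, not the paper's actual implementation), an exact $\mathcal{W}_2$ between two equal-sized sets of torsion-angle vectors can be computed from per-angle geodesic distances on the torus plus an optimal matching:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def torus_w2(a, b):
    """Exact W2 between two equal-sized point clouds on the d-torus.

    a, b: arrays of shape (n, d) of angles in radians.
    """
    # Pairwise per-angle differences, wrapped to the geodesic distance
    # on the circle: d(x, y) = min(|x - y|, 2*pi - |x - y|).
    diff = np.abs(a[:, None, :] - b[None, :, :])
    diff = np.minimum(diff, 2 * np.pi - diff)
    cost = (diff ** 2).sum(axis=-1)           # squared torus distances
    rows, cols = linear_sum_assignment(cost)  # optimal matching
    return np.sqrt(cost[rows, cols].mean())

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(64, 2))
assert np.isclose(torus_w2(x, x), 0.0)  # identical clouds have zero distance
# Wrap-around: angles near +pi and -pi are close on the torus.
assert torus_w2(np.full((4, 1), 3.1), np.full((4, 1), -3.1)) < 0.1
```

This differs from averaging one-dimensional W2 distances per angle, which ignores the joint structure of the torsion angles.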
> 100k vs 10k samples The subsampling is done after the final SMC step at the end of inference. > Proposition 1 To give further empirical credibility to CoM adjustment, we perform additional ablations in rebuttal Fig 2 and 3. We first find that the $||C||$ of the proposal samples does indeed follow an approximate Chi distribution, as expected given the training data augmentations. We also find that CoM adjustment is both important for stable IS reweighting (with a large but finite number of samples) and that sample metrics improve when the adjustment is employed in SMC. We will update the proposition statement and proof to better convey our theoretical result. ## Closing comments We would like to thank the reviewer for their time and effort. We hope all our answers here allow the reviewer to continue to positively endorse our paper, and we would love to have the opportunity to clarify any lingering questions should the reviewer have them. --- Rebuttal Comment 1.1: Comment: I appreciate the substantive additional results provided by the authors. I think these new results raise several subtle and interesting points. These should not be construed as changes in my evaluation of the paper but rather suggestions for a deeper analysis and discussion in the camera ready. It is disappointing to see that there is no widespread evidence that reweighting of any kind is able to improve the NF proposal, and further that SMC does not consistently improve upon AIS in terms of Wasserstein metrics. I will also complain about the organization of the new results. 
It would be extremely informative to have a table like the following:

| | ESS | E-W1 | T-W2 | TICA-W2 |
| -- | -- | -- | -- | -- |
| ECNF++ proposal | N/A | | | |
| ECNF++ reweighted | | | | |
| SBG proposal | N/A | | | |
| SBG reweighted | | | | |
| SBG AIS | | | | |
| SBG SMC | N/A | | | |

I will maintain, however, my high rating on the paper on the basis of the fact that this is the first work of any kind to offer a glimmer of hope that high-quality NFs with fast, tractable likelihoods can be developed. I believe that strategies such as AIS or SMC will be essential to future work in Boltzmann generators and this paper provides important signal that NF architectures exist to realize such strategies. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their further consideration of our work in light of our rebuttal and extended results. We agree with the reviewer's welcome suggestion and intend to perform a deeper analysis and discussion of our rebuttal results in the camera-ready version. We acknowledge the reviewer's comment that “there is no widespread evidence that reweighting of any kind is able to improve the NF proposal”, but we would like to very politely push back. Specifically, in our rebuttal experiments, we found a reduction in our energy metrics after reweighting as empirically observed in Table 3 of our rebuttal PDF. We do, however, agree with the reviewer that reweighting does not appear to benefit macrostructure in our dihedral angles and TICA metrics. We hypothesize that there is potentially a trade-off between energy and macrostructure. Furthermore, we do not believe this to be unique to our proposed SBG and will add a similar comparison (proposal vs IS) for the ECNF++ in updated versions of the paper. We thank the reviewer for their suggestion to improve the presentation of our results, and we will adopt the reviewer's recommendation in modifying the presentation of the final table results in our camera-ready version. 
We are glad the reviewer shares our excitement for this work, and thank them greatly for their supportive comments that allowed us to strengthen the empirical caliber of our work.
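For concreteness, the reweighting and ESS quantities discussed in this thread can be sketched generically (a toy self-normalized importance-sampling example on a 1-D Gaussian pair, not the paper's code); it also shows why ESS after multinomial resampling is uninformative, since resampling resets the weights to uniform:

```python
import numpy as np

rng = np.random.default_rng(0)

def ess(log_w):
    """Effective sample size (Kish): (sum w)^2 / sum w^2, from log-weights."""
    log_w = log_w - log_w.max()  # stabilize the exponentials
    w = np.exp(log_w)
    return w.sum() ** 2 / (w ** 2).sum()

# Toy setup: proposal N(0, 1), target N(1, 1).
n = 10_000
x = rng.standard_normal(n)
log_w = -0.5 * (x - 1.0) ** 2 + 0.5 * x ** 2  # log target - log proposal

raw_ess = ess(log_w)
assert raw_ess < n  # imperfect proposal => ESS strictly below n

# Multinomial resampling proportional to the weights...
w = np.exp(log_w - log_w.max())
idx = rng.choice(n, size=n, p=w / w.sum())
resampled_log_w = np.zeros(n)      # ...resets the weights to uniform,
assert ess(resampled_log_w) == n   # so post-resampling ESS is trivially n.
```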
Summary: This paper improves data-driven learning-based Boltzmann Generators (BG) with Sequential Monte Carlo (SMC), based specifically on a non-equilibrium transport method (NETS) recently proposed by Albergo & Vanden-Eijnden (2024). Unlike NETS, whose source energy is based on a pre-defined prior, here the source energy is learned from data with a normalizing flow. Empirical results demonstrate the effectiveness of the proposed method. Claims And Evidence: Y Methods And Evaluation Criteria: To the best of my understanding, the proposed method represents a specific instantiation of NETS, wherein a data-driven source energy is employed. Given that NETS is theoretically applicable to any choice of source energy, the proposed approach seems incremental by incorporating a pre-trained normalizing flow (NF) as the source energy. Theoretical Claims: Y Experimental Designs Or Analyses: - What's the experiment setup for learning p_\theta? How many samples are required in $\mathcal{D}$? Since the proposed method is a two-stage approach, training details should be clarified in Sec 4. Supplementary Material: Y Relation To Broader Scientific Literature: Efficient sampling methods from Boltzmann distributions could benefit the AI4Science area for applications such as drug and material discovery. Essential References Not Discussed: Y Other Strengths And Weaknesses: See above sections Other Comments Or Suggestions: N/A Questions For Authors: - It seems redundant to introduce CNF in the main paper, given that NF is used in practice to model log p_\theta. What's the reason for mentioning CNF in Sec 2? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Rebuttal Reviewer HyU2 We thank the reviewer for their time and effort. We are glad that the reviewer found our empirical results to “demonstrate the effectiveness” of our method SBG. We next clarify the main points raised in the review and note that additional results are included in this link: https://anonymous.4open.science/api/repo/sbg/zip ## Novelty We acknowledge the reviewer's concern that SBG can be thought of as an application of NETS-style inference on a learned proposal. We first highlight that our approach differs from NETS as we do not learn an extra drift term through an auxiliary loss, e.g., a PINN objective. In NETS, such a term is crucial to reduce the variance of the importance weights during a linear interpolation. However, SBG does not need to employ this computationally intense learning objective precisely because of our learned proposal. In contrast to an uninformative proposal, which has a very small overlap with the target, a learned proposal mitigates the need to learn a drift term. Thus, we would like to politely push back against the assertion that SBG is a simple instantiation of NETS, as it is in fact SMC paired with a computationally efficient way to perform Langevin dynamics, unlocked by using a normalizing flow rather than an equivariant CNF, which we argue is a novel insight that we exploit in the BG context. With regards to our framework, we again would like to politely disagree with the reviewer, as our design choices fundamentally challenge the direction of BG research. More precisely, our main technical novelty lies in demonstrating the scalability of non-equivariant classical flows in contrast to the dominant trend to leverage equivariant CNFs. In addition, we include new algorithmic novelties such as CoM adjusted resampling; rebuttal Fig 2 and 3 highlight the improved stability and performance of IS / SMC when accounting for CoM augmentation. 
Furthermore, the overall scalability of our method to hexapeptides is a novel result, and is due to a combination of each component in SBG: 1) a non-equivariant classical flow and 2) SMC that leverages the exact energy of a classical flow. We highlight that these findings enabled Reviewer S3Vr to remark in their review, “The contribution made by this paper should significantly change the course of future work in this area.” Finally, we highlight that our paper contains several theoretical results that provide quantification of the bias added through various thresholding schemes. Such schemes have routinely been employed in existing literature without justification or analysis. Our paper is the first to provide an exact characterization of the bias added, allowing practitioners to select a problem-dependent thresholding value. ## Training details We appreciate the reviewer's comment regarding the training details for the proposal flow. Whilst many of the training, inference and dataset details are included in Appendices D and E, we recognize the importance of including details in the main paper. We will update this in future versions of the work to include additional details in Section 4, but briefly state that all TarFlow models are trained for 1,000 epochs with lr 1e-4 and weight decay 4e-4 using the AdamW optimizer with a cosine lr schedule and EMA with decay 0.999. Furthermore, directly answering the reviewer's question concerning training samples, Appendix E includes a description of dataset construction from MD trajectories for each of the peptide systems. For each system we use a training set of 100k contiguous samples from a single MD trajectory. We understand that aspects of such details are important to be included directly in the experiments section, and we will revise the paper in the next draft to include the key details in the main body. 
## Background on CNF We value the reviewer's feedback that the discussion of CNFs might seem ancillary to a paper that leverages classical normalizing flows as the model. The key reason to introduce CNFs is to illustrate that our inference time scaling through SMC, while theoretically possible using a CNF proposal, faces significant challenges in scalability due to the need to simulate the ODE using equation 4 to compute $\log p_{\theta}$ for **every energy evaluation** along the interpolation used in Langevin dynamics. In contrast, our TarFlow requires only 1 call for each Langevin step. However, we understand that this section may not have been as tightly integrated into the paper, and we will improve the clarity of presentation in the updated draft. ## Closing comments We thank the reviewer again for their questions, which allowed us an opportunity to clarify our paper. We hope that our rebuttal responses fully address all the important questions raised by the reviewer, and we kindly ask the reviewer to potentially upgrade their score if the reviewer is satisfied with our responses. We are also more than happy to answer any further questions that arise.
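The cost contrast described above can be made concrete with a toy sketch (our own illustrative example, not TarFlow itself): for an invertible flow with a tractable Jacobian, $\log p_\theta(x)$ is a single closed-form pass via the change of variables, whereas a CNF must integrate an ODE for every such evaluation. A minimal diagonal affine flow shows the one-pass case:

```python
import numpy as np

# Toy invertible flow: x = exp(s) * z + b, with base z ~ N(0, I).
s = np.array([0.3, -0.2])  # log-scales (hypothetical learned parameters)
b = np.array([1.0, -1.0])  # shifts (hypothetical learned parameters)

def log_prob(x):
    """Exact log p(x) in one pass via the change of variables:
    log p(x) = log N(f^{-1}(x); 0, I) - log|det df/dz| = ... - sum(s)."""
    z = (x - b) * np.exp(-s)  # invert the flow
    log_base = -0.5 * (z ** 2).sum() - z.size / 2 * np.log(2 * np.pi)
    return log_base - s.sum()

# Sanity check against direct evaluation of the pushforward Gaussian
# N(b, diag(exp(2s))):
x = np.array([0.5, 0.2])
var = np.exp(2 * s)
direct = -0.5 * ((x - b) ** 2 / var).sum() - 0.5 * np.log(2 * np.pi * var).sum()
assert np.isclose(log_prob(x), direct)
```

A CNF would instead need to solve equation 4's ODE to obtain the same quantity, once per energy evaluation along the Langevin interpolation.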
Summary: The manuscript presents the Sequential Boltzmann Generators (SBG), a novel extension to the existing Boltzmann generator framework for scalable sampling of molecular states in thermodynamic equilibrium. The framework removes the SE(3)-equivariance and encodes equivariance softly via data augmentations, achieving enhanced computation efficiency. The method leverages inference-time non-equilibrium transport via Sequential Monte Carlo (SMC), progressively refining proposal samples and improving their alignment with the target Boltzmann distribution. SBG employs a Transformer-based normalizing flow for efficient likelihood computation, avoiding the costly integration required by continuous normalizing flows. The experimental results demonstrate the state-of-the-art performance of SBG, successfully scaling equilibrium sampling to larger molecular systems that were previously intractable for standard Boltzmann generators. Claims And Evidence: Most of the claims are supported by clear and convincing evidence. However, the following claim might be problematic to some extent: 1. The paper claims that the model can generate uncorrelated samples efficiently, given the ESS results. However, while a high ESS and improved sampling efficiency are impressive, they do not directly confirm the independence of samples as an autocorrelation analysis would. Methods And Evaluation Criteria: 1. The proposed method aligns well with the problem of scalable equilibrium sampling for molecular systems. Using a Transformer-based normalizing flow coupled with non-equilibrium transport via SMC is a reasonable approach to improve sample quality and importance weighting. 1. The evaluation metrics, namely Effective Sample Size (ESS), Wasserstein distances for energy distributions and dihedral angles, and Ramachandran plots, are reasonable for the task in the paper and provide a comprehensive assessment of sampling quality. 
However, to demonstrate that the model can generate uncorrelated samples, autocorrelation plots might be another indicative evaluation method. 1. The datasets cover different peptide systems (di-, tri-, tetra-, and hexapeptides), which are typical for molecular sampling work. However, additional benchmarks on more chemically diverse molecules could further validate the generalizability of SBG. Theoretical Claims: I checked propositions 1-3 and the sampling algorithms, and they are theoretically sound and mathematically consistent. Experimental Designs Or Analyses: 1. The paper benchmarks SBG against existing SOTA methods, including SE(3)-equivariant coupling flows and equivariant continuous normalizing flows. Therefore, the comparison between SBG and baselines gives indicative evidence of SBG's performance. 1. The comparison of computational efficiency is insightful, as it highlights the scalability of SBG compared to baseline models. Supplementary Material: I reviewed the supplementary material, mainly focusing on the proposition proofs and experimental details. Relation To Broader Scientific Literature: The key contributions of the paper contribute to the research in this area in terms of: 1. It builds upon the Boltzmann generator (Noé et al., 2019) and introduces non-equilibrium transport via Sequential Monte Carlo, improving sample quality and efficiency significantly. 2. It removes the explicit constraint of equivariance but implicitly learns this via data augmentation, which is also supported by some recent work in this area like AlphaFold3 (Abramson et al., 2024) and MCF (Wang et al., 2024). Essential References Not Discussed: All the related and essential references are discussed. Other Strengths And Weaknesses: Strengths: The paper novelly integrates Sequential Monte Carlo with Boltzmann Generators, enabling non-equilibrium transport for improved sample quality and scalability. 
Consequently, the normalizing flow operates directly on all-atom Cartesian coordinates, which was previously intractable for existing methods. In addition, the claims in the paper are well supported by the mathematical derivations and proofs. Weaknesses: While the paper claims that the model generates independent samples more efficiently, it does not analyze sample autocorrelation and compare the results to traditional MCMC methods. Other Comments Or Suggestions: See Strengths and Weaknesses. Questions For Authors: The evaluation in the paper is primarily focused on peptide systems, whereas it is unclear how well SBG generalizes to chemically diverse systems like small organic molecules or metal-organic frameworks. Therefore, have you considered any test on different chemical systems, and if not, do you anticipate any limitations when applying SBG to those systems? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Rebuttal Reviewer 9RzA We thank the reviewer for their thoughtful comments and feedback. We value that the reviewer found that most of our claims were supported by “clear and convincing evidence” and that our mathematical claims were “theoretically sound and mathematically consistent”. We are also glad that the reviewer found our use of SMC in BGs to be “novel” and to allow for “scalability,” which is supported by our “insightful” comparison on computational efficiency. We next address all the salient points raised in the review and note that additional results are included in this link: https://anonymous.4open.science/api/repo/sbg/zip ## (Auto) Correlation of samples We acknowledge the reviewer's request for the inclusion of autocorrelation analysis. However, we emphasize that the samples generated by SBG cannot suffer from autocorrelation. This is because only samples from $\tau = 1$ are returned as generated samples, hence there is no time dimension on which autocorrelation could exist. This is in contrast to methods such as MD, in which a single trajectory is returned as generated samples, and samples may exhibit autocorrelation over sampling time. To bolster our study, we have included a new variant of SBG (SBG-AIS) that does not perform any resampling during the annealing process (see Tables 1 and 2). This ablation principally serves to eliminate the correlation that SMC might introduce during inference scaling. We observe that SBG-AIS achieves acceptable ESS and outperforms other baselines in sample-based metrics. We hope that this makes the impact of correlation during inference clearer in SBG. ## Benchmarking MCMC We thank the reviewer for the suggestion of additional baselines. We are, however, unsure exactly which MCMC methods the reviewer believes to be suitable for benchmarking on the evaluation systems. We note that the goal of BGs is to amortize sampling and train a model which is able to quickly draw many uncorrelated samples. 
This is a fundamentally different approach with both benefits and drawbacks as compared to MCMC methods. ## Evaluating SBG beyond peptides We value the reviewer's feedback, as exploring SBG's generalizability to experimental settings beyond the peptides considered in this paper is interesting. We first note that before SBG, modern BGs utilized equivariant CNFs instead of classical normalizing flows that employed invertible architectures. This was due to the widely held belief that equivariant CNFs were more expressive and more amenable to larger-scale problems of interest. Despite this promise, these equivariant CNFs, as demonstrated in our ablations, struggle on datasets beyond AL3/4 and are much slower to evaluate due to the need for simulation. Thus, we argue that even demonstrating that a classical normalizing flow paired with an inference scaling strategy allows tackling even larger peptides such as the hexapeptide (AL6) is extremely interesting. These results demonstrate the expressive power of our framework in comparison to previous BGs. Such results enabled Reviewer S3Vr to highlight in their review, “The contribution made by this paper should significantly change the course of future work in this area.” As a result, we believe the experimental validation of our introduced SBG approach is well supported on the current peptide datasets, and testing on other chemically diverse systems, while extremely interesting, is beyond the scope of the current paper but remains an exciting direction of future work. At this time we don’t anticipate any limitations specific to chemically diverse systems such as small organic molecules or metal-organic frameworks, but chose to evaluate on peptides as has been the norm in many prior BG works. ## Closing comments We thank the reviewer for their time and effort in reviewing our work, and we hope the reviewer will kindly consider a fresh evaluation of our work, given the main clarifying points outlined above. 
We are also eager to engage in further discussion if the reviewer has any lingering doubts. --- Rebuttal Comment 1.1: Comment: I thank the authors for the comprehensive rebuttal. The rebuttal has addressed most of my concerns, especially the correlation of samples and the scalability of SBG in terms of sampling efficiency and tackling AL6. The only thing that is not fully resolved is how SBG can deliver scientific insight, which is actually what my question was asking about. For example, can the model sample rare events for large peptides, or can it be generalized to other chemical systems besides peptides? With the above, I'd like to raise my score from 2 to 3. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their time and consideration of our work, and are pleased to have addressed most of their concerns in our rebuttal. We additionally thank the reviewer for their score increase. The reviewer raises highly intriguing questions concerning scientific applications of SBG, which we intend to explore in future work. We thank the reviewer for their thoughtful suggestions; we do not anticipate any inherent issues with exploring non-peptide systems, and agree that scaling to larger peptides is of great scientific relevance, including the sampling of rare events.
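As an aside for readers, the effective sample size (ESS) referenced throughout this exchange is typically the self-normalized quantity $(\sum_i w_i)^2 / \sum_i w_i^2$ computed from importance weights. A minimal illustrative sketch (our own, not the paper's code; the function name is hypothetical):

```python
import math

def effective_sample_size(log_weights):
    """Self-normalized ESS, (sum w)^2 / (sum w^2), computed stably
    by shifting log-weights before exponentiating."""
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]
    s1 = sum(w)
    s2 = sum(x * x for x in w)
    return s1 * s1 / s2
```

Equal weights give ESS = N, while a single dominant weight drives ESS toward 1, which is why a low ESS signals poor proposal quality in importance-sampling-based BGs.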
Summary: This paper introduces Sequential Boltzmann Generators (SBG), an extension to the Boltzmann generator framework. By replacing conventional importance sampling with a non-equilibrium annealing process, the authors aim to transport proposal samples toward the target Boltzmann distribution. The authors also propose a normalizing flow that leverages a Transformer-based, exactly invertible TarFlow, trained with soft equivariance penalties rather than using equivariant architectures. Experimentally, the paper demonstrates increased performance over BG baselines on a series of molecular systems (ranging from dipeptides up to decapeptides such as Chignolin) in terms of effective sample size (ESS), Wasserstein distances, and inference-time scaling. ## Update after rebuttal My doubts were initially about the scalability of SMC / re-weighting approaches in general being able to go beyond classical force-fields, and the lack of comparison to alternative diffusion samplers like iDEM (which essentially amortizes force-field evaluation into training time, so it is not required for sampling). However, I want to acknowledge that showing the efficacy of an invertible NF with exact likelihoods enables some interesting downstream applications, as the authors have accomplished here (to my knowledge, not currently possible with iDEM). I think this work highlights some interesting avenues in sampling and is a high-quality paper, so I want to increase my support 3->4. Claims And Evidence: SBG claims to improve scalability of equilibrium sampling in two ways: 1. by replacing the importance sampling reweighting of proposal samples with a target-informed non-equilibrium process, 2. by increasing the computational efficiency of the proposal model by using a non-equivariant invertible transformer architecture, enforcing equivariance softly via data augmentation. 
Based on the peptide sampling experiments, the authors have shown evidence of scaling up compared to prior Boltzmann generator techniques and may very well be the first to sample these larger peptides (at least in Cartesian coordinates). However, there are other generative modeling methods which aim to perform scalable amortized Boltzmann sampling without requiring invertible networks or additional importance weighting / SMC. One example, which is only ever mentioned in Table 1, is iDEM, which has been shown to scale well to high-dimensional configurations (55-particle Lennard-Jones potential). If the goal is scaling up equilibrium sampling, I find it confusing why the iDEM method is cited in the table but never discussed or benchmarked. Flow Annealed Importance Sampling Bootstrap (FAB) and the Path Integral Sampler (PIS), which can fully make use of equivariant architectures, may also be competitive here. If these concerns are clarified I would definitely reconsider my score. Methods And Evaluation Criteria: I think the proposed methods are described clearly and many design choices are directly related to the goal of sampling molecular configurations. As mentioned above, I find only comparing against Boltzmann generator techniques strange. Theoretical Claims: The theoretical claims appear correct. Prop 2 is based on a recently established result. The energy thresholding technique suggested by Prop. 3 is interesting. Experimental Designs Or Analyses: (mostly reiterated from claims-evidence section) Based on the peptide sampling experiments, the authors have shown thorough evidence of scaling up compared to prior Boltzmann generator techniques and may very well be the first to sample these larger peptides (at least in Cartesian coordinates). However, there are other generative modeling methods which aim to perform scalable amortized Boltzmann sampling without requiring invertible networks or additional importance weighting / SMC. 
If the goal is scaling up equilibrium sampling, I find it confusing why the iDEM method is cited in the table but never discussed or benchmarked. Flow Annealed Importance Sampling Bootstrap (FAB) and the Path Integral Sampler (PIS), which can fully make use of equivariant architectures, may also be competitive here. Supplementary Material: I only reviewed some additional implementation details of the experiments and training run-times. Relation To Broader Scientific Literature: I expect to see more applications of these non-equilibrium processes as a drop-in replacement for self-normalized importance sampling. I think the paper could have addressed other non-BG frameworks for sampling that have been proposed recently. Essential References Not Discussed: I think there definitely could be more discussion of recent generative modeling techniques for sampling outside of Boltzmann generators. In particular, there are diffusion-based samplers which all have different tradeoffs, but can possibly scale just as well here: Cited but not discussed: Flow Annealed Importance Sampling Bootstrap (2023), iDEM (Akhound 2024), Transport meets variational inference (Vargas 2024) Not cited: Path-Integral Sampler (Zhang 2021), Particle Denoising Diffusion Sampler (Phillips 2024), Sequential Controlled Langevin Diffusions (Chen 2024) (concurrent work) Other Strengths And Weaknesses: Strengths: The paper offers some interesting alternatives for normalizing flow architectures that do not rely on explicit equivariant parameterizations. The inductive biases / symmetries for molecular sampling problems are still quite strong, but the non-equivariant architectures may be necessary as we scale up to even larger systems and datasets. The use of NETs in place of self-normalized importance weighting is a generally smart drop-in design choice for Boltzmann samplers. Prop 3 gives us a principled technique for energy thresholding. 
Weaknesses: For the purpose of scaling up equilibrium sampling, I believe the paper is missing some essential baselines (in particular iDEM) and a discussion of the related methods listed above. It seems the paper only shows evidence of scaling over previous Boltzmann generator frameworks, but not over other amortized samplers. I was a little bit disappointed in the novelty since the application of NETs and the architecture choice of the normalizing flows are somewhat orthogonal to each other. It seems they do work well together experimentally, but I believe more baselines are needed to prove their necessity for scalable equilibrium sampling. As I said before, if these concerns are clarified I would definitely reconsider my score. Other Comments Or Suggestions: The acronym EACF isn't defined until the appendix; it should be defined before the appendix. Many previous works evaluate on synthetic energy functions based on the Lennard-Jones potential (LJ). They are not as interesting as peptides, but they might help place the work better with respect to previous work. Questions For Authors: Although there is amortization here from the normalizing flow, it seems that sampling still requires a lot of queries to the potential energy model. How would this technique work in situations where the potential energy is computationally expensive to evaluate? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Rebuttal Reviewer VNPc We thank the reviewer for their time, feedback, and nuanced comments. We are glad that the reviewer found the non-equivariant NF an “interesting alternative” which “may be necessary as we scale up to even larger systems and datasets”. We also appreciate that the reviewer recognized our use of SMC instead of IS as a “smart drop-in design choice”. Finally, we are heartened to hear that the reviewer agrees that Prop 3 is “a principled technique”. We now address the key questions raised in the review and note that additional results are included in this link: https://anonymous.4open.science/api/repo/sbg/zip ## iDEM as a Baseline We appreciate the reviewer's valuable comments regarding iDEM as an additional baseline. We would like to first politely recall that the setting of iDEM and other diffusion samplers is completely data-free (i.e., there is no training set), in contrast to the BG setting, which includes training on (biased) data. This allows BGs to scale much more easily than the data-free amortized samplers. Consequently, we argue that scalability claims in each setting cannot be meaningfully compared. To our knowledge only one sampling method has successfully scaled to any molecular task, namely FAB on ALDP using an intrinsic coordinate system, while molecular tasks are the focus of most works on BGs. To our knowledge, no sampling method has successfully scaled to any molecular task in Cartesian coordinates, the focus of this work. To investigate this setting, we had private correspondence with the iDEM authors, who told us that iDEM failed to scale to ALDP even with substantial effort; to their knowledge no diffusion-based sampler can scale to molecular tasks. Following the reviewer's suggestion we trained iDEM on ALDP ourselves. In this case we use a vacuum instead of the implicit solvent used in our main results to speed up training (see Fig 1 in our link). 
We observe that iDEM is unable to successfully sample in this easier setting, with most modes missing and a poor energy distribution. ## Novelty We value the reviewer's feedback that the application of SMC and a general-purpose normalizing flow may initially appear to have limited technical novelty. We would like to politely push back against this assertion, as our framework and design choices fundamentally challenge the predominant approach in BGs. Prior to SBG, all modern BGs in Cartesian coordinates have resorted to equivariant CNFs; the fact that we can omit exact equivariance and use an exact NF is a novel insight. Moreover, our CoM adjustment strategy is also new. We ablate its importance in new plots in the rebuttal link (Fig 2 and 3), which show that IS reweighting significantly benefits from this adaptation strategy. In fact, Reviewer S3Vr notes, “The contribution made by this paper should significantly change the course of future work in this area.” We further wish to highlight that the paper contains numerous theoretical results on thresholding, some of which are utilized in existing papers without proper justification. Our theoretical results allow us to quantify the impact of thresholding schemes in this process. We hope that the reviewer may join us in agreeing that the design choices utilized in SBG allow for a fresh approach to building BGs using non-equivariant components with soft penalties that demonstrably scale better to larger peptides in Cartesian coordinates—which remained an open challenge until SBG. ## Additional References We acknowledge the reviewer's comment regarding the inclusion of more non-BG based samplers. We will update the paper with a dedicated discussion of non-BG based sampling and include the references suggested by the reviewer: FAB, PIS, PDDS, and the concurrent work SCLD. ## LJ Potential The reviewer is correct to highlight that many previous samplers and BG papers also evaluate on LJ potential systems. 
Such systems are, however, less challenging than even small peptides (e.g., ALDP), making them of limited interest once peptides can be successfully tackled. We thank the reviewer for their suggestion that this may help better place the work, but instead draw attention to the evident failure of iDEM on ALDP in Fig 1 as clear evidence that BG methods are superior in the data-available setting. ## Computational expense We thank the reviewer for enquiring about the efficiency of our method with respect to energy / force evaluations. During sampling, a single force evaluation is required per particle per timestep; hence, if the force is expensive, this will present a computational cost. ## Closing comments We thank the reviewer for their valuable feedback and great questions. We hope that our rebuttal fully addresses all the important points raised, and we kindly ask the reviewer to potentially upgrade their score, as they indicated, if they are satisfied with our responses. We are also more than happy to answer any further questions that arise, please do let us know! --- Rebuttal Comment 1.1: Comment: I appreciate the authors' thorough response. Regarding diffusion samplers, I don't think there is some limitation that prevents pre-training a diffusion model on biased data, so I'm not sure I agree with the authors' point that BGs are inherently more scalable. SBG is going further than previous works here in the case of scaling classical force-fields, but as the authors acknowledge, more accurate force-fields (e.g., DFT solvers) will present significant computational and scalability challenges (as with almost any re-weighting or SMC approach). There are even huge efforts right now trying to learn large equivariant GNNs to approximate these DFT force-field calculations (MACE-OFF [Kovacs 2023]) in order to achieve more computationally efficient evaluations that are more realistic than classical force-fields. 
Even still, I imagine these GNNs would be quite challenging to incorporate into SBG. This is my reason for pushing back on the focus of the scalability claims in the paper. That being said, I have considered the additional iDEM results and my concerns about the lack of diffusion sampling baselines are alleviated (based on the remark of the original iDEM authors, I trust that baseline is representative of iDEM's best performance on this benchmark), and I'm leaning towards accept now 2->3. [Kovacs 2023] MACE-OFF23: Transferable Machine Learning Force Fields for Organic Molecules --- Reply to Comment 1.1.1: Comment: We appreciate the time the reviewer has taken to reconsider their evaluation of our work, and their score increase. The reviewer is correct to identify that diffusion-based samplers could be pretrained on biased data; however, this is not the standard approach, as most of these works consider the data-free setting. It remains an open (and highly interesting) research question to establish whether diffusion-based samplers pretrained on a similar dataset to a BG outperform the BGs themselves. However, as it stands, given a dataset of MD trajectories, BGs remain the most successful method for Boltzmann sampling, with no diffusion-based sampler achieving acceptable performance on ALDP in Cartesian coordinates. We acknowledge the reviewer's concern regarding the computational cost of more accurate force fields. Whilst SMC requires more force evaluations than IS, methods including diffusion-based samplers also suffer from this requirement; hence we believe this is not a limitation unique to our work (or to BGs). SBG supports an arbitrary differentiable target energy function, hence it would be algorithmically trivial to incorporate learned DFT approximations as the reviewer suggests. Exploring such avenues, and improving the force-evaluation efficiency of SBG, is an exciting direction for future work. 
We thank the reviewer for their acknowledgement of our iDEM results and our discussion with the iDEM authors, and are glad this has alleviated the concern they held. We once again thank the reviewer for their comments and feedback, which have enhanced the empirical quality of our work.
Sleeping Reinforcement Learning
Accept (poster)
Summary: This paper considers a tabular episodic reinforcement learning setting where the set of available actions is not fixed but varying over episodes, states and time steps. The paper studies two different ways the available actions are revealed to the learner: per-episode (available actions are revealed at the beginning of each episode) and per-stage (available actions are revealed only at each time step within an episode). In the per-episode regime, the paper proposes an algorithm that works for both adversarial and stochastic cases, and proves that its sleeping regret is not larger (in big-O notation) than standard RL. In the per-stage regime, the paper proves both lower and upper bounds in this regime for two different types of action distributions - one where the action availability is independent of past states and actions and one where the action availability is dependent on the previous state and action. Claims And Evidence: This is a theory paper. All theorems are clearly stated and accompanied by their proofs in the appendix. Methods And Evaluation Criteria: The approach is sound and the results are significant. There are two different notions of regret, one for the per-episode regime (Definition 3.1) and one for the per-stage regime (Definition 4.2). These notions of regret make sense. Theoretical Claims: For the upper bounds, I only skimmed their proofs in the appendix. Since the algorithms are based on the standard approach of optimism, all the upper bound proofs seem fine. For the lower bounds, I did carefully check the proofs. In particular, I checked the proof of Theorem 4.2, which is the most important lower bound in this paper. I found the proof correct. Experimental Designs Or Analyses: The paper has no experiments. Supplementary Material: I checked the lower bound proofs in the paper, in particular Appendix E.2 (Proof of Theorem 4.2). The proof is correct. 
Relation To Broader Scientific Literature: The paper extends the existing literature on sleeping bandits to reinforcement learning. Sleeping reinforcement learning is much harder than sleeping bandits, so the contributions of the paper are significant. The paper also makes novel technical contributions in deriving optimistic algorithms for sleeping reinforcement learning. I find it interesting that such an optimism-based approach works (in the per-episode regime) even for adversarial action availability, which was not the case for sleeping adversarial bandits (e.g. Nguyen and Mehta, AISTATS 2024). In the independent and stochastic per-stage regime, the paper proposes an approach that estimates the distribution of action availability. This approach is similar to existing approaches in sleeping bandits with stochastic availabilities (e.g. Saha et al, ICML 2020). The most surprising result to me is that in both the independent and Markovian stochastic per-stage regimes, the dependency on the number of actions is exponential. This was not the case in sleeping bandits. Essential References Not Discussed: I am not aware of any missing important references. Other Strengths And Weaknesses: No other comments. Other Comments Or Suggestions: No other comments. Questions For Authors: 1. Could the authors outline any significant technical challenges for the stochastic per-stage regime where the *type* of the distribution of the action availability is unknown? In particular, why doesn't your current estimation method for $C^{ind}$ work for $C^{Markov}$? The distribution does *not* depend on the episode, so couldn't we simply estimate the conditional probability that an action might be available given the current state, previous state, previous action and previous availability? 2. Going back to the exponential dependency on $A$: the lower bound construction in Theorem 4.2 requires the availability to change based on previous availability $\mathcal{A}\_{k,h-1}$. 
Do you think the exponential dependency on A can be removed if the availability depends only on $s\_{k,h}, s\_{k, h-1}$ and $a\_{k,h-1}$? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent reviewing our work, and for having appreciated the significance of our work. We also thank the Reviewer for the comments on the results, which we will use to expand the discussion of the results, exploiting the additional page. Below, our answers to the Reviewer's questions. > Could the authors outline any significant technical challenges for the stochastic per-stage regime where the type of the distribution of the action availability is unknown? In particular, why doesn't your current estimation method for $C^{ind}$ work for $C^{markov}$? The distribution does not depend on the episode, so couldn't we simply estimate the conditional probability that an action might be available given the current state, previous state, previous action and previous availability? We can adapt the design of the estimator for $C^{ind}$ to handle $C^{markov}$ by incorporating **conditional probabilities**, as the Reviewer correctly suggests. However, we conjecture that the resulting **regret bound** would remain **of the same order (with exponential dependence on $A$)** as in our current approach (Theorem 4.1), which relies on the **augmented MDP** approach. That said, we believe our method allows us to avoid several calculations that are not strictly necessary. We will add a comment on that in the paper. > Going back to the exponential dependency on $A$: the lower bound construction in Theorem 4.2 requires the availability to change based on previous availability $\mathcal{A}\_{k,h-1}$. Do you think the exponential dependency on A can be removed if the availability depends only on $s_{k,h}, s_{k,h-1}$ and $a_{k,h-1}$? 
Yes, this dependency can be removed in the described scenario, and the regret bound we would obtain should be **very similar** to that of the **independent availability** scenario, since the additional complexity in the Markovian case stems, as the Reviewer correctly noticed from the lower bound construction, from the challenge of estimating the transition probabilities over the action sets. We will add a comment on that using the additional page. Thank you for pointing it out.
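As an aside, the counting-style conditional-probability estimation discussed in this exchange could be sketched as follows. This is purely our own illustration under assumed names (`AvailabilityEstimator`, `update`, `estimate` are hypothetical), not the paper's estimator:

```python
from collections import defaultdict

class AvailabilityEstimator:
    """Empirical estimate of P(available action set = A | context), where
    the context bundles the current state, previous state, previous action
    and previous availability set. Purely illustrative."""

    def __init__(self):
        self.counts = defaultdict(int)   # (context, action_set) -> count
        self.totals = defaultdict(int)   # context -> total visits

    def update(self, context, action_set):
        # Both arguments must be hashable, e.g. tuples / frozensets.
        self.counts[(context, action_set)] += 1
        self.totals[context] += 1

    def estimate(self, context, action_set):
        if self.totals[context] == 0:
            return 0.0
        return self.counts[(context, action_set)] / self.totals[context]
```

Such a frequency estimator is straightforward, but, as noted above, the number of possible action sets is exponential in $A$, which is where the exponential dependence enters.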
Summary: The paper introduces Sleeping Reinforcement Learning (SleRL), a new reinforcement learning paradigm where the set of available actions varies over time due to external constraints or stochastic processes. Two settings are considered: per-episode disclosure where available actions for all states are revealed at the start of each episode and per-stage disclosure where available actions are revealed only at the current step. They show an upper bound and a lower bound for the regret of a modification of UCBVI in each setting. Claims And Evidence: The regret bounds are supported by proofs. Methods And Evaluation Criteria: The algorithm is a modification of a classical algorithm called UCBVI, and it makes sense for this problem. Theoretical Claims: I have roughly checked the proof of Theorem 3.1 and it looks correct. Experimental Designs Or Analyses: No experimental designs. Supplementary Material: No. Relation To Broader Scientific Literature: This work is related to Multi-Armed Bandits under the name of “Sleeping” MABs and Reinforcement Learning with constrained action spaces. Essential References Not Discussed: This paper has discussed essential references. Other Strengths And Weaknesses: Strengths: 1. This paper is a pure theoretical paper and offers detailed analysis about the upper bound and lower bound for the algorithm. 2. This paper uses a novel construction (Figure 3) to show that an exponential dependence on the number of actions $A$ is unavoidable in the regret and proves a lower bound (Theorem 4.2). Weaknesses: 1. This paper lacks numerical analysis to test their theoretical results. 2. The gap between upper bound and lower bound may be too loose for Markovian Per-stage Disclosure. Other Comments Or Suggestions: 1. In the definition of three regrets, is there a typo? I guess $T$ and $K$ should be the same thing, the number of episodes. 2. It would be better to show some numerical results to verify the bounds. Questions For Authors: 1. 
Is it possible to add some numerical results to verify your results? 2. Is it possible to reduce the bound gaps, i.e. the exponential terms $2^{A}$ and $2^{A/2}$, between Theorem 4.1 and 4.2? 3. Is it possible to reduce the lower bound condition for $T$ with a factor $H^{10}$ in Theorem 5.1? ## update after rebuttal: I raise the score from 2 to 3. Please refer to the comment below for the reason. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent reviewing our work. Below, our answers to the Reviewer's questions and concerns. > Is it possible to add some numerical results to verify your results? To numerically validate the algorithm and show the impact of action availability on performance, we consider a modification of the well-known Frozen Lake environment (https://gymnasium.farama.org/environments/toy_text/frozen_lake/). In the original version, the agent must traverse a frozen lake (i.e., a grid) from a start to a goal position, avoiding holes in the surface (i.e., unavailable positions in the grid). We modify it by letting such holes open and close stochastically (i.e., the unavailability of each position is sampled at each time step from a Bernoulli with parameter $p$). We assume the start and goal to be fixed in the top-left and bottom-right corners of the grid, respectively. The agent can move up, down, left, right, or stay. If the agent selects an action that would move it to an unavailable state, it stays in the current position. The reward function is defined as follows: the agent receives reward $1$ for state-action pairs that have the goal as next state, and $0$ otherwise. The episode stops at the end of the time horizon (not when we reach the goal state). Given that all positions (except the start, the goal, and the position occupied by the agent) have the same probability of being frozen at any time step, it is easy to see that the optimal policy tries to move along the diagonal that connects the start to the goal. To compute the value of the optimal policy, we perform a Monte Carlo simulation for each evaluated setting. We compare our S-UCBVI (Algorithm 7) with standard UCBVI (Azar et al., 2017), observing that the latter interacts with the environment as shown in Figure 1. We compare them in terms of reward, to highlight that both algorithms converge, yet UCBVI converges to a suboptimal objective. 
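The stochastic-holes dynamics described above could be sketched roughly as follows. This is a minimal illustration with hypothetical names (`SleepingFrozenLake`, `step`), not the authors' actual experiment code:

```python
import random

class SleepingFrozenLake:
    """Minimal sketch of the modified Frozen Lake: every cell except the
    start, the goal, and the agent's current cell independently becomes
    a hole at each step with probability p."""

    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1),
             "right": (0, 1), "stay": (0, 0)}

    def __init__(self, G, p, seed=0):
        self.G, self.p = G, p
        self.rng = random.Random(seed)
        self.start, self.goal = (0, 0), (G - 1, G - 1)
        self.pos = self.start

    def step(self, action):
        # Resample the holes at every time step (Bernoulli with parameter p).
        holes = {(r, c) for r in range(self.G) for c in range(self.G)
                 if (r, c) not in (self.start, self.goal, self.pos)
                 and self.rng.random() < self.p}
        dr, dc = self.MOVES[action]
        nxt = (self.pos[0] + dr, self.pos[1] + dc)
        if not (0 <= nxt[0] < self.G and 0 <= nxt[1] < self.G) or nxt in holes:
            nxt = self.pos  # blocked moves leave the agent in place
        reward = 1.0 if nxt == self.goal else 0.0
        self.pos = nxt
        return nxt, reward
```

This mirrors the description above: holes are resampled at every step, blocked moves keep the agent in place, and a transition into the goal yields reward 1.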
We evaluate grids of size $G \times G$, with $G \in \\{2,3,4\\}$, varying the stochasticity parameter as $p \in \\{0,0.5,0.75\\}$. We consider a time horizon of $H=10$ for each episode and $K= 2 \cdot 10^5$ episodes. The Reviewer can find the plots of the experiments here: https://drive.google.com/file/d/19WlZpKUHoSoYxx4PgT3jNLqyfmrm9twu/view?usp=sharing and the code here: https://drive.google.com/file/d/1NBoDjr9P_eaUcO1Na4nyzE71q4zgAKwR/view?usp=sharing As expected, when $p=0$, we observe no performance difference. Instead, as the environment stochasticity increases, we observe a greater gap in the performances of the two algorithms, with S-UCBVI (our algorithm) achieving the optimal value (obtained via Monte Carlo simulations as described above), represented as a horizontal line, and UCBVI not reaching such an optimum. This is due to the fact that UCBVI cannot observe action availabilities, effectively playing in an environment with a lower optimum. > Is it possible to reduce the bound gaps [...] between Theorem 4.1 and 4.2? The lower bound of Theorem 4.2, which shows an exponential dependence on the cardinality of the action set, represents a **statistical barrier** in learning for Sleeping MDPs with **Markovian** per-stage availability. For this reason, as common in the literature (e.g., MDPs with adversarial transitions [1]), the goal is no longer matching such an exponential lower bound, but rather finding **structures** (e.g., independent action availability) that overcome such barriers. The result of Theorem 4.1, indeed, has to be interpreted as just an exemplification of the fact that, when accepting such an exponential dependence, Sleeping MDPs with Markovian per-stage availability can be addressed through the augmented MDP method. We will clarify this in the paper. [1] Tian, Y., Wang, Y., Yu, T. and Sra, S. Online Learning in Unknown Markov Games. ICML 2021 > In the definition of three regrets, is there a typo? 
I guess $T$ and $K$ should be the same thing, the number of episodes. We checked and the notation is correct and compliant with (Azar et al., 2017) where $T = KH$ is the total number of interactions (see Section 2), $K$ is the number of episodes and $H$ is the horizon of the single episode. > Is it possible to reduce the lower bound condition for $T$ with a factor $H^{10}$ in Theorem 5.1? Just like in (Azar et al., 2017), for the UCBVI algorithm and its analysis, it is challenging to remove such a dependence on the horizon $H$ from the minimum $T$ condition. This is exacerbated, in our case compared to (Azar et al., 2017), since we consider stage-dependent transitions (doubling the exponent of $H$). Nevertheless, there exist works [2] that succeed in mitigating such a condition at the price of more complex (and less effective in practice) algorithms. Importing such techniques to the Sleeping MDP setting can be an interesting future work. [2] Zhang, Z., Chen, Y., Lee, J. D., and Du, S. S. Settling the sample complexity of online reinforcement learning. COLT 2024. --- Rebuttal Comment 1.1: Comment: I appreciate your responses for my questions. The numerical experiment shows the advantage of your algorithm over UCBVI, and your responses answer my questions about the regret bound. I have adjusted my score accordingly.
Summary: This paper studies a new paradigm called Sleeping Reinforcement Learning, where the available action set varies during the interaction with the environment. The authors study several settings, including per-episode disclosure, in which the available action sets are revealed at the beginning of each episode, and per-stage disclosure, in which the available actions are disclosed only at each decision stage. The authors provide algorithms, upper bounds, and lower bounds for the problem. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: No. Supplementary Material: No. Relation To Broader Scientific Literature: The work is related to the works on sleeping bandits, where the action sets vary in time. To the best of my knowledge, this is the first theoretical work on sleeping RL. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. This is the first work to theoretically study the important sleeping RL setting. It is well motivated, and the authors provide many examples to show the importance of this setting. 2. The paper is well-written and easy to follow. 3. The authors conduct a complete study of this topic, providing algorithms with regret upper bounds and proving novel lower bounds. Weaknesses: 1. It may be better to also provide a lower bound for the setting of independent per-stage disclosure. 2. In Lines 130-132, $V^*_{ED}(A)$, $V^*_{SD}(A)$, $V^*_{LLC}(A)$ are used before definition. Other Comments Or Suggestions: There is a typo in Line 195. Questions For Authors: Is it possible to also provide a lower bound for the setting of independent per-stage disclosure? I believe this may help better understand the problem. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent reviewing our paper and for having appreciated the motivation, clarity and novelty of our work. Below, our answer to the Reviewer's comments. > It may be better to also provide a lower bound for the setting of independent per-stage disclosure. > It is possible to also provide a lower bound for the setting of independent per-stage disclosure? I believe this may help better understand the problem. We thank the Reviewer for giving us the opportunity to elaborate on this. After careful consideration, we argue that a **tight lower bound for the independent per-stage disclosure case is the same as that of standard MDPs, i.e., $\mathbb{E}[R(T)] \ge {\Omega}(H\sqrt{SAT})$** (see Domingues et al., 2021). Clearly, the lower bound for standard MDPs is a lower bound for the independent per-stage disclosure case, since the latter includes a smaller class of problems in which all actions are always available, i.e., $C^{\text{ind}}_h(\mathcal{A}|s) = 1$ for every $(s,h) \in \mathcal{S}\times [H]$. Concerning the tightness, we need to carefully compare it with the upper bound of Theorem 5.1. First of all, the lower bound $\mathbb{E}[R(T)] \ge {\Omega}(H\sqrt{SAT})$ (Domingues et al., 2021) is for the **expected regret** (as customary in the literature), while our upper bound for the per-stage disclosure case of Theorem 5.1 is for the **regret in high probability**. Thus, we need to convert the latter to the expected regret in order to perform the comparison. To do so, we start from the first regret bound of Theorem 5.1 (disregarding constants and logarithmic terms, but not terms depending on $\delta$): $$ R(T) \le \text{UB}(\delta) := \widetilde{O} \left( H \sqrt{SAT \log\left(\frac{2^A}{\delta} \right)} + H^6S^3A 2^A \log\left(\frac{2^A}{\delta} \right)^2 \right), \qquad \text{w.p.} \quad 1-\delta. 
$$ Notice that by disregarding the dependence on $\delta$ and for sufficiently large $T$, we obtain the second regret bound of Theorem 5.1 (still in high probability). To get the upper bound for the expected regret, we make the choice of $\delta = \frac{2^A}{T}$, whenever $T \ge 2^A$, obtaining (since $R(T) \le T$ always): $$ \begin{aligned} \mathbb{E}[R(T)] & = \mathbb{E}[R(T) \mathbb{1}\\{R(T) \le \text{UB}(\delta) \\}] + \mathbb{E}[R(T) \mathbb{1}\\{R(T) > \text{UB}(\delta) \\}] \\\\ & \le \text{UB}(\delta) + T \delta \\\\ & \le \widetilde{O} \left( H \sqrt{SAT \log\left(T \right)} + H^6S^3A 2^A \log\left(T \right)^2 + 2^A\right) \\\\ & = \widetilde{O} \left( H \sqrt{SAT } + H^6S^3A 2^A \right). \end{aligned} $$ The latter is of order $\widetilde{O} \left( H \sqrt{SAT }\right)$ whenever $T \ge \Omega(H^{10}S^5A2^{2A})$. We will adjust the presentation of the result in the final version of the paper, clarifying the difference between expected regret and high-probability regret bounds, and, consequently, the discussion of the result and future work. Thank you for raising this point. > In Lines 130-132, $V_{ED}^* (A)$, $V_{SD}^*(A)$, $V_{LLC}^*(A)$ are used before definition. We agree with the Reviewer that (at least) an informal definition is also needed in this part. It was an oversight; we fixed it. Thank you. > Typo in Line 195. Thanks, we fixed it.
Fast Video Generation with Sliding Tile Attention
Accept (poster)
Summary: This paper addresses the problem of slow generation speed in video generation. It proposes a method called sliding tile attention, designed to address this challenge. The proposed method slides and attends over local spatial-temporal regions, reducing the redundancy of computing full attention. The resulting method is much faster than prior methods. Claims And Evidence: The claims are supported by experimental evaluations. Methods And Evaluation Criteria: The proposed method and evaluation metrics make sense. Theoretical Claims: The theorems are technically sound. Experimental Designs Or Analyses: The evaluation is thorough. Supplementary Material: No. Relation To Broader Scientific Literature: I believe the resulting method will have a huge impact on many researchers working on diffusion models. I see the merit of the proposed method and the potential impact this paper could have. Essential References Not Discussed: References are adequate. Other Strengths And Weaknesses: I enjoyed reading this paper and I believe this will be useful for many applications. However, I would still like to request some clarifications: 1. Will the code be released to ensure reproducibility? 2. Will the proposed method have the potential to be applied to 3D or 4D generation? (I know 3D and 4D generation is beyond the scope of this paper, so I'm not asking for any experiments or comparisons, but I'm genuinely curious about the impact this method could potentially have.) Other Comments Or Suggestions: N/A Questions For Authors: Please see the comments in #Other Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer BHCc for their supportive feedback and genuine interest in our work. > *Will the code be released to ensure reproducibility* We appreciate your emphasis on reproducibility. We confirm that all the code, configurations, and scripts required to reproduce our experiments will be fully open-sourced. This will ensure the community can easily replicate and build upon our results. > *Will the proposed method have the potential to be applied to 3D or 4D generation?* We appreciate your insightful question regarding STA's potential applicability beyond standard video generation. Videos are already a type of 3D data (time × height × width). We believe STA naturally generalizes to other forms of 3D data such as 3D object generation (e.g., voxel grids) or even 4D data generation (e.g., dynamic point clouds or time-evolving 3D structures), so long as those data conform to some degree of data locality.
Summary: This paper introduces sliding tile attention (STA) to address prohibitive compute cost in attention calculation. The authors observed that attention scores in pretrained video diffusion models predominantly concentrate within localized 3D windows. The proposed STA can eliminate redundancy from full attention by sliding and attending over local spatial-temporal regions. STA operates tile-by-tile with a novel hardware-aware sliding window design, preserving expressiveness while being hardware-efficient. Experimental results verify the effectiveness of the proposed method in achieving diffusion acceleration. Claims And Evidence: This paper claims that "attention scores in pretrained video diffusion models predominantly concentrate within localized 3D windows." The claims are supported by experimental results. Methods And Evaluation Criteria: This paper introduces sliding tile attention (STA) to address prohibitive compute cost in attention calculation. Both qualitative and quantitative results verify the effectiveness of the proposed method. Theoretical Claims: Yes Experimental Designs Or Analyses: Yes. STA operates tile-by-tile with a novel hardware-aware sliding window design, preserving expressiveness while being hardware-efficient. Supplementary Material: No supplementary material is provided. Relation To Broader Scientific Literature: This paper introduces sliding tile attention (STA) to address prohibitive compute cost in attention calculation. The authors observed that attention scores in pretrained video diffusion models predominantly concentrate within localized 3D windows. The proposed STA can eliminate redundancy from full attention by sliding and attending over local spatial-temporal regions. STA operates tile-by-tile with a novel hardware-aware sliding window design, preserving expressiveness while being hardware-efficient. Experimental results verify the effectiveness of the proposed method in achieving diffusion acceleration. 
Essential References Not Discussed: No Other Strengths And Weaknesses: Pros: 1. The authors observed that attention scores in pretrained video diffusion models predominantly concentrate within localized 3D windows. 2. A novel hardware-aware sliding window design is proposed, preserving expressiveness while being hardware-efficient. 3. Extensive experimental results verify the effectiveness of the proposed method. Cons: 1. More training details about STA w. Training are missing. 2. No demo videos are provided in the supplementary files, which makes it hard to judge the effectiveness of the proposed method. 3. For video generation, only HunyuanVideo is used to conduct experiments. More architectures such as CogVideoX and Mochi should be discussed and compared. Other Comments Or Suggestions: Please refer to Weaknesses for more details Questions For Authors: Please refer to Weaknesses for more details Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer nmHo for their constructive feedback. Below, we address specific comments one by one. > *More training details about STA w. Training are missing.* We have provided the primary details of STA training, including datasets, prompts, learning rate, and hardware specifications, in Appendix Section B and briefly mentioned them in line 265 of the manuscript. If there are specific additional details you would like us to include beyond these, we would be happy to provide them. We will also open-source all of our **training code, configurations, and scripts** to facilitate reproducibility. > *No demo videos are provided in the supplementary files* Thanks for raising this concern. We provide an anonymous link to 20 sampled videos from our human evaluation experiments here: https://drive.google.com/drive/u/4/folders/1kRDt4ahiYQj1zk593FE6Hg2CgNsJcdIf > *For video generation, only HunyuanVideo is used to conduct experiments.* Following your valuable suggestion, we further validated STA on Wan2.1 in a training-free setup, as shown in the results below. Together with FLUX's result in Table 5 of our manuscript, we believe they provide broader evidence of STA's generalizability. 
Resolution: 93×1024×1024 | Steps | Sparsity | SSIM ↑ | PSNR ↑ | Latency | Speedup | |------------|-----------------------------------|--------|--------|---------|---------| | 50 steps full attn | 0.0% | – | – | 1996s | 1.00× | | 18 steps full + 32 steps STA | 50.31% | 77.41 | 20.16 | 1322s | 1.51× | | 25 steps full attn | 0.0% | – | – | 998s | 1.00× | | 9 steps full + 16 steps STA | 49.83% | 79.3 | 22.29 | 661s | 1.51× | | 10 steps full attn | 0.0% | – | – | 402s | 1.00× | | 4 steps full + 6 steps STA | 46.61% | 82.7 | 22.25 | 277s | 1.45× | Resolution: 93×768×1280 | Steps | Sparsity | SSIM ↑ | PSNR ↑ | Latency | Speedup | |--------------------------|--------------|--------|--------|---------|---------| | 50 steps full attn | 0.0% | – | – | 1839s | 1.00× | | 18 steps full + 32 STA | 52.71% | 80.97 | 22.09 | 1241s | 1.48× | | 25 steps full attn | 0.0% | – | – | 920s | 1.00× | | 9 steps full + 16 STA | 52.36% | 79.48 | 22.3 | 621s | 1.48× | | 10 steps full attn | 0.0% | – | – | 378s | 1.00× | | 4 steps full + 6 STA | 49.13% | 79.49 | 22.73 | 258s | 1.46× | The results confirm STA's applicability across different model architectures. We truly appreciate your thoughtful review. If our response has addressed your concerns, we would be grateful if you might consider a higher score. If there are any remaining points we could further clarify or improve, we would be sincerely thankful for your guidance.
Summary: This paper introduces sliding tile attention (STA) that operates tile-by-tile with a novel hardware-aware sliding window design, preserving expressiveness while being hardware-efficient. STA achieves 1.36–3.53× end-to-end speedup with no or minimum quality loss. ## Update after rebuttal: Thanks for the authors' response. The authors have addressed most of my concerns. My final score is 4 "accept". Claims And Evidence: Yes, claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, proposed methods and evaluation criteria make sense for the problem Theoretical Claims: Yes, overall looks correct Experimental Designs Or Analyses: Yes, the experiments overall are sound Supplementary Material: No. Relation To Broader Scientific Literature: Video generation models run with very high latency, so reducing latency is a very significant topic for the area. This paper introduces some innovations along this direction. Essential References Not Discussed: I don't see any specific missing pieces but it would be helpful to extend the current relative work section with more literatures on attention based speed-up that are more related to the paper, instead of other types of speed-up. Other Strengths And Weaknesses: The paper overall is well written and easy to follow. Experiments are sound, and the contribution of the paper is clear. The topic is very important to the community. Overall it's a good paper. Other Comments Or Suggestions: Extend the current relative work section with more literatures on attention based speed-up that are more related to the paper, instead of other types of speed-up. It would be great to present results on more models / hardware types to showcase the generalizability of the work. Questions For Authors: Are the results generalizable to other models than Huanyuan-Video? E.g., StepVideo, Wan, CogVideoX, LTX. Can you please provide some analysis on those models? 
Are the optimization hardware dependent? It would be good to report speed-ups on other types of GPUs. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer C6tW for their insightful suggestions and valuable questions. Below we address your comments and strengthen our paper. > *Are the results generalizable to other models than Huanyuan-Video?* As suggested, we further validated STA's generalizability beyond HunyuanVideo. In our original manuscript, we demonstrated the effectiveness of STA on image-generation models like FLUX. Here, we additionally apply STA to Wan 2.1 in a training-free setup. Resolution: 93×1024×1024 | Steps | Sparsity | SSIM ↑ | PSNR ↑ | Latency | Speedup | |------------|-----------------------------------|--------|--------|---------|---------| | 50 steps full attn | 0.0% | – | – | 1996s | 1.00× | | 18 steps full + 32 steps STA | 50.31% | 77.41 | 20.16 | 1322s | 1.51× | | 25 steps full attn | 0.0% | – | – | 998s | 1.00× | | 9 steps full + 16 steps STA | 49.83% | 79.3 | 22.29 | 661s | 1.51× | | 10 steps full attn | 0.0% | – | – | 402s | 1.00× | | 4 steps full + 6 steps STA | 46.61% | 82.7 | 22.25 | 277s | 1.45× | Resolution: 93×768×1280 | Steps | Sparsity | SSIM ↑ | PSNR ↑ | Latency | Speedup | |--------------------------|--------------|--------|--------|---------|---------| | 50 steps full attn | 0.0% | – | – | 1839s | 1.00× | | 18 steps full + 32 STA | 52.71% | 80.97 | 22.09 | 1241s | 1.48× | | 25 steps full attn | 0.0% | – | – | 920s | 1.00× | | 9 steps full + 16 STA | 52.36% | 79.48 | 22.3 | 621s | 1.48× | | 10 steps full attn | 0.0% | – | – | 378s | 1.00× | | 4 steps full + 6 STA | 49.13% | 79.49 | 22.73 | 258s | 1.46× | The above results confirm the generalizability of STA beyond HunyuanVideo. On Wan 2.1, STA achieves consistent speed-ups across different step settings and resolutions without compromising perceptual quality. 
> *Are the optimization hardware dependent?* STA itself is not hardware dependent, and we designed our method to be easily deployable across GPU architectures: **A100/RTX 4090** Using the FlexAttention backend, STA achieves substantial speedups with minimal overhead, as demonstrated in the tables below: RTX 4090: | Methods | Implementation | Config | Sparsity | TFLOPS | Latency(ms) | MFU | Kernel Efficiency | Speedup | |-----------|----------------|---------------|----------|--------|-------------|--------|-------------------|---------| | Full Attn | FA2 | - | 0.00% | 164.03 | 958.68 | 51.85% | 100.00% | 1.00× | | Full Attn | Flex Attn | - | 0.00% | 164.03 | 984.16 | 50.51% | 97.41% | 0.97× | | STA | Flex Attn | wt=(18,24,24) | 91.00% | 14.76 | 89.67 | 49.89% | 96.22% | 10.69× | A100: | Methods | Implementation | Config | Sparsity | TFLOPS | Latency(ms) | MFU | Kernel Efficiency | Speedup | |-----------|----------------|---------------|----------|--------|-------------|--------|-------------------|---------| | Full Attn | FA2 | - | 0.00% | 164.03 | 697.61 | 75.36% | 100.00% | 1.00× | | Full Attn | Flex Attn | - | 0.00% | 164.03 | 999.03 | 52.63% | 69.83% | 0.70× | | STA | Flex Attn | wt=(18,24,24) | 91.00% | 14.76 | 89.97 | 52.59% | 69.78% | 7.75× | **On H100 GPUs** We further leverage Tensor Memory Accelerator (TMA) capabilities to achieve even better performance. Specifically, STA implemented in TK on H100 achieves a 1.43× improvement compared to the FlexAttention implementation, showcasing additional optimization potential with modern hardware. In summary, STA does not inherently depend on specific GPU architectures. While advanced hardware features like TMA can further boost STA's performance, using FlexAttention alone provides straightforward and efficient speedups across other GPUs. 
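As a rough cross-check of the sparsity figures above: in sliding tile attention each query tile attends to a fixed window of key tiles, so the attention density is simply (window tiles)/(total tiles). The sketch below is our own illustration, not the released kernel code, and assumes a (30, 48, 80) latent with the (6, 8, 8) tile size:

```python
def sta_sparsity(latent, tile, window):
    """Fraction of full attention that sliding tile attention skips.

    `latent`, `tile`, and `window` are (T, H, W) sizes in tokens; the
    window is assumed to be a multiple of the tile size per dimension.
    """
    n_tiles = [l // t for l, t in zip(latent, tile)]   # tiles per dimension
    w_tiles = [w // t for w, t in zip(window, tile)]   # window size in tiles
    total, attended = 1, 1
    for n, w in zip(n_tiles, w_tiles):
        total *= n
        attended *= min(w, n)  # key tiles each query tile attends to, per dim
    return 1.0 - attended / total

# wt=(18, 24, 24) on a (30, 48, 80) latent: 1 - 27/300, i.e. 91% sparsity,
# matching the 91.00% entries in the tables above.
print(round(sta_sparsity((30, 48, 80), (6, 8, 8), (18, 24, 24)), 4))
```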
> *Extend the current relative work section with more literatures on attention based speed-up that are more related to the paper, instead of other types of speed-up.* We appreciate your suggestion to enhance the related work section of our manuscript. Should the paper be accepted and additional space permit, we plan to incorporate discussions on the following areas:​ 1. Efficient Vision Transformers (ViTs) with a focus on attention mechanisms such as Swin Transformer and EfficientViT. 2. Efficient attention mechanisms in large language models such as StreamingLLM and H2O.​ 3. Attention quantization techniques such as SageAttention.​
Summary: This paper introduces Sliding Tile Attention (STA), a novel attention mechanism designed to accelerate video generation using Diffusion Transformers (DiTs). The key idea is to leverage the observation that attention scores in pretrained video diffusion models are predominantly concentrated within localized 3D windows, thus eliminating redundancy from full attention. STA operates tile-by-tile with a hardware-aware sliding window design, preserving expressiveness while being hardware-efficient. The paper claims that STA achieves significant speedups (1.36–3.53× end-to-end) with minimal or no quality loss compared to existing methods like FlashAttention-3. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. The authors provide extensive experimental results, including efficiency metrics (MFU and latency), human evaluations, and automated metrics (VBench, SSIM, PSNR, CD-FVD). The evidence demonstrates that STA achieves significant speedups with minimal quality loss. The paper also includes detailed comparisons with baseline methods like CLEAR, NATTEN, and Swin, showing that STA outperforms these methods in both efficiency and quality. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem of accelerating video generation with DiTs. The STA mechanism is designed to address the computational overhead of 3D full attention by leveraging localized attention patterns. The evaluation criteria include both efficiency metrics and quality metrics.The methods are well-motivated and the evaluation is comprehensive. Theoretical Claims: The theoretical claims in the paper are correct. The authors provide a clear formulation of the STA mechanism, including the tiling strategy and the attention mask definition. 
The theorems (3.1 and 3.2) are used to quantify the number of dense and mixed blocks in different attention configurations, which helps in understanding the efficiency gains of STA. Experimental Designs Or Analyses: The experimental designs and analyses are sound and valid. The authors conducted extensive experiments to evaluate the efficiency and quality of STA. They benchmarked the efficiency of various attention algorithms, including STA, against baseline methods using metrics like MFU and latency. Human evaluations were performed on a large set of prompts to assess the quality of generated videos. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: I think STA should have restrictions on input resolution, length, and aspect ratio. The authors should clarify this key issue. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate Reviewer 3sP9's insightful question regarding the constraints on input resolutions and aspect ratios for Sliding Tile Attention (STA). Below, we clarify and expand upon these points: > *I think STA should have restrictions on input resolution, length, and aspect ratio. The authors should clarify this key issue.* Indeed, STA requires input video latents to have dimensions that are integer multiples of the tile size to fully achieve the promised efficiency gains from sparsity. We briefly mentioned this constraint in line 214 of our submission. Specifically, in our current implementation, we use a tile size of (6, 8, 8), meaning the optimal latent dimensions are multiples of these numbers, i.e., (a × 6, b × 8, c × 8) where a, b, and c are positive integers. For video latents whose dimensions do not strictly satisfy this requirement, we suggest two practical approaches to address this limitation: **Padding with Masking**: When generating a video with a dimension slightly different from multiples of the tile size, we can introduce padding tokens to match the tile-aligned dimension (which is a standard practice that is also used in FlashAttention kernel when the input sequence length is not an integer multiple of block size.) For instance, to generate a video with dimensions (29, 45, 76), we would pad the latents to size (30, 48, 80). In practice, padding tokens are then masked during attention computation to avoid contaminating the attention. **Cropping**: Alternatively, one can generate a slightly larger latent (e.g.,(30, 48, 80)) and subsequently crop the result to the desired dimensions. Although both methods introduce computational overhead due to processing padding tokens or larger dimensions, this cost remains minimal compared to STA’s efficiency gains. To thoroughly address your question, we conducted additional experiments under multiple scenarios and resolutions. 
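The tile-alignment arithmetic in the padding approach above amounts to rounding each latent dimension up to the next multiple of the tile size. A minimal sketch (our illustration, not the actual kernel code):

```python
import math

def pad_to_tile(shape, tile=(6, 8, 8)):
    """Round each (T, H, W) latent dimension up to the next tile multiple."""
    return tuple(math.ceil(d / t) * t for d, t in zip(shape, tile))

# The example from the text: a (29, 45, 76) latent is padded to (30, 48, 80);
# the padding tokens are then masked out during attention.
print(pad_to_tile((29, 45, 76)))  # -> (30, 48, 80)
```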
Below, we show the kernel benchmark results of applying STA to a (29, 45, 76) latent on RTX 4090 by padding to (30, 48, 80): | Methods | Implementation | Config | Sparsity | TFLOPS | Latency(ms) | MFU | Kernel Efficiency | Speedup | |-----------|----------------|---------------|----------|--------|-------------|--------|-------------------|---------| | Full Attn | FA2 | - | 0.00% | 141.22 | 825.46 | 51.84% | 100.00% | 1.00× | | Full Attn | Flex Attn | - | 0.00% | 141.22 | 873.78 | 48.98% | 94.47% | 0.94× | | STA | Flex Attn | w=(29,24,24)| 85.00% | 21.18 | 149.77 | 42.86% | 82.68% | 5.51× | | STA | Flex Attn | w=(18,24,24)| 89.55% | 14.76 | 89.76 | 49.83% | 96.12% | 9.20× | Our evaluations suggest this padding overhead is negligible compared to the substantial latency reduction achieved by STA. We also present end-to-end results applying STA to Wan 2.1 at a different resolution (93 frames × 1024 × 1024) in a training-free setup, further demonstrating STA is applicable to varying input dimensions: | Steps | Attn Sparsity | SSIM ↑ | PSNR ↑ | Latency | Speedup | |------------|-----------------------------------|--------|--------|---------|---------| | 50 steps full attn| 0.0% | –| – | 1996s | 1.00× | | 18 steps full + 32 steps sta| 50.31% | 77.41 | 20.16 | 1322s | 1.51× | We will explicitly clarify these practical constraints and solutions in the revised manuscript to better inform readers. We hope the additional experimental results address your concern and clarify the constraints involved. If our response resolves your reservations, we would be grateful if you would consider raising your score. We also welcome any further questions or suggestions you may have.
Trusted Multi-View Classification with Expert Knowledge Constraints
Accept (spotlight poster)
Summary: This paper proposes Trusted Multi-View Classification with Expert Knowledge Constraints (TMCEK). There are two core contributions: (1) integrating expert knowledge into multi-view learning to enhance both interpretability and uncertainty estimation, and (2) proposing a novel distribution-aware subjective opinion framework that extends the conventional model through the incorporation of an evidence distribution concentration measure. Finally, the effectiveness of TMCEK is validated on sleep stage classification, outperforming existing models in both classification accuracy and interpretability. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes. The proposed method and evaluation criteria make sense for the problem. Theoretical Claims: I have checked the correctness of Proposition 3.4 and Proposition 3.5. The algebraic steps in Appendices A.1–A.2 are clear and free of errors, demonstrating the framework’s mathematical validity. This theoretical foundation directly supports the empirical results, reinforcing the method’s reliability. Experimental Designs Or Analyses: The experimental setup is comprehensive, with evaluations on multiple datasets and comparisons against a variety of baseline models. The experiments are well-designed, but the testing data distribution and the specifics of how the datasets were divided could benefit from more clarity. Supplementary Material: I have checked the appendix, which includes detailed method descriptions, experimental settings, and additional results. The supplementary material is comprehensive and provides useful insights into the experimental process. Relation To Broader Scientific Literature: The paper contributes to the broader field of multi-view learning and uncertainty estimation. It builds on previous work in trusted learning and expert knowledge integration. 
The authors effectively position TMCEK within the context of related literature, referencing key studies on multi-view learning, evidence theory, and sleep stage classification. Essential References Not Discussed: The authors have cited the essential related works for understanding the key contributions of the paper. Other Strengths And Weaknesses: Strengths: 1.The framework is innovative, combining interpretability and performance improvement. 2.The use of expert knowledge through Gabor kernels enhances model transparency. 3.The promising results, particularly in the context of sleep disorder diagnosis, are crucial in areas where trust and transparency are essential. The paper's strong demonstration of the method's performance across multiple datasets further adds to its impact. 4.The paper is well-written and structured, offering a clear and thorough explanation of both the methodology and results. Weaknesses: 1.The paper presents the method primarily for sleep staging, but there is limited discussion on how this method might generalize to other domains. 2.The paper compares the proposed TMCEK model to various existing multi-view methods like EDL and RCML. Could you provide more details on how the distribution-aware subjective opinion mechanism improves uncertainty estimation compared to traditional methods? Other Comments Or Suggestions: Some symbols like d lack a proper explanation in the context. Questions For Authors: 1.The paper presents the method primarily for sleep staging, but there is limited discussion on how this method might generalize to other domains. 2.The paper compares the proposed TMCEK model to various existing multi-view methods like EDL and RCML. Could you provide more details on how the distribution-aware subjective opinion mechanism improves uncertainty estimation compared to traditional methods? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive and encouraging comments. Below are our responses. **Q: The paper presents the method primarily for sleep staging, but there is limited discussion on how this method might generalize to other domains.** A: Thanks for your professional question. While our method is demonstrated for sleep stage classification, the underlying framework of integrating expert knowledge through Gabor-based feature extraction and a distribution-aware subjective opinion mechanism is **general**. In any application where domain-specific patterns are critical (e.g., medical imaging, speech recognition, or remote sensing), one could parametrize them and tailor the initial kernel functions or expert constraints accordingly. For example, our framework can be effectively applied to image classification by adapting its core components to process image data. In this setting, the first convolutional layer is replaced with 2D Gabor filters, which are designed to capture edges, textures, and other structural patterns in images. These Gabor kernels can be initialized with parameters informed by expert knowledge—such as preferred orientations, scales, and frequencies that are known to highlight important features in images—and then fine-tuned through backpropagation during training. This results in a feature extraction process that is both interpretable and closely aligned with domain-specific patterns. Additionally, our framework employs a distribution-aware subjective opinion mechanism to estimate uncertainty. In image classification, this mechanism quantifies not only the overall evidence supporting each class but also the dispersion of that evidence, allowing for a more reliable estimation of confidence in the predictions. However, we acknowledge that in certain domains lacking sufficient domain-specific expert knowledge or clearly defined feature representations, our framework may not be fully applicable. 
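To make the Gabor-initialized first layer concrete, the sketch below generates a standard 2D Gabor kernel using a generic textbook parametrization (not necessarily the exact one used in TMCEK); `theta` (orientation), `lam` (wavelength), and `sigma` (envelope width) are the expert-settable parameters, and the resulting kernel can initialize a convolutional layer before fine-tuning by backpropagation:

```python
import numpy as np

def gabor_kernel_2d(size, theta, sigma, lam, psi=0.0, gamma=0.5):
    """2D Gabor filter: a Gaussian envelope modulating a sinusoidal carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_r = x * np.cos(theta) + y * np.sin(theta)    # rotate to orientation theta
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + (gamma * y_r)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_r / lam + psi)
    return envelope * carrier

# One horizontal-orientation kernel; a bank over several theta/lam values
# would initialize the weights of the first conv layer.
k = gabor_kernel_2d(size=7, theta=0.0, sigma=2.0, lam=4.0)
```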
**Q: The paper compares the proposed TMCEK model to various existing multi-view methods like EDL and RCML. Could you provide more details on how the distribution-aware subjective opinion mechanism improves uncertainty estimation compared to traditional methods?** A: Traditional uncertainty estimation methods, such as those based on Evidential Deep Learning (EDL), rely primarily on the overall sum of evidence, making them insensitive to its distribution across classes. To address this, our distribution-aware subjective opinion mechanism incorporates an explicit measure of evidence concentration using the Gini coefficient. This dual consideration of both magnitude and distribution improves sensitivity by assigning higher uncertainty when evidence is more concentrated, even if the total sum remains constant, and enhances conflict resolution by adjusting the fusion rule to weight each view's contribution based on evidence dispersion. Moreover, theoretical analysis (see Propositions 3.4 and 3.5 in our paper) and experimental results demonstrate that this approach leads to more reliable confidence estimates, particularly in scenarios with ambiguous or noisy data.
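For concreteness, a standard Gini coefficient over a per-class evidence vector can be computed as below; this is an illustrative sketch of one common formulation, not necessarily the exact estimator used in the paper. A value of 0 means the evidence is spread evenly across classes, while values approaching (n-1)/n mean it is concentrated on few classes:

```python
import numpy as np

def gini(evidence):
    """Gini coefficient of a non-negative evidence vector.

    Assumes at least one positive entry. 0 = evenly spread evidence;
    the maximum (n-1)/n = all evidence on a single class.
    """
    e = np.sort(np.asarray(evidence, dtype=float))
    n = e.size
    cum = np.cumsum(e)
    return (n + 1 - 2 * np.sum(cum / cum[-1])) / n

print(gini([2.0, 2.0, 2.0, 2.0]))  # evenly spread -> 0.0
print(gini([0.0, 0.0, 0.0, 8.0]))  # concentrated on one class -> 0.75
```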
Summary: This paper introduces an innovative trusted multi-view classification approach designed to tackle the significant shortcomings of existing methods, namely, opacity at the feature level and imprecise confidence assessments at the decision level. Its primary contribution resides in advancing current trusted multi-view classification techniques by bolstering interpretability at the feature level and refining uncertainty estimation. This enhanced framework is subsequently applied to sleep stage classification tasks. The proposed methodology demonstrates superior performance compared to state-of-the-art (SOTA) methods across multiple sleep stage classification datasets. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have checked the correctness of all proofs for Propositions 3.4 and 3.5 in Sec. A.1 and A.2. They are clearly articulated and mathematically rigorous, leaving little room for doubt about the soundness of the theoretical claims. Experimental Designs Or Analyses: I have verified the robustness and validity of the experimental designs and analyses. The designs encompass a thorough evaluation, including performance benchmarks against previous state-of-the-art (SOTA) methods, assessments of individual module effectiveness, an analysis of hyperparameter sensitivity, interpretability, and robustness. Supplementary Material: I have reviewed all supplementary material, which shows more details of the compared methods, datasets, implementation details, visual results, and supplementary results. Relation To Broader Scientific Literature: The paper is related to trusted multi-view learning. Its core contribution lies in enhancing existing trusted multi-view classification through improved feature-level interpretability and uncertainty estimation, and extending it to the sleep stage classification task. Essential References Not Discussed: The paper has discussed the essential references. 
Other Strengths And Weaknesses: Strengths: --The paper provides enhancements in both performance and interpretability. --The integration of expert knowledge with uncertainty estimation represents a notable contribution. To my knowledge, this is the pioneering work in applying trusted multi-view classification to sleep stage classification, marking a practical application. Weaknesses: --In regards to the lower classification performance observed for the N1 stage (Figure 4), have the authors investigated methods to mitigate class imbalance or improve classification accuracy for this specific stage? --The paper utilizes the Gini coefficient to measure evidence concentration. Have the authors considered alternative methods, and what potential implications might these alternatives have on the overall results? Other Comments Or Suggestions: I do not have any other comments or suggestions here; see Strengths and Weaknesses. Questions For Authors: see weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the feedback and suggestions. Below are our responses. **Q: In regards to the lower classification performance observed for the N1 stage (Figure 4), have the authors investigated methods to mitigate class imbalance or improve classification accuracy for this specific stage?** A: Regarding the lower classification performance observed for the N1 stage, our paper adopts a sampling strategy to alleviate the challenges posed by the unbalanced distribution of classes (see Appendix A.6). During training, batch samples are randomly selected from the training subset using normalized probabilities that are inversely proportional to the number of samples in each class. This ensures that underrepresented classes such as N1 are more likely to be included in each batch, thereby mitigating class imbalance. On the Sleep-EDF20 dataset, we use one fold to verify the effectiveness of the sampling strategy.

| Method | acc | f1 | kappa | wake_f1 | n1_f1 | n2_f1 | n3_f1 | rem_f1 |
|----------|--------|--------|--------|--------|--------|--------|--------|--------|
| Sample | 0.8674 | 0.7953 | 0.8266 | 0.9254 | 0.4035 | 0.8883 | 0.9246 | 0.8347 |
| No Sample | 0.8725 | 0.7672 | 0.8312 | 0.9288 | 0.2532 | 0.8943 | 0.9313 | 0.8285 |

From the results, we can observe that the sampling strategy effectively mitigates the class imbalance issue by significantly boosting N1 classification performance, leading to a better overall balance (as reflected by the macro F1-score), even though there is a slight reduction in overall accuracy and kappa. **Q: The paper utilizes the Gini coefficient to measure evidence concentration. Have the authors considered alternative methods, and what potential implications might these alternatives have on the overall results?** A: Regarding the Gini coefficient used for measuring evidence concentration, we recognize that alternative measures—such as entropy or variance—could be explored.
Each measure might bring different sensitivity characteristics in uncertainty estimation. For instance, entropy may offer a more nuanced view of dispersion; however, the theoretical properties of the Gini coefficient, as analyzed in our propositions, provide clear advantages in our framework. Future work will examine these alternatives to assess their potential impact on both uncertainty quantification and overall performance. We replaced the Gini coefficient with these alternatives on the multi-view datasets. In the experiment, we fixed the loss weight $\beta$ to 0.5 and $\gamma$ to 0.5. The experimental results are as follows.

| Method | HD | Scene | CUB | PIE |
| ---- | ---- | ---- | ---- | ---- |
| Gini | 98.40 $\pm$ 0.37 | 72.60 $\pm$ 0.99 | 95.33 $\pm$ 1.25 | 96.47 $\pm$ 0.98 |
| Var | 98.15 $\pm$ 0.51 | 72.42 $\pm$ 1.24 | 93.00 $\pm$ 2.56 | 96.76 $\pm$ 1.19 |
| Entropy | 97.75 $\pm$ 0.65 | 67.09 $\pm$ 1.21 | 94.00 $\pm$ 1.62 | 94.41 $\pm$ 2.01 |

These results suggest that the theoretical properties of the Gini coefficient offer a robust measure for capturing evidence dispersion, which in turn contributes to better uncertainty estimation and overall performance. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses from the authors. I confirm that the responses have resolved all of my concerns. In consideration of the comments from the other reviewers, I will keep my decision. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your valuable suggestions and guidance, as well as your thoughtful recognition of our work.
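The inverse-frequency sampling described in the rebuttal can be sketched in a few lines (a minimal illustration, not the authors' code; the function name `class_balanced_probs` is ours):

```python
import numpy as np

def class_balanced_probs(labels):
    # Per-example sampling probability inversely proportional to the size of
    # that example's class, normalized to sum to one. Each class then receives
    # the same total probability mass, so rare stages like N1 appear in a
    # batch about as often as common ones.
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    inv = dict(zip(classes, 1.0 / counts))
    w = np.array([inv[y] for y in labels])
    return w / w.sum()

labels = np.array([0] * 90 + [1] * 10)  # a 9:1 imbalance, e.g. N2 vs. N1
probs = class_balanced_probs(labels)
batch = np.random.default_rng(0).choice(len(labels), size=32, p=probs)
```

Despite the 9:1 imbalance, the rare class carries half of the total sampling mass, so it is drawn roughly as often as the majority class.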
Summary: This paper proposes a novel trusted multi-view classification method, called TMCEK. Compared with existing trusted multi-view classification methods, TMCEK embeds the Gabor function into the first convolutional layer as its kernel to enhance feature-level interpretability. Moreover, it introduces a distribution-aware subjective opinion mechanism to derive more reliable and realistic confidence estimates. TMCEK obtains state-of-the-art results against the compared SOTA methods, and in particular provides interpretable feature maps aligned with human understanding. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes, the proposed method makes sense for the problem at hand. Theoretical Claims: I have checked the correctness of all proofs for theoretical claims. The theoretical claims regarding the uncertainty estimation mechanism are solid and well-supported by mathematical analysis. The authors present a rigorous derivation that clearly shows how their distribution-aware subjective opinion framework enhances uncertainty quantification compared to traditional methods. The proofs provided for Propositions 3.4 and 3.5 are mathematically sound and logically consistent. Overall, there are no apparent issues with the correctness of these proofs. Experimental Designs Or Analyses: The experimental designs for evaluating the proposed model involve five aspects: performance comparison, the effectiveness of each module, hyperparameter sensitivity, interpretation, and robustness. I think that the experimental designs are solid and the results are sufficient to support their claims. Supplementary Material: The supplementary material provides useful details on the methodology, experimental settings, and additional experimental results. It contributes to a better understanding of the model's application and performance.
Relation To Broader Scientific Literature: This work improves trusted multi-view classification in two aspects: (1) feature-level interpretability by embedding expert knowledge and (2) more reliable and realistic confidence estimates by incorporating the distribution of evidence. These contributions are completely innovative. Essential References Not Discussed: The paper has covered and discussed the essential references. Other Strengths And Weaknesses: Strengths: 1. Expert knowledge is applied to trusted multi-view classification, which offers a fresh perspective on the problem. 2. The work identifies a new problem, namely that the subjective opinion is distribution-unaware, and defines a distribution-aware subjective opinion by incorporating the distribution of evidence. 3. Compared with existing methods, the paper not only achieves state-of-the-art results, but also provides confidence degrees and Gabor kernels aligned with human domain experts for its decision results. Weaknesses: 1. The use of Gabor kernels for feature extraction at the first convolutional layer is a key feature of the model. Could you provide further details on how the Gabor kernels are optimized during training and how they compare to other common feature extraction techniques regarding their impact on sleep stage classification? 2. The paper employs attribution maps to analyze the importance of the Gabor kernels. Clarify and extend the explanation of the saliency map method. A more detailed background on this attribution technique would help readers, particularly those not familiar with this approach, to fully appreciate its significance in your model’s interpretability. Other Comments Or Suggestions: Please See the Weaknesses Questions For Authors: Please See the Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are sincerely grateful to the reviewer for dedicating their time and effort to review our work. Below are our responses. **Q: The use of Gabor kernels for feature extraction at the first convolutional layer is a key feature of the model. Could you provide further details on how the Gabor kernels are optimized during training and how they compare to other common feature extraction techniques regarding their impact on sleep stage classification?** A: The Gabor kernels are embedded in the first convolutional layer, where their parameters are jointly optimized with the rest of the network via backpropagation. Their design aligns with expert knowledge of EEG waveforms, with some kernels learning to match critical patterns such as slow waves, delta, or theta rhythms that are key in sleep staging. In contrast to standard convolutional kernels that are learned from scratch without any explicit domain bias, Gabor kernels offer enhanced interpretability—since their waveform shapes can be directly compared to known EEG characteristics—and guided feature extraction by acting as filters tuned to critical frequency bands. Experimental results indicate that this guided approach not only yields competitive classification performance but also improves the model’s interpretability, as demonstrated by our attribution analysis. **Q: The paper employs attribution maps to analyze the importance of the Gabor kernels. Clarify and extend the explanation of the saliency map method.
A more detailed background on this attribution technique would help readers, particularly those not familiar with this approach, to fully appreciate its significance in your model’s interpretability.** A: We use saliency maps to interpret the role of individual Gabor kernels in decision making by computing the gradient of the output (or evidence) with respect to the feature maps from the Gabor layer. This approach relies on two key ideas: sensitivity analysis, where regions or kernels that cause large changes in the output when perturbed are deemed more important, and visual attribution, which allows us to directly visualize which kernel responses contribute most to a specific classification. Attribution maps are generated by backpropagating the gradient information through the network, effectively highlighting the areas that have the highest impact on the final decision. This method not only enhances transparency by linking specific kernel activations to decision outcomes but also bridges the gap between black-box deep learning models and expert interpretability requirements. For readers less familiar with attribution methods, this technique is well-documented (see, e.g., Ancona et al., 2019) and serves as a powerful tool to validate that kernels corresponding to critical EEG patterns—such as slow waves or theta waves—are indeed influential in the sleep staging decision. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The further detailed clarifications have addressed most of my concerns. After reading the comments from other reviewers, I would like to keep my positive rating. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your valuable suggestions and guidance, as well as your thoughtful recognition of our work.
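For readers unfamiliar with Gabor filters, a learnable 1-D Gabor kernel of the kind described in the rebuttal (center, width, and frequency as trainable parameters) can be sketched like this; this is our own illustrative parameterization, not the paper's exact one:

```python
import numpy as np

def gabor_kernel_1d(length, center, width, freq, phase=0.0):
    # A 1-D Gabor kernel: a sinusoid of frequency `freq` (cycles/sample)
    # windowed by a Gaussian envelope centered at `center` with spread
    # `width`. In the model these parameters would be optimized by
    # backpropagation; here they are fixed for illustration.
    t = np.arange(length, dtype=float)
    envelope = np.exp(-0.5 * ((t - center) / width) ** 2)
    return envelope * np.cos(2.0 * np.pi * freq * (t - center) + phase)

# A low-frequency kernel roughly in the delta band (for a 100 Hz signal,
# freq=0.02 cycles/sample corresponds to 2 Hz), the kind of waveform a
# slow-wave-matched filter would learn.
kernel = gabor_kernel_1d(length=101, center=50, width=15, freq=0.02)
```

Because the envelope and the cosine are both centered at `center`, the kernel at `phase=0` is symmetric about its center, and its waveform can be visually compared against known EEG rhythms.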
Summary: This study proposes an expert knowledge-guided trusted multi-view classification framework that achieves dual advancements in interpretability and uncertainty quantification. Specifically, the proposed method introduces expert knowledge as a tool for feature-level interpretability and defines distribution-aware subjective logic for more sensitive uncertainty estimation. The experimental results show its superior performance on three public datasets. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria make sense for trusted multi-view classification. Theoretical Claims: Based on my check, the theoretical claims made in this paper are underpinned by a solid mathematical foundation. The authors successfully demonstrate, through rigorous proofs in Propositions 3.4 and 3.5, how their novel distribution-aware subjective opinion framework offers improved uncertainty estimation. Experimental Designs Or Analyses: I have checked all the experiments in the experimental section. Supplementary Material: I have read all the content in the appendix. Relation To Broader Scientific Literature: The paper has a close relation to multi-view learning and uncertainty estimation. It makes a significant contribution to enhancing interpretability and uncertainty estimation. Essential References Not Discussed: The paper has included the main related works that are crucial for understanding the context and significance of their contributions. Other Strengths And Weaknesses: Strengths: 1. The novel integration of expert knowledge and uncertainty estimation offers clear advantages. 2. The paper provides strong experimental validation and achieves SOTA results. 3. The deficiency of existing trusted multi-view methods and the superiority of the proposed method are illustrated by examples. Overall, the organization of the paper is coherent and easily understood. Weaknesses: 1.
Could you clarify why certain Gabor kernel outputs are considered redundant during the training process? Are there any strategies for improving kernel optimization? 2. It would be beneficial to include ablation experiments where different components of the model are removed or modified, such as removing the Gabor layer or other key modules. This would help quantify the contribution of each part of the model to the overall performance. 3. It might be beneficial to further explore the hyperparameters involved in the loss function. Could you provide additional insight or analysis regarding how different hyperparameter settings might impact the performance of the model? Other Comments Or Suggestions: See weaknesses Questions For Authors: See weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your detailed feedback and thoughtful questions. Below are our responses. **Q: Could you clarify why certain Gabor kernel outputs are considered redundant during the training process? Are there any strategies for improving kernel optimization?** A: Some Gabor kernels become redundant during training because the training process could not optimize them or their learned waveforms overlap significantly with others. In our framework, the first convolutional layer uses Gabor kernels whose parameters (center, width, frequency) are tuned via gradient descent. When multiple kernels converge to capture similar EEG characteristics (for example, several may approximate slow‐wave or theta patterns), only a subset is needed to sufficiently cover the most discriminative features. The training loss naturally down‐weights kernels that do not contribute additional information to the final decision, making some kernels effectively redundant. To improve kernel optimization, strategies such as incorporating diversity regularization can be employed to explicitly penalize similarity among kernels and encourage each to capture distinct patterns. Additionally, using domain-informed initialization schemes that span a broader range of frequencies and scales, as well as regularization techniques like adding penalty terms or employing dropout to discourage redundancy, can promote a more distributed set of feature extractors. **Q: It would be beneficial to include ablation experiments where different components of the model are removed or modified, such as removing the Gabor layer or other key modules. This would help quantify the contribution of each part of the model to the overall performance.** A: We agree that ablation studies are essential for quantifying the contribution of each module. In our ablation experiments on one fold of the Sleep-EDF 20 dataset, we evaluated the impact of removing the Gabor layer and the Distribution-Aware Fusion Module. 
The results are as follows:

| Gabor | Distribution | acc | f1 | kappa |
|-------|--------------|--------|--------|--------|
| ✔ | ✔ | 0.8673 | 0.7953 | 0.8266 |
| ✔ | ✘ | 0.8613 | 0.7735 | 0.8173 |
| ✘ | ✔ | 0.8618 | 0.7683 | 0.8179 |

These results demonstrate that the complete model—integrating both the Gabor layer and the Distribution-Aware Fusion Module—achieves higher overall performance. This indicates that the two components complement each other: the Gabor layer enhances interpretability by learning EEG features aligned with expert knowledge (such as slow, delta, or theta waves), while the Distribution-Aware Fusion Module optimizes uncertainty estimation by considering the distribution of evidence. Together, they not only improve classification accuracy but also make the model’s decision-making process more transparent. **Q: It might be beneficial to further explore the hyperparameters involved in the loss function. Could you provide additional insight or analysis regarding how different hyperparameter settings might impact the performance of the model?** A: Our overall loss function is composed of three components. The first component is the accuracy loss computed from the aggregated evidence. The second component is the accuracy loss computed for each individual view, which is then weighted by β. The third component is the consistency loss, which combines two sub-losses that measure the differences in probability outputs across views and the cosine similarity of the evidence, weighted by ζ and η, and then scaled by γ in the overall loss. We conducted hyperparameter experiments on ζ and η as follows.
| ζ | η | HD | Scene15 | CUB | PIE |
| ---- | ---- | ---- | ---- | ---- | ---- |
| 0.3 | 0.7 | 97.70$\pm$0.37 | 73.47$\pm$1.44 | 92.50$\pm$2.17 | 96.32$\pm$1.04 |
| 0.4 | 0.6 | 98.00$\pm$0.47 | 72.73$\pm$1.37 | 94.33$\pm$1.33 | 96.47$\pm$1.50 |
| 0.5 | 0.5 | 98.40$\pm$0.37 | 72.60$\pm$0.99 | 95.33$\pm$1.25 | 96.47$\pm$0.98 |
| 0.6 | 0.4 | 98.35$\pm$0.51 | 72.60$\pm$1.24 | 94.67$\pm$1.55 | 97.06$\pm$1.80 |
| 0.7 | 0.3 | 98.05$\pm$0.80 | 73.22$\pm$1.34 | 93.67$\pm$1.80 | 95.59$\pm$1.32 |

Based on the experimental results, we can observe that the values of ζ and η do not have a significant impact on the overall performance. This indicates that while these hyperparameters play a role in balancing the consistency loss components, their precise values can be chosen flexibly according to specific application requirements without drastically affecting the results. --- Rebuttal Comment 1.1: Comment: Thanks for your response. My concerns have been addressed. I also read your replies to the other reviewers, and I think this is a good work that can contribute to the field of Trusted Multi-View Classification. I'm happy to raise my score to 4. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your thoughtful recognition of our work and for raising the score. Thanks very much!
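Following the description in the rebuttal, the overall objective can be composed roughly as below (our own hypothetical sketch of the weighting scheme; the actual sub-loss definitions live in the paper):

```python
def total_loss(l_fused, view_losses, l_prob_diff, l_cos_sim,
               beta=0.5, gamma=0.5, zeta=0.5, eta=0.5):
    # l_fused:     accuracy loss on the aggregated (fused) evidence
    # view_losses: per-view accuracy losses, weighted by beta
    # consistency: zeta/eta mix of the cross-view probability-difference
    #              and evidence cosine-similarity sub-losses, scaled by gamma
    consistency = zeta * l_prob_diff + eta * l_cos_sim
    return l_fused + beta * sum(view_losses) + gamma * consistency
```

Under this composition, ζ and η only redistribute weight between the two consistency sub-losses, which is consistent with the observation above that their exact values matter little.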
SE(3)-Equivariant Diffusion Policy in Spherical Fourier Space
Accept (poster)
Summary: This paper introduces the Spherical Diffusion Policy (SDP), a novel method for robotic manipulation that enforces continuous SE(3) equivariance with spherical Fourier representations. The authors propose a spherical modification of the FiLM layer commonly used in diffusion-based policy networks, prove the invariance property of their proposed approach, and empirically verified their approach in both real and simulated domains. ## update after rebuttal Accepted the original version of the paper. No updates to the review are necessary. Claims And Evidence: The paper claims that enforcing SE(3) equivariance using spherical Fourier features leads to better generalization across varied 3D arrangements and improved sample efficiency. These claims are supported by detailed simulation results. Methods And Evaluation Criteria: The authors evaluate their approach on a number of MimicGen tasks and real-world robotics tasks, finding that their method (SDP) outperforms other equivariant and non-equivariant baselines. While it seems the MimicGen environments only change the yaw dimension of the objects, the real world experiments use randomizations along other dimensions as well. Overall, I believe the set of tasks is sufficient for evaluation. Still, It would be nice to see how this method fares in more cluttered scenes. Theoretical Claims: The paper makes claims about the equivariance of the convolution and SFiLM, which I verified in the appendix. Experimental Designs Or Analyses: They additionally perform an analysis of how all baselines degrade with increasing initialization noise, showing that SDP degrades the most gracefully. However, it's not 100% clear how these experiments were performed from the description: "We train all the baselines on progressively tilted environments with 100 demonstrations." Does tilt=30 degrees imply that demonstrations are on environments sampled between 0 and 30 degrees? Some clarification on this experiment would be helpful. 
Supplementary Material: Outside of the equivariance proofs, I did not review the supplementary material. Relation To Broader Scientific Literature: This work extends and improves upon recent advances in diffusion-based policy learning and equivariant neural networks. It builds directly on prior works such as Diffusion Policy (Chi et al., 2023), EquiDiff, and EquiBot, addressing limitations related to discretized equivariance and computational inefficiencies. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - Novel algorithm for SE(3) invariant closed loop control for robot manipulation - Extensive evaluation and demonstration of the benefits of this approach - Detailed theoretical justification for the approach Weaknesses: - The paper is a bit densely written at several points, making it not very approachable - No testing in cluttered scenes - No statistical analysis of the results or inference timing data reported Other Comments Or Suggestions: Grammatical Issues: “… the proof essentially follows Schur’s lemma (Schur, 1905) that any linear operation of SO(3) irreps acts as on each irreducible subspace is equivariant.” “Diffser” → “Diffuser” “Diffusor” → “Diffuser” Questions For Authors: EquiBot (Yang et al., 2024a) are limited to degree l = 1 representation that suppress rich information. Can the authors elaborate on this point and give some examples of problems for which that would be a limiting constraint? The authors claim that their work is more “computationally efficient” than competing approaches, but fail to provide any inference-time comparisons. It was not clear to me why equation 5 was chosen in particular. It’s clear from the proof in A.2 that it preserves equivariance, but it seems other expressions (e.g. without normalization) would achieve the same goal. Code Of Conduct: Affirmed. Overall Recommendation: 5
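On the l=1 question: degree-1 spherical-harmonic features are, up to constants, just Cartesian 3-vectors, which is why Vector-Neuron-style models that stop at l=1 can only carry "vector-like" information (directions, offsets) and discard the finer angular detail that degrees l >= 2 encode. A tiny numpy sketch (our own illustration, not the paper's code) of what l=1 equivariance means:

```python
import numpy as np

def rot_z(angle):
    # Rotation about the z-axis, an element of SO(3).
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def degree1_feature(points):
    # The centroid is a degree l=1 (vector-valued) descriptor of a point set:
    # rotating the input rotates the feature by the same rotation matrix.
    # An l=1-only model is restricted to features that transform this way.
    return points.mean(axis=0)

pts = np.random.default_rng(1).normal(size=(5, 3))
R = rot_z(0.7)
# Equivariance: feature(points @ R.T) == R @ feature(points)
```

A task whose optimal action depends on, say, the elongation axes or angular shape of an object (second-moment, l=2 information) cannot be captured by features that transform only like single vectors, which is the limiting constraint the review asks about.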
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We respond below: > "However, it's not 100% clear how these experiments were performed from the description: "We train all the baselines on progressively tilted environments with 100 demonstrations." Does tilt=30 degrees imply that demonstrations are on environments sampled between 0 and 30 degrees? Some clarification on this experiment would be helpful." We are sorry for the confusion. Tilt=30 degrees imply the environment (the table top) is sampled between 0 and 30 degrees. We will add the clarification to the paper. > "The paper is a bit densely written at several points, making it not very approachable" Could the reviewer please point out which points can be improved? > "No testing in cluttered scenes" For clarification, the (i) Pick Place task and (j) Kitchen tasks in MimicGen (see Figure R3 for visualization) involve multiple objects and can be viewed as slightly cluttered scenes. Unfortunately we don’t have time to test densely cluttered scenes. > "No statistical analysis of the results or inference timing data reported" > "The authors claim that their work is more “computationally efficient” than competing approaches, but fail to provide any inference-time comparisons." The step-wise success rate for physical experiments is reported in Table A.1. Moreover, in Table R1 we add the inference timing data. > "Grammatical Issues" Thank you for pointing out the grammar issues and typos, we will fix these errors in the revision. > "EquiBot (Yang et al., 2024a) are limited to degree l = 1 representation that suppress rich information. Can the authors elaborate on this point and give some examples of problems for which that would be a limiting constraint?" Please see the [response](https://openreview.net/forum?id=U5nRMOs8Ed&noteId=rKiX1W2hTw) to ""Although demonstrated in Sec. 4.2, the practical advantages of the proposed ... remain unclear. 
Specifically, what does it mean that "Vector Neuron only supports up to l=1"?"" > "It was not clear to me why equation 5 was chosen in particular. It’s clear from the proof in A.2 that it preserves equivariance, but it seems other expressions (e.g. without normalization) would achieve the same goal." We agree other expressions (without normalization or using tensor products, etc) could achieve the same equivariance. The proposed method stems from the nonlinearity operation in Vector Neurons. We find Equation 5 works well and we haven’t compared other expressions.
Summary: This paper proposes a new method called “Spherical Diffusion Policy (SDP)” for robot manipulation. The paper focuses on 3D generalization, using SO(3) and T(3) equivariance (i.e., the full SE(3) group) to handle random tilts and object placements. It designs a special spherical encoder with a spherical FiLM layer and a “spherical denoising temporal U-net (SDTU).” The method is tested on many simulation tasks as well as real robot tasks (both single-arm and bimanual). The results show improved performance compared to baselines like EquiDiff or Diffusion Policy in environments with random initial poses. ## After Rebuttal Though the overall results reported by the authors are very good, in their rebuttal the authors were not able to address my concern that the baseline is not tuned. The authors provide the parameter table while deliberately omitting the untuned baseline (DP3) from it. Also, the authors' reply "We use the DP3 results reported in the CoRL paper Equivariant Diffusion Policy" shows that the results were directly copied from previous work without justification. I am not sure if there is an issue with this, but it looks suspicious to me. Considering the factors mentioned above, I would change my score from weak accept to weak reject. The other parts of this paper are all good. Claims And Evidence: The claims are supported by simulation and real-world experiments. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense to me. Theoretical Claims: Yes. Experimental Designs Or Analyses: The experiment analysis overall looks good to me, with an extensive baseline comparison and a few ablation studies. However, there is a small problem regarding the baselines. I found that the results for DP3 (Ze et al., 2024) do not make sense to me, as it uses 3D information yet performs even worse than image-based methods, which does not align with my tuning experience.
Could the authors tune some parameters of this baseline, such as a longer prediction horizon, to see whether it improves? Besides, it would be good to report the parameters for all baselines. Supplementary Material: Yes. The video looks good to me. Relation To Broader Scientific Literature: The contributions are very related to the literature. Essential References Not Discussed: No Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: 1. More ablation studies would be helpful to understand the method. 2. Detailed parameters for baselines for fair comparison. Questions For Authors: As mentioned above, the authors are encouraged to tune the baseline methods or report the failure modes of the baselines and their own method. Besides, how the baselines are implemented and tuned should be reported. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We respond below: > "Could authors tune some parameters of this baseline to see, like the longer prediction horizon? Besides, it is good to report the parameters for all baselines." We use the DP3 results reported in the CoRL paper Equivariant Diffusion Policy. Additionally, we implemented a new baseline, DP3-canonicalization, which improves upon DP3 but still significantly underperforms SDP (see Table R2). Regarding DP3's poor performance on the MimicGen tasks, we hypothesize that this is due to its use of an MLP as the vision encoder. The MLP may struggle to capture information about multiple objects or their orientations—factors that are critical in the MimicGen tasks. > "More ablation studies would be helpful to understand the method." In Table R2, we add DP3 (no equivariance) and DP3-cano (global SE(3)-equivariance) as additional baselines for ablation. We find that SDP, which has local equivariance, outperforms the global SE(3)-equivariance used in DP3-cano. We also introduce a new baseline, Discrete SDTU, which enforces discrete equivariance using the Octahedral or Cubic group, which discretize SO(3)—similar to EquiDiff, which uses the cyclic group C8 that discretize SO(2). Our results show that Discrete SDTU leads to an approximate 10% drop in performance, highlighting the advantage of SDP’s use of continuous spherical Fourier representations. > "Detailed parameters for baselines for fair comparison." We provide detailed parameters for baselines. SDP generally adopts Diffusion Policy’s hyperparameters, except for batch size, because SDP is heavier. Table R3. Hyperparameters for baselines. 
| | SDP | EquiDiff | EquiBot | DP |
|---------------------------|--------|----------|---------|--------|
| Batch Size | 32 | 64 | 64 | 64 |
| Prediction Horizon | 16 | 16 | 16 | 16 |
| Action Horizon | 8 | 8 | 8 | 8 |
| Learning Rate | 1e-4 | 1e-4 | 1e-4 | 1e-4 |
| Epochs | 500 | 500 | 500 | 500 |
| Learning Rate Scheduler | cosine | cosine | cosine | cosine |
| Noise Scheduler | DDPM | DDPM | DDPM | DDPM |
| Diffusion Train/Test Step | 100 | 100 | 100 | 100 |
| Encoded Scene Dimension | 128 | 128 | 128 | 128 |

> "report the failure mode of the baselines/their own method." Detailed failure modes are reported in Section B.3, Table A1, and Figure A4. The major failure mode is inaccurate action prediction when the end effector is about to make contact. We will strengthen the connection between the failure mode analysis section and the main text.
Summary: This paper works on improving the equivariance of diffusion policies for manipulation tasks. To this end, the authors align the input point clouds and robot state into a canonical coordinate frame to achieve translational equivariance, and then project the encoded observations onto a spherical harmonic basis to achieve SO(3) equivariance. To further handle the SO(3) equivariance within the network structure itself, the authors upgrade the 1D convolution in the vanilla diffusion policy to a mixed-channel temporal convolution, and the FiLM conditioning to a spherical FiLM layer. To demonstrate the effectiveness of the proposed method, especially its SE(3) equivariance, the authors augment the scene to manipulate with additional rotations and translations, and achieve superior performance over existing baselines that use absolute control or velocity control modes. Claims And Evidence: The major claim of this paper is its SE(3) equivariance and its better sample efficiency and generalization. These claims are in general well supported with straightforward proofs of the propositions as well as extensive experiments. However, I have two major concerns regarding this claim: - First, do we really have to achieve SO(3) equivariance within the network architecture itself when the observation inputs are point clouds? - If the authors aim to achieve SO(3) equivariance to the relative rotation with respect to the end-effector, the point clouds and robot arm states can be further aligned within the gripper's local coordinate frame by applying a known rotation. - If the authors aim to achieve SO(3) equivariance to an object's absolute rotation, then the policy is equivariant when the scene is under a global rotation relative to a demonstration. However, during the experiments, the proposed method demonstrates significantly better performance on manipulation tasks with multiple rigid objects, which needs more justification.
- Second, the translation equivariance is achieved by explicit alignment of the point clouds and proprioceptive states, so the diffusion policy network itself is only SO(3) equivariant. Normalization is a standard preprocessing step in point cloud networks, so it would be better to explain how this normalization is distinct from others. In general, I would prefer to have SO(3) in the title instead of SE(3) to be more focused and reduce confusion. This paper needs more discussion and justification of these two issues, which would better validate its motivation. Methods And Evaluation Criteria: The methods and evaluation criteria make sense in general, except for one minor issue: - Projection onto the spherical harmonic basis loses details of the added noise and therefore potentially interferes with the diffusion process. It would be beneficial to discuss this and correspondingly validate the maximum degree of the spherical harmonics in ablation studies. Theoretical Claims: I checked the propositions in this paper and they are generally correct. My only concern is that, theoretically, the policy is only SO(3) equivariant to the scene under a global rotation. However, the experiments demonstrate superior performance on scenes with multiple randomly placed objects, such as Three Pc. Assembly, which requires more justification and explanation. Experimental Designs Or Analyses: The experiment designs are valid except for two minor issues: - First, the paper only compares with previous works using 100 demonstrations for training; it would be worth conducting experiments with more demonstrations, such as the settings in EquiDiff, to see how the performance saturates. - Second, in Sec. 5.3 it is somewhat surprising that absolute position control is significantly worse. However, there is no further elaboration on the detailed settings, such as whether the control signal comes directly from the policy network or is converted from relative control signals.
In addition, it is also unclear what conclusions we can draw from this performance gap. Supplementary Material: I have reviewed both the appendix and the supplementary video. Relation To Broader Scientific Literature: This paper is an application of SO(3) equivariance to diffusion policy tasks. Therefore, it has connections to previous works on equivariant networks. However, the results demonstrated in this paper are domain-specific and do not expand the understanding of equivariant network architectures in general. Essential References Not Discussed: Essential references have been discussed to understand this paper. Other Strengths And Weaknesses: Strengths: - The performance is significantly better than existing work when the number of demonstrations for training is limited. Weaknesses: - Some descriptions need to be further elaborated: lines 236-240: the architecture of the encoder is difficult to understand. Do the authors mean that a ResNet is applied point-wise first and the result is then sent to EquiformerV2 for further feature extraction? It would be better to illustrate the details of the encoder to reduce confusion. - Is there any motivation for the physical experiments using only one ray-fin finger as the end-effector on each arm, which is less consistent with the simulation? Other Comments Or Suggestions: - Would it be better to remove $e_i^T - e_i^T$ since it is constantly zero, or is there a specific reason to keep it? Questions For Authors: Please refer to the above comments and address the major concerns about: - Claims and evidence: what specific rotations does this paper aim to be equivariant to: transformations between gripper coordinate frames, global transformations of the scene, or individual transformations of objects within a scene? - Experiments: discuss how relative position control and SE(3) equivariance contribute to the performance distinctly, and their relation. Also discuss how the performance varies when more demonstrations are given.
- Methods: Elaborate more on the structure of the encoder and on the effect of the projection onto spherical harmonics on the diffusion process. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We respond below ([link to figures](https://limewire.com/d/DAndu#qMh5UOCXI6)):

>"... the point clouds and robot arm states can be further aligned within gripper's local coordinate frame by applying a known rotation. If the authors target to achieve SO(3) equivariance to object's absolute rotation, then it is equivariant when the scene is under a global rotation to a demonstration."

Leveraging SE(3)-invariant representations is a valuable baseline suggestion, and we have included it as an ablation. Specifically, we implemented a baseline called DP3-cano, which achieves SE(3)-invariance by canonicalizing (i.e., normalizing or transforming) both the point cloud and the action into the gripper frame. However, experimental results show that DP3-cano significantly underperforms the equivariant method SDP.

Table R2. Additional Ablations.

| | | Coffee 15 | Three Pc. As. 15 | Square 15 | Threading 15 | Avg SR |
|-----------------|-------------------------|-----------|------------------|-----------|--------------|--------|
| SDP | SE(3) equivariance | 54 | 49 | 38 | 53 | 49 |
| Discrete U-net* | Octahedron equivariance | 42 | 16 | 34 | 48 | 35 |
| DP3-cano* | SE(3) equivariance | 40 | 0 | 8 | 12 | 15 |
| DP3* | None | 20 | 0 | 0 | 4 | 6 |

*Currently, these baselines have been trained for 300 out of the planned 500 epochs. We expect the final average success rate to improve by approximately 4%.

We hypothesize that local equivariance in SDP plays a crucial role. Furthermore, Miller et al. [1] demonstrate that using equivariant features results in significantly lower prediction errors compared to invariant features in molecular property prediction tasks.

[1] Benjamin Kurt Miller, Mario Geiger, Tess E.
Smidt, Frank Noé, Relevance of Rotationally Equivariant Convolutions for Predicting Molecular Properties, Machine Learning for Molecules Workshop at NeurIPS 2020 > "Second, the translation equivariance is achieved by explicit alignment of point clouds ... Normalization is a standard preprocessing in point cloud networks, ... I prefer to have SO(3) in the title instead of SE(3) to be more focused and reduce confusion." For clarification, our method is SE(3)-equivariant. Specifically, SDP achieves SO(3) equivariance through the use of spherical Fourier representations, and T(3) (translation) equivariance via canonicalization. It is important to note that canonicalization differs from normalization in that the generated action is also transformed into the canonical frame. > "it would be beneficial to discuss it and correspondingly validate the maximum degree of spherical harmonics in ablation studies." We agree that truncated spherical harmonic coefficients approximate the underlying spherical function, and using a low maximum frequency can lead to a loss of important details. As shown in Table A3 of the paper, setting $l = 1$ results in a noticeable performance drop. However, performance saturates at $l = 2$ and $l = 3$, suggesting that higher frequencies provide diminishing returns. Due to space limitations, this table is currently located in the appendix, but we will strengthen the connections in the main text to highlight this finding. > "... it worth to conduct experiments on more demonstrations, such as the settings in EquiDiff to see how the performance saturate." This is a great question—we have plotted data scaling curves in Figure R1. Each point represents the average performance across four tilted-table tasks (with tilt angles in $[0, 15^\circ]$) for SDP, EquiDiff (EDP), and DiffPo (DP). 
Notably, SDP trained with $10^2$ demonstrations outperforms EDP trained with $10^3$, while EDP with $10^2$ demonstrations achieves performance comparable to DP trained with approximately $10^{2.5}$ demonstrations.

> "... no further elaboration about the detailed settings, such as whether the control signal is directly from the policy network or converted from relative control signals."

Absolute position control does not have translational equivariance, while relative position control does, as explained in Section 4.1 and proved in Appendix C.2. The ablation study in Section 5.3, which uses absolute action control, highlights the importance of translational equivariance in our method.

> "The architecture of the encoder"

See Figure R2.

> "Any motivation to have physical experiments with only one ray-fin finger ..."

We use the shared robot in our lab and did not make any changes to the hardware.

> "Is that better to remove e_i^T−e_i^T"

Yes, it is kept for the convenience of presentation in the paper.
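As a concrete illustration of the canonicalization argument above, here is a minimal numerical sketch (our own toy code, not the paper's implementation; all names are illustrative): shifting both the point cloud and the gripper position into a gripper-centred frame yields an observation that is invariant to any global translation, which is why a policy predicting relative (delta) actions in that frame is translation-equivariant.

```python
import numpy as np

rng = np.random.default_rng(0)

def canonicalize(points, gripper_pos):
    """Shift the point cloud into a gripper-centred frame (T(3) canonicalization)."""
    return points - gripper_pos

points = rng.normal(size=(64, 3))    # toy scene point cloud
gripper = np.array([0.1, -0.2, 0.5])
t = np.array([1.0, 2.0, -3.0])       # arbitrary global translation

obs = canonicalize(points, gripper)
obs_shifted = canonicalize(points + t, gripper + t)

# The canonical observation is invariant to the global translation, so a
# policy that outputs *relative* (delta) actions applies unchanged in the
# shifted scene and reaches the correspondingly shifted target.
assert np.allclose(obs, obs_shifted)
```

The same check fails for absolute position targets, since those shift by `t` while the canonical observation does not.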
Summary: This paper presents a novel SE(3)-equivariant diffusion policy, named Spherical Diffusion Policy (SDP), aimed at improving generalization for robotic manipulation tasks across varying 3D transformations. The key motivation stems from the assumption that embedding states, actions, and denoising processes in spherical Fourier space ensures SE(3)-equivariance, thereby enabling robust generalization across transformed scenes without extensive data collection. Specifically, the framework incorporates a spherical encoder to embed scene features, spherical FiLM layers for equivariant conditioning, and a spherical denoising temporal U-net for spatiotemporal equivariance. Additionally, theoretical analyses verify the equivariance of the proposed method. Extensive simulation and physical experiments demonstrate the effectiveness and superior performance of SDP over state-of-the-art baselines on multiple challenging robot manipulation tasks. **update after rebuttal** After reading the rebuttal, most of my concerns have been addressed, and I am inclined to keep my original score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: The paper advances prior works on equivariant diffusion policies by leveraging spherical Fourier representations for improved generalization in robotic manipulation tasks. Essential References Not Discussed: Some SE(3) diffusion models should be cited as related work, such as: [1] SE(3) diffusion model with application to protein backbone generation, ICML 2023 [2] SE(3) diffusion model-based point cloud registration for robust 6D object pose estimation, NeurIPS 2023 [3] SE(3)-DiffusionFields: Learning smooth cost functions for joint grasp and motion optimization through diffusion, ICRA 2023 Other Strengths And Weaknesses: 1.
Strengths - The paper introduces an innovative SE(3)-equivariant diffusion policy framework, effectively enabling model generalization across SE(3)-transformed environments; - The proposed spherical Fourier features, spherical FiLM layers, and spherical denoising temporal U-Net achieve SE(3)-equivariance with solid theoretical grounding; - Extensive empirical evaluations in both simulation and real-world robotic experiments demonstrate substantial improvements in performance and generalization compared to existing state-of-the-art methods. 2. Weaknesses Although demonstrated in Sec. 4.2, the practical advantages of the proposed spherical Fourier representations over previous SE(3)-equivariant representations (e.g., Vector Neuron and ET-SEED) remain unclear. Specifically, what does it mean that "Vector Neuron only supports up to l=1"? Furthermore, could you clarify how the "truncated spherical Fourier coefficients provide a compact approximation of spherical features and are compatible with SO(3) rotations," in contrast to the "computationally heavy SO(3) irreps used in ET-SEED"? - Table 1 indicates that as the degree of SE(3) initialization increases, SDP continues to exhibit significant performance degradation, raising concerns about the actual effectiveness and robustness of the proposed method. Could you address this point explicitly? - Please clarify the fundamental differences between the spherical Fourier representations employed in this paper and those used in Spherical Fourier Neural Operators. - Given the importance of inference speed in robotic control applications, comparisons regarding computational efficiency are currently missing. It would be beneficial if the authors could provide clarity on this aspect. Other Comments Or Suggestions: See weaknesses. Questions For Authors: Why not directly consider SE(3)-invariant representations rather than focusing on designing SE(3)-equivariant representations? 
Both approaches yield similar results, and SE(3)-invariant representations might even be simpler to design than equivariant ones? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We respond below: > "Some SE(3) Diffusion Models should be cited as the related work, such as: ..." Thank you for highlighting these relevant works. We will include citations to these papers in the revised related work section. >"Although demonstrated in Sec. 4.2, the practical advantages of the proposed ... remain unclear. Specifically, what does it mean that "Vector Neuron only supports up to l=1"?" Both Vector Neurons (VN) and our method can be interpreted as representing features in spherical Fourier space, up to a specified maximum frequency $l$. While our method supports arbitrary spherical harmonic types (we use $l_{max}=2$ in our paper), VN is limited to using only scalars ($l=0$) and vectors ($l=1$) as features. The scalars in VN are mathematically equivalent to type-0 features, which are invariant to rotation. The 3D vectors $V = [V_x, V_y, V_z]$ used in VN correspond to type-1 features $c_1 = [c^{-1}_1, c^{0}_1, c^{1}_1]$, as both are three-dimensional and transform under rotation via a rotation matrix. Thus, VN only supports spherical features up to type-1. However, type-1 features have limited representational capacity—for example, they are incapable of capturing spherical distributions with two distinct modes. >" could you clarify how the "truncated spherical Fourier coefficients ..." in contrast to the "computationally heavy SO(3) irreps used in ET-SEED"?" Both SDP and ET-SEED achieve SE(3) equivariance by assigning SO(3)-steerable features to each point in the point cloud. While both methods use irreducible representations (irreps), SDP employs lightweight, order-wise linear layers—each order $m$ only connects to itself. In contrast, ET-SEED (based on the SE(3)-Transformer) utilizes heavier and more redundant fully connected linear layers that connect all orders $m$ across all types $l$. > "...comparisons regarding computational efficiency are currently missing." 
Empirically, although ET-SEED employs a two-stage diffusion process, its inference time is 60× slower than that of SDP (29.4 s vs. 0.44 s; see the table below). Moreover, SDP's inference time is on the same order of magnitude as that of Diffusion Policy (SDP is approximately 5× slower than DP).

Table R1. Practical statistics for SDP and SOTA baselines.

| | Diffusion Policy | EquiDiff | EquiBot | SDP | ET-SEED |
|----------------------------|------------------|----------|---------|------|---------|
| Inference Speed (Seconds) ↓ | 0.09 | 0.14 | 0.18 | 0.44 | 29.4 |
| Training Batch Size ↑ | 64 | 64 | 64 | 32 | 1 |

> "Table 1 indicates that as the degree of SE(3) initialization increases, SDP continues to exhibit significant performance degradation, .... Could you address this point explicitly?"

We agree that SE(3)-equivariant methods should perform consistently no matter how the task is rotated or translated in 3D, provided the transformation applies to the full environment. However, in Table 1, these tasks are not perfectly SE(3) transformed, since the gravity direction is not transformed, the camera view (occlusion) is not transformed, and the robot (kinematics) is not transformed. All of these factors can add complexity to the task (e.g., greater table tilt leads to increased object instability), so SDP continues to exhibit performance degradation. Despite the increasing complexity, we find that SDP outperforms all baselines at all levels of SE(3) initialization of the tilted-table tasks in Table 1.

> "Please clarify the fundamental differences ... in Spherical Fourier Neural Operators."

Both methods leverage the spherical Fourier (SF) transform to process signals on the sphere. However, our method is purely based on SF, while SFNO combines an SF neural network for global convolution with a pointwise MLP for local non-linearity.
Additionally, our proposed SDTU enables SO(3) and temporal equivariant convolution, and SFiLM provides equivariant spherical conditioning—capabilities not present in SFNO. > "Why not directly consider SE(3)-invariant representations?" Leveraging SE(3)-invariant representations is a valuable baseline suggestion, and we have included it as an ablation. Specifically, we implemented a baseline called DP3-cano in [Table R2](https://openreview.net/forum?id=U5nRMOs8Ed&noteId=IN1AESox8P), which achieves SE(3)-invariance by canonicalizing (i.e., normalizing or transforming) both the point cloud and the action into the gripper frame. However, experimental results show that DP3-cano significantly underperforms the equivariant method SDP.
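The correspondence noted in the rebuttal between Vector Neurons' 3D vector features and type-1 ($l=1$) spherical features can be checked numerically. The sketch below (our own illustrative toy code, not SDP's or VN's implementation) verifies that a bias-free channel-mixing linear layer commutes with an arbitrary rotation acting on the spatial axis, which is exactly the $l=1$ equivariance being described:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation():
    """Draw a random 3D rotation matrix via QR decomposition (a common trick)."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))       # fix the sign convention column-wise
    if np.linalg.det(q) < 0:       # ensure det = +1 (proper rotation)
        q[:, 0] *= -1
    return q

# Type-1 (vector) features as in Vector Neurons: C channels of 3D vectors.
V = rng.normal(size=(8, 3))
W = rng.normal(size=(5, 8))        # channel-mixing linear layer, no bias

R = random_rotation()
# The rotation acts on the spatial axis; bias-free channel mixing commutes
# with it, so rotating then mixing equals mixing then rotating.
assert np.allclose((W @ V) @ R.T, W @ (V @ R.T))
```

A bias term would break this: adding a fixed vector does not commute with rotation, which is why equivariant layers restrict biases to type-0 (scalar) channels.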
Learning Curves of Stochastic Gradient Descent in Kernel Regression
Accept (poster)
Summary: This paper analyzes the excess risk and the minimax lower bound of single-pass stochastic gradient descent (SGD) in kernel regression under combinations of the following settings: 1) the model is well-specified or misspecified (the source condition constant $s\geq 1$ or $s<1$, respectively); 2) the sample size is in the proportional or large-dimension regime ($n\sim d^\gamma$ or $n\gg d^\gamma$ for some constant $\gamma>0$); 3) the online (single-batch) SGD is run with an exponentially decaying step size schedule, or with a constant step size and averaged iterates. There are several implications of the above results; for example, SGD achieves the minimax optimal rate in almost all cases, in contrast to the saturation effect of kernel ridge regression (KRR), and this is also the first theoretical proof showing that an exponentially decaying step size schedule is better than the iterate averaging method in the kernel setting. Claims And Evidence: Yes, the theoretical claims are proved in the paper and validation experiments are shown. Methods And Evaluation Criteria: Yes. Theoretical Claims: I checked the proofs in the main text as well as the proofs in the appendix up to Section B. I might have time to proofread the rest, but the proofs seem correct so far. The technique is a standard bias-variance decomposition with careful modifications for the settings of interest. Experimental Designs Or Analyses: Yes, they are sound and valid. Supplementary Material: There is no supplementary material attached. Relation To Broader Scientific Literature: The contribution is significant to kernel methods, online learning, and also to the broader scope of neural network training under the neural tangent kernel (NTK) framework. As pointed out in the paper, the saturation effect and the learning curve for KRR / kernel gradient flow are known, but such results in the online SGD setting are novel and bring important insights to the field.
Essential References Not Discussed: Most relevant related works are cited. Other Strengths And Weaknesses: The paper is written in a precise and concise way; the concepts and intuitions are well elaborated, so the paper is easy to follow. Other Comments Or Suggestions: Maybe a summarising table of the results for each combination of settings could help the presentation. This would offer a more macroscopic view of the contributions and findings of the paper. Questions For Authors: I am interested in potential extensions of the current results presented in this paper: 1. What would be the technical difficulties in extending the results to non-dot-product kernels? 2. Could one extend the analysis of online SGD training to the other spectral algorithms mentioned in [1]? --- [1] Lu, Weihao, Yicheng Li, and Qian Lin. "On the Saturation Effects of Spectral Algorithms in Large Dimensions." Advances in Neural Information Processing Systems 37 (2025): 7011-7059. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to extend our sincere appreciation to you for the thorough review and valuable feedback. We are grateful that you not only accurately summarized our contributions but also expressed strong appreciation. We would like to address your suggestion and the question you raised regarding possible extensions of our research. *** # Author's Response to Suggestion: >*1. "Maybe a summarising table about the results on each combination of settings could help the presentation."* This is a meaningful suggestion. We will include tables both summarizing our results and comparing them with prior work to facilitate a better understanding of this paper. Due to space constraints, this content will be added to the Appendix. *** # Author's Response to Questions ## Author's Response to Question 1: >*"1. What would be the technical difficulties to extend the result to non-dot-product kernels?"* Extending our proof to general kernels only requires verifying two conditions: (1) capacity condition or the eigenvalue decay rate (EDR) of the target kernel, and (2) a distributional condition controlling the residual noise $f _t^{b(1)}$ and $f _t^{v(1)}$, such as the kernel is bounded, or $\mathbb{E}[K(\mathbf{x},\mathbf{x} )K _{\mathbf{x}}\otimes K _{\mathbf{x}}]\preceq \kappa \mathbb{E}[K _{\mathbf{x}}\otimes K _{\mathbf{x}}]$, or $\mathbb{E} \left \langle f,K _{\mathbf{x} } \right \rangle ^4\le \kappa \left \Vert f \right \Vert _{\mathcal{H} } ^2$. Once the EDR of the target kernel is known and the kernel satisfies the required distributional condition, our analysis can be applied by analyzing the iterative behavior of the residual noise under these corresponding conditions. ## Author's Response to Question 2: >*2. "Could one extend the analysis of the online SGD training to other spectral algorithms mentioned in [1]?"* This is a deep and interesting question that points to a promising direction for future work. 
Spectral algorithms construct the estimator $\hat{f} _{\lambda}=\phi _{\lambda}(T _X)g _Z$ using the sample basis function $g _Z=\frac{1}{n}\sum _{i=1}^{n}y _iK(x _i,\cdot)$, the sample covariance operator $T _X=\frac{1}{n}\sum _{i=1}^{n}K _{x _i}K _{x _i}^{\ast}$, and a filter function $\phi _{\lambda}$. To obtain a stochastic version of the corresponding spectral algorithm, one may first reformulate the spectral estimator as the solution to a variational problem on a functional manifold. Then, by selecting an appropriate Bregman divergence, the stochastic approximation to the spectral method can be obtained by stochastic mirror descent. This perspective enables the analysis of the implicit regularization induced by stochastic mirror descent on the functional manifold. Furthermore, by examining how the data structure constrains the optimization trajectory, one can derive generalization bounds. **References:** [1] Lu et al. "On the Saturation Effects of Spectral Algorithms in Large Dimensions." Advances in Neural Information Processing Systems 37 (2025): 7011-7059. --- Rebuttal Comment 1.1: Comment: Thank you for your answer. I would maintain my current score and wish the authors the best on the remaining review period.
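As a concrete footnote to the spectral-estimator construction $\hat{f}_{\lambda}=\phi_{\lambda}(T_X)g_Z$ discussed in the rebuttal above, the sketch below (our own illustration, assuming the Tikhonov filter $\phi_{\lambda}(\sigma)=1/(\sigma+\lambda)$ and a Gaussian kernel) instantiates the filter in the dual and recovers the closed-form kernel ridge regression solution:

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam = 50, 0.1

X = rng.normal(size=(n, 3))
y = rng.normal(size=n)
# Gaussian kernel matrix (any bounded kernel works for this construction)
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq)

# Spectral estimator phi_lam(T_X) g_Z in the dual:
# alpha = (1/n) phi_lam(K/n) y, with the Tikhonov filter phi(s) = 1/(s + lam).
evals, U = np.linalg.eigh(K / n)
phi = 1.0 / (evals + lam)
alpha_spec = (U @ (phi * (U.T @ y))) / n

# Closed-form kernel ridge regression for comparison: (K + n*lam*I)^{-1} y
alpha_krr = np.linalg.solve(K + n * lam * np.eye(n), y)
assert np.allclose(alpha_spec, alpha_krr)
```

Other members of the spectral family (gradient flow, spectral cut-off) correspond to different choices of `phi` applied to the same eigendecomposition.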
Summary: This paper studies the generalization performance of kernel regression trained by online / single-pass SGD and compares it with offline methods such as ridge regression. Specifically, the analysis is conducted under a standard source condition assumption on the target (including the misspecified case) and focuses on dot-product kernels with uniform data on the sphere. Under these assumptions, the authors derive excess risk decay rates in the input dimension $d$ under two scalings of the sample size $n$: polynomial, $n=\Theta(d^{\gamma})$, and asymptotic, $n\gg d$. The main results are: - Optimality of SGD in well-specified regimes: they show that using exponentially decaying step sizes allows SGD to achieve minimax-optimal rates for smooth targets $s\geq 1$, avoiding the so-called saturation effect that plagues certain offline algorithms. - Efficiency in misspecified problems: for problems where the target is less smooth, $s\in(0,1)$, a constant step size with averaged iterates is proven to match the minimax lower bounds in high dimensions. They also provide a comparison with offline KRR under both scalings, showing that SGD can outperform KRR, especially when the target is sufficiently smooth. Claims And Evidence: The claims in this paper are rigorous mathematical results. Numerical simulations are also provided as an illustration of the mathematical results. Methods And Evaluation Criteria: N/A. Theoretical Claims: I skimmed through the proofs of the results and did not spot any issue. The proofs are standard, and consist in bounding the bias and variance terms of the predictor's decomposition by leveraging previous results in the literature. Experimental Designs Or Analyses: N/A. Supplementary Material: I skimmed through the full SM. Relation To Broader Scientific Literature: - The study of excess risk rates for kernel regression is a classical topic in machine learning theory, with an extensive literature.
The goal of this work is to compare the rates obtained by training a kernel method using one-pass SGD with two benchmarks: the minimax rates and the KRR rates with a given regularisation. These have been derived in previous works both under generic source and capacity conditions, e.g. (Caponnetto & De Vito, 2007; Cui et al. 2021; Lu et al. 2024), but also specifically for dot-product kernels on the sphere (Bordelon et al. 2020; Bietti & Bach, 2021; Misiakiewicz, 2022). - The proofs also leverage previous results on the analysis of SGD for kernel methods (Dieuleveut & Bach 2016). Essential References Not Discussed: 1. While the authors properly acknowledge the related literature that is closest to their work on the technical side, there are major omissions concerning related results which are directly relevant to this discussion, both when it comes to the learning curves of dot-product kernels on spherical data in the regime $n=\Theta(d^{\gamma})$ (Bordelon et al. 2020) and to the learning curves of KRR under source and capacity conditions in the $n\gg d$ regime (Cui et al. 2021). Indeed, these contain results which are discussed here, but that precede other references used by the authors. For example, Cui et al. (2021) provided a full picture of the asymptotic rates of KRR as a function of the noise level and regularization, including the suboptimal rate of KRR when $s>2$ at optimal regularisation, quoted from Li et al. (2023). 2. Although you cite (Pillaud-Vivien et al., 2018) at the end of Section 5, it would be good to also mention it in the related works when discussing SGD being suboptimal. - [Bordelon et al. 2020] Bordelon, Blake, Abdulkadir Canatar, and Cengiz Pehlevan. "Spectrum dependent learning curves in kernel regression and wide neural networks." In International Conference on Machine Learning, pp. 1024-1034. PMLR, 2020. - [Cui et al. 2021] Cui, H., Loureiro, B., Krzakala, F., & Zdeborová, L. (2021).
Generalization error rates in kernel regression: The crossover from the noiseless to noisy regime. Advances in Neural Information Processing Systems, 34, 10131-10143. Other Strengths And Weaknesses: The paper is well-written and easy to follow. The results are interesting. The major weakness is that the discussion is specialised to dot-product kernels on the sphere, while I believe that part of the discussion should generalize to more general settings. I also consider the omission of relevant related literature an important weakness. Other Comments Or Suggestions: The absence of equation numbers makes it hard to refer to them. But here is a list of possible typos: - Related work, L069 right column: I think there is a $-$ sign missing in $n^{\frac{s\alpha}{s\alpha+1}}$. - Section 3, L127 right column: the adjoint should be $T^{\star}:\mathcal{H}\to L^{2}$. - 1st equation of Section 6.2: $f^{b}=f_{\star}$ and $\tilde{f}^{b}=f_{\star}$. - 2nd equation of Section 5.2: double-check the rate $\frac{2d+2}{3d+1}$; I think it might be $\frac{2d+2}{3d+2}$ instead. Questions For Authors: - What are the challenges of stating the results of Section 5.2 for general source and capacity conditions of the kernel? What exactly is the role played by the spherical data here? - I think it would be a nice addition to the manuscript to have either a plot or a schematic drawing summarizing the behaviour of the error curve of SGD in the different polynomial regimes $n=\Theta(d^{\gamma})$ at increasing $\gamma$. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive comments and recognition of our contributions. Below, we address your concerns and questions in detail. *** # Author's Response to Essential References: >*1. "Related References Bordelon et al. (2020) and Cui et al. (2021)":* Thank you for sharing these important and relevant works in this area. They proposed the description of learning curve's exact order and have inspired subsequent efforts toward the characterization of the optimal learning curve. We have now included Bordelon et al. (2020) and Cui et al. (2021) in the revised version of the paper. We discuss the contributions of these works and their connections to our research below. Bordelon et al. (2020) studied the learning curves of dot-product kernels on spherical data in the high-dimensional regime $n\asymp d^{\gamma}$. They observed that errors in spectral modes associated with large eigenvalues decrease more rapidly as the sample size $n$ increases, while the errors in spectral modes with small eigenvalues remain nearly constant until $n$ reaches the degeneracy of the corresponding mode. Cui et al. (2021) provided a comprehensive characterization of the asymptotic rates of KRR with respect to the capacity and source conditions, the regularization parameter, and the noise level in the Gaussian design setting. Their analysis implies that KRR is suboptimal when $s>2$. Li et al. (2023) further proved this conclusion for general continuous positive-definite kernels. >*2. "Mentioning Pillaud-Vivien et al. (2018) in the Related Works":* Thank you for this suggestion. We have now mentioned it in the related works when discussing the suboptimality of SGD, noting that the introduction of multi-pass strategies can lead to a broader region of optimality. *** # Author's Response to Suggestions: Thank you for pointing out the typos, which definitely helps us improve the paper in a concrete way. 
We have made the following revisions: - Revised L069 right column as $n^{-\frac{s\alpha}{s\alpha+1}}$. - Revised L127 right column as $T^{\ast}: \mathcal{H}\rightarrow L^2 $. - Revised 1st equation of Section 6.2 as $f^{b} _0=f _{\rho}^{\ast}$ ,$\tilde{f} _{0}^{b}=f _{\rho}^{\ast}$. - Revised 2nd equation of Section 5.2 as $n^{-\frac{2d+2}{3d+2}}$. *** # Author's Response to Questions ## Author's Response to Question 1: >*1. "What are the challenges of stating the results of Section 5.2. for general source and capacity conditions of the kernel?"* Thank you for raising the question of whether our asymptotic results can be extended to general source and capacity conditions of the kernel. First, we note that the excess risk bound derived in Appendix B relies only on the assumption that the kernel is bounded, and it already provides a general formulation. Once the target kernel satisfies certain capacity conditions, along with a distributional condition that controls the residual noise of $f_t^{b(1)}$ and $f_t^{v(1)}$, such as the boundedness of the kernel, or conditions like $\mathbb{E}[K(\mathbf{x},\mathbf{x} )K_{\mathbf{x}}\otimes K_{\mathbf{x}}]\preceq \kappa \mathbb{E}[K_{\mathbf{x}}\otimes K_{\mathbf{x}}]$, or $\mathbb{E} \left \langle f,K_{\mathbf{x} } \right \rangle ^4\le \kappa \left \Vert f \right \Vert _{\mathcal{H} } ^2$, our analysis can be directly applied to obtain corresponding convergence rates. >*2. "What is exactly the role played by the spherical data here?"* We did not present the general conclusion separately, but continued using spherical data in order to maintain focus on the central theme of our paper. The main goal of this work is to understand the generalization curve of SGD under the NTK. Our analysis in the asymptotic setting aims to answer the following question: Can the SGD algorithm consistently achieve optimality across all scalings of $n$, particularly when liberated from the $n\asymp d^{\gamma}$ constraints? 
For this reason, we adopted the assumptions of the $n\asymp d^{\gamma}$ setting throughout the paper. We acknowledge that the assumption of a uniform input distribution on spheres is restrictive in high-dimensional scenarios. Our primary motivations for adopting this setting are that the harmonic analysis on the sphere is clearer and more concise, the analysis of Mercer’s decomposition for general kernels in high-dimensional settings is challenging, and few results are available. For these reasons, most existing analyses in the high-dimensional setting also utilize spherical data. Your suggestion to extend our analysis to more general kernels is indeed a valuable and challenging direction, and we will pursue this in our future research. ## Author's Response to Question 2: >*"Addition of a plot summarizing the behavior of the error curve of SGD."* This is a valuable suggestion. We have now included in the appendix the curves of the convergence rate on the excess risk of SGD as $\gamma$ increases, under different values of $s$. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. My questions were clarified, and I will keep my score.
Summary: This paper proves 4 results about the excess risks of least squares RKHS regression optimised by stochastic gradient descent. The settings of the four results are divided along two axes: firstly, the high-dimensional regime, in which the input dimension grows with the sample size, versus the fixed dimension regime, where only the sample size grows, and secondly, the well-specified and misspecified settings. Comparing with lower bounds in existing works, the optimality of their results is investigated in each case. Claims And Evidence: This is a theoretical paper, and hence the presentation of the mathematics and the correctness of the proofs are needed as evidence for the theoretical claims. I do not have big doubts that the results are correct, but the proofs are presented in a way that is rather hard to follow and I have doubts on the correctness of some parts of it. I gave up reading the proofs in the Appendix some way in, and I unfortunately can only think that there will be many more places where corrections or clarifications have to be made. See "Questions" section for details. Methods And Evaluation Criteria: This is a theoretical paper. Theoretical Claims: See "Questions" section for details. Experimental Designs Or Analyses: This is a theoretical paper. Supplementary Material: Some of the proofs. Relation To Broader Scientific Literature: The results would be interesting in the community of learning theory in RKHSs, the attention on which has been recently revived with connections to neural networks via NTKs and the study of benign overfitting. The study of properties of SGDs is much less explored than the closed form solutions, and if it can be shown that SGDs have provable advantages over closed form solutions, as this paper claims, that would be interesting. 
Essential References Not Discussed: There are some related papers that the authors have not discussed in the KRR literature, for example, [Blanchard and Mücke, 2018, Optimal Rates For Regularization Of Statistical Inverse Learning Problems], [Li et al., 2023, Optimal Rates for Regularized Conditional Mean Embedding Learning]. Other Strengths And Weaknesses: I think the writing could be very much improved - see Section "Other Comments Or Suggestions". The English is not such a big problem, as it cannot be expected that non-native speakers write in perfect English, but there are some inconsistencies with mathematical notations too. All the results are given in expectation, and I think in general high-probability bounds are stronger, and personally preferred. This is a minor point. Other Comments Or Suggestions: 10R: "which states sufficiently wide neural network" -> "which states that sufficiently wide neural networks" 35R: "via an online fashion" -> "in an online fashion" 43R: "In specific" -> "Specifically" 51R: "resulting entirely different learning dynamics" -> "resulting in entirely different learning dynamics" 87L: "Why SGD can overcome" -> "Why can SGD overcome" 77R, 104R: "early stop" -> "early stopping" 138L: For consistency, $\rho_X$ should be $\rho_\mathcal{X}$. 146L: "denote ... be the operator" -> "denote by ... the operator" 114R: "By the Mercer's theorem" -> "By Mercer's theorem" 118R, 125R, 142R, 232L: For consistency, $L_2$ should be $L^2$. 195R: Seems a space is missing between the definition of $m$ and the next sentence? 287L: "optimaly" -> "optimally" 284R: space needed after "s," 559: The subscript for $\langle\mathbf{x},\mathbf{x}\rangle$ should not be $L^2$, as $\mathbf{x}$ are not functions. 577: In (9), the second inequality is an equality. 636, (16): $f_t$ is only introduced later in (18), so it doesn't look good to use it here. 
If $f_t$ is replaced by general $f$ here, then the outer expectation should not be there, since $f$ would not be random and there is no other source of randomness. 686: $f^{(1)}_t$ should be $f^{b(1)}_t$. 768, 770: The operator does not live in $\mathcal{H}$, so you cannot take the $\mathcal{H}$-norm of the operators, you should take the operator norm. Questions For Authors: 164R: I thought the multiplicities were $N(d,k)=\frac{(2h+d-2)(h+d-3)!}{h!(d-2)!}$? See [Müller, 1998, Analysis of Spherical Symmetries in Euclidean Spaces, p.28, Exercise 6]. Why are your multiplicities different? The authors repeatedly refer to SGD as an "online algorithm", but this is very different to the usual usage of the word "online learning", whereby the data is fed to the algorithm sequentially, and one is concerned with the regret. I have personally never seen the term "online algorithm" being used for variants of gradient descent. If the authors and the other reviewers agree, I would strongly suggest to remove the word "online" to avoid confusion. The authors also repeatedly use the term "offline" for learning with explicit regularisation, or anything that is not based on variants of gradient descent, but I would consider their SGD set-up also as "offline", as we only have one static dataset. 567, (7): Are you sure that $\sum^\infty_{i=1}\langle\phi_i,T\phi_i\rangle_{L^2}=\sum^\infty_{i=1}\mathbb{E}[\langle\phi_i,K_\mathbf{x}\rangle^2_{L^2}]$? I get that $\sum^\infty_{i=1}\langle\phi_i,T\phi_i\rangle_{L^2}=\sum^\infty_{i=1}\mathbb{E}[K(\mathbf{x},\mathbf{x}')\phi_i(\mathbf{x})\phi_i(\mathbf{x}')]$ and $\sum^\infty_{i=1}\mathbb{E}[\langle\phi_i,K_\mathbf{x}\rangle^2_{L^2}]=\sum^\infty_{i=1}\mathbb{E}[K(\mathbf{x},\mathbf{x}')\phi_i(\mathbf{x}')K(\mathbf{x},\mathbf{x}'')\phi_i(\mathbf{x}'')]$, which are not the same. 
However, from $\sum^\infty_{i=1}\langle\phi_i,T\phi_i\rangle_{L^2}=\sum^\infty_{i=1}\mathbb{E}[K(\mathbf{x},\mathbf{x}')\phi_i(\mathbf{x})\phi_i(\mathbf{x}')]$, you can proceed with the reproducing property and the Cauchy-Schwarz inequality to get the desired bound. 589, (10): The second equality should be an inequality arising from the Cauchy-Schwarz inequality. They are not the same. 667, (23): I'm sorry if I'm missing something, but how does the positive definiteness of $\boldsymbol{\Sigma}$ here imply (23)? 671, (24): On the first line, shouldn't it be $\mathbb{E}[\langle f_i-f^*_\rho,\boldsymbol{\Sigma}(f_j-f^*_\rho)]\rangle_\mathcal{H}=\mathbb{E}[\langle f^v_i+f^b_i,\boldsymbol{\Sigma}(f^b_j+f^v_j)\rangle_\mathcal{H}]$ instead of $\mathbb{E}[\langle f_i-f^*_\rho,\boldsymbol{\Sigma}(f_j-f^*_\rho)]\rangle_\mathcal{H}=\mathbb{E}[\langle f^v_i+f^b_j,\boldsymbol{\Sigma}(f^b_i+f^v_j)\rangle_\mathcal{H}]$? How is it that $i$ and $j$ switch? 712, 733, 744, 748, 753, etc.: In the first term on the right hand side on the first line, what is random inside the expectation here? It seems to me that nothing is random, so you shouldn't be writing expectations right? 719: I don't understand (31). On the left-hand side we have subscript $t$, but on the right-hand side we have $i$ and $j$. Where do $i$ and $j$ come from? Does this bound hold for all $t$, $i$ and $j$? The preceding proofs suggest that all $i$ and $j$ should be replaced by $t$, see comment above for 671. 761: How do you obtain (37)? I guess you use $\eta_t=\frac{\eta_0}{2^{l-1}}$ and $m=\lceil\frac{n}{\log_2(n)}\rceil$ somehow? I'm sorry that I couldn't immediately see how this leads to (37). Is this obvious? 424: Where was $k^*$ defined? This seems to be the only place in the main body where $k^*$ is used, and it is not introduced. It is used quite heavily in the appendix but it is not introduced there either. 790, (42): Why is $\mathbb{E}[\Xi_i]=0$? 
On 630, $f^*_\rho$ was defined to be $\text{argmin}_{f\in[\mathcal{H}]^s}\mathcal{E}(f)$, not $\mathbb{E}[y\mid\cdot]$, so I don't think you can assume $\mathbb{E}[\Xi_i]=0$. Am I missing something? This has repercussions, e.g. in (43), where the cross-terms should not disappear. 813: Again, what is $k^*$? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you very much for taking the time to carefully review our paper and for providing detailed and highly valuable feedback. Below, we respond to your questions and concerns one by one. *** # Response to Questions: >*1. 164R: $N(d,k)$.* The confusion stems from a typo in Line 142L, where we mistakenly wrote $\mathbb{R}^{d}$ instead of $\mathbb{R}^{d+1}$. In our paper, $\mathbb{S}^d$ actually denotes the unit sphere in $\mathbb{R}^{d+1}$. We have corrected this and confirm that the difference only introduces a constant factor, which does not affect the validity of our rates. >*2. Online algorithm.* We refer to SGD as an "online algorithm" to emphasize that it can process data in an i.i.d. streaming fashion, in order to distinguish it from an "offline algorithm", which operates on a fixed dataset. We have now reduced the use of "online", retaining it only where we contrast it with "offline". We have also added a clarification of "offline" to ensure the meanings of both terms are clear. >*3. 567, (7): $\sum _{i=1}^{\infty}\langle\phi _i,T\phi _i\rangle _{L^2} =\sum _{i=1}^{\infty} \mathbb{E}\left[\left\langle\phi _i,K _{\mathbf{x}}\right\rangle _{L^2}^2\right]$?* This is a typo, but the conclusion $\mathrm{tr}(T)=\mathbb{E}[K(X,X)]$ still holds. A revised proof is provided below: \begin{aligned} \sum _{i=1}^{\infty}\langle\phi _i,T \phi _i\rangle _{L^2}=\sum _{i=1}^{\infty} \mathbb{E} \left( \lambda _i^{1/2}\phi _i(X)\right)^2\overset{(a)}{=}\sum _{i=1}^{\infty} \mathbb{E} \left\langle K _X, \lambda _i^{1/2} \phi _i \right\rangle _{\mathcal{H}}^2\overset{(b)}{=}\mathbb{E}\left\langle K _X, K _X \right\rangle _{\mathcal{H}}, \end{aligned} where $(a)$ follows from the reproducing property, and $(b)$ follows from Parseval's identity. >*4. 589, (10): Typo in equation.* We have corrected this. >*5. 667, (23): Why does positive definiteness of $\boldsymbol{\Sigma}$ imply (23)?* >*6. 671, (24): $i$ and $j$?* >*7. 719, (31): $i$ and $j$?* We address Questions 5, 6 and 7 together. 
Thank you for pointing out that we mistakenly used subscripts $i$ and $j$ instead of the iteration index $t$. All $i$ and $j$ should be replaced by $t$ in Lemma B.1, Lemma B.2 and (76). We have corrected this, and it does not affect the validity, as only the decomposition at the $t$-th iteration is used. The key steps are provided below: By the positive definiteness of $\boldsymbol{\Sigma}$, $\left\langle f _{t}^{b} - f _{t}^{v}, \boldsymbol{\Sigma}\left(f _{t}^{b} - f _t^v\right)\right\rangle _\mathcal{H}\ge0$. This implies $\left\langle f _t^b,\boldsymbol{\Sigma}f _t^v\right\rangle _\mathcal{H}+\left\langle f _t^v , \boldsymbol{\Sigma}f _t^b\right\rangle _\mathcal{H}\le \left\langle f _t^b , \boldsymbol{\Sigma}f _t^b\right\rangle _\mathcal{H}+\left\langle f _t^v , \boldsymbol{\Sigma}f _t^v\right\rangle _\mathcal{H}$. Based on this, we can obtain the desired result in Lemma B.1. The proofs of Lemma B.2 and (76) follow the same reasoning. >*8. 712, 733, 744, 748, 753, etc.: $\mathbb{E}$?* We have removed the expectation where no randomness is involved. >*9. 761: (37)?* Below is an explanation of (37). Due to $\eta _0 \le \frac{1}{\lambda _1}$ and $\eta _i\le \eta _0$, it follows that $\mathbf{0} \preceq \boldsymbol{I} -\eta _i\boldsymbol{\Sigma}\preceq \boldsymbol{I}$. For all $i$, $\boldsymbol{I} - \eta _i\boldsymbol{\Sigma}$ and $\boldsymbol{\Sigma}$ are positive semidefinite, self-adjoint and mutually commuting operators. Denote $\mathcal{M} _n= \prod _{i=1}^{n} (\mathbf{I} - \eta _i \boldsymbol{\Sigma})$, then $\mathcal{M} _n \boldsymbol{\Sigma} \mathcal{M} _n\preceq\cdots\preceq\mathcal{M} _m \boldsymbol{\Sigma}\mathcal{M} _m.$ >*10. 424, 813: What is $k^{\ast}$?* Thank you for pointing out the inconsistent use of $k^{\ast}$. In Line 424, $k ^{\ast} = \max \lbrace k\in\mathbb{N} _+ :\eta _0\lambda _k\ge \frac{1}{n}\rbrace $, which serves as an effective dimension (see Lemma D.4). 
In Appendix B, by contrast, $k^{\ast}$ can be any positive integer, in order to present a more general result. The specific value is clarified in Appendix C and D. We have revised the paper to clearly indicate what $k^{\ast}$ refers to in each case. >*11. 790, (42): $\mathbb{E}[ \Xi _{t} ]=0$?* We apologize for the confusion caused by the improper definition of $f _{\rho}^{\ast}$ in Line 630. Although $f _{\rho}^*$ in line 630 is intended to refer to $\mathbb{E} [ y| \mathbf{x} ]$ under the source condition with given $s$, the expression used was inappropriate. We have revised the paper to ensure consistent use of $f _{\rho}^*=\mathbb{E} [ y| \mathbf{x} ]$ throughout. Under this definition, $\mathbb E[\Xi _t ]=0$ is a mild assumption. *** # Response to Suggestions: We appreciate your constructive writing suggestions. All points have been addressed, and we have further refined the manuscript. *** # Response to Related Works: Thank you for sharing the references. They have been included in the revised version. Additionally, we have expanded our discussion of the KRR literature. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your rebuttal. Although the specific points I raised were addressed, I think the manuscript should go through a thorough revision, and I maintain my evaluation of the paper. It seems the other reviewers are much more positive though, and I would not mind at all if it was accepted either! Best, reviewer
Summary: This paper examines the problem of using SGD to train a kernel regressor. In particular, the paper considers the dot product kernel with input data that is uniformly distributed on the sphere. Assuming targets $y = f_*(x) + \varepsilon$ for $f_*$ with certain smoothness properties, the paper shows that different forms of SGD can achieve the minimax optimal rate. The paper empirically verifies this in a couple of situations. Claims And Evidence: The paper is primarily theoretical, with some experiments reinforcing their theoretical claims. The theoretical statements are detailed and rigorously written and have corresponding proofs. The results from the paper are convincing. The importance of the upper bounds is also well justified. Methods And Evaluation Criteria: The experiments are appropriate to support the theory. Theoretical Claims: I read the proof for property one and the beginning of the proof for Theorem 5.1. They look correct to me. Experimental Designs Or Analyses: N/A Supplementary Material: I read the proof for property one and the beginning of the proof for Theorem 5.1. They look correct to me. Relation To Broader Scientific Literature: The paper mostly situated itself very well with respect to broader literature. They provide comprehensive cases against which to compare their results. Specifically, the lower bounds from prior works, as well as the fact that KRR does not meet these lower bounds. This helps strengthen their results, showing that the method they consider achieves the optimal rates. Essential References Not Discussed: In essence, the paper does not consider SGD run until convergence, but an early stopped version of SGD. It would be beneficial for the paper to discuss related works on early stopped SGD (or even GD). I am more familiar with the GD literature so I list some here [A,B,C,D]. I think there is also an important paper on dot product kernel regression that is missing [E] [A] Madhu S. Advani, Andrew M. 
Saxe, and Haim Sompolinsky. High-dimensional dynamics of generalization error in neural networks. Neural Networks, 132:428–446, 2020. [B] Alnur Ali, J. Zico Kolter, and Ryan J. Tibshirani. A Continuous-Time View of Early Stopping for Least Squares Regression. In International Conference on Artificial Intelligence and Statistics, pages 1370–1378, 2019. [C] Garvesh Raskutti, Martin J Wainwright, and Bin Yu. Early Stopping and Non-parametric Regression: An Optimal Data-dependent Stopping Rule. Journal of Machine Learning Research, 15(11):335–366, 2014. [D] Rishi Sonthalia, Jackie Loh, Elizaveta Rebrova. On regularization via early stopping for least squares regression. arXiv preprint arXiv:2406.04425. 2024. [E] Xiao L, Hu H, Misiakiewicz T, Lu YM, Pennington J. Precise learning curves and higher-order scaling limits for dot product kernel regression. arXiv preprint arXiv:2205.14846. 2022 May 30:3. Other Strengths And Weaknesses: **Strengths** The results from the paper are quite strong. They show that optimal minimax rates can be achieved for a variety of well specified and mis-specified problems. This is quite a strong result and I think is sufficient for the paper to be published. Other Comments Or Suggestions: **Typos** The embedding operator Line 126 (Right) should map from $\mathcal{H} \to L^2$. In Assumption 4.3 it should be clarified for a given $s$ and not for all $s$. In Theorem 5.1 $f_\rho$ should be explicitly defined. It would also be helpful to write 4.7 and 5.1 in the same language. Specifically, so that the conditions on $p$ and $\gamma$ are easier to compare. Currently they look similar, but I think there is a difference of $\pm 1$. Questions For Authors: Do you know what happens if instead of stopping early the algorithms are run until convergence? Do they converge to the KRR solutions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your careful review and positive feedback on our work. Your comments have provided concrete guidance for improving the paper and have been a great source of motivation for us. Below, we address your questions and concerns in detail. *** # Author's Response to Essential References: Thank you for sharing these important related works with us, which have helped us to improve the completeness of our discussion. > *1. "Important paper on dot product kernel regression":* **Our Response:** [E] is indeed a key reference in high-dimensional dot-product kernel regression. We have now added it in the revision of the paper. It studies the setting where $n\asymp d^{\gamma}$, provides a precise closed form formula for learning curves of dot-product KRR, and identifies the multiple descent phenomenon of KRR. The techniques presented in this paper are also highly beneficial for analyzing optimal convergence rates in high-dimensional scenarios. >*2. "Related works on early stopped GD":* **Our Response:** Thank you for providing references on early stopped GD, which enables readers to gain a more comprehensive understanding of the related techniques. We have incorporated a more detailed comparison with early stopped GD in the revision of the paper. In the following, we discuss the contributions of the papers you shared and their connections to our current work. Specifically, references [A–D] examine the generalization performance of early stopped GD in kernel regression, collectively highlighting the close connection between early stopping and ridge regularization. They analyze early stopped GD under scaling conditions such as $n\asymp m$ or $n\asymp d$, where $n$ denotes the sample size, $d$ the input dimension, and $m$ the number of model parameters, emphasizing optimal stopping rules and generalization performance. 
Our paper complements these studies by considering single pass SGD within the NTK framework (infinite width, $m=\infty$), and exploring the optimal learning curves under various ratios between sample size $n$ and input dimension $d$. >*3. "Related works on early stopped SGD":* **Our Response:** The single pass SGD we consider can be viewed as a form of early stopping, where each observation is used only once, and overfitting is avoided by making only a single pass through the data. Following your suggestion, we will expand the discussion with related works. *** # Author's Response to Comments or Suggestions: Thank you for your suggestions and for pointing out the typos. We have made the following revisions accordingly. > *1. "The embedding operator in Line 126 (Right)":* **Our Response:** We have made the revision $T^{\ast}: \mathcal{H}\rightarrow L^2$. > *2. "Clarification of $s$ in Assumption 4.3":* **Our Response:** We have now clarified in Assumption 4.3 that the source condition is specified for a given $s$. > *3. "Explicit Definition of $f_{\rho}^{\ast}$ in Theorem 5.1":* **Our Response:** We now consistently use the notation $f_{\rho}^{\ast}\left ( \mathbf{x} \right )=\mathbb{E}_{(\mathbf{x},y)\sim \rho} \left [ y| \mathbf{x}\right ]$. In addition, we have clarified the definition of the joint distribution $\rho$ of $(\mathbf{x},y)$ in theorems. > *4. "Write 4.7 and 5.1 in the same language":* **Our Response:** Although the rates in Theorems 4.7 and 5.1 match for given $s$ and $\gamma$, we agree that the use of $p$ could cause confusion at the boundary cases. We have revised the statements accordingly to ensure consistent language throughout. *** # Author's Response to Question 1: >*1. "Do you know what happens if instead of stopping early the algorithms are run until convergence? 
Do they converge to the KRR solutions?"* **Our Response:** We think you may be referring to a scenario different from the single pass SGD we consider: specifically, when a fixed dataset is given and the algorithm performs infinitely many iterations over this fixed sample set, the resulting function tends to $f_{\infty }(\mathbf{x} )=K(\mathbf{x},\mathbf{X})K(\mathbf{X},\mathbf{X})^{-1}\mathbf{y}$, which corresponds to the kernel interpolation solution. If we consider single-pass SGD with an infinite stream of i.i.d. samples, the procedure falls under the asymptotic setting, and the excess risk of the solution will converge at the asymptotic convergence rate. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. I maintain my view that this paper has important results and should be accepted.
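The limiting behaviour discussed in this exchange, where gradient descent on a fixed dataset converges to the kernel interpolation solution $f_{\infty }(\mathbf{x} )=K(\mathbf{x},\mathbf{X})K(\mathbf{X},\mathbf{X})^{-1}\mathbf{y}$, can be checked numerically. Below is a minimal sketch; the RBF kernel, the toy data, the step size, and the iteration count are illustrative choices, not taken from the paper:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Gaussian (RBF) kernel matrix between 1-d input arrays a and b
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(X)
K = rbf(X, X)

# Functional gradient descent on the squared loss with f = sum_i alpha_i K(., x_i):
# the coefficient update is alpha <- alpha - eta * (K @ alpha - y).
alpha = np.zeros_like(y)
eta = 0.1
for _ in range(50_000):
    alpha -= eta * (K @ alpha - y)

# Run to (near) convergence, the iterate matches the kernel interpolant K^{-1} y.
alpha_interp = np.linalg.solve(K, y)
print(np.max(np.abs(alpha - alpha_interp)))  # close to zero
```

Running finitely many iterations, by contrast, acts as the implicit regularisation that the early-stopping discussion in this thread is about.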
Return of the Latent Space COWBOYS: Re-thinking the use of VAEs for Bayesian Optimisation of Structured Spaces
Accept (spotlight poster)
Summary: In the proposed manuscript, the authors present a novel approach for Bayesian optimization within the latent space of a VAE applied to structured domains like molecular design. They aim to address multiple drawbacks of conventional methods that often result in suboptimal performance when the latent space does not align with the specific task. To address these issues, the authors introduce a decoupled framework where the generative model and a GP surrogate are trained independently and later integrated through a principled Bayesian update rule. This approach allows the VAE to specialize in generating candidate structures while the GP excels in predictive modeling, thereby improving the identification of promising candidates under limited evaluation budgets. Claims And Evidence: The paper claims that while BO in the latent space of a VAE is a powerful framework for tackling complex optimisation tasks, existing methods have significant drawbacks, including suboptimal surrogate modeling in the latent space and inherent search space challenges. The paper provides a thorough discussion of these issues, offering both descriptive analysis and empirical evidence. Moreover, the proposed decoupled approach appears to address these shortcomings effectively, potentially serving as a remedy for the identified limitations. Methods And Evaluation Criteria: The proposed method decouples the search space from the latent space and further uses a Preconditioned Crank-Nicolson algorithm (which concentrates on the high-prior-probability annulus, better than random-walk sampling) for sampling. COWBOYS conducts a number of studies, showing competitive results with state-of-the-art comparison methods, even those that fine-tune the VAE, and with high-dimensional BO benchmarks and traditional LSBO methods. 
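For context, the preconditioned Crank-Nicolson proposal mixes the current state with a fresh prior draw, $z' = \sqrt{1-\beta^2}\,z + \beta\,\xi$, and is reversible with respect to a Gaussian prior, so the Metropolis-Hastings accept ratio involves only the likelihood term. A generic sketch of one pCN step (not the authors' implementation; `log_like`, `beta`, and the toy target below are placeholders):

```python
import numpy as np

def pcn_step(z, log_like, beta, rng):
    """One pCN step targeting prior N(0, I) times a likelihood exp(log_like).

    The proposal z' = sqrt(1 - beta^2) * z + beta * xi leaves the Gaussian
    prior invariant, so the accept ratio reduces to the likelihood ratio
    alone (no prior term), which is what makes pCN stable in high dimensions.
    """
    xi = rng.standard_normal(z.shape)
    z_prop = np.sqrt(1.0 - beta**2) * z + beta * xi
    if np.log(rng.uniform()) < log_like(z_prop) - log_like(z):
        return z_prop, True
    return z, False

# Toy check: prior N(0, 1) with likelihood N(2, 1) gives posterior N(1, 0.5).
rng = np.random.default_rng(0)
log_like = lambda z: -0.5 * float(np.sum((z - 2.0) ** 2))
z, samples = np.zeros(1), []
for _ in range(20_000):
    z, _ = pcn_step(z, log_like, beta=0.5, rng=rng)
    samples.append(z[0])
```

In COWBOYS the Gaussian prior corresponds to the VAE's latent distribution and the likelihood role is played by the GP-based probability of improvement; the sketch above only illustrates the sampler itself.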
Finally, the authors discuss the need to develop/extend COWBOYS to large-scale molecule optimization tasks, as this can be challenging with the proposed structure-based GP optimization approach. While the presented approach shows promising results on the current task, the manuscript's impact would be significantly enhanced by testing the method in additional domains, such as protein fitness optimization or similar applications involving other structured spaces. Theoretical Claims: The theoretical claims in the presented work seem plausible and well discussed. Experimental Designs Or Analyses: The experimental design seems in line with other literature in the field and appears to be well-integrated within a benchmark suite. The approach also sounds plausible based on the information presented. Supplementary Material: The supplementary material discusses the PCN MCMC sampler in detail and shows an ablation over the number of steps and chains of the MCMC algorithm. Thereby, the authors suggest that COWBOYS is robust to different configurations of the PCN MCMC sampler across the benchmark, unless the chains/steps are set to be very small. While I generally agree, the results in Table 2 also suggest that best results for the tasks are achieved with distinct settings each, i.e. it is not the 10 chains/100 steps that consistently yields best results as one could perhaps expect. This could be further discussed. Relation To Broader Scientific Literature: The authors start by motivating LSBO, which has emerged to be a prominent technique in this field in recent years. Current limitations and shortcomings of LSBO are discussed, and how the community handles some of these (e.g., fine-tuning the VAE, limiting the search space). This is followed by how COWBOYS addresses these issues. Essential References Not Discussed: To the best of my knowledge, the authors have included the key literature in the field, particularly regarding LSBO approaches. 
Other Strengths And Weaknesses: Is the search space problem also an issue for the initial design of COWBOYS? Essentially, the initial design of Alg. 2 also requires sampling from the VAE’s search space and decoding the samples. Other Comments Or Suggestions: line 98 left: expected improvement twice line 82 right: this should be encoder instead of decoder line 94 right: “.” missing at the end of the sentence line 227 left: In algorithm, the arrow extends into the brackets Line 269 left: “*Note that replacing the VAE’s probabilistic decoding with the most-likely mapping is already a common strategy in many LSBO implementations”* You are citing here a survey paper that does not discuss this in detail, please cite the actual works to make the connection clear for the reader. Questions For Authors: See questions asked above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful assessment of our work. We appreciate your recognition of both the descriptive and empirical strengths of our approach. It is encouraging to hear that you find our decoupled strategy promising, and that our competitive results align well with other state-of-the-art methods. We have addressed all your suggestions thoroughly and have implemented the requested revisions as detailed below. We hope that our efforts to address the comments from you and the other reviewers will be appreciated and could justify increasing your score. 1) **Generalizability of COWBOYS Beyond Molecular Design and Drug Discovery**. We appreciate your interest in the broader applicability of COWBOYS outside its current focus on molecular design and drug discovery. Our evaluation focuses on these tasks because they remain the most widespread use cases for latent space Bayesian optimization (LSBO), and well-established baseline suites exist to facilitate direct comparison. Fitting a Gaussian Process (GP) in the original, potentially high-dimensional and highly structured space can indeed be challenging with standard kernels. However, we see an ability to model directly in structure space as a key advantage of COWBOYS, rather than a limitation. By working in the raw structure space, our approach naturally supports specialized kernels tailored to particular domains—an area where there is a rich, yet underused, literature. For instance, the Tanimoto kernel, originally proposed by chemists, encodes prior knowledge of what sort of molecular attributes are important in a way that can be especially powerful in low-data regimes [1,2]. Similar structural kernels exist for various structured objects and, via COWBOYS, could now be used for molecular graph optimisation with graph kernels [3], engineering design with 3d mesh kernels [4], optimising computer code via tree kernels [5], or protein design with new protein kernels [6]. 
In contrast, many current LSBO strategies cannot leverage these specialized kernels because they rely on Euclidean latent spaces. By enabling the direct use of such kernels, COWBOYS allows practitioners to incorporate a wealth of domain-specific prior knowledge. We hope this work helps revitalize interest in harnessing these powerful kernels across a range of complex design problems and spurs further synergy between rich generative models and carefully structured discriminative models. 2) **We are pleased to hear that our discussion clarified the robustness of COWBOYS with respect to PCN parameters**. However, in the revised manuscript, we have added details on how to interpret the corresponding table, emphasizing that no single parameter choice can yield uniformly optimal performance across different objective functions, owing to varying degrees of model mismatch between our GP and each specific problem objective and varying degrees of difficulty/locality of the optimisation problems. 3) **“Is the search space problem also an issue for the initial design of COWBOYS”?** In standard LSBO, a specific region of the latent space must be chosen, over which we sample an initial design and then perform BO. We argue in Section 3.3 that this clipping can lead to pathological performance issues. In contrast, COWBOYS just samples from the VAE in the standard way, i.e. we generate a random Gaussian sample (which has infinite support) and then push it through the decoder. There is no need to specify a search space for COWBOYS. We have greatly increased discussion around this point in the final version. 4) Also, thanks for the **minor corrections**; they have now been changed in the final version, including adding four more specific references to justify that "replacing the VAE's probabilistic decoding with the most-likely mapping is already a common strategy". [1] Griffiths, Ryan-Rhys, et al. "GAUCHE: a library for Gaussian processes in chemistry." 
Advances in Neural Information Processing Systems 36 (2023): 76923-76946. [2] Moss, Henry, et al. "Boss: Bayesian optimization over string spaces." Advances in Neural Information Processing Systems 33 (2020): 15476-15486. [3] Vishwanathan, S. Vichy N., et al. "Graph kernels." The Journal of Machine Learning Research 11 (2010): 1201-1242. [4] Perez, Raphaël Carpintero, et al. "Gaussian process regression with Sliced Wasserstein Weisfeiler-Lehman graph kernels." International Conference on Artificial Intelligence and Statistics. PMLR, 2024. [5] Beck, Daniel, et al. "Learning structural kernels for natural language processing." Transactions of the Association for Computational Linguistics 3 (2015): 461-473. [6] Groth, Peter Mørch, et al. "Kermut: Composite kernel regression for protein variant effects." Advances in Neural Information Processing Systems 37 (2024): 29514-29565. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification, I raised my recommendation to "Accept".
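For concreteness, the Tanimoto kernel mentioned in the rebuttal above has a simple closed form on fingerprint count (or bit) vectors, $k(\mathbf{x},\mathbf{y})=\langle\mathbf{x},\mathbf{y}\rangle / (\Vert\mathbf{x}\Vert^2+\Vert\mathbf{y}\Vert^2-\langle\mathbf{x},\mathbf{y}\rangle)$. A minimal illustrative sketch, not the authors' implementation (and assuming nonzero rows):

```python
import numpy as np

def tanimoto_kernel(X, Y):
    """Tanimoto (Jaccard-style) similarity between rows of X and Y,
    e.g. molecular fingerprint count vectors. Rows must be nonzero,
    otherwise the denominator vanishes."""
    dots = X @ Y.T
    x2 = np.sum(X**2, axis=1)[:, None]
    y2 = np.sum(Y**2, axis=1)[None, :]
    return dots / (x2 + y2 - dots)

# Two toy 3-bit fingerprints: self-similarity is 1, cross-similarity is 1/3.
fp = np.array([[1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
K = tanimoto_kernel(fp, fp)
```

Such a kernel matrix can then be plugged into any standard GP regression pipeline in place of an RBF kernel on latent coordinates, which is the modelling choice the rebuttal advocates.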
Summary: This paper looks for valid molecules with a high probability of improving a desired property of the molecule. The novelty of this method is that they decouple the two probabilities: the probability of a molecule existing is estimated by a VAE which is initially trained/pre-trained, then the probability of a molecule having the desired properties is estimated from a GP which operates on the space of count-based vectors of molecular features. By not fitting a GP in the latent space of the VAE they claim to avoid two pathologies. First, there can be a lack of smoothness of the objective function in the latent space. Second, because a single point in the VAE latent space maps to several molecules through noise added to the decoder which each have different values for the objective function, a GP in the latent space will inevitably have more noise than could be predicted from a single structure. The authors suggest candidate molecules using an MCMC sampler using a prior defined by a deterministic decoder and a likelihood given by the probability of improvement compared to the current best observation, as calculated by the GP. The authors benchmark this method against various alternative methods on several datasets. Claims And Evidence: The authors claim that their method outperforms alternatives in the low-data regime and on existing high-dimensional discrete-sequence problems, which appears to be well supported by the experiments shown here. They also claim with good support to outperform traditional Latent Space BO even when the traditional methods have much higher experimental budgets. Two elements which make me less confident in the quality of the method are: 1. The code has been retracted for the review and I do not see any anonymised version to check their implementation. I would have appreciated seeing the code during the review. 2. Use of different comparison methods for different datasets.
I am not familiar with the standard benchmarks in this area and would have appreciated either seeing the same methods applied across all datasets, or a justification for why not. Please see the question below regarding justifying this. Methods And Evaluation Criteria: I thought that the proposed method and evaluation criteria mostly made sense and were well justified. My one concern was in Section 5.1 where they discard the stochasticity of the decoder. The VAE with a stochastic decoder should provide a probabilistic manifold for the density of possible molecules. The deterministic decoder seems to have a potential issue with either assigning definite existence to impossible molecules or definite existence to optimal molecules. Additionally, as the MCMC sampler was just used to find candidate molecules (as opposed to finding exact posteriors), I do not see the issue with running the MCMC as described but then sampling from the stochastic decoder, which would have potentially avoided these concerns. I would have appreciated a discussion of potential issues from this, although I appreciate that they do not seem to have been major concerns in practice given the experimental results. Please also see the question regarding this. Theoretical Claims: The authors make no strong theoretical claims. Experimental Designs Or Analyses: I do not see any issues with the experimental design carried out here; the experiments and comparisons both seem to be valid and show good results. Supplementary Material: I briefly checked the appendix. The algorithm details, significance of MCMC chain length and number, and ablation study all seemed well presented and correct. Relation To Broader Scientific Literature: I am not familiar with the literature surrounding BO for molecules, but to me the literature review seems extensive.
Essential References Not Discussed: I am not aware of any references which should have been discussed here, although I do not claim extensive knowledge of BO applied to molecular design. Other Strengths And Weaknesses: The strengths and weaknesses are addressed in other comments adequately. Other Comments Or Suggestions: 1. As a non-expert in BO in molecular design I would have liked 1-2 sentences describing each of the benchmark datasets and the particular challenge/reason for including each one. 2. The equation on the left hand column of line 267 was unclear if it was using a Dirac Delta function, which is typically not used for categorical variables. I believe that what is being stated in that line is that $\hat p_\theta(x|z)$ is a vector of all zeros except for one element with value one, where the non-zero element corresponds to the maximum likelihood prediction for $p_\theta(x|z)$. See also the question on this. Questions For Authors: 1. Can you comment on possible problems of using a deterministic decoder, specifically: i) is there a risk of the VAE with a deterministic decoder assigning 100% confidence to molecules which in practice do not/ cannot exist? ii) Does the deterministic VAE prevent the model from exploring other similar molecules to high performing ones result in missing very good candidate molecules, especially in the final rounds of a BO run in the exploitative stage? 2. Can you clarify what is meant by the equation on the left hand column at line 267? 3. Can you clarify why on different benchmark datasets different comparison methods are used? Do the different approaches not work on the other benchmark datasets, or was the decision for which methods to compare based on another consideration? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review and positive feedback regarding our experimental design and validations. We greatly appreciate your recognition of our method's strong performance. We have carefully addressed all of your suggestions and implemented the requested revisions, detailed below. We hope these efforts demonstrate our commitment to improving the manuscript and may justify an increased evaluation score. 1) **Code Accessibility** In response to your suggestion we have now made some of our code available [here](https://anonymous.4open.science/r/cowboys-AC3E/README.md). The repository allows the recreation of the benchmarking results of Section 7.2. Indeed, incorporating COWBOYS (see [cowboys.py](https://anonymous.4open.science/r/cowboys-AC3E/cowboys.py)) required only minimal additions, emphasizing both its practical usability and potential for broad community adoption. The fully-fledged codebase, allowing COWBOYS to be applied to generic pre-trained VAEs and thus applied to a range of downstream tasks (and recreate the results of Sections 7.1 and 7.2), will be provided upon acceptance as it cannot be shared without violating ICML’s anonymity rules. 2) **Clarification of Equation (Line 267)** We appreciate you pointing out the ambiguity in the equation on line 267. As requested, we have clarified this equation explicitly in the revised manuscript, clearly defining the delta function as indicating a deterministic decoding strategy that selects only the most-likely decoded molecule from the latent location z. The delta function here means that the probability is 0 unless we are taking the most likely prediction from the decoder (from the chosen latent location z). 3) **Addressing concerns about discarding the stochasticity of the decoder**. We understand your concerns, as we initially were also worried. 
We agree that there is indeed a theoretical risk of being unable to generate as wide a range of molecules as the stochastic VAE (although not one we observed in practice). We believe the risks of allocating 100% confidence to implausible molecules, which would arise when decoding areas of the latent space where the decoder is highly uncertain, are strongly mitigated by the fact that we are doing conditional VAE sampling (i.e. staying on the well-supported areas of the latent space) rather than exploring the whole latent space like standard LSBO methods. Nevertheless, future work will seek more sophisticated sampling strategies to perform COWBOYS-like strategies for generative models (see our Discussion Section). We like your idea of running the MCMC as described but then sampling from the decoder in order to properly diagnose if the restrictions of the deterministic decoder are limiting the flexibility of our proposed molecules. We implemented and ran this suggestion for the high-dimensional BO benchmark of Section 7.2, with the full results added to the ablation study of Appendix B. Unfortunately, this led to a significant drop in performance for 10 of the 25 tasks (and returned statistically similar scores for the other tasks), which we hypothesise is due to the fact that the resulting "sampled" molecules are no longer the same as the ones passed to the GP to check whether the considered latent code will yield an improvement in score – i.e. we return to a problem similar to that of the alignment issues that we discuss in Section 3.2. Based on your comments, we have greatly increased our discussion of our decision to use a deterministic simplification of the decoder, stressing the points above and that (1) the VAE is still initially trained with a stochastic decoder (it is only at inference/BO time that we take the most likely mapping), and (2) we have added four additional specific references to previous work that also uses this deterministic simplification.
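To make the deterministic simplification discussed above concrete — decoding by taking the most likely token at each position instead of sampling — here is a minimal, generic sketch. The function names and array shapes are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def decode_deterministic(logits):
    """Most-likely decoding: pick the argmax token at each sequence
    position, i.e. a delta-function decoder that places all probability
    mass on the single most likely decoded molecule for a latent z."""
    # logits: (seq_len, vocab_size) array produced by the decoder at z
    return np.argmax(logits, axis=-1)

def decode_stochastic(logits, rng):
    """Standard stochastic decoding: sample each position from the
    softmax distribution (the behaviour used during VAE training)."""
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return np.array([rng.choice(len(p), p=p) for p in probs])

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 4))  # toy decoder output for one latent z
assert decode_deterministic(logits).shape == (5,)
```

The distinction matters for the alignment issue above: with the deterministic decoder, the molecule scored by the GP is exactly the molecule that would be returned for that latent code, whereas stochastic decoding can return a different molecule than the one the GP assessed.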
4) **Clarifying Differing Comparisons in Experiments**. The diversity in initial budgets, degrees of parallelism, and subsets of molecular optimization problems stems directly from our intention to precisely replicate and extend the experiments used in influential prior work in LSBO. We believe this breadth provides a robust and comprehensive empirical evaluation of COWBOYS' performance. For peace of mind, please see the comments of the other reviewers, e.g. "Empirical evaluations use the standard benchmarks in the field and replicates and extends the evaluations from previously published papers" and "experiments are conducted thoroughly by evaluating a number of baselines on a good number of test problems". However, acknowledging your valid concern regarding transparency—especially for readers less familiar with molecular design benchmarks—we have now included an additional section in the appendix. This section explicitly details the structure and interpretation of these benchmark suites, clearly explaining the objectives, settings, and significance of the various optimization tasks we considered.
Summary: This manuscript proposes an alternative approach to latent space Bayesian optimization in high-dimensional/structured spaces. Contrary to previous methods that aim to efficiently couple a generative model (typically VAE) and GP surrogate, the proposed method decouples the generative model and GP surrogate. The proposed method builds on an assumption that a surrogate model can be trained in the original high-dimensional/structured data domain or using features extracted from the data objects in the original domain (here authors consider molecules and features are molecular fingerprints). The trained generative model and GP surrogate are then combined during the optimization step by defining a conditional sampling distribution for the evaluation of next molecules. Results section contains extensive evaluation against most if not all the state-of-the-art methods, and results demonstrate highly competitive performance. Review update after rebuttal: I am satisfied with authors answers, and will keep my suggestion to accept the manuscript. Claims And Evidence: The presented result support the claims of highly competitive performance. Methods And Evaluation Criteria: The proposed method can be considered as a well-designed and novel approach to the high-dimensional/structured BO. Empirical evaluations follow the standard benchmarks in the field. Theoretical Claims: Manuscript does not contain any formal theoretical claims but the probabilistic model construction seems well-designed and valid. Experimental Designs Or Analyses: Empirical evaluations use the standard benchmarks in the field and replicates and extends the evaluations from previously published papers. Supplementary Material: Yes, all. Relation To Broader Scientific Literature: see below Essential References Not Discussed: All relevant background is presented. Other Strengths And Weaknesses: As far as I can tell, the proposed method can be considered as a novel approach to BO in structured spaces. 
The method is presented clearly and the entire manuscript is very well written. Empirical performance evaluation seems solid, and the results demonstrate strong performance. Other Comments Or Suggestions: I agree with all central parts of the manuscript. My only slightly critical comment or concern is that the proposed method assumes that the surrogate model can be trained using the molecular fingerprints as inputs. More generally, the proposed method is applicable whenever a structured GP kernel can be defined, as the authors note at the end of page 1 (this limitation could be repeated in the Limitations Section at the end). While this works very well for molecules using fingerprints and the Tanimoto kernel, there certainly are other problems where it is less clear how a structured kernel should be defined, or whether such a kernel exists. I have two related suggestions: 1) authors could elaborate on when and why this assumption holds (e.g. in the Discussion and Limitations Section). 2) authors could carry out an ablation where they study the role and accuracy of the structured surrogate GP. Authors could e.g. artificially remove (or add noise to) fingerprints such that the surrogate modeling becomes more challenging and less accurate, and evaluate how that affects the overall performance. Notation: my understanding is that throughout the manuscript $x$ generally denotes a molecule, except in the context of the Tanimoto kernel, where $x$ denotes the molecular fingerprint representation. I suggest using a different symbol around the Tanimoto kernel, e.g. $finger(x)$ or something else, to clearly note that they are different. Since your model is built on the probability of improvement (PI) acquisition function, it would make sense to use the same acquisition function also for the baseline comparison methods. Perhaps that was mentioned in the text but I do not remember seeing that the authors commented on that.
Can you also incorporate a probabilistic graphical model representation of your proposed model (in supplement if not enough space in the main text). The proposed method can be considered as a conditional generative model for BO. In related works section you could also cite a recent work (Ramchandran et al, ICLR, 2025, https://openreview.net/pdf?id=SIuD7CySb4 ) that has also proposed to use conditional generative model when they attempt to align the latent space for surrogate modeling. Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 4
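For readers less familiar with the structured surrogate discussed in this review, a minimal sketch of the Tanimoto similarity (in its MinMax generalization to count-based fingerprints) follows. This is a generic illustration of the kernel, not the authors' code:

```python
import numpy as np

def tanimoto_kernel(m1, m2):
    """MinMax/Tanimoto similarity between two count-based fingerprint
    vectors: sum of elementwise minima over sum of elementwise maxima.
    Returns 1.0 for identical (including all-zero) fingerprints."""
    mins = np.minimum(m1, m2).sum()
    maxs = np.maximum(m1, m2).sum()
    return float(mins / maxs) if maxs > 0 else 1.0

a = np.array([1, 0, 2, 3])
b = np.array([1, 1, 2, 0])
assert tanimoto_kernel(a, a) == 1.0               # identical fingerprints
assert abs(tanimoto_kernel(a, b) - 3 / 7) < 1e-12  # min-sum 3, max-sum 7
```

A GP equipped with this kernel operates directly on the fingerprint representations of molecules, which is what allows the surrogate to live in structure space rather than the VAE latent space.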
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review and valuable suggestions. We sincerely appreciate your recognition of the novelty, clarity, rigorous empirical evaluation, comprehensive background coverage, and overall manuscript quality of our work. We have addressed all your suggestions thoroughly and have implemented the requested revisions as detailed below: 1) **Generalizability of COWBOYS Beyond Molecular Design and Drug Discovery**: We fully agree with your observation on the importance of clarifying the generalizability of our approach. Following your advice, we have expanded our discussion (now included prominently in the introduction and highlighted in the limitations section) regarding the applicability and significant potential of COWBOYS beyond molecular fingerprint-based problems. Please refer to our detailed response to reviewers vn27 and 1JvV for further context. Furthermore, your insightful suggestion about investigating the robustness of COWBOYS against degraded surrogate model performance (to mimic settings where it may be hard to propose a structure-space kernel) was extremely helpful. As suggested, we have conducted additional experiments on the high-dimensional BO benchmark of Section 7.2, deliberately impairing the Tanimoto GP surrogate by randomly swapping ones to zeros in molecular fingerprints with varying probabilities. This experiment corresponds to adding significant input noise to the fingerprints by randomly removing the count of particular (hashed) molecular substructures. Our findings, added to the ablation studies of Appendix B, indicate that COWBOYS maintains robust performance except under the most severe perturbations. In summary, perturbing fingerprint entries with probability 0.01, 0.1, and 0.5 led to a statistically significant drop in the best molecule found across 0, 3, and 8 of the 25 molecular optimization tasks, respectively. 2) **Small additional Points**.
We have clarified our notation by using boldface m for fingerprints to distinctly differentiate from molecules represented by boldface x. Following your recommendation, we have included a graphical model illustrating the dependencies within both COWBOYS and existing LSBO approaches. We appreciate this suggestion as it significantly enhances the manuscript's clarity. We have added the suggested recent reference, highlighting its relevance. The presence of this paper (and also [1] at the same conference in 2 months time), both using extensions of standard LSBO frameworks, underscores the timeliness and importance of our contributions. Thank you once again for your constructive feedback, which has significantly strengthened our manuscript. [1] Lee, Seunghun, et al. "Latent Bayesian Optimization via Autoregressive Normalizing Flows." The Thirteenth International Conference on Learning Representations. 2025.
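The surrogate-degradation ablation described in point 1 of the rebuttal above — randomly swapping non-zero fingerprint entries to zero with some probability — can be sketched as follows. Function and variable names are hypothetical; this is an illustration of the perturbation, not the authors' experimental code:

```python
import numpy as np

def perturb_fingerprint(m, p, rng):
    """Corrupt a count fingerprint by zeroing each non-zero entry
    (i.e. dropping that hashed substructure count) with probability p."""
    mask = (m > 0) & (rng.random(m.shape) < p)
    out = m.copy()
    out[mask] = 0
    return out

rng = np.random.default_rng(0)
m = np.array([2, 0, 1, 3, 0, 1])
assert np.array_equal(perturb_fingerprint(m, 0.0, rng), m)  # p=0: unchanged
assert perturb_fingerprint(m, 1.0, rng).sum() == 0          # p=1: all dropped
```

Feeding such perturbed fingerprints to the Tanimoto GP, as in the rebuttal, probes how much surrogate accuracy the overall method can tolerate.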
Summary: This paper proposes a novel approach to Bayesian Optimization (BO) over structured spaces such as molecular design. Instead of fitting a Gaussian Process (GP) surrogate model within the latent space of a Variational Autoencoder (VAE), which often leads to poor predictive performance, the authors introduce COWBOYS (Categorical Optimisation With Belief Of underlYing Structure). This method decouples the generative model (VAE) from the surrogate (GP), allowing the GP to be trained directly in the structured space rather than the latent space. By leveraging a Bayesian update rule, COWBOYS refines its search iteratively without requiring the complex fine-tuning of VAEs. Experiments show the approach outperforms traditional Latent Space BO (LSBO) methods, demonstrating significant improvements in efficiency for molecular optimization tasks under constrained evaluation budgets. Claims And Evidence: Overall yes, though with some of my concerns listed below. Methods And Evaluation Criteria: Yes, though the evaluation is mostly done on molecular design and drug discovery problems. Although they are the area of application that COWBOYS is designed for, it is unclear whether the claimed superior performance can be extended to other application domains. One of the issues of the proposed method is that it still requires fitting a GP (or other probabilistic model) on the original, high-dimensional space. In many molecular design and drug discovery problems, the Tanimoto kernel is suitable for such high-dimensional binary problems and scales well, but this may well not be the case in many other problems — simply fitting the GP (or other probabilistic model) on the high-dimensional space is already prohibitively expensive, not to mention any efficient optimization. Also, the optimization step described in eq (3) seems like it could be replaced by a more established method such as Expected Improvement or Thompson sampling. If it cannot or should not be changed, more discussion of why should be included.
Theoretical Claims: N/A Experimental Designs Or Analyses: Overall experiments are conducted thoroughly by evaluating a number of baselines on a good number of test problems. Ideally I’d like to see more ablation study on COWBOYS’ components (e.g., sensitivity to VAE hyperparameters, impact of optimization budget such as N_init, choice of acqf used in eq 3), as we currently only have a very limited one for the PCN MCMC sampler in Appendix B. Supplementary Material: Yes, I reviewed all of the supplementary material. Relation To Broader Scientific Literature: This is directly related to the general Bayesian Optimization and high-dimensional/latent-space BO. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is overall well-written and easy to understand. Other Comments Or Suggestions: Minor: extra space in L284. Questions For Authors: - Instead of using the sampling strategy described in eq (3), which may intuitively make sense, but has little theoretical justification, why not use something more established such as Thompson sampling or EI to select the candidates. - I’d love to see more discussion and/or experiment on other problem domains. As I mentioned in “Methods And Evaluation Criteria" above, the proposed method may not extend well to other domain, which may be fine, but would significantly limit its impact. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your strongly positive review and for highlighting the novelty, clarity, and thorough experimental design of our work. We have carefully considered your two main comments, both of which guided additional discussion and improvements in the final version of the paper. These insights have helped us better refine our manuscript. We look forward to seeing how this work inspires new lines of inquiry and contributes to the continued advancement of Bayesian optimization. 1) **On Incorporating Arbitrary Acquisition Functions**. You raise an important question regarding the integration of an arbitrary acquisition function into COWBOYS. In essence, our proposed approach is fundamentally different from traditional BO settings that rely on standard utility-based acquisition functions. Here, we move away from maximizing an acquisition function over a constrained set of equally likely structures. Instead, we explore searching over an entire distribution (as modeled by the VAE) of likely structures. Because of this distribution-based view, conventional acquisition functions (e.g., Expected Improvement or Probability of Improvement) cannot be directly applied without inheriting the same limitations found in current LSBO methods, most notably their disregard for the distribution provided by the VAE. One of our paper’s main contributions is precisely to handle this distribution by replacing maximization-based strategies with a sampling-based approach (Equation 3). While Equation 3 does resemble Probability of Improvement (a point now greatly elaborated in the final version), adapting more sophisticated acquisition schemes is indeed an exciting direction for future research. For instance, we could condition the VAE to generate samples for which an alternative acquisition criterion (e.g., EI) exceeds a certain threshold. 
Determining this threshold adaptively would open up further avenues, possibly guided by theoretical regret analyses to explain COWBOYS’ strong empirical performance. We believe these avenues will become fertile ground for future studies that aim to bridge existing BO theory with generative models in complex, high-dimensional design spaces. 2) **Generalizability of COWBOYS Beyond Molecular Design and Drug Discovery**. Please see our response to vn27 for a discussion of the significant potential of COWBOYS in different domains.
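Since the rebuttal above notes that Equation 3 resembles Probability of Improvement used as a likelihood (to be sampled from) rather than an acquisition function (to be maximized), a minimal sketch of that quantity under a Gaussian GP posterior may help. All names are hypothetical; this illustrates the standard PI formula, not the authors' exact sampler:

```python
from math import erf, sqrt

def prob_improvement(mu, sigma, best_y):
    """Probability that f(x) exceeds the incumbent best_y under a
    Gaussian posterior N(mu, sigma^2). A COWBOYS-style sampler would
    treat this as an (unnormalized) likelihood over latent codes,
    rather than maximizing it over a clipped search region."""
    if sigma <= 0:
        return float(mu > best_y)
    z = (mu - best_y) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF of z

# a posterior mean equal to the incumbent gives PI = 0.5
assert abs(prob_improvement(1.0, 0.5, 1.0) - 0.5) < 1e-12
```

Multiplying this likelihood by the VAE prior over structures is what turns candidate proposal into conditional sampling, which is the distribution-based view described in the rebuttal.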
ARS: Adaptive Reward Scaling for Multi-Task Reinforcement Learning
Accept (poster)
Summary: Multi-task reinforcement learning (MTRL) algorithms face challenges when tackling tasks with varying complexities and reward distributions. In this work, the authors propose a method for tackling the varying reward magnitude across tasks by adaptively scaling the reward of each task using a history-based reward scaling strategy. Furthermore, to prevent early overfitting to a few tasks, the authors adopt a resetting mechanism from the single-task deep RL literature to mitigate that. The proposed approach, named ARS, is benchmarked on Metaworld compared to related baselines while showing promising results in handling the large-scale MTRL setting with varying reward magnitude. Claims And Evidence: In this work, three main claims were presented: 1. The introduction of reward scale variation as a challenge in multi-task RL. 2. The importance of adaptively scaling the reward of different tasks because of the varying reward magnitude among them. 3. The role of resetting is integrated to alleviate overfitting to early learned tasks, in addition to stabilizing the critic training, since the proposed adaptive scaling can destabilize the critic training as the Q target changes frequently. In my opinion, 1. The problem of the varying reward distribution or reward scale variation is a known issue that has been discussed in the literature [1]. I don't think that introducing this issue should be a contribution of this work. Nevertheless, the fixed reward scaling is an interesting view of this problem, in addition to the connection to reward scaling in single-task deep RL. In addition, the presented example in Figure 2 shows another dimension of the problem, hence motivating adaptive scaling. 2. The method introduced in this work to adaptively scale the reward magnitude is novel. 3. The resetting mechanism, in general, can help the MTRL training since it has been adopted in prior works [2,3].
In this work, resetting the networks is motivated by more than one reason, as stated above. Since it is an essential component of the algorithm and crucial to enhancing the performance, it is important to strongly support the claim behind adding this component to ARS with ablation studies. For example, showing how the critic suffers without resetting the network, given the change in the reward magnitude caused by the adaptive scaling. Otherwise, the resetting mechanism is not really a contribution of this work; it is just an adoption of an existing tool from the literature [3]. [1] Hessel, Matteo, et al. "Multi-task deep reinforcement learning with popart." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019. [2] Sun, Lingfeng, et al. "Paco: Parameter-compositional multi-task reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 21495-21507. [3] Cho, Myungsik, et al. "Hard tasks first: multi-task reinforcement learning through task scheduling." The 41st International Conference on Machine Learning. 2024. Methods And Evaluation Criteria: - I believe the authors proposed a *new method* for adapting the scale of the reward magnitude between tasks, which is effective when looking at the empirical results. - For the evaluation criteria, I believe Metaworld is a good benchmark to study the effectiveness of MTRL algorithms, especially in the MT50 scenario, which is large scale. Theoretical Claims: - No theoretical claims were provided or discussed in this work. Experimental Designs Or Analyses: - In general, all experiments highly suit this work in showing the effectiveness of the proposed method in the MTRL setting. - I appreciate the teaser experiment added in Figure 2. - I have concerns regarding the baselines. 1. As stated before, the problem of varying reward distribution is known in the MTRL literature.
One important baseline is PopArt [1], which is, as far as I know, the first work to discuss this issue in the MTRL setting. This approach is similar to the normalization baseline in the ablation in Table 5, yet not exactly the same. PopArt hasn't been used as a baseline nor cited in this work. 2. In addition, MOORE [2] is a recent MTRL approach that reported SOTA results on Metaworld, in particular on MT50. Moreover, MOORE has never been benchmarked against nor mentioned in the related work section. - For the Metaworld MT10 and MT50 scenarios, it is not clear to me if this experiment considers random goal positions (MT10-rand and MT50-rand) [3] or fixed goal positions. - The performance of PACO is lower in this work than in the original paper. It could be because of the different network architecture and hyperparameters, but why not follow the original hyperparameters of each method? - I have a concern regarding the experiment done in Figure 4. I don't understand how some baselines can have a higher ESTR value as the threshold $\delta$ increases. In particular, PACO has a higher value at $\delta = 0.7$ than at $0.5$. [1] Hessel, Matteo, et al. "Multi-task deep reinforcement learning with popart." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019. [2] Hendawy, Ahmed, Jan Peters, and Carlo D'Eramo. "Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts." The Twelfth International Conference on Learning Representations. Supplementary Material: - I checked the whole supplementary material. Notably, I appreciate the illustrative diagram in Figure 14. Relation To Broader Scientific Literature: - I believe this work highlights the varying reward distribution issue, which has been studied previously in the literature [1]. This indicates the importance of looking into the reward magnitude as a cause for the instability of the MTRL training.
- In addition, the reset strategy proposed shows effectiveness empirically, hence supporting previous claims discussed in the literature [2,3]. [1] Hessel, Matteo, et al. "Multi-task deep reinforcement learning with popart." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019. [2] Sun, Lingfeng, et al. "Paco: Parameter-compositional multi-task reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 21495-21507. [3] Cho, Myungsik, et al. "Hard tasks first: multi-task reinforcement learning through task scheduling." The 41st International Conference on Machine Learning. 2024. Essential References Not Discussed: - I believe this work should cite the following papers for the aforementioned reasons: [1] Hessel, Matteo, et al. "Multi-task deep reinforcement learning with popart." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019. [2] Hendawy, Ahmed, Jan Peters, and Carlo D'Eramo. "Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts." The Twelfth International Conference on Learning Representations. Other Strengths And Weaknesses: - All points have been mentioned in Claims And Evidence & Experimental Designs Or Analyses. Other Comments Or Suggestions: - In the Preliminaries Section, the Multi-Task Reinforcement Learning subsection, $H$ was not defined. I believe it is the horizon. Questions For Authors: - For the ablation study in Figure 4, does the ARS w/o reset baseline consider the same frequency ($n_{reset}$) in updating the reward scaling factors? the same frequency when performing the factors update + resetting. Code Of Conduct: Affirmed. Overall Recommendation: 3
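To make the history-based per-task reward scaling idea summarized in this review concrete, here is a minimal generic sketch. The class, window size, and inverse-mean update rule are all hypothetical illustrations of the general idea, not the paper's actual ARS update:

```python
from collections import deque

class TaskRewardScaler:
    """Keep a window of recent return magnitudes per task and scale each
    task's reward by the inverse of its windowed mean, so that tasks
    with very different reward scales contribute comparably to training."""
    def __init__(self, n_tasks, window=100, eps=1e-8):
        self.histories = [deque(maxlen=window) for _ in range(n_tasks)]
        self.eps = eps

    def update(self, task_id, episode_return):
        self.histories[task_id].append(abs(episode_return))

    def scale(self, task_id, reward):
        hist = self.histories[task_id]
        denom = (sum(hist) / len(hist)) if hist else 1.0
        return reward / (denom + self.eps)

scaler = TaskRewardScaler(n_tasks=2)
scaler.update(0, 1000.0)   # high-magnitude task
scaler.update(1, 1.0)      # low-magnitude task
# after scaling, both tasks' rewards land on a comparable scale
assert abs(scaler.scale(0, 1000.0) - scaler.scale(1, 1.0)) < 1e-3
```

Because the scaling factors change as the histories evolve, the critic's targets shift over time — which is the instability that the review notes the reset mechanism is meant to counteract.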
Rebuttal 1: Rebuttal: We thank Reviewer Cp7C for their thoughtful feedback and valuable suggestions, which have significantly improved our paper. We have carefully addressed each comment, strengthened our experimental results, and clarified our key contributions accordingly. Below, we respond in detail to each point raised by the reviewer. **Other Multi-Task RL Baselines (PopArt and MOORE):** We appreciate the reviewer's suggestion regarding the inclusion of additional multi-task RL baselines, specifically PopArt [1] and MOORE [2]. In response, we conducted comparative experiments using MOORE on the MT10 and MT50 benchmarks. Since the Meta-World v2 environments default to a horizon of 500, we used this setting across all our experiments. However, the original MOORE paper used a horizon of 150, making a direct comparison with our ARS results challenging. To address this, we attempted to run MOORE with a horizon of 500, but the official code required roughly 200 hours of computation on the MT50 benchmark alone. Consequently, we evaluated ARS using a horizon length of 150 for fairness. The ARS results are shown in Table 1 at the following anonymous link: https://sites.google.com/view/icml25ars Notably, the MOORE setup on MT50 (n_expert=6) includes significantly more parameters than our default ARS (400×4). To ensure a fairer comparison, we tested MOORE against an enlarged ARS variant (800×4), which has a comparable parameter count. Despite the comparable model size, ARS consistently outperformed MOORE on both benchmarks while requiring significantly less computational time. For instance, on MT50, MOORE requires about 200 hours to train, whereas ARS (800×4) surpasses its performance in just 22 hours—an order of magnitude faster. These results demonstrate that our ARS framework achieves significant performance improvements **without incurring substantial computational overhead**. We will include MOORE results in the revised version. We also tested PopArt on MT10.
Because the official implementation is unavailable, we reimplemented PopArt’s scale-invariant updates within SAC-MT, using SAC target values for critic learning. We varied the update frequency over {1, 10, 100, 500}. The outcomes are shown in Table 2 (available anonymously at https://sites.google.com/view/icml25ars). Across all frequencies, our ARS method consistently outperformed all PopArt variants on the MT10 benchmark, and we will include comprehensive PopArt variant results for both the MT10 and MT50 benchmarks in the revised paper. [1] Hessel, Matteo, et al. "Multi-task deep reinforcement learning with popart." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019. [2] Hendawy, Ahmed, Jan Peters, and Carlo D'Eramo. "Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts." The Twelfth International Conference on Learning Representations. **Effectiveness of Reset Mechanism** We thank the reviewer for suggesting a more thorough analysis of the **Reset Mechanism**. Table 4 in our paper already highlights its importance in stabilizing performance. To further investigate, we analyzed Q-values per task during training on MT10, comparing ARS with and without the reset mechanism. Results are shown in Figure 1 at the following anonymous link: https://sites.google.com/view/icml25ars As shown in Figure 1, training without the reset mechanism leads to significantly lower Q-values across tasks, with negative values appearing in tasks like 'push' and 'pick-place'—despite rewards always being positive. Q-values also show greater variance without reset, emphasizing the reset mechanism’s role in ensuring training stability. **Random Goal Positions** Sorry for the confusion. Random goal positions are used in our setup.
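For context on the reimplementation described above, the following is a minimal sketch of PopArt's scale-invariant output layer (Hessel et al., 2019): the critic is trained against normalized targets, and whenever the running statistics change, the last linear layer is rescaled so that the unnormalized prediction is preserved. The class and method names, and the EMA form of the moment updates, are illustrative assumptions, not the rebuttal's exact code.

```python
import numpy as np

class PopArtHead:
    """Sketch of a PopArt scale-invariant value head (illustrative names)."""

    def __init__(self, n_features, beta=1e-3):
        self.w = np.zeros(n_features)   # last-layer weights (normalized space)
        self.b = 0.0                    # last-layer bias (normalized space)
        self.mu, self.nu = 0.0, 1.0     # running first and second moments
        self.beta = beta                # EMA step size for the moments

    @property
    def sigma(self):
        # Standard deviation derived from the running moments.
        return np.sqrt(max(self.nu - self.mu ** 2, 1e-8))

    def update_stats(self, target):
        """EMA-update the moments, then rescale w, b to keep outputs invariant."""
        old_mu, old_sigma = self.mu, self.sigma
        self.mu = (1 - self.beta) * self.mu + self.beta * target
        self.nu = (1 - self.beta) * self.nu + self.beta * target ** 2
        new_sigma = self.sigma
        # Preserve the unnormalized prediction under the new (mu, sigma).
        self.w *= old_sigma / new_sigma
        self.b = (old_sigma * self.b + old_mu - self.mu) / new_sigma

    def unnormalized(self, features):
        """Denormalized value prediction used by the rest of the agent."""
        return self.sigma * (self.w @ features + self.b) + self.mu

# The invariance property: updating the statistics does not change predictions.
head = PopArtHead(3)
head.w, head.b = np.array([0.5, -0.2, 0.1]), 0.3
x = np.array([1.0, 2.0, 3.0])
before = head.unnormalized(x)
head.update_stats(10.0)
after = head.unnormalized(x)
```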
**Lower Performance of PaCo in MT10** We use the same hyperparameters for PaCo as in the original paper. The performance gap likely stems from the difference in horizon lengths: the original PaCo uses 150, while Meta-World v2 defaults to 500, which we follow. To isolate this effect, we also ran ARS with a horizon of 150. Results are shown in Table 1 at the anonymous link below: https://sites.google.com/view/icml25ars **ESTR Results** We sincerely apologize for the confusion regarding the ESTR results. The reported ESTR value of PaCo at $\delta = 0.7$ was incorrect and should be revised from 0.6 to 0.5. **Definition of H** Yes, H is the horizon length. **Frequency (n_reset) for ARS w/o reset** We investigated the performance of the ARS w/o reset baseline with various values of n_reset. We then selected the best-performing setting for the paper. The value of n_reset used in the paper for the ARS w/o reset baseline is 40. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for addressing most of my concerns. Also, I appreciate adding PopArt and MOORE as baselines, and from this experiment and given it is a random goal setting, ARS is indeed performing very well. I still have concerns regarding the novelty of the resetting mechanism. I believe the concept of resetting is not novel in MTRL [1,2]. More importantly, I believe the exact same approach was introduced in SMT [2]. The authors' answer regarding this point was not convincing. I am not asking if the resetting is important; I am asking how novel this mechanism is compared to [2]. In other words, what differentiates this resetting mechanism from the one introduced in SMT [2]? I am stressing this point because I can clearly see from the results and, given the authors' response, that resetting plays an important role. [1] Sun, Lingfeng, et al. "Paco: Parameter-compositional multi-task reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 21495-21507.
[2] Cho, Myungsik, et al. "Hard tasks first: multi-task reinforcement learning through task scheduling." The 41st International Conference on Machine Learning. 2024. --- Reply to Comment 1.1.1: Comment: Thanks for your further comment. The novelty of this paper lies in the use of 'reward scaling' combined with resetting. As the reviewer mentioned, resetting is not new in MTRL, e.g., [2]. In [2], however, the authors adopted 'scheduling' together with resetting to facilitate learning of hard tasks by assigning more resources and earlier training time to hard tasks, because hard tasks mainly degrade overall performance while easy tasks are learned quickly. But [2] still shows limitations: it could not solve all MT10 tasks, although the performance was much improved. In our work, we used reward scaling to boost the performance of hard tasks and adopted the equalizer rule, i.e., our reward scaling tries to make the rewards of all MT tasks have the same magnitude so that no single task is favored or disfavored. The authors of [2] did not recognize that the strong bias of the MT policy towards easy tasks can simply be corrected by reward scaling. Note that the strong bias of the MT policy towards easy tasks is basically due to easy tasks' large early rewards. (Recall that the policy gradient is score * Q, where Q is just an expected weighted sum of discounted rewards. So, larger rewards for a subtask mean a larger policy gradient towards that subtask.) Our reward scaling simplifies the overall procedure significantly and achieves a significant performance gain. This approach solved all MT10 tasks for the first time to the best of our knowledge, which is a milestone in MTRL. In fact, during the rebuttal period, we ran more experiments. We adopted layer normalization, which is commonly used in deep learning for stable training.
In this new experiment, we achieved the following average success rates:
MT10 (hidden units 400): 97.3% (w/o layer norm) $\to$ **98.16**% (w/ layer norm)
MT50 (hidden units 400): 68.5% (w/o layer norm) $\to$ **78.3**% (w/ layer norm)
MT50 (hidden units 1024): 78.7% (w/o layer norm) $\to$ **88.85**% (w/ layer norm)
Please note that such high performance has not been reported before. We believe that our contribution, revealing that such simple reward scaling combined with resetting can solve MTRL effectively, is not trivial in the area of MTRL, and we believe our work is worth sharing with the MTRL community via publication.
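To make the "equalizer rule" described above concrete, here is a minimal sketch (hypothetical function and task names, not the authors' code) that computes per-task scaling factors from replay-buffer rewards so that every task's scaled rewards have the same average magnitude, preventing large early rewards of easy tasks from dominating the policy gradient:

```python
import numpy as np

def equalizer_scales(buffer_rewards, target=1.0, eps=1e-8):
    """Per-task reward scaling factors (equalizer rule sketch).

    buffer_rewards: dict mapping task id -> rewards currently stored in the
    replay buffer for that task. Each task's scale brings its mean absolute
    reward to `target`, so no task is favored or disfavored.
    """
    return {
        task: target / (np.mean(np.abs(r)) + eps)
        for task, r in buffer_rewards.items()
    }

# An easy task with large early rewards is scaled down relative to a hard one
# (task names are illustrative Meta-World-style examples).
scales = equalizer_scales({"reach": [8.0, 10.0, 12.0], "pick-place": [0.5, 1.5]})
```

In an off-policy loop, these factors would be recomputed only at reset periods, multiplying each task's stored rewards before the critic update.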
Summary: The paper introduces Adaptive Reward Scaling (ARS), a novel framework designed to tackle the difficulties caused by varying reward distributions in multi-task reinforcement learning. ARS employs a history-based reward scaling strategy that dynamically adjusts reward magnitudes to ensure balanced training focus across diverse tasks. Additionally, ARS incorporates a reset mechanism that mitigates biases introduced by early-learned tasks, enhancing adaptability and convergence. The framework integrates seamlessly into existing off-policy algorithms and has demonstrated state-of-the-art performance on the Meta-World benchmark. ## update after rebuttal I don't have major concerns regarding this paper. I still recommend acceptance. Claims And Evidence: I think most of the claims in this submission are supported by evidence. Methods And Evaluation Criteria: Yes, they make sense to me. Theoretical Claims: This submission does not include proofs. Experimental Designs Or Analyses: I went through all the experiments, and most of them make sense to me. Supplementary Material: Yes, Sections C and D. Relation To Broader Scientific Literature: Reward scaling has been shown to be effective in prior studies [1, 2]; however, most of these works focus on single-task settings. In contrast, this paper addresses the multi-task setting, where reward scaling poses greater challenges due to varying reward magnitudes across tasks. [1] Wu, Yueh-Hua, et al. "ANS: adaptive network scaling for deep rectifier reinforcement learning models." arXiv preprint arXiv:1809.02112 (2018). [2] Henderson, Peter, et al. "Deep reinforcement learning that matters." Proceedings of the AAAI conference on artificial intelligence. Vol. 32. No. 1. 2018. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ### Strengths - This paper directly addresses the challenge of varying reward distributions across tasks in multi-task RL.
Improper reward scales can result in biased training and suboptimal performance. As an RL practitioner, I would say this is a critical yet frequently overlooked issue in the field of RL. - The ARS framework proposed in this paper includes a history-based reward scaling strategy and a reset mechanism. It's simple but very intuitive. - The ARS framework demonstrates strong empirical results on the Meta-World benchmark, solving the MT10 benchmark from scratch. - The proposed ARS framework seems to be applicable to any off-policy multi-task RL method. The authors demonstrate its applicability by integrating it into various off-policy multi-task approaches. ### Weaknesses - The proposed framework is evaluated solely on Meta-World, a relatively simple benchmark. The conclusions would be more compelling if more challenging tasks were included, such as those in [1, 2, 3]. [1] Zhu, Yuke, et al. "robosuite: A modular simulation framework and benchmark for robot learning." arXiv preprint arXiv:2009.12293 (2020). [2] Mu, Tongzhou, et al. "Maniskill: Generalizable manipulation skill benchmark with large-scale demonstrations." arXiv preprint arXiv:2107.14483 (2021). [3] Chernyadev, Nikita, et al. "Bigym: A demo-driven mobile bi-manual manipulation benchmark." arXiv preprint arXiv:2407.07788 (2024). Other Comments Or Suggestions: N/A Questions For Authors: - I assume the rewards used in this paper are dense. Would varying reward scales still pose an issue if binary sparse rewards were used instead? - The reward scales change dynamically during training. Could this variation in reward scales affect training stability? If not, what mechanisms ensure stability? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer mXD6 for their thoughtful feedback and valuable suggestions, which have significantly improved our paper. We have carefully addressed each comment, strengthened our experimental results, and clarified our key contributions accordingly. Below, we respond in detail to each point raised by the reviewer. **Limited Environments** We appreciate the reviewer’s suggestion of three benchmarks for multi-task RL. We believe that the MT10 and MT50 benchmarks from Meta-World are widely recognized and present significant challenges in multi-task RL. Since no existing method has fully solved both, demonstrating strong performance on these benchmarks effectively highlights the strength of our approach. That said, we agree that including the suggested benchmarks could further strengthen our claims. Benchmarks [2] and [3] are primarily designed for demonstration-driven settings, such as imitation learning and offline RL. While our current focus is on the online setting, we believe that extending our ARS method to offline multi-task RL would be an interesting direction for future work. We thank the reviewer for introducing these valuable benchmarks. **Reward Scale Issue with Sparse Reward Setting** Even in sparse reward settings, task difficulty can vary significantly across tasks, leading to large variations in returns during training. This can cause an uneven reward distribution in the replay buffer. Applying the ARS framework can help mitigate this issue. **Instability caused by varying reward scales** Large variations in reward scales during training can indeed destabilize the learning process. To mitigate this, we update the reward scales only during reset periods—four times in MT10 and six times in MT50. This reset mechanism helps maintain stability by preventing frequent changes in reward scaling.
Additionally, since the networks are initially trained with the established reward scales, this approach further supports stable training throughout. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. I still recommend acceptance.
Summary: This paper introduces Adaptive Reward Scaling, a novel framework for multi-task reinforcement learning that dynamically adjusts reward magnitudes using a history-based scaling strategy and integrates a periodic network reset mechanism to mitigate overfitting and biases toward simpler tasks. The empirical results on Meta-World show some improvements in success rates compared to several established baselines. **Update after rebuttal**--I appreciate the authors’ thorough revision and the additional experiments addressing my concerns (especially the attempt with MOORE). I hope those will be implemented in the revised draft. I have decided to increase my score. Claims And Evidence: The paper claims that the adaptive reward scaling combined with resets improves training stability and overall performance in multi-task settings. These claims are supported by comprehensive experimental evidence, including detailed success rate tables and ablation studies isolating the contributions of each component. The experimental results are convincing, though a few recent methods are missing from the baselines, and additional experiments on diverse benchmarks could further validate the generality of the method. Methods And Evaluation Criteria: The proposed approach is well-motivated and clearly explained. The use of the replay buffer to compute task-specific mean rewards for scaling and the integration of resets to counteract biases are both novel and appropriate for the challenges in multi-task RL. The evaluation criteria—success ratios, effective solvable task ratios, and ablation studies—make sense, and they demonstrate the performance gains in a straightforward way. Theoretical Claims: The paper does not emphasize formal proofs or theoretical guarantees but rather focuses on the algorithmic innovation and empirical validation. Experimental Designs Or Analyses: The experimental design is robust, comparing ARS against several state-of-the-art baselines using standard benchmarks.
The inclusion of ablation studies provides clear insight into the effectiveness of both the reward scaling mechanism and the reset strategy. It would have been better to explore beyond the Meta-World suite, though I understand the current situation; exploring additional environments could further support the claims. Supplementary Material: The supplementary material—comprising additional experiments, hyperparameter details, and extended ablation studies—was reviewed and provides valuable context that reinforces the primary findings of the paper. However, Appendix A is an exact copy of the other paper (Cho et al. 2024). You should not do this. Relation To Broader Scientific Literature: ARS builds on existing work in reward scaling, modular networks, and resetting mechanisms in deep RL. The paper successfully situates its contributions within the broader context of multi-task RL research by comparing with methods such as SAC-MT, PCGrad, and Soft Modular. This connection to prior work is well articulated. Essential References Not Discussed: The authors should include the paper Multi-task Reinforcement Learning with Mixture of Orthogonal Experts (MOORE, ICLR 2024) both in the Related Work section and in the experiments as a baseline. In addition, the paper might benefit from a discussion of very recent advances in reward normalization and adaptive scaling across different RL domains to highlight its broader applicability and limitations. Other Strengths And Weaknesses: *Strengths:* - Clear and innovative formulation of an adaptive reward scaling mechanism. - Thorough empirical evaluation with convincing ablation studies. - Significant improvements on challenging benchmarks (especially hard problems). *Weaknesses:* - Theoretical analysis is somewhat limited. - Evaluation is limited to Meta-World benchmarks; broader testing could enhance the claims.
Other Comments Or Suggestions: - In Table 4, you should consider statistical significance when you bold the method that shows the highest performance. For instance, in easy tasks, the second column is within the range of the third column. - Citation mistake in line 400, right column. - Typo in line 423. Multi-taks → Multi-task Questions For Authors: - Do you have any intuition why SAC-MT works well with ARS in MT10, while Soft Modular works well with ARS in MT50? - Can you provide more insight into the computational overhead and potential limitations introduced by the reset mechanism? - How can ARS be extended or adapted to other multi-task or multi-agent settings? Ethical Review Concerns: Appendix A is an exact copy of Appendix A in "Cho, M., Park, J., Lee, S., & Sung, Y. (2024, July). Hard tasks first: multi-task reinforcement learning through task scheduling. In Forty-first International Conference on Machine Learning. https://openreview.net/forum?id=haUOhXo70o" Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer SzVus for their thoughtful feedback and valuable suggestions, which have significantly improved our paper. We have carefully addressed each comment, strengthened our experimental results, and clarified our key contributions accordingly. Below, we respond in detail to each point raised by the reviewer. **Other Multi-Task RL Baseline (MOORE [1]):** We appreciate the reviewer’s suggestion to include an additional multi-task RL baseline, specifically MOORE [1]. In response, we conducted comparative experiments using MOORE on the MT10 and MT50 benchmarks. In the default setup of the Meta-World v2 environments, the horizon length is set to 500, which was used consistently across all our experiments. However, we noticed that the experiments reported in the original MOORE paper [1] were conducted using a horizon length of 150, making a direct comparison between the reported MOORE results and our ARS method inappropriate. To address this, we attempted to run MOORE with the horizon length set to 500. Unfortunately, we encountered significant computational challenges, as the official implementation requires approximately 200 hours to complete on the MT50 benchmark. To ensure fairness, we evaluated ARS using a horizon length of 150. The ARS results are shown in Table 1 at the following anonymous link: https://sites.google.com/view/icml25ars The MOORE setup (n_expert = 6) on MT50 uses significantly more parameters than our default ARS (400×4). To ensure a fair comparison, we evaluated MOORE against a larger ARS variant (800×4) with a similar number of parameters. Despite the comparable model size, ARS consistently outperformed MOORE on both benchmarks while also requiring dramatically lower computational costs. Specifically, on MT50, MOORE requires approximately 200 hours of training, while ARS (800×4) achieves superior performance in just 22 hours—an order-of-magnitude reduction in training time.
These results demonstrate that our ARS framework achieves significant performance improvements **without incurring substantial computational overhead**. We will include MOORE results in the final version. [1] Hendawy, Ahmed, Jan Peters, and Carlo D'Eramo. "Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts." The Twelfth International Conference on Learning Representations. **Discussion of Recent Advances in Reward Normalization and Adaptive Scaling across Different RL Domains** We thank the reviewer for recommending additional related work. We will expand our related work section accordingly to enhance the paper. **Evaluation is Limited to Meta-World Benchmarks** We acknowledge the importance of including experiments beyond Meta-World. While the MT10 and MT50 benchmarks are widely recognized and challenging in multi-task RL—with no method fully solving both—they serve as a strong demonstration of our approach's effectiveness. Nevertheless, we welcome suggestions for additional benchmarks and will gladly conduct further experiments to strengthen our claims. **Lower Performance of Soft Modular with ARS in MT10** We attribute the initially lower performance of Soft Modular with ARS on MT10 to suboptimal hyperparameter tuning. By increasing the batch size per task from 100 to 128, we significantly improved performance to $98.8 \pm 1.3$. **Computational Overhead and Potential Limitations Introduced by the Reset Mechanism** The reset mechanism introduces minimal computational overhead, occurring only during reset periods. With our reset strategy (updating re-initialized networks 1,000 times per reset), the total number of updates increases only slightly from 2,000,000 to 2,006,000—an **increase of just 0.3%**.
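The quoted 0.3% follows directly from the reset schedule stated in the rebuttals (six resets on MT50, 1,000 updates per re-initialized network, on top of 2,000,000 base updates); a quick arithmetic check:

```python
# Reset-overhead arithmetic from the rebuttal above.
base_updates = 2_000_000
resets, updates_per_reset = 6, 1_000   # six reset periods in MT50
extra_updates = resets * updates_per_reset
total_updates = base_updates + extra_updates
overhead_pct = 100 * extra_updates / base_updates
```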
**How can ARS be extended or adapted to other multi-task or multi-agent settings?** As illustrated in Algorithm 1 and Figure 14, the ARS framework is easily adaptable to any off-policy multi-task RL method by incorporating an adaptive reward scaling factor based on the replay buffer. In multi-agent settings, we can consider either individual rewards per agent or a global reward. With a global reward, applying ARS is challenging since it effectively becomes a single task. In contrast, individual rewards allow each agent to be treated as a separate task, making ARS more applicable. However, caution is needed—unlike multi-task learning where tasks are independent, agents in multi-agent systems are interdependent. Thus, careful design of the adaptive reward scaling factor is crucial. **Limited Theoretical Analysis** We agree with the reviewer that the theoretical analysis is limited. However, we believe the extensive experimental validation provides strong support for the effectiveness of ARS. We plan to provide a comprehensive theoretical analysis in future work. **Other Comments Or Suggestions:** We thank the reviewer for identifying typos, issues with bold symbols, and citation errors. These will be corrected thoroughly in the revised paper.
Learning to Route LLMs with Confidence Tokens
Accept (poster)
Summary: The paper introduces Self-REF, a lightweight fine-tuning framework designed to teach large language models (LLMs) to express confidence in their answers through confidence tokens. These learned tokens indicate whether the model is confident or uncertain about its prediction, improving reliability and performance in downstream routing and rejection learning tasks. The authors demonstrate that Self-REF outperforms traditional approaches, such as verbalizing confidence or using token probabilities, on multiple public datasets by enabling more accurate routing to stronger models and better rejection of uncertain answers. The method achieves improved system efficiency and calibration while maintaining model performance. ## update after the rebuttal During the rebuttal, the authors have addressed all my concerns. I have decided to increase my score from 3 to 4. Claims And Evidence: The claims in the submission are supported by clear and convincing evidence. The paper presents quantitative results across four public datasets (MMLU, OpenbookQA, GSM8K, MedQA), showing that Self-REF improves routing efficiency by reducing the number of queries sent to larger models while maintaining accuracy (e.g., Llama3-8B routes only 39% of queries to match Llama3-70B's performance). Additionally, ROC curves demonstrate superior rejection learning, and calibration metrics (ECE, Brier Score, Cross-Entropy) confirm that confidence tokens align well with correctness. However, while the authors discuss trade-offs in routing and rejection, they do not extensively analyze potential failure cases, such as when Self-REF misidentifies confidence, leading to incorrect rejections or unnecessary escalations. Methods And Evaluation Criteria: The proposed method and evaluation criteria are well-aligned with the problem of confidence-based routing and rejection learning in LLMs.
The use of confidence tokens is a novel yet intuitive approach that integrates seamlessly into autoregressive models, and the evaluation on standard QA and reasoning benchmarks (MMLU, OpenbookQA, GSM8K, MedQA) allows for fair comparison with prior art. The routing and rejection tasks are practical and relevant, as they reflect real-world scenarios where LLMs need to manage uncertainty efficiently. As stated above, an error analysis of model behavior in confidence-based routing would help gain additional insights on the effectiveness of Self-REF. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is generally strong. The routing experiments are well-structured, using multiple confidence thresholds to analyze trade-offs between accuracy and efficiency. The rejection learning task is also appropriately tested with artificially modified datasets where the correct answer is removed. Also, the calibration analysis (ECE, Brier Score, CE) helps measure model confidence alignment with correctness. On the other hand, a much stronger analysis would include a systematic investigation of failure cases to identify patterns in when and why Self-REF misclassifies confidence. While the paper provides overall accuracy and calibration metrics, it does not explore whether certain types of questions, knowledge domains, or reasoning patterns lead to systematic overconfidence or underconfidence. For instance, if the model consistently misroutes some types of questions, this could highlight fundamental limitations in its confidence estimation. Identifying such failure patterns would not only improve interpretability but also inform targeted improvements to Self-REF, such as adjusting confidence token fine-tuning strategies or incorporating adversarial training. Also, an analysis of the relation between the confidence threshold and the routing rate (more below in the questions section) could help get a better understanding of model behavior when Self-REF is used.
With the analysis above, I consider this paper to deserve a score of 4 instead of its current 3. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper builds on prior work in uncertainty quantification, LLM routing, and rejection learning, but introduces confidence tokens as a mechanism for end-to-end confidence estimation. Previous methods, such as logit-based calibration and verbalized uncertainty prompts, show worse alignment between confidence scores and correctness, whereas Self-REF fine-tunes the LLM itself to embed confidence directly. Unlike external classifiers for routing, Self-REF integrates confidence estimation within the autoregressive decoding process, making routing decisions more adaptive and model-aware. Additionally, it improves on LLM rejection learning by enabling confidence-based abstention without requiring a separate rejection model or new loss functions. Essential References Not Discussed: All the relevant references needed to understand the paper are discussed. -- I found this paper that came out after the ICLR submission deadline, but it could be a nice reference to add to the related work section: Dhananjay Ashok, Jonathan May, Language Models Can Predict Their Own Behavior Other Strengths And Weaknesses: The paper is generally well written and easy to follow. Other Comments Or Suggestions: Typos: - line 117: retarded? should be a typo - line 349: bed is a typo Questions For Authors: - What data did you use in the setup when you mention you trained on MMLU? I looked at Table 4 in the appendix but I'm not sure about the numbers given that, from the MMLU paper, I get "The few-shot development set has 5 questions per subject, the validation set may be used for selecting hyperparameters and is made of 1540 questions, and the test set has 14079 questions.".
In general, I'd move some information about training the confidence tokens from the appendix to the main body of the work, or at least better discuss it in the paper. - In Figure 2, I see a "random route to 70B" baseline that I can't find described. Can you elaborate on it? - I am slightly confused by the quantile thresholds in Section 4.2 and their relation to the routing rate in Figure 2. Specifically, how do these two quantities interact? For example, in Figure 2(a), what was the exact threshold value (t) that resulted in a routing rate of 0.4? More generally, it would be helpful to explicitly discuss the relationship between t and routing rate, as this would give practitioners a clearer understanding of how to tune Self-REF for different trade-offs between accuracy and efficiency. - Can you clarify what you mean with the following sentence about the in-context learning for llama3: "All experiments utilize Llama3-70B-Instruct with only its strong in-context learning capabilities during instance routing. The routing decisions are determined by the probabilities" Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the valuable feedback and reference. We've incorporated them into our related work and made the suggested expository improvements in our revised paper. **[Q-1] More analysis on a systematic investigation of failure cases to identify patterns in when and why self-REF misclassifies confidence.** **[A-1]** This is a good suggestion, we previously included some case studies of successful routing in Appendix D, and will add more failure cases as well. Analyzing the top 5 categories of correct/incorrect and overconfident/underconfident predictions, we have: - Predict Correctly+<UN> (underconfident in its predictions): (1) computer_security; (2) high_school_biology; (3) high_school_european_history; (4) human_sexuality; (5) miscellaneous - Predict Wrongly+<CN> (overconfident in its predictions): (1) college_computer_science; (2) conceptual_physics; (3) high_school_computer_science; (4) high_school_microeconomics; (5) jurisprudence - Predict Correctly+<CN>: (1) international_law; (2) college_biology; (3) moral_disputes; (4) philosophy; (5) us_foreign_policy - Predict Wrongly+<UN>: (1) abstract_algebra; (2) anatomy; (3) college_chemistry; (4) college_medicine; (5) econometrics Analyzing these categories, we note: - Predict Correctly+<UN> (underconfident in its predictions): The model often hesitates with context-heavy subjects due to their ambiguity and need for broader reasoning. "miscellaneous" reflects general uncertainty in non-standard topics. - Predict Wrongly+<CN> (overconfident in its predictions): Many of these involve technical, structured domains, where the model shows overconfidence, likely due to familiarity from training. However, it may struggle with edge cases and nuanced reasoning, especially in areas like jurisprudence. 
- Predict Correctly+<CN>: These topics rely on broad conceptual knowledge rather than strict calculations, and the model appears well-calibrated, likely due to strong training data coverage or clearer signals of correctness. - Predict Wrongly+<UN>: These highly specialized, detail-heavy subjects require precise recall or deep understanding, and the model’s uncertainty may reflect an awareness of its limitations in recalling detail-heavy content to answer questions. **[Q-2] What data did you use in the setup when you mention you trained on MMLU? In general, i'd move some information about training the confidence tokens ...** **[A-2]** We use a randomly sampled subset of the official MMLU training set, where ground truth answer choices are augmented with confidence tokens when fine-tuning Self-REF on MMLU (Algorithm 1, Section 3.2). We will move more details about the training setup from the appendix into section 4.1. **[Q-3] In figure 2 I see random route to 70B baseline that I can't find it described. Can you elaborate on it?** **[A-3]** The random routing approach is a naive baseline that uniformly at random routes to the 70B model at the specified rate. When the random routing rate is 0.0, then it gives the performance of the small LM, and when the random routing rate is 1.0, it gives the performance of routing to the 70B model entirely. We will update our paper with these details. **[Q-4] I am slightly confused by the quantile thresholds in Section 4.2 and their relation to the routing rate in Figure 2. Specifically, how do these two quantities interact? for example, in Figure 2(a), ...? More generally, it would be helpful to explicitly discuss the relationship between t and routing rate, as ... trade-offs between accuracy and efficiency.** **[A-4]** To better analyze the routing performance, we set the thresholds t at 20-quantiles, as described in Section 4.2. 
For instance, the threshold t corresponding to a routing rate of 0.4 is the 40th percentile of confidence scores across all input queries. Practically, one could observe the 20-quantiles of the distribution, and choose thresholds that should empirically correspond to rough routing rates, and then monitor these thresholds over time. Note that Self-REF fine-tuning does not require committing to a particular tradeoff, as instead of relying directly on the sampled tokens, we instead extract confidence scores from the logits of the confidence tokens, thus giving us the ability to threshold this score and control how often to route. **[Q-5] Can you clarify what you mean with the following sentence about the in-context learning for llama3: "All experiments utilize Llama3-70B-Instruct with only its strong in-context learning capabilities during instance routing..."** **[A-5]** This refers to the fact that we did not fine-tune Llama3-70B-Instruct (the larger, more expensive LLM) in the routing setting. **[Q-6] Missing reference, formats, and Typos.** **[A-6]** Thank you for the valuable comments and references. We will add this to our related work section on rejection learning. The suggestions for formats and typos are fixed in our next version. --- Rebuttal Comment 1.1: Comment: Thanks for your response. Things are clearer now. I don't see any major weakness with this work and have decided to increase my score to 4
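The quantile-based thresholding described in [A-4] can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the uniform draws stand in for confidence scores extracted from the confidence-token logits, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical confidence scores for 1000 queries; in Self-REF these would come
# from the logits of the confidence tokens rather than uniform draws.
scores = rng.uniform(size=1000)

# 20-quantile thresholds: the threshold for a target routing rate r is the
# r-th percentile of the score distribution.
rates = np.linspace(0.05, 0.95, 19)
thresholds = np.quantile(scores, rates)

def route_mask(scores, threshold):
    """Route queries whose confidence falls below the threshold to the large model."""
    return scores < threshold

# By construction, the empirical routing rate tracks the target quantile.
empirical = [route_mask(scores, t).mean() for t in thresholds]
```

Raising the threshold routes more queries, so the empirical routing rates are nondecreasing along the quantile grid, which is what makes the accuracy-vs-cost curves in Figure 2 well defined.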
Summary: Self-REF is a lightweight method for training a language model to show when its answers are correct or incorrect by using “confidence tokens.” The approach starts with a base model that generates predictions and labels each instance as “confident” or “unconfident” based on the answer’s correctness, creating an augmented dataset. The model is then fine-tuned on these labeled samples and learns both to provide the correct answer and to tag it appropriately. Finally, continuous confidence scores are computed by comparing the probabilities of the “confident” vs. “unconfident” token at the end of each response. They propose two applications. First, low-confidence queries can be routed to a larger model to cut costs without sacrificing accuracy. Second, when no larger model is available, the system can reject responses it deems untrustworthy. Across MMLU, OpenbookQA, GSM8K, and MedQA, routing only uncertain queries matches the strong model’s accuracy while reducing latency and cost; rejecting low-confidence outputs also helps avoid incorrect claims for questions lacking a valid answer. Claims And Evidence: * Claim that the model’s confidence tokens always align with correctness: The paper states that once fine-tuned, the model should consistently output <CN> whenever it is correct and <UN> otherwise. In practice, even well-calibrated methods can yield discrepancies between predicted confidence and true correctness. * Claim of consistently “lightweight” overhead: The resulting method would yield a cascading method, where we first have to monitor the smaller model's output and then route to the bigger one to get the final answer. Methods And Evaluation Criteria: Overall, the methods and evaluation criteria do generally make sense given their focus on practical improvements in downstream tasks. However, I have a few concerns: * The assumption that <CN> tokens represent correctness and <UN> represent incorrectness lacks a clear theoretical grounding. 
The paper implicitly assumes confidence directly correlates with correctness, though confidence in practice might not always reflect correctness accurately, potentially limiting generalizability. * The paper partially addresses imbalance by subsampling unconfident tokens with a tunable parameter, but does not elaborate on handling extreme cases where the base model might be consistently correct or incorrect, which could severely skew training data. In fact, the approach depends heavily on the correctness of the base model to annotate confidence tokens. If the base model has substantial weaknesses or biases, these may propagate through fine-tuning, limiting improvements or perpetuating biases. Theoretical Claims: NA Experimental Designs Or Analyses: Yes, I would suggest studying additional baselines: * I'm curious if authors explored a baseline where the model generates a confidence token before producing an answer. This can reduce computation by preventing full generation when confidence is low. Also a learned router baseline would provide insight * Explore other methods for labeling confidence: * Estimating confidence based on consistency across multiple stochastic samples from the model. * External calibration techniques, e.g. entropy-based uncertainty Supplementary Material: Yes, A and B Relation To Broader Scientific Literature: The work addresses an important problem of allowing LLMs to self-assess the confidence of their predictions, and their applications to LLM routing and rejections. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
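The continuous confidence score described in the summary, obtained by comparing the probabilities of the “confident” and “unconfident” tokens, amounts to a softmax restricted to the two token logits. This is a minimal sketch with hypothetical logit values, not the authors' code.

```python
import math

def confidence_score(logit_cn: float, logit_un: float) -> float:
    """Two-way softmax over the confidence tokens: P(<CN>) / (P(<CN>) + P(<UN>))."""
    return 1.0 / (1.0 + math.exp(logit_un - logit_cn))

# Equal logits give a score of 0.5; a higher <CN> logit pushes the score toward 1.
balanced = confidence_score(0.0, 0.0)   # 0.5
confident = confidence_score(2.0, 0.0)  # > 0.5
```

Because the score is continuous rather than a sampled token, it can be thresholded at any quantile to trade off routing rate against accuracy.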
Rebuttal 1: Rebuttal: Thank you for the detailed feedback. We respond to each point below, and will update the discussion accordingly: **[Q-1] The assumption that <CN> tokens represent correctness and <UN> represent incorrectness lacks a clear theoretical grounding. The paper implicitly assumes confidence directly correlates with correctness...** **[A-1]** A core contribution of this paper is utilizing confidence scores for rejection and routing downstream. Unlike prior works that derive uncertainty scores from logits, verbalized outputs, or re-sampling, we train our notion of confidence into the model, teaching it to reflect on its answer after producing it. The notion of confidence is grounded in the probability that a prediction is correct, i.e., $P(\text{<CN>} |X,Z) = P(Y=Z|X,Z)$, where CN is the confidence token, $X$ is the question, $Z$ is the predicted answer, and $Y$ is the true answer. This is a more direct form of confidence than other techniques, which often instead reflect consistency in the answer. For example, logits would instead reflect $P(Z|X)$, with no relation to $Y$. As noted in our paper (Section 1 and 5.3), calibrated uncertainty metrics are not necessarily correlated with correctness. In our paper, we assess both the utility of the confidence scores as well as their calibration. **[Q-2] If the base model has substantial weaknesses or biases, these may propagate through fine-tuning, limiting improvements or perpetuating biases.** **[A-2]** We respond using two perspectives: - The primary goal of Self-REF is to help the base model identify its existing weaknesses while maintaining performance, thereby enabling more effective routing. - One possible approach for extreme incorrectness is as follows: first, fine-tune the base model on the downstream task to improve its task-specific capabilities; then, in a second stage, apply the Self-REF framework to teach the model to route effectively based on uncertainty. 
This is a useful extension of Self-REF that we will add to the discussion. **[Q-3] Explore other methods for labeling confidence (consistency across sampling and external calibration).** **[A-3]** As mentioned in Q-1, the goal of Self-REF is to produce confidence tokens useful for downstream settings such as routing and rejection, boosting correctness of the overall system. Well-calibrated confidence scores do not always correlate with correctness [1, 2] (Section 1 and 5.3). Two toy examples explain this misalignment intuitively: assume we have a binary classification task with predicted probabilities (see below). Example 1 achieves lower accuracy but has a lower ECE score, whereas Example 2 achieves higher accuracy but a higher ECE score. This demonstrates that calibration metrics are not correlated with the correctness of the prediction. Similarly, consistency-based uncertainty is not necessarily aligned with downstream correctness, e.g., when one has highly consistent incorrect answers. This misalignment can degrade routing performance when such signals are used for routing. 
```
Bins = 2, ground truth = [0, 0, 1, 1]

Example 1: predicted prob. of "1" = [0.5, 0.5, 0.5, 0.5] -> ECE(↓) = 0%  -> accuracy(↑) = 50%
Example 2: predicted prob. of "1" = [0.4, 0.6, 0.9, 0.9] -> ECE(↓) = 20% -> accuracy(↑) = 75%
```
[1] Huang, et al. "Look before you leap: An exploratory ..." arXiv 2023. [2] Yona, et al. "Can Large Language ... Uncertainty in Words?." arXiv 2024. **[Q-4] I'm curious if authors explored a baseline where the model generates a confidence token before producing an answer. Also a learned router baseline would provide insight.** **[A-4]** To the best of our knowledge, there is no existing baseline that generates a confidence token prior to producing the answer. In multiple-choice QA, token generations are relatively short, making it feasible to predict a confidence token after producing the answer. 
This allows the model to condition its confidence estimation directly on the generated output, potentially resulting in more accurate confidence scores for routing. However, for tasks involving longer responses (e.g. reasoning), an alternative approach to improve efficiency could involve an early-stopping mechanism for confidence tokens. The model might produce a confidence token midway through its answer generation, allowing an earlier routing decision. We have additional experiments with a learned confidence-based router baseline: OOD-Probe [1] to assess its routing performance from Mistral-7B to Llama3-70B in the MMLU dataset. The results are shown in the table below. We provide the accuracy in different routing ratios, and observe that Self-REF outperforms the new baseline. |Route_Ratio|0%|20%|40%|60%|80%|100%|least_routing_ratio| |-|-|-|-|-|-|-|-| |OOD-Probe|0.50|0.61|0.66|0.68|0.74|0.74|80%| |Self-REF|0.55|0.64|0.68|0.72|0.74|0.74|70%| [1] Mahaut, et al. "Factual confidence of LLMs:..." ACL 2024.
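The ECE arithmetic in the toy example of [A-3] can be reproduced with a short script. This uses two equal-width bins over the predicted class-1 probability, which is one common ECE convention and not necessarily the exact formula the authors had in mind.

```python
def ece_and_accuracy(probs, labels, n_bins=2):
    """Expected calibration error over the class-1 probability, plus 0.5-threshold accuracy."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # p = 1.0 falls into the top bin
        bins[idx].append((p, y))
    n = len(probs)
    ece = 0.0
    for items in bins:
        if not items:
            continue
        conf = sum(p for p, _ in items) / len(items)  # mean predicted prob of "1"
        freq = sum(y for _, y in items) / len(items)  # empirical rate of "1"
        ece += len(items) / n * abs(conf - freq)
    acc = sum((p >= 0.5) == y for p, y in zip(probs, labels)) / n
    return ece, acc

labels = [0, 0, 1, 1]
ece1, acc1 = ece_and_accuracy([0.5, 0.5, 0.5, 0.5], labels)  # ECE = 0.0
ece2, acc2 = ece_and_accuracy([0.4, 0.6, 0.9, 0.9], labels)  # ECE = 0.2, accuracy = 0.75
```

Under this convention the constant-0.5 predictor is perfectly calibrated yet uninformative, while the sharper predictor is less calibrated but more accurate, matching the point of the rebuttal.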
Summary: The authors propose a lightweight training strategy to teach LLMs to express confidence in whether their answers are correct in a reliable manner. Using this, the authors build a router algorithm that reduces latency and improves overall QA performance. Claims And Evidence: The claims are well stated and supported. Methods And Evaluation Criteria: All three RQ1, RQ2, RQ3 benchmarks and datasets make sense and are well analyzed. Theoretical Claims: Not applicable Experimental Designs Or Analyses: Experimental designs and analyses are valid. In particular, significantly reducing the latency metric improves the validity of the results provided. Supplementary Material: The supplementary material is good. The authors provide valuable additional insights that enhance the understanding of the main results presented in the paper. Relation To Broader Scientific Literature: The routing problem and confidence tokens are a very novel research area. All related literature is well covered. Essential References Not Discussed: None Other Strengths And Weaknesses: Please see other sections Other Comments Or Suggestions: Paper is well written. Questions For Authors: Is your framework only able to predict the confidence token at the end of the answer? If so, I see a drawback in terms of routing capabilities, since the model needs to generate the full response before routing it to another larger LLM. Do the authors see any way to get the confidence tokens before the full answer is generated, using the proposed framework? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the valuable feedback and ideas. We have updated our discussion accordingly. **[Q-1] Your framework is only able to predict the confidence token at the end of the answer? Is that correct? If so, I see the drawback in terms of routing capabilities. Since the model need to generate the full response before going to route the response to another larger LLM? Do authors see any way to get the confidence tokens before the full answer is generated using the proposed framework?** **[A-1]** Thank you for the insightful feedback. In question-answering tasks, the cost of generation is relatively low, and predicting a confidence token after generating the answer allows the model to condition its confidence on the output, potentially leading to more accurate uncertainty estimation for routing. However, in other long-response tasks, such as LLM-based reasoning, a promising direction could be to develop an early-stopping approach for confidence token generation. The model may emit a confidence token partway through answer generation in this setup, enabling an earlier routing decision to stronger models. This strategy could be particularly beneficial in complex reasoning scenarios, and we consider it a valuable direction for future work.
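The early-routing direction sketched in [A-1] could look roughly like the following loop. Everything here is a hypothetical stand-in (the `step_fn` toy model, the mid-generation `<UN>` check, the routing callback), since the paper does not implement this mechanism.

```python
def generate_with_early_routing(prompt, step_fn, route_fn, max_tokens=32):
    """Emit tokens one at a time; if the model produces <UN> partway through,
    abandon the draft and route the prompt to a stronger model immediately."""
    draft = []
    for _ in range(max_tokens):
        token = step_fn(prompt, draft)
        if token == "<UN>":
            return route_fn(prompt), True  # routed early, before finishing
        if token == "<EOS>":
            break
        draft.append(token)
    return " ".join(draft), False

# Toy stand-ins: the small model gives up after two tokens on "hard" prompts.
def step_fn(prompt, draft):
    if "hard" in prompt and len(draft) == 2:
        return "<UN>"
    return ["answer", "is", "42", "<EOS>"][len(draft)]

def route_fn(prompt):
    return "large-model answer"
```

The benefit over end-of-answer confidence is that the expensive model can take over before the small model finishes a long (and likely wrong) generation.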
Summary: The paper proposes Self-REF, a training strategy that adopts LoRA to fine-tune an LM on a dataset augmented with confidence tokens, based on prediction correctness. Self-REF enhances downstream applications like model routing and answer rejection by leveraging the learned confidence token scores. Claims And Evidence: Please refer to the "Other Strengths And Weaknesses" section. Methods And Evaluation Criteria: Please refer to the "Other Strengths And Weaknesses" section. Theoretical Claims: N/A. No proofs or theoretical claims that require checking. Experimental Designs Or Analyses: Please refer to the "Other Strengths And Weaknesses" section. Supplementary Material: N/A. No additional supplementary material that requires reviewing. Relation To Broader Scientific Literature: Please refer to the "Other Strengths And Weaknesses" section. Essential References Not Discussed: Please refer to the "Other Strengths And Weaknesses" section. Other Strengths And Weaknesses: **Strengths** - The proposed Self-REF method is straightforward yet effective in learning more calibrated confidence scores, utilizing a simple data augmentation strategy. - Overall, the paper is well-written and easy to follow, with comprehensive and detailed experiments that demonstrate strong empirical results across multiple datasets. **Weaknesses** - Missing reference to R-Tuning [1], which similarly constructs an augmented dataset consisting of certain and uncertain sets based on the correctness of LM predictions, and appends corresponding tokens as supervised signals to fine-tune a more calibrated LM capable of refraining from answering unknown questions. This work is not discussed in the paper, and could be considered a baseline. - Self-REF requires a dedicated training/validation set and a fine-tuning stage for different downstream tasks/datasets, which limits its practical usage compared to zero-shot baselines. 
- It is unclear how much transferability Self-REF holds beyond similar tasks such as OpenbookQA --> MMLU. It would be very interesting to see how Self-REF performs when fine-tuning in a multi-task setup with a collective of datasets.   [1] Zhang, Hanning, et al. *R-tuning: Instructing large language models to say ‘I don’t know’.* NAACL 2024 Other Comments Or Suggestions: Presentation: The figure size and font size (e.g., in ```Figure 2```) might be too small, potentially affecting readability. Questions For Authors: 1. How was the optimal parameter $\alpha$ for the <UN> data determined? 2. How does the ratio of <CN>:<UN> in dataset construction affect the results? 3. In ```Figure 2```: why do different models/baselines/datasets have different number of datapoints? For instance, Mistral only has 3 data points for the "Verbalizing uncertainty" baseline curve on MMLU. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the valuable feedback and reference. We've incorporated them into our related work and made the suggested expository improvements in our revised paper. **[Q-1] Self-REF requires a dedicated training/validation set and a fine-tuning stage for different downstream tasks/datasets, which limits its practical usages compared to zero-shot baselines.** **[A-1]** Our method targets the scenario where one would like to optimize performance for a particular downstream task while (1) being able to fine-tune smaller specialized models, (2) reducing monetary or latency cost, and (3) maintaining comparable performance on the downstream task. Thus, we agree that fine-tuning is necessary in self-REF. However, we argue that it is often the case that one may want to fine-tune a smaller model for one's specific task of interest. Exploring broader use cases is a valuable direction for future work. An interesting extension would be to explore how self-REF can extend to multi-task training for general use cases. **[Q-2] It is unclear how much transferability Self-REF holds beyond similar tasks such as OpenbookQA --> MMLU. It would be very interesting to see how Self-REF performs when fine-tuning in a multi-task setup with a collective of datasets.** **[A-2]** We agree that fine-tuning self-REF with a mixture of tasks augmented with confidence tokens is a promising extension. However, we would like to emphasize that the nature of uncertainty can vary significantly across tasks. For instance, uncertainty in question-answering may differ from that in code generation or other complex reasoning tasks. While QA tasks often have well-defined ground truth, other tasks may require soft confidence labels (tokens), which could affect the effectiveness of confidence-based routing. We value and acknowledge the potential of this direction and will consider incorporating a broader set of datasets under a multi-task framework in future work. 
**[Q-3] How was the optimal parameter $\alpha$ for the <UN> data determined?** **[A-3]** $\alpha$ is a hyperparameter in our work, and we select the optimal $\alpha$ based on the validation set. **[Q-4] How does the ratio of <CN>:<UN> in dataset construction affect the results?** **[A-4]** We fixed the overall dataset size and treated the ratio between <CN> and <UN> samples as a hyperparameter to optimize performance for a particular downstream task with fine-tuning. On one hand, including more <CN> samples helped fine-tune the model to better perform the downstream task; on the other hand, a sufficient number of <UN> samples was necessary to teach the model when to express uncertainty. An imbalance—either too many <UN> samples, which could degrade task performance, or too many <CN> samples, which could reduce the quality of uncertainty estimation—can negatively affect the model. To balance this trade-off, we experimented with ratios of 1:1, 1:2, 1:3, 1:4, and 1:5 and selected the optimal setting based on each task’s validation set. **[Q-5] In Figure 2: why do different models/baselines/datasets have different number of datapoints? For instance, Mistral only has 3 data points for the "Verbalizing uncertainty" baseline curve on MMLU.** **[A-5]** The points correspond to 20-quantiles, as described in Section 4.2, which ideally yield 20 distinct and equally spaced routing rates, assuming all confidence scores are unique. However, certain methods for extracting confidence scores, such as verbalizing uncertainty, can suffer from mode collapse, where the scores cluster around a limited set of numbers. As a result, a substantial portion of the confidence scores is identical, leading to fewer than 20 distinct routing rates in practice. **[Q-6] Missing reference, formats, and Typos.** **[A-6]** Thank you for the valuable comments and references. We will add this to our related work section on rejection learning. 
The suggestions for formats and typos are fixed in our next version.
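The augmentation step described in the summary and the ratio discussion in [A-4] above (label each example by the base model's correctness, then subsample the <UN> pool) can be sketched as follows. The target format and the `alpha` subsampling are illustrative assumptions, not the paper's exact recipe.

```python
import random

CN, UN = "<CN>", "<UN>"

def augment_with_confidence(examples, predict, alpha, seed=0):
    """Tag each (question, answer) pair with <CN> if the base model answers it
    correctly and <UN> otherwise, then keep only a fraction alpha of the <UN>
    pool to balance the two classes."""
    confident, unconfident = [], []
    for question, answer in examples:
        token = CN if predict(question) == answer else UN
        (confident if token == CN else unconfident).append((question, f"{answer} {token}"))
    rng = random.Random(seed)
    kept = rng.sample(unconfident, int(alpha * len(unconfident)))
    return confident + kept

# Toy base model that always answers "A": correct on exactly half of these questions.
examples = [(f"q{i}", "A" if i % 2 == 0 else "B") for i in range(10)]
augmented = augment_with_confidence(examples, predict=lambda q: "A", alpha=0.4)
```

Sweeping `alpha` (or equivalently the <CN>:<UN> ratio) and selecting on a validation set mirrors the hyperparameter search described in [A-4].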
Understanding the difficulties of posterior predictive estimation
Accept (poster)
Summary: The authors study the problem of Monte Carlo estimation of the density of the posterior predictive distribution for (approximate) Bayesian inference. They show that simple Monte Carlo estimation can have a low signal to noise ratio (SNR) if the training data and test data are substantially different, the dimension of the problem is high or the test set is much larger than the training set. They propose an importance sampling approach based on learning a sampling distribution using an Importance Weighted Evidence lower bound, and provide empirical support for this method at least partially mitigating the observed problems with low signal to noise ratio when using simple Monte Carlo estimation to estimate the (predictive) posterior density. Claims And Evidence: *Claim: Bias in estimating the log predictive density can lead to unreliable comparisons between methods.* Figure 1 illustrates the bias in estimating the log predictive density. While bias is clearly illustrated, it would be more compelling if the authors provided an example where the bias leads to issues in comparison between the methods. In the current example, it seems like we would reach the same conclusion, regardless of how many samples are used in estimation. And it seems reasonable to believe this is the correct conclusion (that flow VI is leading to higher predictive density). Also the standard error bars shown are disjoint for all numbers of samples, so it really seems comparison in this case is reliable, even if estimation is poor. *Claim: SNR is low when training and test data are 'mismatched'* Figures 2 and 8 show simulations supporting this claim. The linear regression case also provides support for this claim. *Claim: SNR decreases as the dimension of the latent space increases* Figure 2 and figure 8 (right) shows a simulation supporting this claim. Equation 5 provides a heuristic argument for this claim. I found the sketch of the heuristic argument in the main text incomplete. 
In particular, it wasn't clear to me what was meant by "the corresponding Bayesian CLT approximations". The link to section D provided didn't link to the appendix. When I went to section D of the appendix, I expected to see a clear statement of what is intended by A1 (i.e. what Bayesian CLT is being used in this heuristic, along with a citation). But I couldn't find this. Please provide additional detail or point me to where it is already in the text. I strongly suspect some assumptions on the data-generating process for both test and train locations are needed for equation 70 (in particular, something like that all the data is sampled iid from some distribution). I also find assumption A2 difficult to understand at an intuitive level. Could you give concrete examples where it holds/does not hold? Currently I think the argument is so heuristic that it does not provide significant insight beyond the simulations. The linear regression case provides some support for this claim in a particular concrete (although somewhat limited) setting. *Claim: SNR is low when the test set is large compared to the training set* Figure 2 and 8 show simulations supporting this claim. The heuristic support for this claim follows the same argument as for the claim about dimension, and so my earlier comments apply. The linear regression case provides some support for this claim in a particular concrete setting. *Claim: Adaptive importance sampling can increase SNR relative to simple Monte Carlo* Tables 1,2 and 3 as well as figure 8 support this claim. Methods And Evaluation Criteria: The authors evaluate standard Monte Carlo and the proposed importance sampling in terms of expected log predictive density and estimated signal to noise ratio. While there is not a ground truth expected log predictive density available in many of the experiments, this metric is still useful as 1.) 
it is commonly used in practice for model comparison between methods, and so the quality of its estimation is of interest. 2.) the direction of bias in estimating this quantity is known, and so higher values must represent lower amounts of bias. The signal to noise ratio also seems a useful metric to report since it is the focus of most of the discussion of the paper. Theoretical Claims: I looked over the appendices for the proof of theorems 2.1 and 2.2. I think the argument structure makes sense, and I did not see errors. Overall the theoretical claims seem believable, although in certain places heuristic arguments made are imprecise (see details in other boxes). Experimental Designs Or Analyses: The simulation studies are reasonable illustrations of the arguments made in the paper. I looked over the model details provided in section 7, and did not have significant concerns. Supplementary Material: What is being defined in definition C.2? Is it just equation 26? It seems a claim is also being made about the form of the posterior distribution. While this is true, it should not be contained in a definition environment. Equation formatting could be improved throughout. Line breaks are not needed in the first equality (e.g. eqns 33-34 could be on a single line, as well as eqns 36-37). Relation To Broader Scientific Literature: To the best of my knowledge, there is not a significant amount of literature studying the quality of Monte Carlo estimation of the (approximate) posterior based on independent samples. The proposed method in the paper builds on ideas from importance weighted variational inference. Essential References Not Discussed: I did not see significant gaps in discussion of related literature (Appendix B). However, I am not familiar enough with several areas related to this paper in order to feel confident nothing has been missed. Other Strengths And Weaknesses: Weakness: Some sections of the text did not have clear takeaways. 
For example, the discussion of approximate inference (section 3) gave formulas for the SNR, but didn't provide insight into when I should expect the SNR to be high or low with approximate inference. In the discussion for exponential families, there is a condition discussed in terms of statistics lying on a particular ray, but I didn't see how this can be related back to quantities related to the analysis (the available data, the prior and likelihood, the approximate inference performed). Other Comments Or Suggestions: The proof sketch for Theorem 2.1 repeatedly refers to steps as *simple*. I don't think this is useful. Either the reader will understand how this step is done and agree it is simple, or they will not, in which case telling them it is simple is not helpful (and discouraging). appendix A.3, typo: “margina”. Punctuation is missing in many equations in the appendix (e.g. eqn 34, 39, 88, 91, 107, 113). Eqns 40 and 108 shouldn't be there as they are empty (likely resulting from an additional newline command). Questions For Authors: What is the main (practical) insight I am supposed to take from section 2.2? I didn't see how the description around "looseness" in Jensen's inequality supported the main claims of the paper, nor how to turn it into practical insights about when I would expect the SNR to be low/high. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments. We will address the typos, equation formatting, and other minor suggestions as is and offer comments to other concerns below. >... Bayesian CLT approximations ... provide additional detail ... While there are several versions of Bernstein-von Mises, we believe Theorem 10.1 from Vaart (1998) suffices for our use case. Note that Theorem 10.1 states the result in terms of the true parameters. For the extension to the maximum likelihood estimate, see the discussion between Lemma 10.3 and Lemma 10.4. In short, we assume that the posterior $p(z \vert \mathcal D) \approx \mathcal N(z_{\textrm{MLE}}, \frac{1}{|\mathcal{D}|} I^{-1}(z_{\textrm{MLE}})),$ where $I$ is the estimate of the Fisher information matrix evaluated at the maximum likelihood estimate; such that, $I^{-1} (z_{\textrm{MLE}}) = -(\frac{1}{|\mathcal D|} \nabla^2_z \log p(\mathcal D \vert z))^{-1}$. Since the size of the dataset cancels out, we get the expressions in equations 65, 66, and 67. In the revision, we will add the discussion on Bernstein-von Mises and the above clarification after Corollary D.2. [1] Vaart, A. W. van der. "Bayes Procedures." _Asymptotic Statistics_. Cambridge: Cambridge University Press, 1998. >Assumptions on the data-generating process for ... equation 70. In spirit, this assumption is not that different from assuming the data-generating distributions are the same. The quantity in eq. 70 ($\frac{1}{|\mathcal D|}S_{\mathcal D}^{-1}$) is the estimate of the Fisher information matrix (FIM) under different datasets. We essentially assume that the train and test data sets are similar enough such that estimates (MLE and FIM) under these datasets will be the same. As the reviewer notices, it is tempting to only assume that the train and test data distributions are similar. 
However, even after assuming the same distribution, we will require additional assumptions for a result in terms of estimates under two given datasets. This assumption about the estimates is precisely the reason we (very explicitly) keep the result informal. We will add this discussion after eq. 70, and if the reviewers suggest, we will reword the assumption in terms of FIM. >What is being defined in definition C.2? There is no specific assumption on the form of the posterior. We are essentially writing Bayes' rule. However, we define $V$ explicitly as we use it as a function of data $\mathcal D$ in later expressions. We made an organizational choice to aid the reader, but we can move it to an inline definition if preferred. >The discussion of approximate inference ... but didn't provide insight ... The main insight is that for approximate posteriors that closely resemble the true posterior, we expect the relationships from Section 2 to hold as is. However, for arbitrary distributions, it is hard to make a precise statement beyond "SNR depends on how much the *posterior* $q_\mathcal{D}(z|\mathcal{D}^∗)$ varies from the *prior* $q_\mathcal{D}(z)$." Intuitively, the training data only enters these equations through the conditioning, or, equivalently, from the approximate posterior $q_\mathcal D(z)$. Since arbitrary approximate posteriors may have arbitrary relationships with training data, it is not immediately obvious how to make a more precise statement. We hope that future research can explore this further. We will add this discussion at the end of Section 3 and in the limitations paragraph in Section 6. >What is the main (practical) insight I am supposed to take from section 2.2? The analysis in Section 2.2 is complementary to the main claims of the paper. From the earlier parts in Section 2, we know that when the training and test datasets match and are large enough, the posteriors in eq. 3 are similar, and the SNR is high. 
The same is implied by the two conditions at the end of section 2.2. When $\xi_\mathcal D$ is large, the training dataset will be large (note that $\xi_\mathcal D$ includes both the statistics and the number of data points, see eq. 10). When the points $\xi_\mathcal D$ and $\xi_{\mathcal D + \mathcal D^*}$ lie close to the ray emanating from the origin, the datasets have to be similar. Combined together, these two conditions also support the view that when the datasets are large and similar, we expect the SNR to be high. We favored the illustration in Figure 6 in place of words to derive the main takeaway. However, we will add some of this explicit discussion to the main text. Overall, we thank the reviewer for these insightful comments. We strongly believe that the concerns raised by them will make the paper stronger. We look forward to addressing any remaining concerns. Otherwise, we feel confident in our abilities to add the requested explanatory text, and hope the reviewer will consider a strong recommendation for our work. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response. I think the paper raises interesting points. I have decided to maintain my score. I appreciate the authors additional descriptions of take-aways of the main results from the paper (regarding approximate inference and section 2.2). Section 2.2: I still don't entirely see the new insight gained from section 2.2 that wasn't already conveyed in the general case. Is it that it suffices, for datasets to be "similar" in the exponential family case, it suffices that the resulting sufficient statistics are similar? The geometric insight from working an exponential family seems nice, but I'm still not clear what I should take away from the section practically, that I would not have concluded already from earlier sections. It seems from the authors response that section 2.2 is meant as further support of the main claims. 
Mostly, I'd like to see a clear discussion of how the exponential family case relates to the general case, and what structure is particular to the exponential family case that adds additional insight into the problem considered by the authors. --- Reply to Comment 1.1.1: Comment: We truly appreciate the reviewer for their additional time and offer more explanations below. We begin with the friendly thought that "insight, to some degree, is in the eye of the beholder!" We acknowledge that we did not clearly explain what insight section 2.2 offers. Roughly, we think of corollary 2.2 as saying, "SNR is determined by Jensen's inequality applied to the log-partition function of the exponential family." We find this insightful because log-partition functions are, in some sense, "soft-max" functions or (very informally) "rounded cones." In particular, we know that they are "rounded" (causing looseness and thus low SNR) near the origin, but if you follow any ray away from the origin, they eventually become "flat" (causing tightness and thus high SNR). We think this provides some further insight, but do not necessarily mean that these insights lead to changes in practical algorithmic choices. (Of course, if you truly had a conjugate family, the posterior would be known in closed form.) We find the log-partition view of the problem to be an appealing explanation for "why" the SNR is high or low: the impact of both differences in the sizes of datasets and differences in observed moments on the SNR can be understood in terms of how they position points along the surface of the "cone" given by the log-partition function: small datasets position you near the origin where the log-partition function is rounded, while different moments mean the points do not lie on a ray pointing near the origin (as shown in Figure 6). Corollary 2.2 is also used to compute the SNR in closed form for exponential families, which is useful for making plots.
Table 9 in Appendix J provides the expressions for the log partition function. These expressions are plugged into eq. 12 to calculate SNR and generate Figure 5 and Figures 9-12. These visualizations provide tangible evidence of how SNR can quickly decay in relatively simple scenarios. We hope these explanations better portray our insights, and we look forward to addressing any remaining concerns.
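To make the Jensen's-inequality framing concrete, here is a schematic sketch of the relationship (our reconstruction under stated assumptions, not a reproduction of the paper's eq. 12; base-measure factors cancel in the ratio and are dropped): write $R = p(\mathcal D^* \mid z)$ with $z$ drawn from the conjugate posterior with natural parameter $\xi_{\mathcal D}$; then, with $A(\cdot)$ the log-partition function,

```latex
\begin{aligned}
\mathbb{E}[R]   &\propto \exp\bigl\{A(\xi_{\mathcal{D} + \mathcal{D}^*}) - A(\xi_{\mathcal{D}})\bigr\},\\
\mathbb{E}[R^2] &\propto \exp\bigl\{A(\xi_{\mathcal{D} + 2\mathcal{D}^*}) - A(\xi_{\mathcal{D}})\bigr\},\\
-2\log \mathrm{SNR}(R) &\approx A(\xi_{\mathcal{D} + 2\mathcal{D}^*}) + A(\xi_{\mathcal{D}})
  - 2A(\xi_{\mathcal{D} + \mathcal{D}^*})
  \qquad (\text{when } \mathbb{V}[R] \approx \mathbb{E}[R^2]).
\end{aligned}
```

The right-hand side is a midpoint-convexity gap of $A$ (since $\xi_{\mathcal D + \mathcal D^*}$ is the midpoint of $\xi_{\mathcal D}$ and $\xi_{\mathcal D + 2\mathcal D^*}$): it is large where $A$ is "rounded" near the origin and vanishes where $A$ is flat along the corresponding ray, matching the "rounded cone" picture described in the reply.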
Summary: This paper addresses the issue of unreliable posterior predictive density (PPD) estimates when using a simple Monte Carlo (MC) approach, highlighting the previously under-recognized issue of low signal-to-noise ratio (SNR). The authors theoretically analyze and empirically demonstrate that the SNR for posterior predictive estimation significantly decreases under three conditions: increasing mismatch between training and test data, increasing dimensionality of the latent variable space, and an increasing relative size of the test dataset. They propose an adaptive importance sampling method—learned IS—based on maximizing a variational proxy for SNR, to mitigate this issue, showing substantial improvement in predictive estimation accuracy. Claims And Evidence: The main claim in this work is that the signal-to-noise ratio (SNR) of the simple MC estimator can sometimes be extremely low, leading to unreliable estimates. To demonstrate this, the authors provide a theoretical analysis in Section 2. They provide an analytical expression for the SNR in Theorem 2.1 and demonstrate it empirically on several models: linear regression, logistic regression, and hierarchical models. Methods And Evaluation Criteria: The methods proposed in this paper are appropriate for the problem at hand and follow obviously from the analysis presented. Specifically, the authors identify the central issue of low signal-to-noise ratio (SNR) in simple Monte Carlo estimation of posterior predictive densities (PPDs). To mitigate this, they propose an adaptive importance sampling method, named "Learned Importance Sampling" (LIS). In this approach, they optimize a variational proxy to the SNR by maximizing an importance-weighted evidence lower bound (IW-ELBO), to learn a more efficient proposal distribution. The methodological choice is well-justified, as direct optimization of the optimal proposal distribution (which would maximize SNR directly) is often intractable.
Hence, optimizing a proxy measure like the IW-ELBO, which has clear theoretical connections to minimizing estimator variance, is a reasonable and effective strategy (Section 4). The authors also empirically demonstrate that this method significantly improves estimator accuracy across various scenarios, including exponential family models, linear regression, logistic regression, and hierarchical models, providing robust evidence that their methodological choice is practically effective and relevant for the stated problem. Theoretical Claims: I reviewed the correctness of the theoretical derivations provided, particularly Theorems 2.1 and 2.2. The proofs provided appear to be sound and accurately support the claims regarding the decay of SNR under the specified conditions. Experimental Designs Or Analyses: The soundness of the experimental designs was checked, particularly the experiments with linear regression, logistic regression, exponential family models, and hierarchical models. The setups appear valid, and the experimental results strongly support the theoretical analyses. The use of illustrative examples clearly demonstrates the issues with low SNR and the effectiveness of the learned IS method. Supplementary Material: I checked the proofs in the appendix. Relation To Broader Scientific Literature: The paper's contributions lie within the broader literature on approximate inference and Monte Carlo estimation. It builds upon work on importance sampling and is also related to prior research on variational inference, predictive uncertainty, and Bayesian model evaluation. Essential References Not Discussed: The authors appear to have cited the relevant works adequately. No essential references seem missing. Other Strengths And Weaknesses: Strengths: 1. Clearly identifies and thoroughly analyzes a subtle but critical issue. 2. Provides theoretical grounding and extensive empirical validation. 3. Introduces a simple yet effective methodological improvement.
Weaknesses: The computational complexity and scalability of the learned IS method, particularly in very high-dimensional or large-scale applications, may be a concern, but this is less important since the contributions are primarily theoretical. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your detailed and encouraging review. We appreciate your recognition of our theoretical analysis and the empirical validation of our proposed LIS method, as well as your insights into potential computational challenges. Your constructive feedback is truly invaluable.
Summary: This paper provides a theoretical framework explaining the severe signal-to-noise ratio (SNR) degradation observed in naive posterior predictive distribution (PPD) estimators. It rigorously demonstrates that, even with exact inference, SNR diminishes with increasing: (1) training-test data mismatch, (2) latent space dimensionality, and (3) test data size relative to training data. These theoretical findings are empirically validated through numerical experiments on both synthetic and real-world datasets. Claims And Evidence: The paper's claims are rigorously supported by comprehensive references and robust mathematical proofs, built upon well-justified assumptions. Methods And Evaluation Criteria: This paper analyzes scenarios involving exact posterior distributions, represented as $Q_{D}(z) = P(Z|D)$, as well as cases where $P(Z|D)$ is approximated using variational inference (VI) or the Laplace approximation. This comprehensive analysis provides a complete theoretical framework. The theory's reliability is further strengthened by its validation on both synthetic and real-world datasets. Theoretical Claims: The proofs presented herein are, to the best of my ability, accurate. Experimental Designs Or Analyses: The numerical experiments provide clear empirical validation for all theoretical claims presented. A natural and compelling extension would be to investigate whether these theoretical findings hold within the context of Bayesian neural networks, a research area of considerable interest to the contemporary machine learning community. Supplementary Material: I focused on verifying the proof structure of the key theorems and, to the best of my knowledge, found no errors.
Relation To Broader Scientific Literature: This paper identifies three scenarios leading to rapid signal-to-noise ratio (SNR) decay in posterior predictive distributions (PPDs) and proposes adaptive importance sampling as a mitigation strategy, providing valuable insights into PPD failure modes for the machine learning community. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: While the appendix provides a comprehensive review of related work, integrating key discussions into the main text would enhance understanding for readers unfamiliar with the field's cutting-edge developments. Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you so much for your thoughtful and constructive review. We appreciate your recognition of the theoretical contributions and the thoroughness of our analysis. Your feedback is invaluable and greatly encouraging. For the camera-ready version, we will plan to use the extra space to add more commentary on the takeaways from sections 2.2 and 3 (as pointed out in [response to the reviewer hMnG](https://openreview.net/forum?id=TzfGuKazvf&noteId=ywqVSRwl8H)). We will also use the extra space to move more of the related works discussion to the main text (as indicated in [response to the reviewer CNiq](https://openreview.net/forum?id=TzfGuKazvf&noteId=hM1fsV2z27)). In fact, we are open to moving the model details to the appendix and, instead, moving all of the related works section into the main text. Please let us know if you have a strong preference for one or the other.
Summary: This paper provides a theoretical investigation of Monte Carlo estimation of posterior predictive distributions. The signal-to-noise ratio (SNR) of Monte Carlo estimation is shown to decrease under three conditions: increasing mismatch between training and test data, increasing dimensionality of the latent space, and increasing size of the test data. The contributions consist of (1) a theoretical analysis of the SNR in several settings (exact linear regression, exact conjugate exponential family, and approximate inference), and (2) a learned importance sampling procedure designed to improve the quality of the Monte Carlo estimate. Claims And Evidence: The theoretical claims are supported by proofs in Appendices C - H, and Tables 1 - 4 show the improvements due to the learned importance sampling. Methods And Evaluation Criteria: The experimental setup seems reasonable, consisting of exponential family, linear regression, logistic regression, and a hierarchical model on the MovieLens data. Evaluation criteria include lower bounds on the log posterior predictive density and the signal-to-noise ratio, which make sense for this work. Theoretical Claims: I did not examine the proofs in detail. The results seem to intuitively correspond to expressions from information geometry. Experimental Designs Or Analyses: The experiments seem to be reasonable, but I did not verify them in detail. Supplementary Material: I skimmed through the appendix. Relation To Broader Scientific Literature: The results should be better connected to the literature analyzing the variance of Monte Carlo estimators. In this paper, the signal-to-noise ratio is the object of concern, and to my knowledge this has not been analyzed. The related work section goes back only a decade or so, which makes it difficult to assess the significance of the results in this paper. Essential References Not Discussed: References discussing e.g. Monte Carlo variance and variance reduction techniques.
Other Strengths And Weaknesses: The structure of the paper is nonstandard which makes it more difficult to read. There is no background section building up to the results and justifying the use of SNR to analyze PPD estimation. Related work is contained only in the appendix whereas model details are placed in the main paper after the conclusion. More connections to the literature justifying the use of SNR and arguing for the significance/impact of these results are needed. Other Comments Or Suggestions: p. 2, l. 57: ratio Questions For Authors: 1. Why is SNR a good metric for analyzing the quality of Monte Carlo estimation of the PPD? 2. How do the theoretical results here influence the way that practitioners estimate PPDs or the way they select approximate posterior distributions? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We will address the minor concerns and offer some specific comments below. >Why is SNR a good metric for analyzing the quality of Monte Carlo estimation of the PPD? We study SNR because it is equivalent to relative variance and bakes in the idea of how large the estimator variance is relative to the target quantity. The idea of relative variance becomes crucial when the target quantity is numerically small, as in the case of $\textrm{PPD}_q$ values. To make this precise, let’s consider an example where $\log \textrm{PPD}_q = -100$ (for reference, all estimates of $\log \textrm{PPD}_q$ in Tables 1, 2, 3, and 4 are lower than $-100$). Also, consider an unbiased estimator $R$ with variance $\mathbb V(R)=10^{-20}$. In the absolute sense, the variance of this estimator is low; however, $R$ carries more noise than signal. To see why, note that $\textrm{PPD}_q=\exp(-100) \approx 3.72\times10^{-44}.$ Intuitively, $R$ varies on the scale of $10^{-10}$ (standard deviation) and will produce noisy approximations of the target value, which is on the order of $10^{-44}$. SNR naturally captures this intuition: $\textrm{SNR}(R) = \textrm{PPD}_q/\sqrt{\mathbb V [R]} \approx 3.72\times10^{-34}$, which flags the estimator as poor. Several works before us have studied the SNR of unbiased estimators of numerically small quantities (as cited in lines 59-61). For the works cited in the main paper, the focus is to study the behavior of gradient estimators, since the value of the gradients can become very small as the optimization proceeds. In our extended search, we found two more works that study the SNR of gradient estimators for VI [1, 2] and one more work that studies SNR for policy gradients in RL [3] (the definition of the SNR in [3] is different from our ratio-of-moments definition; however, the idea to study the SNR is similarly motivated). In hindsight, we agree this discussion should be more prominent.
We will add these citations and the above discussion to the already existing discussion in lines 59-61. [1] Liévin, Valentin, et al. "Optimal variance control of the score-function gradient estimator for importance-weighted bounds." NeurIPS, 2020. [2] Rudner, Tim GJ, et al. "On signal-to-noise ratio issues in variational inference for deep Gaussian processes." ICML, 2021. [3] Roberts, John, and Russ Tedrake. "Signal-to-noise ratio analysis of policy gradient algorithms." NeurIPS, 2008. >How do the theoretical results here influence the way that practitioners estimate PPDs or the way they select approximate posterior distributions? We believe our theoretical results are of independent interest and also provide useful insights for practitioners. First, our work serves as a strong cautionary note. We hope that practitioners monitor (and report) the SNR value of their PPD estimates alongside the $\log \textrm{PPD}$ value. Second, whenever the SNR values of the naive estimator are low, our work shows that this does not necessarily reflect on the accuracy of the approximate posterior but simply on our ability to accurately estimate PPD. In cases of low SNR, we suggest practitioners use approaches like LIS to improve the reliability of the estimates and obtain a clearer picture of the relative performance of the different approximate posterior methods. We briefly touch on the above discussion in the conclusion paragraph in Section 6 and will expand on this in the update. > Related work is contained only in the appendix, whereas model details are placed in the main paper after the conclusion. Due to space limitations, we moved the related work section to the appendix. In the camera-ready version, one more page is allowed, and we plan to use it to bring more of the discussion on the related works into the main paper (note that a small paragraph is present in Section 6). 
If the reviewers believe that the paper will benefit from moving *all* of the related work section to the main paper, then we propose moving the Model Details section to the appendix to create more space. > The related work section goes back only a decade or so, which makes it difficult to assess the significance of the results in this paper. Any research review can fall short of being exhaustive. We believe we made an honest attempt to cover the relevant literature. We cite research on PPD evaluation in Section B, and we will further expand on our motivation to study SNR (as mentioned in this rebuttal). If we are missing some relevant work, we will be more than happy to include it. However, we respectfully disagree that it is difficult to judge our work. We would be obliged if the reviewer could point us to specific literature that we have missed, without which it becomes difficult to assess the significance of our analysis. Overall, we thank the reviewer for their time and hope that they will reconsider their position in light of the explanations and other reviews.
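The relative-variance intuition in the first answer above can be illustrated with a small simulation (a minimal sketch under assumed toy choices: a 1-D Gaussian model with unit noise, a standard-normal proposal standing in for the posterior, and function names of our own; this is not the paper's code):

```python
import math
import random


def log_norm_pdf(x, mu):
    # Log density of N(mu, 1) at x.
    return -0.5 * math.log(2 * math.pi) - 0.5 * (x - mu) ** 2


def empirical_snr(shift, n_test=5, n_z=2000, seed=0):
    """Empirical SNR = mean / std of single-sample naive MC estimates
    R_i = p(D* | z_i) with z_i ~ N(0, 1). Weights are rescaled by their
    maximum; the common scale factor cancels in the mean/std ratio."""
    rng = random.Random(seed)
    test_data = [shift + rng.gauss(0, 1) for _ in range(n_test)]
    log_w = []
    for _ in range(n_z):
        z = rng.gauss(0, 1)
        log_w.append(sum(log_norm_pdf(x, z) for x in test_data))
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]
    mean = sum(w) / len(w)
    var = sum((wi - mean) ** 2 for wi in w) / (len(w) - 1)
    return mean / math.sqrt(var)


# Larger train/test mismatch (shift) -> more skewed weights -> lower SNR.
print(empirical_snr(shift=0.0), empirical_snr(shift=4.0))
```

With matched data (shift 0), many sampled $z$ values contribute comparable weights and the SNR is moderate; with shift 4, the estimate is dominated by the rare $z$ near 4 and the SNR collapses, mirroring the mismatch-driven decay discussed in the rebuttal.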
FairICP: Encouraging Equalized Odds via Inverse Conditional Permutation
Accept (poster)
Summary: This paper introduces FairICP, a novel fairness-aware learning method designed to promote equalized odds in machine learning models when dealing with complex and multi-dimensional sensitive attributes. The method combines adversarial learning with an innovative Inverse Conditional Permutation (ICP) strategy to generate conditionally permuted copies of sensitive attributes without estimating multi-dimensional conditional densities. The paper provides a comprehensive theoretical foundation for the method and evaluates its performance through extensive simulations and real-world datasets. ## Update after rebuttal Thanks for your explanation, I will maintain my rating. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods make sense for the problem (improving fairness measure EO) at hand. Theoretical Claims: The theoretical claims in the paper are supported by detailed proofs. The authors provide proofs for the validity of the ICP-generated permutations and their ability to promote equalized odds. The proofs are mathematically sound and well-structured. However, the robustness of these theoretical claims under model misspecification or inaccurate density estimation could be further explored. Experimental Designs Or Analyses: The experimental designs and analyses are generally sound. The authors conduct extensive experiments on both synthetic and real-world datasets, providing a comprehensive evaluation of the method's performance. The experimental results are presented in a clear and organized manner, and the comparisons with baseline methods are thorough. However, the paper could benefit from a sensitivity analysis to assess the impact of hyperparameters and data characteristics (how complex the sensitive attributes are) on the method's performance. Supplementary Material: The authors did not provide supplementary material. 
Relation To Broader Scientific Literature: The key contributions of the paper are well-related to the broader scientific literature. The paper builds upon existing work in fairness-aware machine learning and addresses the underexplored challenge of handling multi-dimensional sensitive attributes. The authors provide a comprehensive review of related work, including prior methods for achieving equalized odds and the limitations of existing approaches. Essential References Not Discussed: The paper cites relevant prior work in the field of fairness-aware machine learning. However, it could benefit from additional references to recent advancements in fairness techniques for multiple sensitive attributes. Other Strengths And Weaknesses: **Strengths:** 1. The introduction of the ICP strategy is a significant contribution that addresses the challenge of handling multi-dimensional sensitive attributes. 2. The integration of ICP with adversarial learning provides a flexible and efficient framework for promoting equalized odds. 3. The empirical validation on both synthetic and real-world datasets demonstrates the effectiveness of the method. **Weakness:** 1. My concern is that the ICP strategy heavily relies on accurate estimation of the conditional density $q(Y|A)$. Could you discuss further the challenges and potential inaccuracies associated with density estimation, especially in high-dimensional spaces or with complex data types? 2. I feel that the experiments are conducted on a limited set of datasets, and the number of sensitive attributes is limited. For the categorical case, multiple sensitive attributes can actually be reduced to a single attribute by separating the dataset into more subgroups. I am curious about the performance of standard fairness-aware methods applied in this naive way; maybe you can add this as a baseline. 3. While the paper compares FairICP with several baseline methods, it does not include some state-of-the-art methods for fairness-aware learning. 4.
The ICP strategy is specifically designed for EO and may not generalize well to other fairness measures. Other Comments Or Suggestions: Providing detailed information on the exact hyperparameter values, or code for reproducing the results, would be more convincing. Questions For Authors: Please see weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the feedback from the reviewer; here are our responses: 1. "Could you discuss more on the challenges and potential inaccuracies associated with density estimation, especially in high-dimensional spaces or with complex data types." **Reply.** Thank you for this question. We mention how the quality of density estimation will affect the performance of our ICP (and CP) sampling scheme in lines 241-243, and we also provide a theoretical analysis linked to it in Appendix C.1. Specifically, we show a smaller TV distance bound for ICP compared with CP when density estimation is inaccurate, which aligns with our claims and experimental evidence (Figure 2). 2. "And for the categorical case, multi sensitive attributes can be actually turned to one attribute case by separating the dataset into more subgroups. I am curious about the performance of standard fairness-aware method in this naive way. Maybe you can add this as baseline." **Reply.** We thank the reviewer for this suggestion. We have now added two more baselines in our experiments: FDL [3] and Kearns et al. [2]. [Link for experiments added](https://docs.google.com/document/d/1CLwxWwBRsVrYhhIfeC3_IsDBZ0mAoRiZVWWGLv4DDIw/edit?usp=sharing). We also tried Kearns et al. [2] for the Adult dataset, but it failed to converge within a limited number of iterations in the short time given (as evidenced by Figure 1 in its paper); however, we agree this is an important baseline, we are working to add it to our revised paper, and we will update the results once that is done. 3. While the paper compares FairICP with several baseline methods, it does not include some state-of-the-art methods for fairness-aware learning. **Reply.** Thanks for your feedback. We fully understand the need to compare with other methods.
However, based on the literature we have searched so far, the methods we compare against in our paper are already the most recent ones that can be applied to our setting (equalized odds, multiple (mixed) sensitive attributes). We would also appreciate it if the reviewer could point out other cutting-edge fair machine learning methods suitable for our task. Additionally, the major novelty of this work is the ICP sampling for fairness-aware (equalized odds) learning with multiple sensitive attributes. Hence, we consider the comparison between FairICP and FDL [3] to be the most informative and reliable evaluation of this paper's contribution in a setting where FDL is straightforward to apply. 4. The ICP strategy is specifically designed for EO and may not be generalized to other fairness measures enough. **Reply.** Thank you for pointing this out. Our focus on the equalized odds metric is intentional, as enforcing equalized odds requires ensuring conditional independence. This conditional nature makes it a more challenging fairness criterion than unconditional ones like demographic parity (Tang et al. [1]). While ICP is well-suited for enforcing equalized odds due to its ability to avoid the challenging density estimation of $A$ given $Y$, for unconditional fairness notions such as demographic parity we can utilize an unconditional permutation of $A$, which is a special case of FairICP without conditioning. 5. Providing detailed information on the exact hyperparameter values or the code for reproducing the results would be more convincing. **Reply.** We apologize for not including code in the initial submission. We have now provided the relevant code [Link for code](https://drive.google.com/file/d/1gX4UIDPGKYEcL-yK9TtYmazi28gc_4tv/view?usp=sharing). ## Reference [1] Tang et al. Attainability and optimality: The equalized odds fairness revisited. [2] Kearns et al. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. [3] Romano et al.
Achieving equalized odds by resampling sensitive attributes. --- Rebuttal Comment 1.1: Comment: Thanks for your explanation and I will maintain my score.
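As a concrete illustration of the conditional-permutation idea discussed above (a minimal sketch for a discrete response $Y$ only; the function name is ours, and FairICP's ICP scheme handles continuous and multi-dimensional cases via an estimated conditional density, which this sketch does not attempt):

```python
import random
from collections import Counter, defaultdict


def conditional_permute(A, Y, seed=0):
    """Return a copy of the sensitive attributes A permuted within each
    level of a discrete Y. This preserves the joint distribution of
    (A, Y) while breaking any further dependence on A given Y."""
    rng = random.Random(seed)
    idx_by_y = defaultdict(list)
    for i, y in enumerate(Y):
        idx_by_y[y].append(i)
    A_tilde = list(A)
    for idxs in idx_by_y.values():
        shuffled = idxs[:]
        rng.shuffle(shuffled)
        for dst, src in zip(idxs, shuffled):
            A_tilde[dst] = A[src]
    return A_tilde


A = ["a", "b", "a", "b", "a", "b"]
Y = [0, 0, 0, 1, 1, 1]
A_tilde = conditional_permute(A, Y)
# The multiset of (A, Y) pairs is unchanged by the within-group shuffle.
assert Counter(zip(A_tilde, Y)) == Counter(zip(A, Y))
```

Such permuted copies $\tilde A$ play the role of the synthetic attributes that the adversary must distinguish from the real $(\hat Y, A, Y)$ triples; per the discussion above, ICP replaces the simple within-group shuffle with a scheme driven by $q(Y \mid A)$, avoiding direct estimation of a multi-dimensional $q(A \mid Y)$.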
Summary: - This study introduces a permutation-based learning algorithm for developing fair predictive models. - The fairness notion considered in this work is equalized odds. - The proposed permutation mechanism generates pseudo-sensitive attributes by aligning the distributions of $(\hat{Y}, A, Y)$ and $(\hat{Y}, \tilde{A}, Y)$. - The proposed generation mechanism is designed to accommodate both continuous and discrete sensitive attributes. - Experimental results suggest that the proposed method achieves a superior fairness-prediction trade-off compared to two baseline approaches. ## update after rebuttal - Thanks to the rebuttal, which includes the additional experiments, I will raise my rating to 3. Claims And Evidence: - Overall, the flexibility of the proposed algorithm in handling various types of sensitive attributes is supported by the experimental results. - However, while the authors claim that the proposed method offers a superior fairness-accuracy trade-off, it does not seem to consistently outperform the baselines when the prediction model is linear.
- Examples: - (1) For continuous sensitive attributes: Generalized Demographic Parity, https://openreview.net/pdf/3f9ffe7eafbd44f0205f3629edbcfb60ec738e7c.pdf. Although this work focuses on demographic parity, its approach could be extended to equalized odds. - (2) For mixed-type sensitive attributes: Subgroup fairness could also be applicable in this setting, as continuous sensitive attributes can be categorized via binning. https://proceedings.mlr.press/v80/kearns18a.html Theoretical Claims: - Question: In the proof of Task 2 of Theorem 2.1, the first equation holds when $P(A \le t | Y, S(A) = S) = P (\tilde{A} \le t | Y, S(A) = S),$ while the given result from Task 1 states that $\tilde{A} | Y \overset{d}{=} A | Y$. Could the authors can further explain the math behind how the result of Task 1 directly leads to this equation? Experimental Designs Or Analyses: - Implementation details - How stable is the adversarial learning process? Given that stability is critical for practical deployment, analyzing it would be valuable. Supplementary Material: - No explicit supplementary materials attached. Relation To Broader Scientific Literature: - This paper suggests that generating pseudo sensitive attributes can contribute to learning fair prediction models. Essential References Not Discussed: - Please see `Baseline methods’ in **Methods And Evaluation Criteria**. Other Strengths And Weaknesses: - While the direct generation of pseudo-sensitive attributes through adversarial learning appears to be novel (to the best of my knowledge), adversarial learning-based approaches for developing fair prediction models have been extensively explored in the literature. It would be helpful if the authors could clarify how their proposed method differs from existing adversarial learning-based approaches in terms of intuition, motivation, technical aspects, and experimental results. 
- More specifically, how does the generated $\tilde{A}$ differ from the fair representation learned through adversarial learning? It would be helpful if the authors distinguish these two approaches. - Several examples are: - https://dl.acm.org/doi/pdf/10.1145/3278721.3278779 - https://arxiv.org/abs/1511.05897 - https://www.cs.toronto.edu/~toni/Papers/laftr.pdf - https://proceedings.mlr.press/v162/kim22b/kim22b.pdf Other Comments Or Suggestions: - (Minor typo) Algorithm 2: In the “Input” line, should ($X^{te}, A^{te}, Y^{te}$) be replaced by ($\hat{Y}^{te}, A^{te}, Y^{te}$)? Questions For Authors: - N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the feedback from the reviewer. Here are our responses: 1. Methods: more baselines. **Reply.** We thank the reviewer for this valuable suggestion. We add two more baselines in our experiments: FDL [4] and Kearns et al. [5]. [Link for experiments added](https://docs.google.com/document/d/1CLwxWwBRsVrYhhIfeC3_IsDBZ0mAoRiZVWWGLv4DDIw/edit?usp=sharing). For ACS income/Adult, we also ran [5] as the reviewer suggested, but we found that it failed to converge in the limited time given (as also evidenced by Figure 1 in its paper). Nonetheless, we agree this could be a good complement, and we will update the results once it is done. We also thank the reviewer for the generalized DP paper ([3]); however, we find that its goal of extending demographic parity is fundamentally different from our goal (equalized odds), and, to the best of our knowledge, it is also non-trivial to extend. We have included it in our related work section. 2. Metrics: (1) Accuracy for classification. **Reply.** We apologize for the confusion. Our "Loss" metric is the misclassification rate for the classification task, which is (1 - accuracy). We will make this clearer in our revised version. (2) Why were TPR, FPR, or DEO not mainly used? **Reply.** Thank you for this question. The reason we introduce KPC is that it is a flexible metric for conditional independence without constraints on the shape of the input, which aligns well with our goal of measuring equalized odds for complex $A$. While DEO (TPR, FPR) is a standard measure in previous work, it is only suitable for binary $Y$/$A$, and thus can only be used in a part of our experiments (Adult/COMPAS). (3) Presenting TPR and FPR. **Reply.** Thank you for this suggestion. Yes, DEO is a sum of FPR and TPR across all binary $A$. However, when there are multiple binary $A$, it could be rather subjective to choose which subgroup's TPR (FPR) to present, which is why DEO is used as a standard measure ([1, 2]). 3.
Math behind line 583 **Reply.** We apologize for the confusion. In Task 2, we first prove $\tilde{A} | Y \overset{d}{=} A | Y$, which is done by taking the expectation of the result from Task 1 ($P(A \leq t \mid Y, S(A)=S)=P(\tilde{A} \leq t \mid Y, S(A)=S)$). We have now explained this in the proof for clarification. 4. How stable is the adversarial learning process? **Reply.** Thank you for this question. We agree that stability could be a potential pitfall for all adversarial learning approaches, as pointed out in line 432-439. We adopted a vanilla GAN instead of more sophisticated variants to provide a fair comparison with FDL [4], but our ICP can be integrated with more efficient methods from the adversarial learning literature. We have now clarified this further in the discussion section. 5. How our method differs from existing adversarial learning-based approaches. **Reply.** Thanks for raising this point. While we adopted adversarial learning (Algorithm 1) as in FDL [4], the main contribution of our work is a better way of generating synthetic $\tilde A$ for equalized odds with multi-dimensional (mixed) $A$ (line 090-096). FDL does not discuss how to estimate $q(A \mid Y)$ when $A$ is complex, which is important for fair ML. Also, the nature of conditioning for equalized odds makes it more difficult than the demographic parity setting (e.g., prior work using VAE [6]). As far as we are aware, no other proposal has been designed for our setting. The ICP sampling scheme guides both training and evaluating equalized odds (Algorithm 2). The reported KPCs reflect the same trend as the hypothesis testing (Algorithm 2) when the ground truth is known (Figure 3). This result is useful as the ground truth is unknown in practice, which makes results from Algorithm 2 subject to density estimation while KPC itself is not (line 354-360). 6. How does the generated $\tilde A$ differ from the fair representation learned through adversarial learning? 
**Reply.** Thank you for this question and for providing the literature. Learning fair representations is another line of work based on adversarial learning, which generally requires a model to predict $A$ from the representation $Z$ (and possibly $Y$) (e.g., the works the reviewer mentions). When facing complex $A$, this direct way of modelling $A$ could run into the same challenge as FDL [4] does, while our FairICP avoids it as discussed. We thank you again for this suggestion and we have included this discussion in our related work section. ## References [1] Cho et al. A fair classifier using kernel density estimation. [2] Agarwal et al. A reductions approach to fair classification. [3] Jiang et al. "Generalized demographic parity for group fairness." [4] Romano et al. Achieving equalized odds by resampling sensitive attributes. [5] Kearns et al. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. [6] Creager et al. "Flexibly fair representation learning by disentanglement." --- Rebuttal Comment 1.1: Comment: Thank you for the response. I have checked the additional experimental results. However, there are a few remaining concerns: - About points 4-6: I think that the authors should point out a concrete advantage of FairICP compared to other adversarial learning-based methods (that are not designed with sampling pseudo samples). For example, the first reference (https://dl.acm.org/doi/pdf/10.1145/3278721.3278779) is considered as a baseline for FDL, as shown in the FDL paper (Note that I mistakenly referred to this work as a fair representation learning method, but it is actually a direct debiasing method). Further, what are examples of the *"same challenge as FDL [4]"* that existing adversarial learning-based methods would face? - A concern in "Claims And Evidence" seems not to have been discussed in the rebuttal. 
*However, while the authors claim that the proposed method offers a superior fairness-accuracy trade-off, it seems to not consistently outperform the baselines when the prediction model is linear.* - (Minor) I agree that DEO is a more standard measure, however, it would be better to report the TPR and FPR at least for Adult and COMPAS datasets. --- Reply to Comment 1.1.1: Comment: Thank you for your response! We are happy to address your remaining concerns: 1. Compare with other adversarial learning-based methods **Reply.** Thank you for this follow-up question. It's true that Zhang et al. [1] is a direct debiasing method using adversarial learning (similar work [2]); however, they use an adversarial network to directly predict $A$ from $\hat Y$ and $Y$ (while in [2] they train a parametric model of $A | \hat Y$ with $\hat Y$ as inputs), which is fundamentally different from our ICP scheme that does not rely on modelling $A | (Y, \hat Y)$. This way of modelling $A | (Y, \hat Y)$ (or similarly modelling $A | Z$ where $Z$ is a representation [3, 4, 5], as the reviewer mentioned) shares a similar challenge to estimating the density of $A|Y$ in FDL when $A$ becomes complex, as discussed in our motivation (line 110-125): e.g., for Zhang et al. [1], how should one balance the prediction loss for $A$ when $A$ is mixed (categorical and continuous)? On the other hand, our FairICP scheme avoids such challenges since we can leverage all the well-studied methods to model $Y | A$ instead, where $Y$ is usually one-dimensional (in Figure 2). To empirically support our claims, we also added Zhang et al. [1] as a baseline for the COMPAS dataset ([link](https://docs.google.com/document/d/1iCAwVjXYaUOwlIRMyX_uUXdz5AmBgyEWtuQdUmvs_LM/edit?usp=sharing)). 2. "Not outperforming baselines in linear model" **Reply.** We apologize for accidentally dropping our response to this question. 
When plotting loss against KPC/power, we still observed that our model outperforms others on most datasets for linear models (Figures 9, 10). Only on the COMPAS data, which considers binary $A$ and response $Y$, did FairICP not outperform the baselines. However, COMPAS represents a more classical setting with both binary sensitive attributes and response, where Reduction [6] is specifically designed for this setting and an almost-oracle calculation of HGR is obtained easily [7]. The COMPAS results indicate that FairICP is comparable to Reduction/HGR in cases where their high-quality calculations are feasible. Indeed, even in Figure 4, which uses a non-linear NN structure, FairICP is only comparable to Reduction and slightly better than HGR. In summary, although COMPAS favors the baseline models by design, we feel it is important to also include it to demonstrate the general applicability of FairICP, which is only slightly worse even though it uses a general framework not tailored to COMPAS. 3. TPR and FPR **Reply.** We agree with the reviewer that reporting TPR and FPR will be a good complement; thus we also report the TPR/FPR for the Adult/COMPAS datasets ([link](https://docs.google.com/document/d/1iCAwVjXYaUOwlIRMyX_uUXdz5AmBgyEWtuQdUmvs_LM/edit?usp=sharing)). [1] Zhang et al. "Mitigating unwanted biases with adversarial learning." Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. 2018. [2] Louppe et al. "Learning to pivot with adversarial networks." [3] Edwards et al. "Censoring representations with an adversary." [4] Madras et al. Learning Adversarially Fair and Transferable Representations. [5] Kim et al. "Learning fair representation with a parametric integral probability metric." [6] Agarwal et al. A reductions approach to fair classification. [7] Mary et al. "Fairness-aware learning for continuous attributes and treatments." International conference on machine learning.
Summary: The paper introduces FairICP, a method for enforcing equalized odds fairness in machine learning models that handle multiple sensitive attributes. The key idea is to improve how synthetic versions of sensitive attributes are generated in fairness-aware learning. Instead of relying on traditional resampling or conditional permutations, FairICP estimates the relationship between the outcome and the sensitive attribute to create better-conditioned synthetic attributes. The paper compares FairICP to existing methods such as Fair Dummies Learning (FDL) and Conditional Permutation (CP), using Kernel Partial Correlation as a fairness metric. Using a number of experiments, relying on both synthetic data and widely used datasets, the paper argues that FairICP is more effective at reducing fairness violations while maintaining predictive accuracy. Claims And Evidence: The core claims of the paper are that ICP is a superior method for approximating the true conditional distribution and that this better sampling method leads to a better fairness-accuracy tradeoff (which is the ultimate goal). Not being a theorist, it is hard for me to fully evaluate the theoretical merits of the analysis, but I do think that the authors could have put more effort into intuition building. For example, a key element of the analysis is the metric for measuring fairness: kernel partial correlation (KPC). While the authors define the metric mathematically, they do not explain why KPC is an appropriate metric for fairness in this context. Even a simple graphical intuition would have been helpful in understanding what this metric is capturing. It would have been helpful to have a toy example that could concretely demonstrate why CP struggles in a way that ICP does not, or why FDL might introduce bias. Methods And Evaluation Criteria: The paper evaluates the method on widely used datasets like COMPAS and ACSIncome, which facilitates comparison with previous work. 
However, the various choices over which approaches to compare are somewhat opaque. For example, why do the empirical examples not compare ICP to CP? Why is FDL not considered consistently? Why is there a comparison to HGR, which is only very briefly mentioned in the theoretical discussion? The reduction method is not discussed before being used in the example. Related to the point above, what are the alternatives to the KPC fairness metric? How sensitive are the results to the use of KPC? Theoretical Claims: I did not check the correctness of the appendix proofs. Experimental Designs Or Analyses: See above for my discussion of experiments. Supplementary Material: I read the supplemental material for the real-world data experiments and also found the description there difficult to follow without addressing some of my open questions from the main text. Relation To Broader Scientific Literature: The paper relates to a core challenge within the algorithmic fairness literature of how to treat fairness when sensitive attributes are multidimensional. Essential References Not Discussed: I do not have a specific reference that was not included. Other Strengths And Weaknesses: The biggest strength of the paper is the importance of the issue it is addressing. It has always been a fundamental weakness of the algorithmic fairness literature that it focuses on fairness within one dimension of a protected characteristic when in many cases there are multiple such characteristics that often intersect. Similarly, the choice of the article to focus on equalized odds, although not explicitly discussed, is reasonable given the importance of considering how predictions vary by group, conditional on real outcomes. 
Given the potential policy significance of creating an easily implementable method, it is unfortunate the paper is not written in a more accessible way and that it spends little time explaining the intuition for its choices of metrics, superiority of methods, and shortcomings of other methods. Other Comments Or Suggestions: To reiterate the point above, I think it would really improve the paper if the authors spent more time building up intuition for the theoretical claims, the superiority of their claims, and their choice of evaluation metrics. In analyzing the evaluation of their method, the authors might also consider other methods for equalizing odds, like post-processing, and how they compare with respect to the fairness-accuracy tradeoff. While their choice to focus narrowly on equalizing odds rather than other fairness metrics is understandable, it would be beneficial to compare against other approaches to equalizing odds. I also suggest clarifying the various choices made with respect to the real-world experiments, particularly because the comparison to other methods does not fully align with the methods discussed in the previous theoretical section. Questions For Authors: To summarize the points above, my main questions/issues are: 1. Why do the real world experiments compare ICP to some methods and not others? 2. How does ICP compare to post-processing methods? 3. Is there a way to make the paper more accessible by building up intuition and explaining evaluation choices? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your thorough review and constructive feedback on our paper; here are our responses: 1. Explanation on KPC. **Reply.** Thank you for this constructive feedback. KPC [1] is a recently proposed non-parametric measure of conditional independence without constraints on the shape of its inputs. The reason why we choose $KPC(U,V \mid W)$ aligns with our main goal in this paper: enforcing equalized odds (a conditional independence notion) for multiple $A$ (i.e., making $KPC(\hat{Y}, A \mid Y)$ small). Additionally, $KPC(U, V \mid W)$ is well normalized for direct comparison across different models (supported by our experiments (line 354-360)), and it can be viewed as a generalization of the partial correlation between $U$ and $V$ given $W$. To see this, first recall the MMD distance $MMD^2(P_U, P_V)=\|\mu_{U}-\mu_{V}\|_{\mathcal{H}}^2$, where $\mathcal{H}$ is an RKHS and $\mu_U = \int k(\cdot, u)\, dP_U(u)$ is the kernel mean embedding of a distribution $P_U$ [9], and consider the simple case where the kernel is linear and $U = \alpha W + \beta V+\varepsilon$ with $W, V, \varepsilon$ being i.i.d. $\mathcal{N}(0, 1)$. We then have $P_{U|W,V}=\mathcal{N}(\alpha W+\beta V, 1)$, $P_{U|W}=\mathcal{N}(\alpha W, 1+\beta^2)$ and $P_{U}=\mathcal{N}(0, 1+\beta^2)$. Then, the numerator in $KPC(U,V \mid W)$ becomes: $E[MMD^2(P_{U|W,V}, P_{U|W})]=E[(\alpha W+\beta V-\alpha W)^2]=\beta^2$. The denominator becomes: $E[MMD^2(\delta_{U}, P_{U|W})]=E[(U-\alpha W)^2]=\beta^2+1$. Hence, we can see that $KPC(U, V \mid W)$ reduces to the squared partial correlation of $U$ and $V$ given $W$: $KPC(U, V \mid W)=\rho^2_{U,V|W}$. In this simple case, $U$ is conditionally independent of $V$ given $W$ iff the partial correlation/KPC is 0 ($\beta = 0$). KPC is not the only choice in our setting, and one can choose other conditional dependence measures that allow multi-dimensional inputs (e.g., CODEC [2], though it can be viewed as a special case of KPC). 
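To make the linear-Gaussian reduction above concrete, here is a small numerical sketch (our illustration only, not part of the FairICP implementation): in this model, the squared sample partial correlation of $U$ and $V$ given $W$ should approach $\beta^2/(\beta^2+1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, beta = 200_000, 1.0, 0.7
W = rng.standard_normal(n)
V = rng.standard_normal(n)
U = alpha * W + beta * V + rng.standard_normal(n)

def residualize(y, x):
    # Residual of a least-squares regression of y on x (no intercept:
    # all variables are mean-zero by construction).
    return y - (x @ y) / (x @ x) * x

# Squared partial correlation of U and V given W.
partial_r2 = np.corrcoef(residualize(U, W), residualize(V, W))[0, 1] ** 2
theory = beta**2 / (beta**2 + 1)
print(round(partial_r2, 3), round(theory, 3))  # both approximately 0.33
```

Setting $\beta = 0$ in the same simulation drives the partial correlation to zero, matching the conditional-independence case.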
In our paper, the robustness of KPC is evidenced by the similar trends given by other metrics (power of the hypothesis test and DEO; Figures 4/7/8 and Table 1). We thank the reviewer again for this valuable question and we have included some of this intuition in our revised paper. 2. "Why do the experiments not compare ICP to some methods?" **Reply.** We apologize for this confusion. This is due to differences in their applicability: HGR [3] is designed for univariate continuous $A$ and Reduction [4] is only for binary classification and binary attributes. We introduce HGR [3] (in line 105-109) and detail how we generalize it to multiple attributes (line 436-439). We cite Reduction [4] in line 083 and detail it in line 422-425. We have now made this clearer in our revised paper. We have also added two more baselines in experiments: FDL [7] and Kearns et al. [10] (introduced in line 096). [Link for experiments added](https://docs.google.com/document/d/1CLwxWwBRsVrYhhIfeC3_IsDBZ0mAoRiZVWWGLv4DDIw/edit?usp=sharing). We also ran [10] for Adult but found it failed to converge within the limited time given (also evidenced by Figure 1 in its paper). Nonetheless, we agree this could be a good complement and we will update it once it's done. 3. "Consider post-processing" **Reply.** Thank you for raising this interesting direction. Post-processing and in-processing conventionally represent two different paradigms: the latter focuses on training $f(X)$, whereas the former recalibrates the probability cutoffs across binary $A$ for a binary response (e.g., [6]) and requires test-time access to $A$. Thus, we view the two as complementary rather than competing. In fact, one recent work by Tifrea et al. [8], a rare post-processing work that can handle both categorical and continuous $A$, proposed to post-process any baseline model, either ERM or in-processing ones. 
Like many in-processing methods, existing post-processing ones are not designed for complex $A$, especially under the equalized odds setting. This also raises the question of whether the idea of ICP can also be utilized in post-processing. We have now included these discussions as future directions in our discussion section. ## References [1] Huang et al. "Kernel Partial Correlation Coefficient---a Measure of Conditional Dependence." [2] Azadkia et al. A simple measure of conditional dependence. The Annals of Statistics [3] Mary et al. Fairness-aware learning for continuous attributes and treatments. [4] Agarwal et al. A reductions approach to fair classification. [6] Kim et al.: Black-Box Post-Processing for Fairness in Classification. [7] Romano et al. Achieving equalized odds by resampling sensitive attributes. [8] Tifrea et al. "Frappé: A group fairness framework for post-processing everything." [9] Tolstikhin et al. "Minimax estimation of maximum mean discrepancy with radial kernels." [10] Kearns et al. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. --- Rebuttal Comment 1.1: Comment: I appreciate the clarifications regarding the experiment. --- Reply to Comment 1.1.1: Comment: Thank you for carefully reading our paper and sharing your questions and comments. We hope we have addressed your concerns, and we would be happy to discuss further if any questions remain.
Summary: The authors tackle the problem of enforcing equal odds during training, where the protected attribute is multi-dimensional. They propose FairICP, which constructs a synthetic sensitive attribute via a procedure which involves sampling from a learned $q(Y \mid A)$. Then, a discriminator network is added to distinguish between the real and fake $(Y, A, \hat{Y})$, and the model is penalized if these are distinguishable. The authors evaluate their method against the baselines and find that it achieves a better Pareto front. ## update after rebuttal I have gone through the rebuttal and the comments from other reviewers. I thank the authors for their additional experiments. However, I believe the lack of novelty (W1) is still a significant weakness that I am not satisfied with. Further, I still believe that the potential application of the method is limited (W2), as it is only useful in cases where $A$ is high-dimensional enough that there is a significant difference in the estimation of $Y|A$ vs. $A|Y$, but not so high-dimensional that both fail (and where Kearns et al. (2018) would presumably be better). Finally, as also raised by other reviewers, the performance/fairness improvements are not consistent over the baselines, especially in the rebuttal results. For these reasons, I will keep my score. Claims And Evidence: 1. The novelty of the paper over Romano et al. (2020) is rather limited. Algorithm 2 is identical to this prior work, and the overall architecture (and Algorithm 1) are essentially the same as this prior work with a small tweak to the sampling which is a simple application of Bayes rule. As such, I do not believe it presents a sufficiently novel algorithmic or theoretical contribution. 2. The main motivation of this work is to allow for equal odds when the attribute is multi-dimensional. However, as the dimension of A grows, the estimation of $Y \mid A$ will become worse due to smaller per-group sample size. 
This seems to limit the utility of the method, and the authors have not characterized this theoretically. 3. The authors state (L94) that "While several studies have successfully addressed demographic parity under multiple sensitive attributes via permutations(Kearns et al., 2018; Creager et al., 2019), these ideas can not be trivially generalized to equalized odds since permutation of A conditional Y is still difficult and does not circumvent the challenge in distribution estimation of multidimensional complex attributes". I do not believe this is true for Kearns et al. (2018), as they specifically examine equalizing FPR in their paper (and equalizing FNR is also possible by symmetry), and do not require estimating the conditional density. The authors should add this method as a baseline. Methods And Evaluation Criteria: 1. The Adult and COMPAS datasets are not great testbeds for the method as the number of groups (intersections of sex and race) is only 4. Thus, given that these are also binary classification problems, it is possible to estimate the conditional probability distribution by simple data analysis (without training any models), and any in-processing equal odds method would work here. For these datasets, the authors should at least show FDL as a baseline. They should also compare against the line of work on fairness via adversaries (e.g. [1]), which has a similar architecture but does not require conditional randomization. 2. The authors should show ERM as a single point on all Pareto plots. [1] Learning Adversarially Fair and Transferable Representations. ICML 2018. Theoretical Claims: I did not check any proofs in detail. Experimental Designs Or Analyses: Please see "Methods And Evaluation Criteria". Supplementary Material: I skimmed through Appendix A and B. Relation To Broader Scientific Literature: The work is one of many that enforces equal odds during supervised learning. 
However, the paper distinguishes itself by examining the case of multidimensional attributes, where existing methods would not work or are less effective. The proposed method relies heavily on the idea of conditional randomization (Candes et al., 2018), which has been applied to enforce equal odds for univariate attributes in Romano et al. (2020). The main contribution of the paper is an alternative sampling procedure which adapts Romano et al. (2020) to the multi-attribute setting. Essential References Not Discussed: Please see above. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. Empirically, the DEO is still quite large (Figure 8 and Table 1). Is this because DEO is measuring fairness for binary $\hat{Y}$ whereas the method enforces equal odds wrt the continuous score? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the feedback from the reviewer. Here are our responses: 1. Novelty of the paper over FDL. **Reply.** Thank you for sharing your concern. We agree that our core insight is not complicated, yet it has not been previously explored and is effective. 1) While we adopted adversarial learning (Algorithm 1) as in FDL [1], the main contribution of our work is a better way of generating synthetic $\tilde A$ for equalized odds with multi-dimensional (mixed) $A$ (line 090-096). FDL does not discuss how to estimate $q(A \mid Y)$ when $A$ is complex, which is important for fair ML. Also, the nature of conditioning for equalized odds makes it more difficult than the demographic parity setting (e.g., prior work using VAE [4]). As far as we are aware, no other proposal has been designed for our setting. The ICP sampling scheme guides training and can measure equalized odds (Algorithm 2). The reported KPCs reflect the same trend as the hypothesis testing (Algorithm 2) when the ground truth is known (Figure 3). This result is useful as the ground truth is unknown in practice, which makes results from Algorithm 2 subject to density estimation, but the KPC itself is not (line 354-360). 2) While our proof uses Bayes rule, the ICP procedure has not been proposed before. The simplicity of the proof does not contradict its usefulness. For example, one influential result in classical density ratio estimation frames it as a classification problem [7]. The validity of this transformation also rests only on Bayes rule, but that does not reduce its novelty or utility. 2. "The estimation of $Y | A$ gets worse with high-dim A." **Reply.** We apologize for the confusion. If the problem is intrinsically hard, e.g., there are $n$ samples with each $(Y_i, A_i)$ being drastically different, even the distribution of $Y|A$ will be poorly estimated. However, estimation of $A|Y$ is no easier in this setting. 
The gains of using $Y|A$ are twofold (line 110-124): 1) Machine learning approaches can more effectively leverage the structure of $Y | A$ than the direct estimation of $A|Y$. E.g., when $(Y, A)$ are jointly Gaussian with sparse connections, the former can provide better quality by using the lasso (MSE bound $\sqrt{\frac{s\log p}{n}}$ under a sparsity condition with sparsity level $s$; e.g., [8]), whereas the latter is prone to more errors even when using the graphical lasso (Frobenius bound $\sim\sqrt{\frac{(s^2+p)\log p}{n}}$; e.g., [6]), which is also evidenced by our Figure 2. These bounds on density estimation will in turn bound the quality of the conditional permutation (Appendix C.1). 2) Modeling $Y \mid A$ avoids directly estimating complex dependencies in multidimensional $A$ (e.g., continuous and binary variables). Instead, dependence in $A$ is naturally incorporated through $Y \mid A$, simplifying implementation. We have now clarified this further in section 2.2. 3. More baselines **Reply.** We thank the reviewer for this valuable suggestion. We add two more baselines in experiments: FDL [1] and Kearns et al. [2]. [Link for experiments added](https://docs.google.com/document/d/1CLwxWwBRsVrYhhIfeC3_IsDBZ0mAoRiZVWWGLv4DDIw/edit?usp=sharing). We also ran [2] for Adult but found it failed to converge within the limited time given (also evidenced by Figure 1 in its paper). Nonetheless, we agree this could be a good complement and we will update it once it's done. For LAFTR, we find it can only be applied to a single binary $A$, which differs from our main target in this paper. 5. "The DEO is still quite large." **Reply.** We apologize for the confusion. We calculated DEO (line 823) as the sum of TPR and FPR differences as in [9], while some work [10] defines it as the max of differences, which explains why our results appear to be large. Our method still provides generally smaller DEOs, which indicates the consistency of KPC and DEO (Figure 8 and Table 1). ## References [1] Romano et al. 
Achieving equalized odds by resampling sensitive attributes. [2] Kearns et al. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. [3] Madras et al. Learning Adversarially Fair and Transferable Representations. [4] Creager et al. "Flexibly fair representation learning by disentanglement." [5] Tansey et al. "The holdout randomization test for feature selection in black box models." [6] Wang et al. "Precision matrix estimation in high dimensional gaussian graphical models with faster rates." [7] Menon et al. Linking losses for density ratio and class-probability estimation. [8] Wainwright et al. High-dimensional statistics: A non-asymptotic viewpoint. [9] Cho et al. A fair classifier using kernel density estimation. [10] Agarwal et al. A reductions approach to fair classification.
Fast Exact Unlearning for In-Context Learning Data for LLMs
Accept (poster)
Summary: This paper proposes ERASE, an in-context learning method combined with quantized k-means clustering for exact unlearning in LLMs. ERASE combines in-context learning with quantized k-means clustering, aiming to achieve dataset- and model-independent unlearning costs while maintaining competitive performance. A "holistic unlearning cost" metric is introduced to analyze trade-offs between inference and unlearning efficiency. The authors compare ERASE to SISA-based baselines on Big-Bench tasks, demonstrating reduced unlearning costs and comparable accuracy. Claims And Evidence: 1. The paper claims dataset-independent unlearning costs based on quantized k-means, but fails to provide adequate justification for why this specific combination would be uniquely effective for LLMs. While quantized k-means is a known technique, its application to LLM embeddings lacks a foundation, particularly regarding stability in high-dimensional spaces. 2. The proposed metric C(M) for Holistic Unlearning Cost represents a ratio between unlearning and inference costs, but without proper empirical validation, and it lacks experimental measurements of actual computational costs (such as GPU time and memory usage) to substantiate its claims. Methods And Evaluation Criteria: - The methodology section suffers from insufficient comparisons. The authors did not compare their approach against alternative clustering methods (like hierarchical clustering), making it difficult to substantiate claims of method superiority. - The choice of Big-Bench as an evaluation benchmark is questionable, as it was not designed for unlearning tasks. - The paper lacks comparisons with existing in-context unlearning methods, particularly the work by Pawelczyk et al. [Pawelczyk et al.] Pawelczyk, M., Neel, S., and Lakkaraju, H. In-context unlearning: Language models as few shot unlearners. arXiv preprint arXiv:2310.07579, 2023. 
Theoretical Claims: N/A Experimental Designs Or Analyses: - The paper lacks a proper unlearning evaluation framework. Specifically, there is no clear distinction between forget sets and remaining sets, nor is there a comprehensive evaluation of both model utility preservation and forgetting ability. - Regarding the forgetting ability, the authors do not employ standard unlearning metrics, such as membership inference attacks or data extraction leakage tests, which are crucial for validating privacy guarantees. Supplementary Material: Yes, I reviewed the supplementary materials, focusing on Appendix A (prompt formatting and scoring), Appendix B (cost analysis and convergence data), and Appendix C (particularly sections C.3-C.4 on limitations and future directions). Relation To Broader Scientific Literature: While most previous work focused primarily on model fine-tuning approaches for unlearning, this paper introduces a novel in-context learning perspective. Additionally, the paper attempts to address a critical gap in evaluation methodology by proposing a holistic unlearning cost framework. Essential References Not Discussed: Although the paper cites Pawelczyk et al.'s work on in-context unlearning in the related work section, it fails to provide any meaningful discussion or comparison with this directly relevant approach. [Pawelczyk et al.] Pawelczyk, M., Neel, S., and Lakkaraju, H. In-context unlearning: Language models as few shot unlearners. arXiv preprint arXiv:2310.07579, 2023. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and time. We respond to specific concerns below. > The paper claims dataset-independent unlearning costs based on quantized k-means, but fails to provide adequate justification for why this specific combination would be uniquely effective for LLMs. While quantized k-means is a known technique, its application to LLM embeddings lacks a foundation, particularly regarding stability in high-dimensional spaces. We wish to point out that our experiments showed Q-Kmeans was as effective as ACoT (which uses standard kmeans++) on the LLM eval datasets we tested, and also as effective as parameter fine-tuning with SISA. While in general this might not be true if we change the embedding dimension used in ERASE, a contribution of our paper is showing that for the tasks we tested (and when using BERT embeddings) it was effective. > The proposed metric C(M) for Holistic Unlearning Cost represents a ratio between unlearning and inference costs, but without proper empirical validation, and it lacks experimental measurements of actual computational costs (such as GPU time and memory usage) to substantiate its claims. Our experiments did measure actual computational costs. The results in Tables 3 and 4 (in the appendix) computed the FLOPs for training and inference, which are used to directly compare between methods (relative improvements are shown in Table 1). We use FLOPs as opposed to GPU time and memory usage because FLOPs are independent of available hardware and, thus, more accurately capture algorithmic costs. > The methodology section suffers from insufficient comparisons. The authors did not compare their approach against alternative clustering methods (like hierarchical clustering), making it difficult to substantiate claims of method superiority. There are a limited number of efficiently unlearnable clustering algorithms. We are unaware of efficient unlearning operations for hierarchical clustering. 
> The choice of Big-Bench as an evaluation benchmark is questionable, as it was not designed for unlearning tasks. We agree other datasets would be beneficial, but want to point out our evaluation was over 15 different tasks. These tasks already allowed us to conclude that whether in-context learning or fine-tuning is better is data dependent. > The paper lacks comparisons with existing in-context unlearning methods, particularly the work by Pawelczyk et al. In this paper we obtain exact unlearning guarantees, while the mentioned paper (also cited in our related work) only does approximate unlearning (and without any guarantees). It is currently difficult to comprehensively evaluate such approximate unlearning methods, e.g., [1], which further motivates our paper. We will make this clearer in our related works section. [1] Hayes, Jamie, et al. "Inexact unlearning needs more careful evaluations to avoid a false sense of privacy." arXiv preprint arXiv:2403.01218 (2024). > The paper lacks a proper unlearning evaluation framework. Specifically, there is no clear distinction between forget sets and remaining sets, nor is there a comprehensive evaluation of both model utility preservation and forgetting ability. To clarify, this distinction is only present in approximate/heuristic unlearning, where effectiveness depends on the sets. Our method is an exact unlearning method, which by definition always works. Our evaluation thus focuses on exact unlearning compute. > Regarding the forgetting ability, the authors do not employ standard unlearning metrics, such as membership inference attacks or data extraction leakage tests, which are crucial for validating privacy guarantees. As mentioned earlier, our method is an exact unlearning method and so by definition will always have perfect unlearning performance under the approximate unlearning metrics.
To elaborate, we are producing exactly the retrained models, and those metrics measure differences (of approximately unlearnt models) to the retrained model. > Although the paper cites Pawelczyk et al.'s work on in-context unlearning in the related work section, it fails to provide any meaningful discussion or comparison with this directly relevant approach. See our response to the earlier question about this paper; we will add more discussion elaborating the difference between their approximate unlearning approach and the guarantees of exact unlearning (which our method provides) in our revised draft.
Summary: The paper proposes an exact unlearning algorithm, ERASE, for in-context learning. The core unlearning idea revolves around performing exact unlearning in AutoCOT, which clusters in-context examples using their sentence representations and uses the samples close to the cluster centroids as the ICL examples. ERASE leverages the quantized k-means algorithm to perform exact unlearning that efficiently updates the clustering process to remove deleted samples. The paper introduces a holistic evaluation strategy by accounting for both the unlearning and inference costs. The paper also provides several experiments on BigBench datasets comparing ERASE with baseline algorithms. Claims And Evidence: **Strengths**: 1. The paper is fairly easy to follow. 2. The paper introduces a neat paradigm for performing exact unlearning while using in-context learning. The paper proposes ERASE, an efficient exact unlearning algorithm to delete instances when using ACoT. The core idea is generalizable and can be utilized in any algorithm using clustering to select ICL examples. 3. The paper prioritizes an important aspect of unlearning, where unlearning and inference costs are often at odds. It introduces a metric that captures both costs, allowing for a holistic evaluation of an unlearning algorithm's performance. 4. Empirical evaluation shows that the proposed method, ERASE, achieves similar performance to baseline algorithms even after unlearning several instances. **Weaknesses**: 1. The novelty of the technical contribution is weak. This is because the proposed approach uses the exact algorithm for unlearning used in clustering and applies it in ICL where clustering is involved. I understand that the algorithm has been used in a new application and other metrics for evaluating unlearning have been proposed. However, the technical contribution is weak because the core unlearning technique is borrowed from previous work and applied in a new setting. 2. 
Some of the underlying assumptions related to this work need to be mentioned. For example, for exact unlearning in this setup, the underlying assumption is that the ICL examples were not used during any stage of pre-training or instruction tuning. 3. The paper focuses on unlearning for a specific approach, ACoT. Although the paper motivates its advantage over vanilla CoT and showcases the results in Fig. 2 & 3, its significance over Random CoT (random ICL example) selection is unclear in the context of unlearning. I understand that ACoT achieves better results than random selection. But since Random CoT can perform O(1) deletion, the ideal way to evaluate would be to show the tradeoff between per-query deletion cost and overall performance (normalized aggregate score). In the current version, the paper reports only one of the normalized scores or the inference cost at a time. 4. The above point also raises a question about the holistic evaluation approach. Apart from inference and unlearning costs, shouldn’t we also consider the overall performance of the algorithm as a dimension? For example, Random CoT has the same inference cost as ACoT and a much lower unlearning cost, but its overall performance is poor compared to ACoT. Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: No Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Line 102: “While no past work studied unlearning specifically for the fine-tuning stage (i.e., task adaptation stage) of LLMs” — this claim isn’t accurate. There is this recent paper [1] that exclusively focuses on erasing fine-tuning data. Moreover, existing exact unlearning algorithms like SISA and others can be modified to adapt to the fine-tuning setting. [1] https://arxiv.org/pdf/2406.16257 Questions For Authors: Please respond to the weaknesses.
Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We discuss specific points below. > Some of the underlying assumptions related to this work need to be mentioned. For example, for exact unlearning in this setup, the underlying assumption is that the ICL examples were not used during any stage of pre-training or instruction tuning. That is correct, and we will make that assumption clearer in our revised draft. > The paper focuses on unlearning for a specific approach, ACoT. Although the paper motivates its advantage over vanilla CoT and showcases the results in Fig. 2 & 3, its significance over Random CoT (random ICL example) selection is unclear in the context of unlearning. I understand that ACoT achieves better results than Random selection. But since Random CoT can perform O(1) deletion, the ideal way to evaluate would be to have a tradeoff between per query deletion cost and the overall performance (normalized aggregate score). In the current version, the paper reports only the normalized scores or inference cost at a time. We would like to point out one more subtlety, which is that ACoT and ERASE do not always perform better than random; this is dataset dependent. We decided to present solely the performance cost as it was already explained earlier in the paper that Random CoT has O(1) cost, so it is up to the model provider to decide if the performance loss (on some datasets) is worth the cheaper unlearning cost. It is not clear to us how to value performance gains relative to unlearning computation. We will add this discussion elaborating the performance vs. unlearning consideration to our revised draft (in Section 5.2 if space allows). > The above point also raises a question about the holistic evaluation approach. Apart from inference and unlearning costs, shouldn’t we also consider the overall performance of the algorithm as a dimension to consider?
For example, Random CoT has the same inference cost as ACoT and a much lower unlearning cost but the overall performance is poor compared to ACoT. See our response to the previous question, which provides a detailed discussion on the context-dependent nature of the performance and unlearning comparison: it is not clear to us how to value performance with changes in compute. This was unlike inference and unlearning operations, which are both measured with the same computational units (e.g., FLOPS). > Line 102: “While no past work studied unlearning specifically for the fine-tuning stage (i.e., task adaptation stage) of LLMs” — this claim isn’t accurate. There is this recent paper [1] that exclusively focuses on erasing fine-tuning data. Moreover, existing exact unlearning algorithms like SISA and others, can be modified to adapt to the fine-tuning setting. Thank you for bringing this paper to our attention! We will change our claim to mention this work, but note that it actually does not satisfy exact unlearning (according to our reading of their paper). Specifically, they propose to revert to a checkpoint that did not observe the datapoint to unlearn, but this does not reproduce the a priori trained model that would come from not having a datapoint in that slice. Furthermore, the selection of models amongst those trained from many permuted slices again means we do not reproduce the a priori distribution of models coming from not having a datapoint. This latter technique also seems analogous to the forging attack to fool exact unlearning [1]. More generally, the mentioned paper’s implicit definition of exact unlearning (which is that anything that does not “use” a datapoint is exact unlearning) falls under the algorithmic definition issues pointed out in [1]; unlearning needs to be defined w.r.t. a specific training/unlearning algorithm, as allowing anything that does not use a datapoint means we can trivially unlearn.
After the submission cycle we will reach out to those authors to let them know of this issue. Note, the exact unlearning definition we follow removes this issue, by defining a specific training algorithm and unlearning with respect to it. [1] Thudi, Anvith, et al. "On the necessity of auditable algorithmic definitions for machine unlearning." 31st USENIX security symposium (USENIX Security 22). 2022.
Summary: This paper studies unlearning for in-context learning task adaptation, which is claimed to be understudied. Unlearning is mostly studied in settings where parameter updates are required, such as with SISA. The overall method is a follow-up to ACoT in a new setting, unlearning (ACoT was proposed in the learning paradigm), where the authors use quantized kmeans++. The improvement shown is in the efficiency of unlearning. Claims And Evidence: Some of the claims in the contribution are difficult to verify. For instance: 1. The first claim in line 073-074 is vague and I am not sure how to verify it. 2. Claims 3 and 4 are also not novel and are vague. I request the authors to re-write the whole summary of contributions clearly. Methods And Evaluation Criteria: The authors need to explain well why unlearning during in-context learning (i.e. no change in parameters) is an important problem. A naive baseline in this setting is removing samples from the in-context examples rather than analyzing the embeddings. The authors have explained this point from line 161-172; however, this naive baseline is missing from the results. The evaluation criteria are taken from prior works and seem reasonable. A key limitation of this work, based on my understanding, is that in-context fine-tuning doesn't work well for preference or instruction tuning like settings. So, ERASE in its current form can't be applied to this important and widely used setting of fine-tuning. Theoretical Claims: No theoretical claims; I have verified that the inference cost is correct in Table 1. Experimental Designs Or Analyses: The evaluation metrics are taken from prior works and seem reasonable. Some of my concerns: 1. The comparison to SISA in sec 5.3 and 5.4 seems out of place, as SISA is built for unlearning gradient-based methods and ERASE is unlearning in-context fine-tuning. 2. Why is the naive baseline of removing samples from context not reported? 3. The model and training details are very confusing to me.
For instance, which LLaMA? 4. I think the evaluation needs to be extended to some harder tasks such as math generation (GSM8K), MMLU, HellaSwag, and other benchmarks commonly used in the LLM literature for fine-tuning. Supplementary Material: No, I did not. Relation To Broader Scientific Literature: This paper extends ACoT [1] to the unlearning setting and employs quantized kmeans++ instead of regular kmeans++. [1] Automatic chain of thought prompting in large language models, Zhang et al., ICLR 2022. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths 1. This paper studies a new direction. Weaknesses, apart from what was already discussed: 1. The writing can be further improved. 2. The novelty of the approach seems a bit limited as it directly extends ACoT [1] to unlearning. But since this is a fairly new area, in my opinion this is not a key concern. Other Comments Or Suggestions: Already discussed above. Questions For Authors: Do you believe ERASE can be extended for unlearning instruction samples from an instruction tuning dataset? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and time. > Some of the claims in the contribution are difficult to verify for instance. 1) The first claim in line 073-074 is vague and I am not sure how to verify it. 2) The claim 3 and 4 are also not novel and vague. I request the authors to re-write the whole summary of contributions clearly. Thank you for pointing this out; we propose to change the contributions bullet points as follows: 1) Showing that for certain datasets, exact unlearning can be efficient by using in-context learning. 2) Proposing an exact unlearning algorithm, ERASE, for in-context learning, which has dataset- and model-size-independent unlearning operation costs. 3) A study of when in-context learning provides more efficient unlearning deployments than fine-tuning (and repeating fine-tuning from scratch to unlearn), including changes to inference costs. > The authors need to explain well why unlearning during in-context learning (i.e. no change in parameters) is an important problem. A naive baseline in this setting in removing samples from incontext rather than analyzing the embeddings. The authors have explained this point from line 161-172, however this naive baseline is missing from results. To clarify, we understand the reviewer to be referring to random in-context selection, whose unlearning operation is to replace the selected example (by a rejection-sampling argument, one can see this samples from the correct unlearnt distribution). We have comparisons to this in Figure 2. We wish to also clarify that only removing the unlearnt in-context example is not a correct unlearning operation for any in-context learning algorithm. Specifically, unlearning is an algorithmic definition, and if the original selection used group statistics (e.g., clustering), we must simulate those decisions on the dataset without the example to unlearn.
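The replacement operation for random selection described above can be sketched as follows (a simplified illustration with hypothetical names, not code from the paper): the deleted example is dropped from the pool and, if it was among the chosen in-context examples, a fresh uniform replacement is drawn from the still-unselected pool.

```python
import random

def select_random(pool, k, rng):
    # Random in-context selection: k distinct examples, uniformly at random.
    return rng.sample(pool, k)

def unlearn_random(selected, pool, deleted, rng):
    # Cheap unlearning for random selection: drop the deleted example from
    # the pool and, if it was among the chosen in-context examples,
    # resample a replacement uniformly from the still-unselected pool.
    # By the rejection-sampling argument above, the result is distributed
    # like a fresh uniform draw from the pool without the deleted example.
    pool = [x for x in pool if x != deleted]
    if deleted in selected:
        selected = [x for x in selected if x != deleted]
        remaining = [x for x in pool if x not in selected]
        selected = selected + [rng.choice(remaining)]
    return selected, pool
```

Contrast this with clustering-based selection, where deleting one point can change the clusters themselves, so the group-level decisions must be re-simulated.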
Motivating this, clustering-based in-context learning has been observed to occasionally outperform random in-context selection, and a contribution of this paper is to note this is still efficient to unlearn. We will incorporate this previous paragraph into the related works to help motivate in-context unlearning. > A key limitation of this work based on my understanding is in-context fine-tuning doesn't work well for preference or instruction tuning like settings. So, ERASE in it's current form can't be applied to this important and widely used setting of fine-tuning. We will acknowledge this limitation in a revised draft. To be clear, the main claim of the paper is that for some fine-tuning datasets, exact unlearning can be easy, as in-context learning can be as performant as fine-tuning. > The comparison to SISA in sec 5.3 and 5.4 seems out of place as SISA is built for unlearning gradient based methods and ERASE is unlearning in-context finetuning. The claim of our paper is that, when possible, we should do in-context learning instead of fine-tuning as it is much more efficient to unlearn. To show this we must compare in-context methods (ERASE) to their fine-tuning based counterparts (SISA). These sections show that in-context methods have comparable performance (on the datasets we tested) while dramatically reducing unlearning cost. > Why the naive baseline of removing samples from context not reported? It is in Figure 2; see our response to an earlier comment. > The model and training details are very confusing to me. For instance which LLaMA? As mentioned on line 267 we use the original LLaMA model from Touvron et al. (2023). The “Fine-Tuning Setup,” “Hyperparameter Selection,” and “Prompt Formatting” sub-sections, lines 305-329, outline what we believe to be the most important training hyperparameters. In our revision we will improve readability by adding a table to our appendix showing a complete list of all training hyperparameters.
> I think the evaluation needs to be extended some harder tasks such as math generation gsm8K, MMLU , hellaswag and other metrics commonly used in LLM literature for fine-tuning. Our existing evaluation includes 15 tasks capturing a variety of difficulties, and already allows us to conclude that whether in-context learning or fine-tuning is better is data dependent. > Do you believe ERASE can be extended for unlearning instruction samples from an instruction tuning dataset? We see no issue in using ERASE on other datasets where in-context learning is possible. However, the primary question is whether in-context learning is effective at learning from that dataset, and as pointed out by the reviewer, in-context learning tends to perform poorly on instruction tuning.
Summary: This paper proposes a novel exact un-learning approach for in-context learning. Given a training sample, the goal of exact unlearning is to obtain, as quickly as possible, the algorithm that one would have obtained without training on that data sample. The authors study this problem in the context of in-context learning, where the algorithm is “trained” by seeing several examples from the target dataset. Auto chain of thought is an efficient method for ICL, which clusters the examples into k clusters and only uses the k examples closest to the centroids for ICL. The authors propose instead to use a quantized k-means, which enables fast exact un-learning. The resulting method, ERASE, is then compared against other unlearning methods for ICL, like random selection (which performs worse than ACoT but makes unlearning easier) and ACoT (which requires re-running k-means at each unlearning step). The authors also compare their method to unlearning with standard SGD-based fine-tuning, SISA. The authors explore the results in the space of resulting inference cost and cost of unlearning. # Edit after rebuttal + discussion - This paper studies exact unlearning in the context of in-context learning. In-context learning is a very important topic of research these days, as it is an efficient method to adapt models for some tasks (not all, of course). Therefore, developing un-learning algorithms for ICL is a very important research direction. This paper is the first paper to propose such algorithms for exact un-learning. With that in mind, I think that the main criticism of s2nr, that "*in-context fine-tuning doesn't work well for preference or instruction tuning like settings. So, ERASE in it's current form can't be applied to this important and widely used setting of fine-tuning*", is not a valid criticism: ERASE is not supposed to work where ICL does not; it is clearly not this paper's purpose to make ICL work for these tasks.
In my view, showing that the proposed method works in settings where ICL is beneficial is enough to validate the method. - I also disagree with rev. 6e2H that there is little novelty: indeed, this paper is the first to propose exact un-learning algorithms for ICL. Unlearning and ICL are both important topics, and having even a baseline for unlearning ICL is important for the community as a whole. In my view, this paper fills an important gap. The fact that the method is simple is, in my view, a feature of the method and not a weakness. The method is novel and simple. - I also found the response of the authors to rev. ygwC convincing; the bulk of this reviewer's criticism is about comparison to approximate un-learning methods; but comparing approximate and exact un-learning methods is notoriously hard to do; here the method proposed by the authors provably un-learns (of course, this trades off against efficiency compared to approximate unlearning, but again, in my view, it is not this paper's battle to benchmark approximate vs exact unlearning). Therefore, I still think that this is an important article that should be accepted. Claims And Evidence: - ICL is a very popular approach; designing algorithms for exact unlearning on it is a novel and promising research direction - the proposed method is sound, and the theory behind it is both simple to understand and elegant - the empirical evaluation is quite thorough - the main weakness to me is the title and positioning itself. It is the first time that I see ICL defined as a fine-tuning method. I think that for the vast majority of the community, fine-tuning means using a gradient-based approach to tune or add parameters to the model, while ICL does not use gradient descent. I think that “fast exact unlearning for in-context learning data for LLMs” or something along those lines would make what the paper is about much clearer.
Methods And Evaluation Criteria: - the proposed method is sound, and the theory behind it is both simple to understand and elegant - gradient based fine-tuning and ICL have very different effects on the model. gradient based fine-tuning scales much better when the dataset size increases (and is worse at small scales), and it incurs no inference overhead. I think that the experiments could do a better job at exploring this space. - the error bars in most experiments are huge, it would be good to have a sense of the statistical significance of the results of the paper. Theoretical Claims: - the proposed method is sound, and the theory behind it is both simple to understand and elegant Experimental Designs Or Analyses: - the error bars in most experiments are huge, it would be good to have a sense of the statistical significance of the results of the paper. Supplementary Material: no Relation To Broader Scientific Literature: - the paper "In-context unlearning: Language models as few shot unlearners.” seems related, it would be good to clarify more the differences with the present approach. Essential References Not Discussed: - the paper "In-context unlearning: Language models as few shot unlearners.” seems related, it would be good to clarify more the differences with the present approach. Other Strengths And Weaknesses: - the paper is very easy to follow, it was a joy to read Other Comments Or Suggestions: - "this, we have unlearning the fine-tuning data is independent of the model and dataset size” unclear to me - “Showing that learning with access to an LLM can allow for faster exact unlearning.” unclear to me - “makes undoing x fast” unlearning? - “our models converge after one epoch of SISA finetuning.” isn’t it contrary to the conventional wisdom that repeating data a few epochs improves performance? - It would be good to clarify the costs of inference for ICL. 
The fact that it is linear with the number of tokens puzzles me; my understanding is that attention is quadratic in the number of tokens. One can use caching to make the cost linear with the ICL prompt length, but that cost still seems quadratic the first time it is done. - I would use in fig 3 a different color palette than that in fig 2 Questions For Authors: - would it be possible to extend the results in table 2 to a 2D plot, where the x axis is unlearning cost and the y axis is inference cost? - In fig 2, how can such a slight modification of the algorithm yield such changes? Can we conclude anything based on this, looking at the error bars? - How does the method scale with dataset size? I expect that at large dataset sizes, ICL starts lagging behind fine-tuning. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback! Below we discuss questions raised in the review. Other suggestions will be implemented in our revised draft. >the main weakness to me is the title and positioning itself. It is the first time that I see ICL defined as a fine-tuning method... We agree that the terminology in-context learning is more clear than fine tuning and propose to change the title (“fast exact unlearning for in-context learning data for LLMs” sounds good to us) and contributions accordingly. See our response to Reviewer s2NR for specific changes to the contribution bullet points. To clarify, this does not change any of the methodology or experimental setup and the results hold. > the error bars in most experiments are huge, it would be good to have a sense of the statistical significance of the results of the paper. While the error bars are large in terms of accuracy, the methods perform differently (in a statistically meaningful way) when it comes to unlearning cost. That is, the FLOPS required to unlearn varies between the methods, despite their accuracy being comparable. > the paper "In-context unlearning: Language models as few shot unlearners.” seems related, it would be good to clarify more the differences with the present approach. In this paper we obtain exact unlearning guarantees, while the mentioned paper (also cited in our related work) only does approximate unlearning (and that too without any guarantees). It is currently difficult to comprehensively evaluate such approximate unlearning methods, e.g., [1], which further motivates our paper. [1] Hayes, Jamie, et al. "Inexact unlearning needs more careful evaluations to avoid a false sense of privacy." > “our models converge after one epoch of SISA finetuning.” isn’t it contrary to the conventional wisdom that repeating data a few epochs improves performance? It is common with LLMs to only train for one epoch. 
Supporting this, we generally found that the training loss for our tasks reached near 0 within a single epoch; we hence stop at one epoch to avoid overfitting. We will add a note about this in the appendix with associated training loss graphs. > It would be good to clarify the costs of inference for ICL. The fact that it is linear with the number of tokens puzzles me, my understanding is that attention is quadratic in the number of tokens. One can use caching to make the cost linear with the ICL prompt length, but that cost still seems quadratic the first time it is done. Thank you for pointing this out. You are correct that the asymptotic relationship should be quadratic in terms of the number of tokens in the context. The asymptotic costs in Table 1 will be updated in the revision with “t” -> “t^2”. We note that this has a negligible effect on our experimental results as we manually measured inference FLOPS. The primary effect is improving ERASE’s relative asymptotic cost efficiency against the baseline SISA method. > In fig 2, how can such a slight modification of the algorithm yield such changes? Can we conclude anything based on this, looking at the error bars? We were unsure which “slight modification” you are referring to. We will list our responses to a couple of modifications: Random vs ACoT: Our results suggest that ACoT performance benefits are task dependent. The ACoT paper also states that it “matches or exceeds the performance of the CoT paradigm that requires manual designs of demonstrations." However, we believe further work is required to accurately understand this dependence. UMAP vs non-UMAP: The dimension of our text embeddings is large (1536). It is known that clustering may work poorly in high-dimensional settings. Thus, applying dimensionality reduction to our text embeddings could have potentially improved clustering results.
ACoT vs ERASE: We find that in-context learning performance is very sensitive to which examples are chosen, causing high variance for each method. We found on average ERASE and ACoT perform similarly, and believe a significant portion of the variance is intrinsic to the choice of in-context examples. > How does the method scale with dataset size? I expect that at large datasets sizes, ICL starts lagging behind fine tuning. We agree that if there is an abundance of data, and the task is sufficiently different from the pre-trained model’s data, fine-tuning will perform better. However, this comes with additional unlearning costs. We will make it clearer in the paper that our focus is on the cases where in-context learning performs similarly to fine-tuning. --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks for the clarification. > We agree that the terminology in-context learning is more clear than fine tuning and propose to change the title (“fast exact unlearning for in-context learning data for LLMs” sounds good to us) and contributions accordingly. See our response to Reviewer s2NR for specific changes to the contribution bullet points. To clarify, this does not change any of the methodology or experimental setup and the results hold. I think that this will indeed make the point of the paper clearer. > While the error bars are large in terms of accuracy, the methods perform differently (in a statistically meaningful way) when it comes to unlearning cost. That is, the FLOPS required to unlearn varies between the methods, despite their accuracy being comparable. I agree that the computational gains are meaningful. However, there are a few places in the manuscript where you claim that a method is better than another despite large error bars (e.g. *"We see that ERASE matches our outperforms ACoT on three of the four tasks, and similarly with random selection. Considering dimension reduction for ERASE , we observed it made slight improvements "*). 
These claims are void if they do not come with a statistical test; please report the p values you get with that experiment and update the caption accordingly. > In this paper we obtain exact unlearning guarantees, while the mentioned paper (also cited in our related work) only does approximate unlearning Thanks for the clarification > It is common with LLMs to only train for one epoch. Supporting this, we generally found that the training loss for our tasks reached near 0 within a single epoch These two sentences do not seem related to me; indeed, LLMs are pretrained on non-repeated data, but the loss is far from 0. The fact that the training loss is close to 0 within a single epoch is very surprising; how can the model memorize each data point with a single gradient step per mini-batch? Conventional wisdom when training LLMs is that data can be repeated ~3 times without hurting the generalization performance (see [1]). This casts some doubts on the practical implementation of SISA. > Thank you for pointing this out. You are correct that the asymptotic relationship should be quadratic in terms of number of tokens in the context. The asymptotic costs in table 1 will be updated in the revision with “t” -> “t^2”. We note that this has a negligible effect on our experimental results as we manually measured inference flops. The primary effect is improving ERASE’s relative asymptotic cost efficiency against the baseline SISA method. Thanks for the clarification. > We were unsure which “slight modification” you are referring to. We will list our response to a couple of modifications: I apologize for not being clear about this; I was referring to UMAP vs non-UMAP. Thanks for the clarification. > We agree that if there is an abundance of data, and the task is sufficiently different from the pre-trained model’s data, fine-tuning will perform better. However, this comes with additional unlearning costs.
We will make it clearer in the paper that our focus is on the cases where in-context learning performs similarly to fine-tuning.

Thanks; I understand it might be a lot of work, but an ablation regarding dataset size would help in having a broader picture.

[1] Muennighoff, Niklas, et al. "Scaling data-constrained language models." Advances in Neural Information Processing Systems 36 (2023): 50358-50376.

---

Reply to Comment 1.1.1:

Comment: Thank you for your response!

> I agree that the computational gains are meaningful. However, there are a few places in the manuscript where you claim that a method is better than another despite large error bars (e.g. "We see that ERASE matches or outperforms ACoT on three of the four tasks, and similarly with random selection. Considering dimension reduction for ERASE, we observed it made slight improvements"). These claims are void if they do not come with a statistical test; please report the p-values you get with that experiment and update the caption accordingly.

Yes, we completely agree; we will include p-values and adjust the claims in the paper accordingly. Below is a table containing the pairwise p-values between algorithms (ACoT, ERASE) across tasks, testing the null hypothesis that the means of two methods (for a given dataset) were the same. This is for the experiments in Figure 2 with no UMAP. We found most of the comparisons to be significant, and bolded those with p-values $< 0.05$.
| Task | Algorithm | ACoT | ERASE |
|---------------------|-----------|--------------|--------------|
| **Disambiguation QA** | Random | **0.0000** | 0.384 |
| | ACoT | -- | **0.0000** |
| **Fantasy Reasoning** | Random | 0.6436 | **0.0029** |
| | ACoT | -- | **0.0005** |
| **Implicatures** | Random | 0.9695 | 0.4054 |
| | ACoT | -- | 0.3133 |
| **Intent Recognition** | Random | **0.0000** | 0.5728 |
| | ACoT | -- | **0.0000** |

> Thanks; I understand it might be a lot of work, but an ablation regarding dataset size would help in having a broader picture.

Yes, we agree; we currently don't believe we have the resources to run it in time for the rebuttal, but will at the very least acknowledge this limitation in our revised draft.

> These two sentences do not seem related to me; indeed, LLMs are pretrained on non-repeated data, but the loss is far from 0. The fact that the training loss is close to 0 within a single epoch is very surprising; how can the model memorize each data point with a single gradient step per mini-batch? Conventional wisdom when training LLMs is that data can be repeated ~3 times without hurting generalization performance (see [1]). This casts some doubts on the practical implementation of SISA.

We think the difference is that we are fine-tuning a pre-trained model, rather than pre-training a model from scratch. We believe the fact that just one epoch suffices to reach near-0 training loss suggests the pre-trained model was already quite good for our task, i.e., the fine-tuning tasks are relatively easy to learn compared to pre-training the LLM. We will make this distinction from pre-training clear in our revised draft.
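As an aside, pairwise significance tests of this kind can be run without any statistics library using a permutation test on per-seed accuracies; this is a minimal sketch of one such test, not the exact procedure used for the table above, and the accuracy arrays are made-up placeholders rather than the paper's actual results:

```python
import random

def permutation_p_value(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means between
    two groups of per-seed accuracies. Returns the fraction of random
    relabelings whose mean gap is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(pa) / len(pa) - sum(pb) / len(pb))
        if diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical per-seed accuracies for two methods on one task.
acot = [0.61, 0.58, 0.64, 0.60, 0.59]
erase = [0.71, 0.69, 0.74, 0.70, 0.72]
p = permutation_p_value(acot, erase)
```

With clearly separated groups like these, `p` comes out small; identical groups yield a p-value of 1.0.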
GL-LowPopArt: A Nearly Instance-Wise Minimax-Optimal Estimator for Generalized Low-Rank Trace Regression
Accept (spotlight poster)
Summary: In this work, the authors developed a new estimator for the generalized linear low-rank trace regression problem. The estimator improves existing works by considering instance-dependent information. Additionally, the estimator is nearly minimax optimal locally around the global optimizer. The authors also discussed the application of the proposed estimator to 1-bit matrix completion and bi-linear dueling bandits problems. Please see the following sections for my detailed comments.

Claims And Evidence: Due to the time limit, I did not check the correctness of the theory, except those briefly mentioned in the main paper. The theoretical claims seem correct by checking the main paper.

Methods And Evaluation Criteria: The methods and evaluation criteria make sense to me.

Theoretical Claims: Due to the time limit, I did not check the correctness of the theory. The theoretical claims and the proofs in the main paper seem correct.

Experimental Designs Or Analyses: N/A

Supplementary Material: I did not check the supplementary material due to the time limit.

Relation To Broader Scientific Literature: This paper is related to the topic of ICML conference and should be interesting to audiences from machine learning and optimization fields.

Essential References Not Discussed: It would be better if the authors could briefly discuss the new optimization complexity metric for generalized linear matrix completion under USR:
- Yalçın, Baturalp, et al. "Factorization approach for low-complexity matrix completion problems: Exponential number of spurious solutions and failure of gradient methods." International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
- Zhang, Haixiang, et al. "A new complexity metric for nonconvex rank-one generalized matrix completion." Mathematical Programming 207.1 (2024): 227-268.
Also, it would be better if the authors could briefly discuss or compare to the 1-bit matrix completion bound in:
- Bi, Yingjie, Haixiang Zhang, and Javad Lavaei. "Local and global linear convergence of general low-rank matrix recovery problems." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 9. 2022.

Other Strengths And Weaknesses: Please see my comments in other sections.

Other Comments Or Suggestions:
- In Remark 1, should the bound be $C(\pi_{nuc}^*) \geq C(\pi_{E}) / 2$?
- In Theorem 4.1, could the authors explain the intuition for why a lower bound on $N$ is required?
- Could the authors elaborate a little more on the adaptive estimation procedure mentioned at the end of Section 5.1?
- I appreciate the novelty in the bounds and algorithms proposed in this work. However, I am also curious how the computational cost compares with algorithms that utilize the Burer-Monteiro factorization. When the problem scale ($d_1, d_2$) is large, the factorization-based algorithms may outperform the proposed algorithm in this work.
- It would be better if the authors could also compare the proposed algorithm with existing algorithms, potentially algorithms that depend on the Burer-Monteiro factorization.

Questions For Authors: Please see my comments in other sections.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for providing detailed and insightful reviews, and we are especially encouraged by your overall positive evaluation of our paper. Let us respond to each point that you have raised.

---

**Discussions regarding Burer-Monteiro Factorization (BMF) Approach**

Thank you for pointing us to the references on the BMF approach. Given that one of our main applications is generalized linear matrix completion under USR, we *will include* relevant discussions regarding the BMF literature suggested by the reviewer in the revision. Below, we provide a summary of the key points we intend to address:

**Different Problem Setting:** We note that the setting considered in the BMF literature differs slightly from ours. The reviewer's recommended references all focus on the noiseless matrix recovery problem, where the linear measurement operator $\mathcal{A}$ is deterministic, and the goal is to recover the ground-truth matrix $M^*$ exactly. In contrast, our setting involves noisy matrix recovery, focusing on how quickly the error decays with the sample size rather than exact recovery.

**Optimization vs. Statistical Complexity:** Our methodology relies solely on convex optimization problems, meaning the optimization complexity associated with non-convex approaches like BMF does not apply here. Upon reviewing the suggested literature, we believe directly comparing the optimization complexity metric (OCM) from Yalçin et al. (2022); Zhang et al. (2024) with our "statistical complexity metric" (SCM) in Theorem 4.1, $\lambda_{\max}(H(\pi; \Theta_\star))$, is ambiguous. The OCM quantifies the non-convexity of the BMF landscape relating to the success of local search methods, while SCM is information-theoretic and relates to the sample size required for *any* estimator to obtain a desired accuracy with high probability.

**Comparison with Bi et al. (2022):** As noted earlier, Bi et al.
(2022) focus on exact recovery in the noiseless setting using gradient descent and provide only convergence rate results (Theorems 5 and 6). We could not identify a statistical sample complexity bound for 1-bit matrix completion beyond the experimental results presented.

**Computational Cost:** Our methodology only involves convex optimization subroutines and inverting and computing the SVD of $d^2 \times d^2$ matrices, all of which can be solved using CVXPY and NumPy. While our approach may not scale as efficiently as BMF in high dimensions, its primary contribution lies in the statistical guarantees.

**Empirical comparisons:** See our response to reviewer pkis

---

**C1. Remark 1**

Thank you for pointing out this oversight. Indeed, the bound you provided is correct, and we will correct this in the revision.

**C2. Lower bound on $N$ in Theorem 4.1**

Thank you for highlighting this. Upon careful review of the proof of Theorem 4.1, we recognized that the previously stated requirement on $N$ was merely an artifact of our original proof strategy. We have since refined our proof, resulting in an improved version of Theorem 4.1 that no longer requires this condition:

Let $\mathcal{A} \subseteq B^{d_1 \times d_2}_F(1)$ and $\pi \in \mathcal{P}(\mathcal{A})$. Let $S\_\* > 0, r \geq 1$ such that $\frac{S\_\*^2}{r} \geq \gamma$ for some $\gamma > 0$.
Then, there exist universal constants $C\_1, C\_2 > 0$ and $c \in (0, 1)$ such that for any $\Theta\_\star \in \Theta(r, S\_\*)$ with $\lVert \Theta\_\star \rVert_F^2 \geq \frac{9 \gamma}{8}$, there exists an $\varepsilon = \varepsilon(\Theta\_\star) > 0$ such that the following holds:

$$\inf\_{\widehat{\Theta}} \sup\_{\widetilde{\Theta}\_\star \in \mathcal{N}\_\star} \mathbb{P}\_{\pi, \widetilde{\Theta}\_\star}\left( \left\lVert \widehat{\Theta} - \widetilde{\Theta}\_\star \right\rVert\_F^2 \geq \frac{C\_2 \gamma g(\tau) r (d\_1 \vee d\_2)}{N \lambda\_{\max}(H(\pi; \Theta\_\star)) S\_\*^2} \right) \geq c.$$

We sincerely appreciate the reviewer’s insightful comment, which enabled us to refine our lower bound result.

**C3. Adaptive Estimation at the end of Section 5.1**

Thank you for bringing this ambiguity to our attention. To explicitly clarify, our Stage I estimation procedure (nuclear-norm regularized MLE) assumes knowledge of all GLM parameters except $\Theta_\star$. For example, in the Gaussian noise case, knowledge (or an accurate upper bound) of the true variance is needed, as it informs the choice of regularization parameter in Stage I (Theorem 3.1 and Lemma B.4). When mentioning adaptive estimation at the end of Section 5.1, we intended to suggest alternative approaches that could be employed for Stage I when such knowledge is unavailable or uncertain. For instance, Section 4 of Klopp (2014) addresses adaptive estimation for matrix completion problems with unknown noise variance. We also note that our previous reference to Klopp et al. (2015) was inaccurate, as their method similarly assumes knowledge of the GLM. We will clarify these points in the revision.

---

Rebuttal Comment 1.1:

Comment: I would like to thank the authors for the rebuttal! I am happy to increase the score. I also appreciate the authors for pointing out the difference between their work and the references. I feel that it is not necessary to include these references.
It may still be better to mention the differences/pros/cons compared to BMF-based algorithms.

---

Reply to Comment 1.1.1:

Comment: We are grateful to the reviewer for the enlightening discussion that would further help us position this paper, especially in the context of matrix recovery/completion, and for raising the score. We will make sure to include all the relevant discussions in the revision. Thank you again.
Summary: The authors study the problem of generalized linear low-rank trace regression. They build on the previously established algorithm LowPopArt, which applies to the linear setting. Their main result (Theorem 3.1) provides the tightest known upper bound for recovery in the operator norm, incorporating instance-specific quantities. They also establish a new lower bound and claim that it is the first lower bound in this context to incorporate instance-specific curvature. Finally, they discuss applications of their algorithm to matrix completion with uniformly sampled entries and bilinear dueling bandits.

Claims And Evidence: Yes, I believe all claims are supported by clear evidence.

Methods And Evaluation Criteria: N/A

Theoretical Claims: I did not check the proofs, but the results seem reasonable and correct.

Experimental Designs Or Analyses: There are no experimental results.

Supplementary Material: No, I have not.

Relation To Broader Scientific Literature: As the authors themselves clearly mention, the algorithm is fairly similar to LowPopArt (Jang et al., 2024). I believe the authors discuss the connections to both matrix completion and bandits literature well in the paper.

Essential References Not Discussed: Not aware of any.

Other Strengths And Weaknesses: The paper is very comprehensive—it provides both upper and (almost matching) lower bounds, as well as two interesting applications. It is remarkable that assuming such a general setting can yield state-of-the-art guarantees in simple settings, such as matrix completion under uniform sampling with replacement (USR). I believe Theorem 4.1 is also a very strong result on its own. Although there are multiple assumptions in the paper, they all seem very weak, which makes the results even more impressive. The only weakness I can think of is the first stage, which essentially serves to effectively linearize the problem.
Although it may be crucial for establishing the theoretical results, I would be curious to see if it can be skipped in practice and, if so, at what cost.

Other Comments Or Suggestions: Typos:
- Line 046 (left): "exmaple" → "example"
- Line 075: It would be helpful to define what you mean by curvature the first time you mention it.
- Line 078 (right): Please refer to the definition of Borda regret in (13) when you mention it.
- Line 157 (left): Shouldn't it be $\kappa_\star V(\pi) \lesssim H(\pi; \Theta^\star)$?
- Line 420 (left): "becomes to Bernoulli" → "becomes Bernoulli"

Questions For Authors:
1. In line 080, you write, "The known performance guarantees ... depend on the inverse link function's derivative $\dot{\mu}$..." However, looking at the inequalities around line 370 (right), your bound still depends on $\dot{\mu}$. I assume you meant to write that it depends on $\min_z \dot{\mu}(z)$?
2. In line 100, you say that $H(\pi; \Theta^\star)$ is the Hessian of the negative log-likelihood loss at $\Theta^\star$. Is this evident from its definition in (4)?
3. Regret in Theorem 5.1 scales with $T^{2/3}$ as a consequence of using ETC. Are there any algorithms with better dependence on the horizon, and do you expect your algorithm could be easily applied in those cases?
4. The notation is a bit difficult to follow at times. For instance, your GL$(\pi)$ corresponds to $B(Q(\pi))$ from the LowPopArt paper. There are also other parameters, such as $\kappa$ and $\lambda_\min$. I believe it would be helpful for other researchers to have a section in the appendix where all these parameters are compared and their intuition is explained.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for providing detailed and insightful reviews, and we are especially encouraged by your overall positive evaluation of our paper. Let us respond to each point that you have raised.

**W1. First stage, which essentially serves to linearize the problem effectively. Although it may be crucial for establishing the theoretical results, I would be curious to see if it can be skipped in practice and, if so, at what cost.**

Thank you for your interesting question. Indeed, as the reviewer correctly pointed out, the first stage essentially linearizes the problem by outputting an asymptotically consistent initial estimator $\Theta_0$, which is crucial in obtaining the correct guarantee for the second stage (Catoni estimation). We also show that this is the case empirically, i.e., the first stage is necessary to obtain a good error at the end. We test the effect of the initial (pilot) estimator for Stage II (matrix Catoni) in 1-bit recovery of a symmetric rank-1 matrix with three initial estimators: zero, random, and MLE obtained from $N = 3 \cdot 10^4$ samples. In Fig2.png of this [link](https://anonymous.4open.science/r/GL-LowPopArt-1186/), it can be seen that there is a clear gap between {zero, random} and {MLE}, showing that indeed, skipping Stage I leads to much larger bias in practice as well. Still, one can also observe that the matrix Catoni results in a decaying error up to a certain point. Thank you again for the interesting question, which helps us further emphasize our contribution. In the revision, we will expand upon the experiments, including the one you suggested.

**Typos and Writing Suggestions**

Thank you for pointing out these typos. We will carefully address and correct all typos in the revision. Specifically, regarding line 157, you are correct—it should indeed be written as $\kappa_\star V(\pi) \preceq H(\pi; \Theta_\star)$.

**Q1.
Discrepancy between line 080 and line 370**

Thank you for highlighting this discrepancy. You are correct that both our guarantee and prior guarantees depend on $\dot{\mu}$. The distinction we intended to make in line 080 is that previous results rely on a "worst-case" $\dot{\mu}$, specifically $\min_{|z| \leq \gamma} \dot{\mu}(z)$, whereas our guarantee depends explicitly on the instance-specific quantity $\min_{i,j} \dot{\mu}((\Theta_\star)_{i,j})$. We will clarify this explicitly in line 080 in the revision.

**Q2. Eqn. (4) is the Hessian?**

You are correct; using the relation $\langle X_t, \Theta \rangle = \mathrm{vec}(X_t)^\top \mathrm{vec}(\Theta)$ combined with (1), (4) indeed corresponds (up to a constant factor) to the Hessian of the expected negative log-likelihood loss. We acknowledge that this relationship might not have been immediately clear, so in the revised version, we will explicitly state this connection by clearly defining the negative log-likelihood loss just above (4).

**Q3. Are there any algorithms with $o(T^{2/3})$ Borda regret?**

Thank you for your insightful question. Due to space constraints, we addressed most of the discussions on bilinear dueling bandits in Appendix H of our submission (specifically, see the last paragraph on pg. 48). Among the deferred discussions, we especially highlight the lower bound part. Wu et al. (2024) established a $\Omega(d^{2/3} T^{2/3})$ Borda regret lower bound for generalized linear dueling bandits, attributing the $T^{2/3}$ rate to the inherent difficulty of mixing exploration and exploitation (see the paragraph following their Theorem 4.1). Although Wu et al. (2024)'s setting slightly differs, it is very similar in spirit, and we believe that a $T^{2/3}$ horizon dependence is generally unavoidable for Borda regret. Furthermore, as our estimation guarantee is not sequential (not anytime-valid), we believe ETC is an ideal integration choice for our estimation procedure within the bandit framework.
We will clearly discuss this comparison and rationale in the revised manuscript.

**Q4. Notation table**

Thank you for your valuable suggestion. In the revised version, we will include an Appendix section dedicated to clearly defining and summarizing all notations used throughout the manuscript.
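The worst-case vs. instance-specific curvature distinction drawn in Q1 can be made concrete for the logistic inverse link $\mu(z) = 1/(1+e^{-z})$; the matrix below is an arbitrary illustration (not from the paper), and this is only a sketch of the comparison, not the paper's actual bound computation:

```python
import math

def mu_dot(z):
    """Derivative of the logistic inverse link: mu(z) * (1 - mu(z))."""
    m = 1.0 / (1.0 + math.exp(-z))
    return m * (1.0 - m)

# Worst-case curvature used by prior bounds: min of mu_dot over |z| <= gamma.
# mu_dot is symmetric and decreasing in |z|, so the minimum is at |z| = gamma.
gamma = 3.0
worst_case = mu_dot(gamma)

# Instance-specific curvature: min of mu_dot over the entries of Theta_star.
theta_star = [[0.2, -0.5], [1.0, 0.1]]  # hypothetical ground-truth matrix
instance = min(mu_dot(x) for row in theta_star for x in row)
```

When the entries of `theta_star` sit well inside $[-\gamma, \gamma]$, the instance-specific quantity is strictly larger than the worst-case one, i.e., the instance-dependent guarantee is less pessimistic.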
Summary: The paper introduces GL-LowPopArt, a new estimator for generalized linear low-rank trace regression. It combines nuclear norm regularization with matrix Catoni estimation, achieving tighter error bounds than previous methods. The authors propose a novel experimental design objective, GL$(\pi)$, and establish a local minimax lower bound, showing that the estimator is near-optimal up to the Hessian’s condition number. The estimator can be used within, for example, generalized linear matrix completion, where it adapts to instance-specific curvature.

Claims And Evidence: The authors provide rigorous proofs for its theoretical guarantees, including upper and lower bounds. Comparisons with prior work show clear improvements in error rates and regret bounds. For matrix completion, the method adapts to instance-specific curvature, outperforming approaches that rely on worst-case assumptions.

Methods And Evaluation Criteria: The focus of this work is on theoretical guarantees, which I find acceptable for this kind of work in statistical learning theory. The proposed method addresses handling nonlinearity and curvature adaptivity and evaluating it using error bounds and regret analysis seems to be established in this field.

Theoretical Claims: I did not check any proofs for their correctness.

Experimental Designs Or Analyses: The paper does not have any experiments.

Supplementary Material: I did not review the supplementary material.

Relation To Broader Scientific Literature: GL-LowPopArt extends LowPopArt (Jang et al., 2024) from linear to generalized linear models, requiring new techniques to handle bias from nonlinear inverse link functions. It also improves upon Fan et al. (2019) by relaxing assumptions and obtaining tighter bounds.

--------

Jang, Kyoungseok, Chicheng Zhang, and Kwang-Sung Jun. "Efficient low-rank matrix estimation, experimental design, and arm-set-dependent low-rank bandits." arXiv preprint arXiv:2402.11156 (2024).
Fan, Jianqing, Wenyan Gong, and Ziwei Zhu. "Generalized high-dimensional trace regression via nuclear norm regularization." Journal of Econometrics 212.1 (2019): 177-202.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: The method lacks empirical evaluation on real-world datasets, and there is no complexity analysis or runtime comparison with existing methods. These may limit the practical applicability, but I don't think it is a reason to reject the paper; more like a nice-to-have.

Other Comments Or Suggestions: There are small typos throughout the text, e.g., "exmaple" should be "example" and "enviornmental" should be "environmental" in Section 1.

Questions For Authors: How sensitive is the method to misspecification of the GLM, and could it be extended to handle uncertainty in the GLM itself?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for providing detailed and insightful reviews, and we are especially encouraged by your overall positive evaluation of our paper. Let us respond to each point that you have raised.

**W1. Lack of empirical evaluation**

Thank you for your suggestion. We provide preliminary experimental results in this [link](https://anonymous.4open.science/r/GL-LowPopArt-1186/). We assure the reviewer that we will expand upon the numerical experiments in the revision. We consider 1-bit recovery of a rank-1 symmetric matrix and compare the nuclear norm error of Stage I vs. Stage I+II vs. BMF w.r.t. the number of samples. For fair comparison, we ensured that Stage I+II uses the same number of samples as the others. Detailed implementation can be found in the link. Fig1.png shows the result, where it is clear that our Stage I+II outperforms the other baselines.

**W2. No complexity analysis or runtime comparison with existing methods**

Thank you for raising this important point. As correctly noted by the reviewer, our primary goal has been to obtain tight statistical and sample complexity guarantees, and thus, we did not explicitly provide computational complexity or runtime comparisons. However, we emphasize that our algorithm is computationally tractable since it relies solely on convex optimization subproblems (MLE, optimal experimental design) that are efficiently solvable via available tools such as CVXPY. We also note that prior works focusing primarily on statistical guarantees, such as Jang et al. (2024), similarly do not provide computational complexity comparisons. To rigorously analyze computational complexity, one would need to introduce optimization errors ($\varepsilon$) and examine two key aspects: (1) how these optimization errors impact the statistical guarantees, and (2) the complexity of solving each convex optimization subproblem to $\varepsilon$-accuracy.
Such analysis has been conducted, for instance, in Faury et al. (2022) within the context of logistic bandits. We recognize this as an important avenue for future work and will include relevant discussions in our revised manuscript.

**Typos**

Thank you for pointing them out. We will make sure to fix all typos for the revision.

**Q1. How sensitive is the method to misspecification of the GLM, and could it be extended to handle uncertainty in the GLM itself?**

Thank you for your interesting question. We begin by emphasizing that all the discussions in our paper assume a well-specified GLM, which is a common assumption in the bandit and statistical literature (e.g., Chapter 24.4 of Lattimore & Szepesvari, 2020). Indeed, addressing misspecifications typically requires separate theoretical and algorithmic techniques, as misspecification can introduce challenges such as biased estimates and reduced efficiency. There are established works in the literature (White, 1982; see Fortunati et al. (2017) for a survey) that explore such issues in the context of misspecified models, but handling them is often beyond the scope of our current focus.

With this clarification, we can elaborate on why our current methodology isn’t expected to be robust to gross misspecification, such as a misspecified distribution. It is well known that misspecified MLE estimates converge not to the true $\Theta_\star$, but rather to the KL projection of the misspecified distribution onto the ground-truth distribution (White, 1982). This results in the initialization in Stage I of our GL-LowPopArt (Algorithm 1) potentially being too far from the true $\Theta_\star$, which can lead to a constant, non-vanishing bias during the Catoni estimation in Stage II. However, if the misspecification is "minor," such as an overestimation of variance in the Gaussian case (where the true variance is $\sigma$ but the learner assumes $\bar{\sigma} > \sigma$), our methodology is expected to be somewhat robust.
In such cases, the estimates would still be reasonably close to the true $\Theta_\star$. We also acknowledge that the issue of uncertainty in the GLM is a critical and promising direction for future research. While our current work is not focused on modeling uncertainty in the GLM, such extensions—whether through Bayesian methods or robust statistical techniques (Walker, 2013)—could certainly enhance the generalizability of our approach. We will discuss these possibilities in more detail in the revision. Thank you for your valuable question.

[1] White, H. (1982). Maximum Likelihood Estimation of Misspecified Models. *Econometrica*, 50(1), 1-25.

[2] Fortunati, S., Gini, F., Greco, M. S., and Richmond, C. D. (2017). Performance Bounds for Parameter Estimation under Misspecified Models: Fundamental Findings and Applications. *IEEE Signal Processing Magazine*, 34(6), 142-157.

[3] Walker, S. G. (2013). Bayesian inference with misspecified models. *Journal of Statistical Planning and Inference*, 143(10), 1621-1633.
Griffin: Towards a Graph-Centric Relational Database Foundation Model
Accept (poster)
Summary: This paper proposes Griffin, a unified model for relational databases (RDBs). Griffin has a classification prediction head and a regression prediction head to handle classification and regression tasks, respectively. The classification head can handle arbitrary classes by comparing the inner product between the generated node representation and the text embeddings of the target categories. Components like cross-attention modules and unified task formats have been proposed to make this one model suitable for different scenarios. Griffin follows a pretrain-finetune paradigm. It first pretrains on a set of single tables and RDBs. On a specific task, it is then further finetuned on the provided data. Experiments show that Griffin, especially the pretrained one, performs better than the GraphSAGE baseline.

Claims And Evidence: NA

Methods And Evaluation Criteria:
- The proposed method is properly evaluated on the benchmark datasets.
- The significance of the proposed Griffin needs to be further validated. Standard deviation needs to be included to verify the significance.

Theoretical Claims: NA

Experimental Designs Or Analyses:
- For the single-table tasks, more baselines should be included for a comprehensive comparison, like XGBoost, LightGBM, and TabPFN [1].

[1] TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second. Noah Hollmann, Samuel Müller, Katharina Eggensperger, Frank Hutter

Supplementary Material: NA

Relation To Broader Scientific Literature: NA

Essential References Not Discussed:
Past work: Relational Deep Learning: Graph Representation Learning on Relational Databases. Matthias Fey, Weihua Hu, Kexin Huang, Jan Eric Lenssen, Rishabh Ranjan, Joshua Robinson, Rex Ying, Jiaxuan You, Jure Leskovec
Contemporary work: RelGNN: Composite Message Passing for Relational Deep Learning. Tianlang Chen, Charilaos Kanatsoulis, Jure Leskovec
ContextGNN: Beyond Two-Tower Recommendation Systems.
Yiwen Yuan, Zecheng Zhang, Xinwei He, Akihiro Nitta, Weihua Hu, Dong Wang, Manan Shah, Shenyang Huang, Blaž Stojanovič, Alan Krumholz, Jan Eric Lenssen, Jure Leskovec, Matthias Fey

Other Strengths And Weaknesses:

Strengths:
- The task over RDBs is a practical problem and has a great impact on real-world applications.
- The writing of the paper is clear and easy to understand.
- The tailored design of Griffin makes it able to perform arbitrary classification tasks and regression tasks in a single model.

Weaknesses:
- The experiment part is not comprehensive. There is only GraphSAGE as the baseline.
- Several design choices are not well supported by evidence. For example, what is the advantage of the Cross Attention Module compared to just averaging the embedding? Similarly, during message-passing, why use Max aggregation rather than Mean (Attention) to aggregate messages from different edge types? Why is the encoder/decoder for numeric features trained separately rather than jointly?

Other Comments Or Suggestions: NA

Questions For Authors:
- The encoder/decoder of the numerical features is trained alone, apart from the entire training pipeline. What is the benefit of this? There is no clear reason why it can't be trained end-to-end. To support the claim, an ablation study is needed to validate the design.
- When encoding the target node, how do you generate the embedding of the missing column? I assume there will be a default missing embedding for the missing value.
- What if there are missing values in multiple nodes that are not the target node?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
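The review's summary describes Griffin's classification head as scoring each candidate class by the inner product between the generated node representation and that class's text embedding. A minimal sketch of that scoring rule follows; the embeddings and labels are toy placeholders, not the model's actual vectors:

```python
def classify(node_repr, class_embeddings):
    """Score each candidate class by the inner product between the node
    representation and that class's text embedding, then take the argmax.
    Because classes are just embedding vectors, the number of classes
    can vary per task without changing the head."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    scores = {label: dot(node_repr, emb) for label, emb in class_embeddings.items()}
    return max(scores, key=scores.get), scores

# Toy 3-dim "text embeddings" for two target categories.
class_embs = {"churn": [0.9, 0.1, -0.2], "retain": [-0.3, 0.8, 0.5]}
label, scores = classify([1.0, 0.2, 0.0], class_embs)
```

This is why a single model can serve tasks with arbitrary label sets: a new task only supplies new class-name embeddings, not a new output layer.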
Rebuttal 1:

Rebuttal: We thank Reviewer 95m6 for acknowledging the contributions of our work and for the helpful suggestions and questions. We address each concern below. The updated experiments, including 7 additional baselines for the main experiment and 2 additional baselines in the few-shot setting, are provided at https://anonymous.4open.science/r/Griffin-Rebuttal

---

**Related Work**

Thank you for the recommended references. In response to your suggestions, along with those from other reviewers, we have revised the related work section to include a broader range of relevant literature. We have added new subsections covering embedding-based RDB models and Table QA approaches, as well as additional works on RDB-specific models and tabular foundation models. To better reflect the landscape and clarify our positioning, we now organize the related work into the following categories:

• Single-table and RDB models for tabular predictive tasks
• Table QA and SQL generation methods
• Tabular and graph foundation models, including both LLM-based models and those trained from scratch

---

**W1: Experimental Coverage and Baselines**

Thank you for your concern regarding the experimental baselines. We have significantly extended our experiments by including the following additional baseline categories:

1. **Other GNN-based models** for RDBs, including **GAT**, **PNA**, and **HGT**.
2. **Widely used single-table methods**, such as **XGBoost**, **MLP**, **DeepFM**, and **Feature Transformer**, applied to RDBs via **Deep Feature Synthesis (DFS)** to flatten the schema.
3. **Tabular foundation models**, including **TabPFN** and **TabLLM**, also used with DFS.

These baselines are now incorporated into the experimental section, reinforcing its effectiveness across RDB-related tasks.

---

**W2 & Q1: Design Choices and Ablation Studies**

Thank you for raising questions about the justification of specific design choices.
We have now addressed these through targeted ablation studies and reorganized the section to improve readability. Specifically: • **Cross-Attention vs. Mean Pooling**: We compare our cross-attention mechanism with simple mean-pooling. The results show improvements, supporting the benefit of selective attention over equal-weighted aggregation. • **Max vs. Mean Aggregation**: We evaluate different aggregation methods over heterogeneous relations. Max aggregation helps highlight the most informative relation types, and the ablation supports this design choice. • **Numeric Encoder/Decoder Pretraining**: Our design choice is based on the goal of enabling transferability across tasks. To ensure that the model outputs remain in a consistent numerical space, we use a fixed numeric encoder and decoder during training. Since our primary focus is not on improving single-task performance but rather on transferability, we did not include ablation studies on single-task performance for this component. That said, we recognize the importance of this question and are planning a more comprehensive ablation study in future work—starting from re-pretraining the model to investigate the impact of this design on cross-task generalization. --- **Q2 & Q3: Handling Missing Values** We appreciate the questions on how missing values are handled, both in the target node and in neighboring nodes: • **For the target node’s missing column (label)**: We mask it using a **zero placeholder** during pretraining. • **For other missing values**: • **Categorical/Text features** are replaced with a "None" token. • **Numeric features** are replaced with the **column-wise mean**. This strategy generalizes to cases where **multiple features or nodes have missing values**, and aligns with common practices in both tabular learning and graph-based models. We have clarified this in the methodology section. --- Rebuttal Comment 1.1: Comment: Thank the authors for addressing my questions. 
The additional results at the rebuttal stage should be included in the final revision. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our work! We’ve updated the key experimental figures and will further refine them in the final version.
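The missing-value strategy described in this thread (a "None" token for categorical/text features, the column-wise mean for numeric features) can be sketched in pandas as follows; the table and column names are illustrative assumptions, not taken from the paper's code.

```python
import pandas as pd

def impute_rdb_table(df: pd.DataFrame) -> pd.DataFrame:
    """Fill missing values: 'None' token for non-numeric columns,
    the column-wise mean for numeric columns."""
    out = df.copy()
    for col in out.columns:
        if pd.api.types.is_numeric_dtype(out[col]):
            out[col] = out[col].fillna(out[col].mean())
        else:
            out[col] = out[col].fillna("None")
    return out

# Illustrative table with missing entries in both column types.
table = pd.DataFrame({
    "price": [10.0, None, 30.0],
    "category": ["book", None, "toy"],
})
filled = impute_rdb_table(table)
```

The same per-column rule applies whether the gaps sit in the target node's features or in neighboring nodes, which is why the strategy generalizes to multiple missing values at once.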
Summary: This paper proposes Griffin, a foundation model specifically designed for Relational Databases (RDBs), which leverages graph neural networks (GNNs) to unify the processing of diverse RDB tasks. Experiments demonstrate that Griffin exhibits superior or comparable performance across multiple benchmarks, especially in low-data scenarios. Claims And Evidence: Partially. The authors do not provide a clear description of the challenges in developing the foundation models for RDBs. Methods And Evaluation Criteria: Partially. The authors highlight that Griffin is capable of handling temporal heterogeneous graphs, a feature that distinguishes it from traditional Message-Passing Neural Networks (MPNNs), as the latter does not inherently support temporal data processing. Theoretical Claims: N/A Experimental Designs Or Analyses: While the experiments comprehensively evaluate Griffin's performance across diverse tasks and datasets, the study primarily focuses on validating the proposed model itself. Notably, the absence of comparative analyses with existing models or methods limits the understanding of how Griffin's capabilities measure against alternative approaches in solving RDB-related problems. Supplementary Material: Experimental details and additional experimental results. Relation To Broader Scientific Literature: Griffin is built upon Graph Foundation Models (GFMs) and Tabular Foundation Models (TFMs) and extends them to RDB-related problems. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper presents a foundation model specifically designed for RDB-related problems, effectively addressing a significant gap in the field. 2. Comprehensive experimental results demonstrate the model's robust and comparable performance across multiple RDB-related tasks, showcasing its versatility. Weakness: 1. The paper's organization and presentation could be improved for better clarity.
For example, Section 1 Paragraph 4 introduces the first challenge, while the subsequent paragraph (Paragraph 5) shifts to discussing Griffin's advantages, creating a disjointed flow that may confuse readers. 2. The paper lacks a clear justification for the necessity of developing a specialized foundation model for RDB-related tasks. It does not adequately address whether existing frameworks, such as those based on large language models, could achieve similar or better results. 3. The experimental evaluation is insufficient. The paper fails to compare Griffin's performance against the state-of-the-art (SOTA) methods for RDB-related tasks, relying instead on a single baseline (Sage) that does not represent SOTA. Additionally, the authors do not provide ablation studies or analyses to demonstrate the contribution and necessity of each component in the proposed model framework. Other Comments Or Suggestions: In Section 1, Paragraph 2, the connection between graph-based methods and RDB problems is not clearly articulated. Additionally, the visualization of main results in Figures 2 and 3 needs refinement. The current vertical labeling of the x-axis significantly hinders readability, making it challenging to interpret the data effectively. A more user-friendly design, such as horizontal x-axis labels, would greatly enhance the clarity and accessibility of these critical findings. Questions For Authors: 1. The authors should address the necessity of developing a foundation model specifically for RDB-related problems. It remains unclear whether existing models or frameworks could effectively solve these tasks. To strengthen the significance of this work, the authors should provide comparative results against commonly used frameworks, as the absence of such comparisons undermines the motivation and relevance of the proposed approach. 2. The relationship between Griffin and temporal data requires further clarification. 
While the paper highlights Griffin's capability to handle heterogeneous RDB data, it lacks a detailed explanation of how the model processes temporal data. This omission creates ambiguity regarding the model's applicability to time-dependent RDB tasks, which is a critical aspect of real-world database systems. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer XrAk for the thoughtful and constructive feedback. We address your concerns in detail below. The updated experiments, including 7 additional baselines for the main experiment and 2 additional baselines for the few-shot setting, are provided at https://anonymous.4open.science/r/Griffin-Rebuttal --- **Claim 1: Clarifying the Challenges in Developing Foundation Models for RDBs** Thank you for pointing this out. We agree that the challenges could be stated more clearly. We now explicitly outline the key challenges in developing a foundation model for relational databases: 1. **Data and task diversity**: RDBs contain various data types (numerical, categorical, textual) and support diverse tasks, including regression and multi-class classification. A unified model must handle all these variations under a single framework. 2. **Lack of RDB-specific GNNs**: Existing GNN architectures are not tailored for relational databases and often fail to leverage the rich metadata (e.g., table schema, column descriptions, task semantics) that is critical in RDBs. 3. **Missing pretraining pipeline and transferability analysis**: It is not yet clear how pretraining can benefit relational data, nor what kinds of tasks or data lead to effective transferability across RDBs. These challenges directly motivate the three components of Griffin: unified encoder/decoder design, GNN architecture enhancements, and a new pretraining pipeline. We have now added a dedicated subsection to the paper to clearly articulate these challenges and align them with our proposed solutions. --- **Experiment: Including More Baselines & W3: Ablations and Experimental Focus** Thank you for raising this point. While our primary goal is to present a unified foundation model framework for RDBs, we agree that including broader baselines and detailed ablation studies is essential for a comprehensive evaluation.
To that end, we have expanded our experimental comparisons to include three additional categories of baselines: • **Three additional GNN-based baselines**: GAT, PNA, and HGT, as suggested in 4DBInfer. • **Four single-table baselines**: MLP, DeepFM, Feature Transformer, and XGBoost, along with **Deep Feature Synthesis (DFS)**—a strong feature synthesis method that converts multi-table data into a single table by computing meaningful feature combinations. • **Two single-table baselines for few-shot settings**: We include TabPFN and TabLLM as representative few-shot baselines, both of which leverage pretrained models. TabPFN is trained from scratch, while TabLLM uses LLM-based pretraining. We have also reorganized the experimental section to make the ablation studies more prominent. These ablations highlight the contributions of key architectural choices—specifically, the **cross-attention mechanism** and **hierarchical aggregation**—to the overall performance of the model. --- **W1: Organization and Presentation** Thanks for your suggestions. We have revised the introduction to better connect the discussion of challenges with our method overview. --- **W2 & Q1: Justification for RDB-Specific Foundation Models** We appreciate this important question. While single-table models can be combined with DFS to simulate multi-table input, they inherently lack the ability to capture the complex relational structure that is central to RDBs. This limitation has been highlighted in prior benchmarks such as 4DBInfer, where structure-aware models (e.g., GNN-based) consistently outperform single-table methods with DFS. To further support this, we have added experimental comparisons with strong single-table baselines, including XGBoost, DeepFM, TabPFN, and TabLLM (all paired with DFS). The results reinforce the need for RDB-specific model designs like Griffin. --- **Comments and Q2: Handling Temporal Data** Thank you for this valuable observation. 
Our approach to handling temporal information follows the setup used in 4DBInfer, where subgraphs are constructed such that all neighboring nodes have timestamps earlier than the target node. This ensures that the model only uses past information, aligning with the temporal nature of real-world relational data. To improve clarity, we have **expanded the explanation** in the preliminaries and **reorganized the method section** to better highlight how temporal constraints are applied during subgraph construction. Additionally, we have revised the figures and included more baselines to improve presentation. We appreciate your feedback on presentation—it has significantly improved the clarity and accessibility of our results.
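The temporal constraint described above (every neighbor in a sampled subgraph must carry a timestamp strictly earlier than the target node's) amounts to a filter during neighbor sampling. A minimal sketch, with illustrative node records rather than the actual 4DBInfer data structures:

```python
def temporal_neighbors(target_time, neighbors):
    """Keep only neighbors observed strictly before the target node's
    timestamp, so message passing never leaks future information."""
    return [n for n in neighbors if n["time"] < target_time]

# Illustrative candidate neighborhood: only the first two rows predate the target.
cands = [
    {"id": "a", "time": 1},
    {"id": "b", "time": 2},
    {"id": "c", "time": 5},
]
past = temporal_neighbors(3, cands)
```

Applying this filter at every hop of subgraph construction is what keeps the model's inputs causally valid for time-dependent RDB tasks.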
Summary: The paper introduces Griffin, the first graph-centric foundation model designed specifically for relational databases. Griffin combines advanced architectural innovations such as unified encoders for categorical and numerical features, cross-attention modules for selective information aggregation, and enhanced hierarchical message-passing neural networks. Pretrained on extensive single-table and multi-table datasets, Griffin demonstrates superior or comparable performance to task-specific models, excels particularly in low-data scenarios, and achieves strong transferability across various RDB tasks. ## No score updates after rebuttal The comparison with single-table methods using single table input is critical from my point of view, but the rebuttal has not provided statistical results. Claims And Evidence: The claims presented in the paper are supported by extensive experimental evidence, clearly demonstrating the model's efficacy in handling relational databases. Griffin’s generalization capabilities are well-substantiated through comprehensive experiments involving benchmarks such as 4DBInfer and RelBench. However, an area that warrants further exploration is whether the method effectively generalizes to single-table datasets, especially compared to established single-table baselines. Methods And Evaluation Criteria: The proposed methods, including unified feature encoding, cross-attention mechanisms, and hierarchical message-passing neural networks, are highly relevant and thoughtfully designed for the targeted relational database applications. The evaluation criteria using existing graph-centric benchmarks (4DBInfer and RelBench) are appropriate and rigorous, making sense for the context of relational data. Theoretical Claims: The paper does not primarily focus on theoretical claims or proofs; therefore, no correctness of proofs needed verification. Experimental Designs Or Analyses: The experimental designs and analyses appear sound and thorough. 
The paper includes detailed comparisons across multiple metrics (Accuracy, ROC-AUC, RMSE, MAE, and Logloss). One limitation, however, is the restricted baseline comparison primarily against SAGE. Including comparisons with additional baseline methods tailored for single-table data would strengthen the analysis. Supplementary Material: I reviewed the supplementary materials, including additional ablation studies and extended experiments on supervised fine-tuning (SFT) strategies. The code is provided. Relation To Broader Scientific Literature: The integration of advanced encoding and decoding strategies from related literature on tabular and graph data enriches the paper’s contributions. Essential References Not Discussed: - Other Strengths And Weaknesses: Strengths: Methodological innovations for relational database-specific foundation model. Extensive empirical analysis, including comprehensive ablation studies and transferability experiments. Weaknesses: Potential for further clarification on generalizability specifically in single-table scenarios. Other Comments Or Suggestions: - Questions For Authors: How does Griffin perform specifically in single-table settings compared to state-of-the-art single-table models (e.g., XGBoost, TabNet, TabLLM, TableGPT)? Clarifying this would enhance understanding of its generalizability. What is Griffin’s performance trend with varying numbers of tables in relational datasets? Is there a threshold where Griffin clearly outperforms simpler methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer yHkP for acknowledging the contributions of our work and for the constructive questions and suggestions. The updated experiments, including 7 additional baselines for the main experiment and 2 additional baselines for the few-shot setting, are provided at https://anonymous.4open.science/r/Griffin-Rebuttal. We address the key concerns below: --- **Q1: Performance Comparison with Single-Table Baselines** Griffin is primarily designed for RDB tasks and is not optimized to outperform state-of-the-art models on purely single-table datasets. However, to better understand its generalizability—and in response to suggestions from other reviewers—we conducted two additional sets of experiments: 1. **DFS + Single-Table Models on RDB Tasks**: We employed **Deep Feature Synthesis (DFS)** [1], which converts multi-table RDBs into single-table formats by aggregating features. This allows single-table models to be applied to RDB tasks. Building on pipelines from 4DBInfer, we evaluated advanced single-table baselines (including MLP, DeepFM, Feature Transformer, and XGBoost) on RelBench using DFS. Results indicate that even strong single-table models struggle to match Griffin’s performance, particularly on tasks where the multi-table structure plays a critical role. 2. **Few-Shot Settings with Pretrained Single-Table Models**: To evaluate performance in few-shot scenarios, we introduced two representative baselines: **TabPFN** (trained from scratch) and **TabLLM** (leveraging LLM-based pretraining). Both are designed for few-shot tabular tasks. Griffin still achieves higher average performance under this setting, demonstrating its generalization ability and potential as a foundation model for RDBs. [1] Deep Feature Synthesis: Towards Automating Data Science Endeavors https://groups.csail.mit.edu/EVO-DesignOpt/groupWebSite/uploads/Site/DSAA_DSM_2015.pdf --- **Q2: Performance Trends with Varying Numbers of Tables** Thank you for the insightful question.
Griffin’s framework integrates innovations across encoder/decoder design, GNN architecture, and pretraining strategies—each contributing to improved performance under different structural complexities. Our ablation studies isolate the effect of these components, including cross-attention mechanisms and hierarchical aggregation functions. Regarding the trend with varying numbers of tables, we do not yet have a fully consistent explanation for all scenarios where Griffin outperforms simpler models. One intuitive observation is that Griffin performs especially well in tasks that involve **high-quality textual features**—for example, predicting review ratings in e-commerce datasets where review content is informative. That said, there are still cases where Griffin and simpler models (e.g., SAGE) exhibit unexpected performance patterns. Interestingly, even more expressive GNN models such as GAT, PNA, and HGT occasionally perform worse than SAGE. We believe this may stem from data-specific factors, including noisy features, varying relation types, or mismatched inductive biases—cases where simpler architectures might generalize better. We acknowledge this as an open research question and welcome further discussion on better understanding the relationship between RDB schema complexity and model performance.
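The hierarchical aggregation discussed in this rebuttal (aggregate messages within each relation type first, then take an element-wise max across relation types so the most informative relation dominates) can be sketched with NumPy; the relation names and message tensors below are illustrative, not from the paper's implementation.

```python
import numpy as np

def hierarchical_aggregate(messages_by_relation):
    """messages_by_relation: dict mapping relation type -> (num_neighbors, dim)
    array. Mean-pool within each relation, then element-wise max across types."""
    per_relation = [m.mean(axis=0) for m in messages_by_relation.values()]
    return np.max(np.stack(per_relation), axis=0)

# Two relation types sending 2-d messages to the same target node.
msgs = {
    "user->review": np.array([[1.0, 0.0], [3.0, 2.0]]),  # mean -> [2.0, 1.0]
    "item->review": np.array([[0.0, 4.0]]),              # mean -> [0.0, 4.0]
}
agg = hierarchical_aggregate(msgs)
```

The max across relation types lets each embedding dimension be driven by whichever relation is most informative for it, rather than averaging strong and weak signals together.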
Summary: The paper proposes Griffin, a pretrained model for relational databases. Griffin combines a unified representation of inputs and tasks, cross-attention mechanisms, graph neural networks (MPNNs), and the pretraining tasks of cell completion and supervised learning. The proposed framework was tested on several relational database datasets and shows competitiveness compared to baselines. Claims And Evidence: Please refer to the comments or suggestions. Methods And Evaluation Criteria: Please refer to the comments or suggestions. Theoretical Claims: Please refer to the comments or suggestions. Experimental Designs Or Analyses: Please refer to the comments or suggestions. Supplementary Material: The supplementary material contains source code, which I did not rigorously review. Relation To Broader Scientific Literature: Relational tabular data are widespread, and some parts of the proposed methods can be extended to domain-specific problems. Moreover, some techniques (e.g., cell completion) can be extended and used for pretrained neural network models for tabular data in general. Essential References Not Discussed: - TabPFN models in foundation models for tabular data. - Some works on relational data (rdf2vec, EmbDI). Other Strengths And Weaknesses: Please refer to the comments or suggestions. Other Comments Or Suggestions: - What does Griffin stand for? - I am uncertain whether Griffin can be considered a "foundation model", as it is trained on specific sets of data. In this sense, the word "towards" in the title seems more acceptable to me, and the claim (in the introduction) that Griffin is a foundation model designed for RDBs seems debatable. - The sentence "curated a diverse and extensive collection of datasets for both single-table and RDB tasks" seems obscure. What are some of the good efforts that have been put into curating these data? - Throughout the paper, the paper uses many adverbs or adjectives that are unclear.
For instance, in Figure 1, what does "seamless" mean? There are several cases in the paper that use such words without much justification. - The term (or the title) "Task Unification" (3.1) might be misleading. The term task could simply denote regression or classification, while the subsection also deals with unification of the input representation. A clarification would help the understanding. - I am a bit confused by the part "Categorical and Textual Feature" with the paragraph starting with "For classification tasks...". What is this supposed to describe? - For numerical features, have the authors considered numerical embeddings as in RealMLP or follow-ups of Gorishniy et al., 2021? - What is the structure of ENC (the numerical encoder) in equation 2? - How does Griffin mask for the completion task? Does it have a special embedding for this mask? - What is the number of epochs (or number of steps) for the pretraining (especially for the mask completion)? - Since there is no mention of the number of epochs (or steps), except for employing early stopping at 10, I worry about overfitting in the mask-completion pretraining, since it is based on similarity of vectors (eq. 6) and there seems to be no regularization on this (as far as the current content of the paper shows). - It would be good to see a comparison with simple baselines, such as DFS + (GBDT or linear models). - (Paragraph Message-Passing Neural Network) What defines a more stable model? Or rather, what is a less stable model? - From my understanding, I think there is no pretraining involved. If this is the case, it might be good to state this (hinting that it is to test the architectural choices of Griffin). - In Figure 2, it would be really helpful to have an arrow next to the metrics to indicate the direction of better performance. Also, the terms Griffin-Pretrain and Griffin are confusing, since I feel Griffin itself is a framework that includes the pretraining. Possibly, "without pretrain" would be better for understanding.
Questions For Authors: Please refer to the comments or suggestions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer w5kU for the careful reading and detailed feedback. We address the concerns and questions below. The updated experiments, including 7 additional baselines for the main experiment and 2 additional baselines for the few-shot setting, are provided at https://anonymous.4open.science/r/Griffin-Rebuttal --- **Related Work** Thank you for the suggestions. We have included those references in the related work and in our comparisons. The updated section can be found in our reply to Reviewer 95m6. --- **C1: What does Griffin stand for?** Griffin stands for a **G**raph-centric Relat**I**onal dataset **F**oundat**I**o**N** model. --- **C2: Clarifying “Foundation Model” Usage** We appreciate your point. While Griffin is trained on specific datasets, we consider it to follow the general trajectory of foundation models [1]—namely, training on broad data for wide applicability. That said, we agree the claim should be stated with more care. We now consistently refer to Griffin as “**towards** a foundation model” or as “an **attempt** at a foundation model”. All inconsistent uses have been revised accordingly. [1] On the Opportunities and Risks of Foundation Models --- **C3: Clarifying Data Curation Efforts** Thank you for raising this. The data curation involved collecting, filtering, and improving dataset quality. We started with public datasets from TPBERTa, TabLLM, UniTabE, TableGPT, TaBERT, TabNet, CARTE, and others from HuggingFace. However, many were unsuitable—some tables had very few rows, others were derived from RDBs but had lost context when flattened, and some were noisy or irrelevant (e.g., meaningless string columns without clear semantics). We ultimately curated ~200 datasets with over 10M rows. These were selected for having rich metadata and column features that were not heavily preprocessed (e.g., not just hashed values). We also generated natural language task descriptions using GPT-4 to improve the training signals for tasks.
--- **C4 & C13: Ambiguous Adjectives** We appreciate this feedback and have revised the paper to remove or clarify ambiguous adjectives. Specifically: • The word **“seamless”** was used to describe the unified modeling of both single-table data and RDBs as graphs using a consistent node structure. • The term **“stable”** referred to the aggregation mechanism in our heterogeneous GNN, where neighbors are first aggregated within each relation type, followed by max aggregation across types to focus on the most relevant relations. Ablations also verify its effectiveness. In both cases, we have now replaced the adjective with a direct explanation of the method. --- **C5 & C6: Clarifying Section 3.1 and Decoder Design** We revised Section 3.1 to clarify the notion of **“Task Unification”**. Previously, it referred not only to regression/classification unification but also to input unification. We now split the subsection into two subparts: one describing encoder design and one describing decoder design. For classification tasks, the decoder retrieves the most similar label embedding. These embeddings are aligned with both category and textual feature encodings. For regression tasks, a numerical decoder predicts normalized values. --- **C7 & C8: Numerical Feature Encoder** The ENC module is a 3-layer MLP with SiLU activations (replacing ReLU) and layer normalization. Our current pretraining task for ENC and DEC focuses on accurate numerical recovery, for which this design is sufficient. We have included the suggested paper in the related work and leave further exploration of numerical embeddings to future work. --- **C9: Masking Strategy for Completion Task** We use a simple zero-filling strategy, consistent with the settings used in 4DBInfer and RelBench. Zero is also used as the placeholder during prediction. --- **C10–C11 & C14–C17: Pretraining Details** Thank you for your careful reading.
We now provide a detailed breakdown of the training regimes used in our experiments:

| **Method** | **Completion-pretrain-single** | **Joint-SFT-single** | **Joint-SFT-RDB** | **Finetune** |
| --- | --- | --- | --- | --- |
| **Data volume** | ~10M rows | ~1M rows | ~150M total (domain-specific subset) | Task-specific |
| **Training steps** | ~12k (batch size 4096) | ~6k (batch size 4096) | Domain-dependent | Task-dependent |
| **Regularization** | L2 (9e-3) | L2 (2e-5) | L2 (2e-4) | L2 (2e-4) |

To improve clarity in the presentation of our experiments, we have renamed the models as follows: • **Griffin-unpretrained**: trained from scratch, no pretraining • **Griffin-pretrained**: pretrained on single-table data (completion + joint SFT) • **Griffin-RDB-SFT**: further pretrained with RDB-based joint SFT --- **C12: DFS + X Baselines** We have added these baselines in our updated experiments. A summary of the updated experimental setup is provided in our response to Reviewer zrC4 under Evaluation Concerns.
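The ENC module described in this rebuttal (a 3-layer MLP with SiLU activations and layer normalization that maps a numeric cell to an embedding) can be sketched with NumPy. The layer sizes, random weights, and the exact activation/normalization ordering below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def silu(x):
    return x / (1.0 + np.exp(-x))  # x * sigmoid(x)

def layer_norm(x, eps=1e-5):
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def numeric_encoder(value, weights, biases):
    """Map a scalar numeric cell to an embedding via a 3-layer MLP with
    SiLU activations and layer normalization between hidden layers."""
    h = np.array([value], dtype=float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = W @ h + b
        if i < len(weights) - 1:  # no activation/norm after the output layer
            h = layer_norm(silu(h))
    return h

dims = [1, 8, 8, 16]  # scalar in, 16-d embedding out (illustrative sizes)
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(3)]
bs = [np.zeros(dims[i + 1]) for i in range(3)]
emb = numeric_encoder(0.5, Ws, bs)
```

A matching numeric decoder would run the reverse mapping, keeping model outputs in a consistent normalized numerical space as the rebuttal describes.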
Summary: This paper introduces Griffin, which is claimed to be a novel foundation model specifically designed for RDBs. Griffin aims to unify different tasks, from single-table to multi-table RDBs. To do that, Griffin is pretrained by sampling subgraphs from RDBs and uses a unified encoder/decoder to generate unified embeddings for different tasks. Experiments show that Griffin has good performance across different tasks. Claims And Evidence: My major concerns with this paper are (1) the novelty of the proposed approach and (2) the insufficient discussion and comparison with other related works. For the novelty, the major contribution claimed was the introduction of a unified task decoder that eliminates the need for different prediction heads for different tasks. However, this idea seems not very novel or even needed. Instead of using prediction heads for different tasks, Griffin still needs an MLP to be trained by sampling "x". It seems like simply merging all the different prediction heads into one head. I'd suggest the authors clarify the novelty of this paper. For the discussion of related works, see section "Methods And Evaluation Criteria". Methods And Evaluation Criteria: In the experiment section, the authors focus on the comparison with only one baseline approach, SAGE. I understand that the authors may want to do a "fair" comparison among different methods. However, it is still very necessary to include other recent baselines in the evaluation, such as the ones reported in 4DBInfer or whichever performs well on the reported benchmarks (Figure 2). The authors mentioned that they did some modification due to the normalization etc. IMO, this should not be a reason not to include more baseline results. From Appendix B.2, the results of Griffin can be transformed back, so I did not see a challenge there. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: See "Methods And Evaluation Criteria".
Supplementary Material: Seems to be enough to reproduce the results. Relation To Broader Scientific Literature: This paper introduces some new ideas to the area, but as I mentioned before, I think the novelty of this paper is limited. Essential References Not Discussed: There are multiple methods not mentioned in this paper which are very relevant to this domain. For example, this paper is also similar to those that discuss embeddings over relational tables (e.g., https://arxiv.org/pdf/1909.01120). Some other LLM methods for tabular questions are also worth discussing: e.g., https://arxiv.org/abs/2004.02349 https://arxiv.org/pdf/2107.07653 https://arxiv.org/pdf/2207.03637 Other Strengths And Weaknesses: W1: I think some parts of this paper are not well-written and handwavy. E.g., for Appendix B.1, I think the authors want to discuss the experimental setup differences between the methods in 4DBInfer/RelBench and this paper, but it is confusing to frame this as a comparison between a benchmark and a model (Griffin). E.g., Figure 1 shows a subgraph sampling phase, but it seems this phase is not described in the methodology section. Figure 2 also shows a very good running example; I'd suggest the authors use this example in Section 3 while explaining the workflow. W2: Lacks explanation of the results. In Figure 2, it is interesting to see that on the task "Retailrocket/cvr", Griffin is significantly better than SAGE, but for other benchmarks the gap is much smaller, and on some benchmarks Griffin is even worse. These results are interesting and need some explanation. Other Comments Or Suggestions: See above Questions For Authors: Could you please clarify the concerns that I have in sections "Claims And Evidence" and "Methods And Evaluation Criteria"? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank Reviewer zrC4 for the detailed and constructive feedback. Below, we address each of the main concerns. The updated experiments, including 7 additional baselines for the main experiment and 2 additional baselines for the few-shot setting, are provided at https://anonymous.4open.science/r/Griffin-Rebuttal --- **Claim 1: Novelty of the Proposed Approach** We appreciate the reviewer’s concern about the novelty of our approach and would like to clarify our contributions. Our goal is to take a step toward building a foundation model for RDBs. We focus on three key challenges: 1. How to represent different RDBs in a unified model? 2. How to design a GNN architecture that works effectively for RDBs? 3. How to leverage abundant data, and when does it help? To address these challenges, we propose **Griffin**, a unified framework with three main components: (1) encoder/decoder design, (2) GNN architecture updates, and (3) a pretraining pipeline. Our experiments then show consistent improvement through three stages: 1. Our base model (even without pretraining) outperforms prior models (initially compared only with SAGE; we have now added 7 more baselines). 2. Pretraining on single-table data further boosts performance. 3. Additional SFT with similar or diverse RDBs enhances performance, especially in low-resource settings. The strength of **Griffin** lies in the integration of these three components under a unified framework aimed at RDB foundation modeling. Each component plays a distinct and necessary role: enabling diverse data usage, improving learning capacity, and supporting practical deployment. Our experiments confirm that each part contributes meaningfully to both performance and transferability. --- **Claim 2: Discussion of Related Work** We thank the reviewer for highlighting missing related work. We have now revised the related work section to include a wider range of relevant literature. The updated section can be found in our reply to Reviewer 95m6.
--- **Evaluation Concern: Missing Baselines and Alignment with RelBench** Thank you for raising this important point. While we initially prioritized “fair” comparisons, we agree that broader baseline coverage is necessary for a more complete evaluation. In response, we have expanded our baseline set to include: • **Three additional GNN-based baselines**: GAT, PNA, and HGT, as suggested in 4DBInfer. • **Four single-table baselines**: MLP, DeepFM, Feature Transformer, and XGBoost, along with **Deep Feature Synthesis (DFS)**—a strong feature synthesis method that converts multi-table data into a single table by computing meaningful feature combinations. • **Two single-table baselines for few-shot settings**: We include TabPFN and TabLLM as representative few-shot baselines, both of which leverage pretrained models. TabPFN is trained from scratch, while TabLLM uses LLM-based pretraining. Although DFS can be computationally intensive (taking up to 7 hours for optimized pipelines in 4DBInfer and even longer using the original Featuretools implementation), making it less suitable for low-resource few-shot settings, we still include these baselines as a reference. For evaluation, we continue to present normalized scores in the main figures to support comparison across tasks with varying scales. To ensure transparency and enable alignment with RelBench, we also provide unnormalized results, which are consistent with the normalized scores; both suggest that Griffin outperforms SAGE. --- **W1: Method Description and Figures** We thank the reviewer for this helpful feedback. We have made several improvements, including clarified terminology and an improved explanation of subgraph sampling and the method pipeline. --- **W2: Explanation of Results** Thank you for the suggestion to provide more explanation of the results. In general, Griffin performs better when the task benefits from rich table metadata and high-quality text feature embeddings.
However, there are still cases where it is difficult to consistently explain why Griffin outperforms SAGE, or vice versa. This challenge also applies to other strong GNN baselines such as GAT, PNA, and HGT. Despite being more expressive in theory, these models sometimes perform significantly worse than SAGE. We believe this may be due to the complexity and variability in data distributions, where simpler models like SAGE may align better with the task’s inductive bias in certain cases. We acknowledge this as an open question and an area for future work. We welcome further insights and suggestions on how to better understand and interpret these variations. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. I've read them and I'll update my score if necessary. --- Reply to Comment 1.1.1: Comment: Thank you for your reply! Your comments are very valuable to us and have helped improve the quality of our work. We would be happy if our responses addressed your concerns and if you would consider raising your score. The experiments and revisions based on your feedback, as well as suggestions from other reviewers, will be carefully included in the final version of the paper.
Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks
Accept (poster)
Summary: Achieving collaborative fairness in federated learning involves contribution assessment and reward allocation mechanisms. This paper proposes a new reward mechanism that leverages slimmable neural networks (a client with a lower contribution gets a neural network of smaller width and lower accuracy). Aequa can determine reward values for post-training distribution of model rewards and can also be adapted to provide training-time model rewards. ## update after rebuttal The authors have addressed my concerns and I raised my score during the rebuttal process. Claims And Evidence: Mostly; the objectives of an ideal allocation algorithm (in Sec 4.2) should be better justified. For example, reducing the variability may actually help free riders. Equation 3 is only one way to achieve these objectives. There are other concepts that consider both fairness and efficiency, such as Nash social welfare (which considers the product of utilities) and lp-norms. Methods And Evaluation Criteria: Mostly; I am unsure about post-training model rewards, as each client would already have received the best possible full model during training based on Algorithm 1 (without TEEs). Theoretical Claims: I only checked the correctness of Sec 5.2. Experimental Designs Or Analyses: I checked the soundness of the experimental designs and analyses. * It is better to change only one factor at a time, e.g., replace zeroing out gradients with slimmable neural networks and see whether the trend in accuracy improves. Fix the reward mechanism that decides reward values (e.g., proportional to Shapley value) and show that the resulting model reward accuracy is more correlated with the Shapley values. Supplementary Material: I have reviewed the shorter proofs and the experiment results. Relation To Broader Scientific Literature: The paper proposes an alternative reward mechanism, slimmable neural networks, for federated learning.
Existing works have considered sparsifying model parameters, only sharing updates from a few clients, and controlling the frequency with which clients receive updates. Essential References Not Discussed: Under reward mechanisms, Lin et al., 2023 should also be discussed and compared against. Other Strengths And Weaknesses: **Strengths** 1. The paper introduces a novel reward allocation mechanism by leveraging slimmable neural networks. 2. The paper is generally well-written and easy to understand. **Weaknesses** 1. I assume that the fair allocation algorithm is directly applied after Algorithm 1. **However, if Algorithm 1 is used, each client would already get the full model parameters.** As the authors have pointed out in the introduction (line 62), this aids free-riding and does not ensure collaborative fairness subsequently. * This limitation is partially addressed by TEEs. * In Sec 4.3, an alternative reward mechanism (Equation 5) is proposed. **Is this used in place of Equation 3 when training-time rewards are desired?** 2. The paper is not self-contained, and important details (e.g., the algorithm in Sec 4.3, proofs in Sec 5, experiment results) are left to the appendix. It would be better to incorporate them into the main paper, for example, by giving the intuition behind some of the more complex proofs (e.g., Theorem 2) in the main paper. Other Comments Or Suggestions: * The Impact Statement is missing. * Move Algorithm 2 to the main paper. Specify the reward mechanism. * Instead of claiming Aequa to be "agnostic to any contribution measure", it might be more accurate to say that it is a reward mechanism (and complements any contribution assessment step). Questions For Authors: 1. In Sec 6.2 and 6.3, is Aequa referring to combining Algorithm 1 and Sec 4.2? What is the Aequa reward mechanism in Sec 6.3? How do you decide the model width for each client? 2. Address weakness 1 and in particular the bolded statements. 3.
Can you provide some intuition and a description of the setting in Theorem 2? Is it still the post-training reward setting? Why are multiple iterations considered? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Experimental Designs **E1:** We appreciate the reviewer's effort in scrutinizing the experimental design. However, we would like to clarify an **important point**: there is no alternative definition of collaborative fairness in our work or in the literature - the goal is consistent across methods. Our objective function in Sec. 4.2 aims to reduce the variance in $u(\mathbf{a})$, which promotes equitable benefit distribution and is a desirable property. Even without using CGS, Aequa achieves high Pearson correlation scores, reinforcing its fairness. Our formulation is principled and aligned with Tastan et al., 2025. **E2:** Thank you for this comment. Our design targets a different setting than suggested. When using contribution assessment methods like CGSV - which uses the *zeroing out gradients* strategy - we already test the scenario the reviewer proposes. As shown in Figs. 3 & 4, Aequa (CGSV) ranks second-best in correlation and CGS, validating our approach. --- > References We will include a discussion of Lin et al., 2023 in the final manuscript. However, that work focuses on balancing performance and collaborative fairness, unlike ours, which aims to maximize fairness directly. Their results do not match the high fairness levels achieved by Aequa (please refer to the paper), but we acknowledge its relevance and will cite it in Sec. 2. --- > Weaknesses We appreciate the reviewer's feedback but would like to clarify a key aspect that may have been **overlooked**. Our method explicitly assumes the use of TEEs for local client training, preventing clients from accessing full model parameters and thereby preventing free-riding concerns. We kindly request the reviewer to revisit the paper for further clarification and consider adjusting their assessment accordingly. **W1.3**: The reviewer’s understanding is correct. Eq. 5 is used in training-time reward settings for simplicity. 
**W2.1**: Our method can incorporate contribution-based strategies. Free riders can be assigned the minimum-width model (even set to zero). The utility definition (allocation minus contribution) can also be formulated as a ratio without affecting Lemmas 3 & 4 (it is straightforward to prove). As for the case where a client's contribution equals $u$, it represents a quantity skew - a common FL challenge. Our experiments include such scenarios. Increasing the minimum width parameter adjusts allocations for lower-contributing clients. Aequa aligns with Lyu et al., 2020's fairness principle: allocated rewards proportional to contributions. **W2.2**: While we don't follow prior procedures exactly, our minimum width parameter functions similarly to the existing tradeoff parameters the reviewer mentioned. When comparing Aequa to other works, we used their best-performing parameters, as detailed in Appendix B. For CGSV, we used the best setting ($\beta=1.0$), as justified in the original paper. Moreover, Aequa's best-performing clients end up with a comparable or better model than the model obtained by simply running FedAvg, making an explicit tradeoff parameter unnecessary - a strength of our approach. **W2.3**: We would like to highlight a fundamental misunderstanding in the reviewer's assessment. First, predictive performance is unrelated to Eq. 3, as this equation is applied post-training and does not influence the model's learning process. Any suggestion otherwise misinterprets our approach. Second, there is no "alternative" definition of collaborative fairness. The concept remains consistent across works; the only difference lies in how it is operationalized. Our objective aligns with Tastan et al., 2025, as previously stated. Finally, we urge the reviewer to carefully examine also the Pearson correlation results, where Aequa consistently outperforms all baselines.
Our method is both theoretically justified and empirically validated, reinforcing its effectiveness in achieving collaborative fairness. **W3:** Due to page limitations, we included only the most essential components of our work in the main paper while placing proofs and extended experiments in the appendix - a common practice in the literature. However, we acknowledge the reviewer's suggestion and will incorporate additional details in the main text where feasible. We will use the extra camera-ready page for this. > Comments or Suggestions We appreciate the reviewer’s suggestions and will incorporate them. > Questions **Q1:** Yes. Based on the reward mechanism in Section 4.2. **Q4:** Yes, Theorem 2 applies to the post-training reward setting, where the contributions of each participant and the allocation vector are already known. The multiple iterations are used to determine $\mathbf{a}^{\star}$ that minimizes the objective defined in Eq. 3, i.e., we run an iterative optimization algorithm to determine optimal widths for each client without updating the model and only taking client contributions as an input. --- Rebuttal Comment 1.1: Comment: > We appreciate the reviewer's feedback but would like to clarify a key aspect that may have been overlooked. Our method explicitly assumes the use of TEEs for local client training, preventing clients from accessing full model parameters and thereby preventing free-riding concerns. Thank you for the clarification! This improves my opinion of the work, but it should also be included beyond Section 1. > First, predictive performance is unrelated to Eq. 3, as this equation is applied post-training and does not influence the model's learning process. Any suggestion otherwise misinterprets our approach. My interpretation is as follows: Minimising Eq. 3 would mean that your allocation vector would reward free riders with more valuable models (post-training). Can you clarify this further?
> Finally, we urge the reviewer to carefully examine also the Pearson correlation results, where Aequa consistently outperforms all baselines. Thank you for the clarification! In data valuation, fairness is usually based on proportionality to individual/Shapley values. Initially, I found it strange that the Pearson correlation results are a measure of fairness, as it suggests that rewarding clients by their individual standalone accuracy is optimal. Upon re-checking the related work (Xu 2021, Wu 2024), I noticed that in FL, fairness can be approximately assessed by the correlation with standalone accuracy. Indeed, Aequa outperforms existing methods. --- The additional rebuttal comments have addressed my concerns. I have removed the weakness and raised the score from 2 to 3. --- Reply to Comment 1.1.1: Comment: Thank you for the follow-up. We respectfully disagree that minimizing Equation 3 rewards free riders with more valuable models. Perhaps the confusion arises from a misinterpretation of the utility $u$ as the accuracy of the allocated models. We would like to emphasize that utility $u$ is defined as **collaboration gain** (difference between the allocation $a$ and the contribution $c$) in Equation 2. While the numerator in Equation 3 (expected utility/collaboration gain) is maximized when all the clients receive more valuable models (irrespective of their contribution), such an allocation would greatly increase the denominator (variance of the collaboration gains) when there are free-riders. Consequently, such an allocation will not minimize Equation 3.
Rather, Equation 3 is minimized when the variance of the collaboration gains is small (collaboration gains are uniform), i.e., free riders receive models with lesser width (and hence lower accuracy) **and** high contributors receive models with higher width (and hence better accuracy), while at the same time ensuring that expected collaboration gain stays positive, i.e., all clients receive models whose accuracy is better than their respective standalone accuracies. In particular, if a client has a near-zero contribution (free rider), the optimal solution to Eq. 3 assigns that client a low utility model, i.e., a minimum-width model with low accuracy. On the other hand, a client with the highest contribution receives the full-width model with high accuracy. In both these scenarios, the collaboration gain is expected to be positive and the variance of the collaboration gains would be very low, thereby minimizing the objective in Eq. 3. --- If the concern of the reviewer pertains to the case where the accuracy of the minimum-width model is still too high (thereby rewarding free riders), we recommend lowering the minimum width parameter. This ensures that free riders do not benefit disproportionately, maintaining strict contribution-based fairness. If this is indeed the question, we would like to direct the reviewer to our quantity skew experiments presented in the main paper. This setting includes clear free-rider scenarios. For instance: - In quantity skew (0.15, 6), four participants each hold only $2.5$% of the data (free riders), while six participants each hold $15$%. - In quantity skew (0.4, 2), eight participants hold $2.5$% each, while two high-contributing clients hold $40$% each. These cases are detailed in the experiments, specifically in Tables 1, 2, 6, and 7. In Table 6, we use a minimum width of **0.25**. 
While Aequa performs well in general, it underperforms IAFL in these extremely skewed partitions in terms of the collaboration gain metric (CGS), though it remains perfect in predictive performance and Pearson correlation. To address this, in Table 7, we reduce the minimum width to **0.1**. This adjustment significantly decreases the gains of free riders (by decreasing the performance of the minimum-width model). As a result, Aequa outperforms all baselines across CGS (variance), Pearson correlation, and predictive performance, achieving near-perfect fairness and accuracy. These results demonstrate that even in extreme skew scenarios (scenarios with free riders), the solution to Eq. 3, in conjunction with an appropriate minimum width setting, prevents rewarding free riders and remains faithful to collaborative fairness.
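The allocation behavior argued in this reply can be illustrated numerically. The sketch below uses hypothetical accuracy values (none are from the paper) and assumes the rebuttal's definition of collaboration gain (allocation minus contribution); the gain spread (a CGS-style quantity) and the Pearson correlation are the two fairness quantities the discussion revolves around.

```python
from statistics import mean, pstdev

def gains(alloc, contrib):
    # Collaboration gain per client, per the rebuttal: allocation - contribution.
    return [a - c for a, c in zip(alloc, contrib)]

def pearson(x, y):
    # Plain Pearson correlation coefficient (population statistics).
    cov = mean(xi * yi for xi, yi in zip(x, y)) - mean(x) * mean(y)
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical accuracies: contributions are standalone accuracies,
# allocations are accuracies of the models each client receives.
contrib = [0.40, 0.55, 0.70, 0.85]
uniform = [0.90, 0.90, 0.90, 0.90]        # everyone gets the full model
proportional = [0.55, 0.70, 0.84, 0.97]   # width-proportional rewards

# A uniform allocation gives free riders the largest gains and spreads the
# gains widely; a proportional allocation keeps gains positive, nearly
# uniform, and highly correlated with contributions.
print(pstdev(gains(uniform, contrib)))       # large spread
print(pstdev(gains(proportional, contrib)))  # small spread
print(pearson(contrib, proportional))        # close to 1
```

This mirrors the argument above: both allocations keep every collaboration gain positive, but only the proportional one also makes the gains uniform across clients.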
Summary: The paper introduces a framework (Aequa) for fair model rewards in collaborative learning (CL) by leveraging slimmable neural networks. The core idea is to proportionally allocate model capacity to participants based on their contributions, rather than distributing identical models to all. The method ensures that higher contributors receive better-performing models (e.g., a neural network with a bigger width), while lower contributors get degraded versions. Claims And Evidence: supported Methods And Evaluation Criteria: probably sound Theoretical Claims: probably sound Experimental Designs Or Analyses: valid Supplementary Material: NA Relation To Broader Scientific Literature: Fair reward allocation is important in federated learning. Essential References Not Discussed: References ok. Other Strengths And Weaknesses: Strengths 1. Integrating slimmable networks (a single neural network that can operate at multiple widths) with federated learning for fairness is unique. Unlike previous methods that use heuristic reward distributions, Aequa directly controls model width, ensuring a structured degradation in performance. 2. Theoretical Convergence Analysis The framework provides a comprehensive convergence analysis to ensure the optimization of the training-time reward allocation algorithm remains stable and optimal. This is a notable improvement over methods like CGSV, which lack formal convergence proofs, and IAFL, which introduces trade-offs between fairness and performance. Weaknesses 1. Assumption of Continuous Model Performance: The theoretical analysis assumes that model performance is continuous within the interval [ℓ, u], allowing for perfect Pearson correlation coefficients. However, in practice, model widths are discrete due to hardware and software limitations, and performance may not scale smoothly with width reduction. 
This discreteness could lead to suboptimal allocations, where small changes in width result in disproportionate performance drops, affecting fairness. 2. There are some missing refs in related fields, such as [1][2]. 3. Limited Scalability to Large Federated Learning Setups: The approach has only been tested on relatively small datasets (CIFAR-10, MNIST, etc.), which do not reflect the scale of real-world FL scenarios (e.g., federated medical imaging, large-scale NLP). Lack of experiments on larger FL benchmarks: the method should be evaluated on federated benchmarks like FEMNIST or OpenImage-FL to test scalability. [1] Jiang, M., Roth, H. R., Li, W., Yang, D., Zhao, C., Nath, V., ... & Xu, Z. (2023). Fair federated medical image segmentation via client contribution estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 16302-16311). [2] Li, T., Hu, S., Beirami, A., & Smith, V. (2021, July). Ditto: Fair and robust federated learning through personalization. In International conference on machine learning (pp. 6357-6368). PMLR. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper, for highlighting its strengths, and for their valuable feedback. > W1. Continuous model performance assumption We acknowledge that the assumption of continuous model performance across the interval $[\ell, u]$ may not always hold perfectly in practice. This assumption was primarily introduced for theoretical rigor. Nevertheless, as indicated and supported by other reviewers, our experimental results consistently achieve fairness scores close to the perfect score (mostly exceeding 0.95). Based on these comprehensive experiments, we confidently assert that the practical discreteness of model widths does not significantly impact performance or fairness, as our fairness scores consistently remain very high, often approaching or equaling 1.0. > W2. Missing references Please note that reference [1] is already cited in our paper (lines 40, 59, 90). Regarding [2], while we acknowledge its relevance in the broader context of fairness in federated learning, its focus differs from the core objective of our work. [2] specifically targets performance fairness, aiming to personalize models for individual clients to ensure each achieves reasonable predictive performance. In contrast, our work addresses collaborative fairness, which is concerned with the fair distribution of model rewards in proportion to each participant's contribution. Given this, [2] is not central to our framework. Nonetheless, to ensure completeness in the extended related work discussion, we will include it in Section 2 (Fairness in FL) of the final version of the paper. > W3. Scalability experiments

| Algorithm | Acc. | $\rho$ | MCG $\pm$ CGS |
| --- | --- | --- | --- |
| FedAvg-FT | 71.48 | 0.2904 | 47.02 $\pm$ 5.07 |
| Aequa (ours) | **73.10** | **0.9888** | 52.19 $\pm$ **2.57** |

[FEMNIST experiment] In response to the reviewer's suggestion, we conducted additional scalability experiments on the FEMNIST dataset to evaluate the effectiveness of our proposed method under challenging federated learning conditions. FEMNIST is a large-scale benchmark dataset characterized by its natural non-IID distribution and extensive client base, making it well-suited for evaluating scalability. For this experiment, we employed a custom CNN architecture composed of two convolutional layers followed by a fully connected layer. The experiment involved a total of $\textbf{3597}$ **clients** with partial participation, randomly sampling 10 clients per communication round. We set the number of local epochs to 1, a batch size of 16, and executed training for 500 communication rounds. The training was performed using an SGD optimizer with an initial learning rate of 0.1, employing the same settings detailed in the main paper. The results in the table demonstrate that our method consistently outperforms the FedAvg algorithm across all evaluation metrics. This indicates that Aequa effectively addresses the scalability and heterogeneity challenges inherent in large-scale federated learning scenarios. We also include the model width vs. accuracy plot in this link: https://ibb.co/C3jpd5Nm
Summary: The manuscript studies the question of assigning rewards to participants with different models whose performance faithfully reflects their heterogeneous contribution, and extends/repurposes the concept of the slimmable network for fairness in federated learning, so as to make sure that model rewards are proportional to client contributions, achieving both high performance and collaborative fairness simultaneously. Claims And Evidence: Yes. The manuscript includes theoretical convergence results and fairness analysis, as well as extensive numerical results to support the advances of the proposed method. Methods And Evaluation Criteria: Looks reasonable: * it considers MNIST, Fashion-MNIST, SVHN, CIFAR-10 & 100; * it uses homogeneous, heterogeneous, and quantity skew scenarios; * it also considers several baselines, including CGSV, IAFL, SA. Theoretical Claims: * The proof of the convergence analysis looks standard, and it is unclear how much the subnetwork will affect the optimization. * The writing quality of sec 5.2 should be improved. Experimental Designs Or Analyses: Yes. The numerical experiments look strong. Supplementary Material: The manuscript provides extensive additional numerical results in the appendix. Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: Lemma 4 is very hard to understand, as it does not explain the meaning of $\mathbf{c}$ and $\rho$. ## update after rebuttal The reviewer thanks the authors for providing feedback and will maintain the original score. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our work, for highlighting its advantages, and for their valuable positive feedback. > The writing quality of Section 5.2 should be improved. We appreciate the reviewer highlighting the clarity issues in Section 5.2. We sincerely apologize for any confusion caused and have carefully revised this section to improve readability and coherence. Additionally, we have ensured that all important points previously omitted are clearly incorporated into the final manuscript. > How much will the subnetwork affect the optimization? As we show in Lemmas 1 and 2, subnetworks preserve both the smoothness and convexity of the objective. Thus, the effect on the optimization is minimal, and our new formulation admits standard analysis, as demonstrated in our manuscript, which is a strength of our approach. Therefore, the main effect that we need to investigate is the final quality of the models given the subnetwork formulation, which consistently maintains strong performance, as validated by our experimental results. > Lemma 4 We apologize for the oversight in defining the terms used in Lemma 4. We clarify that $\mathbf{c}$ represents the vector containing individual client contributions, as elaborated in Section 4.2. Additionally, $\rho$ denotes the Pearson correlation (line 281). We appreciate the reviewer's comment and will revise Lemma 4 and its explanation to incorporate these clarifications in the final manuscript.
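The sub-networks discussed in this rebuttal can be illustrated with a minimal sketch of width slicing as used in slimmable networks: a width-$w$ sub-network simply reuses the leading channels of every layer, so no retraining is needed. The helper below (plain lists, a single dense layer) is an illustration of that general idea, not the paper's implementation.

```python
import math

def slim_layer(weight, in_frac, out_frac):
    """Keep only the leading channels of a dense layer's weight matrix.

    weight: list of output rows, each a list of input weights.
    A width-w sub-network, slimmable-network style, reuses the first
    ceil(w * dim) input and output channels of the full layer.
    """
    out_dim = math.ceil(out_frac * len(weight))
    in_dim = math.ceil(in_frac * len(weight[0]))
    return [row[:in_dim] for row in weight[:out_dim]]

# Toy 4x4 layer; a 0.5-width client receives the top-left 2x2 block.
full = [[float(10 * i + j) for j in range(4)] for i in range(4)]
half = slim_layer(full, 0.5, 0.5)
print(half)  # [[0.0, 1.0], [10.0, 11.0]]
```

Because the half-width weights are a literal sub-block of the full weights, the sub-network objective inherits properties such as smoothness from the full objective, which is the intuition behind the Lemma 1 and 2 discussion above.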
Summary: This paper introduces Aequa, a framework to ensure collaborative fairness in federated learning using slimmable neural networks. It trains a single global model whose sub-networks of varying widths serve as rewards aligned with participant contributions. Experiments on six benchmark datasets show Aequa achieves near-perfect correlation between contributions and model performance without significant loss in overall accuracy. Claims And Evidence: The authors claim Aequa ensures proportional rewards, maintains strong performance, and converges theoretically. Correlation results (often above 0.95) back up the fairness claim, and tables show minimal accuracy drop compared to FedAvg. The theoretical analysis relies on standard convex FL assumptions; while the reviewer did not verify every proof step, the logic appears sound. Methods And Evaluation Criteria: The methods and experimental setup appear well-chosen for the problem. The core method – using a slimmable network to enable differentiated model widths – is appropriate because it allows implementing fairness in a single training run (as opposed to training separate models per client). This design is elegant and efficient, ensuring that smaller “reward” models are true sub-networks of the larger model and thus require no additional training . The paper’s federated optimization procedure (Algorithm 1) is an adaptation of FedAvg to train all widths of the slimmable model in a coordinated way; this is a sensible approach to ensure that the global model performs well across the full width range. The fair allocation algorithm (post-training) is described formally as an optimization problem with clear objectives (non-negativity of gains, proportionality, etc.), and the chosen solution method (simulated annealing-based heuristic) is reasonable for finding an approximate optimal width allocation. 
The evaluation criteria align with the claims: the authors evaluate global model accuracy to ensure overall performance isn’t degraded, and they evaluate fairness through the correlation between contributions and rewards (as well as a metric called collaboration gain spread). These metrics directly measure the goals of collaborative fairness and are standard in this line of work . The use of six diverse datasets (MNIST, FMNIST, SVHN, CIFAR-10, CIFAR-100, SST) and multiple data partition strategies (homogeneous, Dirichlet heterogeneous, quantity skew, label skew) is commendable  . This covers both IID and non-IID scenarios, which is critical for federated learning experiments. The chosen baselines are appropriate: FedAvg with fine-tuning represents a naive approach to personalization, CGSV and IAFL represent state-of-the-art fairness/incentive methods , and evaluating Standalone accuracy provides a reference point for each client’s individual performance. By including these baselines, the authors ensure a fair and informative comparison. The experiments were run across all methods on identical settings, and the paper mentions using balanced accuracy for class-imbalanced data which is a proper choice . In terms of methodology, everything from the training procedure to the selection of metrics seems well-justified and in line with common practice, indicating the experimental design is suitable for validating the paper’s contributions. Theoretical Claims: The paper makes two primary theoretical claims: a performance guarantee for the federated training process with slimmable networks, and the convergence of the fair allocation algorithm. Both are presented with formal statements (Theorem 1 and Theorem 2) and proved under certain assumptions. 
The correctness of these claims appears plausible – the authors assume standard conditions (convex loss, L-smoothness, bounded variance in gradients, etc., as referenced in the paper) and then build on known federated optimization analyses . Theorem 1 provides a convergence rate or error bound for Aequa’s training algorithm, ensuring that training on slimmable networks still optimizes the global objective within the same order of convergence as regular federated learning (given a suitable learning rate) . Theorem 2 addresses the allocation algorithm, stating that the iterative simulated annealing approach will converge asymptotically to the optimal allocation (with probability 1) . The proofs for these are relegated to the appendix, and the authors reference lemmas and prior work (e.g. citing known results from convex optimization literature) to support their derivations. From a clarity standpoint, the theoretical section in the main text is relatively concise – it outlines key lemmas and theorems, deferring detailed technical proofs to Appendix A. This makes it a bit challenging to verify every step without deep diving into the supplementary material, but it keeps the main content accessible. The reviewer’s confidence in the theoretical claims is moderate rather than absolute: the reasoning is sound on a high level and no obvious errors were found, but a full verification would require checking each appendix lemma and assumption carefully. The authors do clearly state assumptions (e.g., convexity and bounded client-drift via bounded dissimilarity in data, as noted in the appendix references) and the statements of theorems seem consistent with those assumptions. 
The convergence claim for the allocation algorithm is particularly interesting since it uses a heuristic method (simulated annealing); proving convergence in that context is non-trivial, but the authors appear to have done so under specific conditions (assuming a certain form of cost function and a proper cooling schedule, presumably detailed in Appendix A.6). In summary, the theoretical claims are well motivated and likely correct, though their practical applicability is bound by the validity of the assumptions (which is typical for FL theory). The clarity is sufficient, but readers who want full detail will need to consult the supplementary proofs, which the reviewer trusts with some caution.

Experimental Designs Or Analyses: Experiments include varied data partitions (Dirichlet, label/quantity skew) and compare multiple metrics. Aequa consistently achieves the highest fairness correlation while matching or improving baseline accuracies. A few corner cases require tuning the minimum width parameter.

Supplementary Material: Appendices contain full proofs, implementation details, and extended experiments. They corroborate the main text and clarify the approach to sub-network allocations.

Relation To Broader Scientific Literature: The paper builds on existing FL incentive work (e.g., CGSV, IAFL) but uniquely applies slimmable networks to allocate capacity. It aligns with "collaborative fairness" studies, providing a more rigorous and flexible mechanism than many prior heuristic methods.

Essential References Not Discussed: No major omissions are evident; the authors cite relevant literature on fair FL and slimmable architectures.

Other Strengths And Weaknesses: Strengths include a novel architectural approach, solid empirical validation, and flexible application to different contribution measures. Weaknesses are its reliance on TEEs for security and the need to choose parameters (e.g., minimum model width) carefully.
Further exploration of real-world overhead and security challenges would be beneficial.

Other Comments Or Suggestions: None beyond the strengths and weaknesses noted above.

Questions For Authors: (1) How does Aequa handle noisy or imperfect contribution assessments? (2) Could Aequa mitigate free-riding if trusted hardware is unavailable, or is the TEE integral? (3) Does simultaneously training many width configurations introduce significant overhead for very large models or client counts?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
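To make concrete the kind of annealing loop whose asymptotic convergence Theorem 2 presumably covers, below is a generic simulated-annealing sketch for searching over width assignments. All names and the cost function are hypothetical placeholders, not the authors' implementation or the specific cost form assumed in Appendix A.6:

```python
import math
import random

def anneal_allocation(scores, widths, cost, steps=10000, t0=1.0, alpha=0.999):
    """Search width assignments by simulated annealing (generic sketch).

    scores: per-client contribution scores; widths: candidate model widths;
    cost: callable penalizing unfair assignments (hypothetical placeholder).
    """
    assign = [random.choice(widths) for _ in scores]
    best, best_cost = assign[:], cost(scores, assign)
    temp = t0
    for _ in range(steps):
        # Propose a single-client change to the current assignment.
        cand = assign[:]
        cand[random.randrange(len(cand))] = random.choice(widths)
        delta = cost(scores, cand) - cost(scores, assign)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            assign = cand
            c = cost(scores, assign)
            if c < best_cost:
                best, best_cost = assign[:], c
        temp *= alpha  # geometric cooling schedule
    return best
```

A natural cost is the squared gap between each client's (normalized) score and its assigned width, so that higher contributors end up with wider sub-networks; the convergence guarantee of such a scheme depends on the cooling schedule, which is exactly why the conditions in Appendix A.6 matter.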
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable positive feedback.

> Reliance on TEEs for security

We acknowledge the reviewer's concern regarding reliance on TEEs. This limitation is already discussed in Section 7 of the paper. While TEEs underpin Aequa's theoretically fair allocation, Aequa can also operate without trusted hardware by incorporating contribution assessment methods (e.g., CGSV). In that case, model allocations are directly tied to assessed contributions, ensuring fairness without requiring hardware-based isolation. Thus, while TEEs provide one implementation path with a theoretical fairness guarantee, Aequa remains effective and secure in their absence.

> Need to choose parameters (e.g., minimum model width) carefully

We acknowledge the reviewer's concern regarding careful parameter selection. However, our method notably differs from existing approaches in its hyperparameter sensitivity. Unlike existing methods, which typically rely on one or two hyperparameters that must be carefully tuned to balance fairness and utility, our method uses only a single parameter: the minimum model width. Importantly, we do not treat this parameter as a conventional hyperparameter requiring extensive tuning, because it has a clear, interpretable meaning: it is the width of the lowest-performance model that can be allocated. Furthermore, setting this parameter to its minimal feasible value degrades neither predictive performance nor the fairness metrics, which significantly reduces sensitivity and simplifies practical deployment.

> Further exploration of real-world overhead and security challenges

Please see the response to [Reviewer wyh2, W3] for results on the FEMNIST dataset, which effectively mimics real-world overhead, and for how our method performs on all evaluation metrics. We supplement the training time per communication round in seconds from the provided table.
| Algorithm | Time per round (s) |
| --------- | ------------------ |
| FedAvg | 42 $\pm$ 2.9 |
| Aequa | 55 $\pm$ 3.1 |

From this table, FedAvg completes a round in 42 seconds, whereas Aequa takes 55 seconds - an increase of 30.9% (1.309$\times$) over FedAvg. This overhead is modest and well justified by Aequa's superior performance and fairness. As for communication, there is no overhead, since models of the same size are exchanged. Regarding security challenges, the main consideration is the requirement of TEEs on client devices, as described in Section 1. However, the training-time extended version of Aequa, described in Section 4.3, does not depend on TEEs: in that setting, the model widths broadcast by the server are based on contribution scores, mitigating security concerns related to model access.

> Questions

**Q1:** In this work, we focus specifically on the reward allocation mechanism, operating under the assumption that the contribution assessments provided are reliable. Improving contribution assessment methods falls beyond the scope of this study.

**Q2:** Aequa's design inherently supports mitigating free-riding even in the absence of trusted hardware (TEEs). By integrating robust contribution assessment methods (e.g., CGSV), Aequa effectively addresses free-riders: the model width allocated to each participant is determined by their assessed contribution, so clients with lower contributions receive narrower model widths and lack access to the full-width model. This mechanism naturally restricts the benefits available to potential free riders and removes the explicit need for a TEE.

**Q3:** Our approach trains only two width configurations simultaneously in each forward pass. As demonstrated above, this does not introduce significant overhead.
Specifically, we conducted an experiment on the FEMNIST dataset with a high number of clients (**3597 clients**), as detailed above and in the response to [Reviewer wyh2, W3]. The empirical results confirm that the overhead remains well below $2\times$ (measured at 1.309$\times$), since our training leverages subnetworks. Thus, our method remains computationally efficient and scalable even with large models or substantial client counts.
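As an illustrative sketch of the TEE-free variant described in Q2 (a hypothetical helper, not the paper's exact allocation rule), contribution scores can be mapped linearly onto model widths so that the top contributor receives the full-width model and lower contributors receive narrower sub-networks:

```python
def widths_from_contributions(contribs, min_width=0.25, max_width=1.0):
    """Map contribution scores to model widths in [min_width, max_width].

    Hypothetical helper: the top contributor gets the full-width model and
    the rest are scaled linearly; not the paper's exact allocation rule.
    """
    top = max(contribs)
    if top == 0:
        # Degenerate case: no measurable contributions, everyone gets the minimum.
        return [min_width] * len(contribs)
    span = max_width - min_width
    return [min_width + span * (c / top) for c in contribs]
```

Under this mapping a zero-contribution client is confined to the minimum-width sub-network, which is the mechanism the rebuttal describes for restricting the benefits available to free riders.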