House of Cards: Massive Weights in LLMs
Paper Decision: Reject
Summary: This paper identifies the phenomenon of "massive weights", which contributes to a known phenomenon called "massive activations". These massive weights make up only a tiny fraction of an LLM's parameters, but significantly impact model performance. When the top-k massive weights are zeroed, LLMs experience serious performance degradation; this is called the top-k zeroing attack in this paper. To reduce the reliance on massive weights, the authors propose a massive weights curriculum dropout (MacDrop) during LoRA or DoRA fine-tuning. The results show that with MacDrop, fine-tuned LLMs perform better and are more resilient to the top-k zeroing attack.

## update after rebuttal
I think the authors alleviate my concerns to some extent. I increased my rating to 3, as this paper identifies the phenomenon of "massive weights", which contributes to the mechanistic interpretability of LLMs. But I think the proposed MacDrop and top-k zeroing attack are less practical; I assume these two approaches could be regarded as means to explain the phenomenon. This prevents me from rating higher.

Claims And Evidence: I think the major claims, 1) massive activations are related to massive weights, which are small in quantity but significantly impact performance, and 2) the proposed MacDrop can reduce the reliance of LLMs on massive weights, are justified through the visualisations and empirical studies.

Methods And Evaluation Criteria: I think the proposed methods and evaluation criteria make sense.

Theoretical Claims: There are no theoretical claims in this paper.

Experimental Designs Or Analyses: Yes, I think the experimental designs and analyses make sense.

Supplementary Material: I reviewed the supplementary material to check the generalisation of conclusions to other models. I mainly checked the attention sink phenomenon of Gemma2, as in the main paper the authors claim that Gemma2 has no attention sink.
Relation To Broader Scientific Literature: The authors attribute the massive activations to massive weights. The conclusions in this paper may be interesting to related literature on massive activations, or to downstream applications such as model quantization. The proposed MacDrop seems to improve the performance of LoRA/DoRA to some extent; I think it may be of interest to communities working on general fine-tuning topics.

Essential References Not Discussed: I think the essential references are discussed in this paper.

Other Strengths And Weaknesses: The motivation behind the proposal of MacDrop is that "massive weights are predominantly learned during pre-training, and that zeroing them can severely undermine LLMs". However, why do we need to reduce the reliance of LLMs on massive weights? I am aware that this paper showcases the top-k zeroing attack. However, in real applications, will the top-k zeroing attack be used? For production LLMs, whose weights cannot be accessed, such an attack cannot be conducted. For open-sourced LLMs, I think it should be easy to make LLM performance drop by randomly editing several model parameters. Therefore, the motivation of MacDrop should be further justified.

Other Comments Or Suggestions: I think the key message here is that massive weights, which are related to massive activations, can significantly impact model performance. StreamingLLM [1] has already shown that without the initial tokens, the PPLs will soar. This hints that the attention sink is essential to model performance. Back to the massive weights, I conjecture that when zeroing the massive weights, massive activations and the attention sink disappear, and then model performance drops.

[1] Xiao et al. Efficient Streaming Language Models with Attention Sinks. ICLR 2024.

Questions For Authors: Why do Gemma 2 models have no attention sink? Do you have intuitions behind this? In Appendix C.5, I find that the attention sink phenomenon disappears only when the BOS token is absent.
Have you checked whether Gemma models can still achieve normal performance without BOS? According to [1], when fine-tuning Gemma models, the BOS token is required to be added; otherwise, the loss is much higher. Therefore, without BOS, model performance becomes much worse, and it is not necessary to discuss whether there is an attention sink. Although it is not related to my ratings, just out of curiosity, what about the massive weights of LLMs that have no massive activations? In [2], it is shown that using a learnable KV bias can mitigate massive activations. In such cases, will the massive weights also disappear?

[1] https://unsloth.ai/blog/gemma-bugs
[2] Sun et al. Massive Activations in Large Language Models. COLM 2024.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
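The conjecture above ties zeroing massive weights to the disappearance of massive activations and the attention sink. As a concrete reference point, massive activations are typically flagged as dimensions whose magnitude dwarfs the typical activation. A minimal sketch using an illustrative median-relative threshold (the exact constants in Sun et al. differ, and the tensor size and planted value here are purely illustrative):

```python
import numpy as np

def find_massive_activations(hidden, ratio=100.0):
    # Flag dimensions whose magnitude is far above the median magnitude.
    # `ratio` is an illustrative threshold; Sun et al. (2024) use a similar
    # median-relative criterion with different constants.
    mags = np.abs(hidden)
    return np.where(mags > ratio * np.median(mags))[0]

# Toy hidden-state vector with one planted massive activation
h = np.random.default_rng(0).normal(size=(4096,))
h[1234] = 800.0

print(find_massive_activations(h))  # → [1234]
```

Zeroing the massive weights that feed such dimensions would remove exactly these outliers, which is the mechanism the conjecture appeals to.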
Rebuttal 1: Rebuttal: Thank you for your insightful review. We would like to answer the concerns raised in **Weaknesses**, **Comments Or Suggestions**, and **Questions**.

---

### Weaknesses
- First, because our paper deals with open-source LLMs that allow access to weights, we begin by addressing random weight editing.
- As you mentioned, random weight editing is a simpler attack. However, although it may show whether the model's performance decreases (or does not decrease), it has limitations in revealing whether the model's performance depends on particular weights.
- Therefore, by showing that the zeroing attack on **targeted specific weights** leads to severe degradation, we highlighted the existence and significance of massive weights.
- We also recognize that such attacks are not feasible on production LLMs.
- **We used such attacks as a means to analyze and gain a deeper understanding** of the weights in LLMs, rather than focusing on the attack itself.
- In this regard, we have clarified this point in the **Impact Statement**.
- Finally, we acknowledge that instead of mitigating massive weights, it is possible to utilize them, as demonstrated by other algorithms introduced in Related Work (Appendix H).
- However, from the perspective of a model developer, who can access the model weights, it is reasonable to think that one does not want a large-scale model, developed at significant cost, to exhibit over-dependence on just a few weights.
- For this reason, we would like to highlight the motivation for mitigating massive weights rather than utilizing them, because developing more robust models is necessary. Accordingly, we have expressed this point in the **Impact Statement**.

---

### Comments Or Suggestions
- Thank you for your suggestions. For verification, we examine the intermediate and hidden states across layers (as in Figure 3) and the attention maps (as in Figure 2) using the top-5 massive weights zeroed-out models listed in Table 1.
- As expected, **the massive activations in the states disappear, and the attention sink phenomenon does not occur**.
- We kindly ask for your understanding, as we are currently unable to update the manuscript and can only provide this information in text form. Figures will be updated.

---

### Questions
- We argue that the lower sensitivity of the Gemma-2 family can be attributed to the LayerNorm layer, discussed with **Equation 3**.
- Very recently, a related paper [1] has provided an analysis indicating that the architectural design shown in Equation 3 (referred to as Peri-LN) preserves **gradient stability** and prevents the emergence of massive activations observed in Pre-LN (e.g., in the Llama family).
- In detail, Pre-LN applies LayerNorm only to the input, while allowing the output to pass through the residual path; as a result, the massive activations generated may accumulate. Peri-LN applies normalization at both the input and output of each sub-layer, and this dual normalization prevents gradient explosion.
- We consider this paper to offer deeper insights into the functional role of LayerNorm, which we will cover in an updated manuscript.
- During the rebuttal, we checked the zero-shot performance of the Gemma-2 family depending on the bos token during both PEFT and evaluation. **Our default setting uses the bos token for both PEFT and evaluation**. The averaged zero-shot performance is as follows:

| PEFT/Evaluation | O/O | X/O | O/X | X/X |
|---|---|---|---|---|
| Gemma-2-2B + LoRA | **72.7** | 71.9 | 44.3 | 63.4 |
| Gemma-2-9B + LoRA | **80.6** | 79.7 | 46.1 | 73.8 |

- These results indicate that not using the bos token during evaluation (i.e., the O/X case) leads to a greater performance drop compared to not using it during PEFT (i.e., the X/O case).
- We found information messages related to this issue in the lm-eval github [2], and we modified the line below so that the bos token is not used.
```
if "gemma" in getattr(self.config, "model_type", ""):
    self.add_bos_token = True
    eval_logger.info(
        f"Model type is '{self.config.model_type}', part of the Gemma family--a BOS token will be used as Gemma underperforms without it."
    )
```

- When considering the Gemma-2 family, we repeatedly encountered the same concerns. In our opinion, since massive weights are defined in relation to massive activations, it is appropriate to consider that massive weights also disappear.
- In other words, if the KV bias in [3] is properly learned, we expect that the model's performance will be preserved even if the weights previously defined as massive weights are attacked.

---

[1] Peri-LN: Revisiting Layer Normalization in the Transformer Architecture
[2] https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/models/huggingface.py
[3] Massive Activations in Large Language Models

---

Rebuttal Comment 1.1: Comment: Why do you think that the Gemma-2 family has lower sensitivity due to the LayerNorm layer? I have not checked the structure of Gemma-2 models. Do you think Gemma-2 adopts a Peri-LN structure, as you used this example?

---

Reply to Comment 1.1.1: Comment: Thank you for your rapid response.

---

We can easily check the structure of Llama models from the transformers github [1]. Please refer to the \_\_init\_\_ and forward methods of class LlamaDecoderLayer.
- In a single layer, **two LayerNorm** layers exist, which is described as **Equation (1)** in our paper. In addition, this architecture is called pre-LN in [3].
- From the perspective of gradient stability, this structure is analyzed to have potential for gradient explosion [3].

```
self.input_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
self.post_attention_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
```

Similarly, we can check the structure of Gemma-2 models from the transformers github [2]. Please refer to the \_\_init\_\_ and forward methods of class Gemma2DecoderLayer.
- In a single layer, **four LayerNorm** layers exist, which is described as **Equation (3)** in our paper. In addition, this architecture is called peri-LN in [3].
- It is shown that Peri-LN can mitigate gradient instability [3].
- In addition, Gemma-2 is introduced as one of the representative models using peri-LN in [3].

```
self.input_layernorm = Gemma2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
self.post_attention_layernorm = Gemma2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
self.pre_feedforward_layernorm = Gemma2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
self.post_feedforward_layernorm = Gemma2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
```

In summary, the evidence clearly supports the conclusion that Gemma-2 incorporates the peri-LN architecture in its design.

---

[1] https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py
[2] https://github.com/huggingface/transformers/blob/main/src/transformers/models/gemma2/modeling_gemma2.py
[3] Peri-LN: Revisiting Layer Normalization in the Transformer Architecture
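The contrast between the two LayerNorm placements can be illustrated numerically. A minimal sketch (not the actual Llama/Gemma-2 implementations): a toy sub-layer that amplifies one dimension is iterated through a pre-LN block and a peri-LN block, where only the latter normalizes the sub-layer output before it re-enters the residual stream:

```python
import numpy as np

def rms_norm(x, eps=1e-6):
    # RMSNorm without a learnable scale, for illustration only
    return x / np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)

def pre_ln_block(h, sublayer):
    # Pre-LN (Llama-style): only the sub-layer INPUT is normalized,
    # so the sub-layer output flows unnormalized into the residual stream
    return h + sublayer(rms_norm(h))

def peri_ln_block(h, sublayer):
    # Peri-LN (Gemma-2-style): the sub-layer output is normalized again
    # before being added back, bounding what can accumulate residually
    return h + rms_norm(sublayer(rms_norm(h)))

# Toy "sub-layer" that amplifies one dimension, mimicking a massive activation
amplify = lambda x: x * np.array([100.0, 1.0, 1.0, 1.0])

h_pre = np.ones((1, 4))
for _ in range(5):
    h_pre = pre_ln_block(h_pre, amplify)

h_peri = np.ones((1, 4))
for _ in range(5):
    h_peri = peri_ln_block(h_peri, amplify)

# Under pre-LN the outlier dimension grows by ~200 per step; under peri-LN
# each residual update is renormalized, so growth stays moderate
print(np.abs(h_pre).max(), np.abs(h_peri).max())
```

This is only a caricature of the gradient-stability analysis in [3], but it shows why the extra output normalization in peri-LN limits the accumulation of massive activations in the residual stream.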
Summary: The paper studies and localizes the weight vectors that cause the massive activations, and observes that massive weights exist in different model families and even MoE models. The paper also finds that setting the massive weights (of one layer) to zero will completely destroy the model, while setting the complement weights to zero only hurts the model by a small portion. Based on the finding that massive weights are important, the authors propose the MacDrop approach to drop the massive weights during parameter-efficient fine-tuning to penalize the impact of those massive weights.

**Update after rebuttal**: My latest reply reflected my final update.

Claims And Evidence: No problematic claims.

Methods And Evaluation Criteria: The methods and evaluation make sense.

Theoretical Claims: There are no theoretical claims.

Experimental Designs Or Analyses: I checked all the experimental results.

Supplementary Material: There is no Supplementary Material.

Relation To Broader Scientific Literature: The findings in this paper might motivate people to pre-train/post-train models to mitigate the superweights.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
[Strength]
* The paper explores the massive weights of various model families.
* The result that retaining the top-k massive weights is better than dropping the top-k massive weights is interesting.

[Weakness] Major:
* The proposed method, MacDrop, does not show a clear improvement over not using it (Tables 2 and 3) when considering results from a single run. It would be more compelling if the results were averaged over multiple runs to ensure robustness.
* The novelty of zeroing out massive weights and its impact on performance appears limited, as it is conceptually similar to zeroing out massive attention, which has been explored in [1].
* Given the previous two limitations, the significance of defining massive weights remains unclear.
It would be helpful to demonstrate more applications to highlight their value.

[1] Massive Activations in Large Language Models

Other Comments Or Suggestions: In the abstract, it would be great to mention that the massive weights are defined in just one layer. Otherwise, it will sound like the majority of the weights of the whole model are zeroed out when reading lines 28-29, "However, when all weights except for massive weights are set to zero".

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
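For concreteness, the zeroing attack discussed in this review amounts to deleting the few weight rows that feed the dominant intermediate dimensions. A minimal numpy sketch under assumed shapes (the names `W_up`, `hidden_dim`, `inter_dim`, and the planted rows are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MLP up-projection of one early layer: inter = W_up @ x.
hidden_dim, inter_dim = 16, 64
W_up = rng.normal(size=(inter_dim, hidden_dim))

# Plant a few oversized rows so that some intermediate dimensions dominate
massive_rows = [3, 17, 42]
W_up[massive_rows] *= 50.0

x = rng.normal(size=(hidden_dim,))
inter = W_up @ x

# The "top-k zeroing attack": find the k intermediate dimensions with the
# largest magnitudes, then zero out the weight rows that produce them
k = 3
topk = np.argsort(np.abs(inter))[-k:]
W_attacked = W_up.copy()
W_attacked[topk] = 0.0

# The attacked dimensions collapse to exactly zero, while every other
# entry of the matrix is left untouched
print(sorted(topk.tolist()))
print(np.abs(W_attacked @ x).max() / np.abs(inter).max())
```

In a real LLM the zeroed rows amount to roughly 0.0005% of all weights, which is what makes the resulting collapse (Table 1 of the paper) striking.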
Rebuttal 1: Rebuttal: Thank you for your clear review. We will revise the abstract to make it crystal clear, reflecting your comments accordingly. During the rebuttal process, we focus on addressing the weaknesses in **Weaknesses**.

---

### Weaknesses
- (1) Multiple runs
  - To address this, we conduct **3 runs** with different seeds. The results consistently show that MacDrop outperforms the baseline (i.e., without MacDrop) in all cases. Namely, the overall trends are consistent with the single-run results reported in the original tables (**Tables 2 and 3**).
  - Zero-shot downstream tasks (average over 3 runs)

| Model | Method | Avg. ± Std |
|--------|-------------------|-----------------------------|
| Llama-3-8B | LoRA | 75.97 ± 0.08 |
| Llama-3-8B | LoRA + MacDrop | **76.14** ± 0.28 |
| Llama-3-8B | DoRA | 76.03 ± 0.11 |
| Llama-3-8B | DoRA + MacDrop | **76.37** ± 0.07 |
| Mistral-7B | LoRA | 75.19 ± 0.09 |
| Mistral-7B | LoRA + MacDrop | **76.33** ± 0.10 |
| Mistral-7B | DoRA | 75.32 ± 0.03 |
| Mistral-7B | DoRA + MacDrop | **76.19** ± 0.06 |

  - Long context tasks (average over 3 runs)

| Model | Method | Avg. ± Std |
|--------|-------------------|-----------------------------|
| Llama-3-8B | LoRA | 39.09 ± 0.08 |
| Llama-3-8B | LoRA + MacDrop | **40.85** ± 0.24 |
| Llama-3-8B | DoRA | 38.87 ± 0.27 |
| Llama-3-8B | DoRA + MacDrop | **39.68** ± 0.17 |
| Mistral-7B | LoRA | 37.10 ± 0.15 |
| Mistral-7B | LoRA + MacDrop | **38.04** ± 0.20 |
| Mistral-7B | DoRA | 37.00 ± 0.19 |
| Mistral-7B | DoRA + MacDrop | **37.06** ± 0.27 |

- (2) Novelty of zeroing attack
  - We did not intend to emphasize the novelty of the zeroing attack. Rather, **we already clarified the similarities and differences** with the attack used in [1] in **Section 2.3**.
  - "In essence, this attack is very similar to the one proposed in [1], where massive activations in the hidden state are zeroed out in a single layer.
The difference is that their attack targets the hidden state, while our attack targets the intermediate state."
  - In other words, we would like to highlight that while the method of attack is the same, the target is different. Our intention is to emphasize that the massive phenomenon appears **earlier than the hidden state (i.e., in the intermediate state)** identified in [1].
- (3) Significance of defining massive weights
  - We disagree with the claim that the significance of the massive weights remains unclear. Through **Table 1**, we demonstrated that zeroing out the massive weights (approximately 0.0005% of the total weights) leads to a complete degradation in performance. This alone suggests that these weights are indeed significant.
  - Furthermore, our responses to weaknesses (1) and (2) provide further clarification on this matter.
  - In addition, **based on massive weights**, we proposed MacDrop and demonstrated its effectiveness in terms of both performance (**Tables 2 and 3**) and robustness (**Table 4**). Namely, MacDrop is one of the applicable algorithms that consider massive weights. Its effectiveness underscores the importance of considering massive weights in the design of parameter-efficient fine-tuning.

---

[1] Massive Activations in Large Language Models

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal contents. Although I still hold small concerns about the applications of massive weights other than MacDrop, I have increased my rating accordingly.

---

Reply to Comment 1.1.1: Comment: Thank you for your thoughtful response. As you mentioned, we believe the observation on massive weights holds broader potential for future research. That said, we would like to gently emphasize that MacDrop is the first algorithm explicitly built upon the concept of massive weights.
Summary: This work investigates the massive weights phenomenon in large language models (LLMs). The authors observe that massive weights are strongly associated with the initial BOS token, though the results vary across different models. They also find that massive activations first emerge in the intermediate layers of MLP blocks within the early layers of the model. Furthermore, masking out these massive weights leads to a significant performance drop, highlighting their critical impact. To address this, the authors propose MacDrop, a parameter-efficient fine-tuning method that employs a curriculum dropout strategy to gradually reduce reliance on massive weights during fine-tuning, thereby improving overall model performance.

### After Rebuttal ###
Thanks for the responses that addressed my concerns. I keep my original score.

Claims And Evidence: The claims are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The evaluation makes sense.

Theoretical Claims: There are no theoretical claims.

Experimental Designs Or Analyses: The proposed methods are evaluated with different models and tasks, as well as several ablation studies.

Supplementary Material: N/A

Relation To Broader Scientific Literature: This work serves as a complementary study to the massive weights phenomenon observed in previous research, providing additional insights into its characteristics and impact on model performance. The authors also use this phenomenon to develop a parameter-efficient fine-tuning method.

Essential References Not Discussed: None.

Other Strengths And Weaknesses:
- The models are fine-tuned for three epochs; could the authors clarify which epoch's performance is reported? Since extended fine-tuning can sometimes degrade performance, it is important to specify how the checkpoint is selected.

Other Comments Or Suggestions: None

Questions For Authors: Are there any insights into whether such massive weights in LLMs are a benefit or an artifact of LLMs' abilities?
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your clear review. We would like to answer the concerns raised in **Weaknesses** and **Questions**.

---

### Weaknesses
- We used the model trained for 3 epochs (i.e., the last checkpoint) for all experiments. We will clearly state this detail in **Section 4**.

---

### Questions
- In our view, massive weights can be regarded as artifacts; however, we believe they also offer advantages. In particular, while they introduce undesirable over-dependence, they contribute positively in areas such as compression.
- First, it is widely recognized that the cost of model pre-training is exceptionally high. Therefore, model developers likely do not want trained models to be totally disrupted by very simple attacks, as shown in **Table 1** of our paper. Based on these results, we believe that the massive weights are not intentionally designed.
- Nevertheless, these massive weights, predominantly learned during pre-training, have a significant impact on the capabilities of LLMs, providing considerable advantages. In essence, they can be interpreted as a means of densely concentrating information. Consequently, we think that LLM algorithms such as streaming generation [1] and quantization [2] have improved based on this underlying principle.

---

[1] Efficient Streaming Language Models with Attention Sinks, ICLR 2024
[2] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization, NeurIPS 2024
Summary: The paper focuses on understanding the internal mechanisms of Large Language Models (LLMs). The authors observe that large activations in LLMs, which appear in specific feature dimensions of hidden states, introduce bias by emphasizing the corresponding token. They identify that these large activations originate from the intermediate state of a feed-forward network module in an early layer. The authors define "top-k massive weights" as the weights that contribute to the dimensions with the top-k magnitudes in the intermediate state. They find that setting these massive weights to zero disrupts the functionality of LLMs, while zeroing out all other weights results in a relatively minor performance drop. Based on this observation, the authors propose a method called MacDrop (massive weights curriculum dropout) to rely less on massive weights during parameter-efficient fine-tuning. MacDrop applies dropout to the pre-trained massive weights, starting with a high dropout probability and gradually decreasing it as fine-tuning progresses. The authors demonstrate that MacDrop improves performance and robustness through various experiments.

Claims And Evidence:
Claim 1: Massive activations in LLMs introduce bias by overemphasizing specific tokens.
Evidence: The authors provide examples in Figure 1(b) where attacking (zeroing out) the top-5 massive weights in the Llama-3-8B-Instruct model leads to the model repeating the user prompt, indicating a disruption of functionality.

Claim 2: Massive activations originate from the intermediate state of a feed-forward network module in an early layer.
Evidence: The authors trace various states in early layers and observe that the intermediate state $inter_l$ within an early layer $l$ exhibits massive activations before they appear in the hidden state $h_l$. Figure 3 and related descriptions support this claim.
Claim 3: Top-k massive weights (weights contributing to the top-k magnitudes in the intermediate state) are crucial for LLM functionality.
Evidence: The authors conduct attacks on Llama-3-8B, Llama-3-70B, and Llama-3.1-405B (8bit) by zeroing out the top-5 massive weights and, conversely, retaining only the top-5 massive weights. Zeroing out the top-k massive weights severely disrupts LLM performance, while retaining only these weights leads to a relatively minor performance drop. This demonstrates the significant impact of these weights.

Claim 4: MacDrop (massive weights curriculum dropout) improves performance and robustness in parameter-efficient fine-tuning.
Evidence: The authors conduct experiments on zero-shot downstream tasks and long context tasks, showing that MacDrop generally enhances model performance. Tables 2 and 3 provide specific results supporting this claim. Additionally, Table 4 demonstrates that fine-tuned models with MacDrop exhibit better performance under the top-3 zeroing attack, indicating improved robustness.

Overall, the claims made in the paper are supported by the evidence provided, including experimental results, ablation studies, and comparative analyses across different LLM architectures.

Methods And Evaluation Criteria: These methods and evaluation criteria appear to be appropriate for the problem and application:
- The proposed MacDrop method is designed to improve the fine-tuning of LLMs by reducing reliance on massive weights, and the curriculum dropout strategy makes sense in this context.
- The use of ablation studies to examine the effects of dropout scope, dropout probability scheduling, and curriculum methods is suitable for analyzing the impact of different components of MacDrop.
- The evaluation criteria cover various aspects of LLM performance, including perplexity, zero-shot accuracy on downstream tasks, long context understanding, and generation quality.
- The choice of benchmark datasets such as WikiText, C4, PG-19, LongBench, and Spec-Bench is standard in the field and provides a means to compare the effectiveness of the proposed method with existing approaches.

Theoretical Claims: The paper does not include any proofs for theoretical claims. It focuses on empirical observations and proposing a practical method (MacDrop) based on those observations.

Experimental Designs Or Analyses: Based on my review, here are some potential issues and questions related to the experimental designs or analyses in the paper:

Ablation Study on Dropout Scope: The ablation study in Section 4.3.1 examines the effect of dropout scope (all weights, massive weights, all weights except for massive weights) and probability. The study concludes that applying dropout solely to massive weights can surpass the original performance, but strong dropout on massive weights deteriorates performance. It would be interesting to see a more detailed analysis of why this occurs. Is it simply a matter of finding the right balance in dropout probability, or are there more complex interactions at play?

Curriculum Methods and Initial Dropout Probability: Section 4.3.2 investigates the effect of curriculum methods (step-wise linear, epoch-wise linear, exponential) and initial dropout probability in MacDrop. The authors find that step-based curriculum methods generally outperform epoch-based methods, and that a rapid decline in dropout probability can diminish MacDrop's effectiveness. It might be useful to explore why step-based methods are superior. Is it because they provide more granular control over the dropout process? Additionally, the paper mentions that a rapid decline in dropout probability can diminish effectiveness, but what is the optimal rate of decline, and how might this be determined?
Generalizability Across Architectures: The paper evaluates MacDrop on several LLM architectures, but the sensitivity to massive weights appears to vary significantly across architectures (e.g., Gemma-2 is less sensitive). While the authors acknowledge this, a deeper discussion of why certain architectures are more or less sensitive to massive weights would be valuable. Understanding the architectural features that influence this sensitivity could lead to more generalizable methods.

Impact on Different Tasks: The paper shows that MacDrop improves performance on zero-shot downstream tasks and long context tasks, but has limited impact on generation tasks. The authors provide some examples and discussion in Appendix G, but a more detailed analysis of why MacDrop is more effective in some tasks than others would be beneficial. Is it related to the type of knowledge or reasoning required by the task?

Computational Overhead: The paper mentions that MacDrop introduces negligible overhead (e.g., approximately 0.35 seconds per step for Llama-3-8B using LoRA on 8xA100 GPUs). While this sounds promising, it would be helpful to have a more detailed breakdown of where this overhead comes from and how it scales with larger models or different hardware.

Supplementary Material: Appendices A-G are reviewed.

Relation To Broader Scientific Literature: In summary, the paper bridges the understanding of attention sinks and massive activations with an analysis of weight importance, and it leverages these insights to develop a more effective parameter-efficient fine-tuning method.

Essential References Not Discussed: Here are some related works that are essential to understanding the key contributions of the paper but are not currently cited or discussed:

Weight Pruning Techniques: The paper discusses the importance of identifying crucial weights in LLMs.
This is closely related to weight pruning techniques, which aim to remove less important weights from a neural network to reduce its size and computational cost. Some classical works in weight pruning may provide additional context.

Neural Network Interpretability: The paper touches on understanding the internal mechanisms of LLMs, which falls under the broader field of neural network interpretability. Research in this field aims to provide insights into how neural networks make decisions.

Regularization Techniques: The MacDrop method is a regularization technique that applies dropout to specific weights. Other regularization methods, such as L1 or L2 regularization, are commonly used to prevent overfitting in neural networks and could be relevant for comparison.

Other Strengths And Weaknesses:
Strengths:
- The paper provides a new perspective on understanding the internal mechanisms of LLMs by focusing on the role of massive weights in the intermediate state of feed-forward network modules. This approach is novel and contributes to the ongoing research on LLM interpretability.
- The paper is generally well-written and easy to follow. The authors provide clear explanations of their methodology, experimental setup, and results. The use of figures and tables effectively illustrates the key findings. The supplementary material provides additional details and analysis that further enhance the clarity and completeness of the paper.

Weaknesses:
- Limited Impact on Generation Tasks: The paper acknowledges that MacDrop has limited performance improvements in generation tasks. While the authors provide some analysis in Appendix G, a more in-depth investigation into why MacDrop is less effective for generation would be valuable.
- Generalizability Across Architectures: The paper demonstrates that the sensitivity to massive weights varies across different LLM architectures.
While the authors discuss this, a deeper exploration of the architectural features that influence this sensitivity could lead to more generalizable methods.

Other Comments Or Suggestions:
Typos and Grammatical Errors: There are instances of typos and grammatical errors throughout the text. For example, in the abstract, "LLMS" should be "LLMs". Attention to these details will improve the overall readability and professionalism of the paper.

Additional Analysis: The paper could benefit from additional analysis in certain areas. For example, while the authors discuss the impact of MacDrop on zero-shot downstream tasks and long context tasks, a more detailed analysis of why MacDrop has limited impact on generation tasks would be valuable.

Questions For Authors:
Details on Dropout Impact: In Section 4.3.1, the paper mentions that strong dropout on massive weights deteriorates performance. Can the authors elaborate on why this happens and what the optimal balance is for the dropout probability?

Generalizability of MacDrop: MacDrop's effectiveness varies across different LLM architectures. Could the authors discuss the architectural features that make some LLMs more sensitive to massive weights than others?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your in-depth review. We will update the manuscript to fix the typos and grammatical errors and to incorporate the suggested references. During the rebuttal process, we focus on addressing the questions in **Experimental Designs Or Analyses**, because this encompasses concerns raised in **Weaknesses**, **Comments Or Suggestions**, and **Questions**.

---

### Q1) Ablation Study On Dropout Scope (or, Details on Dropout Impact in **Questions**)

- We conducted this ablation study to identify the proper dropout probability. Additionally, we emphasize that among the three scopes, only applying dropout to massive weights leads to improved performance.
- We interpret the performance degradation under strong probabilities as follows:
  - As demonstrated in the original dropout paper [1], excessively high dropout probabilities cause performance degradation due to **underfitting** (Figure 9 in [1]).
  - Moreover, consider the extreme case where the dropout probability $p = 1.0$. This corresponds to training a model under the complete zeroing attack. Therefore, the gradients cannot capture meaningful update directions.

---

### Q2) Curriculum Methods and Initial Dropout Probability

- As you mentioned, we consider training to be more stable when using the step-based curriculum, because it offers finer granularity compared to the epoch-based one, as shown in **Figure 7**. This is particularly relevant in our experimental setting (only 3 epochs of fine-tuning). We believe that if training is conducted over sufficient epochs, the drawbacks of the epoch-based approach can be mitigated.
- Next, while optimizing the curriculum is necessary, it is a highly challenging task and may incur significant additional overhead (Line 6 of Algorithm 1).
- Therefore, we proposed a general curriculum as a rule of thumb. Please refer to the last sentence of the last paragraph in **Section 4**.
We believe that, at least in training environments similar to ours, the optimal curriculum likely lies between the Step and Exp. ($\alpha = 0.01$) strategies.

---

### Q3) Generalizability Across Architectures (or, Generalizability Across Architectures in **Weaknesses** and Generalizability of MacDrop in **Questions**)

- We argue that the lower sensitivity of Phi-3.5-medium and Gemma-2 can be attributed to the dropout (in **Eq. 2**) and LayerNorm layers (in **Eq. 3**), respectively.
- Regarding the Phi-3 family, we have observed that sensitivity varies with the model scale and the number of pre-training tokens. However, due to limitations in computational resources, we kindly ask for your understanding that we are currently unable to reproduce and verify this finding in detail.
- For the Gemma-2 family, however, a very recent study [2] has provided an analysis indicating that the architectural design shown in **Eq. 3** (referred to as Peri-LN) preserves **gradient stability** by normalizing both the input and output of each sub-layer, and prevents the emergence of massive activations.
- We consider this paper to offer deeper insights into the functional role of LayerNorm, which we will cover in the updated manuscript.

---

### Q4) Impact on Different Tasks (or, Limited Impact on Generation Tasks in **Weaknesses** and Additional Analysis in **Comments Or Suggestions**)

- Although we have considered this matter extensively, we have not been able to identify the exact cause.
- As you have mentioned, we conducted an analysis of all tasks, including zero-shot, long context, and generation, at the sub-task level; however, MacDrop's performance gains and losses did not follow any consistent pattern.
- Nevertheless, we believe that the lack of improvement of MacDrop on generation tasks may be due to their inherent complexity. We would greatly appreciate any suggestions or experimental designs that could help better isolate this phenomenon.
---

### Q5) Computational Overhead

- Please refer to the last sentence of the last paragraph in **Section 3**. The computational overhead comes only from the masking and rollback processes in layer $l$, corresponding to Lines 6-9 and Lines 11-12 in Algorithm 1, respectively.
- For Llama-3-70B, approximately 0.68 seconds per step is required. This overhead is related to the **dimension of the intermediate state** (e.g., 14,336 for 8B and 28,672 for 70B), because the number of massive indices is set to 5 for both the 8B and 70B models.
- Although this is nearly twice as much as for 8B, the time required for loss computation (Line 10) has increased by approximately eight times, making the additional overhead even more negligible.
- Unfortunately, we do not have access to different hardware. However, we expect the overhead to be negligible compared to the loss calculation on other hardware as well.

---

[1] Dropout: A Simple Way to Prevent Neural Networks from Overfitting

[2] Peri-LN: Revisiting Layer Normalization in the Transformer Architecture
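The masking-and-rollback loop and the dropout curriculum discussed in Q1, Q2, and Q5 can be sketched as follows. This is a minimal illustration with hypothetical names and constants (`p0`, `alpha`, and the column-masking convention are assumptions, not the authors' exact implementation); the massive-weight indices are taken as given.

```python
import numpy as np

def curriculum_prob(step, total_steps, p0=0.9, alpha=0.01, mode="step"):
    """Dropout-probability schedule for massive weights.

    Hypothetical reconstruction of the step-based and Exp.(alpha)
    curricula; p0 and alpha are illustrative values only.
    """
    if mode == "step":
        return p0 * (1.0 - step / total_steps)  # linear decay to 0
    return p0 * np.exp(-alpha * step)           # exponential decay

def apply_macdrop(W, massive_cols, p, rng):
    """Zero each massive-weight column independently with probability p.

    Returns a masked copy; the original W is kept intact so it can be
    rolled back after the loss computation (the rollback step in Q5).
    """
    W_masked = W.copy()
    cols = np.asarray(massive_cols)
    W_masked[:, cols[rng.random(cols.size) < p]] = 0.0
    return W_masked
```

With `p = 1.0` this reduces to training under the complete zeroing attack mentioned in Q1, and with `p = 0.0` fine-tuning is unchanged; the curriculum interpolates between these extremes over training.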
FlatQuant: Flatness Matters for LLM Quantization
Accept (poster)
Summary: This work proposes an approach of learning scaling and rotation transformations to mitigate the problem of outliers and varying dynamic range of weights and activations in LLMs. The learnable rotation transformation is parameterized as a Kronecker product of 2 matrices. Parameters of the transformation are trained to optimize the block reconstruction objective. The proposed method is validated on several LLM models for W4A4 quantization.

### Update after rebuttal
I believe that the contribution and experimental results are sufficient for acceptance of this work.

Claims And Evidence: The authors claim to achieve a new state-of-the-art for W4A4, and the evidence provided does support this statement. The performance drop for W4A4 quantization of Llama-2 and Llama-3 models relative to the baseline is much smaller than the one produced by baselines (even quite strong ones, such as QuaRot and SpinQuant). In addition, FlatQuant comes with a certain inference speed-up, and the paper provides an analysis of the overhead induced by rotation transformations, making the provided results plausible.

Methods And Evaluation Criteria: The evaluation protocol adopted is standard for research on LLM compression.

Theoretical Claims: This work provides primarily a practical contribution. The theoretical motivation of the proposed approach is sound.

Experimental Designs Or Analyses: Overall, the experimental protocol and choice of baselines are sensible. However, for an exhaustive comparison I would suggest adding a few more state-of-the-art methods [1, 2]. The former provides another strategy for learning rotations and the latter accounts for outlier tokens in the KV cache. An ablation on the impact of each component in the proposed method is provided, as well as an analysis of the computational overhead caused by rotations.

---
References

[1] Lin H. et al. DuQuant: Distributing outliers via dual transformation makes stronger quantized LLMs // Advances in Neural Information Processing Systems. – 2025. – Vol. 37. – pp. 87766-87800.
[2] Chen, Mengzhao, et al. "PrefixQuant: Static quantization beats dynamic through prefixed outliers in LLMs." arXiv preprint arXiv:2410.05265 (2024).

Supplementary Material: I have read the submission appendix.

Relation To Broader Scientific Literature: This work can be regarded as a new approach for learning scale and rotation transformations to simplify the problem of quantization.

Essential References Not Discussed: Two recent papers with strong performance are not discussed. I would suggest adding them to the related work section.

---
References

[1] Lin H. et al. DuQuant: Distributing outliers via dual transformation makes stronger quantized LLMs // Advances in Neural Information Processing Systems. – 2025. – Vol. 37. – pp. 87766-87800.

[2] Chen, Mengzhao, et al. "PrefixQuant: Static quantization beats dynamic through prefixed outliers in LLMs." arXiv preprint arXiv:2410.05265 (2024).

Other Strengths And Weaknesses: -

Other Comments Or Suggestions: -

Questions For Authors: What are the speed-ups on a newer generation of GPUs (say, an RTX 4090) or a high-end GPU on prefill and decode? Would you expect them to be lower or higher? In addition, it could be insightful to measure speed-ups with other sequence lengths.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:

**Q1**. *For an exhaustive comparison I would suggest adding a few more state-of-the-art methods [1, 2]. The former provides another strategy for learning rotations and the latter accounts for outlier tokens in the KV cache.*

*[1] Lin H. et al. DuQuant: Distributing outliers via dual transformation makes stronger quantized LLMs // Advances in Neural Information Processing Systems. – 2025. – Vol. 37. – pp. 87766-87800.*

*[2] Chen, Mengzhao, et al. "PrefixQuant: Static quantization beats dynamic through prefixed outliers in LLMs." arXiv preprint arXiv:2410.05265 (2024).*

**A1**. Thanks for the recommendation regarding the recent works. We will add them in the revised version. PrefixQuant mainly experiments with static quantization and is orthogonal to our method; we leave the integration of PrefixQuant with FlatQuant as future work. Below we conduct a detailed comparison between FlatQuant and DuQuant.

* **Comparison with DuQuant in Accuracy.** FlatQuant significantly outperforms DuQuant across multiple benchmarks.

|**LLaMA3-8B**|**WikiText-2**|**C4**|**ARC-C**|**ARC-E**|**HellaSwag**|**LAMBADA**|**PIQA**|**Winogrande**|**Avg.**|
|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
|FP16|6.14|9.45|53.50|77.57|79.12|75.51|80.74|72.93|73.23|
|DuQuant|8.13|12.91|44.80|71.30|73.00|68.04|75.73|69.46|67.05|
|FlatQuant|6.98|11.13|50.00|75.80|76.80|72.91|79.16|72.69|71.23|

* **Comparison with DuQuant in Latency.** DuQuant has the same number of online transformations (each consisting of two matrix multiplications and one channel permutation) as FlatQuant (each consisting of a Kronecker-decomposed matrix multiplication). It can be seen that FlatQuant achieves comparable speedup without kernel fusion and even faster speedup with kernel fusion.

|**Batch Size**|**1**|**4**|**16**|
|:----|:----|:----|:----|
|DuQuant|1.95x|2.03x|2.08x|
|FlatQuant w/o Kernel Fusion|1.94x|2.02x|2.10x|
|FlatQuant w/ Kernel Fusion|2.12x|2.21x|2.27x|

---

**Q2**.
*What are the speed-ups on a newer generation of GPUs (say, an RTX 4090) or a high-end GPU on prefill and decode? Would you expect them to be lower or higher?*

**A2**. Thanks for the valuable suggestion. Currently, we haven't implemented INT4 kernels on other GPUs, but we can approximately estimate the speedup. In the following, we estimate the speedup for a single linear layer with hidden dimension $d$ on an A100 GPU. The speedup for attention layers can be estimated likewise.

* **A100 Specs**. For A100, we have $\text{Perf} _{\text{BF16}}=312\text{ TFLOPS},\text{Perf} _{\text{INT4}}=1248\text{ TFLOPS},\text{Bandwidth}=1555\text{ GB/s}$

* **Prefill Speedup**. Assume $s=2048$ and $d=4096$, where $s$ is the number of tokens in the prefill. Since in the prefill stage the linear layer is mostly compute-bound, while the online quantization and transformations are memory-bound, we can estimate the prefill speedup accordingly:

$ \text{FLOPs} _{\text{Linear}}=2sd^2, \text{Mem} _{\text{Quant+Kron}}=2.5sd+4d $

$ \text{Prefill Speedup}\approx \frac{\text{FLOPs} _{\text{Linear}}/\text{Perf} _{\text{BF16}}}{\text{FLOPs} _{\text{Linear}}/\text{Perf} _{\text{INT4}}+\text{Mem} _{\text{Quant+Kron}}/\text{Bandwidth}}\approx 3.21 $

* **Decoding Speedup**. Similarly, the decoding speedup can be estimated with

$ \text{Mem} _{\text{BF16}}=2d^2+4d,\text{Mem} _{\text{INT4}}=d^2/2+2.5d,\text{Mem} _{\text{Quant+Kron}}=6.5d $

$ \text{Decoding Speedup}\approx \frac{\text{Mem}_ {\text{BF16}}}{\text{Mem}_ {\text{INT4}}+\text{Mem}_ {\text{Quant+Kron}}} \approx 3.98 $

---

**Q3**. *In addition, it could be insightful to measure speed-ups with other sequence lengths.*

**A3**. Thanks for the helpful advice. In the following, we provide more speedup results with other sequence lengths as a complement to Figure 4:

* **Prefill** speedup with batch size 1.
|**Prefill Length**|**INT4**|**QuaRot**|**FlatQuant**|
|:----|:----|:----|:----|
|2048|2.16x|1.97x|2.12x|
|4096|2.06x|1.90x|2.04x|
|8192|1.94x|1.79x|1.92x|
|16384|1.83x|1.72x|1.80x|

* **Decoding** speedup with batch size 64.

|**KV Cache Length**|**INT4**|**QuaRot**|**FlatQuant**|
|:----|:----|:----|:----|
|256|1.38x|1.09x|1.24x|
|512|1.62x|1.38x|1.56x|
|1024|1.70x|1.61x|1.63x|
|2048|1.78x|1.72x|1.76x|

---

Rebuttal Comment 1.1: Comment: Thank you for your response. I believe FlatQuant demonstrates strong performance and significant speed-ups, making it highly promising for practical integration. Therefore, I support the acceptance of this work.

---

Reply to Comment 1.1.1: Comment: We greatly appreciate your comments and valuable advice.
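The back-of-envelope estimates in A2 above can be reproduced numerically. This is a sketch using only the spec numbers quoted in the rebuttal (memory traffic counted in bytes, compute in FLOPs), not a measurement:

```python
# Check of the A100 speedup estimates from A2.
PERF_BF16 = 312e12    # BF16 tensor-core throughput, FLOPS
PERF_INT4 = 1248e12   # INT4 tensor-core throughput, FLOPS
BANDWIDTH = 1555e9    # HBM bandwidth, bytes/s

s, d = 2048, 4096     # prefill tokens, hidden dimension

# Prefill: the linear layer is compute-bound, while online quantization
# plus the Kronecker transform are memory-bound.
flops_linear = 2 * s * d**2
mem_quant_kron = 2.5 * s * d + 4 * d
prefill_speedup = (flops_linear / PERF_BF16) / (
    flops_linear / PERF_INT4 + mem_quant_kron / BANDWIDTH)

# Decoding: everything is memory-bound, so compare bytes moved.
mem_bf16 = 2 * d**2 + 4 * d
mem_int4 = d**2 / 2 + 2.5 * d
decoding_speedup = mem_bf16 / (mem_int4 + 6.5 * d)

print(round(prefill_speedup, 2), round(decoding_speedup, 2))  # 3.21 3.98
```

Swapping in another GPU's throughput and bandwidth figures gives the corresponding estimate for that hardware, which is how the question about newer GPUs can be answered approximately.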
Summary: This paper presents FLATQUANT, a post-training quantization approach for large language models (LLMs) that focuses on improving the "flatness" of weight and activation distributions to enhance quantization performance. The authors introduce a novel method employing fast and learnable affine transformations tailored to each linear layer in a Transformer model. The approach uses Kronecker decomposition to reduce computational overhead and incorporates kernel fusion for efficient implementation. The paper demonstrates state-of-the-art quantization accuracy with minimal performance degradation, even in challenging W4A4 (4-bit weights and activations) scenarios for large models like LLaMA-3-70B, while providing significant speedups compared to FP16 inference.

Claims And Evidence: The authors make several claims that are generally well-supported by experimental evidence:

Page 1, lines 51-52: The claim that FLATQUANT achieves less than 1% accuracy drop for W4A4 quantization on LLaMA-3-70B is supported by the comprehensive evaluation in Table 2, which shows a minimal drop from 79.95% to 79.01% on average across multiple benchmarks.

Page 2, paragraph 2: The claim that flatter distributions are easier to quantize is well-substantiated through multiple visualizations (Figure 1) and analyses of quantization error landscapes (Figure 2).

Page 3, paragraph 3: The authors claim that Kronecker decomposition reduces memory and computational demands, which is validated by the quantitative analysis showing 2.61% computational overhead and 0.11MB memory overhead (page 4).

Methods And Evaluation Criteria: The methods are generally appropriate and well-designed:

The selection of benchmark tasks (WikiText2, C4, and six zero-shot QA tasks) provides good coverage for evaluating language modeling capabilities and task performance.
The comparisons with state-of-the-art baselines like SmoothQuant, OmniQuant, QuaRot, and SpinQuant offer appropriate context for assessing FLATQUANT's contributions.

The ablation studies effectively isolate the contributions of different components (learnable transformations, per-channel scaling, and learnable clipping thresholds).

However, there are methodological limitations:

Page 4, paragraph 3: The calibration procedure using only 128 sentences from WikiText-2 seems limited, though the authors do later show in Table 16 (Appendix) that performance remains consistent across different calibration datasets.

Page 4, lines 230-234: The selection of n1 and n2 for Kronecker decomposition is presented as a simple optimization, but the exploration of these parameters is limited to one model size.

Theoretical Claims: The paper's theoretical claims are generally sound and verified:

The idea that flatness matters for quantization is well-established in prior literature, as the authors acknowledge. Their contribution is showing how to better optimize for this objective.

The Kronecker decomposition approach (page 3) is mathematically correct and the efficiency claims are validated by experiments.

I did not identify any issues with the mathematical derivations provided, though the theoretical justifications for why the training process leads to flatter distributions could be more formally developed.

Experimental Designs Or Analyses: The experimental analyses are thorough and well-executed:

Page 5, Tables 1-2: The comprehensive evaluation across multiple models (7B to 70B parameters) and datasets provides strong evidence for FLATQUANT's effectiveness.

Page 7, Figures 4-6: The detailed analysis of inference latency demonstrates the practical benefits of FLATQUANT.

Page 8, Figure 7: The correlation between training progress and increasing flatness offers a nice validation of the method's working principle.
Some experimental limitations exist:

Page 7, Figure 5: While the impact of different decomposition sizes is analyzed, this analysis is limited to a single model size (LLaMA-2-7B).

It would be helpful to see more detailed performance profiling across different hardware platforms beyond the RTX3090 GPU mentioned on page 5, line 267.

Supplementary Material: I reviewed the supplementary material, particularly Appendices B, C, and D. The implementation details in Appendix B provide valuable insights into the practical aspects of the method. The additional experiments in Appendix C, especially the results on other model architectures (LLaMA-3.1-8B-Instruct, Qwen-2.5-Instruct) and MT-Bench evaluations, strengthen the paper's claims about FLATQUANT's generalizability. The visualizations in Appendix D effectively illustrate the flatness concept across different models.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Strengths:

Page 3, section 3.1: The Kronecker decomposition approach is a clever solution to the computational overhead problem that plagues prior affine transformation methods.

Page 3, lines 173-179: The training objective for each Transformer block individually simplifies the optimization problem without apparent degradation in performance.

Page 4, section 3.3: The kernel fusion approach is well-designed and practically important for achieving the reported speedups.

Weaknesses:

Page 1, paragraph 1: While the paper claims to establish "new state-of-the-art" results, it would be strengthened by a clearer quantitative comparison of what constitutes the previous SOTA.

Section 2.2: The notion of "flatness" is somewhat intuitive but would benefit from a more rigorous definition or metric.
Page 6, lines 364-368: The claim that "FLATQUANT with RTN is sufficient to be competitive" is intriguing but lacks thorough theoretical explanation.

Other Comments Or Suggestions:

The paper would benefit from a more detailed explanation of why learnable clipping after transformation is more effective than before transformation (as shown in Table 17).

The visualization in Figure 1 effectively illustrates the concept of flatness, but could be improved by adding a metric that quantifies the degree of flatness.

Some minor typos and grammatical issues exist (e.g., "is powerless to such errors" on page 3).

Questions For Authors:

How would FLATQUANT perform when incorporated into more advanced quantization schemes like mixed-precision quantization, where different layers use different bit-widths? This could affect my evaluation as it might demonstrate broader applicability.

The paper mentions that FLATQUANT is robust to initialization (page 4, lines 261-262). Have you analyzed how different initialization strategies for the affine transformation matrices affect convergence speed or final performance?

The paper shows impressive results on LLaMA-3-70B with W4A4 quantization. How would FLATQUANT perform in even more extreme scenarios, such as W3A3 quantization? Some results are shown in Appendix Table 14, but a more detailed analysis of limitations would be valuable.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
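The review above asks for a metric that quantifies the degree of flatness. As an illustration of what such a metric can look like, here is a minimal sketch; the max-to-mean magnitude ratio is a hypothetical stand-in (not the paper's Appendix D.1 definition), and it also shows why a random orthogonal transformation, as used by rotation-based methods like QuaRot, flattens an outlier-heavy distribution:

```python
import numpy as np

def flatness_proxy(v):
    """One possible flatness score: max|v| / mean|v| (lower = flatter).
    A hypothetical stand-in, not the metric from the paper's Appendix D.1."""
    a = np.abs(v)
    return a.max() / a.mean()

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
x[7] = 100.0                 # inject one massive outlier channel

# A random orthogonal transform spreads the outlier's energy across
# all channels, flattening the distribution.
Q, _ = np.linalg.qr(rng.standard_normal((256, 256)))
assert flatness_proxy(Q @ x) < flatness_proxy(x)
```

A learnable transformation, as in FLATQUANT, can in principle drive such a score even lower than a fixed random rotation, which is the intuition the review asks to see quantified.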
Rebuttal 1: Rebuttal:

**Q1**. ...*a clearer quantitative comparison of what constitutes the previous SOTA.*

**A1**. We consider QuaRot and SpinQuant as the previous SOTA in Tables 1-2, and compare with the latest DuQuant in A1 to Reviewer x5Xi.

---

**Q2**. *The notion of "flatness" is somewhat intuitive but would benefit from a more rigorous definition or metric.*

**A2**. Thanks for the suggestion. We have provided a quantitative definition of flatness in Appendix D.1, Lines 1106-1113, and visualized the associated flatness score in Figure 7 and Figure 11.2. We will move this part to the main text.

---

**Q3**. *The claim that "FLATQUANT with RTN is sufficient to be competitive" is intriguing but lacks thorough theoretical explanation.*

**A3**. Thanks for pointing this out. We suspect the affine transformations alone make the distributions friendly to RTN, where the flatness is quantitatively measured in Appendix D.1 and Figure 7. We leave theoretical explanations to future work.

---

**Q4**. *The paper would benefit from a more detailed explanation of why learnable clipping after transformation is more effective than before transformation (as shown in Table 17).*

**A4**. Thanks for the question. This can be attributed to the preservation of important outlier channels.

* **"LCT before Transformation"** consistently performs worse, as it directly clips important outliers in LLMs.
* **"LCT after Transformation"** performs significantly better because the affine transformations redistribute outliers across channels. Even after clipping, the inverse transformation can effectively recover the original outliers.

We will further clarify this issue in our revision.

---

**Q5**. *Some minor typos and grammatical issues exist.*

**A5**. Thanks for the attentive feedback. We will fix these issues in the revised version.

---

**Q6**.
*How would FLATQUANT perform when incorporated into more advanced quantization schemes like mixed-precision quantization, where different layers use different bit-widths? This could affect my evaluation as it might demonstrate broader applicability.*

**A6**. Thanks for raising this interesting question. Below we explore some orthogonal mixed-precision strategies to enhance FlatQuant.

* **Weight Importance Score Analysis**. We follow SqueezeLLM [1] to use Fisher information for the estimation of weight importance, and simply sum over the scores to estimate the overall importance of different parts in LLMs, including different linear layer types and Transformer layers.

|**Self-attention**|$\mathbf W_q$|$\mathbf W_k$|$\mathbf W_v$|$\mathbf W_o$|
|:----|:----|:----|:----|:----|
|**Score**|44.86|11.01|1139.08|365.17|

|**Feed-forward network**|$\mathbf W_g$|$\mathbf W_u$|$\mathbf W_d$|
|:----|:----|:----|:----|
|**Score**|177.20|401.03|2468.97|

|**Top 5 Transformer Layer Index**|**1**|**0**|**2**|**3**|**31**|
|:----|:----|:----|:----|:----|:----|
|**Score**|63684.88|18075.19|6740.75|5780.25|5297.59|

* **Mixed-precision Results**. According to the overall importance, we use W8A8 for the top 5 Transformer layers as well as all the down projection layers, which proves to be an effective way to improve FlatQuant.

|**LLaMA-3-8B**|**WikiText-2**|**C4**|**ARC-C**|**ARC-E**|**HellaSwag**|**LAMBADA**|**PIQA**|**Winogrande**|**Avg.**|
|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
|FP16|6.14|9.45|53.50|77.57|79.12|75.51|80.74|72.93|73.23|
|FlatQuant|6.98|11.13|50.00|75.80|76.80|72.91|79.16|72.69|71.23|
|+down_proj_8bits|6.73|10.62|50.00|77.78|77.49|73.96|79.54|70.56|71.55|
|+Top5_8bits|6.80|10.82|49.23|76.56|76.54|73.51|79.71|73.95|71.58|
|+Top5&down_proj_8bits|6.61|10.43|51.11|77.74|77.47|74.69|79.92|72.14|72.18|

[1] Kim, Sehoon, et al. "SqueezeLLM: Dense-and-sparse quantization." arXiv preprint arXiv:2306.07629 (2023).

---

**Q7**.
*Have you analyzed how different initialization strategies for the affine transformation matrices affect convergence speed or final performance?*

**A7.** Thanks for the thoughtful feedback. We initialize the transformations with random orthogonal matrices by default and find FlatQuant is generally robust to initialization strategies.

|**Init**|**WikiText-2**|**C4**|**ARC-C**|**ARC-E**|**HellaSwag**|**LAMBADA**|**PIQA**|**Winogrande**|**Avg.**|
|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
|Identity Matrix|6.98|11.15|49.49|77.10|76.82|74.21|80.09|70.32|71.34|
|Random Orthogonal Matrix|6.98|11.13|50.00|75.80|76.80|72.91|79.16|72.69|71.23|

---

**Q8**. *How would FLATQUANT perform in even more extreme scenarios, such as W3A3 quantization? Some results are shown in Appendix Table 14, but a more detailed analysis of limitations would be valuable.*

**A8**. Thanks for the advice. As shown in Table 14, although FlatQuant achieves notable improvements over QuaRot, it still suffers significant accuracy degradation. W3A3 needs both improved algorithms and hardware support, which still remains an open question.

---

Rebuttal Comment 1.1: Comment: Thanks for the response. I like this work for the research area it explores. I will raise my original rating.

---

Reply to Comment 1.1.1: Comment: Thanks for your thorough review and constructive feedback. We sincerely appreciate the time and effort you have dedicated to evaluating our work.
Summary: This paper proposes FlatQuant for post-training quantization with learnable linear transformations to remove outliers of weights and activations. The authors propose the use of Kronecker decomposition to reduce the inference overhead. Extensive experiments demonstrate the effectiveness of the proposed approach on W4A4 quantization.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: The authors show the effectiveness of the algorithm through extensive experiments.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Solid experimental results on LLM quantization.

Supplementary Material: I have reviewed the appendix.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: Regarding techniques for suppressing outliers to enhance quantized LLMs, in addition to per-channel scaling and linear transformations, the authors overlooked the use of infinity-norm regularization for reducing the weight range, e.g., [1, 2].

[1] Zhang et al., MagR: Weight Magnitude Reduction for Enhancing Post-Training Quantization.
[2] Kundu et al., R2 Loss: Range Restriction Loss for Model Compression and Quantization.

Other Strengths And Weaknesses:

Strengths:
1. Proposed Kronecker decomposition to mitigate the computational and memory costs of the additional linear transformation.
2. Demonstrated the effectiveness of the proposed method through extensive experiments.

Weaknesses:
1. The contributions are somewhat incremental, extending existing work on block-wise quantization and linear transformations for smoothing weights and activation values.
2. The results are primarily empirical, lacking sufficient theoretical justification for the effectiveness of the Kronecker decomposition-based linear transformation. Specifically, it remains unclear why this approach outperforms rotation-based transformations. A deeper theoretical analysis would strengthen the claims.
3. The reported results in Table 1 raise concerns about plausibility.
It is counterintuitive that FlatQuant outperforms AffineQuant significantly, given that AffineQuant employs a more general affine transformation, while FlatQuant imposes constraints through low-complexity Kronecker decomposition, trading expressivity for efficiency. Justification is needed to explain why the reduced expressivity yields better performance.

Other Comments Or Suggestions: N/A

Questions For Authors:
1. What is the role of the Kronecker decomposition-based transformation in mitigating outliers? Can we achieve similar effects by imposing alternative low-complexity structures, such as low-rank decomposition?
2. I wonder how FlatQuant performs on 2-bit weight quantization.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

**Q1**. *Regarding techniques for suppressing outliers..., the authors overlooked the use of infinity-norm regularization for reducing the weight range.*

**A1**. Thanks for the recommendation regarding related works. We will add the discussions in the revised manuscript.

---

**Q2**. *The results are primarily empirical, lacking sufficient theoretical justification for the effectiveness of the Kronecker decomposition-based linear transformation. Specifically, it remains unclear why this approach outperforms rotation-based transformations. A deeper theoretical analysis would strengthen the claims.*

**A2**. Thanks for the advice. Compared with rotational transformations, FlatQuant relaxes the optimization space from the Stiefel manifold to the general linear space, which has more potential to suppress outliers. We leave more detailed theoretical analyses to future work.

---

**Q3**. *It is counterintuitive that FlatQuant outperforms AffineQuant significantly...*

**A3**. In the following, we discuss the superiority of FlatQuant over AffineQuant in more detail as a complement to Lines 631-637:

* **Improved Expressivity**. As discussed in Figure 5, we find that the Kronecker-decomposed transformations deliver accuracy comparable to the full-size transformation, which demonstrates their expressivity. In contrast, AffineQuant optimizes a strictly diagonally dominant transformation to ensure invertibility, which may restrict its expressivity and make the transformation more like a per-channel scaling (see Figure 7 in the AffineQuant paper).
* **Applicability to All Linear Layers**. FlatQuant applies Kronecker-decomposed linear transformations to all linear layers with minimal overhead. In contrast, AffineQuant directly learns a full-size transformation, and can only apply it to the output projection linear layer for weight-activation quantization, so that the transformation can be merged into the preceding linear layer to avoid the formidable overhead.
The other linear layers can only use per-channel scaling.

---

**Q4**. *The contributions are somewhat incremental, extending existing work on block-wise quantization and linear transformations for smoothing weights and activation values.*

**A4.** We elaborate on the key differences between FlatQuant and previous works below:

* **Linear Transformations vs. Orthogonal Transformations.** While previous works (e.g., QuaRot) explore orthogonal transformations to smooth outliers, we find linear transformations can achieve better flatness, delivering superior quantization accuracy as shown in Tables 1-2.
* **Kronecker-Decomposed Matrices with Kernel Fusion.** While linear transformations suffer from high computation and memory consumption, we mitigate this with Kronecker decomposition along with kernel fusion, ensuring practical speedup with minimal overhead.
* **Learnable Clipping After Transformations.** As detailed in Appendix C.4, a key difference of learnable clipping in FlatQuant is that it is applied after the pre-quantization transformations. This can help avoid damaging critical outliers during activation clipping.

---

**Q5**. *What is the role of the Kronecker decomposition-based transformation in mitigating outliers? Can we achieve similar effects by imposing alternative low-complexity structures, such as low-rank decomposition?*

**A5**. The learnable transformation for removing outliers needs to be 1) **invertible**, so that the transformation preserves computational invariance (similar to QuaRot); and 2) **lightweight**, so that little inference overhead is introduced. Kronecker decomposition is a valid approach that satisfies both conditions, where invertibility is ensured (Lines 661-663). However, low-rank decomposition does not satisfy condition 1), invertibility. We leave the exploration of other alternatives to future work.

---

**Q6**. *I wonder how FlatQuant performs on 2-bit weight quantization.*

**A6**. Thanks for the question.
In the following, we experiment with 2-bit asymmetric per-channel weight-only quantization and compare FlatQuant against two strong uniform weight-only quantization methods, GPTQ and QuIP [1]. For GPTQ, we use activation reordering and grid-search the best weight clipping thresholds.

|**LLaMA-3-8B**|**WikiText-2**|**C4**|**ARC-C**|**ARC-E**|**HellaSwag**|**LAMBADA**|**PIQA**|**Winogrande**|**Avg.**|
|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
|FP16|6.14|9.45|53.50|77.57|79.12|75.51|80.74|72.93|73.23|
|GPTQ|**26.76**|247.27|22.70|31.86|35.37|5.20|54.08|51.14|33.39|
|QuIP|59.96|139.70|23.98|28.24|30.82|1.59|52.18|50.28|31.18|
|FlatQuant|31.27|**132.02**|**24.49**|**41.04**|**39.42**|**14.28**|**59.03**|**53.75**|**38.67**|

[1] Chee, Jerry, et al. "QuIP: 2-bit quantization of large language models with guarantees." Advances in Neural Information Processing Systems 36 (2023): 4396-4429.
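The two requirements named in A5, invertibility and light weight, can be illustrated with a small numerical sketch. Toy dimensions and random factors are assumptions for illustration (not the paper's implementation); the sketch relies on the standard identities (A ⊗ B) vec(X) = vec(B X Aᵀ) and (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 4                      # toy sizes; n = n1 * n2 = 12
A = rng.standard_normal((n1, n1))  # the decomposed factors; in FlatQuant
B = rng.standard_normal((n2, n2))  # these would be learnable

# Full transformation stores n^2 = 144 entries; the Kronecker
# factors store only n1^2 + n2^2 = 25.
K = np.kron(A, B)

# Apply A (x) B without materializing K, via the vec identity
# (A (x) B) vec(X) = vec(B X A^T) with column-major vec.
x = rng.standard_normal(n1 * n2)
X = x.reshape((n2, n1), order="F")
y = (B @ X @ A.T).reshape(-1, order="F")
assert np.allclose(K @ x, y)

# Invertibility of the factors implies invertibility of the product,
# so the transform remains computation-invariant and can be undone:
# (A (x) B)^-1 = A^-1 (x) B^-1.
assert np.allclose(np.linalg.inv(K),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))
```

A low-rank factorization, by contrast, is singular by construction, which is why A5 rules it out as a drop-in alternative.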
Summary: This paper presents FLATQUANT, a novel post-training quantization framework that enhances the flatness of weight and activation distributions in large language models (LLMs) through learnable affine transformations. FLATQUANT introduces layer-wise affine transformations trained on calibration data to reshape distributions, coupled with Kronecker decomposition to reduce parameter overhead. The method outperforms existing techniques (e.g., QuaRot, SpinQuant) across perplexity, QA accuracy, and inference speed benchmarks, particularly excelling in W4A4 settings. Results on LLaMA-3 models (7B–70B) demonstrate robustness, with kernel fusion further enhancing practical deployment viability. Claims And Evidence: The authors claim that flatness of weights and activations is crucial for quantization effectiveness, and their method produces flatter distributions than previous approaches. This claim is well-supported by the visualizations in Figure 1, which clearly show the distribution envelopes after different transformations. The claim that FLATQUANT outperforms previous SOTA methods is convincingly supported by extensive experiments across multiple benchmarks and model scales. For instance, Table 1 and Table 2 show consistent improvements over SpinQuant and QuaRot in perplexity metrics and QA tasks respectively. Methods And Evaluation Criteria: The proposed methods align well with the problem at hand. The use of Kronecker decomposition to reduce computational overhead is particularly clever, as it maintains the benefits of affine transformations while minimizing inference costs. The evaluation criteria are comprehensive, including perplexity on language modeling tasks (WikiText-2, C4) and accuracy on zero-shot QA tasks. The authors also properly evaluate both accuracy and speed metrics, providing a holistic view of the method's practical utility. 
The comparison against state-of-the-art baselines using the same evaluation framework (lm-eval-harness) ensures fair comparisons. Theoretical Claims: The paper makes minimal purely theoretical claims, focusing instead on empirical findings. The explanation of how Kronecker decomposition reduces computational complexity (lines 205-213) appears sound, showing that memory requirements are reduced by a factor of approximately n/2 when the dimensions are balanced (n1 ≈ n2 ≈ √n). Experimental Designs Or Analyses: The experimental design is thorough and well-executed. The authors test their method across multiple model sizes (7B to 70B parameters) and across different quantization settings (weight-only, weight-activation, KV cache). The additional experiments in the appendix further bolster confidence in the results. The kernel fusion experiments (Figure 4 and Table 6) are particularly well-designed, showing how the theoretical speedups translate to practical benefits across different batch sizes and hidden dimensions. Supplementary Material: I reviewed the supplementary material, particularly Appendices B, C, and D. The implementation details in Appendix B provide valuable insights into the practical aspects of the method. The additional experiments in Appendix C, especially the results on other model architectures (LLaMA-3.1-8B-Instruct, Qwen-2.5-Instruct) and MT-Bench evaluations, strengthen the paper's claims about FLATQUANT's generalizability. The visualizations in Appendix D effectively illustrate the flatness concept across different models. Relation To Broader Scientific Literature: The authors provide a comprehensive discussion of related work, clearly positioning FLATQUANT relative to existing methods like SmoothQuant, OmniQuant, AffineQuant, QuaRot, and SpinQuant.
This paper builds upon the insight from previous work that outliers in LLMs pose challenges for quantization, but extends beyond prior approaches by learning model-specific transformations rather than relying on fixed transformations like Hadamard. Essential References Not Discussed: The authors discuss in depth all relevant work related to the key contributions of this paper. Other Strengths And Weaknesses: **Strengths:** 1. The kernel fusion implementation (Section 3.3, Appendix B.3) is particularly impressive, showing how theoretical ideas can be translated into practical performance gains. 2. Figure 2 provides an insightful visualization of quantization error propagation across layers and tokens, helping explain why flatter distributions matter. 3. The method's effectiveness across different model architectures (shown in the supplementary material) speaks to its generalizability. **Weaknesses:** 1. The authors do not provide a comparative assessment of GPU memory consumption during inference relative to baseline methods. This omission is particularly significant for edge deployment scenarios where memory constraints are often the primary limiting factor. 2. The paper lacks exploration of potential biases introduced by Kronecker decomposition and how these biases might affect quantization error propagation through the network. Understanding this relationship would strengthen the theoretical foundation of the approach. 3. While FLATQUANT incorporates LCT and shows improvements over SpinQuant, the paper doesn't adequately isolate whether this specific component is the primary driver of performance gains. A more targeted ablation would clarify the relative importance of affine transformations versus clipping thresholds. 4. The authors treat each layer's quantization independently, but don't thoroughly examine how quantization decisions in earlier layers affect the optimal transformations for later layers, potentially overlooking cascading effects. 
Other Comments Or Suggestions: 1. The paper would benefit from a clearer explanation of how the online transformations are initialized, as initialization strategies can significantly impact convergence and final performance. 2. The hyperlinks throughout the paper appear to be non-functional, which hinders navigation between sections and references. Authors should consider revising their citation formatting to ensure proper link functionality. Questions For Authors: Please refer to Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1**. *The authors do not provide a comparative assessment of GPU memory consumption...* **A1**. Thanks for the helpful advice. Below, we provide more results on memory consumption. The experimental results are consistent with our theoretical analysis in Appendix B.2 (Lines 736-739), demonstrating the minimal memory overhead brought by the online transformations in FlatQuant. * **Peak memory usage for decoding a single token** on one Transformer layer of the LLaMA-2-7B model with KV caches of different lengths and batch size 1.

|**Sequence Length**|**FP16 (GB)**|**INT4 (GB)**|**FlatQuant (GB)**|**Saving Factor**|
|:----|:----|:----|:----|:----|
|256|0.393|0.110|0.110|3.58x|
|512|0.399|0.112|0.112|3.56x|
|1024|0.411|0.118|0.118|3.48x|
|2048|0.434|0.130|0.130|3.35x|

--- **Q2**. *The paper lacks exploration of potential biases introduced by Kronecker decomposition...* **A2**. Thanks for the thoughtful feedback. As shown in Figure 5, experiments demonstrate that Kronecker decomposition preserves expressivity, achieving accuracy comparable to full-size transformations. We leave theoretical justifications as future work. --- **Q3**. ...*A more targeted ablation would clarify the relative importance of affine transformations versus clipping thresholds.* **A3**. Thanks for the constructive advice. Below we provide more results and discussions to demonstrate the effectiveness of LT: * **Full Ablation Table**. We provide full ablation results as a complement to Table 3, which clearly showcases the effectiveness of each part in FlatQuant, especially LT.
|**LT**|**PS**|**LCT**|**WikiText-2**|**C4**|**ARC-C**|**ARC-E**|**HellaSwag**|**LAMBADA**|**PIQA**|**Winogrande**|**Avg.**|
|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
||||1266.60|936.41|25.26|28.62|27.04|1.26|51.80|51.93|30.99|
||√||NaN|NaN|22.70|25.08|25.04|0.00|49.51|49.57|28.65|
|||√|1149.08|1490.08|22.95|29.29|27.35|0.60|52.99|50.83|30.67|
||√|√|8197.96|4654.07|25.43|25.72|25.96|0.02|50.49|48.86|29.41|
|√|||8.50|13.51|44.97|71.38|73.17|67.05|76.88|67.48|66.82|
|√|√||7.95|12.74|44.20|71.89|74.21|68.72|77.15|66.30|67.08|
|√||√|7.11|11.47|49.32|76.14|76.30|72.17|78.89|71.51|70.72|
|√|√|√|6.98|11.13|50.00|75.80|76.80|72.91|79.16|72.69|71.23|

* **Comparison between FlatQuant's affine transformation and SpinQuant's orthogonal transformation**. For both methods, we use asymmetric activation quantization following SpinQuant and do not activate weight or activation clipping. For FlatQuant, we also do not use PS. It can be seen that the affine transformations in FlatQuant are more effective in easing outliers, leading to higher accuracy especially when RTN is used.

|**LLaMA3-8B**|**WikiText-2**|**C4**|**ARC-C**|**ARC-E**|**HellaSwag**|**LAMBADA**|**PIQA**|**Winogrande**|**Avg.**|
|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
|FP16|6.14|9.45|53.50|77.57|79.12|75.51|80.74|72.93|73.23|
|SpinQuant-RTN|41.15|63.89|24.15|38.09|45.12|20.80|59.47|55.01|40.44|
|FlatQuant-LT-RTN|**8.05**|**12.85**|46.08|71.30|73.74|67.15|76.28|68.59|**67.19**|
|SpinQuant-GPTQ|7.95|13.44|44.80|73.36|73.79|66.99|76.01|68.11|67.18|
|FlatQuant-LT-GPTQ|**7.66**|**12.58**|46.16|73.57|74.83|67.55|76.01|67.32|**67.57**|

--- **Q4**. *The authors treat each layer's quantization independently, … , potentially overlooking cascading effects.* **A4**. Thanks for this valuable question. We tried to minimize the MSE loss between ground truth and outputs given quantized inputs instead of given full-precision inputs (as in Equation 4) to mitigate cascading effects.
However, results show that full-precision inputs lead to better performance. To understand the mechanism behind this, we think it is similar to "teacher forcing" in training RNNs (i.e., using the ground truth instead of the auto-regressive output for training at each step). Similar effects were also verified to be helpful in model quantization [1].

|**LLaMA-3-8B**|**WikiText-2**|**C4**|**ARC-C**|**ARC-E**|**HellaSwag**|**LAMBADA**|**PIQA**|**Winogrande**|**Avg.**|
|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
|FP16|6.14|9.45|53.50|77.57|79.12|75.51|80.74|72.93|73.23|
|FlatQuant|6.98|11.13|50.00|75.80|76.80|72.91|79.16|72.69|71.23|
|+quant_inps|7.14|11.51|50.68|76.64|75.77|70.25|80.41|68.51|70.38|

[1] Bai, Haoli, et al. "Towards efficient post-training quantization of pre-trained language models." Advances in Neural Information Processing Systems 35 (2022): 1405-1418. --- **Q5**. *The paper would benefit from a clearer explanation of how the online transformations are initialized...* **A5**. Please see A7 to Reviewer pwvD. --- **Q6**. *The hyperlinks throughout the paper appear to be non-functional...* **A6**. We have double-checked the manuscript and the hyperlinks function well on our side. We are sorry for the inconvenience; perhaps a different PDF viewer could resolve the issue. --- Rebuttal Comment 1.1: Comment: This paper is well-experimented and theoretically sound. Therefore, I am inclined to accept this paper. I will raise my score from 3 to 4. --- Reply to Comment 1.1.1: Comment: Thanks for your valuable comments. We are sincerely grateful for your thoughtful review process.
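As a side note on the flatness claim discussed throughout this thread, a toy round-to-nearest quantizer illustrates why outlier-dominated (non-flat) tensors quantize poorly; this is a simplified sketch with made-up values, not FlatQuant's actual quantizer:

```python
# Toy symmetric per-tensor round-to-nearest (RTN) quantizer: a single large
# outlier inflates the per-tensor scale, so every other value is rounded
# much more coarsely. Illustrative sketch only, not the paper's method.

def quantize_rtn(xs, bits=4):
    """Symmetric per-tensor round-to-nearest quantization."""
    qmax = 2 ** (bits - 1) - 1            # 7 positive levels for 4-bit
    scale = max(abs(x) for x in xs) / qmax
    return [round(x / scale) * scale for x in xs]

def mse(xs, ys):
    return sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

flat = [0.9, -1.0, 0.8, -0.7, 1.0, -0.9, 0.6, -0.8]    # flat tensor
spiky = [0.9, -1.0, 0.8, -0.7, 16.0, -0.9, 0.6, -0.8]  # one outlier

err_flat = mse(flat, quantize_rtn(flat))
err_spiky = mse(spiky, quantize_rtn(spiky))
assert err_spiky > err_flat  # the outlier-dominated tensor quantizes worse
```

Flattening transformations tame exactly this kind of outlier, which is why flatter distributions quantize with lower error.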
LLMs Can Reason Faster Only If We Let Them
Accept (poster)
Summary: This work addresses the intermediate step solution length of Chain of Thought style exploration algorithms for planning problems, using supervised fine-tuning of LLMs on Algorithm of Thoughts (AoT)-style plans and utilizing a reinforcement learning mechanism for rewarding less verbose plans and thereby reducing solution length. The experimental evaluation with three small-size LLMs (1-3B) on three different planning game domains indicates that solution lengths are reduced quite significantly, while the accuracy of solved problems is maintained or improved. ## Update after Rebuttal: The authors' further experiments on IPC domains such as Blocksworld and Logistics, and experiments utilizing fewer examples for SFT & RL baselines are appreciated. Although there may be differences in the newly obtained results in the accuracies and solution lengths, especially with the SFT baselines, I believe the performance of the proposed method stands up to the scaling attempts, and conclusions should primarily hold without much perturbation. Other questions have been answered, and clarity/wording recommendations have been accepted. Thus, my score has been increased from a Reject to a Weak Accept. Claims And Evidence: 1) The claim that SFT on AoT-style plans makes the LLM learn to **systematically** explore the state space is an oversell, if not completely wrong. There is no reliable evidence that SFT does anything to help systematic exploration of the state space. It is well known that SFT improves the token-level inference and approximate retrieval ability for similar types of problems encountered during fine-tuning. So, in my understanding, all SFT does is make the exploration less random than before.
2) To be clear, the proposed RL stage's step-count penalty for correct solutions, meant to reward concise solutions with fewer intermediate (thinking) steps, only provides a weak reward signal for conciseness (or applies optimization pressure, as alluded to in certain parts of the paper), rather than directly optimizing for it as an objective. 3) The Word-Ladder domain is claimed to provide additional complexity for LLMs because actions must be derived from the LLMs' prior knowledge of valid English words. From my perspective, this is not true. If anything, I believe this domain is an advantage for LLMs. It's not clear why this domain would be more complex for LLMs compared to standard planning domains. Methods And Evaluation Criteria: 1) The training set and test sets have a disproportionate number of samples, which opens the possibility for a biased evaluation, as the test set may not be fully representative of the data distribution. It is unclear what the complexity distribution of problems in the test set is. 2) The domains Game of X and Word Ladder are relatively less sophisticated planning domains compared to standard planning domains such as Blocksworld, Grid, Logistics, etc. It is unclear if this method can be evaluated on such domains, as the number of training examples and the AoT-style plan creation needed for the SFT stage are quite high and effort-intensive. However, I appreciate the work for using the N-Puzzle domain, which is known to be quite complex in planning, as the problems can contain negative interactions between subgoals. Theoretical Claims: N/A Experimental Designs Or Analyses: 1) Section 3.2 is largely unclear. I do not understand the core motivation and aim of that experiment. The graph in Figure 2 is also unclear. Supplementary Material: I have reviewed the supplementary material sections A and B.
Relation To Broader Scientific Literature: 1) This work tries to address the high solution length problem seen in the Algorithm of Thoughts work. The work's direction could be useful for creating better LLM-based plan generators, as it improves the solution length and accuracy metrics for basic planning problems. 2) The work's direction could also be useful in informing the community involved in post-training techniques for Reasoning Language Models (RLMs), such as O1 and DeepSeek, on how to obtain data with shorter solution lengths for post-training LLMs. Essential References Not Discussed: 1) References in Section 1 for the LLM-Modulo framework are incorrectly specified. Other Strengths And Weaknesses: ## Other Strengths: 1) The paper is largely clear. ## Other Weaknesses: 1) In my understanding, the method is not scalable. In my view, with the SFT technique being heavily involved in the proposed method, there is an inherent difficulty in including standard planning domains such as Blocksworld, Grid, Logistics, and others used in the International Planning Competition, as the SFT stage with AoT-style plans utilizes a large number of examples. As for the standard planning domains, it could be harder to create sufficient examples for SFT, only to then have the RL stage weakly influence the LLM to generate shorter solution steps. Thus, the proposed method is not scalable to real-world domains in my opinion. 2) As the RL stage reward formulation only applies optimization pressure by providing a weak reward signal for conciseness, there is no guarantee that there will not be significant backtracking, even if the solution length is shorter than that of CoT plans, since there is no way to keep track of open and closed lists as in standard search algorithms such as A* search. Hence, the optimization pressure could increase action hallucinations and other types of errors from the LLM. Other Comments Or Suggestions: Typos: 1) Section 4.2, line 4.
Suggestions: 1) Section 3.2 needs to be rewritten or made clearer. Figure 2 needs to be explained more thoroughly. 2) The Section 4.1 thinking component output examples need to be made clearer, including explaining all the relevant symbols. Questions For Authors: 1) I request the authors to indicate the complexity distribution in the test set for each of the domains. For example, in N-Puzzle, in the 100 examples of the test set, what is the distribution of the edit distance that is claimed to be ranging from 15 to 30? I assume the Manhattan distance mentioned is equivalent to the planning edit distance. Similarly, for Word Ladder, what is the solution path length distribution in the test set? 2) Why is the standard Tree-of-Thoughts baseline without fine-tuning not included? In my understanding, ToT may have a more systematic exploration of the state space with a possibly shorter solution length. 3) Section 3.2, Figure 2: what does the Y-axis represent? Why does AoT-medium hit 4 and the other 2 settings only hit 3? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their extremely detailed review. We have acted upon their suggestions to add new experimental results and provide clarifications. **** **Response to Claims and Evidence** 1. In Table 2, we see that AoT-SFT performs much better than CoT-SFT, indicating better exploration. Furthermore, we chose the word "systematically" to highlight the deliberate search in the output of the LLMs, as opposed to the within-layer exploration that happens with CoT. However, we will change that sentence according to the reviewer's suggestions to prevent any misunderstandings for future readers. 2. The RL stage first incentivizes correct solutions and provides a conciseness reward only if the solution is correct; it can therefore be considered a "weak" reward signal in RL terminology. We agree with the reviewer and will update this part. However, we also want to add that this way of providing reward to the LLMs is much more realistic, since we are not assuming any heuristic knowledge to give much more fine-grained feedback. 3. In Table 2, CoT-SFT performed the worst on Word-Ladder. Furthermore, LLMs can sometimes have difficulties with the identification of individual letters in words due to tokenizers. We will update the paper to include these discussions. **** **Response to scalability concerns** Originally, we opted to use a high number of training examples to show that the improvements we were getting with AoT-O3 were not due to the SFT models being under-trained. However, this raised concerns from the reviewers; therefore, **we decided to repeat the experiments with a reduced number of SFT and RL examples**. For all three tasks (Game of X, N-Puzzle, Word-Ladder), we used 320 examples to supervise-finetune the models (batch size = 64, epochs = 10, training steps = 50, lr = 2e-4). For the AoT-SFT model, we generated new random augmentations at each epoch for the same 320 examples. For RL, we used 160 examples (batch size = 32, epochs = 10, lr = 2e-5).
For ToT, we used a breadth of 5. We evaluated on **1000 test examples**.

|**Problem**|**Method**|**Accuracy (%)**|||**Solution Length (steps)**|||
|:--|:--|--:|--:|--:|--:|--:|--:|
|||Gemma2-2B|Llama3-1B|Llama3-3B|Gemma2-2B|Llama3-1B|Llama3-3B|
|Game of X|ToT|26.4|20.4|27.1|18.2|17.5|18.3|
||CoT-SFT|30.9|22.3|30.7|4|4|4|
||AoT-SFT|58.7|53.1|62.9|33.2|30.8|36.9|
||AoT-O3|76.6|71.2|79.4|17.9|9.7|11.5|
|N-Puzzle|ToT|29|30|33|80.0|84.9|63.9|
||CoT-SFT|32.3|33.7|40.3|7.6|7.7|9.2|
||AoT-SFT|62.1|58.7|66.3|26.7|26.6|32.7|
||AoT-O3|72.2|66.1|75.7|17.4|16.7|14.1|
|W-Ladder|ToT|12.5|9.2|17.8|38.5|36.0|32.0|
||CoT-SFT|10.8|5.0|13.4|10.1|6.8|9.4|
||AoT-SFT|58.4|48.8|62.1|29.5|28.5|34.1|
||AoT-O3|62.3|52.6|67.3|18.7|13.2|16.2|

**** **Response to weak reward signal concerns** In order to alleviate concerns about the reward signal, **we experimented on the reviewer's suggested domains (BlocksWorld and Logistics) without any SFT stage, simply applying the RL stage (we used the same setup as in the previous results) to models prompted with five in-context examples**. We followed the training and testing settings in [1], and for ToT we again used a breadth of 5. We used the recently released Gemma3 4B IT model for its good performance and relatively small size, and because we believe it will be a relevant model to use in the remainder of 2025.

|**Problem**|**Method**|**Accuracy (%)**|**Solution Length (steps)**|
|:--|:--|--:|--:|
|BlocksWorld|CoT|27.2|6.3|
||ToT|56.3|45.2|
||AoT|59.1|24.2|
||AoT-O3|**74.7**|11.3|
|Logistics|CoT|18.5|12.6|
||ToT|51.5|70.8|
||AoT|49.3|55.9|
||AoT-O3|**71.9**|24.8|

[1] https://arxiv.org/pdf/2501.13545 **** **Response to Questions** 1. Thank you for asking for this clarification. For N-Puzzle, after shuffling randomly, we selected examples with almost uniform Manhattan Distances (MD) ranging from 15 to 30 (in the original 100 test examples, 10 examples for MD=15, and 6 examples each for MD=16,..,30).
We also followed the same logic for Word-Ladder, this time with solution path lengths ranging from 4 to 10 steps. 2. We have included ToT without fine-tuning. However, it is not completely correct that ToT has systematic exploration, since each path is disjoint as it appears in the context, and this can lead to visiting the same states at separate time steps. 3. Figure 2 is a histogram, where the Y-axis is the number of correct instances. **We'll update Section 3.2 in the camera-ready version to prevent confusion.** **** **Conclusion** We thank you in advance for checking our rebuttal and reconsidering your score if you believe our explanations and the introduction of new results alleviated your concerns. If you have further questions, as far as we understand from the new ICML guidelines, you can only respond one more time (review → rebuttal → reviewer reply → author reply). Therefore, we kindly request you to be precise about your remaining concerns in your response. --- Rebuttal Comment 1.1: Comment: Thank you for the updates on the new results. My primary concerns have been largely addressed or agreed to be addressed. Thus, I have updated my score. However, I remain unconvinced about the potential of the whole direction in contrast to external verification. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for **increasing their score in favor of acceptance of our work**. As agreed, all new results will be integrated into the camera-ready version. Algorithm of Thoughts (AoT) [1] has been shown to be competitive with or more performant than ToT, and AoT+ [2] has shown that even further improvements can be made to match or surpass external verification methods such as LLM-Modulo, all while being more efficient in terms of token usage and the number of queries required to get an answer. In this work, we show that AoT-O3 with reinforcement learning can be used to further improve performance and efficiency (up to 80%).
We show that even with relatively small LLMs, AoT-O3 reduces the need for external verification, which makes it more practical and scalable to new problems. **We sincerely extend our gratitude to the reviewer for taking the time to really delve into our work for their review**. Their feedback has helped improve the clarity of the paper significantly. [1] Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models (ICML 2024) https://openreview.net/forum?id=KJL2b6BthC [2] LLMs Can Plan Only If We Tell Them (ICLR 2025), https://openreview.net/forum?id=K3KrOsR6y9
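The correctness-gated conciseness reward described in this thread can be sketched as a simple scalar function; the coefficient and normalization below are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch of a correctness-gated length reward: an incorrect solution gets
# no reward at all, and a correct solution is rewarded more the fewer
# intermediate steps it used. `max_steps` and `length_weight` are
# illustrative values, not taken from the paper.

def solution_reward(is_correct, num_steps, max_steps=100, length_weight=0.5):
    if not is_correct:
        return 0.0                 # correctness gate: no partial credit
    conciseness = 1.0 - min(num_steps, max_steps) / max_steps
    return 1.0 + length_weight * conciseness  # base reward + length bonus

# A correct 10-step solution beats a correct 40-step one,
# and any correct solution beats an incorrect one.
assert solution_reward(True, 10) > solution_reward(True, 40)
assert solution_reward(True, 40) > solution_reward(False, 5)
```

Gating the length bonus on correctness is what keeps the conciseness pressure "weak": the model is never rewarded for being short at the expense of being right.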
Summary: This paper introduces AoT-O3, a framework that combines supervised fine-tuning and reinforcement learning to enhance the planning efficiency of large language models (LLMs). By using a handcrafted reward function that optimizes both solution validity and length, AoT-O3 reduces reasoning steps while maintaining or improving accuracy. The framework is evaluated on three planning benchmarks using three different-sized models, demonstrating its good performance. ## update after rebuttal The method proposed in this paper demonstrates strong performance across multiple benchmarks and shows good generalization to multiple LLMs. Furthermore, the writing is clear and well-structured. Based on these strengths, I believe the paper is worthy of acceptance. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No, there is no theoretical claim. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, I reviewed the entire supplementary material. Relation To Broader Scientific Literature: A simple yet effective handcrafted reward function can help LLMs reduce reasoning steps while maintaining or improving planning accuracy. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The proposed handcrafted reward function, which balances solution validity and length efficiency, provides clear and direct feedback to the model without the need for complex learned reward models. 2. The proposed method demonstrates promising results in achieving higher planning accuracy and shorter solution lengths. Weaknesses: 1. The paper lacks an experimental comparison with a method that directly applies RL to the model without SFT. 2. The paper mentions an alternative to the reward model (i.e., Equation (4)), but its effectiveness is not evaluated in the experiments. Including this evaluation would make the study more solid. 3. The paper lacks qualitative examples for analysis and demonstration. 4. 
A discussion on the limitations of the proposed method is missing. Other Comments Or Suggestions: On page 7, line 345, the statement "We evaluate AoT-O3 across two challenging planning benchmarks: Game of X and N-Puzzle." should be revised to indicate that there are three benchmarks, including Word Ladder. Questions For Authors: See the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper. We are encouraged that our paper is recognized as "simple yet effective" and as demonstrating "promising results in achieving higher planning accuracy and shorter solution lengths". **** **Response to Weaknesses** 1. We experimented on the BlocksWorld and Logistics domains **without any SFT stage, simply applying the RL stage directly** (we used the same setup as in the previous results) to models prompted with five in-context examples. We followed the training and testing settings in [1], and for ToT we again used a breadth of 5. We used the recently released Gemma3 4B IT model for its good performance and relatively small size, and because we believe it will be a relevant model to use in the remainder of 2025.

|**Problem**|**Method**|**Accuracy (%)**|**Solution Length (steps)**|
|:--|:--|--:|--:|
|BlocksWorld|CoT|27.2|6.3|
||ToT|56.3|45.2|
||AoT|59.1|24.2|
||AoT-O3|**74.7**|11.3|
|Logistics|CoT|18.5|12.6|
||ToT|51.5|70.8|
||AoT|49.3|55.9|
||AoT-O3|**71.9**|24.8|

[1] https://arxiv.org/pdf/2501.13545 2. Thank you for pointing this out; our preliminary experiments showed that this reward model performed similarly. However, we will include those results in the appendix of the camera-ready version of our paper. 3. We will also be sure to include qualitative results in the camera-ready version of our paper. 4. We will include the limitations of our method in the camera-ready version of our paper. **** **Conclusion** We will also fix the typos mentioned in the camera-ready version. **We thank you in advance for checking our rebuttal and reconsidering your score if you believe our explanations and the introduction of new results alleviated your concerns**. If you have further questions, as far as we understand from the new ICML guidelines, you can only respond one more time (review → rebuttal → reviewer reply → author reply).
Therefore, we kindly request you to be precise about your remaining concerns in your response. **Also, please be sure to check our responses to other reviewers, where we have introduced new results using less training data and applying RL without the SFT stage, to show the importance of AoT-O3 even in cases where we do not have the SFT stage.** --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I keep my original score.
Summary: The authors claim that while AoT has improved performance compared to CoT, it generally suffers from significantly longer solutions. To deal with this, they propose AoT-O3, which uses AoT-generated solutions to optimize models for both accuracy (supervised and RL) and solution length (RL). The proposed method AoT-O3 shortens solution length by up to 80% compared to baseline AoT while maintaining or surpassing performance. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: There are no theoretical claims in the paper. Experimental Designs Or Analyses: Yes, I have checked the soundness/validity of the experimental designs and analyses. Supplementary Material: Yes, I reviewed the appendix. Relation To Broader Scientific Literature: See the section below. Essential References Not Discussed: - The paper shares many similarities with another branch of papers that learn to simplify reasoning steps. The authors should also mention/analyze/compare to those related works. For example, [1]. The example in Figure 1 especially looks like a skipped step. --- [1] Liu, Tengxiao, et al. "Can language models learn to skip steps?." arXiv preprint arXiv:2411.01855 (2024). Other Strengths And Weaknesses: Strengths: - The paper is very straightforward and easy to understand. Weaknesses: - The introduction seems to be a bit too long. The method section only starts on the 5th page. - 'In this section, we will use V for our reward models due to them actually being value models used in RL frameworks.' -> this sentence looks a bit weird: you are not using V as the reward model then; you are directly modeling the value function, and the reward function is a separate thing. All the parts in the paper that call V a reward model look a bit confusing. Other Comments Or Suggestions: - Table 2 is a bit hard to read. Can you also highlight/bold the Solution Lengths? Questions For Authors: - What is the y-axis in Figure 2? Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper. In the following, we provide clarifications and our responses to the reviewer's concerns. **** **Response to Essential References Not Discussed** **Although authors cannot be expected to discuss papers made public fewer than four months prior, according to ICML reviewer guidelines** (arxiv submission: 4th of November, NeurIPS public announcement: 6th of November), we want to assure the reviewer that the suggested work is a way to reduce the number of steps in CoT reasoning by skipping steps. Our work, on the other hand, is not about skipping steps, but about eliminating unnecessary exploration of actions. Therefore, that paper does not have much in common with our paper. The example in our paper's Figure 1 illustrates deliberately exploring promising states, and not exploring for the sake of exploration, a phenomenon we explored further in Section 3.2. **However, we will include these comparisons in the camera-ready version of our paper**. **** **Response to Weaknesses** 1. Since we have added a comprehensive new set of results at the request of other reviewers, we will shorten the introduction to include the relevant new results in the main paper instead of adding them in the appendix in the camera-ready version of our paper. 2. We agree with the reviewer; we will update such usage to prevent confusion regarding the use of the terms "reward" and "value". **** **Response to Questions** 1. Figure 2 is a histogram, where the Y-axis is the number of correct instances. **We'll update Section 3.2 in the camera-ready version to prevent confusion.** **** **Conclusion** We will also fix the typos mentioned in the camera-ready version. **We thank you in advance for checking our rebuttal and reconsidering your score if you believe our explanations and the introduction of new results alleviated your concerns**.
If you have further questions, as far as we understand from the new ICML guidelines, you can only respond one more time (review → rebuttal → reviewer reply → author reply). Therefore, we kindly request you to be precise about your remaining concerns in your response. **Also, please be sure to check our responses to other reviewers, where we have introduced new results utilizing fewer training data and applying RL without the SFT stage, to show the importance of AoT-O3 even in cases where we do not have the SFT stage.**
Summary: This paper proposes AoT-O3, an RL-based approach that reduces solution length in LLM-based planning while preserving or improving accuracy. It builds on the already popular Algorithm of Thoughts by adding a reward function. Experiments on 3 benchmarks show a reduction in reasoning steps and higher success rates compared to existing baselines. Claims And Evidence: The claims made in this paper are coherent. Methods And Evaluation Criteria: The authors demonstrate their approach on multiple diverse tasks (math-based puzzles, sliding tile puzzles, word-ladder transformations), suggesting good generalization potential. Theoretical Claims: No major theoretical claims are made in the paper. Experimental Designs Or Analyses: The experiments looked logical and sound to me. Supplementary Material: I didn't feel the need to go through the supplemental material. Relation To Broader Scientific Literature: The paper merges cost-sensitive RL alignment with Algorithm of Thoughts to improve planning capabilities. Essential References Not Discussed: They cite the relevant papers to the best of my knowledge. Other Strengths And Weaknesses: Strengths: 1. The paper is well written and easy to understand. 2. This area of research is highly significant: boosting model accuracy while cutting power consumption and simplifying the training pipeline is essential. 3. Unlike exploratory techniques like Tree of Thoughts, AoT-O3 operates within a single pass, making it more computationally viable and simpler to deploy. Weaknesses: 1. The approach provides only incremental novelty, as it simply extends standard RL alignment with a step-cost penalty, offering little beyond an otherwise well-known method. 2. The success of the length-aware reward depends upon careful hyperparameter tuning, which might be non-trivial for new domains. 3. Since the paper uses an RLHF design, it would make sense to include SFT with other RL training baselines like PPO. 
Other Comments Or Suggestions: Lines 308/309 do not make much sense: "we with the model to generate solutions that are both correct and efficient". Questions For Authors: Can you explain why current AoT implementations may be imitating rather than truly engaging in System 3 thinking? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper. We are encouraged that our paper is recognized as "computationally viable" and "simpler to deploy". We have acted on the reviewer's suggestions to provide new experimental results and clarifications. **** **Response to Weaknesses** 1. Our overall approach includes the identification of unnecessary exploration in AoT, and the repurposing of already existing RL libraries to improve performance and efficiency. As you have also pointed out, "boosting model accuracy while cutting power consumption and simplifying the training pipeline is essential". Recognizing that *simplicity is the ultimate sophistication*, we see our approach as much more practical and easier to use. 2. **We have used the same hyperparameters ($\alpha=0.02$, $\beta=-0.5$ and $\kappa=0.2$) in all five domains, including the newly added Blocksworld and Logistics**, showing significant improvements in all of them. We would like to add that **these domains are highly different in terms of their states and actions, implying the generalizability of the hyperparameters we used**. Moreover, we used the same hyperparameters in our new results with PPO, further showcasing their generalizability. 3. Thank you for these suggestions. We chose RLOO for being more VRAM friendly. However, **we have added new results for Gemma2-2B and Llama3 models trained with PPO instead of RLOO**.

| Problem | Method | Accuracy (%) | | | Solution Length (steps) | | |
|-----------|--------|--------------|-----------|-----------|-------------------------|-----------|-----------|
| | | Gemma2-2B | Llama3-1B | Llama3-3B | Gemma2-2B | Llama3-1B | Llama3-3B |
| Game of X | AoT-O3 | 80 | 75 | 83 | 15.1 | 10.2 | 12.8 |
| N-Puzzle | AoT-O3 | 74 | 69 | 77 | 16.4 | 15.8 | 17.2 |
| W-Ladder | AoT-O3 | 61 | 51 | 66 | 19.5 | 14.4 | 15.7 |

**** **Response to Questions** 1. 
Thank you for this question, which is highly relevant to our comment in the introduction of our paper. As we have shown in Section 3.2, the solution lengths of AoT are highly affected by the length of the in-context examples. This hinted to us that shorter solutions were indeed possible, which led to our method. In our perspective, the fact that shorter solutions are possible indicates that the model is unnecessarily exploring for a significant amount of time before giving an answer. The RL stage reducing these total steps significantly shows us that the AoT-prompted and AoT-SFT models are imitating a significant part of the exploration rather than truly engaging in it. **** **Conclusion** We will also fix the mentioned typos in the camera-ready version. **We thank you in advance for checking our rebuttal and reconsidering your score if you believe our explanations and the introduction of new results alleviated your concerns**. If you have further questions, as far as we understand from the new ICML guidelines, you can only respond one more time (review → rebuttal → reviewer reply → author reply). Therefore, we kindly request you to be precise about your remaining concerns in your response. **Also, please be sure to check our responses to other reviewers, where we have introduced new results utilizing fewer training data and applying RL without the SFT stage, to show the importance of AoT-O3 even in cases where we do not have the SFT stage.**
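As an editorial aside on the shared hyperparameters mentioned in this thread ($\alpha=0.02$, $\beta=-0.5$, $\kappa=0.2$): the rebuttal does not spell out the reward's functional form, so the following is only a hypothetical sketch of a length-aware reward, combining a correctness base term with a capped per-step penalty. The function name and the roles assigned to the three constants are assumptions, not the paper's stated definitions.

```python
def length_aware_reward(is_correct: bool, n_steps: int,
                        alpha: float = 0.02, beta: float = -0.5,
                        kappa: float = 0.2) -> float:
    """Hypothetical length-aware reward: correctness base term minus a
    capped per-step penalty. The roles of alpha (per-step cost weight),
    beta (reward for an incorrect solution) and kappa (penalty cap) are
    illustrative assumptions."""
    base = 1.0 if is_correct else beta
    length_penalty = min(alpha * n_steps, kappa)  # cap the step-cost term
    return base - length_penalty
```

Under this sketch, a correct 10-step solution scores higher than a correct 100-step one, while the cap keeps very long solutions from dominating the correctness signal.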
BRIDGE: Bootstrapping Text to Control Time-Series Generation via Multi-Agent Iterative Optimization and Diffusion Modeling
Accept (poster)
Summary: This work introduces (1) "Text-Controlled TSG," a new TSG task; (2) a new LLM multi-agent (LLM/MA) framework to synthesize datasets; and (3) BRIDGE, a hybrid TSG framework building on the prior two contributions. Claims And Evidence: The authors claim (1) this approach achieves state-of-the-art fidelity on 11 of the 12 datasets and (2) improves controllability compared to no-text-input generation. Both are borne out by the empirical results. Methods And Evaluation Criteria: The methods are natural given the problem the authors are attempting to solve (high fidelity and controllability in TSG). The evaluation datasets appear to be natural choices as well, and standard w.r.t. prior literature. Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: Yes, skimmed the entirety to gauge content. Relation To Broader Scientific Literature: This work is related to prior work using text for TS analysis, and also to other conditional TSG works. Relevant works in each appear to be properly cited and contextualized. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths - The paper is clear, well-structured, and easy to read - The techniques employed all appear to be natural choices - The empirical results are convincing - The discussion is thorough and insightful Weaknesses - Nothing major Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and encouraging feedback. We are especially grateful for your recognition of the clarity of the paper, the natural design of the proposed methods, and the strength of our empirical results and discussion. Your comments affirm the value of our effort to introduce a novel task—Text-Controlled Time-Series Generation (TSG)—and the corresponding framework BRIDGE, which combines a multi-agent dataset synthesis pipeline with a hybrid diffusion-based generation model. Your summary precisely captures the key contributions of our work: (1) the introduction of a new task—Text-Controlled Time-Series Generation (TC-TSG)—to enable semantically guided, controllable synthetic time series; (2) the design of a novel multi-agent framework, which leverages large language models (LLMs) and role-based collaboration for bootstrapping high-quality text-to-time-series paired data; and (3) the proposal of BRIDGE, a hybrid framework that integrates semantic prototypes and text for flexible and faithful time-series generation. In particular, we are grateful for your acknowledgement that our empirical findings convincingly support both the fidelity and controllability improvements claimed. Your validation that the evaluation criteria and dataset choices are natural and grounded in prior literature is also very helpful. This motivates us to continue refining and extending this line of work. In future iterations, we plan to further investigate theoretical insights behind controllable generation, and explore more efficient multi-agent optimization strategies, building on the foundation established here. Should you have any suggestions or requests for clarification, we would be more than happy to provide additional details. Thank you again for your constructive and generous evaluation—it is deeply appreciated.
Summary: This paper introduces BRIDGE, a framework for text-controlled time-series generation (TSG) using a multi-agent system and diffusion modeling. The authors propose a three-stage process to generate high-quality text–time-series pairs, and develop a hybrid framework that combines semantic prototypes with text descriptions. Experiments across 12 datasets show state-of-the-art generation fidelity and improved controllability compared to no-text approaches. Claims And Evidence: The claims are partially supported by evidence. While performance improvements are demonstrated across multiple datasets, the paper lacks: 1. Theoretical justification for the choice of 16 semantic prototypes. 2. A comprehensive comparison with other multi-agent frameworks (e.g., MetaGPT, CAMEL, AutoGen). Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate. The authors: 1. Use standard metrics (MSE, MAE, MDD, K-L divergence) for quantitative evaluation. 2. Include human evaluation to assess the quality of generated time series. Theoretical Claims: No major theoretical claims to verify. The paper is primarily empirical, with a focus on the practical implementation and evaluation of the proposed framework. 
Experimental Designs Or Analyses: The experimental designs are generally sound, but with limitations: 1. The performance comparison between text types could benefit from more rigorous statistical analysis. 2. The effect of different numbers of prototypes is mentioned but not systematically explored. Supplementary Material: I did not find any supplementary material. Relation To Broader Scientific Literature: The work connects to several research areas: it extends text-to-X generation from images/videos to time-series data, builds upon diffusion models for continuous data generation, and incorporates multi-agent collaboration concepts, though without deep engagement with that literature. Essential References Not Discussed: Missing references include recent multi-agent frameworks like MetaGPT, AutoGen, and CAMEL that offer more sophisticated agent interaction mechanisms. Other Strengths And Weaknesses: Strengths: 1. Addresses an important practical problem with real-world applications. 2. The hybrid approach combining semantic prototypes with text shows promise for cross-domain generalization. Weaknesses: 1. Limited theoretical foundation for the semantic prototype design. 2. The multi-agent framework appears to be a straightforward application of role-based prompting rather than a novel contribution. Other Comments Or Suggestions: The paper would benefit from more discussion of potential privacy concerns when generating synthetic time-series data from sensitive domains like healthcare. Questions For Authors: 1. Why were 16 semantic prototypes selected? Did you experiment with different numbers, and how does performance vary with the number of prototypes? 2. How does your multi-agent framework compare with other established frameworks like MetaGPT or CAMEL? Could those frameworks be adapted to your task? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **[Q1] What motivated the choice of 16 prototypes, and how does performance change with different prototype counts?** Regarding the optimal number of prototypes, we observe from the ablation study that generation performance generally improves as the number of prototypes increases (shown in the tables below). However, when the number of prototypes is larger than 16, the performance gain is no longer significant. Therefore, we conclude that the optimal number of prototypes is 16 in our current setting.

MDD report:

|Dataset|4|8|16|32|64|
|-|-|-|-|-|-|
|Electricity|0.615|0.368|0.135|0.117|0.236|
|Wind|0.271|0.299|0.304|0.314|0.309|
|Traffic|1.211|0.287|0.315|0.349|0.411|
|Taxi|1.008|0.433|0.338|0.371|0.384|
|Pedestrian|1.599|0.921|0.576|0.554|0.552|
|Air|0.611|0.393|0.418|0.515|0.544|
|Temperature|0.487|0.317|0.310|0.330|0.307|
|Rain|5.763|4.981|6.002|5.548|6.420|
|NN5|1.550|0.796|0.613|0.616|0.573|
|Fred-MD|0.407|0.241|0.228|0.245|0.346|
|Solar|375.536|375.531|375.531|375.532|375.556|
|Exchange|0.365|0.359|0.315|0.309|0.330|

K-L report:

|Dataset|4|8|16|32|64|
|-|-|-|-|-|-|
|Electricity|0.006|0.027|0.001|0.001|0.020|
|Wind|0.059|0.074|0.056|0.075|0.093|
|Traffic|0.200|0.018|0.021|0.022|0.027|
|Taxi|0.154|0.013|0.003|0.020|0.019|
|Pedestrian|0.075|0.022|0.009|0.003|0.002|
|Air|0.010|0.009|0.005|0.006|0.009|
|Temperature|0.113|0.025|0.020|0.017|0.042|
|Rain|0.014|0.009|0.010|0.008|0.018|
|NN5|0.130|0.118|0.004|0.005|0.009|
|Fred-MD|0.012|0.015|0.006|0.009|0.030|
|Solar|0.025|0.016|0.005|0.008|0.017|
|Exchange|0.062|0.063|0.067|0.083|0.046|

**[Q2] How does your work differ from existing agent frameworks? Could those be adapted to your task?** MetaGPT follows structured workflows tailored for software engineering tasks, with predefined agent roles and Standard Operating Procedures (SOPs). 
While effective in that domain, this structure is less readily adaptable to the dynamic, iterative processes needed for generating diverse and semantically aligned text–time-series pairs. AutoGen offers more flexible agent coordination, but lacks built-in support for evaluating alignment between text and time-series data. CAMEL, which focuses on role-playing agents for social simulations via inception prompting, is primarily designed for open-ended tasks rather than modality-aware data construction. Although these frameworks offer valuable capabilities in their respective domains, adapting them to text-to-time-series generation (TSG) would require substantial modification. In contrast, BRIDGE is purpose-built for TSG, integrating task-specific evaluation modules and iterative refinement mechanisms to support controllability and fidelity. To better understand their applicability to TSG, we conducted preliminary adaptations of MetaGPT, AutoGen, and CAMEL within our task setting. As shown in the results below, their performance under minimal adaptation was relatively limited. These observations highlight the utility of BRIDGE’s tailored design in addressing the unique demands of TSG, such as modality bridging. Due to space limitations, we only present the MDD results below. |Model|Electricity|Wind|Traffic|Taxi|Pedestrian|Air|Temperature|Rain|NN5|Fred-MD|Solar|Exchange| |-|-|-|-|-|-|-|-|-|-|-|-|-| |MetaGPT|0.353|0.327|0.330|0.363|0.727|0.464|0.373|7.561|0.712|0.434|390.901|0.428| |AutoGen|0.287|0.329|0.358|0.443|0.598|0.494|0.335|7.386|0.661|0.362|377.334|0.425| |CAMEL|0.480|0.385|0.386|0.813|0.905|0.889|0.509|5.553|0.855|0.311|393.671|0.355| |BRIDGE|0.143|0.276|0.246|0.325|0.552|0.438|0.323|5.129|0.570|0.226|375.530|0.312| #### **General Comments (GC):** **[GC 1] More discussion of privacy risks when generating time series in sensitive domains, such as healthcare.** Thank you for highlighting the potential privacy risks in sensitive domains such as healthcare. 
While synthetic data is often considered a privacy-preserving alternative to real patient records, potential risks remain if models overfit and replicate training patterns. Our approach mitigates this risk by focusing on controllable and abstract generation guided by semantic prototypes and textual descriptions, rather than directly replicating raw time-series trajectories. Additionally, our framework does not rely on any personally identifiable attributes—only descriptions of the time series are used during generation. We fully agree that a more in-depth analysis of privacy risks—such as membership inference—would be valuable, and we plan to explore these directions in future work. --- We are grateful for your engagement and helpful suggestions, which we will carefully incorporate in the revised version.
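As an editorial aside on the K-L divergence figures reported earlier in this rebuttal: a minimal histogram-based sketch of such a metric between real and generated series might look as follows. The bin count, pooled value range, and eps smoothing are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def kl_divergence(real: np.ndarray, generated: np.ndarray,
                  n_bins: int = 50, eps: float = 1e-10) -> float:
    """Histogram-based estimate of KL(real || generated) over the pooled
    value range. Binning and smoothing choices are illustrative."""
    lo = min(real.min(), generated.min())
    hi = max(real.max(), generated.max())
    p, _ = np.histogram(real, bins=n_bins, range=(lo, hi))
    q, _ = np.histogram(generated, bins=n_bins, range=(lo, hi))
    p = p / p.sum() + eps  # normalize to probabilities, smooth zeros
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))
```

With this sketch, identical real and generated samples yield a divergence of zero, and the score grows as the generated value distribution drifts from the real one.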
Summary: The paper introduces the BRIDGE framework, which consists of two broad components - a paired text–time-series dataset generation method and a diffusion-based time-series generation method conditioned on textual input. For the dataset generation, the paper proposes a very detailed and carefully designed framework that leverages LLMs to parse news, articles, reports, etc., to obtain textual templates that can be filled on an instance or sample basis. The generated text caption is iteratively refined using evaluator feedback to finally obtain a concise and useful text condition for the time series sample. For the time series generative model, the paper builds on top of [1] to effectively combine text conditions with semantic prototypes to accurately generate time series samples. The addition of textual input increases the generalization capabilities, even for unseen domains, and this is validated empirically in the paper. [1] TimeDP: Learning to Generate Multi-Domain Time Series with Domain Prompts Claims And Evidence: The claims regarding the effectiveness of concise text conditions, semantic prototypes, and their effects on sample fidelity are justified with suitable empirical evidence. Methods And Evaluation Criteria: Yes, the proposed method is evaluated on a diverse set of 10 in-domain and 2 out-of-domain datasets for generation. However, the commonly used metrics for identifying sample quality, like the Predictive Score and the Discriminative Score [1], are not used in the paper. Additionally, I am not sure about the pairwise MSE score: the generative process is stochastic, and for a single textual condition there could be more than one suitable sample, as the generative model essentially samples from the conditional distribution. Perhaps metrics like J-FTSD [2] are more suitable for evaluation in this case. Finally, there are multimodal datasets like Time-MMD [3] which can also be used to evaluate the generative model's performance. 
[1] Time-series Generative Adversarial Networks (TimeGAN) [2] Time Weaver: A Conditional Time Series Generation Model [3] Time-MMD: Multi-Domain Multimodal Dataset for Time Series Analysis Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: Yes, the generation results in Tables 2 and 4 for in-domain datasets and the results in Tables 3 and 5 for out-of-domain datasets are valid. However, the comparison against recent diffusion-based time series generation approaches, such as Diffusion-TS, is missing. Supplementary Material: There is no additional supplementary material. Relation To Broader Scientific Literature: The automated dataset curation approach holds a lot of promise for enhancing the cross-modal alignment between text and time series for various tasks such as forecasting, generation, anomaly detection, etc. Essential References Not Discussed: The related works with respect to diffusion models are not entirely covered in the paper. [1] Non-autoregressive Conditional Diffusion Models for Time Series Prediction [2] Multi-Resolution Diffusion Models for Time Series Forecasting [3] Time Weaver: A Conditional Time Series Generation Model [4] CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation [5] Diffusion-based Time Series Imputation and Forecasting with Structured State Space Models Similarly, the related work with respect to multimodal datasets that include text paired with time series is not discussed. For example, Time-MMD [6] is a recent multimodal dataset. [6] Time-MMD: Multi-Domain Multimodal Dataset for Time Series Analysis Other Strengths And Weaknesses: Strengths: 1. The proposed approach shows strong in-domain performance and out-of-domain generalization. 2. The paper provides detailed ablation studies to delineate the performance improvements due to the textual input and the semantic prototypes. Weaknesses: 1. 
I am not convinced by the choice of metrics used for evaluating generative models. Please check the Methods and Evaluation Criteria section for more discussion. 2. The semantic prototypes provide a significant performance boost. However, very little information is provided regarding how the prototypes are generated. The appendix suggests these are random orthogonal vectors. Can the authors provide more clarity on this? 3. The ablation studies provide sufficient evidence for the necessity of the multi-agent setup for data collection. However, what is the reason behind such detailed intra-group discussion with a scientist, engineer, etc.? Can the authors provide more information regarding this? 4. The paper could also use the Time-MMD dataset to show the effectiveness of using text in time series generation. Other Comments Or Suggestions: In the problem formulation section, the variable $z$ seems to appear suddenly, and it is not clearly explained. Does $z$ correspond to the semantic prototypes? Questions For Authors: Please refer to the weaknesses section. Overall, it'd be very helpful if the authors can 1. Justify the use of metrics 2. Provide more information about semantic prototypes 3. Provide some empirical results with comparisons against recent diffusion-based generation approaches like Diffusion-TS. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We truly appreciate the reviewer's insightful feedback! **[Q1] Justify the use of metrics.** We initially prioritized MDD and KL divergence over PS and DS for their robustness, as they are **not influenced by post-hoc models.** We have now included **PS (MAE reported) and DS** and use **J-FTSD** to complement MSE for controllability evaluation. Our model achieves the **best results across all metrics**.

### PS

|Dataset|BRIDGE|TimeVQVAE|TimeGAN|GT-GAN|TimeVAE|Diffusion-TS|
|-|-|-|-|-|-|-|
|Electricity|0.091|0.138|0.143|0.585|0.152|0.146|
|Solar|0.441|1.082|1.631|1.169|0.692|0.566|
|Wind|0.174|0.326|0.942|1.182|0.423|0.886|
|Traffic|0.208|0.390|0.618|0.919|0.311|0.754|
|Taxi|0.234|0.319|0.339|0.687|0.296|0.556|
|Pedestrian|0.149|0.456|0.488|0.603|0.359|0.429|
|Air|0.113|0.154|0.155|0.478|0.277|0.167|
|Temperature|0.875|0.901|1.497|2.354|0.908|0.912|
|Rain|0.096|0.051|0.146|0.610|0.598|0.477|
|NN5|0.687|0.936|1.485|1.117|0.722|0.745|
|Fred-MD|0.075|0.138|0.369|0.180|0.163|0.151|
|Exchange|0.165|0.327|0.831|1.092|0.341|0.135|

### DS

|Dataset|BRIDGE|TimeVQVAE|TimeGAN|GT-GAN|TimeVAE|Diffusion-TS|
|-|-|-|-|-|-|-|
|Electricity|0.306|0.367|0.351|0.434|0.376|0.370|
|Solar|0.156|0.231|0.258|0.202|0.199|0.236|
|Wind|0.188|0.247|0.323|0.295|0.218|0.203|
|Traffic|0.037|0.112|0.195|0.192|0.225|0.198|
|Taxi|0.402|0.363|0.409|0.5|0.411|0.391|
|Pedestrian|0.246|0.252|0.250|0.287|0.370|0.299|
|Air|0.093|0.180|0.197|0.238|0.202|0.267|
|Temperature|0.216|0.262|0.292|0.298|0.293|0.275|
|Rain|0.199|0.242|0.203|0.396|0.246|0.275|
|NN5|0.274|0.5|0.5|0.491|0.454|0.445|
|Fred-MD|0.326|0.343|0.5|0.336|0.389|0.355|
|Exchange|0.228|0.290|0.343|0.332|0.276|0.287|

**J-FTSD**

|Model|Electricity|Wind|Traffic|Taxi|Pedestrian|Air|Temperature|Rain|NN5|Fred-MD|Solar|Exchange|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|BRIDGE|0.538|5.011|0.570|0.974|0.488|0.6536|3.977|0.132|0.972|0.260|0.295|1.581|
|BRIDGE w/o text|1.821|6.935|0.611|1.312|0.550|0.677|4.708|0.151|1.192|0.423|0.330|1.738|
|BRIDGE w/o prototype|1.164|6.843|0.597|1.0367|0.662|0.817|4.613|0.141|1.115|0.341|0.322|1.846|

**[Q2] Provide more information about semantic prototypes.** Fundamental time series properties for TS generation align with decompositions of TS, such as trends and seasonalities. We use semantic prototypes as latent representations of these commonalities and train the model to decode from combinations of these latent representations for generating a certain domain. The prototype array is fixed after initialization, similar to initializing a codebook for vector quantization while preventing the codebook from being updated along with the decoder model. Instead of updating the prototypes, the decoder $\epsilon_\theta$ learns to map them to observed sequences, ensuring implicit alignment with time series semantics. **[Q3] Provide comparisons with Diffusion-TS** Please see **[Q1]** for Diffusion-TS results. **[Q4] What motivated the detailed intra-group discussions in the multi-agent setup?** Our design draws from the well-established **multi-agent cooperation and competition paradigms**, which enhance system performance by encouraging agents to challenge and refine each other's outputs, mitigating the risk of overconfidence. Additionally, complex problem-solving is often modeled as programming tasks due to their logical structure. Inspired by this, we structured our system after software development workflows, assigning roles like **manager, engineer, scientist, and observer** to different agents [1]. [1] LLM-Based Multi-Agent Systems for Software Engineering: Literature Review, Vision and the Road Ahead **[Q5] Use the Time-MMD dataset in TS generation** We achieve **the best performance on MDD, KL, PS and DS across all sub-datasets.** Due to space limitations, we only present the MDD results below. 
|Dataset|BRIDGE|TimeVQVAE|TimeGAN|GT-GAN|TimeVAE|Diffusion-TS|
|-|-|-|-|-|-|-|
|Agriculture|0.416|0.729|1.031|0.744|0.969|0.950|
|Climate|0.185|0.391|0.387|0.530|0.209|0.260|
|Economy|0.855|1.292|1.823|1.509|1.055|1.107|
|Energy|0.272|0.414|0.561|0.468|0.328|0.390|
|Environment|0.191|0.194|0.431|0.281|0.460|0.432|
|Health(US)|0.248|0.343|0.649|0.430|0.298|0.355|
|Security|0.344|0.355|0.850|0.941|0.824|0.923|
|SocialGood|0.292|0.343|0.688|0.503|0.436|0.521|
|Traffic|0.641|0.961|1.059|1.153|0.937|0.930|

**[Q6] Does z correspond to the semantic prototypes?** As stated in Section 3, z is a latent variable sampled from a prior distribution. In the diffusion model, z typically represents random Gaussian noise. --- Again, we sincerely appreciate the reviewer's valuable time and insights, which have greatly strengthened our work. Based on your suggestions, we have added **PS, DS, and J-FTSD** metrics, included **Diffusion-TS** as a baseline, and expanded experiments on **Time-MMD**. We also appreciate the recommended references and will discuss them in our revision. We'd be happy to address any further questions—thank you for your thoughtful feedback and support! --- Rebuttal Comment 1.1: Comment: Thank you for the response. Most of my concerns are addressed. Is the J-FTSD metric obtained from the embeddings of a model that is trained contrastively with text–time-series pairs for all datasets combined? Or is it trained on a per-dataset basis? Can the authors provide some clarity on that? I would also like to note that currently the Related Work section is shallow, and adding the references listed above will require some major changes to the section. The same can be said for the evaluation metrics. 
To position this paper fairly with respect to the existing literature on time series generation, standard metrics like PS and DS should be given more importance, and to evaluate the specificity of the generated time series, J-FTSD needs to be highlighted in the experiments section. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your thoughtful follow-up and are truly grateful for your continued engagement in the discussion. It means a lot to us that you took the time to provide further insightful and constructive feedback. We're also **very glad to hear that most of your concerns have been addressed**—thank you once again for your valuable input! Regarding your inquiry about the implementation of the **J-FTSD metric**, the model is **trained jointly on all datasets combined** using contrastive learning. This design avoids introducing any domain-specific information at inference time, which helps prevent potential information leakage and ensures a fairer and more generalizable evaluation across datasets. We also appreciate your suggestion on the **evaluation metrics**. In the revision, we will **place greater emphasis on standard metrics such as Predictive Score (PS) and Discriminative Score (DS)** in Section 6.4, as they offer widely accepted and interpretable measures of the overall utility of generated data. Furthermore, we will **clarify the role of J-FTSD** in Section 6.5, **highlighting it as a valuable metric for evaluating the specificity of conditional generation models—an aspect that standard metrics may not fully capture.** Thank you as well for your helpful comments regarding the **Related Work** section. We agree that it can be significantly improved and will revise it accordingly. Specifically, we plan to: **(1) Expand coverage of diffusion-based time series generation methods** such as **CSDI [1], Time Weaver [2], Diffusion-TS [3], TimeDiff [4], SSSD [5], and Mr-Diff [6]**. These works focus on structured or probabilistic generation. 
In contrast, we incorporate fine-grained text conditioning and generalization across domains. **(2) Include more text-to-time-series datasets** like **Time-MMD [7]**, which is a valuable benchmark for paired text–time series. Furthermore, we will conduct a more comprehensive survey to include additional reference papers beyond those mentioned above, providing a broader and more thorough overview. Once again, we sincerely thank you for your time and thoughtful feedback throughout the review process. Your comments have been incredibly helpful in improving the quality and clarity of our paper, and we've worked hard to address all of your concerns. We **genuinely hope these revisions will lead to a positive adjustment in your final score**. If there's anything that could benefit from further clarification or elaboration, we would be absolutely delighted to provide more details. Thank you again for your invaluable input and consideration! --- [1] CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation (https://arxiv.org/abs/2107.03502) [2] Time Weaver: A Conditional Time Series Generation Model (https://arxiv.org/abs/2403.02682) [3] Diffusion-TS: Interpretable Diffusion for General Time Series Generation (https://arxiv.org/abs/2403.01742) [4] Non-autoregressive Conditional Diffusion Models for Time Series Prediction (https://arxiv.org/pdf/2306.05043) [5] Diffusion-based Time Series Imputation and Forecasting with Structured State Space Models (https://arxiv.org/pdf/2208.09399) [6] Multi-Resolution Diffusion Models for Time Series Forecasting (https://openreview.net/pdf?id=mmjnr0G8ZY) [7] Time-MMD: Multi-Domain Multimodal Dataset for Time Series Analysis (https://arxiv.org/abs/2406.08627)
Summary: This paper introduces BRIDGE, a novel framework for text-controlled time-series generation. It addresses two major challenges: the lack of high-quality text-to-time-series datasets and the difficulty of aligning textual descriptions with time-series data. Claims And Evidence: The paper’s key claims are well-supported by empirical results. The multi-agent dataset synthesis claim is validated by MAE reduction, and semantic prototypes show improved MSE and generalization in ablation studies. However, some areas need further validation: 1. Dataset quality is not compared to human-annotated text-TS pairs. 2. Computational efficiency is not analyzed, leaving inference cost unclear. Methods And Evaluation Criteria: The proposed method is straightforward and well-structured, leveraging multi-agent dataset synthesis and diffusion-based generation. The use of semantic prototypes for bridging text and time-series data is reasonable and aligns with the problem setting. However, the framework could be further analyzed for scalability and efficiency, as its computational cost is not explicitly discussed. Theoretical Claims: This paper primarily focuses on methodological and empirical contributions rather than formal theoretical analysis. Experimental Designs Or Analyses: The experimental design and evaluation in the paper are fairly comprehensive: 1. Datasets. The paper evaluates its method on various datasets, including Electricity, Solar, Wind, Traffic, Taxi, Pedestrian, Air, Temperature, Rain, NN5, Fred-MD, and Exchange for in-domain analysis. Additionally, Stock and Web datasets were used for unseen-domain generalization. 2. Ablation studies assess the impact of text conditioning, prototype usage, and different language models as text encoders. 3. The paper benchmarks against TimeVQVAE, TimeGAN, GT-GAN, TimeVAE, and diffusion-based methods. Supplementary Material: I reviewed the entire supplementary material.
Relation To Broader Scientific Literature: This paper builds on prior work in TS generation, text-conditioned generative modeling, and diffusion models. Essential References Not Discussed: The paper provides a solid literature review, but it would benefit from references to prior work on prompt optimization and LLM-based dataset generation to better connect it to existing research. Other Strengths And Weaknesses: **Pros:** The paper introduces a novel research direction by bridging text-controlled generation with time-series synthesis, which has significant potential for real-world applications. The multi-agent dataset synthesis approach is innovative and helps address the lack of high-quality text-TS pairs. **Cons:** It would be better to add computational efficiency and human evaluation of text-to-TS alignment experiments. Other Comments Or Suggestions: See above. Questions For Authors: 1. How does the quality of the multi-agent generated text-TS dataset compare to human-annotated descriptions? 2. What is the computational cost of BRIDGE compared to existing time-series generation models? 3. Can the model generalize beyond seen domains without fine-tuning? The paper demonstrates cross-domain generalization, but are there cases where semantic prototype alignment fails, requiring domain-specific fine-tuning? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s time and thoughtful feedback. We are grateful for the recognition of the strengths of our work, particularly the "straightforward and well-structured" and "reasonable" nature of our approach, as well as the "novel research direction" of BRIDGE in bridging text-controlled generation with time-series synthesis. We also appreciate the acknowledgment of our "innovative" multi-agent dataset synthesis approach. Your insights are invaluable in helping us refine our work, and we are truly thankful for the opportunity to clarify and improve our paper. Below, we provide our detailed responses to your comments. #### **General Comments:** **It would be better to add computational efficiency and human evaluation of text-to-TS alignment experiments.** We appreciate this suggestion! In fact, **we have already included human evaluation of text-to-TS alignment in our original submission**. As shown in Tables 4 and 5, we compare different configurations (with and without prototypes and text) across multiple datasets using both quantitative metrics (MSE and MAE) and **human evaluation scores (HE and HE@3)**. A detailed discussion of these evaluations can be found in Sections 6.2 and 6.5. For computational efficiency, please refer to **[Q 2]**. #### **Questions:** **[Q 1] How does the multi-agent generated dataset compare to human-annotated text-TS pairs?** In fact, the first step of our multi-agent system (Section 4.1 Step 1) involves collecting articles, news, and reports from the Internet that describe time series data, which **serve as human-annotated references** (**"Initial Text" in Table 1**). While these slightly outperform rule-based baselines, they are significantly worse than our multi-agent **Refined Text** (Table 1), suggesting **generic human descriptions are suboptimal for this task**.
To address this, we implement a multi-agent iterative optimization process that refines textual descriptions by **enhancing key aspects such as trend accuracy, mention of seasonality, completeness of information, and clarity of description** (detailed in Appendix A.5). This process strengthens text-to-time series alignment by optimizing **both quantitative metrics** (e.g., MSE, K-S Test, Wasserstein Distance) and **qualitative evaluations** based on a 5-point Likert scale. As a result, our refined text-TS pairs achieve significantly higher quality for the TS generation task. Furthermore, in **Section 6.3 Table 1** and **Appendix J Table 12**, we provide a detailed analysis of what constitutes useful textual descriptions, offering key insights into the generation of high-quality text-TS pairs. **[Q 2] What is the computational cost of BRIDGE relative to prior TS generation models?** To evaluate cost-efficiency, we measured training time (s/epoch) and inference time (ms/sample) across multiple baselines on the same A100 GPU. BRIDGE achieves a favorable balance between performance and cost among diffusion-based models, maintaining high generation quality and controllability with substantially lower computational cost.

| Model | BRIDGE | Diffusion-TS | TimeGAN | TimeVAE | TimeVQVAE | GT-GAN |
| --- | --- | --- | --- | --- | --- | --- |
| Inference | 27.7 | 142.8 | 6.3 | 3.84 | 4.75 | 7.23 |
| Training | 356.6 | 654.7 | 723.1 | 16.67 | 157.1 | 454.3 |

**[Q 3] Can BRIDGE generalize to unseen domains without fine-tuning? Are there cases where prototype alignment fails?** Our framework was explicitly designed to promote cross-domain generalization by leveraging semantic prototypes and text conditioning. In our experiments (see Table 3), **BRIDGE was able to generate high-fidelity time series in unseen domains (e.g., Stock, Web) without any fine-tuning**, suggesting that the learned semantic structure and prototype conditioning do generalize beyond the training domains.
While we did not observe clear prototype misalignment in these settings, we agree that such cases may be possible, particularly in domains with very different temporal dynamics or semantics. We will explore this further in future work by evaluating on a broader set of domains and investigating potential prototype refinement strategies. --- Thank you again for your thoughtful advice. We will certainly incorporate the discussions in the revised version and include references on prompt optimization and LLM-based dataset generation as per your suggestion. Please feel free to raise any further questions—we greatly appreciate your ongoing input! --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the reply, which has addressed my concerns. I have raised my score. --- Reply to Comment 1.1.1: Comment: Thank you so much for your kind and supportive feedback. We truly appreciate the time you took to review our submission and response. Your encouraging comments and support in raising the score mean a great deal to us.
Summary: The paper considers how to train a generator for time series (the time series generation problem). A multi-step system (process) is created that includes LLM prompts, time series features and specifications, and training. There is also a two-team multi-agent method embedded for some optimization within the system. Several time series data sets are used, and numerical comparisons are made to some other methods. The primary contribution is some generalization across different time series training data. The numerical results are somewhat mixed in various measures of success. ## update after rebuttal The authors have provided useful explanation and discussion of their work. The overall concept and approach is interesting, and I have raised my score. However, I am still skeptical that the time series application is a good one for this work, and believe that a good generator can be trained without using the semantic approach. Claims And Evidence: Claims of generality are not very clear. The time series specifications (or features) are hand-crafted, and a rich enough set of these is selected so as to be sufficient for modeling the various time series data. There is little or no formal theory or convergence analysis; the results are entirely based on tuning the system to work with the data sets considered. The extent and impact of the human interventions aren't clear, and this leaves the reader wondering about the overall approach and outcomes. Methods And Evaluation Criteria: The data sets are varied to some extent. However, it isn't clear that existing time series signal decomposition and feature extraction methods couldn't be applied to this data and used for training a generator. The testing here is against other similar algorithms, and the advance seems limited to being somewhat more generalized over the example set.
The system is so highly tuned with so many pieces that, although the entire approach seems reasonable, the reader can't easily judge the overall contribution or outcomes. Overall, it simply isn't clear that a semantic interface is a good idea for this problem. Having said that, the paper is most interesting in terms of the overall system approach, but the contribution to the application (time series analysis and generation) is very small. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: It is not clear how to evaluate the many aspects of filtering, human interactions, and selected time series features. Supplementary Material: Yes. The discussion of the multi-agent optimization is at a high level but lacks technical details. Apparently this is a form of optimization? Relation To Broader Scientific Literature: The contribution to time series analysis and generation is minor. This is because the features and specifications needed amount to many constraints and properties of the observed time series, and these are hand-crafted until the semantic specification of these for an LLM leads to something useful. So if the point is to show an example of using an LLM with a generator, then the approach is somewhat interesting. But if the true goal is a time series generator, then adding in an LLM creates more ambiguity than it solves, and alternative methods for training generators should be found. Essential References Not Discussed: The paper is not well linked with (decades of) time series literature. Other Strengths And Weaknesses: Please see comments in other sections. ***Added after rebuttal.*** The authors have provided useful explanation and discussion of their work. The overall concept and approach is interesting, and I have raised my score. However, I am still skeptical that the time series application is a good one for this work, and believe that a good generator can be trained without using the semantic approach.
Other Comments Or Suggestions: It would be very good to clearly show the hand-tuned and human interaction portions. Can this framework be applied to other applications? Why is the multi-agent approach a good one? Isn't this just an optimization problem? What other optimizers could be applied? Questions For Authors: Given the hand crafted selection of time series specifications, then why is the LLM-based approach a good one? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's time and feedback, especially the high-level perspective on the entire system, which will guide the future expansion of our work. Thank you! #### **General Comments (GC):** **[GC 1]** *Claims of generality are not very clear.* *"The time series specifications (or features) are hand-crafted, and a rich enough set of these is selected."* Section 4.1 generates text templates, which are then applied to TS data, with **an LLM filling in general, domain-agnostic statistical features** to produce the final text. Thus, **time series specifications are automatically generated by the LLM based on templates** from the multi-agent system, **rather than being manually crafted or selected by human intervention**. We will make this clearer in future revisions. **[GC 2]** *"The extent and impact of human interventions are unclear."* There are only two points in our work where human actions are involved: 1. After collecting the initial text templates **(Section 4.1 Step 1)**, LLM prompting filters them to ensure no dataset-specific information is included. A human double-checks the results to prevent potential data leakage. 2. A human assessment (Section 6.2) serves only as an evaluation metric and does not affect model training. **[GC 3]** *"It isn't clear that existing time series signal decomposition and feature extraction methods couldn't be applied to this data and used for training a generator."* We tried rule-based text generation and Seasonal-Trend Decomposition, but both methods were ineffective in generating useful descriptions (**see the first paragraph of Section 4 and Appendix A9**). **[GC 4]** *"The contribution to time series analysis and generation is very small."* Our framework is designed specifically to address key challenges in text-controlled TS generation, such as the lack of aligned data, controllable generation, and domain generalization.
We integrate multi-agent reasoning for data construction, semantic prototype conditioning, and a diffusion-based generator to address these challenges. Within the multi-agent system, we propose tailored evaluation criteria like trend description accuracy, seasonality, completeness, and clarity (Appendix A.5) to assess TS generation. We also explore the impact of different text types on generation, showing that pattern descriptions and concise summaries are more effective (Section 6.3). These insights offer valuable takeaways for time series tasks. #### **Other Comments (OC):** **[OC 1]** *"It would be very helpful to clearly show the hand-tuned and human interaction portions."* We clarify that there is no hand-tuning or human intervention for time series features. Please refer to our responses to **[GC 1]** and **[GC 2]**. **[OC 2]** *"Can this framework be applied to other applications?"* Great suggestion! The modular structure could support future applications beyond TS generation, including image and video. This is an exciting direction for future work. **[OC 3]** *"Why is the multi-agent approach a good one? Isn't this just an optimization problem? What other optimizers could be applied?"* We chose a multi-agent system over a single-agent or LLM-prompt approach for key reasons. Direct LLM prompting was ineffective in generating useful descriptions (see **[GC 3]**), and single-agent systems often suffer from overconfidence, leading to errors without correction [1]. Multi-agent systems allow agents to critique each other's outputs, improving reliability and transparency by assigning distinct roles. Our multi-agent approach goes beyond scalar optimization, involving tasks like searching for TS text online, summarizing templates, generating text, evaluating alignment, and refining for better templates.
While we recognize reinforcement learning could optimize a single LLM, it faces challenges: (1) a large optimization space, as RL starts from scratch, while our method refines existing templates; (2) high computational demands due to model training, whereas our agents require no retraining or fine-tuning; and (3) sparse feedback from using final generation performance as a reward, while our agents themselves evaluate text clarity, completeness, etc. (Appendix A5), improving refinement efficiency. [1] Enhancing LLM Reasoning with Multi-Path Collaborative Reactive and Reflection agents (https://arxiv.org/pdf/2501.00430) #### **Question**: *"Given the hand crafted selection of time series specifications, then why is the LLM-based approach a good one?"* Please kindly refer to our responses to **[GC 1]** and **[GC 2]** for the clarification on avoiding human intervention in feature selection, and to **[OC 3]** for the justification of the multi-agent system. As general statistics with simple rule-based texts are ineffective for TSG (see Section 6.3), we use a multi-agent system. If you have any further questions or concerns, we'd be happy to provide more information. Looking forward to your response and continued discussions! --- Rebuttal Comment 1.1: Comment: Thank you for your response to my and the other reviews. These detailed comments help with understanding. Based on the reviews and responses, I will raise my score. I think that the overall ideas and approach represent interesting research, and although I'm not convinced that time series is a useful problem to study, nevertheless the general research approach is interesting. Some comments about the time series aspects: I do continue to have concerns about the general motivation for this particular application, and feel that the major contribution and interesting research is more about the overall method and approach, and not the focus on time series.
I am skeptical that creating good generators for these various time series requires any kind of text-based reasoning. It isn't clear that the 'rule-based' comparison is very useful. The method relies on internet-available information with semantic descriptions of time series aspects that must be related back to mathematics. So there can be no real guarantees of reliability, and perhaps even bad influences could come into the picture. If a time series generation problem were of significant interest, then domain expertise would be needed to bring such guarantees. It isn't clear that the work brings any new 'features' or TS characteristics that aren't already explored in the TS literature. The meta-agent framework is interesting, although it seems other recent frameworks could also be used. Consequently, we have an ML research problem within an ML research problem. This is a challenge for both the authors and the reviewers. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for taking the time to engage with our work in such depth, offering constructive comments, and kindly raising the score. We truly appreciate your thoughtful suggestions, which have been valuable in helping us improve both the clarity of our presentation and the direction of our ongoing efforts. Your question regarding the general motivation of our work and your suggestion that the framework may extend beyond time series are both insightful and encouraging. Controllable time series generation is itself an important and challenging research problem, with broad real-world relevance. We aim to use text to make the generation process more interpretable and generalizable, which are important in supporting downstream tasks. While we focused on time series due to the unique challenges of controllable generation in this domain, we fully agree that the underlying methodology could extend to other modalities like image and video.
This is a direction we find exciting and will actively explore in future iterations of this work. We also welcomed your thoughtful questions on the role of domain expertise in using external textual sources and the novelty of the TS features. Strengthening reliability in sensitive domains such as clinical applications is an important future direction, and we will work on extending our framework to incorporate domain expertise. While we do not explicitly introduce new TS features, our use of text embeddings effectively enriches TS representation and supports controllable generation. We believe that further exploring how textual abstractions interact with traditional TS features is a promising direction for future work and one that could offer meaningful contributions to the community. Thank you as well for your comments on the multi-agent design. We will include the experimental results and discussions mentioned in our response to Reviewer cEkx to further clarify this component. Additionally, we plan to explore simpler or more targeted variants that retain interpretability while reducing system complexity. Once again, thank you for your generous feedback and thoughtful suggestions. Your comments have helped us sharpen the scope of our work, identify promising extensions, and consider broader implications. We are very grateful for your support in improving this paper.
High-Dimensional Prediction for Sequential Decision Making
Accept (oral)
Summary: This paper introduces a general framework for constructing sequential decision-making strategies by predicting the states of the environment. The basic idea is that if we can estimate the *future* states of the environment, we can make decisions based on these predictions. However, the challenge is that, since the states are generated adversarially, it is not possible to predict them accurately. Nevertheless, this paper demonstrates that for many sequential decision-making problems, it suffices to construct a predictor that is *unbiased* when weighted by certain events. The main technical contribution is a general prediction strategy that achieves low weighted bias for arbitrary events of polynomial size. The construction is based on a clever reduction to the problem of prediction with expert advice. Furthermore, the paper applies the general state prediction strategy to several concrete problems, including online multicalibration, swap regret, and online combinatorial optimization. I find the idea of reducing sequential decision-making to unbiased state estimation both interesting and original. I expect that this technique will be applicable to a broader range of problems and deserves wider recognition in the community. Therefore, I strongly recommend accepting this paper. Claims And Evidence: The claims are clear and supported by rigorous proofs, which are sound. Methods And Evaluation Criteria: This is a pure theory paper. The authors apply their method to a broad range of problems, which is convincing. Theoretical Claims: I checked almost all the proofs in the main text and am convinced that they are correct. However, the paper also provides many additional results in the appendix, which I did not check and therefore cannot guarantee their correctness. 
Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: This paper provides a general methodology for resolving sequential decision-making problems in adversarial environments, which unifies and simplifies many prior approaches. Essential References Not Discussed: N/A Other Strengths And Weaknesses: One complaint about the paper is that the notation is quite difficult to parse, e.g., the subscripts are messy. Moreover, it seems that the derived regret inevitably depends on the size of the decision space, which may limit its applicability to large decision spaces. Although the authors provide an example in the context of online combinatorial optimization, it relies on a very specific additive property of the utility function. Other Comments Or Suggestions: - In line 259 (right), the definition of ECE must be $\sup_{p\in [0,1]}$, not $\sum_{p\in [0,1]}$. - On page 5, you use both $w_{\sigma,i,j}^t$ and $w_{t,(\sigma,i,j)}$; I suggest sticking to $w_{\sigma,i,j}^t$. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful reading and positive assessment of our paper, and for the feedback! We will make sure to address the notational comments in our revision.
Summary: This paper introduces a two-step framework for decision-making in an online adversarial setting, in which a "master" algorithm makes predictions of vector-valued "states" that encode the decision-relevant features of the environment by achieving low bias subject to a collection of conditioning events. Downstream agents may then consume those states through a simple best-response strategy to enjoy strong regret guarantees themselves, such as online multicalibration or swap regret. The "master" algorithm is efficient so long as the number of conditioning events is polynomial or, in settings with combinatorially large action spaces, it has access to an optimization oracle. Crucially, the power of the master algorithm comes from properly defining a set of conditioning events, $\mathcal{E}$, which leads to the following downstream results in the paper. The authors show three main applications of this "master" algorithm (and slight variants of it): 1. Efficient online multicalibration with an $O(T^{2/3})$ rate in the expected calibration error metric. 2. A polynomial number of downstream agents can *all* achieve diminishing swap regret by just "best responding" to the "master" algorithm. 3. A polynomial number of downstream agents can *all* achieve diminishing "conditional" regret in a combinatorial optimization problem over a polynomial number of conditioning events *efficiently*, given an optimization oracle for the combinatorial problem. Claims And Evidence: In my reading of the paper, I saw that there was one major claim in **Theorem 2.4**, which gives the bias guarantee of the "master" algorithm, Algorithm 1. The other claims in the paper follow from applying Theorem 2.4 in various ways by changing the events $\mathcal{E}$ against which Algorithm 1 ensures unbiased prediction. In particular: 1.
Section 2.2 shows an algorithm for achieving $O(T^{2/3})$ online (one-dimensional) multicalibration error by defining the events as the collection of groups crossed with a discretization of $[0, 1]$. 2. Theorem 3.7 demonstrates that $n$ downstream agents with $K$ actions can simultaneously achieve diminishing $O(\sqrt{KT})$ swap regret by best-responding to the "master algorithm" states, so long as the master algorithm is instantiated with the event set as a collection of $nK$ "best-response" events, or indicators that agent $i \in [n]$'s best response to the state equals action $a$ (of the $K$ actions). 3. Theorem 4.1 demonstrates that $n$ downstream agents with combinatorial action sets, granted oracle access to an offline optimization oracle and playing best-response actions, achieve $O(d \sqrt{T})$ regret, where $d$ is the number of "base" actions in the action sets. I checked the proofs and proof sketches that are present in the main body of the paper, and I believe they are correct. Methods And Evaluation Criteria: The paper is a theoretical work in online learning, and the evaluation criteria of regret (specifically, notions such as online multicalibration and swap regret) are well-established definitions in the literature. The proposed evaluation criteria make sense. Theoretical Claims: I checked all the proofs that were included in the main body, and they were correct in my understanding. Experimental Designs Or Analyses: N/A -- this paper is mainly a theoretical work. Supplementary Material: I skimmed Section C of the appendix on connecting prediction and decision making to get better motivation for the main problem the authors pose. Other than that, because the proofs and sketches were all contained in the main body, I did not consult the Appendix further. Relation To Broader Scientific Literature: The key contributions of the paper are situated, in my view, in the literature on online learning, sequential decision-making, and calibration.
Specifically, it follows a line of work started in [BM07] for guaranteeing online adversarial learning guarantees against a collection of *conditioning events*, originally termed "time selection functions." This original work motivates the authors' focus on swap regret, a very strong regret guarantee that has further implications in algorithmic game theory and correlated equilibria. The authors also design an algorithm for the online adversarial version of multicalibration, a desideratum originally introduced for the algorithmic fairness literature by [HKRR18] (albeit in a batch/i.i.d. formulation). Finally, the work touches on the literature on optimization in combinatorially large action spaces, where the Learner is assumed to have access to an offline optimization oracle. These "oracle-efficient" regret guarantees follow the early work of [KV05]. [BM07] Blum, Avrim, and Yishay Mansour. "From external to internal regret." Journal of Machine Learning Research 8.6 (2007). [HKRR18] Hébert-Johnson, U., Kim, M., Reingold, O., & Rothblum, G. (2018, July). Multicalibration: Calibration for the (computationally-identifiable) masses. In International Conference on Machine Learning (pp. 1939-1948). PMLR. [KV05] Kalai, A., & Vempala, S. (2005). Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3), 291-307. Essential References Not Discussed: To my knowledge, the authors discuss all the work relevant to the results of this paper. Other Strengths And Weaknesses: **Strengths** - The paper is well-written and the proofs of the main theorems are organized in an easy-to-understand manner. - The "Predict-then-Act" framework the authors propose is interesting in its own right, and it presents a different model of looking at sequential decision-making that I believe is appealing and worth further investigation. - The techniques for proving the main results are clean and appealing.
I particularly enjoyed how the arguments take a similar form to the omniprediction/outcome indistinguishability literature, in which the "states" the master algorithm supplies are, in effect, indistinguishable from the true states so long as the downstream decision-makers are concerned with an appropriate collection of conditioning events. **Weaknesses** - I believe the main weakness of this work is that the "Predict-then-Act" motivation for these guarantees can be muddled in the exposition. One suggestion I have would be to begin the introduction with a motivating example to make this framework clearer, as readers may be more familiar with the online learning setting where there is a single agent making actions over $T$ rounds, possibly with access to a context. This additional "layer" of a "master" algorithm producing states for *multiple* agents to consume and then act on is interesting, but I found it difficult to follow at times without referring back to a guiding example. - It would be helpful to present, in a more self-contained way, existing rates in the literature for the three applications you consider so the reader can disentangle and better understand the optimality of the results you propose. Perhaps a table, even in the Appendix, comparing the results to existing results in online multicalibration, swap regret, and online combinatorial optimization would be helpful. Other Comments Or Suggestions: A couple of small suggestions: - Page 4: I believe that the $w_{\sigma, i, j}$ should have a $t$ subscript as well. - Page 4: I would suggest motivating the form of "Unbiased Prediction" in Definition 2.2 with the existing "small-loss" rates in case readers aren't familiar with these faster rates for regret. Or, to prevent confusion, it might be helpful to just denote "Unbiased Prediction" as decaying sublinearly, and then add a remark that the goal is to achieve the "fast rate" decay that MsMwC affords.
Questions For Authors: I had one main question for the authors concerning the multicalibration result: 1. Perhaps this is from my lack of familiarity with existing rates for multicalibration, but does the $O(T^{2/3})$ rate come from the fact that the algorithm you propose enjoys the "fast-rate" guarantee that scales with $n_T$ which allows a sharper tuning? In the first paragraph in Section 2.2, you claim that the fast $O(\sqrt{n_T})$ rates are useful, and I was wondering why exactly. I assumed that it was because existing methods for online multicalibration don't exploit such "small-loss" regret guarantees. This might also be worth clarifying in the main paper (second bullet point of "Weaknesses"). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading and positive assessment of our paper, and appreciate the insightful comments! We agree with, and will make sure to address, your comments in the revision. Among other things, we completely agree that starting with a single-agent case would further enhance the exposition and readability; in fact, the only reason that prevented us from doing so was the page limit on the submission. Also, having a condensed reference paragraph/table for the prior state of the art in the directions of our applications will help further clarify the landscape for the readers. Furthermore, we completely agree that first defining unbiased prediction as requiring sublinear ($o(T)$) bias, and only then instantiating it with the concrete regret bounds, is an excellent expository idea --- and in fact, we had our presentation laid out just like that in the pre-submission manuscript, and had to cut it down to the direct small-loss definition purely because of the page limit. We now give a clarification regarding your question about the utility of sharp "small-loss", group-specific rates. In the multi-calibration example, please note that for each group, as stated on line 266, the bound is $O(T/m + \sum_{i \in [m]} \sqrt{n_i})$, where $n_i$ denotes the number of rounds on which bucket $i$ got played. As mentioned further in the argument, by concavity of the square-root function, we can conclude that the worst-case split of the $T$ rounds into the $m$ buckets is when each bucket gets played equally often, i.e., the worst-case bound is $O(T/m + m \sqrt{T/m}) = O(T/m + \sqrt{m T})$, which, when choosing the optimal $m$ (that equalizes both terms), results in $m \sim T^{1/3}$ and thus in the overall per-group bound of $O(T^{2/3})$. Meanwhile, suppose our framework only gave $\sqrt{T}$ bounds for each group rather than $\sqrt{\mathbb{E}[\text{number of rounds on which each group appeared}]}$. Then, the calibration error bound for each group would simply be $O(T/m + \sum_{i \in [m]} \sqrt{T}) = O(T/m + m \sqrt{T})$, which, due to an extra $\sqrt{m}$ factor in the right-hand term, leads to the choice of $m \sim T^{1/4}$ and hence implies a worse $O(T^{3/4})$ error bound. In this way, our ability to use our framework without any modifications to get the (currently best-known) rate very concretely hinges on the "small-loss" per-group guarantees. And indeed, as you correctly assumed, the prior method of Gupta et al. (2021) didn't achieve such small-loss bounds and could thus only achieve the 3/4 exponent.
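The bucket-tuning argument in this rebuttal can be checked numerically. Below is a minimal sketch (constants dropped; the brute-force grid over the number of buckets $m$ is an illustrative shortcut for the optimal tuning, not the paper's actual algorithm):

```python
import math

def with_small_loss(T, m_max=5000):
    # Small-loss per-group bound: O(T/m + sqrt(m*T)); optimal m ~ T^(1/3).
    return min(T / m + math.sqrt(m * T) for m in range(1, m_max + 1))

def without_small_loss(T, m_max=5000):
    # Plain sqrt(T)-per-bucket bound: O(T/m + m*sqrt(T)); optimal m ~ T^(1/4).
    return min(T / m + m * math.sqrt(T) for m in range(1, m_max + 1))

# Fit the exponent alpha in bound(T) ~ T^alpha from two horizon lengths.
T1, T2 = 10**6, 10**8
alpha = math.log(with_small_loss(T2) / with_small_loss(T1)) / math.log(T2 / T1)
beta = math.log(without_small_loss(T2) / without_small_loss(T1)) / math.log(T2 / T1)
# alpha comes out near 2/3 and beta near 3/4, matching the rebuttal's claim.
```

The fitted exponents recover the $T^{2/3}$ rate under the small-loss bound and the weaker $T^{3/4}$ rate without it.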
Summary: The paper advances sequential prediction in high-dimensional settings while maintaining unbiasedness across a possible set of events. For this, the paper utilises the multi-scale multiplicative weights with correction (MsMwC) algorithm and the standard min-max machinery that is common in this line of work. This prediction, unbiased across a set of events, is then used to derive a range of other properties, such as swap-regret control for downstream decision makers who will best respond to such predictions, online multi-calibration with favorable rates, and control over conditional regret in online optimisation problems. Claims And Evidence: The paper makes convincing claims with proper evidence. Methods And Evaluation Criteria: The paper is primarily theoretical, and is rigorous in that sense. Theoretical Claims: I haven't checked the written proofs word by word, but I do get the intuition and the argument of the proofs. Experimental Designs Or Analyses: Not applicable. Supplementary Material: Skimmed it. Relation To Broader Scientific Literature: The paper is certainly relevant on multiple fronts. While the paper does not propose significant technical tools (Algorithm 1 is based on linearising the bias objective and then using the standard min-max-theorem argument to control the bias), the work itself is consequential, as it encapsulates a range of related yet distinct problems: for example, multi-calibration in the online setting with rates of $T^{2/3}$, decision-making for any polynomial number of agents---achieving such decision-making guarantees algorithmically is challenging---and conditional regret guarantees. This work unifies multiple lines of research---calibration, no-regret learning, and combinatorial optimisation---into one efficient framework that handles arbitrary polynomial-size event families. Overall, this is a solid paper.
Essential References Not Discussed: not applicable Other Strengths And Weaknesses: The paper is generally well written and easy to follow despite being dense. As stated above, the proposed approach is general (however, there are costs to generality, and I'd appreciate some discussion around that). Other Comments Or Suggestions: None Questions For Authors: Some questions that could use clarification: 1. What are the practical limits of specifying the finite collection of events E? Are there heuristic or data-driven methods for identifying the most relevant events, especially in high-dimensional contexts? Is it feasible to adaptively add or remove events over time? I assume one can plug in new events sequentially, or is my understanding wrong? 2. Related to the above: guaranteeing no-swap-regret, as proposed, is based on explicitly constructing ‘best-response events’ for each agent’s utility, and knowing there are exactly n decision-makers. However, practically one might not know an agent’s utility function in advance, or there is only partial information, plus new agents with different utilities may arrive at any time step. Could the authors discuss whether their approach can be adapted to handle unknown or changing utilities? In particular, is there a way to achieve similar no-swap-regret guarantees without having to explicitly compute best-response events for each agent ahead of time? I assume traditional calibration (binary forecasts) results in swap-regret guarantees without any specification of the utility function? 3. The paper treats each round's state $s_t$ as "sufficient statistics" for downstream decision-making, implicitly assuming that an agent's utility depends only on these $d$ coordinates. What is the nature of the sufficient statistics: is it just a high-dimensional object, or does it encapsulate higher-order moments like variance, etc.? I assume not, as the decision-maker still has a linear utility in terms of the state.
So I guess I'm asking what does "sufficient statistics" mean in the context of the paper, as it states sufficient for all decision-makers? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading and positive assessment of our paper, and for the very insightful feedback! Indeed, as pointed out in your review, our framework is quite general, and discussing the costs of this generality is very pertinent and we will do so in the revision. To address specific points you have raised: 1. Regarding how to specify, and work with, the event collection. Indeed, our framework allows you to flexibly add new events to the event collection at any time point, without advance notice to the algorithm, as well as — conversely — terminate events that are not useful anymore. All this while barely incurring extra cost (cf. log |E| in the bias bound) for doing so. This is enabled by the fact that our algorithm at each round only works with events that are currently active and doesn’t require knowledge about other events, whether or not they were active before or will be active later. In terms of data-driven approaches, indeed in the presence of high-dimensional covariates one can for instance take a dimensionality-reducing bounded representation map, which, when applied to the covariates, results in much smaller-length vectors e.g. in the [0, 1]^d-hypercube, and define events e.g. as the outputs (or a more complex function) of this representation. E.g. the case of linear, bounded, representations give rise to the “multiaccurate” or “multigroup” setting in the literature on algorithmic fairness, and the groups there can be defined either normatively as e.g. demographic groups (age/income/etc), or — using a data-driven approach as you suggest — as adaptively identified high-bias regions in the data that would make the most sense to debias your predictions on. 2. 
Regarding avoiding explicit computation of best-response events for all agents: This is a great question, and answering it in specific settings is possible and has led to novel follow-up results, in particular addressing the desideratum you describe where we may wish for no swap regret for all downstream agents with utility functions in a certain class. For instance, the follow-up work of Roth and Shi (2024) give a simple method for doing this using our algorithm. Their observation is that for any (linear) utility function, best response regions are convex --- so it suffices to use our algorithm to produce unbiased predictions for all regions defined by the convex hull of points in our prediction space. Each of these defines an event that can be plugged into our algorithm. For one dimensional outcome spaces, these are simply intervals. This gives us a finite collection of events that includes the best response events for all possible downstream agents, even though we don’t have knowledge of their utility functions. The upshot of the described construction of Roth and Shi is that relative to implementing full calibration (which you point out in your review as another agent-agnostic approach), you get better regret rates — O(sqrt(T)) vs. O(T^2/3) in one dimension, and much better improvements in higher dimensions, since bounds for full online calibration scale exponentially with the dimension. Again, their bounds follow from simply instantiating our algorithm with the right choice of events. We look forward to further applications of our framework in this vein. 3. Regarding the notion of “sufficient statistics”: We have referred to the state vectors’ entries as “sufficient statistics” in the intro, to informally convey their meaning. Rigorously, as you indeed say, we define them as the (finite as we require) collection of quantities that the agent’s utility mapping is a function of, for every action. 
Whether or not they represent some statistics of some implicit/underlying distributions on which the agents’ utilities depend, does not affect our framework. In this work, we focus on establishing the foundational case where the utility is an affine function of these statistics. However, this linearity should not be viewed as some very restrictive assumption that would preclude the use of the framework for nonlinear utilities, but rather as the key tool for nonlinear extensions: this is similar to how linear optimization serves as the core of most convex/nonconvex optimization methods. To be more specific, lots of very important classes of (utility) functions have small functional bases (needed to keep our event collection small), and linearize over the states once an appropriate representation map is applied to the state vectors. E.g. for polynomials of fixed degree, for the states we would consider the moments up to the d-th, and this can be taken far beyond polynomial utilities, too, provided some other smoothness guarantees on the function class are given. For instance a very recent preprint (https://arxiv.org/abs/2502.12564) looks at such extensions and once again directly applies our framework and algorithm on those. --- Rebuttal Comment 1.1: Comment: thanks for the detailed response to my comments. I'm happy with the current submission and would like to see at the conference, however some of these clarifications can go into the main text.
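The moment-based linearization described in point 3 of the rebuttal above can be illustrated with a tiny numeric check: a polynomial utility of fixed degree is linear in the state of moments $(1, \mathbb{E}[Y], \mathbb{E}[Y^2], \dots)$. The distribution and coefficients below are illustrative assumptions:

```python
probs = [0.2, 0.5, 0.3]      # toy distribution over outcome values
vals = [0.0, 1.0, 2.0]
coeffs = [1.0, -0.5, 0.25]   # polynomial utility u(y) = 1 - 0.5*y + 0.25*y^2

# State = moments up to the polynomial degree; expected utility is
# then a linear (affine) function of this state.
moments = [sum(p * v**k for p, v in zip(probs, vals)) for k in range(3)]
expected_utility = sum(p * sum(c * v**k for k, c in enumerate(coeffs))
                       for p, v in zip(probs, vals))
linear_in_moments = sum(c * m for c, m in zip(coeffs, moments))
# The two quantities agree exactly, up to floating-point error.
```

So an agent with a degree-$d$ polynomial utility consumes only the first $d$ moments, which is the sense in which the state serves as a sufficient statistic here.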
Summary: This paper introduces a general decision-making framework for the design of efficient algorithms with the objective of producing multi-dimensional forecasts in adversarial, sequential decision-making environments. The main idea is to first design a way to sequentially predict the future state of the environment, ensuring low bias across a polynomial number of conditioning events, and second to design regret-minimizing agents that essentially assume perfect knowledge of the next state. Therefore, the framework shifts the traditional focus from direct regret minimization to predicting a sequence of "sufficient statistics" of the environment, as it suffices for the agents to best-respond with respect to the predicted states. By guaranteeing small cumulative bias of the state predictions, the overall regret of the agents can be bounded in a nontrivial way, while guaranteeing efficiency at the same time by cleverly designing the event collection of interest for the sequential decision-making problem at hand. The authors demonstrate that these almost unbiased state predictions can be used to guarantee strong conditional regret bounds, even when dealing with complex online combinatorial optimization problems and multiple agents. They also show that the algorithm obtained via this framework can efficiently achieve the $T^{2/3}$ rate in online multicalibration. Claims And Evidence: The theoretical claims are clearly stated with complete proofs. The authors provide a detailed description of the proposed framework and the resulting algorithms, which is well-structured and easy to follow. Methods And Evaluation Criteria: N/A Theoretical Claims: Yes, I verified the proofs in this paper and they seem correct. Experimental Designs Or Analyses: N/A Supplementary Material: Yes, I briefly reviewed the supplementary material.
Relation To Broader Scientific Literature: The authors provide a comprehensive overview of the related prior work and discuss their contribution with respect to existing results. My only concern is that the paper, as the authors explicitly state, had an extended preprint version that appeared in 2023. Even so, the authors are careful in pointing out that the proposed method led to follow-up work that achieved improved results for specific sequential decision-making problems. However, there is a chance that a more recent paper could potentially already contain improved results. The authors should further clarify this specific point, for instance by explicitly stating that the proposed technique is not obsolete and currently remains a main reference. Essential References Not Discussed: To the best of my knowledge, the paper seems to discuss the relevant references. Other Strengths And Weaknesses: I found the proposed methodology to be very interesting and the paper to be well-written. The authors provide a clear and detailed description of the proposed framework and the resulting algorithms, which is well-structured and easy to follow. I also appreciated the instantiation of the framework to specific problems, which not only demonstrates the versatility and effectiveness of the proposed approach by deriving improved results in relevant problems, but also helps the reader in further understanding how to apply the framework to other problems. Other Comments Or Suggestions: - Line 223, second column: shouldn't there be some expected value since $n_T$ is defined as an expectation (Definition 2.2) whereas the realized predictions $\hat s_t$ are random variables? - Line 247, second column: "(1)" seems to be superfluous. - Line 260, second column: the sum $\sum_{p \in [0,1]}$ is somewhat ill-defined at first glance because of the uncountable range. Then, given the term in the sum, it would be clearer (and equivalent) to have $p \in \{p_1, \dots, p_T\}$.
Even just a comment would make this clear. - Line 271, second column: the first comma should be a period. - Line 298, second column: $nK$ instead of $nd$. - Line 368, first column: $O(nK)$ is actually just $nK$ to be precise. - Line 384, second column: "for any set" instead of "for any of a set". Questions For Authors: Could you clarify the relevance of the current work with respect to any potential work that extended or improved the framework since the publication of its preprint version? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading and positive assessment of our paper, and appreciate the detailed comments. We will incorporate, in the updated version of the paper, the modifications proposed in Other Comments and Suggestions. As for the relevance of the preprint to future work that followed up on it, such as the cited references in the submission: Indeed, our proposed framework and the algorithm we develop within it is not obsolete and remains state of the art and continues to be used. Subsequent work has not generalized or subsumed our general framework. Rather, follow-up papers have used our framework and algorithm and its generic guarantees as a technical tool towards achieving goals in concrete settings (such as high-dimensional contract design, or no regret guarantees for infinite families of downstream agents, etc) — which is the type of follow-up work that we have envisioned for our paper from the beginning. To summarize, our framework remains state-of-the-art, and subsequent work has used it for various interesting applications. Please feel free to look at the responses to other reviewers for some brief points on a couple of follow-ups and on ways in which these follow-ups use our framework.
Joint Learning of Energy-based Models and their Partition Function
Accept (poster)
Summary: This paper proposes a min-min training paradigm for simultaneously training energy-based models and their log-normalization constants in a combinatorially-large space. It demonstrates that this approach can effectively approximate the Maximum Likelihood Estimation (MLE) objective and extends it to Fenchel-Young losses. Notably, the training of the objective function does not rely on traditional MCMC sampling; instead, it only needs to sample from the prior distribution and use stochastic gradient descent. Experiments are conducted on two tasks: multi-label classification and label ranking. *Strengths* - The insight that the Lagrange multiplier exactly coincides with the log-partition function in the MLE (logistic loss) case is good. Based on this insight, the proposed min-min formulation for MLE of EBMs seems to be new. - The proposed min-min training method is supported via theoretical development. - Experiments investigate variations in model architecture, regularization functions, and the number of sampled points, and show the effectiveness of the proposed method. *Weaknesses* - In addition to learning EBM model parameters, learning partition functions simultaneously is interesting. But the experiments conducted in this paper do not show the benefit of such joint learning. - Missing relevant references and comparisons Learning probabilistic EBMs in combinatorially-large discrete spaces has been studied with substantial progress in recent years, particularly in language modeling [a-d]. The discrete space in language modeling is V^k, where V is the vocabulary size and k is the sentence length. A variety of methods have been proposed for learning discrete EBMs, including augmented stochastic approximation (AugSA) [b] and noise contrastive estimation (NCE) [c]. Particularly, the AugSA method has been developed to be able to jointly estimate the model parameters and partition functions (up to a constant).
The idea of formulating equations with partition functions and then estimating them in this work is very similar to AugSA. Connecting and comparing to these previous studies is needed. - Limitation in experiment evaluation The experiment comparisons are only conducted between different versions of the proposed method, by varying settings such as unary vs. pairwise, different network architectures, and different losses. Presenting at least one experiment comparing the proposed method and one of the SOTA methods for learning discrete EBMs is necessary; otherwise, the effectiveness of the proposed method is not convincing. Regarding the results for multilabel classification, only learning curves in Figure 1 and the learned log-partition functions in Figure 2 are shown in the main text. It is not clear why the numerical metrics are not reported in the main text, which should be major results. [a] Energy-Based Models with Applications to Speech and Language Processing. Foundations and Trends® in Signal Processing, 2024. [b] Learning Trans-dimensional Random Fields with Applications to Language Modeling. TPAMI, 2018. [c] Learning neural trans-dimensional random field language models with noise-contrastive estimation. ICASSP, 2018. [d] Residual Energy-Based Models for Text Generation. ICLR 2020. *Questions* - The right-hand sides of Eq. 2 and 5 are the same, which is confusing. - Line 243: "We have therefore obtained convergence rates for learning EBMs in arbitrary combinatorial spaces." Not clear. - In Section 3.3, where does the min-max formulation come from? - Figure 3: what are plots (a) and (b)?
Claims And Evidence: see above Methods And Evaluation Criteria: see above Theoretical Claims: see above Experimental Designs Or Analyses: see above Supplementary Material: see above Relation To Broader Scientific Literature: see above Essential References Not Discussed: see above Other Strengths And Weaknesses: see above Other Comments Or Suggestions: see above Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and relevant references.

> **Presenting at least one experiment comparing the proposed method and one of the SOTA methods for learning discrete EBMs is necessary**

The original Table 3 contained generalized Fenchel-Young (GFY) losses comparisons, and Figures 1 and 3 contained comparisons to logistic regression, which serves as a relevant baseline for structured prediction tasks learned without explicit partition function estimation. Following the reviewers' feedback, we have now added comparisons with **MCMC** and **adversarial (min-max)** training baselines.

**Link to updated Table 1 and Table 3:** https://anonymous.4open.science/r/ebms/tables.pdf

If the paper is accepted, we are going to include these tables in the main text, thanks to the extra page allowed by the camera-ready. For convenience, we also show the new results below:

Table 1 (label ranking)

|Loss|Polytope C|Model $g$|authorship|glass|iris|vehicle|vowel|wine|
|---|---|---|---|---|---|---|---|---|
|Logistic (MCMC)|P|Linear|84.29|80.67|55.56|79.15|49.97|91.36|
||P|MLP|87.90|75.50|97.04|86.34|57.90|91.98|
||B|Linear|88.10|79.33|97.78|82.22|55.99|**96.91**|
||B|MLP|89.68|80.16|97.78|85.88|48.90|91.98|
|Logistic (min-max)|P|Linear|61.01|79.22|61.48|70.20|45.68|65.43|
||P|MLP|70.02|67.03|55.56|68.56|40.22|70.99|
||B|Linear|82.58|78.40|61.48|74.90|52.86|93.21|
||B|MLP|78.50|71.68|42.96|68.76|48.99|83.33|

Table 3 (multilabel)

|Loss|Model $g$|yeast|scene|birds|emotions|cal500|
|---|---|---|---|---|---|---|
|Logistic (exact MLE)|Linear (unary)|61.46|70.64|43.97|63.35|37.31|
||MLP (unary)|60.71|70.81|40.88|62.99|36.24|
||Resnet (unary)|62.27|71.02|40.78|61.50|37.45|
|Logistic (MCMC)|Linear (unary)|61.98|70.53|44.93|61.76|44.61|
||MLP (unary)|60.34|71.57|33.13|62.23|45.70|
||Resnet (unary)|56.57|71.24|27.03|58.98|44.34|
||Linear (pairwise)|61.96|70.50|44.93|61.76|44.61|
||MLP (pairwise)|60.83|71.92|34.61|63.15|45.95|
||Resnet (pairwise)|56.68|70.46|27.08|64.05|44.35|
|Logistic (min-max)|Linear (unary)|52.80|52.29|27.59|61.72|35.08|
||MLP (unary)|58.58|52.54|24.95|60.53|46.08|
||Resnet (unary)|57.53|42.82|25.32|55.85|47.59|
||Linear (pairwise)|52.80|52.29|27.59|61.72|35.08|
||MLP (pairwise)|58.76|51.59|26.71|58.68|43.31|
||Resnet (pairwise)|57.53|42.95|22.35|46.72|**47.67**|

**Technical details:** For the min-max approach, we use optimistic ADAM as the solver, an MLP as the generator, and REINFORCE (the score function estimator) for gradient estimation. For MCMC, we use the standard Metropolis–Hastings algorithm. Similarly to the min-min approach, we tune the learning rate and L2 regularization using grid search.

> **References [a-d]**

Thank you for these references. We will add them.

> **Differences with AugSA [b]**

Thank you for this very relevant reference. We will cite it and clarify the distinctions. First, this paper studies linear conditional random fields for whole-sentence LMs, while we study more general energy-based models. In addition, our method supports the broader class of Fenchel-Young losses (including sparsemax), not just MLE. They indeed propose the idea of optimizing the normalization constant, but in their case it is a constant, while in our case it is a function, parameterized as a neural network that approximates the input-dependent log-partition function. Last, we empirically demonstrate (Figure 2) that our learned $\tau$ generalizes to unseen inputs $x$.

> **Not clear why the numerical metrics are not reported in the main text, which should be major results.**

This was due to space constraints in the initial submission. If our paper is accepted, we plan to use the extra page allowed for the camera-ready version to move the expanded Table 3 (which now includes MCMC and a min-max approach) into the main text.

> **The right hand sides of Eq 2 and 5 are the same, which is confusing.**

Please note that Eq. (2) is using an argmax while Eq. (5) is using a max.
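For readers unfamiliar with the MCMC baseline named in the technical details above, a minimal Metropolis–Hastings sketch over a binary label space is given below. The energy function, single-bit-flip proposal, and chain length are illustrative assumptions, not the paper's actual experimental configuration:

```python
import math
import random

def metropolis_hastings(g, k, n_steps=20_000, seed=0):
    """Sample from p(y) ∝ exp(g(y)) over {0, 1}^k with single-bit-flip
    proposals; the proposal is symmetric, so the acceptance test needs
    only the energy difference g(y') - g(y)."""
    rng = random.Random(seed)
    y = [0] * k
    samples = []
    for _ in range(n_steps):
        i = rng.randrange(k)
        y_new = list(y)
        y_new[i] = 1 - y_new[i]
        if rng.random() < math.exp(min(0.0, g(y_new) - g(y))):
            y = y_new
        samples.append(tuple(y))
    return samples

# Sanity check on a tiny space: with g(y) = sum(y), each label is
# independently 1 with probability e / (1 + e) ≈ 0.73 at stationarity.
chain = metropolis_hastings(lambda y: float(sum(y)), k=3)
marginal = sum(y[0] for y in chain[10_000:]) / len(chain[10_000:])
```

The point of contrast with the min-min approach is that such a chain must be run (and mixed) to estimate gradients, whereas the paper's method only samples from a fixed prior.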
> **Line 243: "We have therefore obtained convergence rates for learning EBMs in arbitrary combinatorial spaces." Not clear.** Our claim refers to the fact that our proposed objective can be optimized using standard stochastic gradient methods, with the convergence rate discussed after Proposition 2. This is achieved without requiring an oracle for the exact partition function (needed for traditional MLE gradient computation) or k-best solutions (needed for traditional sparsemax loss optimization), which are often intractable in arbitrary combinatorial spaces. We will clarify this context in the paper. > **In section 3.3, where does the min-max formulation come from?** We derived the min-max formulation presented in Proposition 3 and Appendix C.4 ourselves. It serves as a comparison point to our proposed min-min formulation and draws parallels to adversarial learning approaches. > **Figure 3: what are plots (a) and (b)?** Thank you for catching that typo. We were referring to the left and center columns of the Figure.
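The score-function (REINFORCE) estimator mentioned in the technical details can be sketched on a toy categorical distribution; the logits and reward table below are illustrative assumptions, and the Monte-Carlo estimate is checked against the closed-form gradient:

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def reinforce_grad(logits, reward, n=200_000, seed=0):
    # Monte-Carlo estimate of d/d(logits) of E_{y ~ softmax(logits)}[reward(y)]
    # via the score-function identity grad = E[reward(y) * grad log p(y)],
    # where grad_j log p(y) = 1{j == y} - p_j for a categorical distribution.
    rng = random.Random(seed)
    p = softmax(logits)
    grad = [0.0] * len(logits)
    for _ in range(n):
        y = rng.choices(range(len(p)), weights=p)[0]
        for j in range(len(p)):
            grad[j] += reward(y) * ((1.0 if j == y else 0.0) - p[j]) / n
    return grad

logits, rewards = [0.2, -0.5, 1.0], [1.0, 3.0, -2.0]
p = softmax(logits)
expected = sum(pi * ri for pi, ri in zip(p, rewards))
exact = [pi * (ri - expected) for pi, ri in zip(p, rewards)]  # closed form
estimate = reinforce_grad(logits, lambda y: rewards[y])
```

This estimator needs no differentiable sampling path, which is why it fits the min-max baseline's discrete generator, at the cost of higher variance than pathwise gradients.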
Summary: It is well known that computing the normalising constants of energy-based models is intractable, and that this is a significant and important problem. The authors propose a simple and elegant technique for discrete models - instead of computing the normalising constant, consider a constrained maximum likelihood optimisation problem with the density constrained to sum to 1, and use Lagrange multipliers to enforce the constraint. Two neural networks are used to jointly learn the energy-based model and the normalising constant, as the title suggests. Experiments are conducted on multilabel classification and ranking. ## update after rebuttal As I indicated in my original review, I found and still find the empirical results a little bit underwhelming. However, I believe the other results in this paper are interesting, and are enough to warrant publication even without very strong empirical results. I'm maintaining my current score of accept, noting that the overall average still seems to be erring on accept. Claims And Evidence: Claims are supported by evidence. Methods And Evaluation Criteria: Evaluation criteria are appropriate. Theoretical Claims: Theoretical claims appear sound. Experimental Designs Or Analyses: The experiments appear sound, although only two datasets are used (see below). Supplementary Material: I briefly checked the proofs for correctness. They use standard tools from optimisation and I could not spot any errors. Relation To Broader Scientific Literature: It is very well placed in the broader scientific literature, tackling a central problem in machine learning. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: **Strengths:** - The paper is very well written, provides a good amount of background which is accessible even to those who are not familiar with the area, citing appropriate canonical background texts. The notations seem sensible and easy to read. I was not able to spot typos or errors.
Easily the best written paper in my pile this year. - The idea is relatively simple --- use the constraint of the normalising constant equal to 1 as a separate function multiplied by the Lagrange multiplier. Then just use neural nets for both the constraint and the original log loss. A simple idea seems to give good results, without convoluted hacks. **Weaknesses:** - The approach is limited to a discrete target space $\mathcal{Y}$. - Figure 2 is perhaps misleading and I am also not sure what I should take from it. Firstly, the horizontal axis shows randomly selected x values (i.e. there is no "metric" between x and no "continuity" in y), but the plot makes it look like a time series or something similar. Secondly, the errors are entirely qualitative, relying on the reader to try and eyeball the difference between the dark and light plots. All I can say is "emotions" looks good and "cal500" looks relatively poor. Finally, there is no "control", i.e. it shows the proposed method, but doesn't compare with any baseline. I think a single scalar number for each of the datasets would be more informative, along with a comparison with other techniques. - In the empirical comparison with other techniques, I am not seeing comparisons with other (perhaps intractable?) techniques, like standard (non-probabilistic) deep net classifiers, logistic regression, etc. Could you provide these, or discuss why they are inappropriate? Other Comments Or Suggestions: **Detailed comments:** - The variational perspective in equation (2) is generalised on page 6. NLL loss and KL divergence gives a standard "Bayes posterior" (which is not actually a posterior in this context). Arbitrary loss and KL divergence gives a "Gibbs posterior", which is equation (1). Arbitrary loss and arbitrary divergence (or, more specifically, f-divergence) gives a generalised posterior. Maybe you would like to mention this (I guess "posterior" can be replaced by "measure")?
--- see Knoblauch - As far as I understand, the problem of sampling from the probabilistic EBM is not studied. Is that right? Would you expect any particular sampling algorithm to be especially well suited to this formulation of EBM, or are we no better off than generic samplers? - The constraint is on the normalising constant equal to 1. Why not constrain the log normalising constant to be equal to zero? Wouldn't this then be of the same "scale" as the likelihood term in the loss? Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your positive review and constructive feedback. > **The approach is limited to a discrete target space** Our approach can in principle be applied to continuous output spaces $\mathcal{Y}$. However, we believe standard regression tasks are less common applications of conditional EBMs, compared to structured prediction over discrete spaces. This motivated our focus on discrete, combinatorially-large spaces like sets and permutations. > **Figure 2: a single scalar number for each of the datasets would be more informative and a comparison with other techniques** Indeed, Figure 2 plots the learned vs. true log-partition for 100 individual test samples, rather than a time series. We chose this visualization to show the correlation across diverse samples. While scalar metrics like correlation coefficients could summarize this, the plot provides a better view of the approximation quality. We agree that summarizing with a scalar metric in the caption would be beneficial and will add this. A comparison with other techniques for learning the log-partition function is difficult as, to our knowledge, parameterizing and learning $\tau$ as a parametrized function is novel to our work. > **Empirical comparison with other techniques** For multilabel classification, in Table 3, logistic (unary) corresponds exactly to using sigmoids as output layers and binary cross-entropy as loss. In addition, Table 3 includes the results when using the generalized Fenchel-Young loss, which is state-of-the-art for structured prediction tasks. For label ranking, we use full rankings as supervision, which is naturally cast as a structured prediction task. We could potentially reduce full rankings to many pairwise rankings and use pairwise ranking losses instead, but we believe such a comparison is out of the scope of our paper. However, we acknowledge that the paper did lack comparisons with **MCMC** and **min-max** baselines. 
We therefore added these comparisons to Table 1 and Table 3. **Link to updated Table 1 and Table 3:** https://anonymous.4open.science/r/ebms/tables.pdf If the paper is accepted, these tables are going to be included in the main text, thanks to the extra page allowed by the camera-ready. **Technical details:** For the min-max approach, we use optimistic ADAM as solver, an MLP as generator and we use REINFORCE (score function estimator) for gradient estimation. For MCMC, we use the standard Metropolis–Hastings algorithm. Similarly to the min-min approach, we tune the learning rate and L2 regularization using grid search. > **The variational perspective in equation (2) is generalised on page 6. NLL loss and KL divergence gives a standard "Bayes posterior" (which is not actually a posterior in this context). Arbitrary loss and KL divergence gives a "Gibbs posterior", which is equation (1). Arbitrary loss and arbitrary divergence (or, more specifically, f divergence) gives a generalised posterior. Maybe you would like to mention this (I guess "posterior" can be replaced by "measure")? --- see Knoblauch** This is a good remark. If $g(x, y) = \log p(x|y)$ and $q(y) = p(y)$, then Eq. (1) indeed gives the posterior $p(y|x)$ (in the sense of Bayes’ rule). > **As far as I understand, the problem of sampling from the probabilistic EBM is not studied. Is that right? Would you expect any particular sampling algorithm to be especially well suited to this formulation of EBM, or are we no better off than generic samplers?** Indeed, we do not address the issue of sampling in this paper. We wonder whether the learned log-partition function can help design a better rejection sampling scheme. We leave this to future work. > **The constraint is on the normalising constant equal to 1. Why not constrain the log normalising constant to be equal to zero? Wouldn't this then be of the same "scale" as the likelihood term in the loss?** We haven’t considered this.
Our proof is based on the more general Fenchel-Young losses, where the Lagrange multiplier $\tau$ naturally appears as a consequence of Lemma 1.
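For concreteness, here is a minimal sketch of the Metropolis–Hastings baseline mentioned in the rebuttal above, assuming a symmetric single-bit-flip proposal on a multilabel (binary-vector) output space; the function names and the proposal choice are illustrative, not the authors' exact implementation:

```python
import math
import random

def metropolis_hastings(energy, n_bits, n_steps, seed=0):
    """Metropolis-Hastings over binary vectors with a symmetric
    single-bit-flip proposal.  Samples approximately from
    p(y) proportional to exp(energy(y)); the partition function
    cancels in the acceptance ratio and is never computed."""
    rng = random.Random(seed)
    y = [rng.randint(0, 1) for _ in range(n_bits)]
    samples = []
    for _ in range(n_steps):
        y_new = list(y)
        y_new[rng.randrange(n_bits)] ^= 1  # flip one coordinate
        # Symmetric proposal => accept with prob min(1, p(y_new)/p(y)).
        d_e = energy(y_new) - energy(y)
        if rng.random() < math.exp(min(0.0, d_e)):
            y = y_new
        samples.append(tuple(y))
    return samples
```

Because the proposal is symmetric, only energy differences enter the acceptance test, which is what makes MCMC attractive when the normalising constant is intractable.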
Summary: The paper proposes a method for learning discrete energy-based models and their partition function. Specifically, by parameterizing the partition function as $\gamma(x)$, the paper introduces the constraint $\sum_y p(y|x) - \gamma(x) = 0$, which can be incorporated into the maximum likelihood estimation using the Lagrange multiplier method. Experimental results on multilabel classification and label ranking demonstrate the effectiveness of the proposed method. Claims And Evidence: The paper is well written and most claims are well supported. However, there is one point on which I hold a different opinion. *In line 295, it states: “However, min-max formulations are notoriously hard to optimize. In contrast, our approach is based on a min-min formulation, which is easier to optimize.”* There are no experimental results in the paper to support this claim. I am skeptical that the proposed min-min formulation is indeed easier to optimize than the min-max formulation, which parameterizes a variational distribution to estimate the partition function. In fact, in the continuous domain, variational methods have demonstrated scalability in modeling high-dimensional data [1]. The proposed min-min formulation requires Monte Carlo estimation of the intractable sum, as seen in Equation 13. This approach generally suffers from high variance, which can make optimization unstable. In this regard, I am curious about the scalability of the proposed method. [1] Grathwohl, Will, et al. "No MCMC for me: Amortized samplers for fast and stable training of energy-based models." *International Conference on Learning Representations (ICLR)*. 2021. Methods And Evaluation Criteria: The proposed method is straightforward and easy to follow. However, the evaluation criteria are insufficient. In the experiments, the paper only presents the results of the proposed method, lacking a baseline comparison with other approaches, such as the variational-based min-max method.
The paper focuses solely on the non-probabilistic EBM setting, specifically computing the mode $\mathrm{argmax}_y \, p(y, x)$. It would be beneficial to explore whether the proposed min-min formulation could also perform well in a probabilistic EBM setting, where the goal is to learn an EBM to model the data distribution rather than merely finding the mode. Moreover, [2] addresses a similar problem—learning set functions—which can be viewed as a type of mode-finding problem (i.e., argmax). [2] explores broader applications, including product recommendation, set anomaly detection, and compound selection, making it a valuable reference. Additionally, [2] proposes a method for training discrete EBMs. Although it is based on a biased objective, it serves as a competitive baseline for performing the mode-seeking task of argmax. [2] Ou, Zijing, et al. "Learning neural set functions under the optimal subset oracle." *Advances in Neural Information Processing Systems* 35 (2022): 35021-35034. Theoretical Claims: The theoretical results appear to be correct. Experimental Designs Or Analyses: As mentioned before, the experimental design is insufficient; more comparisons to baselines and more settings, like learning probabilistic EBMs, should be included. The experiment in [2] can also be a good reference. Supplementary Material: None Relation To Broader Scientific Literature: None Essential References Not Discussed: There are some missing references regarding the training of discrete EBMs with non-MLE approaches, which could be cited in Section 2.5.
- Energy discrepancy [3] is one such non-MLE approach that has an interesting connection to MLE.
- Ratio matching and concrete score matching are also applicable for training discrete EBMs.

[3] Schröder, Tobias, et al. "Energy-Based Modelling for Discrete and Mixed Data via Heat Equations on Structured Spaces." *Advances in Neural Information Processing Systems* 37 (2024): 79246-79281. [4] Hyvärinen, Aapo. "Some extensions of score matching."
*Computational statistics & data analysis* 51.5 (2007): 2499-2512. [5] Meng, Chenlin, et al. "Concrete score matching: Generalized score matching for discrete data." *Advances in Neural Information Processing Systems* 35 (2022): 34532-34545. Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. > **Challenges of min-max implementation** To avoid any controversial claim, we will remove “notoriously hard to optimize”. Instead, we will explain the technical challenges that arise using this formulation in more detail. The min-max objective is $\mathbb{E}_x \mathbb{E}_{y \sim p(\cdot|x)} [g(x, y)] - \Omega(p(\cdot|x)) - \mathbb{E}_{(x,y)} [g(x,y)]$, where $\Omega$ is the negative Shannon entropy. The variable $p$ appears in the sampling (expectation) and in the entropy. In practice, we parameterize $p$ as a neural network. If $p$ is an EBM, sampling from $p$ is difficult and the min-max formulation does not bring any benefit. Therefore, it is common to use a parameterization such that it is easy to sample from the model instead. Another challenge comes from the gradient computation. Since $p$ appears in the sampling, one typically uses the score function estimator (REINFORCE) to estimate the gradient w.r.t. the parameters of $p$. However, this estimator suffers from high variance. In our min-min formulation, the variable $\tau$ is a function, not a distribution, and its gradients are easy to compute. > **High-variance of Monte-Carlo estimation of expectations** Our experiments (e.g., Table 1, Table 3) were designed to show scalability with respect to the output space size, governed by the number of classes $k$ ($|Y|=2^k$ or $k!$). Scalability with the number of training samples $n$ is addressed through standard SGD optimization. Regarding the MC estimation in Eq. (13), while MC estimates can introduce variance, our doubly stochastic approach (Alg. 1) proved stable and effective in practice across our experiments. Fig. 1 illustrates that even with a small number of prior samples ($y'$), the method converges effectively, although more samples accelerate convergence towards the exact MLE objective.
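Since Eqs. (12)–(13) are not reproduced in this thread, the following is only a generic sketch of the doubly stochastic, min-min idea, using the standard convexity bound $\log Z \le \tau - 1 + e^{-\tau} Z$ (tight at $\tau = \log Z$). The bound is linear in $Z$, so plugging in an unbiased Monte Carlo estimate of $Z$ keeps the bound unbiased; this is not necessarily the paper's exact objective, and all names are illustrative:

```python
import math

def nll_upper_bound(g_xy, g_prior_samples, tau, space_size):
    """Joint (min-min) surrogate for the NLL  -g(x,y) + log Z(x).

    Uses log Z <= tau - 1 + exp(-tau) * Z, tight at tau = log Z.
    Linearity in Z means an unbiased Monte Carlo estimate Z_hat
    keeps the bound unbiased: stochastic over both the data batch
    (x, y) and the prior samples y' ("doubly stochastic")."""
    # Unbiased importance estimate of Z(x) from uniform samples y' over Y.
    z_hat = space_size * sum(math.exp(g) for g in g_prior_samples) / len(g_prior_samples)
    return -g_xy + tau - 1.0 + math.exp(-tau) * z_hat
```

Minimizing jointly over an energy network (producing `g_xy`) and a log-partition network (producing `tau`) would recover MLE at the optimum, since the surrogate matches the true NLL exactly when $\tau(x) = \log Z(x)$.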
> **Lack of baseline comparison with other approaches, such as the variational-based min-max method.** Our paper included logistic regression (“Exact MLE” in Fig. 1 & 3) and generalized Fenchel-Young losses (GFY) (Table 3) as baselines. However, our paper did lack comparisons with MCMC and min-max approaches. **Links to updated Table 1 and Table 3:** https://anonymous.4open.science/r/ebms/tables.pdf. If the paper is accepted, we are going to include these tables in the main text, thanks to the extra page allowed by the camera-ready. For convenience, we also show the new results below:

Table 1

|Loss|Polytope C|Model $g$|authorship|glass|iris|vehicle|vowel|wine|
|---|---|---|---|---|---|---|---|---|
|Logistic (MCMC)|P|Linear|84.29|80.67|55.56|79.15|49.97|91.36|
||P|MLP|87.90|75.50|97.04|86.34|57.90|91.98|
||B|Linear|88.10|79.33|97.78|82.22|55.99|**96.91**|
||B|MLP|89.68|80.16|97.78|85.88|48.90|91.98|
|Logistic (min-max)|P|Linear|61.01|79.22|61.48|70.20|45.68|65.43|
||P|MLP|70.02|67.03|55.56|68.56|40.22|70.99|
||B|Linear|82.58|78.40|61.48|74.90|52.86|93.21|
||B|MLP|78.50|71.68|42.96|68.76|48.99|83.33|

Table 3

|Loss|Model $g$|yeast|scene|birds|emotions|cal500|
|---|---|---|---|---|---|---|
|Logistic (exact MLE)|Linear (unary)|61.46|70.64|43.97|63.35|37.31|
||MLP (unary)|60.71|70.81|40.88|62.99|36.24|
||Resnet (unary)|62.27|71.02|40.78|61.50|37.45|
|Logistic (MCMC)|Linear (unary)|61.98|70.53|44.93|61.76|44.61|
||MLP (unary)|60.34|71.57|33.13|62.23|45.70|
||Resnet (unary)|56.57|71.24|27.03|58.98|44.34|
||Linear (pairwise)|61.96|70.50|44.93|61.76|44.61|
||MLP (pairwise)|60.83|71.92|34.61|63.15|45.95|
||Resnet (pairwise)|56.68|70.46|27.08|64.05|44.35|
|Logistic (min-max)|Linear (unary)|52.80|52.29|27.59|61.72|35.08|
||MLP (unary)|58.58|52.54|24.95|60.53|46.08|
||Resnet (unary)|57.53|42.82|25.32|55.85|47.59|
||Linear (pairwise)|52.80|52.29|27.59|61.72|35.08|
||MLP (pairwise)|58.76|51.59|26.71|58.68|43.31|
||Resnet (pairwise)|57.53|42.95|22.35|46.72|**47.67**|

**Technical
details**: For the min-max approach, we use optimistic ADAM as solver, an MLP as generator and we use REINFORCE. For MCMC, we use the Metropolis–Hastings algorithm. We tune the learning rate and L2 regularization using grid search. > **Whether the proposed min-min formulation could also perform well in a probabilistic EBM setting** We indeed focus primarily on the prediction setting (i.e., finding the mode, $\mathrm{argmax}_y ~ p(y|x)$), because our approach, while learning the model $p(y|x)$ and approximating its normalization constant, doesn’t directly address the challenge of sampling from this distribution, which would be central to evaluating performance in a generative/probabilistic setting. Sampling in structured discrete spaces can be challenging. Thank you for pointing out the work of Ou et al., which we will cite. > **Missing references** Thank you, we will add these relevant citations. --- Rebuttal Comment 1.1: Comment: Thank you for the response and for sharing the updated results. Overall, the paper is well-written, and I find the narrative clear and easy to follow. It would be interesting to see whether the min-min algorithm performs well in the probabilistic EBM setting as well. My intuition is that it might face challenges in that context compared to MCMC-based methods. If that turns out to be the case, a discussion on why the min-min approach is more effective in the prediction setting than in the probabilistic EBM setting would be a valuable addition. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thank you for your comment and your positive feedback. We agree that evaluating the quality of $p(y|x)$, and not just $\mathrm{argmax}_y ~ p(y|x)$, would be valuable. However, we also think that this evaluation would be challenging and could warrant a separate paper. To our knowledge, evaluating $p(y|x)$ could either mean evaluating whether it is well calibrated or whether generations $y \sim p(\cdot | x)$ are good.
Both evaluations are challenging in the structured setting (sets, permutations), as they highly depend on the quality of the MCMC sampler. In our opinion, sampling $y \sim p(\cdot | x)$ would be most interesting if $y$ is continuous, for instance for generating an image $y$ from a prompt $x$. However, our paper focuses on discrete output spaces. In addition, we believe that the learned log-partition (for which we empirically demonstrate the generalization properties in Figure 2) could help design sampling algorithms (such as rejection sampling) with faster convergence than vanilla MCMC. We leave such an investigation for future work. To address your concern, we propose to add a new paragraph “limitations and future work”, that will recap the discussion above. Thank you, The authors
Summary: The authors formulate the MLE of energy-based models as a constrained problem with the equality constraint defining the partition function. Then they suggest performing stochastic optimization by jointly learning the energy model and its associated dual variable. Amortization is suggested by parameterizing the dual as a neural network. Claims And Evidence: In the introduction, the authors motivate their work by emphasizing the limitations of MCMC-based methods. However, the experimental section lacks a thorough benchmarking analysis and comparisons with currently established methods, leaving the practical relevance of the proposed approach unverified. Furthermore, while the authors employ an amortized optimization technique to handle a polytope constraint in the dual problem, a very common method used extensively, for instance, in the literature on neural OT [1], the paper does not clearly explain why the structure of the EBM problem is particularly suited to this approximation over traditional Monte Carlo-based methods. This omission underscores the need for thorough empirical ablation studies, which are notably absent. Based on this important shortcoming, I believe the paper does not meet the acceptance threshold. Additionally, the discussion on generalization to Fenchel-Young losses appears to offer limited practical utility. For future revisions, I recommend relocating the section on Fenchel-Young losses to an appendix, thereby allowing more space for extensive large-scale experiments and detailed comparisons with current benchmarks in EBMs. [1] Korotin, A., Li, L., Genevay, A., Solomon, J. M., Filippov, A., & Burnaev, E. (2021). Do neural optimal transport solvers work? a continuous wasserstein-2 benchmark. Advances in neural information processing systems, 34, 14593-14605. Methods And Evaluation Criteria: Figures 1 and 2 aim to validate the presented theoretical results.
However, the experiments appear to be limited in scale and provide minimal insight into performance in real-world settings. For the experimental part, it would be beneficial to include a thorough comparison with current MCMC-based approaches in terms of runtime, memory consumption, and final performance. Also a comparison with adversarial approaches described in section 3.3 can be valuable. Remark: I believe Table 1 is not discussed enough and it is hard to understand what conclusions to draw there. Theoretical Claims: The theoretical claims seem sound and rigorous. Experimental Designs Or Analyses: This is the weakest part of the paper as discussed above. Supplementary Material: Yes I reviewed the proofs which seemed rigorous. Relation To Broader Scientific Literature: This work applies the very well known technique of amortized optimization on the dual problem of a convex optimization problem to the problem of learning EBMs. Essential References Not Discussed: None Other Strengths And Weaknesses: / Other Comments Or Suggestions: I think equations 12 and 13 should be better introduced. It would be nice to be guided in this critical part without having to look at other sections. Questions For Authors: - When using f-divergences like square loss instead of KL, thus leading to sparsity, can the authors provide details about the associated challenges in terms of smoothness of the optimization? Is the method as stable as in the KL "dense" case? - Which other f-divergences could be interesting beyond the square loss and KL? Thanks! Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. > **Lack of baselines, comparison with MCMC and adversarial approaches** While our paper includes GFY and exact MLE (logistic loss) baselines, it did lack comparisons with MCMC and min-max approaches. We have run additional experiments with these baselines. **Link to updated Table 1 and Table 3:** https://anonymous.4open.science/r/ebms/tables.pdf If the paper is accepted, these tables are going to be included in the main text, thanks to the extra page allowed by the camera-ready. **Technical details:** For the min-max approach, we use optimistic ADAM as solver, an MLP as generator and we use REINFORCE (score function estimator) for gradient estimation. For MCMC, we use the standard Metropolis–Hastings algorithm. Similarly to the min-min approach, we tune the learning rate and L2 regularization using grid search. > **Amortized optimization in the dual problem, neural OT [1]** We agree that parameterizing dual variables as neural networks is a commonly-used technique in the OT literature. We already cited Seguy et al. (2018) and will gladly add the suggested citation [1]. Technically, the dual variable in [1] is a convex function, hence the use of ICNNs. In our work, the Lagrange multiplier $\tau$ is a continuous function. That said, we believe our application of that idea to EBMs remains novel and significant. To our knowledge, we are the first to parameterize the log-partition function of conditional EBMs as a neural network. We believe that parameterizing dual variables with neural networks is a powerful concept with potential applications beyond OT and EBMs. Regarding amortized optimization, its goal is typically to accelerate the process of repeatedly solving similar optimization problems.
In our paper, the primary goal of parameterizing the log-partition function $\tau$ as a neural network is different: it is to enable the approximation of the log-partition function to generalize to unseen data points $x$. > **Why EBMs are particularly suited to this approximation over Monte Carlo-based methods?** As discussed in Section 2.4, sampling from EBMs, especially in discrete spaces, is often challenging and typically requires MCMC methods like Langevin-MCMC (for continuous spaces). For discrete output spaces, constructing an MCMC sampler is less explored and can be more challenging. Typically each particular output space (such as sets or permutations in our work) will require a specifically-designed sampler. In contrast, our proposed approach offers a simpler, more generic alternative that avoids MCMC sampling during training by jointly learning the energy and the log-partition function. > **Relocating section on Fenchel-Young losses** We chose to include the Fenchel-Young losses because they encompass the sparsemax loss. Our work presents, to our knowledge, the first tractable method for optimizing the sparsemax loss for EBMs in general combinatorial spaces without needing a k-best oracle. Thanks to the extra page in the camera-ready, we believe we do not need to move this. > **Experiments [in Figures 1 and 2] appear to be limited in scale and provide minimal insight** Our primary goal in these experiments was to demonstrate scalability concerning the number of classes $k$, which determines the size of the output space ($|Y| = 2^k$ for multilabel classification, $|Y| = k!$ for label-ranking). Scalability with respect to the number of training samples $n$ is handled via standard SGD. > **Table 1 not discussed enough** The key takeaway from Table 1 is that the Birkhoff polytope representation tends to perform better with linear models for label ranking, likely due to its ability to capture more interactions. 
However, with more expressive MLP models, both the Birkhoff and Permutahedron representations achieve strong performance. > **Eq 12 and 13 should be better introduced** Thank you for the suggestion. By minimizing the overall objective with respect to both $g$ and $\tau$, the optimization process encourages $\tau$ to approximate the true log-partition function while simultaneously learning the energy function $g$ that fits the data under the EBM framework. We will add a brief, self-contained explanation to the paper. > **When using f-divergences what are the challenges in terms of optimization?** When $f$ is strictly convex, which is the case for the sparsemax loss, the loss is smooth, leading to a well-behaved optimization landscape. The primary challenge associated with the sparsemax loss in structured prediction is the need for a k-best oracle, which our approach alleviates (similarly, we do not need an LSE oracle in the logistic case). > **Which other f-divergences could be interesting ?** We may consider alpha divergences, which interpolate between the logistic loss ($\alpha=1$) and the sparsemax loss ($\alpha=2$). Our approach can readily be applied with these divergences by selecting the appropriate $f$ function.
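For reference, the (unstructured) sparsemax map mentioned above is the Euclidean projection onto the probability simplex (Martins & Astudillo, 2016); a minimal sketch follows (the structured version discussed in the rebuttal is what would otherwise require a k-best oracle):

```python
def sparsemax(z):
    """Euclidean projection of the score vector z onto the probability
    simplex (Martins & Astudillo, 2016).  Unlike softmax, the output
    can contain exact zeros, hence 'sparse'."""
    z_sorted = sorted(z, reverse=True)
    cumsum = 0.0
    k, k_cumsum = 0, 0.0
    for j, zj in enumerate(z_sorted, start=1):
        cumsum += zj
        # Support condition: 1 + j * z_(j) > sum of the j largest scores.
        if 1.0 + j * zj > cumsum:
            k, k_cumsum = j, cumsum
    tau = (k_cumsum - 1.0) / k  # threshold so the output sums to 1
    return [max(zi - tau, 0.0) for zi in z]
```

Scores more than the threshold `tau` below the support are clipped to exactly zero, which is the mechanism behind the sparsity discussed in the f-divergence question.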
Unifying Knowledge from Diverse Datasets to Enhance Spatial-Temporal Modeling: A Granularity-Adaptive Geographical Embedding Approach
Accept (poster)
Summary: Spatio-temporal datasets are often heterogeneous, each with its own granularity and representation of entities. Given the data scarcity issue in such datasets, it is crucial to integrate them effectively and learn unified representations for multi-granularity entities. The authors propose a framework to achieve this, called the Segment Quadtree Geographical Embedding Framework (SQGEF). Their approach leverages a quadtree data structure to embed entities of varying granularities. They evaluate SQGEF on three different datasets for spatio-temporal forecasting of carbon emissions and demonstrate that their method generally improves baseline performance for this task. Claims And Evidence: The approach proposed by the authors is particularly useful for spatio-temporal datasets and could be applied to various applications. Their method appears to improve existing approaches for spatio-temporal forecasting. However, the authors claim to "propose a novel data structure" for spatial representation—specifically, segment quadtrees (see abstract). Quadtrees have long been used for spatial data representation, as evidenced by works such as Kothuri et al. (2002). Yet, the authors do not reference any prior work related to quadtrees. I would like to understand the rationale behind omitting such an important body of work. Additionally, what distinguishes a segment quadtree from a standard quadtree? I believe the authors should clarify this distinction in the text. Kothuri, Ravi Kanth V., Siva Ravada, and Daniel Abugov. "Quadtree and R-tree indexes in oracle spatial: a comparison using GIS data." In Proceedings of the 2002 ACM SIGMOD international conference on Management of data, pp. 546-557. 2002. Methods And Evaluation Criteria: The method performs well on the three proposed downstream tasks. 
However, in my opinion, the experimental datasets are somewhat limited for evaluating this framework for two main reasons:
- It focuses on a single spatio-temporal application—carbon emissions.
- The three tasks are closely related, with two of them differing only by an inversion of the training and testing datasets, and the other two using different regions.

Ideally, I would like to see an additional experiment with a different setup to better assess whether the method can be generally applied to various spatio-temporal modeling scenarios, as suggested by the title. Additionally, the datasets and their spatial distribution are barely discussed in the main text and are only briefly covered in the supplementary materials. Edit: After reading the supplementary material, I noticed the experiment on the air pollution dataset, which is not mentioned in the main text. I believe this experiment should be given more emphasis and better described, including relevant references. Theoretical Claims: I find the theoretical analysis unclear. For example, it is not clear to me where the generalization error bound comes from or what the terms $C$ and $n$ represent. I believe more details should be provided, either through additional references or in the appendix. Experimental Designs Or Analyses: I believe Figure 5 is misleading and cannot be interpreted as the authors suggest. What do the two axes represent? Did the authors perform dimensionality reduction on the embedding space? If so, this should be clearly explained. Moreover, if dimensionality reduction was applied, I do not think it is appropriate to compare the embedding spaces of two models in this way, as distances between points, especially after reduction, may not be directly comparable. There should be more than two points per model to begin to understand the structure of the embedding space.
Supplementary Material: As mentioned earlier, the dataset description in the Supplementary material could be expanded to include spatial distribution or similar visualizations. Relation To Broader Scientific Literature: As mentioned earlier, there is a lack of references and connections to previous works on using quadtrees for spatial representations. Additionally, the paper has a limited number of references (only 24), with no citations from the location encoding field. Essential References Not Discussed: References to quadtrees for spatial representations are missing, such as: Kothuri, Ravi Kanth V., Siva Ravada, and Daniel Abugov. "Quadtree and R-tree indexes in oracle spatial: a comparison using GIS data." In Proceedings of the 2002 ACM SIGMOD international conference on Management of data, pp. 546-557. 2002. Yin, Xiang, Ivo Düntsch, and Günther Gediga. Quadtree representation and compression of spatial data. Springer Berlin Heidelberg, 2011. Other Strengths And Weaknesses: I believe the paper lacks clarity due to missing details. For example, in Equation (1), $e_{k,l}$ are introduced for the first time without any explanation of what they are and how they are obtained. Later, they are referred to as embeddings, but it seems to be never explained how they are derived or trained. However, the method may have significant value, and the setup considered is interesting to explore. It could also benefit many other applications. Other Comments Or Suggestions: Line 107: $N^{test}$ instead of $N$ Questions For Authors: I would like the authors to address the following points raised earlier: - Clarification and references to quadtrees - Lack of diversity in datasets considered and a more detailed dataset description - Improvement to the theoretical analysis section - Addressing potential issues with Figure 5 - More references - General clarification of the text I am willing to increase my recommendation if the authors address these points, as the method has potential. 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed and thorough feedback on our work. We greatly appreciate your insightful comments. Due to space constraints, we are unable to incorporate all of this content into the main text. Since URLs are not permitted (except for figures), we have instead included an updated PDF with detailed proofs in the supplementary materials, which has been uploaded to an anonymous repository. All referenced appendices are provided in full. We sincerely hope you will review these additional materials and reconsider our score. # W1 The idea of an index tree in the references you provided is to efficiently index spatial entities by hierarchically partitioning a region into appropriate granularity levels that are coarser than the entity level (containing multiple cities), implemented as a quadtree where child nodes partition the region of their parent nodes and bring no additional information. Our work, neither motivated by nor following the index tree, addresses the problem of using heterogeneous datasets to extrapolate knowledge to model unseen entities, a challenge the index tree cannot solve due to its focus on summarizing known entities without extrapolation capabilities. To tackle this, we use finer-granularity nodes (grids below the city level), store distinct information across parent-child levels from different-granularity datasets (detailed discussion in Appendix A.9), and employ a segment-tree query method to fuse mixed-level embeddings, capturing different-level interactions. In summary, our Segment Quadtree is a new data structure designed for knowledge extrapolation, distinct from the index tree’s indexing paradigm. More detailed foundational differences between the index tree and the Segment Quadtree are listed in Appendix A.7. The "segment" in Segment Quadtree reflects its query method, inspired by segment trees, a data structure for efficiently computing range sums over contiguous array intervals.
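A generic range-sum segment tree makes the query pattern behind the name concrete (illustrative only, not the SQGEF implementation, which fuses learned embeddings rather than summing numbers):

```python
class SegmentTree:
    """Array-based segment tree supporting O(log n) range-sum queries."""

    def __init__(self, values):
        self.n = len(values)
        self.tree = [0] * (2 * self.n)
        # Leaves hold the raw values; internal nodes hold subtree sums.
        for i, v in enumerate(values):
            self.tree[self.n + i] = v
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def query(self, lo, hi):
        """Sum of values[lo:hi], touching only O(log n) precomputed nodes."""
        s = 0
        lo += self.n
        hi += self.n
        while lo < hi:
            if lo & 1:  # lo is a right child: take it and move right
                s += self.tree[lo]
                lo += 1
            if hi & 1:  # hi is a right boundary: take the node just before it
                hi -= 1
                s += self.tree[hi]
            lo >>= 1
            hi >>= 1
        return s
```

Each query combines a handful of precomputed nodes at mixed levels instead of visiting every leaf, the same economy the rebuttal describes for fusing high-, mid-, and low-level quadtree nodes.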
For example, in a segment tree with values (a1, a2, a3, a4), querying the sum of (a1, a2, a3) uses precomputed nodes like [a1, a2] and (a3), reducing computation. Similarly, in our Segment Quadtree in Figure 2(a), querying a yellow entity’s boundary fuses embeddings from mixed levels: one high-level node (Node 3), one mid-level node (Node 17), and two low-level nodes (Nodes 69, 71), rather than aggregating all lowest-level grids, as done for France in Figure 1(b). This multi-level fusion accelerates queries and leverages hierarchical information, distinguishing it from standard quadtrees like index trees, where fusing across levels is less meaningful due to uniform node content. # W2 Due to space constraints in a single paper, we cannot conduct experiments across a wider range of scenarios. Therefore, we focused on two key cases to allow for a more thorough analysis. We acknowledge this limitation and will explore additional applications of our method in future work. To address the concern about the air pollution dataset description, we have improved the dataset description for air pollution in Appendix A.1. # W3 We have added a comprehensive proof in Appendix A.5. Due to space constraints, we only summarize the key conclusions here. From the Rademacher Complexity Theorem, for any $\mathcal{H}$, the generalization error bound is $L(h) \leq L^S(h) + 2\hat{\mathcal{R}}_S(\mathcal{H}) + 3\sqrt{\frac{2\ln(2/\delta)}{n}}$, where $\hat{\mathcal{R}}_S(\mathcal{H})$ is the empirical Rademacher complexity. For Segment Quadtree Embedding, $\hat{\mathcal{R}}_S(\mathcal{H}_{\text{seg}}) \leq C_2 \cdot \frac{|W|^{L+1} \sqrt{p'} \sqrt{m d}}{\sqrt{n}}$, with $m = O(\log k)$ and $p' \propto m d$, giving a tighter bound for Segment Quadtree Embedding. # W4 Following your recommendation, we shifted from visualization-based analysis to a quantitative evaluation of embedding distances on all embeddings.
Specifically, we computed distances between all pairs of embeddings and compared them with the corresponding time-series distances of carbon emission amounts. For the carbon emission time series, we calculated the L2 norm between every pair of provinces. Similarly, for each model, we computed the L2 distances between all pairs of embeddings. We then assessed the correlation between these embedding distances and the real carbon emission distances using the Pearson correlation coefficient. The results demonstrate that the Pearson score for TimesNet w/Seg embeddings is significantly higher than that of MTGNN.

|Exp|TimesNet w/Seg|MTGNN|
|-|-|-|
|China Province|0.377|0.121|
|Europe Country|0.480|0.343|
|China City|0.247|0.226|

Results indicate that our method better captures the structure of entities compared to MTGNN. # W5 We have written a related work section about Quadtree Applications in Spatial Data Management in Section 6.3. # W6 We have clarified the differences between the index tree and the Segment Quadtree, added air pollution experiments, provided a detailed theoretical and complexity analysis, and quantified data scarcity in the original paper. --- Rebuttal Comment 1.1: Comment: I appreciate the additional efforts the authors have made to address my concerns, particularly the development of the theoretical analysis, the clarification and references to quadtrees, and fixing the analysis of embedding distances. I hope some of these changes will be included in the main text. Ideally, I would have liked to see experiments across a wider range of scenarios, but I understand the constraints of this field and how difficult it is to incorporate other applications. Accordingly, I will increase my recommendation. --- Reply to Comment 1.1.1: Comment: Thank you so much for the improvement in the score.
Your valuable suggestions and the additional supplementary references to prior work have made our paper more solid and significantly improved the clarity of our contributions. We will incorporate them into the main text to further strengthen the paper, following your suggestions, and explore their potential applications in future work. Thank you again for your time and effort in providing such a thorough review! Your feedback has been truly invaluable in improving our work.
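The embedding-distance evaluation described in W4 above could be sketched as follows. This is an illustrative reading with toy data and NumPy, not the authors' evaluation script; all array shapes and names are assumptions.

```python
import numpy as np

# Toy stand-ins: 5 provinces, each with a carbon-emission time series and a
# learned embedding (both synthetic here; the paper uses real data and models).
rng = np.random.default_rng(0)
emissions = rng.normal(size=(5, 24))               # 5 entities x 24 time steps
embeddings = emissions @ rng.normal(size=(24, 8))  # 5 entities x 8 embedding dims

def pairwise_l2(X):
    """Upper-triangle pairwise L2 distances between rows of X."""
    n = len(X)
    return np.array([np.linalg.norm(X[i] - X[j])
                     for i in range(n) for j in range(i + 1, n)])

d_series = pairwise_l2(emissions)   # distances between emission time series
d_embed = pairwise_l2(embeddings)   # distances between model embeddings
pearson = np.corrcoef(d_series, d_embed)[0, 1]  # correlation between the two
```

A higher `pearson` value indicates the embedding geometry better mirrors the real emission-series geometry, which is the comparison reported in the W4 table.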
Summary: This paper addresses the challenge of data scarcity in geographical scientific datasets for spatio-temporal forecasting by proposing a novel framework called Segment Quadtree Geographical Embedding Framework (SQGEF). The framework integrates knowledge from diverse datasets with varying granularities, time spans, and observation variables to learn unified representations for multi-granularity entities, including those absent during training. The key contributions include a novel method for integrating heterogeneous datasets, a unique embedding approach for different granularity entities, and comprehensive experiments showcasing the method's effectiveness and robustness across various regions and granularities. ## Update after rebuttal: Thanks for author rebuttal and I have read the rebuttal. I think my concerns are addressed and will keep my score this time (for weak acceptance). Claims And Evidence: The claims are supported by experimental result. Methods And Evaluation Criteria: The proposed method is technically sound. Theoretical Claims: The authors try to give theoretical analysis of the method, but more details are needed to make it clear and convincing, e.g., the calculation of bounds should be provided. Experimental Designs Or Analyses: The overall designs of experiment are acceptable. Supplementary Material: I read the whole appendix, which provides some more details about the model and baselines. Relation To Broader Scientific Literature: The Segment Quadtree is novel in the literature. I appreciate the interesting idea. Essential References Not Discussed: The coverage of references is good. Other Strengths And Weaknesses: **Strengths:** - Novel and interesting idea. 
The Segment Quadtree data structure provides a hierarchical representation of geographical regions, capturing multi-level interactions and geographical entity knowledge.
- The method shows improvements across various models, including time series and spatio-temporal models, and has potential applications in different scientific domains.

**Weaknesses:**
- Authors should provide information about the data scarcity of the utilized datasets.
- It would be better to compare the performance gains from combining data of different granularities against using a single dataset.

Other Comments Or Suggestions: See above. Questions For Authors: See above. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for your valuable feedback on our work. We greatly appreciate your insightful comments, including your detailed suggestions regarding the calculation of bounds, clarifications on data scarcity, and additional experiments. Due to space constraints, we are unable to incorporate all of this content into the main text. Since URLs are not permitted (except for figures), we have instead included an updated PDF with detailed proofs in the supplementary materials, which has been uploaded to an anonymous repository. All referenced appendices are provided in full. We sincerely hope you will review these additional materials and reconsider our score.

# The calculation of bounds

We have added a comprehensive proof in Appendix A.5. Due to space constraints, we only summarize the key conclusions here. From the Rademacher Complexity Theorem, for any $\mathcal{H}$, the generalization error bound is $L(h) \leq L^S(h) + 2\hat{\mathcal{R}}_S(\mathcal{H}) + 3\sqrt{\frac{2\ln(2/\delta)}{n}}$, where $\hat{\mathcal{R}}_S(\mathcal{H})$ is the empirical Rademacher complexity. For Segment Quadtree Embedding, $\hat{\mathcal{R}}_S(\mathcal{H}_{\text{seg}}) \leq C_2 \cdot \frac{|W|^{L+1} \sqrt{p'} \sqrt{m d}}{\sqrt{n}}$, with $m = O(\log k)$ and $p' \propto m d$, giving a tighter bound for Segment Quadtree Embedding.

# W1

Thank you for your valuable suggestion! We first clarify what data scarcity means in our work and then provide statistical evidence to support it. Due to space constraints, we have added a detailed description in Appendix A.6. In our paper, data scarcity refers to forecasting targets in the test set being unseen during pretraining, with no historical data available at that stage. This severely limits direct modeling of their relationships.
To tackle this, SQGEF uses a large volume of heterogeneous datasets from related regions or entities, despite lacking test target historical records. To quantify this scarcity, we compare the data volume between the pretraining and test stages across three metrics: total data points, number of temporal points, and temporal duration.

|Experiment|Pretrain Set Data Points|Test Set Data Points|Pretrain Set Time Points (Span)|Test Set Time Points (Span)|
|-|-|-|-|-|
|China Province|8,651,704|37,062|581 (22 years)|1,278 (2 years)|
|Europe Country|8,650,752|39,618|528 (22 years)|1,278 (2 years)|
|China City|8,687,814|962|1,806 (22 years)|37 (37 years)|

From this table, it is evident that the test set contains significantly fewer data points than the pretraining set across all experiments. For China Province and Europe Country, the test set's temporal duration is notably short (2 years). In the China City experiment, the test set has an extremely limited number of temporal records (37 points), indicating sparse sampling. Collectively, these statistics highlight the severe data scarcity in the test set, both in terms of quantity and coverage. Moreover, Table 2 in our paper shows that this scarcity goes beyond data volume to informational richness. Models using only the scarce historical data of forecasting targets consistently lag behind SQGEF, which combines this limited target data with richer, more plentiful data from related grids and entities during pretraining.

# W2

Thank you for your suggestion! This will help clarify the effect of different granularity datasets. We use only one grid dataset to train the model. As shown in the following table, using a single dataset significantly drops the performance, which verifies the validity of unifying various datasets.
||MSE|MAE|
|-|-|-|
|Informer|0.7983|0.7036|
|FEDformer|0.3628|0.4674|
|Autoformer|0.3241|0.4518|
|TimesNet|0.3985|0.5162|
|AGCRN|0.8112|0.7142|
|MTGNN|0.5518|0.5753|
|GWNet|0.3984|0.4954|

---

Rebuttal Comment 1.1:

Comment: Thanks for author rebuttal and I will maintain my score at this time.

---

Reply to Comment 1.1.1:

Comment: Thank you for your constructive feedback and for giving us the opportunity to improve our manuscript. Your insightful comments regarding the data scarcity discussion and additional experiments have significantly strengthened our paper. We sincerely appreciate the time and effort you dedicated to reviewing our work.
Summary: This paper tackles the challenge of data scarcity in geographical scientific datasets for spatio-temporal forecasting by introducing a novel framework: the Segment Quadtree Geographical Embedding Framework (SQGEF). SQGEF integrates knowledge from diverse datasets that vary in granularity, time span, and observation variables. It learns unified representations for multi-granularity entities, including those not present during training. The key contributions of this work are threefold: (1) a novel method for integrating heterogeneous datasets; (2) a unique embedding approach for entities of different granularities; and (3) comprehensive experiments that demonstrate the method's effectiveness and robustness across various regions and granularities. ## update after rebuttal The author's detailed response has largely addressed my initial concerns, so I have updated my assessment to positive. Claims And Evidence: The claims are partially supported. How combining multi-granularity datasets benefits forecasting performance seems to be missed. Methods And Evaluation Criteria: The method is theoretically effective and feasible. Theoretical Claims: A rough theoretical analysis is given, while missing some details about calculation procedure. Experimental Designs Or Analyses: The experimental design is reasonable but could be improved. Supplementary Material: I reviewed the whole appendix. Relation To Broader Scientific Literature: The idea of segment quadtree is novel to me. I haven't seen similar technology in existing literature. Essential References Not Discussed: The coverage of references is comprehensive. Other Strengths And Weaknesses: **Strengths:** - The idea of segment quadtree is novel. And the whole framework is technically sound. - The experiment covers different regions, data scarcity, and backbones. **Weaknesses:** - Additional time consumption of applying the proposed framework should be provided. - The theoretical analysis is sketchy. 
More details should be provided, e.g., the calculation of bounds.
- The scarcity of different datasets should be given.

Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable comments on our work. We greatly appreciate your feedback, including the detailed suggestions for analysis and clarification. However, due to space constraints, we cannot include all the content in the main text. Since URLs are not permitted except for figures, we have included the updated PDF with detailed proofs in the supplementary materials (uploaded to an anonymous repository). All referenced appendices are provided. We sincerely hope you will review these materials and reconsider our score. # Benefits of Multi-granularity The ablation study w/o SE empirically shows that combining multi-granularity datasets is effective, as performance drops noticeably when this component is removed. To shed more light, we explain below the intuition behind why this approach boosts forecasting performance. In SQGEF, combining multi-granularity datasets enhances forecasting by allowing each Quadtree node to capture info from its own and higher granularity levels. This hierarchical integration occurs during training, where a node’s queried embedding is computed by **fusing its own embedding with those of all child nodes** stored in the tree, as shown in Equation 1. When a high-level node gets supervision from a coarse-granularity dataset, its child nodes are updated simultaneously. This top-down approach ensures lower-level nodes benefit from the richer, global info in higher-level datasets. This mechanism provides key benefits. First, it lets lower-level nodes gain global interactions beyond their limited local data. Second, it supports cross-scale training: lower-level nodes are improved using higher-level dataset supervision, allowing the model to capture nested relationships across granularities. Thus, SQGEF builds a richer representation of geographical entities, enhancing generalization to unseen entities or data-scarce regions. # W1 SQGEF introduces only constant additional computational complexity. 
During training, it maintains the same space and computational complexity as the naive method while capturing multi-granularity relationships for significant performance gains. During inference, SQGEF achieves lower complexity than the naive approach, ensuring superior scalability and efficiency. Due to space constraints, the detailed analysis is available in the complexity analysis (Rebuttal, Reviewer cJrD) or Appendix A.8.

# W2

We have added a comprehensive proof in Appendix A.5. Due to space constraints, we only summarize the key conclusions here. From the Rademacher Complexity Theorem, for any $\mathcal{H}$, the generalization error bound is $L(h) \leq L^S(h) + 2\hat{\mathcal{R}}_S(\mathcal{H}) + 3\sqrt{\frac{2\ln(2/\delta)}{n}}$, where $\hat{\mathcal{R}}_S(\mathcal{H})$ is the empirical Rademacher complexity. For Segment Quadtree Embedding, $\hat{\mathcal{R}}_S(\mathcal{H}_{\text{seg}}) \leq C_2 \cdot \frac{|W|^{L+1} \sqrt{p'} \sqrt{m d}}{\sqrt{n}}$, with $m = O(\log k)$ and $p' \propto m d$, giving a tighter bound for Segment Quadtree Embedding.

# W3

We first clarify what data scarcity means in our work and then provide statistical evidence to support it. Due to space constraints, we have added a detailed description in Appendix A.6. In our paper, data scarcity refers to forecasting targets in the test set being unseen during pretraining, with no historical data available at that stage. This severely limits direct modeling of their relationships. To tackle this, SQGEF uses a large volume of heterogeneous datasets from related regions or entities, despite lacking test target historical records. To quantify this scarcity, we compare the data volume between the pretraining and test stages across three metrics: total data points, number of temporal points, and temporal duration.
|Experiment|Pretrain Set Data Points|Test Set Data Points|Pretrain Set Time Points (Span)|Test Set Time Points (Span)|
|-|-|-|-|-|
|China Province|8,651,704|37,062|581 (22 years)|1,278 (2 years)|
|Europe Country|8,650,752|39,618|528 (22 years)|1,278 (2 years)|
|China City|8,687,814|962|1,806 (22 years)|37 (37 years)|

From this table, it is evident that the test set contains significantly fewer data points than the pretraining set across all experiments. For China Province and Europe Country, the test set's temporal duration is notably short (2 years). In the China City experiment, the test set has an extremely limited number of temporal records (37 points), indicating sparse sampling. Collectively, these statistics highlight the severe data scarcity in the test set, both in terms of quantity and coverage. Moreover, Table 2 in our paper shows that this scarcity goes beyond data volume to informational richness. Models using only the scarce historical data of forecasting targets consistently lag behind SQGEF, which combines this limited target data with richer, more plentiful data from related grids and entities during pretraining.
Summary: This paper aims to integrate diverse heterogeneous datasets to address the data scarcity problem in scientific research. The authors propose the Segment Quadtree Geographical Embedding Framework with a novel data structure called the Segment Quadtree. The framework employs two learning strategies: capture interactions at multiple levels from grid datasets, and extract nested relationships and human-defined boundary information from entity datasets. As a result, SQGEF enhances spatio-temporal forecasting for geographical entities across various regions and granularities, even for entities unseen in the training set. ## Rebuttal summary Data scarcity has been a fundamental issue in scientific research. To resolve this, this paper proposes integrating diverse heterogeneous datasets with a novel Segment Quadtree Geographical Embedding Framework. This framework is verified with comprehensive experiments. My initial concerns about experimental designs have been reasonably addressed after the rebuttal, so I have raised my score from 3 to 4. Claims And Evidence: The paper makes two main claims: 1. Heterogeneous datasets from different studies can provide complementary insights into the same underlying system. 2. SQGEF is designed to learn unified representations for multi-granularity entities, including those absent during training. The authors support these claims by using datasets from different sources within the same region. The improved performance on forecasting targets that were absent in the training data validates these claims. Overall, the claims are well-supported and convincing. Methods And Evaluation Criteria: Yes, they make sense. Theoretical Claims: I reviewed the proof in Section 4, and it is correct. Experimental Designs Or Analyses: The overall setting is comprehensive. The authors mainly support the claims from different perspectives. However, the baseline is outdated. The paper should include some more advanced baselines. 
Supplementary Material: I reviewed the source code provided in the supplementary materials. It includes the data processing and model implementation. Relation To Broader Scientific Literature: This paper addresses the data scarcity problem in the scientific data domain, which has broad applications in fields such as satellite data analysis, and remote sensing. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The paper addresses a meaningful and challenging problem: integrating heterogeneous datasets to tackle data scarcity in scientific research. 2. The methodology is well-motivated, and the Segment Quadtree data structure is innovative and interesting. 3. The paper provides a comprehensive set of experiments, addressing multiple research questions and demonstrating the framework's effectiveness across various scenarios. Weakness: 1. The baselines used in the experiments are really outdated. There are many more advanced baselines available. 2. The paper lacks a thorough analysis of the framework's efficiency and scalability. For example, there is no discussion of computational complexity, training time, or memory usage, which are critical for real-world applications. Other Comments Or Suggestions: No. Questions For Authors: 1. What is the performance of the framework on more advanced baselines, both for time series models and spatio-temporal models? 2. What is the computational complexity of the model? Can it generalize to larger datasets or higher-resolution grids? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for your thorough review! Your suggestion to include additional experiments and analysis will further strengthen our paper.

# Outdated baselines

Thank you for your suggestion! We have updated our experiments to include two state-of-the-art baselines: TimeXer (NeurIPS 2024) and iTransformer (ICLR 2024). The results demonstrate that our framework significantly enhances the performance of these cutting-edge baselines, underscoring its robustness and adaptability.

|Method|China Province MSE|China Province MAE|Europe Country MSE|Europe Country MAE|China City MSE|China City MAE|
|-|-|-|-|-|-|-|
|TimeXer|0.4027±0.0000|0.4894±0.0000|2.3852±0.0066|1.1684±0.0005|8.1542±0.7333|1.3078±0.0022|
|w/Seg|0.3569±0.0002|0.4596±0.0001|2.2058±0.0022|1.1266±0.0002|7.3635±0.3326|1.2838±0.0026|
|iTransformer|0.4772±0.0064|0.5004±0.0043|2.5433±0.0000|1.1808±0.0000|8.5086±0.4185|1.3474±0.0018|
|w/Seg|0.4319±0.0001|0.4790±0.0001|2.2671±0.0001|1.1454±0.0000|7.8494±0.0765|1.3240±0.0015|

# Complexity analysis

Thank you for your insightful review! We agree that including an efficiency analysis strengthens our paper and highlights the effectiveness of our method: it introduces only constant-level additional complexity to achieve a significant performance increase.

**Space Complexity:** Both SQGEF and the naive grid embedding method have a space complexity of O(N) (where N is the number of grid cells); SQGEF introduces only a constant-level overhead to store multi-level interaction information. In our Segment Quadtree, the total number of nodes is N (leaf level) + (1/4)N (next level up) + ... + (1/4)^h N, where h = ⌈log₄N⌉ is the tree height. This geometric series sums to O((4/3)N), which remains linear in N. Thus, our approach scales efficiently even with higher-resolution grids, incurring minimal additional storage costs.
**Training Stage Time Complexity:** While both SQGEF and the naive grid embedding method have a computational complexity of *O(N²)* (where *N* is the number of grid cells), SQGEF adds only a constant overhead to process multi-granularity data. While SQGEF models interactions across all granularity levels, the additional computational cost is minor relative to the substantial performance gains it delivers. The complexity comprises two parts: (1) constructing a graph to capture spatial relations between grids, and (2) computing temporal relations across timestamps. For the graph, the naive method's complexity is O(N²). In SQGEF, this becomes O(N² + (1/16)N² + ... + (1/16)^h N²) across tree levels, summing to O((16/15)N²), a slight increase over O(N²). Temporal relations are treated as a constant T, as no additional temporal computation is required beyond the input sequence. This modest overhead enables rich multi-level interactions, significantly enhancing forecasting accuracy, as evidenced by our experimental results.

**Inference Stage Time Complexity:** During inference, SQGEF outperforms the naive grid embedding method in efficiency. Using the segment tree query method [1] for a region [qX₁, qX₂] × [qY₁, qY₂], we recursively check overlaps with the current node's region [X₁, X₂] × [Y₁, Y₂]. With a tree height of O(log₄N) and up to 4 nodes visited per level (if the query spans quadrants), the total number of nodes visited is bounded by 4·O(log₄N) = O(log N) (since log₄N = (1/2)log₂N). This contrasts with the naive grid method's O(N) complexity, making our approach far more efficient.

**Summary:** During training, SQGEF maintains *O(N)* space complexity and *O(N²)* computational complexity, identical to the naive method, enabling the capture of multi-granularity relationships that yield significant performance improvements. In inference, our method achieves O(log N) complexity, versus O(N) for the naive approach, ensuring superior scalability and efficiency.
These trade-offs make SQGEF highly practical and advantageous, especially for numerous downstream spatio-temporal forecasting tasks, where its fast inference speed delivers substantial benefits across a wide range of applications. [1] Lee, D. T., & Preparata, F. P. (1984). *Computational Geometry: A Survey*. IEEE Transactions on Computers.
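The recursive region query described in the inference-stage analysis above, which returns a minimal set of mixed-level nodes rather than all leaves, could be sketched roughly as follows. Class names, bounds conventions, and the build routine are illustrative assumptions, not the SQGEF implementation:

```python
class Node:
    def __init__(self, x1, x2, y1, y2):
        self.x1, self.x2, self.y1, self.y2 = x1, x2, y1, y2  # inclusive cell bounds
        self.children = []

def build(x1, x2, y1, y2):
    """Recursively split a square grid region into four quadrants."""
    node = Node(x1, x2, y1, y2)
    if x1 < x2:  # split until single cells remain (assumes a square power-of-2 grid)
        mx, my = (x1 + x2) // 2, (y1 + y2) // 2
        node.children = [build(x1, mx, y1, my), build(mx + 1, x2, y1, my),
                         build(x1, mx, my + 1, y2), build(mx + 1, x2, my + 1, y2)]
    return node

def query(node, qx1, qx2, qy1, qy2):
    """Return the minimal set of nodes covering the query rectangle."""
    if qx2 < node.x1 or node.x2 < qx1 or qy2 < node.y1 or node.y2 < qy1:
        return []          # no overlap: prune this subtree
    if qx1 <= node.x1 <= node.x2 <= qx2 and qy1 <= node.y1 <= node.y2 <= qy2:
        return [node]      # fully covered: fuse this node's embedding as-is
    if not node.children:
        return [node]      # leaf with partial overlap
    return [n for c in node.children for n in query(c, qx1, qx2, qy1, qy2)]

root = build(0, 3, 0, 3)            # a 4x4 grid
covered = query(root, 0, 2, 0, 2)   # the 3x3 query fuses 6 mixed-level nodes, not 9 leaves
```

Only a few nodes per level intersect the query boundary, which is where the O(log N) visit count in the analysis comes from.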
Fisher-Guided Selective Forgetting: Mitigating Primacy Bias in Deep Reinforcement Learning
Reject
Summary: This paper proposes to address primacy bias (early experiences having an adverse effect on later training) in reinforcement learning by using a forgetting mechanism. The proposed method utilizes an update rule from the _machine unlearning_ literature to forget past experiences and reduce primacy bias. The updates utilize the empirical Fisher to scale Gaussian noise to be added to the weights. This method is evaluated on standard continuous control benchmark tasks. Claims And Evidence: - The proposed method seems to indeed outperform the base SAC agent on the benchmark tasks, but it is unclear whether it is better than adding simple Gaussian noise without rescaling by the empirical Fisher. In Fig.7, the shaded regions largely overlap so there is no clear difference in performance. This casts doubt on the importance of using the empirical Fisher. - Another concern is related to how resets do not seem to provide a benefit to the base SAC agent. See "experimental designs" section for more details. Methods And Evaluation Criteria: - Using the Deepmind Control Suite with SAC is a suitable benchmark. Adding resets to SAC is a reasonable baseline. - Other baselines from the plasticity loss literature should be tested. For example, Shrink-and-perturb [1], l2-init [2], Parseval regularization [3], Spectral regularization [4] to name a few choices. These methods all update weights to improve plasticity and reduce primacy bias (or add a regularization term to accomplish this) and would help assess the relative utility of the proposed method. [1] "On Warm-Starting Neural Network Training" Ash and Adams (2019) \ [2] "Maintaining Plasticity in Continual Learning via Regenerative Regularization" Kumar et al. (2023) \ [3] "Parseval Regularization for Continual Reinforcement Learning" Chung et al. (2024) \ [4] "Learning continually by spectral regularization" Lewandowski et al. 
(2024) Theoretical Claims: N/A Experimental Designs Or Analyses: The benchmark tasks and baseline may need to be chosen more carefully. In many of the tasks presented, resets are not providing any benefit over the baseline SAC agent. This undermines the main idea that there is primacy bias that needs to be addressed through the proposed method. There is a section on using higher replay ratios where there does seem to be more of a benefit to resets (suggesting primacy bias is present). It may be better to focus on this setting. In [1], we can see a larger benefit to resets with larger replay ratios. Alternatively, reconsidering which tasks to run and choosing those where resets provide some benefit could be an alternative approach. [1] "Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier" D'Oro et al. (2023) Supplementary Material: I looked at certain parts of the appendix regarding dormant neurons and size of updates. Relation To Broader Scientific Literature: This paper highlights an interesting opportunity to use ideas from the _machine unlearning_ literature to address primacy bias. The evaluation is currently not thorough enough to properly assess the proposed method, although there is some promise. Essential References Not Discussed: The paper "Directions of Curvature as an Explanation for Loss of Plasticity" Lewandowski et al. (2024) proposes a hypothesis closely related to the idea of the evolution of the Fisher information indicating primacy bias. In that paper, they discuss how a loss of curvature (as measured by the Hessian rank) can be linked to loss of plasticity (which is closely linked to primacy bias). In fact, they also check the spectrum of a matrix of gradient outer products (which is similar to the definition of the empirical Fisher). There is a direct relationship between the spectrum and the trace since the trace is the sum of the eigenvalues. 
Other Strengths And Weaknesses: - The idea of using methods from the machine unlearning literature is interesting and novel. Perhaps there could be more connections drawn there, or alternative methods from that space could be tried instead. - Generally, the writing is clear and the presentation is nice. - The paper would benefit from more exposition on the area of machine unlearning. Since this paper is written for an RL audience, a reader is likely to be unaware of these works. Currently, section 3.3 requires additional explanations. Explaining the loss defined there and some of the terms such as $P(S(w)|D)$ would be necessary. Other Comments Or Suggestions: - An analysis of the empirical Fisher could be interesting. In particular, looking at which dimensions it emphasizes would help identify which weights are considered to be forgotten or remembered. Comparing this to other common heuristics for weight importance, such as weight magnitude (related to the neuron importance measure in [1]), would also be informative. - Fig. 4 should have a legend to indicate the hyperparameter values for each curve. Minor questions: - What is $\sigma^2$ in the update? Is this estimated, or is it combined with $\lambda$ into a single hyperparameter? Clarify this in the paper. - Why plot max/min in shaded regions instead of confidence intervals? [1] "Continual Backprop: Stochastic Gradient Descent with Persistent Randomness" Dohare et al. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank Reviewer 738E for their detailed feedback. We address their main concerns below. ### FGSF vs Gaussian Noise: We appreciate the reviewer's analysis of the FGSF vs Gaussian noise comparison (Fig 7). **Ablation Goal:** This ablation showed general noise injection strategies offer benefits over baseline SAC, suggesting noise can combat plasticity loss/Primacy Bias (PB). **FGSF's Advantage:** While the average performance difference across environments seems marginal (Fig 7), FGSF's key benefit is its targeted, FIM-guided noise. This advantage is clearer in: * **High-PB Environments:** In tasks like Humanoid/Quadruped with strong PB (Fig 3/8), FGSF offers more consistent stability and often better peak performance than Gaussian noise (compare Fig 7 vs Fig 2). * **Importance of FIM:** The FIM allows strategic noise injection (adapting to network sensitivity/geometry), unlike simple Gaussian noise. This geometric insight is valuable. ### Experimental Design: We acknowledge the reset baseline's benefit over SAC isn't consistently strong at RR=1. **PB Strength Varies:** PB strength depends on task complexity and settings like RR. In simpler tasks or at RR=1, PB may be less pronounced, reducing the reset's observed benefit. **High RR Confirmation:** As the reviewer and D'Oro et al. (2023) note, PB is often exacerbated at higher RRs. Our RR=4 results (Fig 5) confirm this: baseline SAC degrades, the reset method shows some unstable benefit, and FGSF provides significant, stable improvements over both. This shows FGSF effectively addresses amplified PB. **Focus:** Focusing experiments on settings with strong PB (complex tasks or high RR) better highlights the problem FGSF solves. The RR=1 results were included as the standard from previous work (e.g., Haarnoja et al., 2018). ### Methods: We thank the reviewer for suggesting baselines (Shrink-and-perturb, l2-init, Parseval/Spectral reg). We acknowledge these related works. 
Comparing FGSF to these requires substantial effort. Our initial goal was introducing the FIM-geometric perspective and establishing viability against the reset baseline. ### Weaknesses: We agree more background on machine unlearning (Sec 3.3) is needed for the RL audience. We will expand this section in the camera-ready version to clarify the Forgetting Lagrangian ($L$), distributions $P(\cdot|\cdot)$, the KL term, and λ. ### Response to Minor Questions: **FIM Dimension Analysis:** Analyzing which dimensions the FIM emphasizes is interesting future work but beyond the current scope. **Fig 4 Legend:** We will add a legend indicating λ values to Figure 4. **$\sigma^2$ in Update:** $\sigma^2$ comes from the uncertainty term in Golatkar et al.'s derivation. In our FGSF algorithm (Eq. Sec 3.4), we tune λ. The $(\sigma^2)^{1/4}$ factor is absorbed into our effective λ. We will clarify this. **Min/Max Shading:** We used min/max shading (5 seeds) to show the full performance variability (worst/best cases). With few seeds (common in DRL), standard CIs can be unreliable, and performance distributions may not be normal. Min/max shading avoids distributional assumptions and shows the empirical boundaries found, offering valuable information in this low-sample context. We thank the reviewer again for their assessment. We hope these responses clarify FGSF's contributions, particularly where PB is pronounced. We will incorporate several improvements based on this feedback. We hope our clarifications provide grounds for reconsideration despite the Reject recommendation. Given space constraints, we welcome follow-up questions to address any remaining concerns in more detail. ### References: 1. Haarnoja, Tuomas, Aurick Zhou, Pieter Abbeel, and Sergey Levine (2018). “Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor.” 2. Golatkar, Aditya, Alessandro Achille, and Stefano Soatto (2020a). 
“Eternal sunshine of the spotless net: Selective forgetting in deep networks.” 3. D’Oro, Pierluca, Max Schwarzer, Evgenii Nikishin, Pierre-Luc Bacon, Marc G Bellemare, and Aaron Courville (2022). “Sample-efficient reinforcement learning by breaking the replay ratio barrier.” In: Deep Reinforcement Learning Workshop NeurIPS 2022. 4. Dohare, Shibhansh, J Fernando Hernandez-Garcia, Parash Rahman, A. Rupam Mahmood, and Richard S Sutton (2023). “Maintaining plasticity in deep continual learning.” 5. Hochreiter, Sepp and Jürgen Schmidhuber (1997). “Flat minima.” 6. Jastrzebski, Stanislaw, Devansh Arpit, Oliver Astrand, Giancarlo Kerg, Huan Wang, Caiming Xiong, Richard Socher, Kyunghyun Cho, and Krzysztof J Geras (2021). “Catastrophic fisher explosion: Early phase fisher matrix impacts generalization.” 7. Obando-Ceron, Johan, Aaron Courville, and Pablo Samuel Castro (2024). “In value-based deep reinforcement learning, a pruned network is a good network.” 8. Martens, James and Roger Grosse (2015). “Optimizing neural networks with kronecker-factored approximate curvature.”
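The FIM-guided noise injection discussed in this rebuttal, $S(w) = w + (\lambda\sigma^2)^{1/4} F^{-1/4}\epsilon$, could be sketched with a diagonal empirical-Fisher approximation as below. This is an illustrative reading under stated assumptions (flattened weights, diagonal Fisher from per-sample gradients, $\sigma^2$ absorbed into $\lambda$ as the rebuttal describes), not the authors' implementation:

```python
import numpy as np

# Toy stand-ins for a network's flattened weights and a batch of
# per-sample gradients (in practice these come from the agent's buffer).
rng = np.random.default_rng(0)
w = rng.normal(size=100)            # flattened network weights
grads = rng.normal(size=(32, 100))  # per-sample gradients, batch of 32

fisher_diag = (grads ** 2).mean(axis=0)  # diagonal empirical Fisher estimate
lam, stab = 1e-3, 1e-8                   # lambda (absorbs sigma^2), stabilizer (assumed)
noise = rng.standard_normal(w.shape)
w_new = w + lam ** 0.25 * (fisher_diag + stab) ** -0.25 * noise
# Weights with low Fisher values (less "important") receive larger perturbations,
# which is the selective-forgetting intuition behind the update.
```

The small `stab` term is an assumption added here to avoid dividing by a near-zero Fisher entry; the paper's treatment of that case is not specified in this thread.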
Summary: Deep reinforcement learning (RL) agents tend to overfit their early experiences, limiting their ability to learn from subsequent interactions. Nikishin et al. (2022) identified this phenomenon as primacy bias. While they highlighted the issue, their proposed solution—periodically resetting part of the network—is simple and lacks a mechanism to characterize primacy bias. To address these limitations, the authors propose using the trace of the Fisher Information Matrix (FIM) to characterize primacy bias. They also introduce Fisher-Guided Selective Forgetting (FGSF), a method to selectively update the weights to mitigate primacy bias. They demonstrate the efficacy of their method empirically on the DM Control Suite and conduct ablation studies to evaluate the effectiveness of its components. **References** - Nikishin, E., Schwarzer, M., D’Oro, P., Bacon, P. L., & Courville, A. (2022, June). The primacy bias in deep reinforcement learning. In International conference on machine learning (pp. 16828-16847). PMLR. Claims And Evidence: There are claims in the paper that are lacking in evidence and/or not clear to me. > Claim 1: The Trace of the Fisher Information Matrix (FIM) characterizes primacy bias. I do not see how the authors make this claim from Fig. 1. If anything, it seems that the trace of the FIM is unrelated to performance. While the trace of the FIM does give a sense of the overall sensitivity of the network’s parameters, there is no evidence in the learning curves indicating primacy bias! Quadruped doesn't seem to suffer from primacy bias. Is this observation correct? 
> Claim 2: Fisher-Guided Selective Forgetting (FGSF) is a principled mitigation strategy that relies on a geometric understanding of the primacy bias to selectively modify network weights I do not understand what the authors mean by principled here. In sub-section 3.4, they mention that the optimal forgetting procedure takes the form: $S(w) = w - B^{-1} \nabla L_{Dr}(w) + (\lambda \sigma^2)^{1/4} B^{-1/4} \epsilon$, where $B$ is the Hessian of the loss on the retained data, $\epsilon$ is standard Gaussian noise, and $\sigma^2$ represents the uncertainty. In sub-section 3.5, they propose their approach: $S(w) = w + (\lambda \sigma^2)^{1/4} F^{-1/4} \epsilon$, where $F$ is the empirically estimated FIM. Why is dropping a term and replacing the Hessian with the Fisher Information Matrix a good approximation? Can you elaborate further? Also, why is this principled? **References** - Golatkar, A., Achille, A., and Soatto, S. Eternal sunshine of the spotless net: Selective forgetting in deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9304–9312, 2020a. Methods And Evaluation Criteria: I have some concerns about the choice of competitors used for comparison against FGSF: 1. Have the authors considered methods from recent studies on the loss of plasticity in deep learning? How would these methods perform in this setting? Specifically, have you considered approaches such as (1) Continual Backprop (Dohare et al., 2024), (2) Layer Norm + Weight Decay (Lyle et al., 2024), and (3) ReDo (Sokar et al., 2023)? 2. While the authors discuss primacy bias, their environments do not exhibit a strong "priming" effect. I suggest adding 1-2 additional environments with inherent non-stationarity to better demonstrate the advantages of their approach. My intuition is that FGSF would outperform its competitors in such settings. **References** - Dohare, S., Hernandez-Garcia, J. F., Lan, Q., Rahman, P., Mahmood, A. R., & Sutton, R. S. (2024). 
Loss of plasticity in deep continual learning. Nature, 632(8026), 768–774. - Lyle, C., Zheng, Z., Khetarpal, K., Martens, J., van Hasselt, H. P., Pascanu, R., & Dabney, W. (2024). Normalization and effective learning rates in reinforcement learning. Advances in Neural Information Processing Systems, 37, 106440-106473. - Sokar, G., Agarwal, R., Castro, P. S., & Evci, U. (2023, July). The dormant neuron phenomenon in deep reinforcement learning. In International Conference on Machine Learning (pp. 32145-32168). PMLR. Theoretical Claims: N/A Experimental Designs Or Analyses: - The plots intended to motivate the use of the trace of the FIM for characterizing primacy bias are unconvincing. Figure 1 does not provide strong support, and Figure 3 is even more problematic, as this metric suggests that the proposed method, FGSF, performs worse than the baseline SAC on average in terms of Tr(FIM) - In Fig. 9, the authors claim that "... FGSF maintains consistently lower update magnitudes (local delta) throughout training (typically stabilizing between 0.5 and 0.7), with smoother trajectories compared to the higher values and more pronounced spikes observed in baseline SAC". This is not true! It's true on a couple of environments and false on others My main concern is that it's unclear why I should consider using FGSF. It doesn't seem to help maintain a healthy Tr(F) or improve performance. Can the authors clarify why this method is important? Supplementary Material: I looked carefully at Appendix A and B. Relation To Broader Scientific Literature: This paper fits with the broader class of papers on primacy bias in deep RL and loss of plasticity in continual learning literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I believe this work addresses an important issue in deep RL, and I encourage the authors to invest more effort in their writing and experiment selection to better convey their ideas. 
This has the potential to be a very interesting line of research. I also like their idea to leverage FIM as a tool to characterize the local geometry of the parameter space and measure network sensitivity. Other Comments Or Suggestions: If the authors provide additional empirical evidence—such as comparisons with relevant competitors on 2/3 environments—during the rebuttal and address my other concerns, I would be willing to raise my score. Typos: - In Line 121, the references are missing the year - The first line of subsection 3.1 is incomplete - "The FIM is a fundamental concept in information geometry that quantifies the amount of information a random variable carries about an unknown parameter". There is no mention of the relationship to the parameter (e.g., $\theta$) upon which the probability of the random variable $X$ depends. - Line 228 (left) is a repetition of line 211 (right) Questions For Authors: 1. Have the authors considered methods proposed in recent papers on the loss of plasticity in deep learning (mentioned in Methods And Evaluation Criteria)? How would those methods perform in this context? 2. Is primacy bias a manifestation of the loss of plasticity? 3. How do the authors propose extending their method to on-policy methods, such as PPO? 4. Are large replay buffers necessary for FGSF to be effective? Is FGSF reliant on forgetting and re-learning frequently enough to achieve good performance? 5. The authors mention using a Savitzky-Golay filter in Subsection 3.2 to calculate the trace of the FIM. Could they elaborate on why this specific method was chosen? 6. I need some clarification regarding the algorithm pseudocode: a) In lines 7-9, is the agent scrubbing the data sampled from the replay buffer? b) What does N represent— the size of the replay buffer or the size of the mini-batch? 7. $\lambda \sim [5 \times 10^{-6}, 5 \times 10^{-8}]$: These values are extremely small. Is it due to the magnitude of the FIM? 
If so, normalizing the observations, features, and rewards could help (Vasan et al., 2024). 8. In Figure 10, why does FGSF have so many dormant neurons? How is this not affecting performance? **References** - Vasan, G., Elsayed, M., Azimi, S. A., He, J., Shahriar, F., Bellinger, C., ... & Mahmood, R. (2024). Deep policy gradient methods without batch updates, target networks, or replay buffers. Advances in Neural Information Processing Systems, 37, 845-891. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer LhZa for their constructive criticism. **Claim 1** This follows from our response to Reviewer PPSq regarding FIM interpretation (Weakness 3). Figure 1 primarily illustrates the two-phase pattern described in Section 3.2. The core evidence linking Tr(F) dynamics to PB and performance comes from: * **The Two-Phase PB Signature**: As detailed in Sec 3.2, PB manifests as an initial sharp increase in Tr(F), indicating rapid overfitting to early data, followed by a decrease. * **Correlation with Performance**: Figures 3 & 8 show that in environments like Quadruped where baseline SAC performs poorly due to PB, it exhibits an early Tr(F) spike. FGSF prevents this, maintaining a lower, controlled trace, and achieves significantly better performance. This aligns with findings linking high early Tr(F) to poor generalization and lower Tr(F) to better generalization via flatter minima (Jastrzebski et al., 2021). From these two points, it follows that Quadruped does suffer from PB. Therefore, the claim is not that Tr(F) directly equals performance, but that its dynamic pattern, particularly the problematic early spike, is a quantifiable signature of the PB phenomenon that correlates strongly with subsequent performance limitations. **Claim 2** * **Hessian → FIM**: Replacing the Hessian (B) with the empirical FIM is a standard and computationally feasible approximation in deep learning (Martens & Grosse, 2015). The FIM captures crucial second-order information related to the local geometry. * **Dropping $\nabla L_{Dr}$ Term**: This term from the original formulation is omitted in our adaptation for two key reasons specific to the periodic RL application: 1) The standard RL gradient update step already performs a similar parameter update based on the current batch loss. 2) Unlike one-shot unlearning, FGSF is applied periodically during training, making this specific term redundant. 
This adaptation makes the principled approach tractable and effective in DRL. **Environments** Would tasks like the bit-flipping problem (Dohare et al., 2024) be representative of what the reviewer has in mind? **Tr(FIM) Plots (Fig 1/3):** As clarified under Claim 1, the key insight from the FIM trace is not that "lower is always better globally", but that the pathological early spike in baseline methods (Fig 3/8) signifies PB-induced overfitting. **KL Divergence (Fig 9):** "... reveals that in complex environments, FGSF maintains consistently lower ..." This is true in complex environments (e.g., Humanoid). **Why Use FGSF?:** FGSF offers an approach to PB mitigation grounded in information geometry, moving beyond heuristic resets. Its importance lies in: (1) Providing a framework to characterize PB via FIM dynamics. (2) Demonstrating significant stability improvements over reset methods. (3) Achieving superior final performance in complex, PB-prone environments (Table 1, Fig 2). (4) Offering better sample efficiency than baseline SAC (Sec 5.1). **Methods for Plasticity Loss:** We acknowledge these methods and are planning comparisons. **PB vs Plasticity Loss:** They are deeply related. PB, the overfitting to early data, is arguably a primary cause or specific manifestation of the broader phenomenon of plasticity loss (the network becoming rigid and unable to adapt later). **Extension to On-Policy (PPO):** Since PPO uses batches collected on-policy, we would compute the empirical FIM using the current batch of data used for the policy/value update. The main difference would be adapting the coefficient λ for on-policy dynamics. **Large Replay Buffers:** FGSF does not inherently require large replay buffers; the FIM is computed on the mini-batch sampled for the standard update (see Q6b). **Savitzky-Golay Filter:** This filter was used only for visualization purposes. It's a standard technique in signal processing for estimating derivatives. **Pseudocode Clarification:** * a) Yes it is. 
We will clarify the notation. * b) N in the FIM equation represents the mini-batch size used for the update and FIM calculation **Small λ Values:** Yes, the small optimal values for λ are related to the large magnitude of the FIM trace. Normalizing inputs/outputs/rewards could potentially stabilize FIM calculations. This is an interesting avenue for future work. **Dormant Neurons (Fig 10):** Our current hypothesis (requiring further study) is that FGSF might retain neurons that become dormant but are still important for representing the learned function (potentially having high individual FIM contributions, thus receiving minimal perturbation from FGSF). Simply maximizing active neurons might not be optimal. This resonates with findings that network pruning can reduce PB susceptibility (Obando-Ceron et al., 2024). We hope these clarifications and our commitment to improving the paper warrant an increase in the score. Given space constraints, we welcome follow-up questions to address any remaining concerns in more detail. ### References See Reviewer 738E --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response, which addresses many of my questions. I have accordingly increased the score. I hope the authors will incorporate the feedback to improve clarity. I also highly recommend that the authors have a plan to compare their approach with relevant methods used to address plasticity loss. > **Environments** Would tasks like the bit-flipping problem (Dohare et al., 2024) be representative of what the reviewer has in mind? I didn't have a specific environment in mind. I was simply considering environments where your method would be well-suited. That said, both bit flipping and slippery Ant could be suitable. Tweaking existing DM Control environments with gradually changing settings could also be an option. I like this line of work and would like to see it succeed. 
I still have a couple of concerns and am willing to further increase my score if the authors can propose a reasonable plan to address the following: 1. Evaluation on suitable non-stationary environments. 2. Comparisons with methods proposed in plasticity loss research. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's continued engagement and the raised score. We are fully committed to incorporating the feedback to enhance the clarity and impact of our work. ## **Evaluation** Regarding the evaluation on suitable non-stationary environments, we are actively exploring the bit-flipping problem. Its reduced computational complexity allows us to isolate and focus on the non-stationary aspects, minimizing potential confounding variables. We are already observing this to be a clear and compelling environment that demonstrates FGSF's capabilities. ## **Comparison** For the comparison with methods proposed in plasticity loss research, we are currently integrating these techniques into our experimental framework. We are particularly interested in understanding how Continual Backprop, ReDo, and Layer Norm + Weight Decay compare to FGSF. Our preliminary explorations suggest that each method addresses the plasticity challenge from distinct angles, and we are eager to quantify their respective strengths and weaknesses within our problem setting further. Specifically, we are in the process of strengthening and confirming some of the findings that already appeared in our Appendices: * **Continual Backprop:** We have kept observing that, in line with the results shown in our ablation study (Sec 5.3, Fig 7), an unstructured noise injection strategy can already offer benefits which are, however, different from Continual Backprop's structured noise injections. The latter noise injections also result in differences with our FGSF's FIM-guided approach. 
* **ReDo:** Our further analysis of dormant neurons based on the results presented in Appendix B.2, Fig 10 keeps indicating a complex interplay between dormancy and performance. We are exploring how ReDo's approach to resetting dormant neurons contrasts with FGSF's nuanced handling of these neurons. * **Layer Norm + Weight Decay:** While this method represents a different approach, we are working to integrate it into our comparisons to better understand its performance relative to FGSF. Yet at the moment, we do not have any results for this. We understand the importance of these comparisons and are prioritizing their inclusion in our ongoing research. We are confident that these additional experiments will further solidify the contribution of FGSF and provide a comprehensive evaluation of its effectiveness. We hope that their integration within the paper will allow the reviewer to consider our contribution as a strong accept. We are grateful for the reviewer's encouragement and constructive feedback, and appreciate their engagement in the rebuttal process.
Summary: This paper addresses the primacy bias (PB) problem in deep reinforcement learning, where early experiences disproportionately influence the learning process. The authors propose a novel method called Fisher-Guided Selective Forgetting (FGSF) that leverages the Fisher Information Matrix (FIM) to selectively modify network weights and prevent early experiences from dominating. The paper presents a characterization of PB through FIM trace patterns, identifying memorization and reorganization phases during learning. The authors evaluate FGSF across DeepMind Control Suite environments, analyze its impact on actor and critic networks, and examine the role of replay ratios and noise injection. Results show that FGSF consistently outperforms baseline methods, particularly in complex environments. Claims And Evidence: The paper's claims about FGSF's effectiveness are generally supported by empirical evidence across multiple environments. The authors show performance improvements compared to baseline SAC and network resetting approaches, particularly in complex environments like Humanoid and Quadruped. However, the evidence would be more convincing if the authors had compared FGSF with other state-of-the-art PB mitigation techniques beyond just network resetting. The characterization of PB through FIM trace patterns is interesting but would benefit from more rigorous statistical validation. Methods And Evaluation Criteria: The proposed methods are well-motivated and the evaluation using the DeepMind Control Suite is appropriate for assessing reinforcement learning algorithms. The ablation studies examining critic-only versus full network scrubbing provide valuable insights into the method's mechanics. However, the evaluation would be strengthened by including comparisons with other advanced PB mitigation techniques, such as plasticity injection methods and self-distillation approaches mentioned in the related work section. 
Theoretical Claims: no Experimental Designs Or Analyses: The experimental design is generally sound, with appropriate baselines and ablation studies. The use of the FIM trace to characterize learning phases is novel and insightful. However, I noticed that the hyperparameter sensitivity analysis is somewhat limited, focusing primarily on the scrubbing coefficient λ while keeping the scrubbing frequency fixed at 10. A more comprehensive exploration of hyperparameter interactions would strengthen the analyses. Supplementary Material: no Relation To Broader Scientific Literature: The paper effectively connects ideas from information geometry, machine unlearning, and deep reinforcement learning. The authors appropriately contextualize their work within existing literature on primacy bias and acknowledge related approaches. Essential References Not Discussed: no Other Strengths And Weaknesses: Strengths include the novel perspective of using FIM for characterizing and addressing primacy bias, the extensive experimental evaluation across various environments, and the insightful ablation studies that uncover the differential impact on actor and critic networks. Other Comments Or Suggestions: The paper is generally well-written, but some figures (particularly Figures 3, 8, 11, and 12) contain too many subplots with small font sizes, making them difficult to interpret. The authors could consider reorganizing these figures or focusing on the most important environments for the main paper. Additionally, the explanation of the FIM approximation using EKFAC could be expanded to ensure reproducibility. Questions For Authors: Why did you limit your comparisons to only network resetting as a baseline PB mitigation technique, rather than including more recent approaches like plasticity injection or self-distillation? Including such comparisons would provide a more comprehensive evaluation of FGSF's relative effectiveness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer mZKQ for their positive assessment and constructive feedback. ### Response to Comparison with other SOTA PB methods: We agree that comparisons with additional recent Primacy Bias (`PB`) mitigation techniques (e.g., specific plasticity injection, self-distillation) would further contextualize `FGSF`'s performance and represent valuable future work. For this initial study introducing our Fisher Information Matrix (`FIM`)-guided approach, we focused on comparing against baseline Soft Actor-Critic (`SAC`) and the foundational periodic reset method (Nikishin et al., 2022). This allowed us to clearly establish the distinct advantages (e.g., stability, performance in complex `PB`-prone tasks) offered by `FGSF`'s principled, geometrically-motivated mechanism compared to a fundamental heuristic baseline. ### Response to Hyperparameter Sensitivity: We focused our sensitivity analysis (Fig 4) on the scrubbing coefficient `λ`, which directly controls the intervention magnitude (Sec 3.3/3.4). We kept the scrubbing frequency `F` fixed because magnitude (`λ`) and frequency (`F`) are inherently coupled; higher frequency generally necessitates lower magnitude for stability. Isolating `λ` allowed a clearer and simpler analysis of its primary effect. While our results demonstrate robustness across a practical range of `λ` values, we agree that explicitly studying the interaction between `λ` and `F` is an important direction for future investigation. ### Response to Other Comments (Figure Readability and EKFAC Explanation): We thank the reviewer for the constructive suggestions regarding presentation. * **Figures**: We agree that Figures 3, 8, 11, and 12 are dense due to presenting results across multiple environments for completeness. 
In the camera-ready version, we will revise these figures to improve readability, potentially by increasing font sizes, focusing the main paper plots on the most illustrative environments (e.g., `Humanoid`, `Quadruped`, `Pendulum`), and moving others to the appendix if necessary. * **EKFAC**: We apologize for the brevity of the `EKFAC` explanation. We will expand this in the camera-ready version (likely in Sec 4 or Appendix), providing a clearer reference to the specific approximation used (George et al., 2021) and outlining the key implementation details relevant to `FGSF` to enhance reproducibility. We appreciate the reviewer's valuable feedback and positive evaluation. We hope these responses clarify our choices and the contributions of our work. Based on these clarifications, we hope the reviewer feels confident in maintaining or potentially strengthening their assessment. Given space constraints, we welcome follow-up questions to address any remaining concerns in more detail. ### References 1. George, Thomas (2021). “NNGeometry: easy and fast fisher information matrices and neural tangent kernels in PyTorch.” 2. Nikishin, Evgenii, Max Schwarzer, Pierluca D’Oro, Pierre-Luc Bacon, and Aaron Courville (2022). “The primacy bias in deep reinforcement learning.” In: International conference on machine learning. PMLR, pp. 16828–16847.
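As a rough illustration of the quantity at the center of this exchange, the empirical FIM trace can be estimated from per-sample score (log-likelihood gradient) vectors. The sketch below is a hedged, diagonal toy version for intuition only; the authors use the far more sophisticated EKFAC approximation, and the function name and shapes here are illustrative assumptions.

```python
import numpy as np

# Toy diagonal estimate of Tr(F): the average squared norm of per-sample
# log-likelihood gradients, Tr(F) ~ (1/N) * sum_i ||g_i||^2. The paper uses
# EKFAC (George, 2021); this sketch only conveys the quantity being tracked.
def fim_trace_estimate(per_sample_grads):
    g = np.atleast_2d(np.asarray(per_sample_grads, dtype=float))  # (N, P)
    return float(np.mean(np.sum(g * g, axis=1)))
```

A sharp early rise in this estimate during training corresponds to the "Memorization Phase" signature discussed in the rebuttals.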
Summary: This paper proposes that the Fisher information can be an indicator of plasticity loss during training. Based on the increasing and decreasing tendency of the trace of the Fisher information matrix, the authors propose a novel method for tackling plasticity loss by periodically adding noise to the weights. The scale of the noise is proportional to the inverse of the Fisher information, which means that weights with low Fisher information values can be largely perturbed to promote the plasticity of the network. In the experiments, there are some cases where the proposed method outperforms the baseline (e.g., reset) in DMC environments. Claims And Evidence: The claims are not convincing since the empirical evidence for the relationship between the FIM and plasticity loss cannot explain the experimental results. Methods And Evaluation Criteria: The evaluation criteria make sense. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experimental designs are valid. Supplementary Material: I checked the supplementary materials. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths 1. Different from the claims in previous methods (e.g., ReDo [1], InFeR [2]), this paper points out that the degree of Fisher information can be an indicator for determining plasticity loss. Weaknesses 1. Compared to the baseline (i.e., reset), the overall performance of FGSF is somewhat incremental. Furthermore, in the case of the Acrobot task, applying FGSF rather hinders learning the task. 2. There are no experimental results when the replay ratio (RR) is high for other tasks. Since FGSF already achieves better performance on the Quadruped task, it is not persuasive to show only the results for the Quadruped task. 3. It is unclear whether using the FIM as an indicator of plasticity loss is well-founded. 
In the Quadruped task, the FIM value is much lower than that of SAC, but the overall performance is much better than SAC's. How can we explain this result? When the FIM value is low, the scale factor for perturbing the weights is high, which promotes forgetting to prevent plasticity loss. Therefore, if FGSF is applied properly, the FIM value should be larger than that of SAC. Furthermore, in the case of Acrobot, the overall tendency is totally different from Quadruped. Can FGSF be applied generally across all tasks? [1] Sokar et al., The Dormant Neuron Phenomenon in Deep Reinforcement Learning, ICML, 2023 [2] Lyle et al., Understanding and Preventing Capacity Loss in Reinforcement Learning, ICLR, 2022 Other Comments Or Suggestions: Already mentioned in the above section. Questions For Authors: Already mentioned in the Strengths And Weaknesses section. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank Reviewer PPSq for their insightful feedback. We address the weaknesses and questions below. ### Weakness 1: While final reward gains may seem incremental in some tasks vs reset peak, FGSF offers significant advantages: * **Learning Stability**: Avoids reset's drastic periodic performance drops (Fig 2). * **Sample Efficiency**: Often faster convergence vs baseline SAC (e.g., 20-30% fewer steps in complex environments, Fig 2). * **Targeted Improvement**: Substantial gains in complex, high-PB environments (Humanoid, Quadruped, Table 1, Fig 2), aligning with our PB characterization (Sec 3.2). Regarding Acrobot failure: We acknowledge FGSF (and baseline/reset) struggled on Acrobot. We attribute this to shared hyperparameters from Haarnoja et al. (2018) (used for fair comparison), under which prior work also reported difficulties (e.g., Nikishin et al., 2022). This suggests task sensitivity or hyperparameter issues common to these methods, rather than a failure of the PB mitigation strategy itself, as PB (characterized by the two-phase FIM pattern) is less dominant in this simpler task. ### Weakness 2: Figure 5 focuses on Quadruped for high replay ratio (RR) analysis because our FIM trace analysis (Fig 1, 5.3, Sec 3.2) indicates it has one of the strongest Primacy Bias (PB) signatures (most pronounced 'Memorization phase' FIM spike for baseline SAC). This makes it the most relevant and challenging test case for evaluating PB mitigation where this phenomenon is amplified (high RR). Due to computational demands and the submission timeline, we focused resources on this representative case. Our internal preliminary results on other environments were consistent with RR=1 findings and the detailed Quadruped analysis: FGSF maintained better stability and performance compared to baseline SAC, particularly where the PB signature was originally evident. 
The core finding presented – that FGSF's advantage is amplified when PB is exacerbated by high RR – was thus robustly demonstrated in the most critical scenario and supported by initial findings elsewhere. We will add more results, confirming these insights, in the camera-ready version. ### Weakness 3: The reviewer raises excellent questions about interpreting the FIM trace dynamics and the generality of FGSF. This is a central part of our contribution and we welcome this opportunity to clarify our approach by explicitly linking this to our PB characterization framework (Sec 3.2): * **FIM Trace and Performance (Quadruped)**: Our framework identifies PB via a two-phase FIM trace pattern: an initial, high-sensitivity 'Memorization Phase' (rapid Tr(F) increase) followed by a 'Reorganization Phase' (Tr(F) decrease). The first phase, where the network rapidly overfits early data, is particularly problematic. In Quadruped, baseline SAC clearly exhibits this pathological Memorization Phase with a massive Tr(F) spike (Fig 3), correlating with poor final performance due to PB. FGSF's success comes from suppressing this detrimental Memorization Phase. The resulting lower Tr(F) observed in FGSF is evidence that it prevents the extreme sensitivity and overfitting characteristic of early PB stages. This aligns with findings (Hochreiter & Schmidhuber 1997; Jastrzebski et al. 2021) linking lower early Tr(F) and flatter minima to better generalization. Thus, FGSF's controlled FIM trace leads to superior performance by avoiding the PB-induced pathological state. * **Adaptive Perturbation Mechanism**: The reviewer's intuition is correct: FGSF's $FIM^{-1/4}$ scaling means more perturbation is applied when Tr(F) is low (i.e., sensitivity is low). This is crucial for maintaining plasticity after FGSF has successfully navigated past (or prevented) the damaging Memorization Phase. It keeps the network adaptable without succumbing to early overfitting. 
We note that the goal isn't necessarily a higher final Tr(F) than the baseline, but rather achieving a healthy state via a controlled trajectory that avoids the initial Tr(F) explosion. * **Acrobot and Generality**: The different FIM dynamics in Acrobot (Fig 2/3) highlight that the PB signature itself (the strong two-phase pattern) is task-dependent. In simpler tasks, the intense Memorization Phase characteristic of PB might be absent or significantly weaker. As FGSF is designed to mitigate PB by intervening based on these FIM dynamics, it naturally has less impact when the PB signature it targets is not the primary factor behind poor performance. The principle of FGSF is general, but its effectiveness as a PB mitigation tool directly correlates with the presence and strength of the PB phenomenon as characterized by our paper. We hope these clarifications address the points raised and demonstrate our work's value and novelty. We ask the reviewer to consider these clarifications in the final evaluation. Given space constraints, we welcome follow-up questions to address any remaining concerns in more detail. ### References See Reviewer 738E --- Rebuttal Comment 1.1: Comment: Thank you for the authors' effort on the rebuttal comments. I carefully read the comments, and some notions of FGSF have become clearer. However, I still wonder whether the contribution of FGSF is strong. Though the authors explained the intuition behind using the Fisher information for diagnosing the primacy bias, I think the overall tendency is still inconsistent with the intuition. For the above reason, I will keep my score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer for carefully reading our comments and taking the time to reassess our work. 
We kindly ask whether it would be possible to further elaborate on the exact reasons, possibly with some examples, that lead the reviewer to believe there is an inconsistency between the intuition driving our FGSF approach and its empirical performance.
REG: Rectified Gradient Guidance for Conditional Diffusion Models
Accept (poster)
Summary: This paper studies the foundations of classifier-free guidance (CFG). It finds that the common interpretation of scaling the marginal distribution $p(x_t \vert y)$, i.e., using guidance to change sampling trajectories, is not theoretically grounded, as it is impossible to construct a DDPM process that corresponds to the scaled distribution. The paper then proposes to scale the joint distribution $p(x_{0:T} \vert y)$, which can be proved to produce a valid DDPM process. Within this new framework, CFG can be treated as a special approximation. The paper then proposes a new form of approximation to the theoretically grounded formulation, i.e., REG, and demonstrates its effectiveness on 1D and 2D synthetic data as well as high-dimensional image synthesis tasks, i.e., ImageNet generation. Claims And Evidence: The paper is well-written and the claims are well-supported. I have one question about Sec. 3's "Common Interpretation of Guidance": I do not think that the original CG or CFG only scales the marginal distribution of $p(x_0 \vert y)$. However, the authors state > (L161 left) CG and CFG rewards are explicitly stated in (Ho & Salimans, 2022) Can the authors point me to the specific part where **marginal distribution** scaling is mentioned in the CFG paper? Methods And Evaluation Criteria: The evaluations are solid and thorough. Theoretical Claims: I read through all the proofs and found they are detailed and easy to follow. Experimental Designs Or Analyses: The experiments are extensive and convincing. Supplementary Material: I read through all of the supplementary materials. One question: I do not get the issue with the highlighted parts in Fig. 8 - 10, as I personally find that the images of "w/o REG" look reasonable. Can the authors elaborate more? Actually, I am wondering whether it would be better to display more than one sample and show that in general, REG performs better. 
Relation To Broader Scientific Literature: The paper falls in the recent community effort to fundamentally understand the mechanism of CFG. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is solid in theory and thorough in experiments. I do not find a major weakness in the paper besides my questions spread in other sections. Other Comments Or Suggestions: N/A Questions For Authors: People have applied CFG to flow-matching-based approaches by modifying the velocity functions [a] and the velocity has close connections to score functions. Thus I am wondering whether REG could be easily applied to a model based on flow matching. 1. If the answer is affirmative, I am wondering whether the authors have the resources to apply REG to models based on flow matching, e.g., Stable Diffusion 3. This would further enhance the impact of the paper. 2. If the answer is negative, what would be the difficulty? [a] Ma et al., SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers. ECCV 2024. Code Of Conduct: Affirmed. Overall Recommendation: 4
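As background for the marginal-scaling question discussed in this thread, the standard CG and CFG update rules can be written as follows (generic notation that may not match the paper's exact symbols; $w$ denotes the guidance weight):

```latex
% Classifier guidance (CG): tilt the conditional score with a classifier gradient
\nabla_{x_t} \log \tilde{p}_t(x_t \mid y)
  = \nabla_{x_t} \log p_t(x_t \mid y) + w \, \nabla_{x_t} \log p_\phi(y \mid x_t)

% Classifier-free guidance (CFG): the noise-prediction form of Ho & Salimans (2022)
\tilde{\epsilon}_\theta(x_t, y)
  = (1 + w) \, \epsilon_\theta(x_t, y) - w \, \epsilon_\theta(x_t)
```

The point at issue, as summarized in the review above, is that interpreting these updates as sampling from a tilted marginal $p_t(x_t \mid y)\,R_t(x_t, y)$ at every timestep $t$ is not consistent with any valid DDPM reverse process, even though the update rules themselves are what practitioners implement.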
Rebuttal 1: Rebuttal: Thank you so much for acknowledging our contribution and the constructive feedback. Below we address each concern raised. --- **Q1.** Where is marginal distribution scaling mentioned in the CFG paper? **A1.** Thank you for the great question. As a quick recap, Section II of our paper explains the theoretical pitfall of guidance. We begin by examining the case of scaling only the terminal marginal distribution (Eq. (5)) and then generalize to scaling all marginal distributions (Eq. (8)). The latter is a stricter goal that subsumes the former. We show that neither is theoretically well-justified. Strictly speaking, Ho & Salimans (2022) state that classifier guidance (CG) leads to sampling from scaled versions of all marginal distributions—see the second equation under Algorithm 1 on page 4. This aligns with our Eq. (8)–(9), where their reward term $p_\theta(c | z_\lambda)^w$ corresponds to our notation $R_t(x_t, y) = p_\phi(y | x_t)^w$ in Eq. (9). They also express the CFG reward using $p^i(c | z_\lambda)$ in the second-to-last paragraph before Section 4. Thus, the CG and CFG reward forms are indeed explicitly stated there. For presentation clarity, we choose to start with the simpler case of scaling only the terminal marginal (Eq. (5)), but we acknowledge that Eq. (8) is the original formulation in Ho & Salimans (2022). We will revise the text and footnote accordingly to make our claim more accurate. --- **Q2.** The qualitative visualizations in Fig. 8-10. **A2.** Thanks for the great question. To elaborate: in Figure 8, the knight’s gesture while holding the sword appears unnatural, and the horse’s legs are blurry in the baseline images. In Figure 9, there are black artifacts in the sky (not birds), and the lower part of the tree suffers from low contrast in the baseline images. In Figure 10, the dog’s paws are unnatural, and the sunglasses are deformed in the baseline images. We agree with the reviewer that these differences might be subtle. 
For additional context, it might be helpful to compare against qualitative results reported in other works; see, for example, Figure 6-7 of [1], Figure 4 of [2], Figure 1 of [3], and [4]. To better address the concern, we have followed the suggestions and added multiple samples from the same method for a given prompt. These can be found in File 1 and File 2 at this [anonymous link](https://anonymous.4open.science/r/icml-1655-rebuttal). Please note that due to OpenReview rebuttal format constraints, we are unable to include figures directly in this response. However, anonymous links are permitted under ICML guidelines, and we will include these visualizations in the revised manuscript. --- **Q3.** Apply REG to flow-matching based diffusion models. **A3.** Thank you for the insightful question. We agree that evaluating REG in the context of flow-matching models would further enhance the impact of our work. Due to time and resource constraints—particularly the large model size (SDv3 has over 2 billion parameters) and its lengthy inference time—we were unable to run experiments on SD v3. However, we conduct extra experiments on SD v2.1, a velocity-parameterized diffusion model. This setup was also requested by other reviewers and, we believe, is relevant to the current question as well. Namely, while velocity-parameterized models are not identical to flow-matching approaches, they are closely related. In fact, the distinction between them can often be seen as a subtle difference in the noise schedule [5]. We perform a grid search over guidance weights for different methods with SD v2.1 and report the best FID $\downarrow$ and CLIP $\uparrow$ scores for each method in the table below. We will perform thorough evaluations on REG for flow-matching based generative models and report them in the final version if the paper is accepted. 
| | (CLIP, FID) w/ REG | (CLIP, FID) w/o REG | |-|-|-| | Linear CFG | (31.62, 27.83) | (31.40, 28.94) | | Cosine CFG | (31.48, 23.32) | (31.72, 24.54) | --- **References** [1] Tuomas Kynkäänniemi et al., 'Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models,' Neurips, 2024. [2] Tero Karras et al., 'Guiding a Diffusion Model with a Bad Version of Itself,' Neurips, 2024. [3] Tianwei Yin et al., 'One-step Diffusion with Distribution Matching Distillation,' CVPR, 2024. [4] Xi Wang et al., 'Analysis of Classifier-Free Guidance Weight Schedulers,' TMLR, 2024. [5] Ruiqi Gao et al., 'Diffusion Meets Flow Matching: Two Sides of the Same Coin', https://diffusionflow.github.io/. --- Rebuttal Comment 1.1: Comment: I thank the authors for their time and effort in addressing my questions and concerns. I carefully read the other reviews and the rebuttal, I would like to keep my positive rating for this work for its solid theoretical principles and superior performance. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the positive feedback and for recognizing the value of our work. We will make further efforts to improve the clarity and quality of the paper according to the suggestions.
Summary: This paper introduces Rectified Gradient Guidance (REG) to improve conditional generation in diffusion models. The paper also provides the theoretical foundation of the proposed guidance method. Experiments show the proposed method enhances the quality of generated images. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, the theorems in section 4. Experimental Designs Or Analyses: Yes Supplementary Material: Yes, the authors provide the code. Relation To Broader Scientific Literature: The proposed Rectified Gradient Guidance is related to guidance techniques in diffusion models, like classifier guidance and classifier-free guidance. The paper replaces the scaled marginal distribution target with a valid scaled joint distribution objective, aligning with the theoretical motivations of guidance methods. This theoretical advancement builds on prior works that have explored the statistical theory and mathematical foundations of conditional diffusion models. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and easy to follow. 2. The paper provides theoretical insight into the limitations of current guidance methods and proposes a solution to improve the performance of existing guidance methods. 3. The authors conduct extensive experiments to justify the effectiveness of the proposed method. Weaknesses: 1. The paper lacks reporting on inference time requirements. The authors should clarify whether the proposed REG introduces computational overhead compared to baseline methods. 2. The visualization results presented in the paper are insufficient to fully demonstrate the method's effectiveness. Additional qualitative examples across diverse scenarios would strengthen the paper. 3. For the 1D and 2D experiments, the evaluation would be more convincing if conducted on standard public datasets rather than custom data. 
The reported accuracy for 2D generation also appears quite low. Other Comments Or Suggestions: None Questions For Authors: 1. In Table 3, why do most of the guidance methods exhibit worse performance than vanilla CFG, particularly in the context of the SD model? 2. The same problem also appears in the DiT-XL model. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you so much for acknowledging our contribution and the constructive feedback. --- **Q1.** Runtime and memory cost. **A1.** Thanks for the great question. The tables below summarize runtime and peak memory usage of CFG and REG on a single NVIDIA A40 GPU. Runtime is reported using example batch sizes, while memory is measured with batch size 1 to isolate per-image cost. Since REG introduces one extra gradient computation on top of vanilla CFG, a moderate increase in runtime and memory usage is expected. Similar inference-time gradient calculations have also been explored in Universal Guidance (Arpit Bansal et al., 2024), albeit in a different context. We emphasize that our main contribution lies in correcting CFG theory, and REG serves as an empirical validation. Its practical deployment depends on the specific application and acceptable overhead. We will include these tables in the updated paper. | Model | Resol. | Batch Size | CFG/REG Runtime (sec) | Increase (x) | |-|-|-|-|-| | EDM2-S | 64 | 8 | 25.96 / 42.99 | 1.66 | | DiT-XL/2 | 256 | 8 | 59.79 / 94.23 | 1.58 | | EDM2-S | 512 | 8 | 46.14 / 62.87 | 1.36 | | EDM2-XXL | 512 | 8 | 49.21 / 92.60 | 1.88 | | SD-V1.4 | 512 | 4 | 32.63 / 39.54 | 1.21 | | SD-V2.1 | 768 | 4 | 36.55 / 59.76 | 1.64 | | SD-XL | 1024 | 2 | 47.48 / 74.52 | 1.57 | | Model | Resol. | CFG / REG GPU Peak Mem (GB) | Increase (x) | |-|-|-|-| | EDM2-S | 64 | 0.87 / 1.49 | 1.71 | | DiT-XL/2 | 256 | 4.15 / 5.01 | 1.21 | | EDM2-S | 512 | 1.19 / 1.81 | 1.52 | | EDM2-XXL | 512 | 4.59 / 7.31 | 1.59 | | SD-V1.4 | 512 | 2.73 / 4.39 | 1.61 | | SD-V2.1 | 768 | 2.72 / 6.51 | 2.39 | | SD-XL | 1024 | 6.91 / 19.49 | 2.82 | --- **Q2.** Extra visualization results. **A2.** Thanks for the constructive remark. We respectfully point out that Figures 1, 2, 5, 8, 9, and 10 already provide qualitative visualizations for synthetic 1D/2D cases and real benchmarks. 
To address this concern, extra visualizations (e.g., on a standard 2D toy dataset, text-to-image, and class-conditioned generation) have been produced and added at this [anonymous link](https://anonymous.4open.science/r/icml-1655-rebuttal). Please note that due to OpenReview rebuttal format constraints, we are unable to include figures directly in this response. However, anonymous links are permitted under ICML guidelines, and we will include these visualizations in the revised manuscript. --- **Q3.** Standard public 1D and 2D datasets and 2D generation accuracy. **A3.** Thanks for the helpful feedback. Our 1D setup follows standards from prior works, such as Interval Guidance (Kynkäänniemi et al., 2024) and Autoguidance (Karras et al., 2024). For 2D, we use custom shapes with fine-grained structures, more challenging than standard 2D toy datasets like “two moons” or “Swiss rolls.” Extra experiments on standard 2D datasets have been performed and can be accessed via this [anonymous link](https://anonymous.4open.science/r/icml-1655-rebuttal) (anonymous links are permitted by ICML guidelines). Regarding generation accuracy, we clarify that Figure 2(b) matches the target in Figure 2(a) reasonably well. We acknowledge that the results may not be perfect, as we use a relatively small diffusion model in order to compute the golden gradient $\nabla \log E_t$ efficiently. We also clarify that the metric in Table 1 indicates how often REG achieves lower error compared to no REG; a value above 50% indicates consistent improvement. --- **Q4.** Cosine and linear CFG perform worse than vanilla in SD models and DiT-XL/2. **A4.** Thanks for the great question. We first want to clarify that our experiments are designed to verify whether REG can enhance a given guidance method. This argument has been validated by the results shown in Tables 2 and 3. Our aim is not to compare “cosine + REG” vs. “vanilla CFG,” since such a comparison conflates the base method’s performance with the effect of REG. 
Hence, we believe the essential question here is why cosine and linear CFG perform worse than vanilla CFG in SD models and DiT-XL/2 (they do perform better in EDM2). This trend is consistent with results shown in related works, such as the analysis of CFG weight schedulers in Xi Wang et al., TMLR 2024: both our Figure 4 (right) and their Figure 7(c) show that vanilla CFG achieves the lowest FID, linear CFG shifts slightly upward, and cosine CFG shifts it further upward in CLIP-FID space. Two factors likely contribute: (i) all these methods are heuristic, and their effectiveness can vary across model architectures and datasets; (ii) linear and cosine CFG require tuning two hyperparameters, while vanilla CFG uses only one—making linear and cosine CFG more expensive and less robust to optimize. We will include this clarification in the updated paper.
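For readers comparing the schedulers discussed above, here is a minimal sketch of how vanilla, linear, and cosine CFG weights are commonly parameterized. This assumes a normalized time s ∈ [0, 1] running from pure noise to clean data and a peak weight w_max; exact scheduler definitions vary across papers, so this is illustrative rather than the implementation used in the paper under review.

```python
import math

def vanilla_cfg_weight(s, w_max):
    # Constant guidance weight at every denoising step.
    return w_max

def linear_cfg_weight(s, w_max):
    # Weight ramps linearly from 0 (pure noise, s = 0) to w_max (clean data, s = 1).
    return w_max * s

def cosine_cfg_weight(s, w_max):
    # Smooth ramp: near 0 early in sampling, approaching w_max late.
    return w_max * (1.0 - math.cos(math.pi * s)) / 2.0

def guided_eps(eps_uncond, eps_cond, w):
    # Standard CFG combination of unconditional and conditional noise predictions.
    return eps_uncond + w * (eps_cond - eps_uncond)
```

With a fixed w, `guided_eps` reduces to the familiar vanilla CFG update; the linear and cosine variants only change how w evolves along the sampling trajectory, which is why they carry a second hyperparameter (the schedule shape) on top of the peak weight, consistent with factor (ii) in the rebuttal above.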
Summary: This paper addresses the discrepancy between the theoretical motivation and practical implementation of guidance techniques in conditional diffusion models. The authors propose a new method called Rectified Gradient Guidance (REG) to improve the performance of existing guidance methods. The main findings of the paper include the identification of a significant gap between the theoretical derivations and practical implementations of current guidance techniques. The authors demonstrate that the commonly used marginal scaling approach is theoretically invalid and propose a joint distribution scaling objective as a valid alternative. The key algorithmic contribution is the introduction of REG, which incorporates a novel correction term into existing guidance methods. This correction term is derived from a theoretical analysis of the optimal guidance solution and is designed to better approximate this optimal solution under practical constraints. The main results show that REG provides a better approximation to the optimal solution than prior guidance techniques in 1D and 2D experiments. Extensive experiments on class-conditional ImageNet and text-to-image generation tasks demonstrate that incorporating REG consistently improves Fréchet Inception Distance (FID) and Inception/CLIP scores across various settings compared to its absence. The conceptual contribution lies in establishing a unified theoretical framework for understanding guidance techniques in conditional diffusion models. The authors theoretically prove the invalidity of marginal scaling and demonstrate that established guidance implementations are approximations to the optimal solution with quantified error bounds. The practical contribution is that REG is shown to be compatible with various guidance techniques and diffusion model architectures. 
The method can be easily integrated into existing diffusion pipelines and consistently enhances performance without requiring significant computational overhead. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. The authors provide theoretical analysis, mathematical derivations, and extensive experimental results to validate their claims. 1. The discrepancy between theoretical motivation and practical implementation of guidance techniques is demonstrated through detailed analysis of the marginal scaling approach and its limitations (Section 3). The authors show mathematically why marginal scaling is invalid and how it conflicts with the constraints of diffusion models. 2. The claim that established guidance implementations are approximations to the optimal solution is supported by Theorem 4.1, which establishes the optimal solution under joint scaling, and Theorems 4.2 and 4.3, which quantify the approximation error of current methods. The authors clearly show the gap between current practices and the optimal solution. 3. The effectiveness of REG in 1D and 2D synthetic examples is demonstrated through visual comparisons (Figure 1 and Figure 2) and quantitative win ratios (Table 1). These results clearly show that REG provides a better approximation to the optimal solution than previous methods. 4. The improvement in performance on class-conditional ImageNet and text-to-image generation tasks is supported by comprehensive quantitative results (Tables 2 and 3) across multiple model architectures and guidance techniques. The Pareto front analyses (Figures 3 and 4) further demonstrate the consistent improvement provided by REG. 5. The compatibility of REG with various guidance techniques and diffusion model architectures is shown through experiments with different models (DiT, EDM2, SD-v1-4, SD-XL) and guidance methods (vanilla CFG, cosine CFG, linear CFG, interval CFG, AutoG). 
The implementation details confirm that REG can be easily integrated into existing pipelines. Methods And Evaluation Criteria: The proposed Rectified Gradient Guidance (REG) method and the evaluation criteria utilized in this paper are both well-suited for addressing the problem of improving guidance techniques in conditional diffusion models. REG directly targets the identified theoretical-practical gap in existing guidance methods by introducing a theoretically justified correction term derived from optimal guidance solutions under joint distribution scaling. This approach is logical as it enhances established guidance techniques rather than replacing them entirely, ensuring compatibility with various existing methods and model architectures. The evaluation criteria, including benchmark datasets like ImageNet and COCO, along with performance metrics such as FID, IS, and CLIP score, provide a comprehensive assessment of REG's effectiveness. The combination of synthetic and real-world datasets, together with both quantitative metrics and qualitative visual comparisons, ensures a thorough evaluation of the method's impact across different scenarios and applications. Theoretical Claims: I've carefully examined the theoretical claims and proofs presented in this paper, focusing on the key theoretical contributions that form the foundation of the proposed REG method. The theoretical framework begins by identifying a critical issue in existing guidance techniques: the discrepancy between the theoretical motivation based on marginal distribution scaling and the practical implementation. The authors demonstrate that marginal scaling is theoretically invalid due to the constraints of the diffusion model's denoising process. This is established through a detailed analysis of the reverse denoising process and the implications of attempting to scale marginal distributions at different time steps. 
The paper then introduces the concept of joint distribution scaling as a valid alternative. Theorem 4.1 is central to this argument, establishing the form of the optimal noise prediction network under joint scaling. The proof of this theorem is rigorous and follows logically from the definition of the scaled joint distribution objective. The authors demonstrate the existence and uniqueness of the transition kernels corresponding to the scaled joint distribution, and derive the form of the updated noise prediction network. This theorem provides the theoretical justification for the REG method. Building on this foundation, Theorems 4.2 and 4.3 analyze the approximation error of existing guidance methods compared to the optimal solution. These proofs are technically sound and provide quantitative bounds on the approximation error. The analysis considers the practical constraints of guidance implementation and shows how the lack of future foresight affects the accuracy of guidance signals. The proofs involve careful application of mathematical analysis and probability theory, considering the properties of the diffusion process and the guidance rewards. The derivation of the REG correction term is also well-supported theoretically. The authors start from the optimal guidance equation and make reasonable approximations to derive a practical correction term that can be implemented in existing diffusion frameworks. The chain rule application and Jacobian simplifications are justified, and the empirical validation in the experiments supports the effectiveness of these approximations. Experimental Designs Or Analyses: The experimental designs and analyses in this paper are generally sound and valid, providing comprehensive support for the proposed REG method across different scenarios and applications. For the 1D and 2D synthetic examples, the experimental design effectively demonstrates REG's improvement in a controlled setting where ground truth can be computed. 
The comparisons are fair and isolate the effect of the guidance method, with win ratios and visual comparisons providing clear evidence of REG's effectiveness. In class-conditional ImageNet generation, the evaluation is comprehensive, testing across multiple resolutions and model architectures. The use of standard metrics like FID and IS is appropriate, and the Pareto front analysis effectively shows how REG improves the efficiency of guidance techniques. The consistent improvement across different models and guidance methods strengthens the validity of the claims. For text-to-image generation on COCO-2017, the experimental design is appropriate, evaluating REG with different model architectures and relevant metrics. The qualitative examples complement the quantitative results, providing additional evidence of improved generation quality. While the experimental designs are robust, there are minor areas where additional analysis could enhance the work. More detailed analysis of computational overhead, especially for larger models, would be beneficial. Additional ablation studies could better isolate the contributions of different components of REG. Including more diverse and challenging prompts in text-to-image generation would further demonstrate REG's robustness. Comparing REG against other recently proposed guidance enhancements would also be valuable. Supplementary Material: I reviewed several key parts of the supplementary material that were crucial for fully understanding and evaluating the paper's contributions: 1. **Appendix A: Remarks on Score Function Formula** - This section provided important details about the score function formula used in diffusion models, clarifying its validity across different time steps and formulations. This was essential for understanding the theoretical foundation of the guidance methods and the paper's critique of marginal scaling. 2. 
**Appendix B: Invalid Marginal Scaling** - This part contained the proof showing how marginal scaling leads to implicit determination of previous time steps' rewards, supporting the paper's claim about the invalidity of marginal scaling approaches. 3. **Appendix C: Scaling the Joint Distribution - Proof of Theorem 4.1** - I carefully reviewed this section as it provided the rigorous mathematical proof for the paper's central theoretical contribution - the optimal noise prediction network under joint distribution scaling. 4. **Appendix D: Approximation Error - Proofs of Theorems 4.2 and 4.3** - These proofs were critical for understanding the quantitative analysis of the approximation error in existing guidance methods and how REG addresses these issues. 5. **Appendix E: Experimental Settings and Additional Numerical Results** - This section contained detailed information about the experimental setups, including model architectures, training parameters, and additional results. It was particularly helpful for assessing the reproducibility and robustness of the experimental findings. Relation To Broader Scientific Literature: REG can be seen as an evolution of classifier-free guidance (Ho & Salimans, 2022) with a theoretically motivated correction term. Unlike previous enhancement techniques like AutoG (Karras et al., 2024a), which requires identifying a "bad" model version, REG provides a general correction applicable to various guidance methods without additional model training or complex setup. Essential References Not Discussed: None Other Strengths And Weaknesses: # Strengths 1. The paper demonstrates originality by identifying and addressing a fundamental theoretical-practical gap in guidance techniques for diffusion models. While classifier guidance and classifier-free guidance have become standard approaches, this work provides a novel theoretical framework that re-examines their foundations. 
The identification of the invalidity of marginal scaling and the proposal of joint distribution scaling represent creative advances that build upon but significantly extend previous work. 2. The improvements demonstrated by REG across multiple benchmarks and model architectures indicate substantial practical significance. For conditional diffusion models, which are widely used in applications like image generation, text-to-image synthesis, and video generation, even small improvements in guidance techniques can lead to meaningful enhancements in output quality and diversity. The versatility of REG, as shown in the experiments, suggests it could become a standard enhancement in future diffusion model implementations. 3. The paper is well-written and structured logically. The theoretical sections build upon each other in a coherent manner, making complex concepts accessible. The experimental results are presented clearly with appropriate visualizations and quantitative analyses. The appendices provide additional details that enhance reproducibility and understanding. # Weaknesses 1. While the paper mentions that REG introduces minor computational overhead, a more detailed analysis of the additional computational requirements would be beneficial, especially for larger models and in production settings. This could help practitioners better understand the trade-offs when implementing REG. 2. The paper demonstrates REG's effectiveness across several model architectures and guidance methods, but a more extensive analysis of its performance across a broader range of architectures and applications would strengthen the claims of generalizability. Additionally, testing on more diverse and challenging datasets beyond ImageNet and COCO could provide further insight into its robustness. 3. The paper could benefit from a more comprehensive comparison against other recently proposed guidance enhancements. 
This would help establish REG's position relative to other state-of-the-art methods and highlight its unique advantages. 4. While REG is presented as a versatile enhancement, the implementation details, particularly regarding the correction term, might require careful tuning and understanding of the underlying diffusion framework. Providing more implementation guidance or open-source code could lower the barrier to adoption for practitioners. Other Comments Or Suggestions: 1. In Section 3, when discussing the invalidity of marginal scaling, adding a brief intuitive explanation alongside the mathematical proofs might help readers better grasp the concept. 2. In the experimental sections, providing a summary table that compares the performance of REG across different guidance methods and model architectures could help readers quickly see the consistent improvements. 3. Including more specific implementation details about the REG correction term, particularly regarding computational considerations, would be helpful for practitioners looking to implement the method. 4. While the paper mentions computational overhead, a more detailed discussion of potential limitations, such as increased memory requirements or compatibility with certain model architectures, would provide a more complete picture. Questions For Authors: The paper mentions that REG introduces minor computational overhead but doesn't provide specific details. Could you quantify the additional computational requirements of REG compared to standard guidance methods, especially for larger models like SD-XL? This would help clarify practical trade-offs. If the overhead is substantial, it might affect the practical significance despite performance improvements. The experiments demonstrate REG's effectiveness across several model architectures, but how does REG perform with score-based generative models that use different parameterizations or sampling schemes? 
Understanding its performance on architectures beyond those tested would clarify its generalizability. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you so much for acknowledging our contribution and the constructive feedback. --- **Q1.** Runtime and memory cost. **A1.** Thanks for the great question. The tables below summarize runtime and peak memory usage of CFG and REG on a single NVIDIA A40 GPU. Runtime is reported using example batch sizes, while memory is measured with batch size 1 to isolate per-image cost. Since REG introduces one extra gradient computation on top of vanilla CFG, a moderate increase in runtime and memory usage is expected. Similar inference-time gradient calculations have also been explored in Universal Guidance (Arpit Bansal et al., 2024), albeit in a different context. We emphasize that our main contribution lies in correcting the CFG theory, and REG serves as an empirical validation. Its practical deployment depends on the specific application and acceptable overhead. We will include these tables in the updated paper. | Model | Resol. | Batch Size | CFG/REG Runtime (sec) | Increase (x) | |-|-|-|-|-| | EDM2-S | 64 | 8 | 25.96 / 42.99 | 1.66 | | DiT-XL/2 | 256 | 8 | 59.79 / 94.23 | 1.58 | | EDM2-S | 512 | 8 | 46.14 / 62.87 | 1.36 | | EDM2-XXL | 512 | 8 | 49.21 / 92.60 | 1.88 | | SD-V1.4 | 512 | 4 | 32.63 / 39.54 | 1.21 | | SD-V2.1 | 768 | 4 | 36.55 / 59.76 | 1.64 | | SD-XL | 1024 | 2 | 47.48 / 74.52 | 1.57 | | Model | Resol. | CFG / REG GPU Peak Mem (GB) | Increase (x) | |-|-|-|-| | EDM2-S | 64 | 0.87 / 1.49 | 1.71 | | DiT-XL/2 | 256 | 4.15 / 5.01 | 1.21 | | EDM2-S | 512 | 1.19 / 1.81 | 1.52 | | EDM2-XXL | 512 | 4.59 / 7.31 | 1.59 | | SD-V1.4 | 512 | 2.73 / 4.39 | 1.61 | | SD-V2.1 | 768 | 2.72 / 6.51 | 2.39 | | SD-XL | 1024 | 6.91 / 19.49 | 2.82 | --- **Q2.** Experiments on diverse model architectures, applications, and datasets. **A2.** Thanks for the valuable feedback. To address the architecture concern, we conduct extra experiments using SD-V2.1, a velocity-parametrized diffusion model; results can be found in A3 of Reviewer BefU. 
Below is a summary of the models used, covering a wide range of settings. Regarding applications and datasets, we respectfully clarify that our choices are **consistent with current standards in the literature**. For example, DiT (William Peebles and Saining Xie, 2023) and EDM2 (Tero Karras et al., 2024) primarily conduct experiments on ImageNet. In addition to ImageNet, Interval Guidance (Kynkäänniemi et al., 2024) and Autoguidance (Tero Karras et al., 2024) perform qualitative experiments on text-to-image tasks using SD models. Due to limited time and resource constraints, we will explore additional datasets in future work.

| Model | DiT-XL/2 | EDM2-S | EDM2-XXL | SD-v1-4 | SD-XL | SD-V2.1 |
|-|-|-|-|-|-|-|
| # Params | 675 M | 280 M | 1.5 B | 860 M | 2.6 B | 865 M |
| Sampler | 250-step DDPM | 2nd Heun | 2nd Heun | PNDM | Euler Discrete | PNDM |
| Parametrization | epsilon | x0 | x0 | epsilon | epsilon | velocity |
| Architecture | Transformer | U-Net | U-Net | U-Net | U-Net | U-Net |

---

**Q3.** Comparison with recent guidance enhancements.

**A3.** Thanks for the constructive suggestion. We respectfully note that the proposed REG method has already been compared with SOTA guidance techniques, such as Interval Guidance (Kynkäänniemi et al., 2024) and Autoguidance (Tero Karras et al., 2024), both of which are strong and recent baselines. As shown in Table I and Figure 7, REG is still able to improve upon these methods. We greatly appreciate suggestions from the reviewer on any specific missing guidance methods that we should compare to.

---

**Q4.** REG implementation details and open-source code.

**A4.** Thanks for the constructive remark. We will open-source our code to ensure full reproducibility, and add implementation details in the updated paper.

---

**Q5.** Other comments: (i) Add an intuitive explanation for marginal scaling in Section 3. (ii) Add a summary table of results and architectures in the numerical result section.
(iii) REG with different sampling schemes and different parametrizations.

**A5.** Thanks for the constructive remarks. (i) We will update our manuscript accordingly. (ii) We respectfully point out that Tables 2 and 3 have already included all numerical results, and Table 4 (in the supplementary) has summarized the model architectures. (iii) We respectfully note that our experiments already cover a wide range of settings. Please refer to the summary table in A2 for details.
Summary: This submission focuses on demystifying the classifier-free guidance for diffusion models. CFG has proven to be essential for the success of diffusion models. However, recent literature noted that the guided score function does not correspond to the forward diffusion process. In this work, the authors identify the source of the discrepancy and then introduce guidance for the joint distribution. This results in guidance relying on the expected reward at $x_0$ at every timestep of sampling. Using such formulation directly would be computationally prohibitive, as it would require completing the denoising process to the terminal state, at every timestep. However, under mild assumptions, which can be met in practice, a convenient approximation is proposed, which the authors call “rectified gradient guidance”. REG comes at the cost of computing the diagonal of the Jacobian of the denoiser. The proposed method is supported by strong evidence in toy, controlled scenarios, as well as for state-of-the-art class- and text-conditional image generation diffusion models. In addition, the authors show the standard CFG can be seen as an approximation of REG, and characterize the approximation error. ## update after rebuttal My assessment remains positive after the rebuttal. The authors engaged in the discussion and provided additional details that were requested. Claims And Evidence: Both the claims regarding theoretical results, as well as, claims about the effectiveness of the proposed method are supported by appropriate evidence. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have reviewed the proofs. I focused on understanding what technique the authors used for each of the proofs. I have not thoroughly checked all derivations. I have no reason to believe they are incorrect. Experimental Designs Or Analyses: Yes. The synthetic experiments in section 5.1. show that the proposed relaxation has a smaller error than the alternative which is vanilla CFG. 
Image generation experiments in section 5.2. show that this translates to quantitative and qualitative evaluation for SOTA models. There are no issues. Supplementary Material: I reviewed the Appendix. Relation To Broader Scientific Literature: This submission continues on the path of demystifying, and trying to understand CFG. I agree with the authors that their work complements previous findings in this field. I am inclined to believe that this is the explanation we have been looking for. Essential References Not Discussed: Nothing that was available at the time of submission. The authors may find [1] interesting if they didn’t know it already, [1] Pavasovic, Krunoslav Lehman, et al. "Understanding Classifier-Free Guidance: High-Dimensional Theory and Non-Linear Generalizations." arXiv preprint arXiv:2502.07849 (2025). Other Strengths And Weaknesses: This is a strong submission. Other Comments Or Suggestions: - It would be interesting to include samples with the golden guidance in Figure 2, given that it is already computed. - In Sec 5.2 none of the models uses v-prediction. Not that I expect a different behavior for such a model, but for an even stronger message it could be included. Questions For Authors: 1. L243 right column “assuming that the Jacobian matrix is diagonally dominant” - is diagonal dominance enough? I think that the assumption here is a diagonal Jacobian. 2. On the approximation of eq 22 with eq 21: wouldn’t the efficient vector-jacobian-product be applicable here? This would eliminate the assumption mentioned in the question above. 3. What is the computational overhead of the proposed method compared to vanilla CFG? Despite the use of approximation, I suspect that the additional computational and memory cost is significant for any reasonably sized model. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you so much for acknowledging our contribution and the constructive feedback.

---

**Q1.** Reference [1]: Krunoslav Pavasovic et al., arXiv 2025.

**A1.** Thanks for bringing [1] to our attention. We are aware of this work and will include it in our references. As the reviewer kindly noted, this paper was published on arXiv on February 11th, after the ICML submission deadline. [1] shows that while CFG may reduce diversity in low-dimensional settings, it becomes effective in high-dimensional regimes due to a "blessing of dimensionality." The authors identify two phases: an early phase where CFG aids class selection, and a later phase where it has minimal impact. They propose non-linear CFG variants that deactivate in the second phase, improving quality and diversity without extra computational cost.

---

**Q2.** Include samples with the golden guidance in Figure 2.

**A2.** Thanks for the great question. We respectfully clarify that Figure 2(a) already displays the golden/target samples that we aim to generate, and Figure 2(d) presents the golden guidance $\nabla \log E_t$. To the best of our understanding, these are all the golden cases available. We are more than happy to include any missing visualizations to address this concern.

---

**Q3.** Extra results on diffusion models using velocity parametrization.

**A3.** Thanks for the constructive feedback. We conduct extra experiments on SD-V2.1, a velocity-parametrized text-to-image diffusion model. Due to time and resource constraints, we perform a grid search over guidance weights and report the best FID $\downarrow$ and CLIP $\uparrow$ scores for each method in the table below. A more thorough evaluation (e.g., a full FID-CLIP curve) will be included in the final version if accepted.

| | (CLIP, FID) w/ REG | (CLIP, FID) w/o REG |
|-|-|-|
| Linear CFG | (31.62, 27.83) | (31.40, 28.94) |
| Cosine CFG | (31.48, 23.32) | (31.72, 24.54) |

---

**Q4.** Diagonal Jacobian matrix assumption.
**A4.** Thanks for the constructive remark. We agree that a diagonal Jacobian is more precise. We originally referred to a diagonally dominant Jacobian for ease of exposition. From Eq. (22) to Eq. (21), three approximations are introduced, as detailed in Lines 220–224 on the right side of the paper, with the Jacobian assumption being the third. The first two approximations are essential—(2) is self-explanatory, and (1) is explained in our response to Q5. Given these, the transition from Eq. (22) to Eq. (21) is already approximate, regardless of whether the Jacobian is diagonal or diagonally dominant. We will revise the text to refer to a diagonal Jacobian where appropriate.

---

**Q5.** Apply Vector-Jacobian-Product (VJP) in Eq. (22).

**A5.** Thanks for the great question. In short, the application of VJP requires that the function $\log R_0(\hat{x}_0, y)$ be differentiable with respect to $\hat{x}_0$. However, in current CFG-like frameworks, $R_0(\cdot, y)$ is not arbitrary—it is specifically defined as shown in the second line of Eq. (6). This definition renders $\log R_0(\cdot, y)$ non-differentiable with respect to its first argument (or very costly to evaluate). Consequently, VJP is not applicable in this case. Note that to address it, we use the chain rule and approximate $\nabla_{\hat{x}_0} \log R_0(\hat{x}_0, y)$ with $\nabla_{x_t} \log R_t(x_t, y)$, which can be further simplified via Eq. (7) and (9).

---

**Q6.** Runtime and memory cost.

**A6.** Thanks for the great question. The tables below summarize the runtime and peak memory usage of CFG and REG on a single NVIDIA A40 GPU. Runtime is reported using example batch sizes, while memory is measured with batch size 1 to isolate per-image cost. As expected, REG introduces a moderate overhead due to the extra gradient computation. We will add these tables in the updated paper.

| Model | Resol. | Batch Size | CFG/REG Runtime (sec) | Increase (x) |
|-|-|-|-|-|
| EDM2-S | 64 | 8 | 25.96 / 42.99 | 1.66 |
| DiT-XL/2 | 256 | 8 | 59.79 / 94.23 | 1.58 |
| EDM2-S | 512 | 8 | 46.14 / 62.87 | 1.36 |
| EDM2-XXL | 512 | 8 | 49.21 / 92.60 | 1.88 |
| SD-V1.4 | 512 | 4 | 32.63 / 39.54 | 1.21 |
| SD-V2.1 | 768 | 4 | 36.55 / 59.76 | 1.64 |
| SD-XL | 1024 | 2 | 47.48 / 74.52 | 1.57 |

| Model | Resol. | CFG / REG GPU Peak Mem (GB) | Increase (x) |
|-|-|-|-|
| EDM2-S | 64 | 0.87 / 1.49 | 1.71 |
| DiT-XL/2 | 256 | 4.15 / 5.01 | 1.21 |
| EDM2-S | 512 | 1.19 / 1.81 | 1.52 |
| EDM2-XXL | 512 | 4.59 / 7.31 | 1.59 |
| SD-V1.4 | 512 | 2.73 / 4.39 | 1.61 |
| SD-V2.1 | 768 | 2.72 / 6.51 | 2.39 |
| SD-XL | 1024 | 6.91 / 19.49 | 2.82 |

---

Rebuttal Comment 1.1: Comment: I thank the authors for a very well-organized reply and the additional information provided. Please find my additional questions/comments below:

1. A&Q 2 Would it be possible to generate samples with the golden guidance from Figure 2(d)? Or is this exactly what we see in Figure 2(a)?
2. A&Q 5 What I meant was using jacobian-vector-product (not VJP) in eq 22, using the last approximation used in your reply. I even implemented it for the toy model provided in the supplementary material, and it seems to work fine.
3. Additional comment As mentioned by other reviewers, it would be great if you could release the code for all the experiments.

---

Reply to Comment 1.1.1: Comment: Thank you for acknowledging our responses and the thoughtful follow-up questions.

---

**1. Response to Q&A 2**

Thank you for the clarification — we now better understand the question. It indeed looks similar to Figure 2(a), so we have omitted it since Figure 2 already has 8 columns.

---

**2. Response to Q&A 5**

Thank you for the clarification, the insightful suggestions, and even trying to implement it in our supplementary toy code.
We now understand that the reviewer is referring to the Jacobian-Vector Product (JVP) --- applying JVP to the final line of Eq. (22) after we approximate $\nabla_{\hat{x}_0} \log R_0(\hat{x}_0, y)$ with $\nabla_{x_t} \log R_t(x_t, y)$. We agree with the reviewer that using JVP is a valid and promising approach here. It can also eliminate the need for the diagonal Jacobian assumption. Taking CFG as an example and using our notation, we know $-\sqrt{1-\bar{\alpha}_t}\nabla_{x_t} \log R_t(x_t, y)=\epsilon_t(x_t,y,t) - \epsilon_t(x_t,t)$. This suggests that the JVP for the last line of Eq. (22) can be written roughly in the following pseudocode:

```python
# net(xt, y, t) is the trained conditional noise prediction network
uncond_pred = net(xt, None, t)  # unconditional noise prediction without labels
cond_pred = net(xt, y, t)       # conditional noise prediction with labels
# jvp(f, primals, tangents) returns (f(primals), J_f(primals) @ tangents),
# e.g., torch.func.jvp; here the tangent is the guidance direction
_, tangent_out = jvp(lambda x: net(x, y, t), (xt,), (cond_pred - uncond_pred,))
pred = cond_pred + w * tangent_out
```

We sincerely appreciate the reviewer’s insightful suggestion, which we had not considered in our original method. In our view, compared to our current implementation, the JVP-based formulation introduces one fewer approximation and therefore has the potential for theoretically even better performance. A complete evaluation requires testing JVP on our full experimental setup (i.e., class-conditioned ImageNet and text-to-image tasks), which cannot be completed within the rebuttal period, since we need to sweep the FID vs. IS (or FID vs. CLIP) curves. We will implement and evaluate the JVP-based variant and update the manuscript accordingly. Once the updated version is publicly available, we welcome any further feedback or suggestions from the reviewer. Finally, we want to thank the reviewer again for this important remark.

---

**3. Additional Comment**

Thank you so much for the feedback. We plan to release all the code for full reproducibility if the paper is accepted.
Additionally, we will incorporate all discussions from the rebuttal period into the updated manuscript. Most importantly, we will implement the JVP, examine it, and add its results.
Optimizing Large Language Model Training Using FP4 Quantization
Accept (poster)
Summary: This paper presents an FP4 training framework for large language models, which could potentially reduce the costs of LLM training. This work addresses the accuracy challenges caused by FP4 quantization with two key innovations: 1) Differentiable Gradient Estimator (DGE) for accurate weight updates and 2) Outlier Clamping and Compensation to tackle activation outliers. As a result, this framework maintains accuracy comparable to BF16 and FP8 while effectively scaling to models with up to 13B parameters. Claims And Evidence: Please refer to **“Methods And Evaluation Criteria”**. Methods And Evaluation Criteria: The accuracy evaluation and ablations appear to be plausible. It covers models with sizes from 1.3B to 13B and provides both training loss curves and downstream evaluations. However, to make the claim stronger, it would be better to have: 1) Perplexity (PPL) evaluation of the models trained with the proposed method and the BF16 baseline, as PPL is more sensitive and commonly used than downstream evaluations. 2) Efficiency evaluation for this paper is lacking. Although FP4 tensor cores are not available, some analysis/estimations regarding the speedups would be helpful. Theoretical Claims: I have checked the formulations for DGE in this paper. The overall derivation looks good to me. One question on Equation (8): Should there be a $\delta$ in the numerator of the constant term? Experimental Designs Or Analyses: Please refer to **“Methods And Evaluation Criteria”**. Supplementary Material: Yes. I have read the appendix of this paper, including the quantization kernel, weight/activation visualization, DGE details, etc. Relation To Broader Scientific Literature: The paper follows the framework of [FP8-LM](https://arxiv.org/pdf/2310.18313), targeting the accuracy issues regarding FP4 LLM training. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: [Strength] 1. The paper targets an important topic: efficient training of LLMs.
Despite room for further improvements, the evaluations and ablations appear to be plausible and comprehensive. 2. The DGE design in this paper seems interesting and effective for FP4 training. [Weakness] 1. It would be better if the authors could estimate the speedups in the paper. While the FP4 tensor cores may not be available, the estimation of theoretical speedups can be obtained with a kernel-level latency breakdown. For example, what are the latency proportions of FP8 GEMM kernels (for FP4 simulation) and quantization kernels in the training process? 2. Perplexity (PPL) evaluation of the models trained with the proposed method and the BF16 baseline is not provided. As PPL is more sensitive and commonly used than downstream evaluations, it would be helpful to include PPL evaluation. Other Comments Or Suggestions: N/A. Questions For Authors: 1. According to Figure 2, the input activations and weights are quantized from BF16 to FP4 on-the-fly. As a result, the proposed method in this paper should not be able to reduce memory consumption during the training process. Is this contradictory with the claim in the Impact Statement that “By significantly lowering computational and memory demands, FP4-based methods can …” ? 2. The authors mention that activation is quantized token-wise and weight is quantized channel-wise. I would like to confirm: does it mean that each token in the activation (each channel in the weight) has only one scaling factor? If so, is the per-channel/per-token max value calculation process included in the quantization kernel? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive feedback on our work. We appreciate your recognition of the framework’s potential and your insightful questions, which have helped clarify key aspects of our methodology and claims. Below, we address each point in detail.

**W1: Speedup Estimation and Kernel-Level Latency Breakdown**

A1: We completely agree with the importance of evaluating theoretical speedup. Since we employ a mixed-precision training framework, we analyze the impact of FP4 acceleration on each computational component of a standard Transformer layer. Given a hidden size *h*, batch size *b*, and sequence length *s*, the FLOP breakdown is as follows:

| Component | Subcomponent | FLOPs (FP32) | FLOPs (FP4) | Speedup Factor |
|-|-|-|-|-|
| Input Layernorm | | $4bsh$ | $4bsh$ | 1x |
| Multi-Head Attention | Query, Key, Value Projections | $6bsh^2$ | $1.5bsh^2$ | 4x |
| | Attention Scores Computation | $4bs^2h$ | $4bs^2h$ | 1x |
| | Softmax Computation | $bs^2h$ | $bs^2h$ | 1x |
| | Output Projection | $2bsh^2$ | $0.5bsh^2$ | 4x |
| Post-Attention Layernorm | | $4bsh$ | $4bsh$ | 1x |
| Feed-Forward Network (FFN) | Up Projection | $8bsh^2$ | $2bsh^2$ | 4x |
| | GeLU Activation | $28bsh$ | $28bsh$ | 1x |
| | Down Projection | $8bsh^2$ | $2bsh^2$ | 4x |
| Sum | | $24bsh^2+5bs^2h+36bsh$ | $6bsh^2+5bs^2h+36bsh$ | |

Since the backward pass requires approximately twice the FLOPs of the forward pass, the theoretical FP4 speedup (excluding DGE and OCC overhead) for a 7B model (h=4096, s=2048) is:

$$\frac{3*(24bsh^2+5bs^2h+36bsh)}{3*(6bsh^2+5bs^2h+36bsh)}=\frac{24h+5s+36}{6h+5s+36}=3.12$$

The reviewer also points out the importance of a kernel-level latency breakdown. Currently, we use FP8 GEMM kernels for all matrix multiplications in the above table, leading to a theoretical speedup of:

$$\frac{3*(24bsh^2+5bs^2h+36bsh)}{3*(12bsh^2+5bs^2h+36bsh)}=\frac{24h+5s+36}{12h+5s+36}=1.83$$

However, in practice, the framework of [FP8-LM](https://arxiv.org/pdf/2310.18313) reports only a 1.28× actual speedup for h=4096 (7B model).
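As a sanity check, the two theoretical ratios above can be reproduced with a few lines of Python (a minimal sketch; the helper name is ours, and `gemm_speedup` is 4 for native FP4 tensor cores and 2 for the FP8 GEMMs used in simulation):

```python
def theoretical_speedup(h, s, gemm_speedup):
    # Per-layer FLOP ratio from the breakdown above, with the common
    # factor b*s*h divided out; only the GEMM terms (24*b*s*h^2) accelerate.
    total = 24 * h + 5 * s + 36
    accelerated = 24 * h / gemm_speedup + 5 * s + 36
    return total / accelerated

print(round(theoretical_speedup(4096, 2048, 4), 2))  # FP4 tensor cores -> 3.12
print(round(theoretical_speedup(4096, 2048, 2), 2))  # FP8 simulation   -> 1.83
```

The factor of 3 for the backward/forward split cancels in the ratio, which is why it does not appear in the code.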
Our implementation incurs additional overhead due to extra precision casting from BF16 to FP4 (for simulation) and from FP4 to FP8 (for FP8 GEMM computation). These conversions are unnecessary with specialized FP4 hardware, where quantization could be fused into the GEMM kernel. So although we wrote the FP4 quantization kernel, it still causes heavy overhead in the training process, reducing the speedup from 1.28× to approximately 1.08× in real training.

**W2: Perplexity (PPL) Evaluation**

A2: We agree that PPL is a sensitive metric for language model training. Below, we present the evaluation results:

| Model | Precision | Lambada_openai | Lambada_standard | Pile10k | Wikitext | Average |
|-|-|-|-|-|-|-|
| 1.3B | FP4 | 14.98 | 25.10 | 82.77 | 26.65 | **37.38** |
| 1.3B | BF16 | 15.33 | 23.07 | 82.52 | 26.51 | **36.86** |
| 7B | FP4 | 14.34 | 23.33 | 77.72 | 24.86 | **35.06** |
| 7B | BF16 | 14.29 | 24.42 | 78.42 | 25.36 | **35.62** |
| 13B | FP4 | 12.42 | 22.45 | 75.06 | 24.81 | **33.69** |
| 13B | BF16 | 13.67 | 21.62 | 75.84 | 24.83 | **33.99** |

The results demonstrate that FP4 models achieve comparable or even slightly lower PPL than BF16 models. As expected, larger models achieve lower perplexity under the same training token budget.

**Q3: For Equation (8), should there be a $\delta$ in the numerator of the constant term?**

A3: Yes, it was a typo, and we're really sorry! We sincerely appreciate your attention to detail and will correct this in the final version.

**Q4: Memory Consumption vs. Impact Statement Claim**

A4: We appreciate the reviewer’s feedback on this potentially misleading claim. While our experiments confirm reduced GPU memory usage compared to BF16, this reduction primarily results from:
1. FP8 optimizer states in mixed-precision optimizers.
2. FP8 activation storage (from the FP8-LM framework).

The current FP4 online quantization strategy does not reduce GPU memory. Further reductions would require FP4 optimizer states and activation storage, which we leave for future work.
To prevent confusion, we will remove the memory claim from the Impact Statement in the final version. **Q5: Quantization Granularity** A5: Yes, each token (activation) or each channel (weight) has a single scaling factor. Currently the provided quantization kernel in Appendix A only covers the quantization of the scaled tensor, and per-channel/per-token max value computation is not yet fused into the kernel. We sincerely appreciate the reviewer’s insightful feedback, which has strengthened both our analysis and the clarity of our claims. We will incorporate all necessary revisions into the final manuscript.
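To illustrate the granularity discussed in A5, a per-token quantization step with a single absmax scaling factor per token might look like the following plain-Python sketch (the function name and grid constant are ours; we assume the standard signed FP4 E2M1 value set with maximum magnitude 6, and the per-token max computation shown here is the part not yet fused into the kernel):

```python
# Signed FP4 (E2M1) representable values, assumed to have max magnitude 6.
FP4_LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
FP4_GRID = sorted({s * v for v in FP4_LEVELS for s in (-1.0, 1.0)})

def quantize_per_token(token):
    # One scaling factor per token: map the token's absmax onto the largest
    # FP4 magnitude, then round each entry to the nearest grid value.
    scale = max(abs(v) for v in token) / FP4_LEVELS[-1] or 1.0
    quantized = [min(FP4_GRID, key=lambda g: abs(v / scale - g)) * scale
                 for v in token]
    return quantized, scale
```

Values that already sit on the scaled grid round-trip up to floating-point error; weights would use the same routine per output channel instead of per token.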
Summary: The paper tackles the challenging problem of FP4 training for LLMs. They propose two innovations: a differentiable quantization estimator (DGE) for precise weight gradient updates and an Outlier Clamping and Compensation (OCC) strategy for activations. The authors also provide extensive experiments across model scales (up to 13B parameters) showing that the proposed FP4 method achieves accuracy comparable to traditional FP8 and BF16 baselines. ## Update after rebuttal I maintain my original score. I am generally satisfied with the authors’ response. Claims And Evidence: The paper effectively supports its claims, showing through extensive experimentation that the FP4 quantization framework performs closely to higher precision baselines in training LLMs. The proposed differentiable gradient estimator (DGE) and outlier clamping and compensation (OCC) methods are convincingly validated by clear ablation studies (e.g., Figures 3, 4, and Table 1). Methods And Evaluation Criteria: The proposed evaluation criteria, such as training loss and zero-shot task performance, are appropriate for quantization research. The authors selected established benchmarks like PiQA, HellaSwag, ObQA, and Lambada, which are standard for assessing language models. Theoretical Claims: There are no theoretical claims/proofs. Experimental Designs Or Analyses: Experimental designs appear sound, but FP4 computation is emulated rather than run natively due to hardware constraints. This limitation slightly impacts the empirical assessment of true hardware-level gains but cannot be attributed to the authors. Supplementary Material: NA Relation To Broader Scientific Literature: The problem is very relevant, especially with the recent interest in training LLMs in a resource-efficient manner.
Essential References Not Discussed: NA Other Strengths And Weaknesses: **Strengths** - The proposed Differentiable Gradient Estimator (DGE) method is particularly innovative, providing a differentiable approximation that improves gradient accuracy significantly over the Straight-Through Estimator (STE). - The Outlier Clamping and Compensation strategy is well-motivated and effective, especially given the difficulty of handling outliers in FP4 quantization. - Clear presentation with illustrative figures and well-structured experimental analyses. The authors have also addressed other limitations of their work in the paper, such as using FP8 tensor cores to emulate FP4 computations due to hardware limitations. Other Comments Or Suggestions: NA Questions For Authors: 1. Could you elaborate on the computational overhead introduced by the OCC method during actual training scenarios? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful evaluation of our work and for recognizing the significance of our contributions. We appreciate your positive feedback on the effectiveness of DGE and OCC, as well as the thoroughness of our experiments. Additionally, we are grateful for your understanding of the hardware constraints that necessitated our simulation-based approach, which, while unavoidable, slightly limits empirical hardware-level assessments.

**Q1: OCC’s Computational Overhead During Actual Training Scenarios**

A1: The computational overhead of OCC primarily arises from additional sparse matrix multiplications. Specifically, the input activation tensor $Y$ is decomposed as $Y=Y_c+\Delta Y$, where $\Delta Y$ contains outlier values processed in higher-precision FP8 GeMM. Since $Y_c=\text{clamp}(Y,\ \max=q_{\alpha},\ \min=q_{1-\alpha})$, where $q_{\alpha}$ denotes the $\alpha$-quantile of $Y$ and $\alpha$ is very close to 1, the $\Delta Y$ matrix remains highly sparse, with a sparsity ratio of $2*(1-\alpha)$. A detailed theoretical analysis of the computational cost of the $\Delta Y$ GeMM is provided in our response to reviewer iCFi (A2). We kindly refer you to those details if you're interested. The results show that under the training configuration of **hidden_size=4096, sequence length=2048, and $\alpha=0.99$** (as used in our 7B model training), the theoretical speedup of FP4 training decreases slightly from 3.12 to 2.95 due to OCC. Given OCC’s significant accuracy benefits, we consider this tradeoff acceptable. That said, careful tuning of $\alpha$ is crucial to maintain high sparsity, as current hardware struggles with efficient sparse matrix multiplications at lower sparsity levels.

However, the situation in real training scenarios is a little different. In our current training setup, FP8 GeMM is used to simulate FP4 behavior, and $\Delta Y$ is naturally handled in FP8 format.
Consequently, both $Y_c$ and $\Delta Y$ are computed within FP8 GeMM, allowing their results to be directly summed without requiring an additional GeMM operation, thereby eliminating OCC-related overhead in this simulation setup. At present, the primary computational overhead stems from: 1. DGE computations (negligible). 2. Precision casting between BF16, FP4, and FP8 (significant). However, if native FP4 hardware becomes available and is used for $Y_c$ computations, then OCC’s computational cost must be carefully reassessed. Once again, we appreciate your constructive feedback and your recognition of this work’s broader relevance in advancing resource-efficient LLM training.
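The clamp-and-compensate decomposition discussed in this thread can be sketched in a few lines (a minimal plain-Python sketch; the function name is ours, and we read the clamp thresholds as the $\alpha$- and $(1-\alpha)$-quantiles of the tensor):

```python
def occ_decompose(y, alpha=0.99):
    # Clamp y to its [1 - alpha, alpha] quantile range; the residual delta_y
    # is nonzero only at the outliers (expected density ~ 2 * (1 - alpha))
    # and would be handled by a sparse, higher-precision GEMM.
    s = sorted(y)
    lo = s[int((1 - alpha) * (len(s) - 1))]
    hi = s[int(alpha * (len(s) - 1))]
    y_c = [min(max(v, lo), hi) for v in y]
    delta_y = [v - c for v, c in zip(y, y_c)]
    return y_c, delta_y
```

Since $Y = Y_c + \Delta Y$ holds exactly, accuracy is preserved while the dense low-precision GEMM only ever sees the clamped, easier-to-quantize tensor.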
Summary: The paper introduces a framework for training LLMs using 4-bit floating-point (FP4) quantization to address the computational burdens of LLM training. It introduces a Differentiable Gradient Estimator (DGE) for weight updates and an Outlier Clamping and Compensation (OCC) strategy to manage activation outliers, mitigating quantization errors. The framework integrates mixed-precision training and vector-wise quantization. Experiments conducted on LLaMA2 models (up to 13B parameters, 100B tokens) using FP8 tensor cores to emulate FP4 suggest performance comparable to BF16 and FP8 baselines in training loss and zero-shot tasks. Claims And Evidence: The core claim—that FP4 can train LLMs with minimal accuracy loss compared to BF16—is supported by training loss curves (e.g., 1.97 vs. 1.88 for 13B models) and zero-shot task accuracies (e.g., 54.95% vs. 54.44% for 13B). DGE and OCC are presented as solutions to FP4’s quantization challenges, with ablation studies showing their necessity (e.g., direct FP4 casting diverges). However, the evidence is undermined by the absence of native FP4 hardware testing, limiting efficiency claims to speculation. Methods And Evaluation Criteria: The methodology employs mixed-precision training, quantizing GeMM operations to FP4 while using higher precision elsewhere. DGE approximates quantization differentiably, and OCC clamps outliers with sparse compensation. Evaluation includes training loss, quantization fidelity metrics (cosine similarity, MSE, SNR), and common zero-shot accuracy. In addition, the authors should report PPL metrics. Theoretical Claims: This paper argues that FP4’s 16-value representation suffices for LLM training via DGE and managing outliers with OCC. However, the introduction of Equation 7 appears somewhat abrupt, and its specific definition, as well as the rationale for its particular form, can be quite confusing. These claims lack comparisons to other low-bit formats (e.g., INT4).
The theoretical novelty is modest, leaning heavily on prior quantization concepts. Experimental Designs Or Analyses: Experiments test LLaMA 2 models (1.3B, 7B, 13B) on 100B tokens from the DCLM dataset, with ablation studies isolating the effects of DGE and OCC. While the design is systematic, its scale is insufficient—13B parameters and 100B tokens pale against modern LLMs (e.g., Llama 3’s 15T tokens). Supplementary Material: Appendices A (FP4 implementation), B (DGE proof), and C (tensor distributions) provide useful details. The CUDA kernel and quantization table (Appendix A) clarify the implementation, the DGE proof (Appendix B) supports the derivation, and the tensor visualizations (Appendix C) highlight outlier issues. Relation To Broader Scientific Literature: This work builds on FP8-LM, aiming to extend quantization to FP4 for training. This paper insufficiently engages with related FP4 works, like LLM-FP4. Essential References Not Discussed: The paper discusses a relatively comprehensive review of related work. Other Strengths And Weaknesses: Strengths: 1. Interesting topic: This paper explored FP4 for LLM training, aligning with the push toward future hardware trends (e.g., Blackwell GPUs). 2. Technical Effort: DGE is a thoughtful attempt to address FP4’s quantization issues, supported by derivations (e.g., Equations 6-8) and ablations (Figure 6). 3. The comparison is extensive, including FP8-LM and TE, demonstrating the effectiveness of this paper. Weaknesses: 1. Hardware Dependency: The lack of native FP4 hardware testing renders efficiency claims speculative, a drawback for a paper emphasizing training cost reduction. 2. Insufficient Scale: Experiments at 13B parameters and 100B tokens seem outdated compared to current LLM scales (e.g., GPT-4’s 1T parameters and more tokens). 3. Confusing DGE: The direct presentation of Equation 7 is overly abrupt and somewhat confusing, lacking appropriate reasoning and motivation. 4.
I understand that current hardware does not yet support FP4 tensor cores; however, the authors should at least present the memory usage or training acceleration of existing methods. The absence of such data makes it difficult to accept their motivation for reducing training costs. 5. The OCC method is straightforward and seems to lack novelty. Besides, the authors should present the computational burden of $\Delta Y$. Other Comments Or Suggestions: See above strengths and weaknesses. Questions For Authors: 1. Does increasing the number of training tokens further still result in consistent convergence? 2. The implementation details will be useful. Could the authors consider open-sourcing the code for testing? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive critique. We appreciate your recognition of the technical effort and alignment with hardware trends, as well as your insightful suggestions. Below, we address your concerns and questions: **W1: Hardware Dependency** A1: We acknowledge the lack of specific hardware as a limitation, as noted in Sec. 6. We provide a theoretical analysis of FP4 acceleration and demonstrate a **theoretical 3× speedup** (depending on model parameters). The detailed analysis is included in our response to reviewer gQsM (A1) due to space constraints. Additional analysis on DGE and OCC overhead, addressed in reviewer iCFi's reply (A2), indicates the overhead remains acceptable while preserving accuracy. That said, we do not view this as merely speculative: as reviewer 1Hhi noted, while the absence of hardware testing affects empirical validation, it does not detract from our goal of exploring FP4 quantization for large-scale training in alignment with future hardware trends. **W2: Insufficient Scale (13B parameters, 100B tokens)** A2: We agree that scaling to trillion-parameter models is crucial. However, within academic compute constraints, our focus was to establish FP4's feasibility. A **13B model trained on 100B tokens represents a reasonable scale in research**, even if it is smaller than industrial models (e.g., GPT-4, LLaMA 3). Scaling stability is essential but is best explored in future industry-driven work. **W3: Clarification on DGE** A3: We apologize for not including the full mathematical derivation of Eq. 7 due to paper space constraints. We choose the power function $f(x)=x^a$ ($a<1$) as the base differentiable estimation function because it saturates toward 1 as $x$ tends to 1 and quickly drops to 0 as $x$ tends to 0, closely matching the right half of the quantization function.
To fully simulate the quantization function, we take the absolute value of $x$ and apply the sign function for central symmetry: $f(x)=sign(x)\cdot|x|^a$. We then translate the function along the x-axis and the y-axis to move it to the quantization range of $[0, \delta]$: $f(x)=\delta(1+sign(x-\frac{\delta}{2})\cdot|x-\frac{\delta}{2}|^a)$. Finally, we adjust the value of the power exponent to control the steepness and write $a$ ($a<1$) as $1/k$ ($k>1$). We will clarify this in the final version to avoid possible confusion. **W4: Memory Usage and Training Speed** A4: We thank the reviewer for the suggestion. Below, we report real-time training memory usage and throughput:

|Model Size|Precision|Memory Usage|Tokens per second|
|-|-|-|-|
|1.3B|BF16|51.03GB|470.3k (1.0x)|
|1.3B|FP4|46.52GB **(-9%)**|395.1k (0.84x)|
|7B|BF16|72.04GB|255.7k (1.0x)|
|7B|FP4|52.75GB **(-27%)**|276.1k (1.08x)|
|13B|BF16|70.28GB|118.4k (1.0x)|
|13B|FP4|53.80GB **(-24%)**|126.9k (1.07x)|

Note that our method is based on the FP8-LM framework, whose speedups for the three models are 1.03x, 1.24x, and 1.27x (paper reported). In our implementation, **we need to do extra precision casting** from BF16 to FP4 (for simulation) and from FP4 to FP8 (for FP8 GEMM computation), **leading to large overhead since native FP4 hardware is inaccessible**. In other words, evaluating speed performance during a real training process without dedicated hardware support can only serve as a point of reference. **W5: OCC's Novelty and Computational Burden** A5: We thank the reviewer for pointing out that we did not state the OCC methodology very well in the paper. OCC's novelty lies in its effective clamping strategies for characterizing numerical ranges and its sparse compensation, which avoids dense high-precision residuals. This method is suitable for online use and better matches the pre-training task. For the computational burden of $\Delta Y$, please refer to reviewer iCFi's reply (A2) due to the character limit.
We'll refine these statements in the final version. **Q6: Perplexity (PPL) Metrics** A6: We acknowledge PPL's importance as an evaluation metric. Due to the character limitation, we kindly refer to reviewer gQsM's reply (A2) for detailed PPL results. **Q7: Convergence with Increasing Tokens** A7: Our preliminary analyses of existing scaling trends suggest that extended token training would likely achieve stable convergence. While scaling analysis is crucial for understanding model convergence, resource limitations currently prevent comprehensive token scalability studies. We highlight this as a critical research direction and commit to open-sourcing our framework to support collaborative exploration of token scaling dynamics. **Q8: Open-Sourcing the Code** A8: We sincerely appreciate the reviewer's interest in implementation details. We are fully committed to open-sourcing the code and will release it once it has been thoroughly organized to ensure clarity and usability. Thank you again for your rigorous feedback. We hope these clarifications address your concerns. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. Despite the hardware limitations and token scale, which prevented the authors from conducting actual tests on FP4, from my view, the topic and experimental results of this paper still make sense as an academic study. In particular, the authors should carefully revise the derivation process of DGE in the final version to make it clearer. Ultimately, I sincerely suggest that the authors open-source the code to enhance the reproducibility of this work. Overall, despite its shortcomings, the authors' response has addressed my concerns. I have increased my score accordingly. Good Luck! --- Reply to Comment 1.1.1: Comment: Thank you for your constructive feedback and for recognizing the academic value of our work despite the hardware limitations.
We sincerely appreciate your thoughtful suggestions, which will undoubtedly strengthen the final version of this paper. We will rigorously revise the derivation of DGE in the final manuscript. We also fully agree on the importance of reproducibility, and we will definitely release the implementation code publicly with detailed documentation. Thank you again for your valuable insights in refining this work, and also for your encouragement and support.
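As a concrete illustration of the DGE construction walked through in A3, here is a minimal numerical sketch of the smoothed quantization surrogate. We rescale the argument of the power term so that the surrogate maps $[0, \delta]$ onto itself; this normalization, and all names, are ours and may differ from the paper's exact constants.

```python
import math

def dge(x, delta, k):
    # Smooth surrogate for the hard quantization step on [0, delta]:
    # f(x) = (delta/2) * (1 + sign(x - delta/2) * |2(x - delta/2)/delta|**(1/k)).
    # As k grows, this approaches the hard step that jumps from 0 to delta
    # at x = delta/2, while keeping a nonzero gradient almost everywhere.
    t = x - delta / 2
    s = math.copysign(1.0, t) if t != 0 else 0.0
    return (delta / 2) * (1 + s * abs(2 * t / delta) ** (1.0 / k))

def dge_grad(x, delta, k):
    # Derivative of the surrogate, usable in the backward pass in place of
    # the almost-everywhere-zero derivative of hard quantization.
    u = abs(2 * (x - delta / 2) / delta)
    return (1.0 / k) * u ** (1.0 / k - 1) if u > 0 else float("inf")
```

The surrogate fixes the endpoints ($f(0)=0$, $f(\delta)=\delta$) and steepens around $\delta/2$ as $k$ increases, matching the qualitative behavior described in A3.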
Summary: The paper presents an innovative framework for training large language models (LLMs) using FP4 quantization. The key contributions include:
1. Differentiable Quantization Estimator: This method improves gradient updates during FP4 computations by analyzing the impact of quantization on both forward and backward passes of neural networks, deriving a function with correction terms for accurate gradient estimation.
2. Outlier Clamping and Compensation Strategy: This addresses the issue of outlier values in activation tensors during LLM training, which can cause significant quantization errors. The strategy involves clamping these outliers to a predefined threshold and compensating for the introduced error using a sparse outlier matrix.
Detailed experiments are presented in the paper to validate the effectiveness of the framework:
1. Focused on 4-bit quantization for GeMM operations.
2. Quantization is applied along distinct dimensions for activation and weight tensors, aligning with matrix multiplication logic.
3. The framework is tested on LLMs with up to 13B parameters trained on 100B tokens.
4. Results show that the FP4 framework achieves accuracy comparable to BF16 and FP8 with minimal degradation.
## update after rebuttal
Low-precision training has emerged as a clear trend in reducing computational costs for machine learning. After carefully considering the rebuttals from the authors and the feedback from other reviewers, I believe this work makes a significant contribution to both the academic community and the industry. Its technical insights and practical implications are particularly valuable in advancing the field of efficient training methodologies. Therefore, I will maintain my original rating of accept. Claims And Evidence: The claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem of optimizing LLM training using FP4 quantization.
They address the key challenges and provide a comprehensive assessment of the framework's effectiveness. Theoretical Claims: I checked the derivations of DGE and OCC. Both of them are theoretically correct. However, there is a small issue with OCC. In Sec. 3.2, the authors provide a study of the influence of CLAMP/COMP/QUANTILE (Table 1). It can be seen that a lower $\alpha$ value yields less loss, because more values are moved to high-precision residuals. However, in Sec. 4.3 (Activation part), they claim a higher $\alpha$ leads to better model accuracy. This is inconsistent with Sec. 3.2 and should be clarified. Experimental Designs Or Analyses: The experimental designs and analyses in the paper appear sound and well-considered, including Comparison to Baselines / Model Sizes and Training Data / Training Loss Curves / Downstream Task Evaluation / Ablation Studies / Quantization Granularity Analysis. However, the authors do not give more details on speed evaluation. Though no hardware supports FP4 natively, a theoretical analysis of the computation overhead or other costs should be given. Another issue is the spikes in the loss curve (Fig. 5). More and larger spikes seem to be observed for larger models. It would be better to give the reasons, as this may indicate potential risks when scaling to larger models. Supplementary Material: I did review the supplementary material, including Appendix A: Implementation of FP4 Quantization, Appendix B: Supplementary Proof for Differentiable Quantization, and Appendix C: Analyzing Quantization Difficulty Through Tensor Distribution. These supplementary materials effectively support the main claims and methods presented in the paper. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the broader scientific literature in several specific ways: 1. Advancement of Quantized Training Methods 2. Differentiable Quantization Estimator 3. Outlier Clamping and Compensation Strategy 4.
Mixed-Precision Training and Fine-Grained Quantization 5. Hardware Considerations Essential References Not Discussed: All important literature is cited in this paper. Other Strengths And Weaknesses: See above Other Comments Or Suggestions: See above Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
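To make the clamp-and-compensate idea discussed in this review concrete, here is a minimal sketch of outlier clamping with sparse compensation (our illustrative version; the paper derives the threshold from the $\alpha$-quantile and applies it per tensor):

```python
def clamp_and_compensate(x, threshold):
    # Clamp entries of x into [-threshold, threshold] before low-precision
    # quantization; keep the clamped-off residual as sparse (index, value)
    # pairs so it can be re-added in higher precision after the GEMM.
    clamped, residual = [], []
    for i, v in enumerate(x):
        c = max(-threshold, min(threshold, v))
        clamped.append(c)
        if v != c:  # sparse entries exist only for outliers
            residual.append((i, v - c))
    return clamped, residual
```

Summing the clamped tensor and the sparse residual reconstructs the original values exactly, which is why the compensation term only needs a highly sparse high-precision GEMM.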
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and insightful evaluation of our work, as well as for recognizing the theoretical and practical contributions of our FP4 training framework. Below, we address the questions and suggestions: **Q1 (Theoretical Claims): Clarification on OCC's $\alpha$ value inconsistency (Sec. 3.2 vs. Sec. 4.3)** A1: Thank you very much for pointing out this issue. This is indeed a typo in Section 4.3: the correct statement should be that **a lower $\alpha$ leads to better model accuracy**. Theoretically, a lower $\alpha$ results in reduced quantization loss, as stated in Section 3.2 and shown in Table 1. Experimentally, Figure 6(c) in Section 4.3 confirms that lower $\alpha$ corresponds to a lower training loss curve. We will correct this in the final version. Again, we appreciate your keen observation! **Q2 (Experimental Designs Or Analyses): Details on speed evaluation / Theoretical analysis of the computation overhead and other costs** A2: Thank you for the suggestion. We recognize the importance of analyzing FP4's theoretical speedup and computational overhead. The theoretical speedup of FP4 **(excluding DGE and OCC overhead)** is: $$\frac{3*(24bsh^2+5bs^2h+36bsh)}{3*(6bsh^2+5bs^2h+36bsh)}=\frac{24h+5s+36}{6h+5s+36}=3.12 \quad (h=4096, s=2048)$$ For a detailed analysis of the decomposition of computational components, we kindly refer the reviewer to the response to reviewer gQsM (A1) due to space constraints. **Regarding computational overhead:** - DGE overhead: DGE introduces an additional nonlinear function during the GEMM backward pass for weight updates, adding ~8 FLOPs per input element (Eq. 8). Accumulated over all GEMM operations, this causes a total overhead of $8*(3bsh+bsh+4bsh+4bsh)=96bsh$ (the four terms in brackets are the weight shapes of the attention QKV projection, attention output projection, MLP up projection, and MLP down projection). Note that this overhead occurs only once per forward-backward iteration.
- OCC overhead: OCC incurs extra sparse matrix multiplication. FP8 sparse GEMM is used with an activation sparsity of $2(1-\alpha)$. These FP8 sparse GEMMs are added to every GEMM computation, adding an extra $2*(1-\alpha)(12bsh^2)$ FLOPs, where $12bsh^2$ comes from the cumulative sum of all GEMM calculations in the decomposition table that can be accelerated by FP4 (attn qkv proj, attn out proj, MLP up proj and MLP down proj), but in FP8 format with twice as many FLOPs as in FP4 $\bigg(2*(1.5bsh^2+0.5bsh^2+2bsh^2+2bsh^2)\bigg)$. Since we set $\alpha=0.99$, meaning these GEMMs are highly sparse, the overhead remains small, though hardware inefficiencies in sparse matrix multiplication necessitate choosing a larger $\alpha$ to ensure the high sparsity of the $\Delta Y$ matrix. Accounting for these overheads, the adjusted theoretical speedup is: $$\frac{3*(24bsh^2+5bs^2h+36bsh)}{3*(6bsh^2+5bs^2h+36bsh+2*(1-\alpha)(12bsh^2))+96bsh}=\frac{24h+5s+36}{6h+24*(1-\alpha)h+5s+68}=2.95 \quad (h=4096, s=2048, \alpha=0.99)$$ The overhead from DGE and OCC accounts for $32/(6h+5s+36)=0.1\\%$ and $24(1-\alpha)h/(6h+5s+36)=5.6\\%$ of the total computation, respectively, reducing the theoretical speedup ratio of FP4 over BF16 from 3.12 to 2.95. We believe this is an acceptable trade-off between accuracy and efficiency, since DGE and OCC largely reduce quantization errors during training. **Q3 (Experimental Designs Or Analyses): Loss curve spikes in Fig. 5 (larger models)** A3: Thank you for the insightful observation. Larger models (e.g., 13B) exhibit more pronounced loss spikes compared to smaller models (e.g., 7B, 1.3B). While similar spikes occur in BF16, they are more frequent and severe under FP4 due to its limited representation range and the larger number of elements per FP4 quantization vector, increasing quantization error. Addressing this may require more aggressive accuracy compensation strategies, such as decreasing the parameter $\alpha$.
Additionally, the issue may stem from the optimizer, as we currently use an FP8-based mixed-precision optimizer. Larger models may demand higher precision for the optimizer. Thank you again for your valuable comments. We appreciate your constructive feedback, which has helped us clarify key aspects of our methodology and presentation.
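As a sanity check on the headline number in A2, the stated ratio can be evaluated directly (a sketch with our own variable names; `h` is the hidden size and `s` the sequence length):

```python
def fp4_speedup(h, s):
    # Theoretical FP4-vs-BF16 speedup from A2, excluding DGE/OCC overhead:
    # (24h + 5s + 36) / (6h + 5s + 36).
    return (24 * h + 5 * s + 36) / (6 * h + 5 * s + 36)

print(fp4_speedup(4096, 2048))  # ~3.12 for the setting quoted in A2
```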
Navigating Conflicting Views: Harnessing Trust for Learning
Accept (poster)
Summary: This paper introduces a novel approach to resolving conflicting predictions in multi-view classification by integrating an instance-wise, probability-sensitive trust discounting mechanism within an evidential framework. The method computes a degree of trust for each view through a referral network, which is then used to discount the functional predictions on a per-instance basis before fusing them using BCF. A stage-wise training strategy is also proposed to alternately optimize the functional and referral networks, leading to improved prediction accuracy and consistency across various datasets, particularly in scenarios where different views provide contradictory information. ## update after rebuttal The authors have addressed several of my concerns, and I appreciate the detailed discussion and experimental analyses provided in their rebuttal. I decided to maintain my rating. Claims And Evidence: Yes. I think the claims are supported. Methods And Evaluation Criteria: The proposed methods are well-motivated and generally appropriate for addressing conflicts in multi-view classification. The trust discounting mechanism, combined with the stage-wise training strategy, effectively integrates into the evidential framework, enhancing both prediction accuracy and consistency. However, the evaluation primarily relies on standard classification benchmarks (e.g., Handwritten, Caltech101, PIE, etc.) and metrics like Top-1 Accuracy and Fleiss’ Kappa. While these are valid, they might not fully capture the nuanced challenge of handling view conflicts. Additional evaluation criteria—such as direct conflict quantification metrics or experiments on synthetic datasets with induced conflicts—could provide a more comprehensive validation of the method for the intended application. Theoretical Claims: I examined the proofs provided for several theoretical claims, particularly focusing on Proposition 3.5 and Proposition 3.6 (detailed in Appendix B.4). 
The derivations are consistent with the framework of subjective logic and evidential deep learning, and I did not identify any glaring mathematical errors. Experimental Designs Or Analyses: Q1. The benchmarks used are standard classification datasets that may not naturally exhibit strong conflicting views. Additional experiments on synthetic or real-world datasets specifically engineered to induce view conflicts could provide more direct validation of the method's core motivation. Q2. The analysis primarily emphasizes classification accuracy and consistency, but further evaluation, such as direct measures of conflict (e.g., disagreement ratios between views) or error analysis on instances with high inter-view disagreement, would strengthen the conclusions regarding conflict resolution. Supplementary Material: Yes, the supplementary material complements the main paper by offering additional clarity on both the theoretical and practical aspects of the proposed approach. Relation To Broader Scientific Literature: The method extends evidential deep learning techniques by incorporating trust discounting to adjust the influence of unreliable views. It also advances previous multi-view classification frameworks such as Trusted Multi-View Classification (TMC) (Han et al., 2021) and its variants (e.g., ETMC, ECML, TMNR, CCML) by explicitly modeling and mitigating the impact of conflicts among views. Essential References Not Discussed: No additional essential references appear to be missing. However, it is recommended that the authors discuss two newly accepted ICLR'25 papers related to Trusted Multi-View Classification to further contextualize and strengthen the contributions of this work. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: - Line 355, the right column, 'our methods' - Line 318, the left column, 'since the' Questions For Authors: Please see the above questions.
If the authors can adequately address these concerns, I would be happy to reconsider and improve my evaluation of the paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank you for your valuable feedback, and we appreciate your recognition of our novel idea, the effectiveness of the proposed trust discounting method and stage-wise training algorithm, and its adaptivity to the existing evidential framework. We have carefully considered each of the concerns raised and have provided detailed responses below. **Experimental Designs Or Analyses:** (1) Our work primarily focuses on prediction conflicts, for example, the case shown in Fig. 1, where different views' predictions differ from one another. We agree with you that explicit evaluation on datasets engineered to induce view conflicts is critical. However, the existing literature lacks an ideal method for conflict simulation. For example, the only existing work that attempted to address the conflict issue, ECML, randomly replaces original features with instances from other classes (e.g., substituting a "cat" with a "dog") to simulate conflicts. However, this approach has limitations: i) unrealistic conflicts: random replacements (e.g., swapping "cat" with "airplane") may create semantically implausible disagreements, unlike the naturally occurring multi-view conflicts we show in Fig. 1; ii) distribution mismatch: artificially generated conflicts ignore view-specific feature distributions, potentially biasing evaluation results. Instead, we propose an alternative conflict simulation method (acknowledging its imperfections; we move detailed results to Appendix D.7, Fig. 6–7 due to space limitations). Our approach injects Gaussian noise into a randomly selected half of the views. The intuition is that as noise increases, corrupted views become less informative, approaching random guesses, while uncorrupted views retain (ideally) correct predictions, creating a controlled conflict scenario. We train models with noise-injected data, ensuring the conflicts arise during the learning phase. As shown in Fig.
6–7, our methods (ETF/TF) outperform baselines in handling such conflicts. (2) We agree that direct conflict measurement is important, and we regret that space limitations prevent us from including these results in the main text. In Appendix D.5 and Fig. 4, we measure pairwise (inter-view) prediction conflicts and also provide a concrete instance-level example of such conflicts in Fig. 5. Our methods TF and ETF build upon TMC and ETMC, and as shown in Fig. 4, both of our methods consistently and significantly reduce conflict ratios across all view pairs. For example, the prediction conflict ratio between the GIST view and the pseudo view of ETMC is 0.6, and our method ETF reduces it to 0.45, a 15\% absolute improvement. For the instance-level case illustrated in Fig. 5, the original ETMC prediction was Label 4 (while the ground truth was Label 3). After applying ETF, the model correctly shifts higher belief mass to the true label (Label 3). Notably, this improvement is not limited to the combined view; it also enhances consistency in view-specific predictions (e.g., pseudo-view and View 2), demonstrating stronger agreement among views. **Essential References Not Discussed:** We appreciate the reviewer for highlighting the newly accepted ICLR 2025 work, which we recognize as concurrent research to ours and worthy of discussion in our submission. However, after thorough investigation, we were only able to identify one relevant paper, titled "Trusted Multi-View Classification via Evolutionary Multi-View Fusion". While the authors provide a GitHub link, we note the implementation appears unavailable at this time. Upon careful analysis, we find their work orthogonal to ours: their primary contribution focuses on enhancing pseudo-view quality through evolutionary fusion, whereas we propose a fundamentally different approach for handling prediction conflict via Trust Discounting.
That said, we acknowledge potential synergies - our ETF method could potentially incorporate their evolutionary architecture by: i) integrating our referral ENN to further refine the pseudo-view generated by their evolutionary fusion, and ii) maintaining our core innovation in conflict resolution while benefiting from their improved pseudo-view feature. This complementary combination might yield additional improvements, though we leave such exploration for future work given the current unavailability of their implementation. In the revised manuscript, we will explicitly cite and discuss the recommended paper, highlighting its relevance and clarifying how our work differs from or builds upon it. We hope our responses effectively address your concerns and appreciate if you can reconsider and improve the evaluation.
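A minimal sketch of the conflict-simulation protocol described in point (1) of this rebuttal; this is our own illustrative implementation (names hypothetical), and the paper's exact noise schedule and view selection may differ:

```python
import random

def inject_view_conflicts(views, sigma, seed=0):
    # views: dict mapping view name -> feature vector (list of floats) for
    # one instance. A randomly chosen half of the views is corrupted with
    # additive Gaussian noise of standard deviation sigma; the remaining
    # views stay clean, so inter-view disagreement grows with sigma.
    rng = random.Random(seed)
    corrupted = set(rng.sample(list(views), len(views) // 2))
    return {
        name: [x + rng.gauss(0.0, sigma) for x in feats] if name in corrupted else list(feats)
        for name, feats in views.items()
    }
```

Training on such data pushes corrupted views toward uninformative predictions while clean views stay (ideally) correct, producing the controlled conflicts evaluated in Appendix D.7.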
Summary: The paper addresses the issue of conflicting predictions in multi-view classification tasks, where traditional methods often assume views are equally reliable and aligned. It proposes a computational trust-based discounting mechanism within the Evidential Multi-view Classification (MVC) framework. This mechanism employs instance-wise probability-sensitive trust evaluation based on subjective logic, discounting less reliable view predictions before fusion. The method includes a stage-wise training algorithm and demonstrates improved accuracy and consistency across multiple real-world datasets, outperforming existing MVC approaches. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Empirical experiments conducted on six benchmark datasets convincingly demonstrate that the proposed approach outperforms established baselines in terms of accuracy and consistency metrics, underscoring its practical effectiveness and broad applicability. Supplementary Material: Yes, I have reviewed the source code in the supplementary material. It is well-organized and includes clear running instructions. Relation To Broader Scientific Literature: This paper introduces a novel trust-based mechanism for resolving conflicting information across multiple views in classification tasks. The proposed framework aligns closely with broader efforts in uncertainty estimation and trustworthy multi-view learning, potentially influencing further studies in multi-modal fusion, robustness in decision-making models, and uncertainty-aware ML systems. Essential References Not Discussed: N/A Other Strengths And Weaknesses: # Strengths * The paper effectively addresses the practical and often overlooked issue of conflicting predictions in multi-view classification, highlighting scenarios where the standard assumption of equally reliable views does not hold, such as autonomous driving and medical diagnostics. 
* A notable strength is the introduction of an innovative trust-based discounting mechanism grounded in subjective logic, which quantifies uncertainty and reliability on an instance-wise basis. This approach uniquely adjusts the fusion process by discounting less reliable view predictions, thus enhancing robustness. * The authors make a methodological advancement by developing a structured stage-wise training strategy, effectively integrating both functional predictions and referral opinions. This training scheme contributes to improved model robustness, particularly in handling conflicts among views. # Weaknesses * While the paper introduces an incremental innovation through a trust-based discounting mechanism, it lacks a clear articulation of how this method differs from and outperforms closely related approaches, such as ECML. Clarifying these distinctions would strengthen the paper's theoretical contribution and impact. * The paper does not clearly explain how the conflict data is constructed for experiments. The datasets used are all normal multi-view data, which seems contradictory to the conflict scenario the paper aims to address. * The paper could benefit from a more detailed discussion of scenarios where the proposed method may not perform as expected. Understanding the limitations and failure cases can provide more nuanced insights and guide practical implementations. * The improvement of the proposed method in some experimental results is not obvious compared with other state-of-the-art MVC baselines. Other Comments Or Suggestions: N/A Questions For Authors: * What are the definitions of "referral opinion" and "functional opinion"? Is there any specific difference between their meanings? According to my understanding, they are generated by two separate sets of evidence network parameters. * Are there known limitations or potential failure cases of the framework that were not discussed in the paper?
Understanding where the model might not perform as expected could be crucial for practical implementations. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's valuable feedback and their recognition of our research motivation, the novelty of the proposed trust-based discounting mechanism, and the effectiveness of the training algorithm. We have carefully addressed each of the concerns raised with detailed responses below, and we hope these clarifications satisfactorily answer all questions. **Questions For Authors:** (1) We adopt the terminology from the Subjective Logic book, distinguishing between referral opinions and functional opinions generated by different types of Evidential Neural Networks (ENNs). Specifically, i) referral opinions serve as reliability assessments, indicating whether a prediction should be trusted; ii) functional opinions directly support decision-making based on their own evidence. We confirm your observation that these are generated by two distinct sets of evidence network parameters, and there are further key differences worth highlighting. - As noted in line 266 and line 270, the referral ENN and functional ENN process different input features; the referral ENN takes one additional input, namely the belief masses of the functional opinion; - Their roles are also different, as in Eq. 5 (Trust Discounting Mechanism): referral opinions modulate trust, while functional opinions provide evidence for decision making; - Referral opinions are always derived from beta distributions (Fig. 2), quantifying the reliability of their associated functional opinions. However, functional opinions may follow either: 1) Dirichlet distributions for multiclass classification or 2) beta distributions for binary classification. (2) We appreciate this important question. Let us analyze potential limitations from both global and local perspectives. - From a global perspective, the evidential multi-view classification framework relies on late fusion. This means that there is no early interaction between views during feature extraction.
Although effective in many cases, this design might limit knowledge integration when different views provide complementary but partial information about the complete pattern. - From a local perspective, our Trust Discounting (TD) module has a potential limitation: it may still use referral opinions with high uncertainty. Since high uncertainty suggests lower reliability, this indicates that our current trust adjustment mechanism could benefit from more fine-grained handling of such cases. We acknowledge this as an area for future improvement. **Weaknesses:** (1) The differences between ECML and our work can be summarized as follows: - Different conflict-resolving mechanism: our method uses a trust discounting module to modulate the trust placed in the functional opinions, while ECML uses a loss function to harmonize different views' functional opinions (this is already mentioned in the related work section). - Our method is built upon TMC and ETMC, and thus uses the same belief fusion method, namely the Dempster-Shafer rule for combining different views from the opinion perspective. However, ECML uses a different, evidence-averaging-based method to fuse opinions. - To preserve the effectiveness of the introduced TD module, we proposed a stage-wise training algorithm, which also differs from ECML. (2) Thank you for raising this important point. A similar concern regarding Experimental Designs or Analyses has been addressed in detail in our response to Reviewer ngw9 (please refer to our first response statement to Reviewer ngw9). (3) Please refer to our reply to the same question above. (4) Thank you for raising this important point. Our method consistently outperforms existing baselines across all six evaluated datasets.
We acknowledge that in some cases, the improvements might appear marginal, which we attribute primarily to the following reasons: - Limited improvement space: For datasets such as Handwritten, all compared methods already achieve accuracy levels above 99\%, leaving minimal room for further substantial gains. - Intrinsic difficulty of certain datasets: For challenging datasets like Caltech, existing state-of-the-art methods typically achieve around 94\% accuracy. Despite this intrinsic difficulty, our method successfully pushes performance beyond this threshold into the 95\% range, demonstrating effectiveness even under constrained conditions. - Baseline instability and fluctuations: On datasets such as CUB, baseline methods exhibit performance instability, fluctuating notably between 90\% and 93\% accuracy. In contrast, our approach demonstrates clear superiority by consistently achieving around 94\% accuracy, indicating greater robustness and stability. Additionally, it is important to highlight that on datasets like Scene, our method achieves a substantial improvement of roughly 4-5\%, while even in the least improved scenarios (the PIE and HMDB datasets), we still observe a meaningful increase of above 1\%. --- Rebuttal Comment 1.1: Comment: I acknowledge that I have read both the rebuttal and the reviews from the other reviewers. 1. Some definitions, such as "referral opinion" and "functional opinion," need further clarification. I recommend that you provide clearer definitions for these terms in the revised version to help readers better understand your arguments. 2. Additionally, I recommend incorporating a dataset or toy examples that more clearly demonstrate the advantages of your proposed method. Including such a dataset would strengthen the paper by providing more compelling evidence of the method's effectiveness. Overall, the authors have addressed some of my concerns, and I would like to keep my rating.
---

Reply to Comment 1.1.1: Comment: Thank you for your follow-up and constructive feedback.

1. We apologize for unintentionally overlooking your question regarding the clear definitions of functional and referral opinions. We appreciate your observation and will include the following formal definitions in the revised manuscript:

A **Functional Opinion** expresses belief in a model's own ability to perform a certain task, such as a classification task. It reflects direct trust in the model's prediction. Let model $A$ be evaluated for its ability to perform a function $f$ (e.g., classification). Then, a functional opinion is a subjective opinion represented as: $$\acute{\omega} = [\acute{\mathbf{b}}, \acute{u}, \acute{\mathbf{a}}]$$

A **Referral Opinion**, in contrast, expresses belief in a model's ability to provide reliable referrals regarding another model's ability to perform a task. It reflects trust in the model's judgment, not in its own functional capability. Let model $B$ be asked to refer another model $A$ for the function $f$. A referral opinion captures our belief that model $B$ is reliable in making referrals about another's (i.e., model $A$'s) ability to perform $f$, and is denoted as: $$\ddot{\omega} = [\ddot{\mathbf{b}}, \ddot{u}, \ddot{\mathbf{a}}]$$

Regardless of whether the opinion is functional or referral, $\mathbf{b}$ is the belief mass vector and $u$ is the uncertainty score, with $\mathbf{a}$ being the base rate (i.e., a prior probability distribution over classes, generally a discrete uniform distribution), as already defined in Lines 154-159 of the original manuscript.

2. We appreciate your suggestion to include a dataset or toy example that more clearly demonstrates the advantages of our proposed method. In Section 3, we have already included a toy example to illustrate the behavior and motivation of our method in a controlled setting.
Additionally, our method is evaluated on six benchmark datasets, following previous works in Evidential MVC. To further support our claims, we have incorporated a conflict simulation study in the Appendix to analyze the behavior of our method under varying levels of inter-view conflict.

Finally, to validate the scalability and real-world applicability of our approach, we also conduct end-to-end training on the large-scale UPMC-Food101 dataset, which consists of 101 classes.

We believe this combination of a toy example, diverse benchmarks, a controlled conflict simulation, and a large-scale evaluation provides a comprehensive demonstration of our method's effectiveness.
Summary: This paper focuses on conflicting multi-view tasks and identifies that misleading predictions with high confidence (low uncertainty) from specific views are key factors in the errors of conflicting multi-view decision-making. To address this issue, the authors propose a view fusion method based on computational trust. This method draws on the principle of trust discounting from Subjective Logic, assigns a binomial opinion to each view to obtain the trust level for that view, and then uses the trust level to weight and fuse the opinions from all views. Extensive experimental results show that TF achieves promising results.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: This paper does not present a theoretical claim or proof. However, through well-designed experiments, it effectively demonstrates the superior performance of the proposed method in addressing conflicting multi-view issues compared to existing approaches.

Experimental Designs Or Analyses: The experiments used six datasets, but only normal datasets, lacking a comparison with results on datasets with added conflicts.

Supplementary Material: Yes, I have examined the source code in the supplementary materials. The modules are clearly structured and highly consistent with the algorithm flowchart presented in the paper.

Relation To Broader Scientific Literature: This paper introduces a multi-view fusion method based on computational trust. The proposed trust discounting framework is highly consistent with trustworthy multi-view learning and will promote research on multi-view decision-level fusion, reliable uncertainty estimation, and conflicting multi-view tasks.

Essential References Not Discussed: No

Other Strengths And Weaknesses: Paper strengths:
1. The manuscript is well-organized and clearly written, making complex concepts accessible to a broad audience.
The visual representations are well-designed and effectively illustrate the motivation, methodological design, and effectiveness of the proposed method.
2. A deep understanding of TF-enhanced evidential MVC is demonstrated through a comprehensive literature review. The fusion formula of TF (TD + BCF) is derived in detail, which proves that the TF method maximizes the belief quality of the ground-truth label and that TF fusion yields a larger u than the non-discounted fusion.
3. The proposed method is reasonable and innovative. It uses Trust Discounting (TD) from Subjective Logic to assign a binomial opinion to each view to measure the trust level, guides the fusion process to generate reliable fused opinions, and resolves the wrong predictions caused by opinion conflicts between different views.
4. Extensive experiments on multiple benchmark datasets show significant improvements from the proposed TF method. The effectiveness of the TD module is validated through ablation studies, the capability of handling view conflicts is verified by evaluating multiple models with MVAGT, and the CR results demonstrate the TF method's effectiveness in reducing view prediction conflicts.

Paper weaknesses:
1. Some experimental results need to be added. The paper does not seem to compare performance after adding conflicts to the datasets. The authors should compare the performance changes of each method on normal datasets and on datasets with added conflicts.
2. A detailed description of the network structure is lacking, which is important for reproducing and thoroughly understanding the paper.
3. The paper could be strengthened by providing a more detailed analysis of the limitations of the proposed approach. This could be significant for guiding future research and practical applications of the methodology.
Other Comments Or Suggestions: We noticed that your main innovation lies in the trust-discounting fusion from Subjective Logic, which is a reasonable theoretical choice. However, to more clearly demonstrate the unique value of your research, we hope you can further elaborate on the differences between your innovation and the content of the book.

Questions For Authors: This paper introduces the Trust Fusion enhanced evidential MVC method based on trust discounting in Subjective Logic and conducts extensive evaluations. Regarding the implementation of the referral network: in the context of conflicting multi-view learning, "conflict" occurs at a random view of each sample, rather than reflecting an overall conflict trend across the entire dataset (for example, most samples having a high probability of conflict between view v1 and view v2). After the referral network is trained on the training set, when it receives a test sample it assigns a binomial opinion to each view of the test sample, thereby assigning a view weight p. The question is whether this weight reflects the overall conflict trend learned from the training set, rather than directly addressing the random inter-view conflict of the current test sample.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's recognition of our work's strengths: 1) the clear organization and presentation of our manuscript, 2) the comprehensive literature review demonstrating deep domain understanding, 3) the methodological innovation in our trust-based fusion approach, and 4) the extensive experimental validation demonstrating effectiveness. We have carefully addressed each of the concerns raised and provide detailed responses below. We hope these clarifications fully resolve any remaining questions about our work.

**Weaknesses:**

1) Thank you for raising this important point. A similar concern regarding Experimental Designs or Analyses has been addressed in detail in our response to Reviewer ngw9 (please refer to our first response statement to Reviewer ngw9).

2) Regarding network architecture, we maintain consistency with established approaches (TMC, ETMC, ECML, etc.) for fair comparison:
- For the six vector-based datasets, we use identical architectures: a single-layer network with Softplus activation to generate functional opinions.
- For the Food101 dataset, we employ standard pre-trained models (ResNet50 for images, BERT-base-uncased for text) as feature encoders (as noted in Section 4.4), and the referral and functional networks share these encoders on each view. However, each ENN maintains its own output layer (linear + Softplus) to ensure specialized processing while leveraging common feature representations.

3) Please refer to our response to Reviewer icFc; we will include the limitations of our work in the revision.

**Other Comments Or Suggestions:** We thank you for pointing this out. We believe that clarifying our innovation will make it clearer.

1) While the original Subjective Logic framework proposes a global trust-discounting mechanism, we extend it to operate in an instance-wise manner.
This critical advancement enables adaptive handling of varying conflict patterns across different samples, significantly enhancing the framework's flexibility.

2) We introduce a novel stage-wise training approach that is essential for preserving the effectiveness of our Trust Discounting (TD) module. Based on our study, simply incorporating the TD module into existing training frameworks (e.g., ETMC's approach) would actually degrade performance. Our designed training strategy ensures stable optimization and reliable conflict handling.

**Questions:** We acknowledge that the learned weight p reflects the inherent patterns in the training data. Specifically, when certain views exhibit consistent conflicts, the framework will assign them lower weights. However, our design is fundamentally instance-wise, meaning it remains effective even when conflicts lack clear trends across the dataset. For example: 1) in our controlled experiments (Appendix D.7, Fig. 6-7), we artificially introduced random conflicts by corrupting 50\% of views with noise, a scenario without view-specific bias; 2) under these conditions, our method (ETF) still outperformed baselines like ETMC, demonstrating its robustness to sporadic conflicts. This validates that the framework adapts not only to systematic conflicts but also to instance-specific conflicts. We appreciate the opportunity to clarify this point.
Summary: The paper introduces a novel, trust-based discounting method to enhance the existing evidential multi-view framework in the real-world scenario where the various views are not fully aligned on the labels of some examples. The authors leverage a belief-fusion process that considers the reliability of the predictions made by individual views via an instance-wise, probability-sensitive trust discounting mechanism. The paper also introduces Multi-View Agreement with Ground Truth, a novel metric for measuring the reliability of the predictions.

Claims And Evidence: Section 3 of the paper is poorly written and organized, which makes it extremely hard to follow (and thus difficult to evaluate the paper's claims). One way to improve its structure would be to:
- simplify sub-section 3.1 to the bare minimum of terminology and notation
- add to sub-section 3.1 the definitions of referral and functional opinions, which are used in 3.2 (lines 213-214) without being introduced (easy to prove by performing a simple search in the doc)
- turn sub-section 3.2 into an illustrative running example; basically, show the complete flow of computations (i.e., for Tables 1, 2, and 3, make it clear which values are given inputs and which ones are computed; for this second category, show exactly how they are computed) not only for this particular example, but for all four possible cases: a true positive (i.e., the views agree on the label, and the fusion consolidates that decision), a "false positive" (i.e., the current example, for which the naive fusion method decides that it is safe when, in fact, it is not), a false negative (the opposite of the current example in Table 1, where the naive method wrongly predicts "unsafe"), and a true negative. Having all flows and computations for these four scenarios side by side will allow any reader to truly understand your approach.
- simplify sub-section 3.3 to a version of Algorithm 1 that can be used to trace what happens with each of the four illustrative examples in the new sub-section 3.2

Methods And Evaluation Criteria: Please re-organize Section 4 and the appendices in such a way that the new Section 4 matches the very intuitive presentation of Table 1 in [Han et al., 2022]. In the current form of the paper, you split the [Han et al., 2022] table, which helpfully includes both accuracy and AUROC, between Table 4 in Section 4 and Table 13 in the appendix. Very confusingly, your Table 13 in appendix D.2 has significantly lower AUROC values for ETMC. The values in Table 1 of [Han et al., 2022] are much higher (actually quite competitive with yours: on the 6 evaluation domains, the Han paper has AUROCs superior (at times far superior) to TF's and ETF's):
- ETMC: 99.95%, 99.89%, 99.77%, 96.17%, 95.58%, and 99.13%
- TF: 99.32%, 88.99%, 95.90%, 64.56%, 83.59%, and 53.52%
- ETF: 99.90%, 88.70%, 92.47%, 70.44%, 86.23%, and 64.41%

These results should be discussed and fully explained in the main paper.

Theoretical Claims: As all proofs are in the appendices, I did not check them.

Experimental Designs Or Analyses: See comments under "Methods And Evaluation Criteria" above.

Supplementary Material: I did not carefully check the appendices, but rather focused on the discrepancy in the AUROC results between the original [Han et al., 2022] results and those in appendix D.2 (see comments above).

Relation To Broader Scientific Literature: The authors seem to have done adequate coverage of the broader literature. However, the discrepancy between the results reported in [Han et al., 2022] and those in appendix D.2 is a source of concern.
Essential References Not Discussed: N/A

Other Strengths And Weaknesses: Please explain the discrepancy between (1) ETF's clear superiority w.r.t. ETMC in Table 5 (Fleiss' Kappa), and (2) a version of your Table 13 that is aligned/reconciled with the results in Table 1 from [Han et al., 2022]. If ETMC truly outperforms your approaches on AUROC, wouldn't this also raise concerns about the usefulness of Fleiss' Kappa in this context?

Other Comments Or Suggestions:
- lines 60-63, left column: please add a reference for the evidential multi-view framework
- line 57, right column: the "can" in "our method can also enhance ..." makes the statement weak; to strengthen the 3rd claim, you may want to replace it with "does"
- line 190: please avoid unnecessary negatives, such as "is expected to be NOT lower" --> "is expected to be higher"

Questions For Authors: In spite of your claim in line 190 w.r.t. Table 1 (to paraphrase, "the fused opinion is expected to be higher than those of each individual view"), in the final results in Table 3 we still have 0.08 < 0.10 < 0.42 < 0.76. Why is this the case?

Ethical Review Concerns: n/a

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your feedback; below are our responses to your questions.

1) Regarding AUROC usage: We clarify that our use of AUROC differs fundamentally from TMC [Han et al., 2022]. While TMC employed AUROC to measure label prediction accuracy, we follow the established uncertainty evaluation framework of [Filos et al., 2019], where AUROC assesses uncertainty calibration, specifically whether the model's confidence in its predictions correlates with their correctness. In other words, it measures the rate of referring the least confident predictions to human experts. This distinction is important because:
- As noted in TMC's results (Table 1), AUROC for label prediction consistently shows high scores ($>$95\%) even when accuracy is modest (67.74\% on Scene15, 65.26\% on HMDB), suggesting limited discriminative value.
- Subsequent works (ECML, CCML, TMNR) have consequently abandoned this usage.
- Our application evaluates a fundamentally different capability: the model's ability to identify unreliable predictions through uncertainty scoring.

2) Writing logic: We appreciate the suggestions regarding paper organization. While we will carefully consider these recommendations for future work, we believe the current structure most effectively presents our technical contributions and results.

3) Wording improvements: We thank the reviewer for their thoughtful suggestions on phrasing and will incorporate appropriate refinements while maintaining the paper's technical precision.

4) Regarding the question: We appreciate your close examination of our results.
There appears to be a misunderstanding regarding the relationship between Tables 1 and 3, which we would like to clarify:
- Table 1 demonstrates the limitation of common belief fusion methods (like BCF [Han, 2022]) where, as we stated in Line 190, "the fused opinion is expected to be higher than those of each individual view"; this represents the problem scenario we aim to solve.
- Table 3 presents results after applying our Trust Discounting mechanism, which addresses the conflict issue. The values shown (0.08 < 0.10 < 0.42 < 0.76) are: a) expected outcomes of our method, b) a demonstration of proper handling of conflicts, and c) an improvement over Table 1.

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed answers. I have one follow-up question w.r.t. your comment #4 above:
- In the original doc, line 190 says "the uncertainty is expected to be not lower than that of all views to reflect the struggle among different opinions in the presence of conflict."
- At the risk of sounding pedantic, to remove any ambiguity, I would rephrase it as "the FUSED uncertainty is expected to be HIGHER than that of EACH INDIVIDUAL view, to reflect the DISAGREEMENT OF THE CONFLICTING VIEWS." Is this what you meant?
- After applying the ToDs from Table 2 to the beliefs in Table 1, we get the beliefs in Table 3. In spite of the discounting, we still have the same conflicting views, right? I.e., Captain & PolarBear still disagree with the Dolphin (but less so, and, in the fused version, the Dolphin's view now prevails).
- According to line 190, I expected that in Table 3 the fused uncertainty would be higher than those of each individual view, which is not the case (hence my confusion).

Could you please help me understand what I am missing here? Thank you!

---

Reply to Comment 1.1.1: Comment: We thank you for your follow-up and for clarifying the logic behind your question.
To address your question, we would first like to clarify a preliminary point: in ETMC [Han, 2022], it was proven that BCF inherently exhibits the characteristic of **always generating lower uncertainty**. As explained in the Subjective Logic book (section 12.2), this is achieved by **ignoring conflicts** and leaning toward a more certain (i.e., lower-uncertainty) final prediction. Therefore, based on the theoretical proof by Han [2022] and the explanation in the Subjective Logic book, we understand that regardless of the opinions from each view, the fused opinion generated by BCF will always have lower uncertainty than each individual view. In this regard, the results shown in Table 1 and Table 3 are consistent with the expected behavior of BCF.

However, our method is built upon BCF, so it does not break the theoretical property of always producing a fused opinion with lower uncertainty. Nonetheless, even under this constraint, after applying the proposed TD, we observe that the uncertainty of the fused opinion increases compared to before, for example, 0.08 in Table 3 vs. 0.01 in Table 1. This supports our Proposition 3.6:

> "The combined opinion generated by the proposed TF (TD+BCF) for conflicting views will exhibit greater uncertainty than that obtained through fusion with non-discounted functional opinions."

Proofs for this are provided in the Appendix. Additionally, BCF operates in an uncertainty-aware manner, as indicated by Eq. (2) in the original text. With TD, the fused opinion changes to the correct one (with more belief mass on "Unsafe"), and its uncertainty also increases (compared to the original BCF), indicating the presence of conflict. However, the uncertainty remains within a reasonable range (with a magnitude of 0.08), so the opinion can still be considered reliable.

In summary, our statement at line 190 reflects an intuitive expectation.
While we acknowledge that "higher than" may appear clearer to some readers, its intended meaning is essentially consistent with the original phrasing "not lower than." We believe either formulation is acceptable in conveying the intended message. Meanwhile, without violating the theoretical rule of BCF, our proposed TD module still adjusts the uncertainty to a more reasonable value, which achieves: 1) reflecting the presence of conflicts by exhibiting higher uncertainty compared to BCF without TD (Proposition 3.6); and 2) providing a reasonable level of uncertainty in the fused opinion, which is slightly higher but still within a reliable range for decision making.
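For readers following this exchange, the mechanics under discussion can be sketched in a few lines. This is our own illustrative reconstruction, not the paper's code: `trust_discount` follows the standard probability-sensitive trust discounting from the Subjective Logic book, `bcf_combine` follows the reduced Dempster-Shafer rule used in TMC/ETMC, and the belief vectors and trust value `p = 0.3` are made-up numbers (they are not the values from Tables 1-3).

```python
def trust_discount(b, u, p):
    """Probability-sensitive trust discounting (Subjective Logic, sketch).

    b: belief mass vector of a view's functional opinion,
    u: its uncertainty (sum(b) + u == 1),
    p: projected probability of the referral opinion (trust in the view).
    The discounted-away belief mass is absorbed into uncertainty.
    """
    b_d = [p * bi for bi in b]
    u_d = 1.0 - p * sum(b)
    return b_d, u_d


def bcf_combine(b1, u1, b2, u2):
    """Reduced Dempster-Shafer combination of two opinions, as in TMC/ETMC."""
    # Conflict: total belief mass assigned to disagreeing class pairs.
    conflict = sum(b1) * sum(b2) - sum(x * y for x, y in zip(b1, b2))
    norm = 1.0 - conflict
    b = [(x * y + x * u2 + y * u1) / norm for x, y in zip(b1, b2)]
    u = u1 * u2 / norm
    return b, u


# Two conflicting views over classes ["Safe", "Unsafe"] (toy numbers).
b1, u1 = [0.8, 0.1], 0.1   # view 1 confidently says "Safe"
b2, u2 = [0.1, 0.8], 0.1   # view 2 confidently says "Unsafe"

# Plain BCF fuses despite the conflict and stays very certain.
_, u_plain = bcf_combine(b1, u1, b2, u2)

# Discounting the less-trusted first view before fusing raises the fused
# uncertainty, reflecting the disagreement (cf. Proposition 3.6).
b1_d, u1_d = trust_discount(b1, u1, p=0.3)
b_fused, u_discounted = bcf_combine(b1_d, u1_d, b2, u2)
```

With these toy numbers, the discounted fusion ends up with more belief on "Unsafe" and a noticeably higher (yet still moderate) uncertainty than plain BCF, mirroring the behavior the authors describe for Table 3 versus Table 1.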
Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More
Accept (poster)
Summary: This paper explores how the performance of vision transformers changes when patch sizes are scaled. The main contribution is a scaling law, not presented in previous work, showing that as patch sizes get smaller, classification and segmentation performance improves. The authors also show that the benefit of scaling up the patch count is larger than that of scaling up the model size. This is a really interesting phenomenon.

Claims And Evidence: The authors have made extensive experiments to support the claims.

Methods And Evaluation Criteria: The criteria are widely used.

Theoretical Claims: There are no theoretical claims. Not applicable.

Experimental Designs Or Analyses: The experimental designs are thorough. The authors conduct a series of experiments to demonstrate the claims, and the designs are also meaningful.
- One concern about the experimental design is that the authors only use two baseline models, i.e., DeiT and Adventurer. I would like to see whether the scaling law still exists when stronger baselines are used, e.g., [A] and [B].
- In addition, the experimental designs are all based on ViT-like plain architectures. This type of architecture has been widely used in vision tasks, but the pyramid architecture is also important. For example, Swin Transformer is such an architecture, adopting a pyramid network. Have the authors conducted experiments on this type of architecture?

[A] All tokens matter: Token labeling for training better vision transformers. NeurIPS, 2021.
[B] Early convolutions help transformers see better. NeurIPS, 2021.

Supplementary Material: The supplementary material provides more experimental settings and model details.

Relation To Broader Scientific Literature: The scaling law presented in this paper has not been discussed in previous work, to my knowledge.

Essential References Not Discussed: See the 'Experimental Designs Or Analyses' section.
Other Strengths And Weaknesses: Strengths:
- The presentation of this paper is good. The authors clearly explain the motivation of the paper, and the method is easy to follow.
- Though the paper does not present a novel method, the phenomenon observed by the authors is interesting.

Weaknesses:
- The paper [B] has shown that adding early convolutions to the plain ViT model can largely improve model performance. However, the authors did not compare with this type of work, which aims to replace the patchification method. It would be good to add some comparisons on this.
- In Fig. 3, it is good to see the comparison between patch size scaling and parameter scaling. However, a new question naturally arises: could these two types of scaling benefit from each other? In other words, could the performance be further improved if fine-grained patchification is used when taking higher-resolution images as inputs?
- The authors claim that the proposed approach is also applicable to Mamba-like models. However, there seem to be no experimental results supporting this.

[B] Early convolutions help transformers see better. NeurIPS, 2021.

Other Comments Or Suggestions: Actually, when I read the introduction section of this paper, I supposed that a novel method would be presented to solve the computation overhead of using smaller patches. If such a method were proposed, I think the overall quality of this paper could be further improved.

Questions For Authors: Not applicable.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their careful evaluation and thoughtful comments. Detailed responses can be found below.

**Q1: *Experiments with other architectures (e.g., Swin Transformer, LV-ViT)***

Thank you for your insightful feedback. We would like to highlight that our conclusion also holds for pyramid networks such as Swin Transformer (see results in our response to Q1 of Reviewer DwN7) and stronger ViT baselines such as LV-ViT (see the table below). To complete this experiment within a limited time frame, we use 112×112 input images and employ LV-ViT-S as the backbone. Under this stronger baseline, we observe a performance trend similar to that of the standard ViTs, which demonstrates the architecture-wise robustness of patch size scaling. We will include the corresponding discussion in the revised version.

| Patch | 16x16 | 8x8 | 4x4 | 2x2 |
|-------|-------|-----|-----|-----|
| Acc. | 78.0 | 81.1 | 82.5 | 83.2 |

**Q2: *Comparison with early convolutions in the plain ViT model.***

Thank you for your suggestion to compare with early convolutions. We summarize the results of patchification scaling with early convolution in the table below. As shown, when the patch size is at the standard 16×16 level, applying early convolution brings a noticeable performance gain. However, this benefit diminishes rapidly as the patch size decreases, and becomes negligible at 8×8. In simple terms, the basic idea behind early convolution is to decompose a single convolution layer with 16×16 kernel size and 16×16 stride into a stack of smaller kernels (e.g., 3×3) with smaller strides (e.g., 1×1 or 2×2). This approach effectively addresses the training instability caused by the abrupt spatial resolution drop at the patchification layer in standard ViTs. However, as the patch size decreases, this issue becomes much less pronounced.
We are encouraged to see that our paper shares some core insights with this line of research, namely, that large-kernel patchification in standard ViTs can limit model expressivity. The difference is that we take a more direct approach to reducing the spatial downsampling effect in patchification. When extended to the extreme case of pixel tokenization, this issue is fundamentally resolved. We will include these results and discussions in the revised version.

| Patch size | Original accuracy | Early conv. accuracy |
|------------|-------------------|----------------------|
| 16x16 | 82.6 | 83.1 |
| 8x8 | 83.9 | 83.9 |

**Q3: *Could these two types of scaling methods (patch size and parameter size) benefit from each other?***

Thank you for your insightful question. Yes, they do benefit each other, and we have related results in Table 4 and discussions in lines 364-376. Table 4 shows a consistent upward trend from the top-left (small model and large patch size) to the bottom-right (large model and small patch size) corner. Additionally, as observed from both Table 1 and Table 4, reducing the patch size and increasing the input resolution can both have a positive impact, demonstrating the good potential of jointly employing these scaling dimensions.

**Q4: *Application to Mamba-like models.***

Thank you for your feedback. We evaluate patch size scaling performance with ViT and Adventurer models, where for Adventurer we actually use its Mamba-based setup, so the application to Mamba models is already included. We will clarify this point in the revision.

**Q5: *I supposed that a novel method would be presented to solve the computation overhead when using smaller patches. If such a method were proposed, I think the overall quality of this paper could be further improved.***

We appreciate your constructive comment on this point!
In fact, before conducting the patchification scaling study, we first identified a solution to the computational challenges brought by small patch sizes: we chose to carry out the main experiments using the Adventurer model, whose computational cost scales linearly, as opposed to ViT's quadratic scaling, with respect to sequence length. This linear complexity fundamentally resolves the computation bottleneck, which enabled us to perform the pixel tokenization experiments using modest computational resources (256 A100 GPUs). We also discuss the advantages of this linear architecture in Table 6: in the most demanding training setup with the longest sequences, Adventurer achieves an 8.4× speedup compared to ViT with FlashAttention. This substantial efficiency gain is what made our large-scale pixel tokenization experiments practically feasible.

---

Rebuttal Comment 1.1: Comment: Thanks for the responses. My concerns have been resolved. Other reviewers have different concerns about this paper, but I think the observation of this paper is interesting. I would like to keep my rating unchanged.
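As a back-of-the-envelope illustration of the sequence lengths discussed in this thread: the token count of a non-overlapping patchification is simply (H/P)·(W/P), which is where the "50,176 tokens" in the title comes from (224×224 input with 1×1 patches). The helper name below is ours, not from the paper.

```python
def num_tokens(height, width, patch):
    """Number of tokens for non-overlapping patchification of an image."""
    return (height // patch) * (width // patch)

# Standard ViT setting: 224x224 input, 16x16 patches -> 196 tokens.
standard = num_tokens(224, 224, 16)

# Pixel tokenization: every pixel becomes a token -> 50,176 tokens,
# i.e., a 256x longer sequence than the standard setting. Quadratic
# attention cost would grow ~256^2 = 65,536x, which is why the
# linear-complexity Adventurer model makes this regime practical.
pixels = num_tokens(224, 224, 1)
```

The 256× sequence-length blow-up (and the resulting ~65,536× growth in quadratic attention cost) is our arithmetic from these formulas; the 8.4× speedup figure above is the one the authors report for Adventurer versus ViT with FlashAttention.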
Summary: The authors perform a study regarding the size of patches used in modern vision transformers or state space models. Utilizing the Adventurer state space model, the authors are able to experiment with resulting sequence lengths of up to 50,176 tokens. The work arrives at the conclusion that there exists a scaling law that holds until each input pixel is represented by a dedicated patch/token. Smaller patches naturally align well with dense prediction tasks like semantic segmentation, where the need for a decoder diminishes with the use of smaller patches.

## update after rebuttal

I thank the authors for their efforts in answering my questions and providing additional experiments. Still, I am of the opinion that the introduction would benefit from references to other works that have already experimented with and observed the benefit of smaller patches. Furthermore, the authors explain the improvement by a reduction of the information loss in the patchification step, which is very reasonable and certainly true for smaller models. The insightful experiment presented in Figure 4 shows that the scaling potential is bounded by the original image resolution. I think a similar experiment with respect to the hidden dimension of the model would be of high relevance. Thus, I still think that the paper would greatly benefit from an experiment that investigates the scaling potential in a regime where no compression is needed to create patches. The experiments provided during the rebuttal support the information loss argument, as the smaller-patch versions consistently show a much smaller reconstruction loss. Yet on second glance, the results are not totally plausible, as they show no impact of the bottleneck dimension of the autoencoder whatsoever.

Claims And Evidence: The work claims that there exists a scaling law when it comes to the size of patch tokens used in transformer or state space models for image processing.
The authors provide empirical results that clearly support the claim. Methods And Evaluation Criteria: Supervised training on ImageNet is a standard evaluation that allows for a comparison to a wide range of models. Semantic segmentation on ADE20k is common as well. The authors clearly state that they focus on decoder-free evaluation as it aligns with the benefits of smaller patches. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes. Experiments follow recipes proposed in the respective publications (e.g. DeiT and Adventurer). Adaptations are listed in the Appendix. Supplementary Material: Yes. All. Relation To Broader Scientific Literature: The contribution of this paper adds another example to the set of scaling laws that have shown empirically that an exponential increase in resources leads to a linear performance improvement. Essential References Not Discussed: Not essential, but there are quite a few works in self-supervised learning that already utilize smaller patches to improve performance. And especially for smaller models that is a widely used practice. Already the 14x14 ViT Huge model in the ViT paper suggests that the potential benefit of smaller patches has been known for quite some time. E.g., [1-3]; in [3] especially, the authors already work with small 4x4 patches to achieve an advantage in the resulting per-parameter comparison to other models. [1] Caron, Mathilde, et al. "Emerging properties in self-supervised vision transformers." Proceedings of the IEEE/CVF international conference on computer vision. 2021. [2] Zhou, Jinghao, et al. "Image BERT Pre-training with Online Tokenizer." International Conference on Learning Representations. [3] Assran, Mahmoud, et al. "Masked siamese networks for label-efficient learning." European conference on computer vision. Cham: Springer Nature Switzerland, 2022.
Other Strengths And Weaknesses: Strengths:
- The work provides empirical evidence that smaller and smaller patches steadily improve the performance of visual backbones and follow a scaling law, both for transformer and state space models.
- The experimental setup is well documented.
- Additional studies show that one patch per pixel is the optimal size and that using more patches than pixels brings no benefit.

Weaknesses:
- No technical or theoretical contribution.
- The benefit of smaller patches has been known and has been exploited for quite some time.

Other Comments Or Suggestions: Typo in Fig. 3b: "form" should be "from". Questions For Authors: Concerning the argument that the scaling law can be attributed to the information loss in the patch creation step: why should this hold in the case where the hidden dimensions are large enough to simply stack the corresponding pixels? Ethical Review Concerns: None. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive review, insightful feedback, and recognition of the empirical contribution of this work. The detailed response is summarized below. **Q1: *No technical or theoretical contribution.*** Thanks for the comment. As you noted, this is indeed an experiment-driven study, and our focus is on delivering empirical results to demonstrate a new scaling law. We respectfully ask the reviewers to take into consideration that all studies on scaling laws are inherently grounded in empirical observations. For large vision or language models, it is extremely difficult—if not impossible—to theoretically derive precise upper or lower bounds on their fitting capacity. Therefore, empirical evidence remains the most practical and informative approach for uncovering such trends. If we consider the initial scaling law proposed by Kaplan et al. (2020) as a theoretical foundation, this work would be a meaningful extension of their theory to vision. Specifically, in the initial study, they show that in language modeling tasks, when the model size (parameter count $M$) and the amount of training data (token count $N$) increase, the perplexity (or loss) on the validation set tends to follow a near power-law relationship: $L(N,M)\approx\alpha N^{-\beta}+\gamma M^{-\delta}+\text{(smaller interaction terms)}$ where $L$ represents the validation loss (e.g., cross-entropy), and $\alpha,\beta,\gamma,\delta$ are constants fitted from large-scale experiments. In this work, we not only reproduced similar scaling trends on vision tasks, but also validated the second component of the scaling law (the token count) in the vision domain. Specifically, we show that increasing token count in vision can be achieved by reducing patch size rather than solely increasing dataset size, which we believe is an important theoretical extension of the original scaling law foundation.
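For illustration, fitting such a power law to empirical measurements is typically done by least squares in log space; the sketch below recovers made-up constants (alpha = 5.0, beta = 0.3, synthetic data; these are not values from Kaplan et al. or from this paper) for a single-variable version of the formula:

```python
import numpy as np

# Synthetic (scale, loss) pairs following L(N) = alpha * N^(-beta),
# with illustrative constants alpha = 5.0, beta = 0.3.
N = np.array([1e6, 1e7, 1e8, 1e9])
L = 5.0 * N ** -0.3

# log L = log(alpha) - beta * log(N): a linear fit in log-log space.
slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
alpha_hat, beta_hat = np.exp(intercept), -slope

print(alpha_hat, beta_hat)  # recovers approximately 5.0 and 0.3
```

Multi-term fits such as $L(N,M)$ above require nonlinear optimization over many training runs, but this log-space view is what produces the characteristic straight lines in log-log scaling plots.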
We sincerely appreciate your suggestion on this point and will include the corresponding discussion in the revised version. **Q2: *The benefit of smaller patches has been known and has been exploited for quite some time*** Thanks. We agree that many existing studies have observed that using smaller patch sizes within a certain range can improve prediction performance. However, prior to our work, these observations remained scattered and lacked a unified theoretical or empirical framework. In contrast, our study elevates patchification scaling from isolated findings to a law-level conclusion. We believe this distinction is substantial—not only in terms of the conclusions themselves, but also in their implications for guiding future progress in the community. For example, before the NLP scaling laws were formally introduced by Kaplan et al., practitioners had already noticed from experience that "larger models and more data tend to yield lower loss." Yet such heuristic insights were insufficient to offer reliable guidance or theoretical grounding for large-scale language model development. The formulation of scaling laws provided the community with a much clearer signal: the scaling curve had no evident upper bound, or at least we were far from reaching it. This encouraged researchers to confidently invest significant resources in scaling up language models, ultimately contributing to the breakthrough success of models like the GPT series. In the same spirit, we would like to highlight the contribution of our work. We aim to offer a long-term, principled perspective on vision model development. The value of patchification scaling laws lies in showing that patch size is a reliable scaling dimension—even at today’s typical input scales, shrinking the patch size down to 1×1 continues to yield noticeable gains. This suggests that for those seeking to push the limits of model performance, reducing patch size remains a viable and promising direction. 
**Q3: *Concerning the argument that the scaling law can be attributed to the information loss in the patch creation step. Why should this hold in the case where the hidden dimensions are large enough to simply stack the corresponding pixels?*** When the hidden dimension exceeds the total number of pixels within a patch, the patch embedding may theoretically support a lossless projection. However, in practice, the high-dimensional hidden features are optimized to represent the holistic semantics of the whole patch, rather than preserving the pixel-level representations. This is because the token mixing process (e.g., self-attention) is performed on patch-level features. That is, different segments within a feature vector undergo the same operation during token mixing. Moreover, the hidden dimension cannot be flexibly scaled up since doing so quadratically increases the parameter count of all linear projection layers and can easily lead to training collapse (e.g., also observed in Dehghani et al., 2023). **Q4: *Typo fig 3b.*** Thanks. We will fix it in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions. @W1: When comparing this work and Kaplan 2020, I think there is still a difference regarding the novelty/surprise factor of the results and the scale/scope of the respective experiments in favor of Kaplan 2020. @W2: My point was not solely about the distinction of your work, but also about the fact that your work does not mention any of these observations. @Q1: If I read your answer correctly, you as well attribute the improved performance of models that use smaller patches to the increased capacity due to an exponential increase in neurons, which becomes even more relevant when the level of abstraction and sparseness increases. Combined with the fact that attention and feed-forward layers use weight sharing, no additional parameters are introduced and overfitting/training instabilities are less of a problem.
That, however, is not exactly a preservation of information. --- Reply to Comment 1.1.1: Comment: We appreciate your detailed response to our rebuttal. **@W1:** We agree that Kaplan 2020, as the first work to propose scaling laws, undoubtedly has profound novelty and practical impact. However, our work fills the gap in previous visual scaling laws and introduces a new dimension orthogonal to parameter scaling, which provides valuable guidance for the future development of visual models. **@W2:** In the paper, we discussed some prior observations that using smaller patch sizes can help improve performance. For example, in the Introduction, we note that previously, reducing DeiT's patch size to 8x8 resulted in significant performance gains (lines 28-31r), and Nguyen 2024 observed substantial gains using pixel patchification on small input images at a 28x28 resolution. We want to again highlight that these observations are sporadic, and our work systematically studies this issue and consolidates the findings into a scaling law. In the revised version, we will provide a detailed summary of previous findings on the impact of patch size. **@Q1:** Regarding the information loss and preservation problem, here we would like to give direct evidence that large patches easily lead to information loss and smaller patch sizes are effective solutions. We conduct a simple pixel-reconstruction experiment to verify whether information can be completely restored after patchification. We employ a three-layer model consisting of a patchification layer (i.e., a convolution with kernel size and stride $p\times p$), layer normalization, and a de-patchification layer (another similar convolutional layer that brings patches back to pixel dimensions). We then train this model on ImageNet using a pixel-to-pixel L2 loss, with a batch size of 64 for 60k iterations. The results are shown in the following table.
We observe that the reconstruction loss is closely related to the patch size but largely independent of the hidden dimension. At a 16x16 patch size, we see a significant reconstruction loss, e.g., 0.211 for the base-sized hidden dimension (768). However, when the patch size is reduced to 1x1, the loss dramatically decreases to 0.012, indicating that this fundamentally resolves the issue of information loss in the process.

| Hidden dimension | Patch size ($p$) | Reconstruction loss ($\downarrow$) |
|------------------|------------------|---------------------|
| 384 | 16 | 0.204 |
| 384 | 4 | 0.068 |
| 384 | 1 | 0.015 |
| 768 | 16 | 0.211 |
| 768 | 4 | 0.065 |
| 768 | 1 | 0.012 |
| 1280 | 16 | 0.202 |
| 1280 | 4 | 0.060 |
| 1280 | 1 | 0.014 |

It's noteworthy that theoretically, as long as the hidden dimension in the linear layer is sufficiently large, it can preserve all the pixel information within the original patch. However, in practice, any passage through a non-linear layer (such as LayerNorm) compresses these fine-grained details. Thus, as observed in this experimental result, increasing the hidden dimension does not mitigate information loss. In both ViT and Adventurer, each token mixer is preceded by a LayerNorm, resulting in significant information loss with the traditional 16x16 patch size.
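The mechanism argued here, an invertible linear rearrangement followed by a normalization that discards per-token statistics, can be made concrete with a small numpy sketch (pure reshaping stands in for the learned convolution, an illustrative simplification of the three-layer model described above):

```python
import numpy as np

def patchify(img, p):
    """(H, W, C) image -> (num_patches, p*p*C) tokens via pure rearrangement."""
    H, W, C = img.shape
    x = img.reshape(H // p, p, W // p, p, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * C)

def unpatchify(tokens, H, W, C, p):
    """Inverse rearrangement, mapping tokens back to an (H, W, C) image."""
    x = tokens.reshape(H // p, W // p, p, p, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))

# The rearrangement alone is perfectly invertible for any patch size.
for p in (16, 4, 1):
    assert np.allclose(unpatchify(patchify(img, p), 224, 224, 3, p), img)

# A LayerNorm-style step removes each token's mean and scale, so the
# round trip through it can no longer recover the original pixels.
tok = patchify(img, 16)
tok_ln = (tok - tok.mean(-1, keepdims=True)) / (tok.std(-1, keepdims=True) + 1e-6)
print(np.abs(unpatchify(tok_ln, 224, 224, 3, 16) - img).mean())  # clearly nonzero
```

In the actual models the normalization acts on learned embeddings rather than raw pixels, so this only isolates the mechanism pointed to above; the table is the empirical measurement of its effect.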
Summary: The paper evaluates the performance (test loss) of vision models (standard ViT and Adventurer) against different patch sizes. The paper's findings are that with reduced patch size, the performance of the networks increases, and in the extreme case of 1x1 patch size, the segmentation tasks do not need a decoder head. The paper evaluates the patchification scaling on the standard ImageNet-1k classification, ADE20k semantic segmentation, COCO object detection and instance segmentation benchmarks. Claims And Evidence: 1. The paper examines how compressive encoding affects visual representations and whether patch size can be a new scaling dimension for modern visual architectures. It provides evidence in the form of empirical results obtained by changing the patch size down to 1x1 (pixel level). 2. The paper claims tokenization/patchification is another dimension for scaling law. The evidence is provided by reducing the patch and getting marginal gains. Methods And Evaluation Criteria: This study should have been done very carefully and meticulously because there are many aspects and variables that need to be carefully analyzed and ablated. The overall method simply involves getting two networks and reducing the patch size down to 1x1 (not for pure transformers) and then showing results on vision and timeseries tasks. The issue is that the method runs counter to the common wisdom of balancing the trade-off between compute and gains. It does not make sense to increase compute just because more hardware is available without getting real gains from it. The method showcases its results on metrics without ever indulging into the GLOPs, test and training time compute, throughput and other aspects that come with reducing the patch size of the input. The paper raises the slogan of “a pixel is worth a token” (line 233) but does not practice it in ViT because “a pixel is worth a token” requires a lot of resources.
Line 312-313L: The paper provides a misleading sense of great results, but the comparison is unfair since the numbers are compared to a 32x32 patch size instead of the mainstream/common 16x16 or 14x14 patch size. The worse results at the uncommon 32x32 patch size make it look like the results are good (which they aren't). Theoretical Claims: This paper actually could ground itself from a mathematical point of view and framework when talking about compression and information contained in the embedding vectors of a given size. The lack of such a framework and the reliance entirely on empirical results is not very convincing. Experimental Designs Or Analyses: The experimental design and ablations in the paper are either flawed or entirely absent. Please refer to the other sections for the stated problems. Supplementary Material: I have read the supplementary material. Relation To Broader Scientific Literature: NA Essential References Not Discussed: It's surprising that "On the relationship between self-attention and convolutional layers" by JB Cordonnier et al., which was the basis of ViTs and started with a patch size of 2x2 (which was later changed to 16x16 in the ViT paper), has not been discussed at all. Compression and compression-related topics (e.g., information theory) are entirely missing from the paper. Other Strengths And Weaknesses: Weaknesses: 1. The overall idea of scaling is to increase the generalization capability of the models. It is not supported by the evidence shown in the paper that patchification leads to better generalization. Moreover, smaller patchification leads to more computation without having significant benefits. 2. The analogous patchification in NLP would be character-level tokenization, which has been shown not to be optimal. It is not clear from this work why something similar would or would not hold for vision models. 3. There are claims and arguments (discussed above) that are not supported by any experiments or ablation studies. 4.
There is no effort to introduce or quantify the information lost through compressive encoding from an information-theory perspective. 5. There is no ablation that discusses the patchification vs. embedding dimension size trade-off. 6. The results themselves are only marginally better and yet underperform SOTA methods with a greater patch size (SpatialMamba, for example, has better results). Other Comments Or Suggestions: Missing footer. Not using the ICML template, or the template is altered. Line 250-257: It sounds like the paper is claiming credit for the method "Adventurer" only for increasing the input token size by reducing the patch size. Increasing the input size shouldn't be posed as an achievement. Line 351R: The same condition does not hold for the paper's own method. Questions For Authors: Line 15-16R (Right paragraph): "We argue that this operation often incurs irreversible information loss to visual inputs." There is no reference, ablation or supporting evidence provided for this argument. Line 105: Why exactly does computation need to be scaled up? Line 114-115: A single-pixel patch (or even a few-pixel patch) expanded to the embedding dimension is not a compressive paradigm anymore but, so to say, an "expansive" regime. There is no discussion on the compression ratio. Line 324R: "patch size scaling not only exhibits a better computation-accuracy tradeoff": Where is the trade-off provided? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate your comments and feedback during the review stage. We'd like to respectfully point out that your review contains several factual misunderstandings of our paper. These points were clearly presented in the main text, and we'd like to first clarify them below: 1. ***Absence of 1x1 patch size for ViT:*** This is factually incorrect. We do implement pixel patchification in ViTs, albeit at lower input resolutions. The results are clearly shown in the first row of Tab.1. 2. ***“The overall method… showing results on vision and timeseries tasks.”*** Our work is solely focused on vision tasks, and there is no mention or evaluation of time series data anywhere in the paper. 3. ***“without ever indulging into the GLOPs, test and training time compute”*** This is completely inaccurate. We provide a thorough analysis of FLOPs (Fig.3), memory usage and training time (Tab.6), and discuss computational costs of patch scaling in multiple places (e.g., Lines 324–329, 436l–408r). 4. ***"no ablation that discusses the patchification vs. embedding dimension size trade-off."*** We compared patch size and parameter size scaling in the first experiment of the ablation study in Section 4.3. The embedding dimension is closely tied to the parameter count. E.g., in the standard scaling of ViT models from ViT-T to ViT-B, the only change lies in the embedding dimension; from ViT-B to larger models such as ViT-L, it still involves an increased embedding dimension while the depth also changes. **Q1: *Compare to 32 patch size*** Our study focuses on exploring scaling laws of patchification, which means we need to examine both scaling up and scaling down. We aim to collect a wide-range performance curve to demonstrate that the gains are smooth and consistent across different patch sizes.
The 32 patch size results are reported to provide a more comprehensive view; we did not hide the results for the mainstream 16 patch size—they are shown in the same table, and scaling from it leads to a +2.2 box AP. **Q2: *Generalization capability*** We demonstrated that patch scaling consistently leads to improved predictive performance, which generalizes across different models, tasks, and input resolutions. Below we include more results on generalization capability against out-of-domain data, where our conclusions hold true for ImageNet variant test samples as well.

|Patch|IN-v2|IN-R|IN-A|IN-S|
|-|-|-|-|-|
|16|71.7|87.5|31.4|36.8|
|8|72.9|87.9|36.6|38.3|
|1|74.3|88.8|41.6|39.5|

**Q3: *Patchify in NLP*** Word-level tokenization has indeed been prominent in NLP. However, we believe this is only a phase-specific conclusion; no one has ever proven that it is the only correct approach. In contrast, recent studies (e.g., the Byte Latent Transformer) have shown that byte-level LLMs can outperform word-level models and offer better scaling potential. Thus, whether patchification is necessary remains an open question that deserves further investigation. Moreover, directly transferring conclusions from NLP to CV is not always appropriate. Images and text differ significantly in terms of information density; a pixel cannot be naively equated to a character in text. These differences call for careful study, and our work aims to contribute to this ongoing exploration. **Q4: *Quantify the information loss from the information theory perspective.*** This paper is not a theory-oriented study, but rather an empirical investigation into scaling laws. We believe both empirical and theoretical perspectives are equally important, and our experimental results can serve as a solid foundation for future theoretical work. In fact, our evaluations already offer explanations from an information point of view, in which compression can be assessed via entropy differences.
Fig.1 explicitly shows how the cross-entropy loss changes during patch scaling, which effectively measures the KL divergence between the ground-truth distribution and the encoded distribution. This typically serves as a direct proxy for compression in information theory. **Q5: *Results marginally better; underperform SOTA models*** Our focus and contribution lie in figuring out how patch size affects the performance of vision models, rather than chasing SOTA on open-ended architectures. Whether a performance gain is considered marginal is subjective and varies by context. For a fixed architecture, improving accuracy from 82.6 to 84.6 is a significant achievement. It is widely acknowledged that hierarchical models (e.g., Swin, SpatialMamba) are more capable of fitting ImageNet-scale datasets compared to plain models, but plain models have wider applications in multimodal tasks. We focus on patch scaling within plain architectures, so the performance gap with SOTA models is expected. In our response to R#DwN7 we also show that patchification scaling laws hold true for Swin, so we believe that future follow-up work applying patch size scaling to SOTA architectures could reasonably expect further performance improvements. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. It is ironic that the mistakes I made while writing the review have forced me to write a rebuttal to my own review. I apologize for the mistakes. 1. Clarification on the absence of 1x1 patch size for ViT: My original point may have been unclear. While the rebuttal correctly states that pixel-level patchification (i.e. 1x1 patches) is implemented at lower input resolutions, the critique was centered on the discrepancy between the title's claim ("An Image Is Worth 50,176 Tokens") and the actual evaluations. Specifically, Table 1 does not present results for a sequence length of 50,176 on DeiT-Base trained on ImageNet, despite this being a central message of the paper.
Based on the runtime extrapolation from Table 6, training with this sequence length would take approximately 6.6 years on a single A100 GPU (80GB) with a batch size of 5 (a figure that is already optimistic). Given the substantial computational demands, I strongly recommend including a carbon footprint section. Such a section would contextualize the environmental implications of the proposed approach and help readers understand the trade-offs involved. This could be benchmarked using standard ImageNet training protocols on A100 or similar hardware. 2. “The overall method… showing results on vision and timeseries tasks.” This is an error on my part. This reference was mistakenly carried over from another paper I reviewed that included timeseries experiments. I apologize for the oversight. Rest assured that this error does not influence my evaluation of the paper and is only a typographical error. 3. “without ever indulging into the GLOPs, test and training time compute”. While this was initially an oversight, revisiting Figure 3 reaffirms my concern. The method demonstrates only marginal performance gains despite incurring approximately 150x the computational cost in FLOPs. This trade-off raises serious concerns about the practicality of the approach. For example, training a single epoch with batch size 1 requires 967 GPU hours, compared to only 0.36 GPU hours for DeiT-Base. This stark difference highlights a major limitation of the proposed method despite increased hardware capabilities. I appreciate the answers provided for Q3 and Q4 in the sense that the answers are reasonable. Regarding the answer to Q5, it still leaves the primary objective of the paper ambiguous. If the intention is to present an analytical perspective on patch size scaling, then the paper should frame itself explicitly as such, rather than prescribing a specific patchification strategy (e.g.
pixel-level patchification or "learning from pixels") as a new norm due to increased hardware capabilities. "For fixed architecture, improving accuracy from 82.6 to 84.6 is a significant achievement." "Significant" is a subjective term, but it must be weighed against the massive increase in FLOPs (from 1x to 150x). These gains do not come at zero cost. To improve transparency, I suggest reporting the GFLOPs alongside the metrics in the main results table (e.g. Tab 1), allowing readers to better evaluate the trade-offs involved. Unanswered Question: Why is the paper using an altered ICML template (e.g. the footer is missing), and why is the whole **impact statement section missing**? The impact statement section for the proposed patchification laws is needed even more due to the gigantic increase in computational demands. My main concerns are the framing of the paper as not being analytical enough, alteration of the template for unfair space gains compared to other papers, not being transparent enough about the impact of the proposed patchification "laws" with an exponential compute increase, and the lack of theoretical discussion. I am open to reevaluating my rating based on the answers provided to me and other reviewers. Based on the answers provided so far to me and other reviewers, I will increase my rating to 2. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your detailed comments and the improved scores. We are pleased to see that some issues have been resolved, and here we provide further explanations to address your remaining concerns: ***Accuracy-Computation Trade-off:*** For all studies investigating scaling laws, their greatest value lies in revealing the potential of scaling in a specific direction—that is, along a particular dimension, how much gain can be achieved. As the model is scaled up, the accuracy-computation trade-off inevitably becomes worse as we approach its performance limits.
Therefore, assessing a scaling direction solely based on the growth of FLOPs is unfair. A more objective comparison can be achieved by contrasting one scaling dimension against another. For example, in Figure 3, we compare patch size scaling with traditional parameter scaling, where we not only achieve a better trade-off but also demonstrate a superior scaling limit, suggesting that we already have favorable scaling performance in terms of both efficiency and effectiveness. Furthermore, "scaling law", as a widely accepted term, refers to studies aimed at exploring the scaling-up potential of a specific dimension within a given architecture rather than proposing new structures or methods, so our work is an analytical study. For a scaling law, empirical results are the most direct (or almost the only) means of assessment, as our goal is to demonstrate its practical performance. Like most scaling laws, a model's actual capacity is difficult to prove mathematically, but empirically fitted scaling curves (as shown in our Figures 1 and 3) already provide sufficient insight to the community. ***ViT with 1x1 Patch Size:*** Thank you for your updated question. Our paper aims to explore whether patchification granularity can be a reliable scaling direction in visual understanding. Using the linear complexity of Adventurer, we have drawn a positive conclusion and demonstrated that an image can be scaled to 50,176 tokens, which we believe is an exciting finding and thus included in the title. For ViTs, the scaling conclusions obtained on smaller inputs are consistent with those of Adventurer. While ViT, due to its higher complexity, demands more computational power for processing very long sequences, this is an intrinsic limitation of ViT itself, not a weakness of our study.
Conversely, our evaluations help the community better understand this limitation: to handle longer sequences, ViT needs to introduce essential lower-level optimizations such as FlashAttention, SplashAttention, and KV-sharing strategies. With these strategies, the actual runtime of ViT is significantly shorter, and we have roughly estimated that an experiment with ViT at 224x224 resolution and 1x1 patch size could be completed within a week using 512 TPU v4 cores. ***ICML Footer:*** In the official ICML template, whether to display the footer depends on the command \printAffiliationsAndNotice{}, which is commented out by default in the template, so the footer does not display. We have inquired with the organizers, and they indicated that this is permitted during the review phase. ***Computation Cost and Environmental Impact:*** Thank you for your suggestion; we will report detailed resource consumption such as FLOPs and runtime in more parts of the paper (e.g., Table 1). We apologize for overlooking the impact statement section, as we thought it was optional. We will include the following statement in the final version: *This paper presents work whose goal is to advance the field of Machine Learning. Our experiments involved approximately 50,000 A100 GPU hours, which is considered a modest level of resource consumption compared to large-scale vision or language model research. While there are many potential societal consequences of our work, none are significant enough to warrant specific highlighting in this context. We believe the ethical impacts and societal implications are well-aligned with the advancement of machine learning technology.*
Summary: The paper addressed the scalability issue of patch sizes in the vision transformer (ViT), which is a widely adopted backbone in vision-related tasks. In past research, a moderate patch size is used by default when ViT is chosen as the backbone. This study empirically investigates the effect of varying the patch size (e.g., reducing it down to even 1x1) and finds a scaling-law-like rule in a variety of computer vision tasks (recognition, detection, segmentation). ## update after rebuttal Thanks for the response and additional experiments to clarify my concern on Swin Transformer. I have also checked the comments from other reviewers and stick to my original recommendation. Claims And Evidence: All are fine except for a possibly problematic setting in the section "limitations of input size scaling" and Figure 4. The comparisons may not be on a fair basis. Methods And Evaluation Criteria: The evaluations are conducted on several of the most popular benchmarks (e.g., ImageNet, COCO etc.) and with standard metrics (e.g., average precision for object detection and instance segmentation). I have no concerns about the evaluations. Theoretical Claims: There is no theoretical proof or claim. This is a work fully based on empirical evaluations on data. Experimental Designs Or Analyses: In fact, most of the pages in the paper were devoted to experiments, including both the reported performance scores on the chosen benchmarks and a series of ablation studies to reveal the effect of key factors (e.g., scaling of patch or parameter). I carefully checked the experimental settings. They seem to follow previous practice, with sufficient details presented. Supplementary Material: Yes. Relation To Broader Scientific Literature: The work reveals the importance of choosing a proper patch size in using ViT.
Given the popularity of ViT in a large number of domains (computer vision, medical image analysis, weather forecasting, earthquake prediction, etc.), the insight reported here may be valuable for improving many models now used in these domains. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Overall, I regard this as good work with clear insight and experimental design. The key idea (varying the patch size to investigate whether, and what kind of, a scaling law holds) seems empirically validated by a series of experimental results in this work. As far as I know, such a study is still missing in the literature, so I currently lean toward recommending acceptance of this work. However, there is still some room for improvement. One key difference between computer vision and other general tasks lies in the sparsity of attention in ViT. One example is the Swin Transformer proposed by Microsoft Research. The present work investigates the scalability of lengthening visual token sequences, but it is not clearly discussed whether the claims still hold when combined with sparse, spatially local attention. I would suggest the authors include additional experiments and discussion. The section "limitations of input size scaling" is not reasonable. Some key technical details are missing in the main text, particularly the specific way of increasing the number of parameters in patchification. Fixing the ratio of image size to patch size makes patches from different images unequal in granularity. In my experience, this will complicate the training of the model, since it has to handle more heterogeneous inputs during generalization. All of the above make the claim not fully convincing. Other Comments Or Suggestions: n/a Questions For Authors: Please see my comments on the weaknesses of this work. Ethical Review Concerns: No ethical issue was found in the submission. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s constructive feedback and detailed suggestions. Detailed responses to your questions/concerns are presented below: **Q1: *Additional experiments and discussion about sparse and spatially local attention networks.*** **A1:** Thank you for your constructive suggestion. We followed your feedback and conducted additional experiments on Swin Transformer, with the summarized results provided below. Specifically, we use Swin-T as the base model, which has an initial patch size of 4×4 and hierarchically downsamples the feature map by a final factor of 8 in both width and height during intermediate stages. We scaled it up by reducing the initial patch size to 2×2 and 1×1, while keeping the default downsampling rates unchanged. This scaling strategy closely mirrors the approach we used when handling standard ViT models. As shown in the table, our conclusion also holds for Swin Transformer—smaller patch sizes consistently lead to lower test loss and improved accuracy. In the revision, we will include larger local-attention models such as Swin-S and Swin-B to further support our findings. | Initial patch size | Inference time | Test loss | Accuracy | |--------------------|----------:|----------:|---------:| | 4x4 | 1x | 0.806 | 81.3 | | 2x2 | 3.9x | 0.734 | 82.0 | | 1x1 | 15.1x | 0.697 | 82.5 | **Q2: *Limitations of input size scaling.*** **A2**: We appreciate your insightful comments on this matter. We would like to explain the motivation behind this set of experiments: in fact, input size scaling and patch size scaling are largely equivalent—they both proportionally change the feature processing granularity and the sequence length of the model.
Through these experiments, our goal is simply to show that patch size is a more suitable choice for a standardized direction of scaling, since it does not affect input storage, and input size scaling does not necessarily offer performance advantages over patch size scaling beyond a certain range. Specifically, we know that increasing the input size within a certain range tends to improve model performance. For example, on ImageNet, using the same model such as DeiT-Base/Patch16, an input resolution of 384×384 achieves noticeably better accuracy compared to 224×224 (e.g., 83.1 vs. 81.8). We believe this improvement basically comes from two sources: 1) the direct benefit of increased computation from scaling up the input; and 2) a higher input resolution reduces the distortion caused by resizing images during preprocessing. To ablate these two effects, we keep a fixed ratio between input size and patch size, which helps maintain a roughly constant computational cost and figure out the impact of the second term. Then our experiment demonstrates that this benefit tends to vanish once the input resolution exceeds the average original image size in the dataset. We will carefully rephrase the description and analysis of this experiment in the revised version based on your suggestions and comments.
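As a back-of-the-envelope illustration of the equivalence claimed in the rebuttal above (our sketch, not taken from the rebuttal, assuming square inputs and non-overlapping patches), the two scaling directions can be compared through the ViT sequence length:

```latex
% For an input of resolution H x H and patch size P x P, the token count is
\[
  N = \left(\frac{H}{P}\right)^{2},
  \qquad \text{self-attention cost} = O(N^{2}).
\]
% Input size scaling:  H -> 2H at fixed P gives N -> 4N.
% Patch size scaling:  P -> P/2 at fixed H also gives N -> 4N.
% Fixed ratio H/P (as in the rebuttal's ablation): N stays constant,
% e.g. 224/16 = 448/32 = 14, hence N = 196 in both cases; only the
% resize distortion of the preprocessed input changes.
```

This makes explicit why fixing the ratio of input size to patch size holds the computational cost roughly constant while isolating the effect of resize distortion.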
When to Forget? Complexity Trade-offs in Machine Unlearning
Accept (poster)
Summary: The paper considers machine unlearning for strongly convex objective functions and under the assumption that the forget set is inaccessible to the unlearning method. The paper provides and proves theoretical lower and upper bounds on the ratio of the number of training iterations needed by unlearning from the fully trained model and by retraining from the initial model to achieve a given excess risk threshold. The bound depends on the weight of the forget set and the data dimensionality. The paper identifies three regimes where 1) trivial unlearning is possible by adding noise to the parameters, 2) unlearning is inefficient (impossible) and does not asymptotically improve the number of iterations and 3) efficient unlearning is possible by adding noise before fine-tuning with stochastic gradient descent. ## update after rebuttal The authors have satisfactorily addressed my concerns and my review was already positive. Claims And Evidence: The claims made are supported by clear and convincing evidence. However, there are some minor issues: 1) The paper makes the strong assumption of strongly convex and Lipschitz-smooth loss functions. While this seems standard in previous work, their applicability in practice needs to be clarified. 2) Definitions 1 and 2 are slightly non-standard and strong, as they hold for _any_ forget set. In the usual DP definitions, there are some constraints regarding neighbouring forget sets. 3) In Section 4.3, inefficient unlearning seems to be a more appropriate description than impossible unlearning. Unlearning is possible and may even be more efficient, but the speedup is not of an asymptotic order. Methods And Evaluation Criteria: Not applicable; this is a theory work. Theoretical Claims: I have checked the correctness of the proofs in Appendices A and C. In the proof of Lemma C.1, the distribution with respect to which the expectation is taken should be stated. It is first on the retain set, then on the forget set. If r_f > 0.5, the bound can just be L.
In the proof of Theorem 2, the connection between Eq (36) and the claim in Theorem 2 is not immediately clear. The readability and clarity of the proofs can be enhanced. Experimental Designs Or Analyses: There is only one experiment, on a linear regression model with a synthetic loss function and dataset, instead of real-world datasets such as the California Housing dataset. I am not sure why the expected loss function in line 389 is used over other loss functions. Supplementary Material: I reviewed some proofs but not in detail. Relation To Broader Scientific Literature: (Chourasia and Shah, 2023; Allouah et al., 2024) have identified upper bounds on the number of gradient steps needed by their specific MU algorithm and retraining to achieve unlearning. The paper identifies lower and upper bounds on the ratio of the number of training iterations needed by unlearning from the fully trained model and by retraining from the initial model to achieve a given excess risk threshold. The bounds would hold for any stochastic, iterative, first-order learning algorithm. Essential References Not Discussed: Essential references (Chourasia and Shah, 2023; Allouah et al., 2024) are discussed. Other Strengths And Weaknesses: **Strengths** 1. The paper and mathematical definitions (especially Sections 1-3) are well-written and clear. 2. The paper provides novel and important theoretical insights that would hold for any stochastic, iterative, first-order algorithm. **Weaknesses** 1. The experiments are limited: they use a synthetic dataset and loss function. Other Comments Or Suggestions: Typos * In Sec 4, there should be a brief summary of how each theorem is proven. For example, Theorem 1 uses the MU algorithm that adds Gaussian noise to the previous optimum * "Let the unlearning algorithm consist in simply" in line 554 * The distribution the expectation is taken w.r.t. in the proof of Lemma C.1 is missing Questions For Authors: 1. Allouah et al., 2024 have separate results for out-of-distribution data.
How do out-of-distribution data affect your theoretical guarantees? The DP-based MU algorithms and definitions seem to work for any dataset as long as the weight on the forget set is fixed. The response would make the difference between the two papers clearer. 3. Explain Assumption 2 and why it is "verified for continuous distribution". 4. Why is it appropriate to use the expected loss function in line 389 over other loss functions? The response would affect the evaluation of the experiments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank *Reviewer tKga* for their in-depth review and detailed analysis of our theoretical work. We address their comments below and will make sure to include the corrected elements in the final version. $$ $$ - **Impossible vs. Inefficient Unlearning:** You are correct in saying that unlearning in this regime is inefficient rather than impossible. Therefore, we will rename IR to "Inefficient Unlearning" in the final manuscript. $$ $$ - **Form of the loss function:** The loss function we rely on is inspired by Eq. 26 in order to represent a loss of the "worst-case" type we study in our theoretical analysis. However, it is adapted in order to be both harder (non-smooth) and yield more stable results, as explained in detail in our response to *Reviewer KpLi*. $$ $$ - **Experiments on Real Datasets:** Following the reviewer’s suggestion, we conducted additional experiments using the standard [real-world dataset "Digits" of handwritten numbers](https://scikit-learn.org/1.5/auto_examples/datasets/plot_digits_last_image.html) to further validate our theoretical findings. We are pleased to report that the three regimes predicted by our theory can be observed in these experiments, as illustrated [here (anonymized link)](https://anonymous.4open.science/r/Submission6456/figure.pdf). Additional experimental details are provided in the second part of our response to Reviewer KpLi. $$ $$ - **Non Standard Definitions:** Our first unlearning definition (Definition 1) is indeed worded differently from other works, but remains close to what other papers in the literature have used, see e.g. [Georgiev et al](https://arxiv.org/pdf/2410.23232)'s Definition 2, [Chourasia et al](https://arxiv.org/pdf/2210.08911)'s Definition 2.4 where the forget set is taken as any subset of the overall dataset.
While our Definition 2 is indeed unusual, it relies on the same principles and implicitly uses the concept of neighboring datasets: if $\mathcal{D}_f$ and $\mathcal{D}_f'$ are of size $k$, then the total datasets $\mathcal{D}_r \cup \mathcal{D}_f$ and $\mathcal{D}_r \cup \mathcal{D}_f'$ are $k$-neighbors. It is common for the overall datasets to be neighbors in unlearning definitions, but this is usually not the case for forget sets. $$ $$ - **Out-of-distribution data handling:** In our analysis, there indeed is no direct analysis of the in- vs out-of-distribution character of the data. We do not make specific assumptions on the data as to the distribution they follow. $$ $$ - **Verification and explanation of Ass. 2:** Assumption 2 is a technical assumption needed for the proofs. We believe it could be lifted with little change in terms of results, but the analysis would be made significantly heavier. There are two principal elements to Assumption 2: the existence of $A$ such that $\mathbb{P}(\xi_r \in A) = p$ where $\xi_r \sim \mathcal{D}_r$, and the existence of another data distribution $\mathcal{D}_f'$. The second point is relatively trivial if, as stated in the article, "$supp(\mathcal{D}_r)$ and $supp(\mathcal{D}_f)$ do not cover the whole space". The first point is a consequence of the Intermediate Value Theorem: define $f(r) = \mathbb{P}(X \in B(0,r))$. By the continuity of the density, $f$ is continuous, with $f(0)=0$ and $\lim_{r \to \infty} f(r)=1$. By the Intermediate Value Theorem, for every $p\in [0,1)$ there exists an $r_p \ge 0$ such that $f(r_p)=p$. - **Lemma C.1:** We are grateful for the reviewer’s attention to detail and will add the missing expectations, which are indeed taken over the retain set and then the forget set, respectively.
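The Intermediate Value Theorem argument above can be checked numerically. The following is a minimal sketch, assuming a standard 1-D Gaussian as the data distribution (an illustrative choice, not taken from the paper), which finds the radius $r_p$ with $f(r_p) = p$ by bisection:

```python
from math import erf, sqrt

def f(r: float) -> float:
    """Mass of the ball B(0, r) under an illustrative N(0, 1) distribution:
    P(|X| <= r) = erf(r / sqrt(2))."""
    return erf(r / sqrt(2.0))

def radius_for_mass(p: float, hi: float = 50.0, tol: float = 1e-10) -> float:
    """Bisection search for r_p >= 0 with f(r_p) = p, which exists by the
    Intermediate Value Theorem since f is continuous, f(0) = 0, and
    f(r) -> 1 as r -> infinity. Assumes 0 <= p < 1 and f(hi) > p."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Radius containing half the mass (the median of |X| for N(0, 1)).
r_half = radius_for_mass(0.5)
```

Any continuous density with full support would do; the Gaussian merely makes `f` available in closed form via `math.erf`.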
Summary: In this paper, the author revisits the machine unlearning topic and analyzes the efficiency of unlearning methods. The author defines a complexity-ratio metric that compares the computational costs of the unlearning method and of full model retraining. A phase diagram is drawn based on the complexity ratio and reveals three distinct regimes: impossible unlearning, efficient unlearning, and trivial unlearning. Based on the phase diagram, it answers the question of which would be the most efficient way to achieve unlearning under given conditions. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The author summarizes many interesting and meaningful insights, which are helpful and can be considered as guidelines for future research in the field of machine unlearning. Essential References Not Discussed: Yes Other Strengths And Weaknesses: Strength: * This paper is well-organized. * The author summarizes many interesting and meaningful insights, which are helpful and can be considered as guidelines for future research in the field of machine unlearning. * The authors provide extensive theoretical evidence to support their claims. Weakness: * In the introduction section, the author defined TS as the stochastic gradient steps required to unlearn by retraining from scratch, but the definition of TU is missing. Moreover, the author defines T as a number of steps, while the next sentence refers to the ratio of time required; note that the ratio of the number of steps and the ratio of time required are different. * The author does not explain what the excess risk e is. * It would be better if the author could have a more comprehensive discussion on the limitations of the proposed theory. * The evaluation part lacks practical cases to prove the feasibility of the proposed theory.
Other Comments Or Suggestions: In the introduction section, the author mentioned “we provide the first lower bound for the unlearning complexity ratio, answering an open problem in the literature (Allouah et al., 2024)…”. It would be better to explicitly describe what the open problem is. Questions For Authors: Please refer to the weakness part. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank *Reviewer qtfk* for their positive review of our work and their attention to rigour. Here are our answers to their main comments. $$ $$ - **Missing introduction of quantities:** We thank the reviewer for pointing out two omissions that affect the clarity of the paper. We appreciate their careful reading and the opportunity to address these issues. $T^U_e$ and $T^S_e$ will be properly introduced, as well as $e$, by this modification of lines 50 to 55: > "Specifically, in a general machine learning setting, let us define $r_f$ as the fraction of data in the forget set, and $e$ as the excess risk—that is, the gap in loss between a learned model and the optimal one. We denote by $T^U_e$ (resp. $T^S_e$) the time required for an algorithm to unlearn the forget set (resp. to retrain from scratch without the forget set) until the excess risk falls below $e$. Note that *time* is here measured by the number of accesses to gradients of individual data points, and thus assumes that gradient computation is the main computation bottleneck." $$ $$ - **Limitations of the proposed theory:** Our theoretical analysis is limited in its scope, as discussed in the conclusion, by the assumptions we make: strong convexity of the objective, restriction to first-order algorithms, dependence of the unlearning algorithms only on the retain set, and verification of Assumption 2. Additional limitations include the tightness of our analysis, since our bound from Theorem 3 scales with $\sqrt{d}$, which is not matched by our lower bound in Theorem 2, as discussed in subsection 4.5, and the potential non-optimality of Alg. 2, as discussed in subsection 5.2. The reviewer is right in pointing out that these points are dispersed throughout the paper, and we will make sure to group them in our final manuscript.
$$ $$ - **Experiments on Real Datasets:** Following the reviewer’s suggestion, we conducted additional experiments using the standard [real-world dataset "Digits" of handwritten numbers](https://scikit-learn.org/1.5/auto_examples/datasets/plot_digits_last_image.html) to further validate our theoretical findings. We are pleased to report that the three regimes predicted by our theory can be observed in these experiments, as illustrated [here (anonymized link)](https://anonymous.4open.science/r/Submission6456/figure.pdf). Additional experimental details are provided in the second part of our response to *Reviewer KpLi*.
Summary: This paper proposes a novel efficiency-oriented metric for machine unlearning. Based on the training resources required for a retrained model, it measures the efficiency of machine unlearning with respect to the size of the forget set and introduces three different regimes. The first regime addresses cases where unlearning can be achieved with minimal cost, while the second and third regimes deal with the lower and upper bounds of the complexity ratio, respectively. The paper empirically demonstrates the validity of these regimes through experiments. Claims And Evidence: It is positive that the authors clearly present their claims through Assumptions, Theorems, and Corollaries. However, the empirical evidence supporting the existence of such regimes does not appear to be sufficient. The paper only provides experiments on a limited set of settings, and it does not demonstrate how generally these regimes hold. Methods And Evaluation Criteria: As I mentioned earlier, I do not think the paper has conducted an adequate evaluation on this point. However, I am not fully confident in this judgment. To me, this paper appears to be more theory-focused rather than practical, and since I am more of a practitioner than a theorist, it is possible that my evaluation is biased due to my limited perspective. Therefore, I plan to further assess this aspect through discussions with other reviewers during the discussion period. Theoretical Claims: The paper presents theoretical theorems for the three regimes. Although I have read through the logical arguments, I have not verified them rigorously. Experimental Designs Or Analyses: The argument on this point does not seem very convincing to me. I hope the authors will provide a thorough explanation in the rebuttal as to why the experiments presented in the paper are sufficient. Supplementary Material: The supplementary material only contained the proofs, and I did not review them rigorously. 
Relation To Broader Scientific Literature: The paper appears to focus primarily on reviewing DP-based unlearning methods, and I am not sure if this is sufficient. It would have been better if the authors had also mentioned machine unlearning literature from a more practical perspective. However, since the main contribution of this paper lies in providing a theoretical foundation, I consider this a minor concern. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The strength of the paper is that it is well-written and easy to understand, while the weakness is that the experiments are not sufficiently thorough. Please refer to the above comments for more details. Other Comments Or Suggestions: The strength of the paper is that it is well-written and easy to follow, while the weakness is that the experiments are not sufficiently comprehensive. Please refer to the above sections for more detailed comments. Questions For Authors: It seems necessary for the authors to provide an explanation of why the current evaluation in the paper is sufficient. As it stands, I am not fully convinced, and therefore, I find it difficult to recommend acceptance at this point. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank *Reviewer KpLi* for their insights and appreciate their practice-oriented review of our paper. We think adding the following discussion to the final version of the paper will benefit it greatly. $$ $$ - **Choice of experiments:** Our work studies the ratio of "worst case" unlearning and retraining time, meaning that we bound the convergence time for every algorithm when optimising over the hardest possible functions. Our theoretical analysis is based on "worst-case" functions that have forms such as the one described in Eq. 26, used to prove lower bounds on convergence speeds: $\ell^g(\boldsymbol{\theta}, \xi) = \frac{\mu}{2} \|\boldsymbol{\theta}\|^2 - \frac{L}{2} g(\xi) \theta_1$. Therefore, in order to assess the relevance and significance of our theoretical analysis, it makes sense to empirically evaluate how fast unlearning and retraining algorithms converge when optimising functions of this form. We started by optimizing over this loss function directly, but faced two issues. The first and most significant was that we did not observe the linear delimitation between TR and ER theorized in Theorem 1; the second was numerical instability. The first problem occurred because of the smoothness of the function: since Theorem 1 does not assume smoothness of the loss and Eq. 26 provides a smooth loss, TR expanded beyond what was expected, forming a square-root boundary rather than a linear one. For this reason, we introduced non-smoothness in the form of an $L1$ penalty, making the loss: $\ell^g(\boldsymbol{\theta}, \xi) = \frac{\mu}{2} \|\boldsymbol{\theta}\|^2 - \frac{L}{2} g(\xi) \theta_1 + \lambda \|\boldsymbol{\theta}_{2:d}\|_1$, which fixed the issue and provided a loss function better aligned with the theory. The second issue was numerical instability: since every parameter but the first one was quickly decreasing to $0$, the remaining task was optimising over the first parameter.
The first parameter naturally oscillated between values above and below the optimal one, resulting in a noisy process that could converge prematurely by chance. Therefore, we chose to expand the loss on the first parameter to half of the model's parameters, solving the second issue, and making the loss $$\ell^g(\boldsymbol{\theta}, \xi) = \frac{\mu}{2} \left\|\boldsymbol{\theta}\right\|^2 - \frac{L}{4} g(\xi) \sum_{i=1}^{d/2} \theta_i + \frac{L}{4} \sum_{i=d/2 + 1}^{d} \left|\theta_i\right|.$$ While uncommon, it represents a hard loss function that verifies every assumption made in the paper, which is why we chose it to perform our experiments. $$ $$ - **Practical applicability of the experiments:** In order to assess the practical applicability of our theory, we corroborated our experimental results with additional experiments on [the real-world dataset "Digits" from sklearn](https://scikit-learn.org/1.5/auto_examples/datasets/plot_digits_last_image.html). We performed a logistic regression on this well-known dataset of handwritten digits. We used a standard cross-entropy loss with $L2$ weight regularisation (thus ensuring strong convexity and Lipschitzness of the loss) to train a linear classifier on the data, and compared unlearning and retraining times, as done in Figure 2. [The result for the experiment is available at this anonymised link](https://anonymous.4open.science/r/Submission6456/figure.pdf), in compliance with the [ICML 2025 Peer Review FAQ](https://icml.cc/Conferences/2025/PeerReviewFAQ). Additionally, since adding this new experiment broadens the scope of the experimental section and further verifies our theoretical findings, we will include it in the updated version of the paper. The specific hyperparameters used to generate this figure are: a batch size of $64$, use of SGD with a starting learning rate of $0.01$, decaying by a factor of $0.6$ every $1000$ batches, a penalty weight of $1$, and results averaged over $50$ seeds.
$$ $$ - **Mention of non DP-based unlearning methods:** While we do mention several unlearning methods, both without unlearning guarantees and with non-DP-based guarantees, in the first and second paragraphs of our literature review, we will include an additional discussion on the link between these papers and ours, as explained in more detail in our response to *Reviewer ak4V*. --- Rebuttal Comment 1.1: Comment: I am satisfied with the author response and will finalize my score as a 3.
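For concreteness, the final "worst-case" loss constructed in the rebuttal above can be written out directly. Below is a minimal pure-Python sketch, where the values of $\mu$, $L$, and $g(\xi)$ are illustrative placeholders rather than the paper's actual constants:

```python
def worst_case_loss(theta, g_xi=1.0, mu=1.0, lip=1.0):
    """Hard loss from the rebuttal: a strongly convex quadratic term
    (mu/2 * ||theta||^2), a linear pull on the first half of the
    coordinates, and a non-smooth L1 term on the second half.
    mu, lip (the Lipschitz constant L) and g_xi are placeholder values."""
    d = len(theta)
    quad = 0.5 * mu * sum(t * t for t in theta)
    lin = -(lip / 4.0) * g_xi * sum(theta[: d // 2])
    l1 = (lip / 4.0) * sum(abs(t) for t in theta[d // 2 :])
    return quad + lin + l1
```

With $\mu = L = g(\xi) = 1$ and $\boldsymbol{\theta} = (1, 1, 0, 0)$, the quadratic term contributes $1$, the linear pull contributes $-0.5$, and the $L1$ term vanishes, giving a loss of $0.5$; the non-smooth second half is what restores the linear TR/ER delimitation discussed in the rebuttal.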
Summary: In this paper, the authors studied the complexity trade-offs of machine unlearning, in particular, under which conditions unlearning can outperform retraining from scratch in terms of efficiency, and when unlearning cannot bring any benefit. To accomplish this, the authors proposed to leverage a phase diagram to reveal these regions, which is supported by the authors' rigorous theoretical analysis throughout the paper. ## update after rebuttal I went over the authors' responses and other reviewers' comments. I think the authors have adequately addressed all of our comments, in particular, our concerns about the real datasets. Since all of us lean toward accepting this paper, I would love to increase the score to 4. Claims And Evidence: Most claims of the paper look fine to me. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation make sense to me. Theoretical Claims: The theoretical claims and analysis look fine to me. Experimental Designs Or Analyses: Yes, I checked the experimental designs and analyses. I feel that the authors may need additional experiments on real datasets to justify the theoretical results in this paper. Supplementary Material: I read the theoretical analysis in the supplementary material and it looks good to me. Relation To Broader Scientific Literature: The contributions of this paper are closely related to the area of machine unlearning. Essential References Not Discussed: I think the authors primarily focus on one type of machine unlearning approach. It would be better if the authors could incorporate some discussion in the paper on the connections of the proposed approach to other machine unlearning approaches, in particular those not dependent on adding Gaussian noise, e.g., Bourtoule, Lucas, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. "Machine unlearning." In 2021 IEEE symposium on security and privacy (SP), pp. 141-159.
IEEE, 2021. and Wu, Yinjun, Edgar Dobriban, and Susan Davidson. "Deltagrad: Rapid retraining of machine learning models." In International Conference on Machine Learning, pp. 10355-10366. PMLR, 2020. Other Strengths And Weaknesses: Strengths: + The authors have done a series of theoretical analyses to reveal the complexity trade-offs in machine unlearning, which is perfect. + The authors' overall presentation is clear and easy to follow + The authors also performed some empirical studies, which can justify the phase diagram proposed by them. Weakness: - One of my biggest concerns is that the analysis conducted in this paper is closely tied to Algorithm 2. I am not sure whether the analysis could apply to general scenarios. Maybe I missed something but it would be great if the authors could have some discussion on this. - I think it would be better if the authors could perform experiments on real datasets rather than synthetic ones to verify their findings. If the results on real datasets match their analysis, that would be more convincing - I also feel a little confused about Theorem 3. Based on my understanding, this theorem should correspond to the ER area in Figure 1, right? According to Figure 1, the ER area only appears between $a * \frac{e}{e_0}$ and $b * \frac{e}{e_0}$ where $0 < a < b < 1$ for any fixed $k_{\epsilon, \delta}$. But it seems to me that Theorem 3 applied to any $e$ smaller than $e_0$. Although the authors attempt to have this discussion in paragraphs after Theorem 3, I still feel that more clarification would be essential (perhaps by combining Figure 1). Other Comments Or Suggestions: See above comments Questions For Authors: See above comments Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank *Reviewer ak4V* for their detailed feedback on our work, and we especially appreciate their focus on the theoretical elements of the paper. Here is a point-by-point response to their questions: $$ $$ - **Link to non DP-based unlearning papers:** There are indeed several ways of achieving certified unlearning besides DP-based MU methods. While we have mentioned some of them in the literature review, including *Bourtoule et al*’s "Machine Unlearning" paper, it is true that our work would benefit from discussing these elements in more detail later in the analysis. Indeed, there is no direct equivalence between DP-based guarantees and the type of guarantees offered by second-order methods like "Deltagrad" or exact unlearning methods like SISA (from *Bourtoule*’s work). However, our impossibility result in Theorem 2 has interesting consequences as to the feasibility of exact unlearning in our setup. Since the "unlearn" and "retrain" distributions ought to be the exact same in exact unlearning, it implies (0,0)-Unlearning as defined in Definition 1. Thus, $\kappa_{\epsilon,\delta}=\infty$ and Theorem 2 implies that $T^U_e$ cannot asymptotically improve over $T^S_e$, regardless of $e$. Note that this result does not contradict the potential efficiency of exact unlearning methods, since they usually rely on a modification of the training process, which is outside of the theoretical framework laid out in this paper. As for the link between second-order and first-order unlearning, since the unlearning guarantees used are very different by nature, they are hardly comparable theoretically. However, auditing first- and second-order MU methods and their relative resilience to various attacks would be a valuable research direction and is left to future work. $$ $$ - **Role played by Algorithm 2**: The exact role that Algorithm 2 holds in our paper is indeed a subtle question that we are happy to expand upon.
Please be assured that the following elements will be included in our final version. In our analysis, we rely on Alg. 2 solely in order to show the existence of efficient first-order unlearning algorithms, but our analysis characterises the behaviour of every first-order algorithm as long as it relies only on the retain set. Indeed, Alg. 2 is only used in the proof of Theorem 3 to characterize the regime in which we know that unlearning algorithms can outperform retraining asymptotically. Therefore, the boundaries of the IR and TR are independent from Alg. 2, which is only used in Theorem 3 to describe the ER. $$ $$ - **Theorem 3 and the Efficient Regime:** We thank the reviewer for pointing out a potential source of confusion for readers, as Theorem 3 is in essence not a result that delimits the efficient regime, but rather shows that efficient methods do exist in this regime. In particular, Theorem 3 only provides the unlearning time for Alg. 2, which happens to improve upon retraining from scratch precisely in the efficient regime. To improve readability, we have decided to rename Theorem 3 as *"noise and fine tune efficiency"*, and add the following corollary to this theorem: **Corollary (Efficient regime).** There exists a universal constant $c > 0$ such that, for any $\gamma\in(0,1)$, if $e \geq \frac{c}{\gamma} \left(\frac{r_f}{1-r_f}\right)^2 \left(1+d\kappa_{\varepsilon,\delta}^2\right) e_0$, then $$ \frac{T^U_e}{T^S_e} < \gamma. $$ - **Experiments on Real Datasets:** Following the reviewer’s suggestion, we conducted additional experiments using the standard [real-world dataset "Digits" of handwritten numbers](https://scikit-learn.org/1.5/auto_examples/datasets/plot_digits_last_image.html) to further validate our theoretical findings. We are pleased to report that the three regimes predicted by our theory can be observed in these experiments, as illustrated [here (anonymized link)](https://anonymous.4open.science/r/Submission6456/figure.pdf). 
Additional experimental details are provided in the second part of our response to *Reviewer KpLi*. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses. I think they have addressed my concerns. I will wait for the other reviewers' further comments. If no serious issues arise, I will increase my score. Thanks!
Taming Rectified Flow for Inversion and Editing
Accept (poster)
Summary: The paper points out the problem of inaccurate reconstruction for the regular inversion method on flow models, and proposes two methods, for inversion and editing. For inversion, the authors introduce a new ODE solver employing a higher-order Taylor expansion, and find that the second-order expansion is already helpful for better inversion. Though approximating the second-order term introduces more computation, the authors cut the NFE in half for a fair comparison. The second contribution is to manipulate the self-attention layers during the reverse process. The authors apply editing to both images and videos to demonstrate its effectiveness. ## update after rebuttal I increased my score to accept since the authors addressed my concern on the ablation studies. Claims And Evidence: Mostly yes; please see the experiment section below for more questions. Methods And Evaluation Criteria: yes Theoretical Claims: The equations in the paper look correct to me. Experimental Designs Or Analyses: 1. One important contribution of the paper is to use a higher-order RF ODE for better inversion/reconstruction. I would like to see how much improvement it brings. Though Table 5 compares under the same NFE, it does not show the advantage of the proposed higher-order ODE as the order increases. I understand that RF-Solver-2 might be a good sweet spot for trading off performance and latency, but there's no experiment demonstrating the effectiveness of increasing the order of the Taylor expansion, which is the key idea of the paper. Specifically, I'm mainly looking for proof of the intuition behind the method, e.g., you find that a higher-order ODE brings better performance but is more computationally expensive, and then you figure out using fewer NFE to resolve it for a fair comparison, or, on the other hand, increase the steps used in regular inversion/editing with the same NFE for a fair comparison. 2. An important problem in many inversion-based editing methods is the tuning of hyperparameters. 
It's mentioned in the last section that sharing step 5 is a good choice; do you have more studies on it? E.g., whether to apply it at an early stage or later, and what is the hyperparameter robustness for different edits (e.g., stylization, adding an object, changing an object)? 3. What happens if you only have RF-Solver but not RF-Edit for editing? This means you only have RF-Solver to better preserve the original structure of the image and use the prompt for editing without changing the attention modules. This would better demonstrate the advantage of RF-Edit. 4. What's the memory for storing Vs? Why not store/share Ks or attention maps? Supplementary Material: The authors provided more details and results in the appendix, as well as the code. Relation To Broader Scientific Literature: The method proposed in this paper differs from other works by employing Taylor expansion to construct a higher-order ODE for flow model inversion. The editing part is good but less significant because attention-based editing has been explored extensively in previous works. Essential References Not Discussed: no Other Strengths And Weaknesses: refer to the other sections Other Comments Or Suggestions: refer to the other sections Questions For Authors: refer to the other sections Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 5wXG, Thank you for your comprehensive and detailed review of our paper and the recognition of our work's effectiveness. We provide our feedback as follows. > Experiment to demonstrate the effectiveness when increasing the order using Taylor expansion, which is the key idea of the paper. Thanks for your advice! We have provided experiments on both image generation and reconstruction in [Table5](https://postimg.cc/4mxRLZNc). The results show that across various timesteps, our methods outperform the baseline significantly. With the higher-order expansion (RF-Solver-3), the performance becomes even better. Although the comparison is conducted with the same number of timesteps (rather than the same NFE), we notice that under similar NFE, a higher-order expansion sometimes also yields better performance. For example, considering 20 timesteps for Vanilla RF, 10 timesteps for RF-Solver-2, and 7 timesteps for RF-Solver-3, the NFE for them is similar (20, 20, and 21, respectively), while RF-Solver-3 shows the best performance among them. > Hyperparameter robustness for feature sharing We have provided a more detailed analysis of the choice of feature sharing in [Figure11](https://postimg.cc/p5QJNgW5), [Figure12](https://postimg.cc/9rVBPFbd), [Figure13](https://postimg.cc/CdbNffGM), [Figure14](https://postimg.cc/F7ZVT5C2). In our work, solely tuning the hyperparameter of the feature-sharing step is enough to obtain a satisfying result. What's more, editing a high-resolution image (1360 \* 768) using our methods only takes a short time (less than 1 minute). As a result, we believe that the parameter tuning is acceptable for most users. > What happens if you only have RF-Solver but not RF-Edit for editing As mentioned in Lines 242~258 in the main paper, only using RF-Solver for editing sometimes cannot maintain the consistency between the source image and the target image. 
Figure 7 illustrates some results produced solely by RF-Solver. We also provide a quantitative ablation study about this in [Table7](https://postimg.cc/dky96hgB). > What's the memory for storing Vs? Why not store/share Ks or attention maps? For editing a 1360\*768 image, the total memory needed to store the features is about 18 GB. We store the features on the CPU, rather than in GPU memory. In practice, this does not significantly reduce the efficiency of inversion and editing. We also added qualitative results about sharing K or attention maps in [Figure13](https://postimg.cc/CdbNffGM). Experimental results demonstrate that, compared to K or the attention map, V contains the richest information regarding the original image. Sharing V can effectively preserve the source image's details in a relatively small number of feature-sharing steps, whereas sharing K or the attention map with the same number of steps yields less satisfactory outcomes. Therefore, we choose to share V for its effectiveness and efficiency. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses and additional results. My concerns are addressed, and I will increase my rating to accept. The additional ablation studies explain the design choices and the effectiveness of the proposed methods.
Summary: This paper proposes a flow inversion method based on an improved higher-order ODE solver, which simply extracts the first-order derivative from the velocity prediction for more accurate denoising generation. This method can be applied to accurate image or video inversion based on pre-trained image or video flow diffusion models. To keep more consistency, the authors also propose reusing the "Value" from the attention for image or video editing, which results in better consistency in text-guided editing. Claims And Evidence: yes Methods And Evaluation Criteria: yes, it makes sense to me Theoretical Claims: yes Experimental Designs Or Analyses: The experimental comparison is fair and sufficient. Supplementary Material: Yes, the inference code for inversion editing based on FLUX and HunyuanVideo. Relation To Broader Scientific Literature: DDIM inversion would be the most related literature. Essential References Not Discussed: Most of the references are discussed. Other Strengths And Weaknesses: Strengths: This paper proposes an effective flow diffusion inversion method, and it is training-free, achieving quite accurate image and video inversion. Based on the proposed method, we can have more consistent content editing. Weakness: No significant weakness found. Other Comments Or Suggestions: Nice work. I saw there is HunyuanVideo editing code; are there any results on this? Questions For Authors: Please refer to above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 1xw9, We sincerely thank you for your recognition of the methods and experiments in our work! > I saw there is a hunyuan video editing code; is there any results on this? We provided some video editing results produced by HunyuanVideo in Figure 9 in the Appendix. We also provide the code for video editing in the supplementary. The quantitative results are shown in the table below, where the experimental settings follow Table 4 in the main paper. We will also add this to the main paper. **Table 9. Quantitative results for HunyuanVideo**

| |SC|MS|AQ|IQ|
|-|-|-|-|-|
|RF-Edit (HunyuanVideo) | **0.9573** | **0.9749** | **0.6880**| **0.7298** |
|RF-Edit (OpenSora) | 0.9501 | 0.9712| 0.6796 | 0.7207 |

--- Rebuttal Comment 1.1: Comment: Thanks for the great work! --- Reply to Comment 1.1.1: Comment: Dear Reviewer 1xw9, We sincerely appreciate your recognition of our work! May we kindly ask whether our response has addressed your concern, and would you consider raising your rating accordingly? Thanks once again for your support! Authors
Summary: This paper aims to leverage rectified-flow-based generative models for unified image and video editing. Specifically, this paper proposes RF-Solver, which uses a high-order Taylor expansion to eliminate the errors in the inversion and reconstruction process. The paper further proposes RF-Edit, which uses value feature-sharing for editing. Extensive experiments illustrate the effectiveness of the proposed methods. Claims And Evidence: 1. The paper primarily contributes in two aspects: the RF-Solver sampler and the RF-Edit framework. The RF-Solver sampler is novel and effective. The RF-Edit framework is somewhat similar to existing U-Net-based methods, while this paper explores its application on the mainstream Diffusion Transformer architecture. 2. The research area of this paper is very active and interesting. Given the performance of rectified-flow-based models in image and video generation, exploring their performance on various downstream tasks is meaningful and promising. 3. The paper is well-written and easy to follow. The authors provide a theoretical derivation of the high-order expansion for RF-Solver. Methods And Evaluation Criteria: 1. The paper proposes using a high-order Taylor expansion to reduce errors during the inversion and reconstruction processes. Then, it proposes the RF-Edit framework, which uses a feature-sharing mechanism to further prevent unintended modifications, i.e., to the background. 2. Extensive experiments from both qualitative and quantitative perspectives illustrate the effectiveness of the methods. The authors explore performance across various backbones, including FLUX and OpenSora, demonstrating the universality of the proposed methods. Theoretical Claims: The theoretical derivation provided in this paper is comprehensive, offering theoretical justification for both sampling and inversion processes. Experimental Designs Or Analyses: Overall, the experiment is extensive and thorough. 
Supplementary Material: Authors provide the full code in the supplementary. Relation To Broader Scientific Literature: The paper aims to achieve satisfying editing outcomes using the recent rectified-flow based DiT. There are extensive works about image and video editing based on diffusion and UNet architecture such as PnP [1] and Masactrl [2]. More recently, there are also some works using FLUX to achieve image editing [3]. [1]. Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation [2]. MasaCtrl: Tuning-free Mutual Self-Attention Control for Consistent Image Synthesis and Editing [3]. Semantic Image Inversion and Editing using Rectified Stochastic Differential Equations Essential References Not Discussed: No Other Strengths And Weaknesses: The paper exhibits several strengths, introducing novel aspects and presented in a clear and well-structured manner. My main concerns are listed below: 1. Some implementation details are unclear, particularly regarding experiments conducted on HunyuanVideo. The authors are expected to provide a more thorough specification of the implementation. 2. The authors should provide qualitative and quantitative results to demonstrate the benefits of RF-Solver without feature-sharing. Additionally, to further validate the superiority of RF-Edit, the authors could include qualitative comparisons between RF-Edit and previous methods such as MasaCtrl and PnP, implemented on the U-Net architecture. 3. The ablation study in Table 5 maintains the same total number of function evaluations (NFE) across different orders, leading to suboptimal performance for higher-order expansions (e.g., the row labeled 'RF-Solver-3'). The authors should conduct additional ablation studies using the same number of timesteps (instead of total NFE) to better illustrate whether higher-order expansions can further enhance performance. 
Other Comments Or Suggestions: No Questions For Authors: See "Other Strengths and Weaknesses" Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer eQrG, Thanks for your comprehensive review and insightful comments on our paper. We appreciate that you recognize the motivation and performance of our methods. The response to your concerns is shown below. > Implementation Details about HunyuanVideo Thanks for your advice! For video editing on HunyuanVideo, the total number of timesteps is set to 25, and the feature-sharing step is between 1 and 5 for different cases. We will add these implementation details in the main paper. What's more, we have provided the code in the supplementary materials and plan to release it in the future. > The authors should provide qualitative and quantitative results to demonstrate the benefits of RF-Solver without feature-sharing. Thanks for your advice; we have provided the quantitative results of feature-sharing in RF-Edit in [Table7](https://postimg.cc/dky96hgB). The qualitative results are provided in Figure 7 in the paper. What's more, we also provide a more detailed analysis of the feature-sharing strategy in [Figure11](https://postimg.cc/p5QJNgW5), [Figure12](https://postimg.cc/9rVBPFbd), [Figure13](https://postimg.cc/CdbNffGM), [Figure14](https://postimg.cc/F7ZVT5C2). We finally choose to share the V feature in the last 20 single-stream blocks of FLUX in the timesteps near the noise, which proved to be the most effective choice. > Qualitative comparisons between RF-Edit and previous methods such as MasaCtrl and PnP. Thanks for your advice! The results are provided in [Table6](https://postimg.cc/XB6kzVb0) and [Figure15](https://postimg.cc/5X2qG7Zj). The qualitative comparisons between our method and PnP are shown in Figure 5. Our method outperforms the baselines both qualitatively and quantitatively. > Additional ablation studies using the same number of timesteps Thanks for your advice. We have provided more detailed experiments on both image generation tasks and reconstruction tasks in [Table5](https://postimg.cc/4mxRLZNc). 
The results show that as the order of the Taylor expansion increases, our method achieves better performance at various timesteps. Although the comparison is conducted with the same number of timesteps (rather than the same NFE), we notice that under similar NFE, a higher-order expansion sometimes also yields better performance. For example, considering 20 timesteps for Vanilla RF, 10 timesteps for RF-Solver-2, and 7 timesteps for RF-Solver-3, the NFE for them is similar (20, 20, and 21, respectively), while RF-Solver-3 shows the best performance among them. --- Rebuttal Comment 1.1: Comment: Thank you for your comprehensive responses; I would like to check the video editing results to verify the editing consistency. Could you please provide the video results in Figure 9? --- Reply to Comment 1.1.1: Comment: Dear Reviewer eQrG, Thanks for your reply! Some results are provided as follows:

| Source Video | Prompt | Edited Video |
|-|-|-|
|[link](https://postimg.cc/SJdDbmrv) | rabbit -> cat | [link](https://postimg.cc/k6wv3cNt) |
|[link](https://postimg.cc/62QGRNRy) | parrot -> dragon | [link](https://postimg.cc/3kBRZc3p) |
|[link](https://postimg.cc/hzVN9HBR)| kangaroo -> Tom Cat | [link](https://postimg.cc/XB8hPdq2) |
|[link](https://postimg.cc/pyRPsB1J)| human -> Cat | [link](https://postimg.cc/9rfChQ24) |
|[link](https://postimg.cc/565NxRYG) | + Crown | [link](https://postimg.cc/ctRZLvHY) |
|[link](https://postimg.cc/FfY4JWzh)| heart -> car | [link](https://postimg.cc/WhdhTn6c) |
Summary: This paper introduces RF-Solver, a training-free, high-order solver to improve inversion and reconstruction in rectified flow models, and RF-Edit, a feature-sharing mechanism for image and video editing. The method reports improvements in inversion accuracy and editing quality compared to vanilla rectified flow and several baselines. Claims And Evidence: The paper claims that RF-Solver improves inversion accuracy by reducing the error in solving the rectified flow ODE through a high-order Taylor expansion, and that RF-Edit enhances image and video editing by transferring self-attention value features from the inversion process to the denoising process. The authors support these claims with quantitative results (e.g., lower MSE, LPIPS, and higher CLIP scores in the inversion and editing tasks) and qualitative comparisons against several baselines. However, some details affecting the strength of these claims remain unclear. For instance, the evaluation lacks complete descriptions of dataset selection, the exact number of solver steps used by baseline methods, and a detailed step-vs-quality analysis. In addition, the editing evaluation relies primarily on LPIPS, which may not fully capture semantic consistency; alternative metrics such as CLIP-I or DINO could offer more insight. Finally, the derivation presented in Eq (9) appears equivalent to Heun's method, and it is not fully clear how the proposed method differs from standard second-order approaches. Methods And Evaluation Criteria: The evaluation method and clarity could be improved. The paper does not explain how the evaluation data were chosen or how large each set was. The authors also do not describe in detail how many solver steps are used by the baseline models. They claim a faster solver but do not show a thorough step-vs-quality analysis. Feature-sharing for editing is described, but it is similar to ideas in works like DragDiffusion or MasaCtrl. 
The paper does not include direct comparisons or discussions that clarify how this method stands out. Table 3 depends strongly on LPIPS, known to be uninformative in some editing tasks. The authors do not attempt alternative metrics (CLIP-I, DINO, or MDINO) that better capture semantic consistency. Doubling baseline solver steps to match function evaluations is not necessarily fair since most baseline methods achieve Pareto-optimality at a lower number of steps. Theoretical Claims: The derivations (e.g., on Taylor expansion) are technically solid. Same for the attention sharing. But if I understand correctly, your step update in Eq (9) is equivalent to Heun's method: $ Z_{t+\Delta t} = Z_t + \Delta t u_t + \frac{1}{2} (\Delta t)^2 \frac{u_{t+\Delta t}- u_t}{\Delta t} = Z_t + \Delta t u_t + \frac{1}{2} \Delta t (u_{t+\Delta t}- u_t) = Z_t + \frac{\Delta t}{2} (u_{t+\Delta t}+ u_t) $ Could you please explain how your method differs from other 2nd-order methods? Experimental Designs Or Analyses: The authors evaluate image editing with LPIPS, but that metric often fails to measure fine-grained editing fidelity. They do not include widely used metrics like CLIP-I or DINO, which may capture semantic changes more precisely. The effect of solver order 2 vs 3 is briefly mentioned, yet the explanation for 3rd order being worse remains superficial (“less timesteps overall”). More thorough ablations or error analyses would help. Supplementary Material: N/A Relation To Broader Scientific Literature: They cite SANA 2025 for diffusion-based high-resolution image synthesis, and mention DragDiffusion, MasaCtrl, and Style Injection for attention-based editing. But the paper itself does not seriously compare with those or situate how its approach truly diverges from known feature-sharing or other inversion techniques. 
Essential References Not Discussed: Some very related papers are cited but not discussed: [1] Xie et al., SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers (2025) [2] DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing [3] Cao et al., MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing [4] Chung et al., Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: - Please confirm if eq (9) is indeed identical to Heun. If there is novelty, show a direct comparative formula. - Provide more data on number of steps vs. final image quality. - Include key experimental details (datasets, NFE, hyperparameters) in the main text to improve reproducibility. - Consider more robust metrics (e.g., CLIP-I, DINO) for evaluating editing quality to complement LPIPS. - Clarify and justify choices in feature-sharing (number of layers and timesteps) with further ablations or discussion. - Discuss potential failure cases or limitations, especially regarding inversion prompt dependency and computational cost. - Show comprehensive experiments against stronger baselines (e.g., SANA) to highlight improvement. Questions For Authors: **I am adding here extra questions for the authors because there is an issue with the visibility of my comments.** ## New questions April 8th I thank the authors for their response. The extra experiments are welcome. In light of your response, the method you propose is part of the well-known family of Taylor series integrators. To my understanding this type of integrator **has** been extensively studied for diffusion models in the probability flow ODE setting [1-3]. Could you provide further clarification about the novelty of your method and how it differs from previous work? In Table 5 does the number of steps indicate the same number of NFE? 
[1] [DEIS](https://arxiv.org/abs/2204.13902) [2] [DPM-Solver](https://arxiv.org/abs/2211.01095) [3] [DPM-Solver++](https://arxiv.org/pdf/2206.00927) Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer n1SC, Thanks for your time and thoughtful review! We appreciate your recognition of the effectiveness of our methods. Here is our feedback: ## More Explanations about Our Methods ### RF-Solver If we substitute the formulation of the derivative into Equation 9, the formulation becomes $Z_{t_{i+1}} = Z_{t_{i}} + ({t_{i+1}} - {t_{i}}) v_\theta (Z_{t_{i}}, t_{i}) + \frac{1}{2} (t_{i+1} - t_{i})^2 \cdot \frac{v_\theta (Z_{t_{i} + \Delta t}, t_{i} + \Delta t) - v_\theta (Z_{t_{i}}, t_{i})}{\Delta t}.$ In the above formula, $\Delta t$ is required to be set to a **sufficiently small value** to estimate the derivative with less error (as specified in Lines 187~188 in the paper), which **is not equivalent to** $t_{i+1} - t_{i}$. As a result, RF-Solver exhibits a clear difference from the Heun method. RF-Solver also outperforms the Heun method (Table 1). What's more, RF-Solver is not limited to the 2nd-order expansion. We derive the **general form** (Equation 7) by first deriving the exact formulation of the solution of the RF ODE and then applying the high-order Taylor expansion. **This has not been explored by previous works.** If we apply the 3rd-order expansion, the performance can be further improved ([Table5](https://postimg.cc/4mxRLZNc)). ### RF-Edit The methods you mentioned are based on U-Net. Among them, DragDiffusion and Style Injection target point-based editing and style transfer, which are not the core focus of our work. Due to time limits, we would like to further explore the potential of RF-Edit on these tasks in future work. Our work focuses on **prompt-based editing using DiT** (such as FLUX and OpenSora). Given DiT's distinct architecture from U-Net and larger parameter count, designing an effective and efficient feature-sharing method is non-trivial and underexplored. Addressing this, we thoroughly explore the different choices of feature sharing in DiT (more results are shown in the "More Experiments" section), proposing RF-Edit. 
RF-Edit is a unified feature-sharing-based framework that can be applied to various DiT architectures. As it achieves satisfying results, we believe RF-Edit will be insightful for future work. ## More Experiments Thanks for your valuable and insightful advice! More experiments are provided as follows: - Step-vs-quality analysis: Shown in [Table5](https://postimg.cc/4mxRLZNc). - Alternative metrics for image editing: Shown in [Table6](https://postimg.cc/XB6kzVb0). At the same time, we would like to kindly point out that LPIPS is also a widely used metric for measuring image consistency in previous image-editing works such as PnP (CVPR 2023) and Null-Text-Inversion (CVPR 2023). - Choices for feature sharing: Further analysis is shown in [Figure11](https://postimg.cc/p5QJNgW5), [Figure12](https://postimg.cc/9rVBPFbd), [Figure13](https://postimg.cc/CdbNffGM), [Figure14](https://postimg.cc/F7ZVT5C2). We finally choose to share the V feature in the last 20 single-stream blocks of FLUX in the timesteps near the noise, which proved to be the most effective choice. - Comparisons with more baselines: Results for MasaCtrl are shown in [Table6](https://postimg.cc/XB6kzVb0) and [Figure15](https://postimg.cc/5X2qG7Zj). Results for SANA are shown in [Table8](https://postimg.cc/8F3HLRXc). ## Detailed Experiment Setup For the image editing task, our evaluation dataset consists of about 300 images, following previous works such as PnP (CVPR 2023), InstructPix2Pix (CVPR 2023), Null-Text-Inversion (CVPR 2023), and SmartEdit (CVPR 2024). For the image generation task, we conducted experiments on the MS-COCO validation dataset, following previous works such as Stable Diffusion (CVPR 2022). For video inversion and editing tasks, we mainly follow the dataset construction process in COVE (NeurIPS 2024). For the image generation task, the NFE is set to 10 for both RF-Solver and Vanilla RF. The timestep for DPM-Solver++ is set to 20 according to their official GitHub repo. 
For editing tasks, the hyperparameters for all the baseline methods follow their official GitHub repos for best results. We will add more detailed information in the main paper. ## Other Questions > Doubling baseline solver ... pareto-optimality in lower number of steps. As illustrated in [Table5](https://postimg.cc/4mxRLZNc), as the number of timesteps increases, the performance of both the baseline and our methods improves. In low-timestep scenarios, such as 3 steps for RF-Solver-2 (6 NFE) and 7 steps for Vanilla RF (7 NFE), our methods also show better performance. > Discuss potential failure ... computational cost. The inversion prompt is **optional** and does not significantly impact editing results (as can also be seen from the examples provided in the supplementary materials), and our method can edit a high-resolution image (1360\*768) in less than 1 minute on a single A100 GPU. A failure case is shown in [Figure16](https://postimg.cc/5XKqM5Vg). We will add these discussions to the main paper!
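As a side note for readers following the Heun-vs-RF-Solver exchange in this thread, the difference between the two update rules can be sketched on a toy ODE. This is an illustrative sketch only, not the paper's implementation; the velocity field `v`, the step size `h`, and the finite-difference step `dt_fd` are all assumptions chosen for the example:

```python
import numpy as np

# Toy velocity field for dz/dt = -z + sin(t); any smooth field would do.
def v(z, t):
    return -z + np.sin(t)

def heun_step(z, t, h):
    # Heun's method: the derivative correction uses the full step size h.
    z_euler = z + h * v(z, t)
    return z + 0.5 * h * (v(z, t) + v(z_euler, t + h))

def taylor2_step(z, t, h, dt_fd=1e-3):
    # Second-order Taylor update: dv/dt is estimated with a small
    # finite-difference step dt_fd that is decoupled from h.
    vt = v(z, t)
    z_probe = z + dt_fd * vt                      # tiny Euler probe
    dv_dt = (v(z_probe, t + dt_fd) - vt) / dt_fd  # derivative estimate
    return z + h * vt + 0.5 * h**2 * dv_dt

# Integrate from t=0 to t=1 with 10 steps and compare to the exact
# solution z(t) = 1.5*exp(-t) + 0.5*(sin(t) - cos(t)) for z(0) = 1.
z_heun = z_tay = 1.0
t = 0.0
for _ in range(10):
    z_heun = heun_step(z_heun, t, 0.1)
    z_tay = taylor2_step(z_tay, t, 0.1)
    t += 0.1
z_exact = 1.5 * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))
```

Setting `dt_fd` equal to the full step `h` makes `taylor2_step` collapse algebraically to `heun_step`, which mirrors the reviewer's derivation; keeping `dt_fd` small is what distinguishes the finite-difference variant.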
Video-Enhanced Offline Reinforcement Learning: A Model-Based Approach
Accept (poster)
Summary: The paper proposes a method that leverages unlabeled internet videos to enhance the performance of offline RL. A discretized latent action space is learned from unlabeled video data by the behavior abstraction network (BAN) using vector quantization. The paper proposes using a two-stream world model, with one branch conditioned on latent behaviors extracted from video data and the other on the agent’s real actions. This dual-branch structure allows the agent to effectively learn control policies by aligning state rollouts from the two branches, thereby transferring knowledge from the diverse, out-of-domain video data to the RL agents. The authors compare with several prior works to show the efficacy of the proposed method. The paper also provides an ablation study to justify the design choices. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Overall, the experiments and analyses seem comprehensive. Supplementary Material: Yes, but not in great detail. Relation To Broader Scientific Literature: The paper proposes a method for enhancing offline RL with off-domain video data. This is an important direction, and baking in task priors from internet videos could potentially improve the robustness as well as the sample efficiency of such policies. Essential References Not Discussed: Being a researcher in a closely related but not the same field, the references seem adequate to me. Other Strengths And Weaknesses: Strengths - The paper tackles an important problem of enhancing offline learning with unlabeled video data and shows the efficacy of the proposed approach in improving offline RL. - The proposed method significantly improves upon prior work and also enables finetuning online on a new scenario with better sample efficiency than DreamerV2. - The authors also provide an ablation study justifying design choices. Weaknesses - The authors use Dreamer V2 as their primary baseline. 
However, this is an outdated baseline and it would be great if the authors could provide comparisons with Dreamer V3. - It would be great if the authors could include details about how Dreamer V2, which is an online RL algorithm, is adapted to the offline setting. Other Comments Or Suggestions: - It would be great if the authors could include an ablation study of the effect of different amounts of unlabeled data on policy performance. It would be interesting to see if reducing the amount of data also reduces the policy performance. Questions For Authors: It would be great if the authors could address the questions from the previous section and in Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
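For readers unfamiliar with the vector quantization mentioned in the summary above, here is a minimal, self-contained sketch of mapping continuous latents to a discrete codebook. It is purely illustrative; the codebook size, dimensions, and variable names are assumptions, not VeoRL's actual BAN:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 discrete latent behaviors, dim 4
latents = rng.normal(size=(32, 4))   # encoder outputs for 32 video clips

# Squared Euclidean distance from every latent to every codebook entry,
# then assign each latent to its nearest code (the discretization step).
d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
codes = d2.argmin(axis=1)            # discrete behavior index per clip
quantized = codebook[codes]          # quantized latents fed downstream
```

The `codes` array is the discretized latent action space: each clip is represented by one of 8 behavior indices.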
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. Below, we address each comment point-by-point. > Q1. The authors use Dreamer V2 as their primary baseline. However, this is an outdated baseline and it would be great if the authors could provide comparisons with Dreamer V3. Please refer to our response to **Reviewer LSCr Q1**. > Q2. It would be great if the authors could include details about how Dreamer V2, which is an online RL algorithm, is adapted to the offline setting. The core adaptation of DreamerV2 to the offline setting lies in how the replay buffer is initialized and utilized. In the original online DreamerV2, the agent interacts with the environment to collect new trajectories incrementally. These trajectories are stored in a replay buffer, and the world model and policy are trained by sampling batches from this dynamically updated buffer. In the offline setting, we instead preload the entire static dataset into the buffer once at the start of training. No new data is collected during training, and all models are trained exclusively on this fixed dataset. > Q3. It would be great if the authors could include an ablation study of the effect of different amounts of unlabeled data on policy performance. It would be interesting to see if reducing the amount of data also reduces the policy performance. As suggested, we have conducted a new ablation study to analyze how the amount of unlabeled data impacts policy performance. The results are summarized below. As the amount of source domain data increases, the model's performance improves accordingly, demonstrating that our method effectively leverages information from source domain videos and exhibits strong scalability. Full results across different datasets will be included in the revised paper. 
|Handle Press | All videos | 1/2 videos | 1/4 videos | w/o videos (results from Fig 7) | DreamerV2 | | ----- | --- | --- |--- | --- |--- | |Episode return | 2651 $\pm$ 620 | 2477 $\pm$ 441 | 1859 $\pm$ 423 | 1814 $\pm$ 212.06 | 1201.75 $\pm$ 422.10|
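The offline adaptation described in Q2 (preloading the entire static dataset into the replay buffer once, with no new data collected during training) can be sketched as follows. This is a minimal illustration rather than the authors' code; the `OfflineReplayBuffer` name and the `(obs, action, reward)` trajectory format are assumptions.

```python
import random

class OfflineReplayBuffer:
    """Replay buffer preloaded once from a fixed offline dataset."""

    def __init__(self, trajectories):
        # Offline setting: the whole static dataset is loaded up front and
        # never grows -- there is no environment interaction during training.
        self._trajectories = list(trajectories)

    def sample_batch(self, batch_size, seq_len):
        """Sample fixed-length trajectory segments, DreamerV2-style."""
        batch = []
        for _ in range(batch_size):
            traj = random.choice(self._trajectories)
            start = random.randrange(len(traj) - seq_len + 1)
            batch.append(traj[start:start + seq_len])
        return batch

# Toy dataset: 10 trajectories of 50 (obs, action, reward) tuples each.
dataset = [[(t, 0, 0.0) for t in range(50)] for _ in range(10)]
buffer = OfflineReplayBuffer(dataset)
batch = buffer.sample_batch(batch_size=4, seq_len=16)
```

In the online version the buffer would instead be appended to after each environment episode; here the only change is that `__init__` receives everything at once and the training loop samples from this fixed set.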
Summary: This work proposes a novel method called VeoRL (Video-Enhanced Offline RL). The method makes it possible to use additional pre-training video data without the need for reward or action annotations. That data is used to train a behavior abstraction network and the planning network. The in-domain data is used to train the planning network, as well as the trunk network. Then, inside the world model, a policy is trained to optimize the rewards, as well as a subgoal-reaching objective, with subgoals specified by the planning network. The method is evaluated on Meta-World, CARLA, and Minecraft, showing impressive performance compared to existing methods such as APV, VIP, and DreamerV2. Claims And Evidence: The main claim: VeoRL can transfer common-sense knowledge of control behaviors from natural videos. The claim is supported by the experiments. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: I checked the design of the main experiments. One issue I have is the choice of DreamerV2 as opposed to DreamerV3. DreamerV3 appears to be a more natural baseline in this case, especially for Minecraft, as it has demonstrated good performance there. Can the authors justify their choice of DreamerV2 over V3? Is that due to the fact that the world models you trained have the DreamerV2-style architecture? Supplementary Material: No Relation To Broader Scientific Literature: This paper addresses the important question of extracting knowledge from videos. Natural videos are widely available and contain enormous amounts of information, yet most modern approaches do not tap into that resource. This particular work focuses on videos showing similar tasks but in different environments. Essential References Not Discussed: This paper doesn't cite FICC, which is another method that tackles a very similar problem: Ye, Weirui, et al. "Become a proficient player with limited data through watching pure videos." 
_The Eleventh International Conference on Learning Representations_. 2023. Other Strengths And Weaknesses: ###### Strengths - The paper investigates an important yet underexplored direction of research, and introduces a novel method - The proposed method shows great improvement over the baselines on the proposed experiments - The paper is well-written ###### Weaknesses - The method is compared to DreamerV2 and not V3 - The tasks selected for Minecraft and Meta-World are quite simple, and do not involve multi-step decision making, e.g. pick and place for Meta-World, or crafting a metal sword for Minecraft - The natural videos this method leverages contain, for the most part, high-quality trajectories. This enables the planner to simply use a behavior cloning policy. This, however, is a quite limiting assumption. Other Comments Or Suggestions: 247 right - considering more on -> did you mean "focusing more on"? Questions For Authors: Q1 Why did you opt for DreamerV2 instead of DreamerV3? Q2 Do you have any intuition regarding how the quality of the action-free data affects performance? Since the downstream policy learning relies on behavior cloning, my intuition is that you may need something more intricate than behavior cloning if the data is lower quality or more diverse. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's valuable comments and have addressed each point below. > Q1. The method is compared to DreamerV2 and not V3. Please refer to our response to **Reviewer LSCr Q1**. > Q2. This paper doesn't cite FICC, which is another method that tackles a very similar problem. Thank you for pointing out the FICC paper. We will add the citation and provide a detailed comparison in the revised manuscript. Although FICC and VeoRL both employ VQ-VAE and a forward model to extract latent actions in an unsupervised manner, they are distinct in the following aspects: **(1) We focus on offline RL.** While FICC focuses on pre-training a world model with offline data and then transferring it to downstream **online tasks**, our work addresses **offline RL**, where policies must be learned purely from static datasets without online interactions. These are fundamentally distinct problem settings with different technical challenges. For instance, offline RL primarily grapples with distributional shift due to the mismatch between the fixed dataset and real environment dynamics, while FICC employs online fine-tuning to achieve fast adaptation to downstream tasks. **(2) We handle significant domain gaps.** FICC operates under the assumption that the source domain (offline data) and target domain (downstream tasks) share the **same** environment. In contrast, our method tackles a more challenging **cross-domain** transfer scenario: extracting knowledge from real-world videos and employing it in a simulation environment. The significant visual and dynamics gaps between real-world and simulated domains substantially increase the difficulty of knowledge transfer. 
**(3) We leverage hybrid action spaces for behavioral alignment.** FICC is explicitly designed for **discrete actions** (tested only on Atari) and acknowledges its limitation in handling continuous control tasks ("difficult to handle the continuous action space" as stated in their paper). Our method, however, focuses on tasks with **continuous actions**. The dual-path policy optimization process we designed associates discrete actions extracted from videos with the continuous action space of the target environment, thereby enhancing offline RL policies with high-level behavioral abstractions (e.g., 'reach', 'push') from video data. > Q3. Experiments on more difficult, multi-step tasks. As suggested by the reviewer, we compare our method with DreamerV2 and VIP in **Soccer** from Meta-World. This task typically involves a two-stage decision-making process where the agent must first fetch a ball and then push it to a goal position. |Soccer|DreamerV2|VIP|Ours| |-|-|-|-| |Episode return|112.53 $\pm$ 37.90|109.84 $\pm$ 17.32| 231.10 $\pm$ 20.68| > Q4. The natural videos this method leverages contain, for the most part, high-quality trajectories. This enables the planner to simply use a behavior cloning policy. This however is a quite limiting assumption. **(1) Do the auxiliary videos contain only high-quality trajectories?** No, the videos we leverage are **not curated or task-specific**. Instead, they are **diverse, easily accessible, and collected without explicit task filtering**. These videos may not always align closely with the target tasks, meaning they **should not** be considered high-quality trajectories. Even when the tasks in the videos differ from the target tasks, our method can extract useful priors for policy learning. **(2) Does the method reduce to simple behavior cloning?** No, our approach goes beyond naive behavior cloning, as real action labels are unavailable in passive videos. 
Instead, we learn discrete latent actions that capture abstract behavioral patterns. This allows our method to **generalize across environments**, as it clones high-level behavioral concepts rather than relying on low-level action sequences. By leveraging these latent structures, our method can transfer useful priors even when direct action replication is impossible. > Q5. 247 right - considering more on -> did you mean "focusing more on"? Yes. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions and running additional experiments! Q1: This clarifies your choice of DreamerV2 over V3, thank you. Q2: Thank you for the clarification. I agree that the distinction between FICC and your method is quite big, and I'm not saying you should benchmark against FICC. Q3: Thank you for running this! (1) Do the auxiliary videos contain only high-quality trajectories? and (2) Does the method reduce to simple behavior cloning? I see. The dataset you used for Meta-World, for example, consists of trajectories that are collected using teleoperation. All the trajectories in that dataset are therefore high-quality and solve some kind of task. Granted, most of those tasks may be irrelevant to the task you're testing on. But the high quality of those trajectories is what enables you to use behavior cloning in the planning network ($\bar a_t = F_\mathrm{BC}(s_t)$). This $\bar a_t$ is then used to condition the policy to execute the high-level behavior $\bar a_t$. How can the high-level behavior cloning policy $F_\mathrm{BC}(s_t)$ provide sensible high-level actions if it isn't trained on any high-quality trajectories relevant to the task at hand? In my understanding, that behavior-cloning policy will perform badly if the offline data has e.g. random trajectories. Is my understanding correct? If so, do you think there could be other ways of training the high-level policy to avoid the limitations of BC? 
I understand that this is definitely more intricate than naive behavior cloning, because you first have to identify the latent actions and then train your BC policy using those actions, but using a BC policy at the high level has its limitations. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for the prompt reply and ongoing discussion! **(1) Clarification on Data Quality and Experimental Setup:** Since both the source domain videos and the target domain trajectories are offline in our setup, we would like to briefly restate our experimental setup in the Meta-World environment to avoid any potential misunderstandings: - Source Domain (**BridgeData Videos**): A large-scale dataset containing diverse robotic manipulation videos, including 50,365 teleoperated demonstrations across 24 environments. - Target Domain (**Offline Meta-World**): The offline trajectories used for the target task are **NOT** expert-level, but of medium quality (e.g., suboptimal or unsuccessful). **(2) For Low-Quality Source Videos:** A key assumption in our work is that the demonstration videos are **large-scale** enough to cover a broader range of skills. This enables us to construct a codebook of latent behaviors that can potentially support cross-domain policy learning. These videos are **readily available** on the Internet. For example, if the target domain is autonomous driving, the vast number of human-driving videos available online can serve as a rich source of demonstrations. 
If the reviewer's concern--"*How can the high-level behavior cloning policy provide sensible high-level actions if it isn't trained on any high-quality trajectories relevant to the task at hand?*"--refers to a scenario where the source data consists only of failed robotic manipulation attempts, or comes from domains that are significantly less related (e.g., using the Something-Something data as the source videos while the target domain is Meta-World), then we acknowledge that the effectiveness of our method in such cases remains uncertain and would require further investigation. We agree with the reviewer that using BC at the high level may struggle if the source videos contain entirely random or irrelevant trajectories. If the dataset lacks meaningful high-level behaviors, the extracted latent actions may not provide useful guidance. However, in these cases, since our policy network is learned using a hybrid BC-RL approach, it does not rely solely on BC. The RL component allows the policy to adapt to the target task. To partially investigate the impact of using fewer high-quality videos, we have conducted an ablation study analyzing how the quantity of unlabeled source videos affects policy performance. The results are summarized below, indicating that: - As the number of source domain videos increases, the model's performance improves accordingly, demonstrating that our method effectively leverages information from source domain videos and exhibits strong scalability. - Even when using only a quarter of the videos, VeoRL's results still significantly outperform DreamerV2 (which does not utilize any source domain videos for training), indicating that our method can effectively extract useful skills from the videos. 
|Handle Press | All videos | 1/2 videos | 1/4 videos | DreamerV2 | | ----- | --- | --- |--- | --- | |Episode return | 2651 $\pm$ 620 | 2477 $\pm$ 441 | 1859 $\pm$ 423 | 1201.75 $\pm$ 422.10| Nevertheless, we would like to emphasize that our work focuses on leveraging **action-free** and **cross-domain** demonstration videos, which may involve **different embodied agents and video appearances** from those in the target domain, to facilitate the offline task at hand. We believe this approach broadens the scope of offline RL. **(3) For Low-Quality Target Domain Trajectories:** As noted in Line 230(left), our framework trains the Plan Net and the corresponding BC model using data from **both** the source domain (BridgeData) and the target domain (Meta-World). This design leverages the diversity of BridgeData to enable the BC model to infer high-level behavioral abstractions (e.g., "reach" or "grasp") that are applicable to the target task (e.g., "Button press"), even when the offline data is of low quality and lacks successful trajectories. To validate this, we add a new experiment, in which the offline Meta-World data consists of random trajectories, potentially containing extremely poor demonstrations. The results below indicate that our method could still benefit from the latent skills (estimated by BC) from the auxiliary videos, demonstrating the robustness of our approach in challenging offline RL settings. |Button Press (Episode return)|Medium offline data|Random offline data (NEW!)| |-|-|-| |Ours|850.20 $\pm$ 169.05|637.85 $\pm$ 150.25| |DreamerV2|764.87 $\pm$ 120.12|505.47 $\pm$ 150.06| |VIP| 545.29 $\pm$ 93.67|276.5 $\pm$ 98.8|
Summary: This paper introduces a method for offline reinforcement learning that uses unlabeled video data to train a world model and performs model-based policy optimization. The paper includes experimental results from Meta-World, CARLA, and MineDojo. Claims And Evidence: This paper makes several claims: 1. The proposed method has a performance improvement compared to other offline RL baselines. This claim is generally supported with evidence from Meta-World, CARLA, and MineDojo environments. However, the paper states that the DreamerV2 algorithm is a main baseline without acknowledging DreamerV3, which is a newer method in the same area. 2. Better offline to online transfer. While it does show this compared to DreamerV2, it could use more empirical evidence comparing against other methods. 3. Overestimation bias. The claim is made that overestimation bias is the key challenge in offline reinforcement learning, but the paper does not show whether the proposed method mitigates overestimation in the value function. Methods And Evaluation Criteria: The benchmark datasets of Meta-World, CARLA, and MineDojo make sense for the problem or application at hand. Theoretical Claims: There are no substantial theoretical claims made in the paper. Experimental Designs Or Analyses: The experimental design is generally sound. The paper compares VeoRL against both model-based methods such as DreamerV2 and LOMPO and model-free methods such as CQL. Supplementary Material: The supplementary material provides more implementation details and descriptions of baselines. Relation To Broader Scientific Literature: The key contributions of the paper build on several important research threads in offline reinforcement learning, namely leveraging unlabeled videos for RL and model-based offline RL. Essential References Not Discussed: The key contribution is a method for model-based offline RL from unlabeled videos. 
It does not compare with DreamerV3, which is a model-based RL algorithm that builds on DreamerV2, a main baseline in this paper. Other Strengths And Weaknesses: The paper is original and significant. The main weakness is that it does not compare with an important baseline, DreamerV3. Other Comments Or Suggestions: I do not have other suggestions. Questions For Authors: 1. Value overestimation: in the paper it is claimed that overestimation bias is the key issue in offline RL, but measurements of value function estimates in the proposed method are not provided. Did you compare estimated vs actual values in the experiments? 2. Baseline comparisons: the main baseline for the paper is DreamerV2. Is there a reason DreamerV3 is not compared to in the paper? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer’s insightful comments and provide our responses below. > Q1. Comparison with DreamerV3. The decision to use DreamerV2 as the backbone stems from two key considerations: - **Offline setting performance:** While DreamerV3 achieves strong performance across diverse tasks with fixed hyperparameters, our experiments (as shown below) demonstrate that it underperforms DreamerV2 in offline RL settings without hyperparameter tuning. - **Backbone consistency:** LOMPO, a key baseline in our work, employs DreamerV2 as its backbone. To enable direct and equitable comparisons between our method and LOMPO, we maintained consistency by using the same backbone. Notably, we have conducted additional experiments on Meta-World by integrating VeoRL with DreamerV3. The results show that our approach outperforms vanilla DreamerV3, showing its ability to seamlessly integrate with different model architectures. |Episode return|DreamerV2|DreamerV3|VeoRL(DV2)|VeoRL(DV3)| |-|-|-|-|-| |Drawer Open|1168.35 $\pm$ 59.55|674.55 $\pm$ 79.04|1953.6 $\pm$ 121.48|1393.5 $\pm$ 122.5| |Handle Press|1201.75 $\pm$ 422.10|257.85 $\pm$ 247.05|2650.90 $\pm$ 619.60|1360.15 $\pm$ 547.85| |Success rate|DreamerV2|DreamerV3|VeoRL(DV2)|VeoRL(DV3)| |-|-|-|-|-| |Drawer Open|0.18 $\pm$ 0.04|0.00 $\pm$ 0.00|0.70 $\pm$ 0.07|0.55 $\pm$ 0.15| |Handle Press|0.33 $\pm$ 0.11|0.05 $\pm$ 0.05|0.60 $\pm$ 0.12|0.35 $\pm$ 0.15| > Q2. Offline to online transfer. In the offline-to-online transfer setup, we used DreamerV2 as the comparison model, as it has demonstrated stable performance across multiple environments. In this rebuttal, we conduct further experiments with the second-best comparison models in each environment (VIP, LOMPO, and VPT, respectively). The results presented below consistently demonstrate that our approach outperforms existing methods. The updated Fig 6 is available at https://anonymous.4open.science/r/image-765. 
|Meta-World return (Soccer)|20K|60K|100K| |-|-|-|-| |DreamerV2|37.22|81.41|106.92| |VIP|18.52|29.03|117.31| |VeoRL|150.09|213.34|234.80| |CARLA return (Night)|20K|60K|100K| |-|-|-|-| |DreamerV2|2.01|15.17|42.95| |LOMPO|-20.45|-12.05|-1.26| |VeoRL|27.24|37.61|79.07| |MineDojo success rate (Cobblestone)|50K|100K|150K| |-|-|-|-| |DreamerV2|0.27|0.41|0.70| |VPT|0|0|0| |VeoRL|0.58|0.86|1.00| > Q3. Value overestimation. **(1) Does VeoRL alleviate value overestimation?** Overestimation bias is indeed a key challenge in offline RL, and our method mitigates this indirectly through intrinsic rewards derived from auxiliary videos. To further validate its impact, we compare estimated values with true values (i.e., the discounted sum of actual rewards) on the Meta-World Handle Press task. - We analyze the distribution of estimated and true values using histograms. The results below show that VeoRL's predicted values align more closely with the true values, whereas DreamerV2 exhibits significant overestimation. Specifically, in 69.12% of states, DreamerV2 predicts values exceeding 1000, despite 57.71% of true values being below 200. - Notably, since our model incorporates both environmental and intrinsic rewards in value estimation, we also include the intrinsic reward when computing the true value. The negative intrinsic reward leads to a lower overall highest true value in VeoRL. As a result, directly comparing the true values of the two models is not meaningful. - Additionally, value computation follows each model’s successful trajectories. Corresponding figures are available at https://anonymous.4open.science/r/image-765. Our approach is shown to produce value estimates that are closer to the true values compared to DreamerV2, indicating reduced overestimation bias. 
|Histogram of 50 episodes|[0,200)|[200,400)|[400,600)|[600,800)|[800,1000)|[1000,1200]| |-|-|-|-|-|-|-| |DreamerV2 estimate|0.44%|3.12%|5.20%|6.45%|15.68%|69.12%| |True value|57.71%|5.08%|5.73%|6.34%|25.14%|0%| |Histogram of 50 episodes|[0,200)|[200,400)|[400,600)|[600,800)|[800,1000)|[1000,1200]| |-|-|-|-|-|-|-| |VeoRL estimate|1.98%|11.79%|30.69%|55.54%|0%|0%| |True value|15.93%|24.51%|49.98%|9.58%|0%|0%| **(2) How does VeoRL specifically handle the value estimation bias?** VeoRL leverages **intrinsic rewards** to mitigate value bias in offline RL by **grounding value estimates in realistic behaviors**: - Traditional value functions in offline RL can extrapolate poorly for unseen states because they optimize based only on limited environmental feedback. - Intrinsic rewards encourage high-value states to align with realistic source behaviors. By incorporating intrinsic rewards from video priors, our approach provides an alternative supervision signal, reducing reliance on extrapolated Q-values. - This approach acts as a regularization mechanism, constraining the model from assigning arbitrarily high values to unfamiliar states, preventing unrealistic value spikes.
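The estimated-versus-true comparison above can be reproduced in miniature: the true value of a state is the discounted sum of the rewards actually obtained from it (environment plus intrinsic, as the rebuttal notes), and the gap to the critic's prediction indicates overestimation. A minimal sketch under those assumptions; the function names and the constant stand-in critic are illustrative, not from the paper's code.

```python
def discounted_return(rewards, gamma=0.99):
    """True value of a state: discounted sum of the rewards actually obtained."""
    total = 0.0
    for r in reversed(rewards):
        total = r + gamma * total
    return total

def overestimation_gap(value_estimates, env_rewards, intrinsic_rewards, gamma=0.99):
    """Mean (estimate - true value) over an episode; positive means overestimation.

    Following the rebuttal, intrinsic rewards are folded into the true value so
    that estimates and targets are on the same scale.
    """
    T = len(env_rewards)
    gaps = []
    for t in range(T):
        combined = [e + i for e, i in zip(env_rewards[t:], intrinsic_rewards[t:])]
        true_v = discounted_return(combined, gamma)
        gaps.append(value_estimates[t] - true_v)
    return sum(gaps) / T

# Toy episode: a critic that predicts 10.0 everywhere, zero intrinsic reward,
# five steps of reward 1.0, undiscounted for easy hand-checking.
est = [10.0] * 5
env_r = [1.0] * 5
intr_r = [0.0] * 5
gap = overestimation_gap(est, env_r, intr_r, gamma=1.0)  # mean of 5,6,7,8,9 -> 7.0
```

Binning the per-state gaps (or the raw estimates and true values) then yields histograms like the ones reported in the tables above.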
Summary: This paper presents a model-based RL method, Video-Enhanced Offline RL (VeoRL), which leverages unlabeled Internet videos to enrich the world model. The world model comprises two state transition branches: through real actions using a trunk net and through latent behaviors using a plan net. And the policy is trained by two-stream imagined trajectories generated by the trunk net and the plan net. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper does not contain any proofs or theoretical developments; hence, it is unnecessary to perform a check. Experimental Designs Or Analyses: Yes. Supplementary Material: The appendix lacks complete code documentation and the corresponding environment configuration files. Relation To Broader Scientific Literature: This contribution is closely related to existing offline RL methods (e.g., CQL, DreamerV2) and video-enhanced RL approaches (e.g., APV, VIP). Essential References Not Discussed: None. Other Strengths And Weaknesses: **Strength**: 1. The paper is well-written and easy to follow. 2. The method leverages unlabeled internet video data to enhance offline RL performance and addresses cross-domain distribution shifts. 3. The paper provides extensive experimental validation across multiple visuomotor control tasks (e.g., robotic manipulation, autonomous driving, and open-world games), demonstrating that VeoRL significantly outperforms existing offline RL methods on multiple tasks. **Weaknesses**: 1. **Computational Complexity**: The paper mentions that VeoRL's training process involves extracting latent behaviors and optimizing a dual-branch world model, which may lead to high computational complexity and memory requirements. However, the paper does not provide a detailed quantitative analysis of the computational overhead or compare it with other methods. 2. 
**Interpretability of Latent Behaviors**: While the paper shows the correspondence between latent behaviors and real actions, the semantic interpretation of these latent behaviors remains unclear. For example, the paper does not discuss in detail how these latent behaviors specifically guide policy optimization or their generalizability across different tasks. Other Comments Or Suggestions: 1. The paper could include a quantitative analysis of computational complexity in the experimental section, particularly in comparison with other offline RL methods. 2. In the experimental section, the authors use only 3 seeds. However, I believe that this is not enough. And I recommend the use of more than five seeds to ensure robustness. Questions For Authors: The paper uses datasets like BridgeData-V2 and NuScenes as auxiliary video sources but does not provide an in-depth analysis of their diversity and suitability. Did the authors perform an analysis of the distribution differences between these datasets and the target tasks? If so, could more details be provided in the revised version? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. > Q1. Complete code documentation and environment configuration files. Code and full configuration files will be open-sourced upon acceptance. > Q2. Computational complexity. Below, we present the training time and GPU usage until model convergence. While the latent behavior extraction and dual-branch architecture in VeoRL introduce additional computations, our experiments demonstrate a favorable efficiency-performance trade-off: - **Compared with DreamerV2:** VeoRL requires 1.3× GPU memory and 2× training time. This modest cost results in about 360% improved episode return in CARLA and 140% higher success rates in MineDojo, as shown in Fig 1. - **Compared with APV, VIP, and LOMPO:** VeoRL shows comparable training time and offers advantages in lower memory usage. ||VeoRL|DreamerV2|APV|VIP|LOMPO| |-|-|-|-|-|-| |Training time|19h|9h|19h|20h|25h| |GPU memory|5126M|3892M|6039M|12292M|8841M| > Q3. Interpretability of latent behaviors. **(1) Semantic interpretation of latent behaviors:** Latent behaviors align with real actions through our model, guided by the reward function and optimization objectives. As visualized in Fig 7, for example, in BridgeData, distinct latent behaviors correspond to atomic actions like "grasp", "reach", or "push". **(2) How do the latent behaviors guide policy optimization?** Latent behaviors are abstract and high-level, requiring alignment with low-level actions in the target task. To enable latent behaviors to guide target policies, we modify the MBRL algorithm as follows: - The policy network conditions on both the current state and latent actions from the behavior cloning module, enabling high-level intentions to guide low-level actions. - State transitions are modeled using a Trunk Net (driven by real low-level actions) and a Plan Net (guided by high-level latent behaviors) for long-horizon planning. 
- An intrinsic reward encourages the rollouts from the Trunk Net to progressively align with long-term states predicted by the Plan Net, bridging the target policy with behaviors extracted from auxiliary videos. **(3) Generalizability across tasks:** We conduct additional experiments on Meta-World and MineDojo to validate the transferability of latent behaviors across tasks. The results are presented below. Training schemes: The behavior abstraction network is frozen after training on Task A, with a fixed latent codebook. For Task B, this pre-trained network is directly deployed without undergoing task-specific fine-tuning on source video data. The minimal performance degradation confirms that our method's latent behaviors trained on Task A remain effective for Task B, consistently outperforming other baselines trained specifically on Task B. This transferability significantly reduces training costs for downstream tasks, as it spares the time for re-training on the auxiliary videos. |Meta-World|Codebook construction|Downstream task|Success rate| |-|-|-|-| |VeoRL|Button Press|Button Press|0.62 $\pm$ 0.02| |VeoRL|Handle Press|Button Press|0.63 $\pm$ 0.12| |DreamerV2|NA|Button Press|0.58 $\pm$ 0.08| |VIP|NA|Button Press|0.18 $\pm$ 0.08| |MineDojo|Codebook construction|Downstream task|Success rate| |-|-|-|-| |VeoRL|Harvest sand|Harvest sand|0.55 $\pm$ 0.05| |VeoRL|Harvest water with bucket|Harvest sand|0.50 $\pm$ 0.10| |DreamerV2|N/A|Harvest sand|0.25 $\pm$ 0.06| |VPT|NA|Harvest sand|0.20 $\pm$ 0.04| > Q4. More seeds. As suggested, we have conducted additional experiments on MineDojo using five different random seeds. The updated results below show consistent performance of our model. We'll include full results in the revised paper. ||VeoRL|DreamerV2|LOMPO|VPT| |-|-|-|-|-| |Harvest log in plains|0.40 $\pm$ 0.07|0.13 $\pm$ 0.08|0.03 $\pm$ 0.04|0.02 $\pm$ 0.01| |Harvest sand|0.55 $\pm$ 0.05|0.25 $\pm$ 0.06|0.05 $\pm$ 0.05|0.20 $\pm$ 0.04| > Q5. 
The distribution differences between video data and the target tasks. We summarize the differences between the source and target domains below. These distinctions highlight VeoRL’s ability to effectively leverage real-world video data, even with notable distributional discrepancies, to enhance target domain task performance. ||Source: BridgeData-V2 | Target: Meta-World| |-|-|-| |Robot arm|WidowX 250 6DOF|Simulated Sawyer| |Data source|Collected from real robots|Generated in simulated environments| |Task design|Real-world tasks (e.g., kitchen activities, tool use)| Standardized tasks (50 predefined tasks)| |Camera view|Random-view|Right-view| |Episode length|Random|500| ||Source: NuScenes| Target: CARLA| |-|-|-| |Data source|Real-world data collected in Boston and Singapore using sensor-equipped vehicles|Synthetic data generated via an open-source simulation platform| |Scenarios|1000 real-world scenarios (e.g., lane change, unprotected turn, jaywalker)|Customizable scenarios (e.g., dynamic weather, traffic density)| |Camera number|6|1| |Episode length|Random|1000| --- Rebuttal Comment 1.1: Comment: Thank the authors for a detailed response to my review. The authors provided quantitative comparisons of training time and GPU usage to clarify the computational overhead. The addition of experiments with five random seeds for MineDojo strengthens the statistical reliability of the results, mitigating concerns about variance. The rebuttal lists domain differences (e.g., real vs. simulated robots) but does not explain how VeoRL mitigates these shifts. Ablation studies on the MMD loss or other domain adaptation components would clarify their contribution.
Proposer-Agent-Evaluator (PAE): Autonomous Skill Discovery For Foundation Model Internet Agents
Accept (poster)
Summary: This paper introduces Proposer-Agent-Evaluator (PAE), a learning framework designed to enable foundation model-based Internet agents to autonomously discover and refine new skills without human supervision. PAE consists of three core components: (1) a task proposer, which generates skill acquisition tasks based on website context, (2) an agent policy, which attempts the tasks, and (3) an autonomous evaluator, which assesses success based on visual observations. The evaluation signal is then used to refine the agent’s policy through reinforcement learning. The paper demonstrates PAE’s effectiveness on web navigation benchmarks, particularly WebVoyager and WebArena, where it achieves a 50% relative improvement in success rate. ## update after rebuttal I keep my rating since most of my concerns have been addressed. Regarding the long-term agents, I was thinking that some works—such as OpenManus—have started to support this direction. However, since these developments occurred within the past three months, they may not fall within the scope of this review’s evaluation period. Thus, I would keep my rating. Claims And Evidence: The paper claims that: 1. PAE enables autonomous skill discovery 2. PAE improves generalization 3. PAE’s improvement is not strictly dependent on stronger models for evaluation and task generation These claims are supported by Tables 1, 2, and 3. Methods And Evaluation Criteria: The task proposer utilizes contextual information (such as website names or user demonstrations) to generate training tasks. The agent policy is trained using reinforcement learning, incorporating a reasoning step before execution. The evaluator assesses task success through a binary (0/1) reward signal based on final state screenshots. Evaluation is conducted on WebVoyager and WebArena, with success rates compared against SFT-trained models and proprietary VLMs. 
The methodology is well-structured, but long-term evaluation, e.g., whether PAE-discovered skills remain useful over extended training periods, is not explored. Theoretical Claims: The paper does not propose theoretical claims. Experimental Designs Or Analyses: The experiments effectively test the proposed method by comparing different agent training strategies and task evaluation techniques. The following aspects strengthen the study: - Comparison with SFT baselines, which highlights PAE’s advantages. - Scaling experiments, showing consistent performance improvements across different model sizes. - Evaluation on unseen tasks and websites, demonstrating generalization. However, failure analysis is limited—cases where PAE-generated tasks lead to incorrect generalizations or inefficient behaviors are not thoroughly discussed. Supplementary Material: Yes, I reviewed Supp. A, G and H. Relation To Broader Scientific Literature: The paper contributes to self-supervised skill discovery with RL and web-based foundation model agents. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: - Addresses scalability issues in web-based agents by autonomously generating skill acquisition tasks. - Robust across different models, showing that PAE generalizes well beyond a specific agent architecture. - Strong empirical results, demonstrating zero-shot generalization to unseen websites. Weaknesses: - The paper does not discuss whether the learned skills persist over time or if they degrade when learning new ones. Understanding skill retention is crucial for real-world deployment. - While the agent receives reward feedback, the paper does not explicitly explain how it tracks what skills it has acquired. Without an explicit tracking mechanism, the agent might relearn already mastered skills instead of focusing on truly novel capabilities. 
- Sparse reward signal—the 0/1 evaluation approach might miss intermediate learning signals that could improve policy refinement. Other Comments Or Suggestions: None Questions For Authors: 1. How does PAE handle situations where the proposed task is infeasible or ambiguous? Would the system benefit from a self-correction mechanism? 2. Does the agent recognize when to reuse a learned skill versus learning from scratch? 3. Does PAE facilitate long-term skill retention? Do discovered skills remain useful across multiple training phases, or do they degrade over time due to task shifts? 4. Would intermediate reward signals (e.g., step-based evaluations) improve agent learning? The paper suggests outcome-based evaluation is more stable, but step-based signals could provide earlier correction if the agent starts diverging from the goal. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review and feedback on the paper. We have provided additional clarifications to the questions that you have raised on how PAE handles cases where the proposed task is infeasible, how the agent recognizes when to reuse a learned skill, and whether intermediate reward signals can help. **Please let us know if your concerns are addressed and if so, we would appreciate it if you could re-evaluate our work based on this context. We are happy to discuss further.** ## However, failure analysis is limited—cases where PAE-generated tasks lead to incorrect generalizations or inefficient behaviors are not thoroughly discussed. We actually **do have comprehensive analysis in appendix B with manual classifications of the trajectories from each model**. We have also provided qualitative examples of each failure mode in Figures 7-11. We will include them in a revised version of the paper. ## How does PAE handle the situations where the proposed task is infeasible or ambiguous? An important factor for PAE to choose to use vanilla REINFORCE without a baseline value function is that this loss function (line 239 in the manuscript) automatically takes care of the cases when the proposed task is infeasible or ambiguous. In these cases, the agent would not be able to finish the task and therefore can only get a reward 0. As shown in the loss function in line 239, if the trajectory reward is 0, it automatically zeros out the loss so it does not affect the training process, except a slight waste of sample efficiency during online trajectory collection. ## Does the agent recognize when to reuse a learned skill versus learning from scratch? How does PAE facilitate long-term skill retention? 
A crucial difference in skill discovery considered in this setting of web agents versus skill discovery in traditional hierarchical deep reinforcement learning [1, 2] is that most tasks that current SOTA open-source VLM web agents can complete are rather atomic and short-horizon (~10 steps), such as “Find the Easy Vegetarian Spinach Lasagna recipe on Allrecipes and tell me what the latest review says” as shown in appendix Figure 7. Under such a setting, skills are essentially equivalent to the tasks that the agents are practicing on, so the issues of identifying new skills and long-term skill retention are not major concerns at this stage. However, as open-source models get more capable and are able to handle more complicated tasks, we believe that there will be more interesting research problems related to those concerns, and the framework of PAE will serve as a foundation to facilitate such research. [1] HIQL: Offline Goal-Conditioned RL with Latent States as Actions [2] Data-Efficient Hierarchical Reinforcement Learning ## Would intermediate reward signals improve agent learning? The choice of the reward is actually an important ablation choice that we have made to maximize the exploitation of asymmetric capabilities of SOTA VLMs as skill proposers/evaluators and as agents. As shown in Figure 3, we found that using only sparse outcome rewards achieved the best performance, while other choices such as autonomous step rewards and functional evaluators tend to result in lower performance. Through a more careful inspection, we found that in many cases it would be very hard to give an accurate step-level reward because there can be multiple paths to solving the tasks. That said, we also believe that as the tasks get harder and models get more capable, how to exploit autonomous intermediate reward signals will be an important future direction to explore. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. Most of my concerns have been addressed. 
Regarding the long-term agents, I was thinking that some works—such as OpenManus—have started to support this direction. However, since these developments occurred within the past three months, they may not fall within the scope of this review’s evaluation period. Thus, I will maintain my rating. --- Reply to Comment 1.1.1: Comment: Thank you so much for your reply and for your appreciation of our response!
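The reward-filtered REINFORCE behavior discussed in the rebuttal above (a zero outcome reward zeroes out the loss, so infeasible or ambiguous proposed tasks are effectively filtered from training) can be sketched as follows. This is an illustrative sketch under assumed names and signatures, not the paper's line-239 implementation:

```python
def reinforce_loss(log_probs, trajectory_reward):
    """Sketch of vanilla REINFORCE without a baseline.

    log_probs: per-action log-probabilities of one sampled trajectory.
    trajectory_reward: binary 0/1 outcome reward from the autonomous evaluator.

    With a 0/1 reward, a trajectory on an infeasible or ambiguous task gets
    reward 0, which zeroes out the loss (and hence its gradient), so such
    tasks only cost sample efficiency during online trajectory collection.
    """
    return -trajectory_reward * sum(log_probs)
```

In an autodiff framework the same filtering holds automatically: a zero loss contributes zero gradient to the policy update.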
Summary: The paper presents a novel framework, Proposer-Agent-Evaluator (PAE), enabling foundation-model internet agents to autonomously discover and learn diverse skills without relying on human-annotated task templates. By leveraging visual-language models (VLMs), PAE autonomously generates tasks (Proposer), executes them using an agent policy guided by chain-of-thought reasoning, and autonomously evaluates task success. Experiments demonstrate significant improvements in zero-shot generalization performance across realistic internet browsing tasks compared to existing baselines. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper explores an interesting and practical problem—how AI models can automatically discover useful skills without relying on humans manually labeling tasks, which could greatly expand what these models can do in the real world. 2. The proposed framework (PAE) is straightforward and well-designed. It enables the model to come up with tasks by itself, figure out how to solve them, and evaluate its own performance, without needing human input every step of the way. 3. The authors ran thorough experiments, clearly showing that their method outperforms existing approaches and achieves strong results. Weaknesses: 1. The paper lacks comprehensive comparisons with alternative reinforcement learning algorithms and does not sufficiently explore performance differences with other large language models. 2. The paper conducts experiments only on WebVoyager and WebArena environments. Although these environments are rich, the tasks tested are predominantly simple, single-turn interactions (e.g., searching products or clicking elements). 
It would be beneficial to experiment with multi-turn interactive tasks or other long-term interaction scenarios. 3. The paper lacks a detailed analysis of computational efficiency, including specifics on computational resources, data efficiency, and training costs. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review and feedback on the paper. We have included additional clarifications in the multi-turn nature of WebArena and WebVoyager, and details of computational efficiency. We are running additional experiments comparing different RL algorithms and we will share them here once we have preliminary conclusions. **Please let us know if your concerns are addressed and if so, we would appreciate it if you could re-evaluate our work based on this context. We are happy to discuss further.** ## Experiments are only conducted on single-turn interactions on WebVoyager and WebArena. It would be beneficial to experiment with multi-turn interactive tasks or long-term interaction scenarios To the best of our understanding, WebVoyager and WebArena environments are multi-turn and interactive environments where the agent needs to interact with the websites for a few steps (~10-20 steps) to be able to complete a task like “Find the Easy Vegetarian Spinach Lasagna recipe on Allrecipes and tell me what the latest review says” as shown in appendix Figure 7. While we agree that these environments are still relatively short-horizon in that most tasks require between 10-20 steps for successful executions, these are already the hardest interactive benchmarks on which existing open-source VLM web agents can achieve non-trivial performance results. We expect that more research in credit assignments over longer horizons would be possible in the future as the capabilities of open-source VLMs improve. ## The paper lacks a detailed analysis of computational efficiency Our main experiments of 7B models were conducted using 1 8x40G A100 and 3 8x24G A10G machines where the gradient update steps are performed on A100 only and A10G machines are only used for distributed data collection (i.e. trajectory rollouts). The trajectory collection phase takes around 90% of the training time while gradient updates only take 10%. 
The trajectory collection speed using all 4 machines is around 1k-2k trajectories per hour, so 7B experiments can be completed within 2 days. Our run of the 34B model was conducted using 3 8x40G A100 machines, where gradient update steps are performed on one of the A100 machines and all machines participate in trajectory collection. The training for the 34B model takes around 5 days under this setup. We will include this section in the appendix for the revised version. ## The paper lacks comprehensive comparisons with other reinforcement learning algorithms and does not sufficiently explore performance differences with other LLMs. The novel finding of this paper is to study whether an entire self-improvement framework of LLMs for web agents is possible and what the necessary design choices are. While additional RL results are good to have, because of the computational constraints discussed in the last response, we were unable to perform systematic tuning for other reinforcement learning algorithms like PPO and GRPO, which involve more hyperparameters and higher sample complexity. Therefore, we refrained from drawing comparisons between different RL algorithms. With that being said, we are running additional experiments with PPO as the RL algorithm. We will share the results here once we have preliminary conclusions.
Summary: The authors propose Proposer-Agent-Evaluator (PAE) as a web agent that generates probable tasks (instructions) by itself and performs RL fine-tuning with binary rewards from VLM-based evaluators. For the generation of tasks for training, they employ a number of different pieces of information about target websites, such as website names and human demonstrations. Using the Set-of-Marks (SoM) annotations for web page screenshots, they fine-tune their VLM with binary rewards that are generated by prompt-based VLM evaluators, which take the last three web page screenshots and the agent's answer as input. The authors suggest that their approach brings performance improvements relative to supervised fine-tuned VLM agents. ## update after rebuttal Thank you for providing the rebuttal. Regarding the originality of this work, using an autonomous evaluation and fine-tuning based on the filtering was a practice used by "Autonomous Evaluation and Refinement of Digital Agents" (Pan et al., 2024, cited by this submission, as well), although the filtered behavior cloning was for their iOS agents, and it was first presented as a preprint on Apr 9, 2024 and as a non-preprint at the Multimodal Algorithmic Reasoning Workshop (CVPR) on Jun 17, 2024. Using foundation models for task proposal in the web navigation domain has been used by WebVoyager (He et al., ACL 2024) or even Mind2Web (Deng et al., NeurIPS 2023). As I mentioned in my original review, I am not suggesting that there are no differences from existing work, and I agree that empirically showing that self-improvement can work could be meaningful. But overall, my view regarding the originality remains similar. For the empirical evaluation, I believe demonstrating the effectiveness of the proposed framework with a stronger setup on a more difficult set of tasks would provide better empirical takeaways. 
While I appreciate the authors for providing the response to my review, I keep my original rating due to my primary concerns about this work. Claims And Evidence: - The assumption about providing the knowledge and information about desired websites to the task proposer is reasonable, because coming up with tasks that are useful and executable at the same time for different websites can be challenging. - On the other hand, the use of demonstrations on the websites comes with its own challenge of being outdated as the websites get updated. This can make the proposed approach less "automated," especially for websites that are not very well-known or get frequently updated. In this sense, the demonstration (screenshot)-guided task generation may not fully solve the issue of anticipating what is possible on each website. Methods And Evaluation Criteria: - The proposed method, Proposer-Agent-Evaluator, makes sense for web agents. As mentioned in the Claims and Evidence section, generating tasks in real-world domains like web navigation should be well-grounded. Theoretical Claims: There is not much of theoretical claims from this submission. Experimental Designs Or Analyses: - Although the authors provide an explanation for creating the "easy" split of WebArena, I believe using the original WebArena (even if it does not include the full list of websites it provides) would be better for comparison across different papers. Supplementary Material: I checked out the Algorithm and some trajectory examples. Relation To Broader Scientific Literature: - The task proposer may be applicable in other domains or environments. In that sense, there is a possibility of broader application of this work. Essential References Not Discussed: I think relevant papers are reasonably cited. Other Strengths And Weaknesses: - Overall, the originality of this work looks somewhat limited to me. 
My understanding is that using foundation models to generate task instructions, trajectory data collection with agents, and foundation model-based evaluators for evaluation and trajectory filtering (some of the corresponding prior work for these items is already mentioned in this work) are part of the standard practice for training web agents these days. While some decisions for these components may differ, I think the overall pipeline is similar to the norm. - The use of the term "skill" may not be appropriate in this context and may be replaced with a clearer term. For instance, I don't think there would be much difference if "policy" were used instead. At least in the context of agents and reinforcement learning, "skill" and "skill discovery" sound more appropriate for focusing on learning behaviors at levels smaller/lower than the task level (i.e., "skill" level), so that they can be combined or leveraged to solve more complex downstream tasks. Other Comments Or Suggestions: - I think the readability and presentation of the manuscript can be improved. For instance, many of the tables and figures use fonts that are too small and are hard to read. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your review and feedback on the paper. Just a gentle reminder that by the definition established by the ICML 2025 committee, concurrent work includes all papers from within 4 months of the submission deadline. This applies to all other pipelines for web agents [1,2,3] with similar components (task proposers, agents, and evaluators). As such, they should not be considered as an established standard. With that being said, we also provide additional discussions on the differences with these works to address your concerns regarding novelty and also answer your other concerns below. **Please let us know if your concerns are addressed and if so, we would appreciate it if you could re-evaluate our work based on this context. We are happy to discuss further.** ## Overall, the originality of this work looks somewhat limited to me First, we would like to emphasize that while other pipelines [1,2,3] with similar components (task proposers, agents, and evaluators) have been applied to web agents because of the effectiveness of these components, they were all made public within 4 months of the submission deadline of ICML. As stated in the reviewer guidelines of ICML, they should be considered as concurrent instead of an established standard, and “Authors cannot expect to discuss other papers that have only been made publicly available within four months of the submission deadline.” Additionally, a key difference is that [1,2,3] generate instructions and evaluation rewards from stronger proprietary models such as GPT4o to train weaker models, while our PAE system focuses on exploiting the asymmetric capabilities of foundation model web agents to achieve self-improvements, where even weaker models can be used to improve the performance of stronger agents. We propose that this is a meaningful contribution, as it demonstrates a path to advance state of the art capabilities. 
In our main results, we have also performed extensive experiment analysis seeking to understand when such self-improvements are possible (the use of a reasoning step, the use of sparse outcome-based reward, different choices of the models etc) and how well such self-improvements can generalize (to unseen tasks and unseen websites). This is the novel finding of this paper. [1] Nnetscape navigator: Complex demonstrations for web agents without a demonstrator, 2024. [2] Openwebvoyager: Building multimodal web agents via iterative real-world exploration, feedback and optimization, 2024b. [3] Webrl: Training llm web agents via self-evolving online curriculum reinforcement learning, 2024. ## Using the original WebArena would give better comparisons across different papers. In addition to experiments on WebArena Easy as reported in the main text, in appendix G Table 4 we also reported the performance of our models on the original WebArena using only screenshots as observations, and provided additional explanations in terms of why WebArena Easy is used as opposed to original WebArena. For your convenience, we also include results from Table 4 below. These results in Table 4 would be directly comparable with other papers testing screenshot-only models on WebArena. At the time when the main experiments of this paper were conducted, the best open-source VLM that we were able to train was LLaVa-Next but they were not able to achieve higher than chance performance (around 5%) for most of the websites on WebArena. In these cases, RL would be almost impossible because the base model does not perform meaningful explorations for these hard tasks. 
| | | OpenStreetMap | PostMill | OneStopMarket | Average | |----------------|-----------------|--------------:|---------:|--------------:|--------:| | Proprietary | Claude 3 Sonnet| 24.3| 10.6| 11.2 | 14.6 | | Open-Source | Qwen2VL-7B | 0.7| 0.0| 1.3 | 0.7 | | Open-Source | InternVL2.5-8B | 2.6| 0.2| 3.3 | 2.3 | | Open-Source | LLaVa-7B | 0.0| 0.0| 0.0 | 0.0 | | Ours | LLaVa-7B SFT | 15.2| 1.4| 5.8 | 7.2 | ## The use of the term "skill" may not be appropriate in this context and may be replaced with a clearer term. We used the term “skill” because most of the tasks that can be completed by open-source VLM web agents were restricted to rather short-horizon and atomic tasks like “find a chicken recipe with more than 4.8 reviews”. To avoid confusion, we will replace them with the term “policy” as suggested in the context. ## The readability and presentation of the manuscript can be improved. Many of the tables and figures have too small fonts to read. Thank you for the concrete advice on improving the presentation of the paper. We will increase the fonts in the updated version. --- Rebuttal Comment 1.1: Comment: Thank you for providing the rebuttal. Regarding the originality of this work, using an autonomous evaluation and fine-tuning based on the filtering was a practice used by "Autonomous Evaluation and Refinement of Digital Agents" (Pan et al., 2024, cited by this submission, as well), although the filtered behavior cloning was for their iOS agents, and it was first presented as a preprint on Apr 9, 2024 and as a non-preprint at the Multimodal Algorithmic Reasoning Workshop (CVPR) on Jun 17, 2024. Using foundation models for task proposal in the web navigation domain has been used by WebVoyager (He et al., ACL 2024) or even Mind2Web (Deng et al., NeurIPS 2023). As I mentioned in my original review, I am not suggesting that there are no differences from existing work, and I agree that empirically showing that self-improvement can work could be meaningful. 
But overall, my view regarding the originality remains similar. For the empirical evaluation, I believe demonstrating the effectiveness of the proposed framework with a stronger setup on a more difficult set of tasks would provide better empirical takeaways. While I appreciate the authors for providing the response to my review, I keep my original rating due to my primary concerns about this work. --- Reply to Comment 1.1.1: Comment: We appreciate your response to our rebuttal, and thank you for acknowledging the differences between our paper and existing work and the meaningful contribution of showing that self-improvement can work. We agree that carefully ablating the design choices of each component in the system and performing extensive experiments to understand self-improvement are exactly the contributions of this paper, rather than an individual novel component. We understand that you suggest we try the proposed framework on a more difficult set of tasks, but WebVoyager and WebArena were the hardest multi-turn web agent benchmarks at the time the experiments were conducted (as shown in Table 1 and Table 2, even the strongest Claude Sonnet 3.5 can only achieve ~50% success rate). We have also tried a harder set of tasks on WebArena in Table 4, but unfortunately no open-source VLMs at the time of the experiments (even after SFT) could perform meaningful exploration on it, so the tasks considered in our paper are already the hardest multi-turn web agent tasks feasible for open-source VLM web agents.
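The prompt-based VLM evaluator described in this review thread (last three web page screenshots plus the agent's answer in, a sparse 0/1 outcome reward out) can be sketched as below. Here `vlm_query` is a hypothetical stand-in for any VLM call taking a prompt and images and returning text; neither the function name nor the prompt wording comes from the paper:

```python
def binary_outcome_reward(vlm_query, task, screenshots, final_answer):
    """Sketch of an autonomous outcome evaluator: the VLM judges success
    from the task description, the last three screenshots, and the agent's
    final answer, and the verdict is mapped to a binary reward."""
    prompt = (
        f"Task: {task}\n"
        f"Agent's final answer: {final_answer}\n"
        "Based on the attached screenshots, did the agent complete the task?\n"
        "Reply with exactly SUCCESS or FAILURE."
    )
    # Only the last three screenshots are passed, per the review's description.
    verdict = vlm_query(prompt, images=screenshots[-3:])
    return 1 if verdict.strip().upper().startswith("SUCCESS") else 0
```

Constraining the verdict to a fixed vocabulary and parsing it defensively is one simple way to keep a free-form VLM response usable as a 0/1 training signal.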
Multi-Agent Design: Optimizing Agents with Better Prompts and Topologies
Reject
Summary: The authors propose a sequence of three phases to come up with a high-performing agentic system. The phases are: 1. individual “block” prompt optimization as warmup, 2. the topology refinement phase without altering the prompts, and 3. joint prompt optimization for all “blocks”. In phase 2 the authors score the topologies with the introduced “incremental influence” metric. The authors verify the proposed method on an extensive set of 8 benchmarks. Even though the majority of the experiments were done on a single model, Gemini 1.5 Pro, the authors verify the findings on Claude 3.5 Sonnet. The authors compare with an extensive set of 6 baselines including strong competitors. For prompt optimization the authors use the off-the-shelf MIPRO optimizer. Claims And Evidence: The work claims that the combination of prompt and topology optimization is the key to high MAS performance. This claim is well supported by the experimental results. While topology optimization is well discussed and compared to the prior works, the prompt optimization is arbitrarily chosen to be MIPRO, without considering or even discussing the alternatives: [1-4]. Many references to the key papers on automatic prompt optimization (APO) are missing: [1] Yang et al., 2024. Large language models as optimizers. [2] Yuksekgonul et al., 2024. TextGrad: Automatic "Differentiation" via Text. [3] Cheng et al., 2024. Trace is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and LLMs. [4] Wang et al., 2024. How to Correctly do Semantic Backpropagation on Language-based Agentic Systems. The claim that stages 1 and 2 of MASS are parallelizable is legit. However, the details of the implementation are incomplete. In Figure 2, the “propose new workflow” should be the focal point of the implementation walkthrough, but it is barely discussed in the paper in Section “Workflow topology optimization”. 
The formal algorithm for “propose new workflows” is missing even though it is mentioned as step 13 in Algorithm 1. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical claims. The convergence of all three optimization stages of the algorithm is not studied. Experimental Designs Or Analyses: The scores of the baselines and the proposed method are sound. Supplementary Material: I did not find the supplementary material, specifically the code of the experiments. Without the code and without the formal algorithm for “propose new workflows”, the paper is incomplete. Relation To Broader Scientific Literature: The paper is an effort towards AGI. Essential References Not Discussed: The 4 essential references are not discussed; please see “Claims And Evidence”. Other Strengths And Weaknesses: The undoubted merit of the paper is showcasing joint prompt and workflow optimization and its benefits, even though these two were performed in an interleaved manner. I appreciate Figure 8 with the final found best topologies for each dataset. Other Comments Or Suggestions: I’d appreciate examples of original and refined prompts during 1PO and 3PO. It is not clear what a typical prompt improvement looks like. Questions For Authors: In formula 1 it is not clear why $a$ is a function of a single sample $x$ (referring to $a(x)$). Is the workflow of the configuration $a$ a singular version applied to all samples of the dataset $D$? There is no formal definition of a “block” in Section 2. Line 307: it’s not clear what a “proper” prompt design means. Ethical Review Concerns: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
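For reference, the expectation-over-dataset objective that the question about formula 1 concerns can be written as follows. This is a hedged reconstruction from the descriptions in this thread (a per-sample configuration $a(x)$ scored in expectation over $\mathcal{D}$), not the paper's exact Equation 1:

```latex
% Hypothetical reconstruction: the single configuration a is applied to each
% sample x, and the evaluation score is marginalized over the whole dataset D.
a^{*} = \arg\max_{a \in \mathcal{A}} \; \mathbb{E}_{x \sim \mathcal{D}}\!\left[ \mathrm{Eval}\!\left( a(x) \right) \right]
```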
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful suggestions and acknowledging the undoubted merits of MASS-style optimization! We appreciate the reviewer pointing out the details in the paper that could be further clarified. We have provided further clarifications in this response and will update the final manuscript. We hope that our response sufficiently addresses the reviewer’s concerns, and the reviewer could consider improving their score. > Alternatives to the prompt optimizer. We appreciate the reviewer for suggesting many insightful works that have advanced the field of prompt optimization and will certainly include all your referred literature in the related work. We’d like to recall that MASS is a plug-and-play framework with arbitrary prompt optimizers, and one of our primary contributions is identifying the influence of prompt optimization on the MAS. We integrate MIPRO as a representative prompt optimizer due to the importance of simultaneous instruction and exemplar optimization, which has been justified in [1, 2] that show superior performance over OPRO-style [3] instruction-only optimization methods. It is also worth noting that the MASS framework itself is agnostic to prompt optimizer, and thus any prospective better methods can only *enhance* the overall performance of MASS. Below, we additionally provide an ablation of common prompt optimizers, and we show MASS with exemplar optimization (+DSPy) also led to significant gains. In line with the reviewer, we have also considered extending the existing PO to feedback-based optimizers (e.g., ProTeGi or TextGrad) that may come with better sample efficiency, which we have included as a desirable future work (line 727). |Data|MATH| |-|-| |CoT|66.7| |MASS|| |+APE|73.3| |+DSPy|78.2| |+MIPRO|81.0| [1] Opsahl-Ong, K., Ryan, M. J., Purtell, J., Broman, D., Potts, C., Zaharia, M., & Khattab, O. (2024). Optimizing Instructions and Demonstrations for Multi-Stage Language Model Programs. 
EMNLP 2024. [2] Wan, X., Sun, R., Nakhost, H., & Arik, S. O. (2024). Teach Better or Show Smarter? On Instructions and Exemplars in Automatic Prompt Optimization. NeurIPS 2024. [3] Yang, C., Wang, X., Lu, Y., Liu, H., Le, Q. V., Zhou, D., & Chen, X. (2023). Large Language Models as Optimizers. ICLR 2024. > Details of parallelization in MASS. We thank the reviewer for pointing out the parallelizable feature of our optimization. We will highlight in Algorithm 1, lines 5-8 & 12-13, to indicate phases that can be parallelized to improve the efficiency of MASS implementation. > Clarification on the “propose new workflow” in workflow-level topology optimization. We thank the reviewer for suggesting that we formulate the topology optimization more precisely. Given the configuration space per topology building block, as described in step 12, we conduct rejection sampling to sample workflow candidates. Formally, the workflow is randomly sampled from a pruned configuration space within a maximum budget, such that $a \sim \mathcal{A}\ \text{s.t.}\ N(a) < \mathrm{Budget}$, where $N(a)$ caps the overall number of agents; $\mathcal{A} = (N_{\text{aggregate}}, N_{\text{reflect}}, N_{\text{debate}}, N_{\text{Tool}}, \dots)$ is the configuration space as defined in Sec 2.2, and each search dimension will be weighted by the influence of that dimension and treated as deactivated if $\mathrm{Uniform}(0, 1) > p_{a_i}$. Following that, the workflow $W(a) = (a_i, a_{i+1}, \dots)$ is constructed according to a predefined rule to arrange the order of agents (line 267). We included the specification of the detailed search space in the App. A, Table 3. Overall, we thank the reviewer and will update Algorithm 1, step 13 from a one-line description to the suggested mathematical formulation. > In formula 1, it is not clear why $a$ is a function of a single sample $x$ (referring to $a(x)$). Is the workflow of the configuration a singular version applied to all samples of the dataset. 
In Equation 1, the optimization objective is to maximize the **expectation** of a configuration $a$ over all samples. Therefore, it is expressed as $a(x)$ for a single sample but marginalized over the whole dataset $\mathcal{D}$.

> There is no formal definition of a "block" in Section 2.

We thank the reviewer for the suggestion of providing a formal definition of "blocks" earlier. In Sec. 2, line 65, we refer to the topology of agents as building blocks. Formally, building blocks represent the minimum set of agents within a type of topology. They form the final search space of MASS. We currently define them from Sec. 2.2, line 161, with a visualization provided in Figure 8. We will move this information earlier in light of your suggestion.

> Line 307. It's not clear what a "proper" prompt design means.

We appreciate the reviewer pointing this out. The "proper" prompt design actually refers to "prompt optimization", and we will rephrase the sentence using "prompt optimization" instead for clarity.
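As a toy illustration of this objective (all names below are hypothetical and not taken from the paper's code), a single configuration $a$ is scored by its per-sample metric averaged over the validation set $\mathcal{D}$:

```python
def expected_score(config, dataset, evaluate):
    """Estimate E_{x ~ D}[ metric(a(x)) ] for one workflow configuration:
    the same configuration is applied to every sample, and the per-sample
    scores are averaged over the whole validation set."""
    return sum(evaluate(config, x) for x in dataset) / len(dataset)

# toy usage: the "workflow" is a multiplier, the metric is exact match
dataset = [(1, 2), (2, 4), (3, 5)]
evaluate = lambda a, xy: 1.0 if a * xy[0] == xy[1] else 0.0
score = expected_score(2, dataset, evaluate)  # (1 + 1 + 0) / 3
```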
Summary: This paper investigates the interactions between multiple design spaces, including prompts and topologies, and examines the impact of various aspects such as optimizing prompts, scaling the number of agents, and involving different types of topologies. The optimized MAS is generated by optimizing the identified influential components. The optimization process consists of block-level prompt optimization, workflow topology optimization, and workflow-level prompt optimization.

Claims And Evidence: Yes. The claims are supported by clear evidence.

Methods And Evaluation Criteria: The evaluation criteria generally make sense. Reporting metrics about cost is also needed.

Theoretical Claims: There is no theoretical claim in this paper.

Experimental Designs Or Analyses: I have checked the experiments. The chosen benchmarks are generally appropriate. However, since the optimization incurs API costs, the limited discussion of cost is a concern.

Supplementary Material: some of the prompts

Relation To Broader Scientific Literature: May be related to AI agents.

Essential References Not Discussed: no

Other Strengths And Weaknesses:

### Strengths
1. The design factors for MAS performance are analyzed, the importance of prompts is emphasized, and APO is applied to MAS.
2. The overall logic is good and makes the reasons behind the method's design clear. The motivation is clear.

### Weaknesses
1. The optimization process relies heavily on a pre-defined dataset, and the pipeline is rather long, making workflow optimization slow and resource-intensive.
2. Critical algorithms are not described in detail, just sketched over. For example, it is unclear how the authors optimize the topology.
3. Token consumption is not compared or discussed. As the proposed method involves many steps that consume tokens, this part is critical.
Other Comments Or Suggestions: It seems that the proposed method requires substantial token consumption during optimization, which should be discussed in detail.

Questions For Authors: In Algorithm 1, how do you prune the design space based on the selection probability?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for their insightful suggestions! Please see below for our detailed response, where we provide more details on the token consumption along with further clarifications on optimizing the topology. We will include your valuable discussion in the manuscript. We hope that the reviewer will consider increasing their score if they feel the concerns have been sufficiently addressed.

> The optimization process relies too much on a pre-defined dataset and the pipeline is somehow long, making it slow and resource-intensive for optimizing the workflow.

We agree with the reviewer that optimizing a workflow requires a validation set. However, we would like to recall that all workflow optimization baselines require some form of labeled samples, and, as reflected in Figure 6, MASS is capable of exploiting a more refined and effective search space, advancing multi-agent performance while being more computation-efficient than the state-of-the-art automatic agent design methods (ADAS and AFlow). The 3-stage pipeline, while seemingly long, is justified in that each stage leads to concrete performance improvements (Fig 5 & Table 5). In future work, we expect the sample efficiency of MASS could be further improved by researching more sample-efficient prompt optimizers (Line 727) and low-cost proxies as optimization objectives.

> Critical algorithms are not described in detail, just scribbled over. For example, it is unclear how the authors optimize the topology.

We thank the reviewer for suggesting a better description of the topology optimization. Given the configuration space per topology building block, we conduct rejection sampling to sample workflow candidates.
Formally, the workflow is randomly sampled from a pruned configuration space within a maximum budget, such that $a \sim \mathcal{A} \ \text{s.t.}\ N(a) < \text{Budget}$, where $N(a)$ caps the overall number of agents; $\mathcal{A} = (N_{aggregate}, N_{reflect}, N_{debate}, N_{Tool}, \dots)$ is the configuration space as defined in Sec 2.2, and each search dimension is weighted by the influence of that dimension and rejected if $\mathrm{Uniform}(0, 1) > p_{a_{i}}$. Following that, the workflow $W(a)=(a_i, a_{i+1}, \dots)$ is constructed following a predefined rule that arranges the order of agents (line 267). We have included the specification of the detailed search space in App. A, Table 3. Overall, we thank the reviewer and will update Algorithm 1, steps 12 & 13 with better mathematical formulations.

> The comparisons on token consumption are not discussed. As the proposed method involves many steps that require token consumption, this part is critical.

We thank the reviewer for suggesting a token consumption report. Here, we include an additional table of the actual token cost compared to baselines. In accordance with Figure 6 in the paper, we show that the training cost of MASS is comparable to state-of-the-art automatic agent design baselines.

|Model|Training: Input Token|Output Token|Cost ($)|Runtime (mins)|Inference (per query): Input Token|Output Token|Cost ($)|Acc (%)|
|-|-|-|-|-|-|-|-|-|
|SC|||||1538|3013|0.0010|69.3|
|Reflect|||||2051|850|0.0004|71.3|
|Debate|||||6536|2483|0.0012|71.7|
|AFlow|11M|8M|3.89|67|2523|1481|0.0006|64.3|
|ADAS|23M|13M|5.61|55|7850|3335|0.0016|72.7|
|MASS|24M|11M|5.09|58|6645|3263|0.0014|81.0|

> In Algorithm 1, how do you prune the design space based on the selection probability?

In sampling the valid configuration of agents at each iteration, we first prune each search dimension with the normalized selection probability, guaranteeing that at least one dimension is kept activated.
The target dimension is rejected if $\mathrm{Uniform}(0, 1) > p_{a_{i}}$, where $a_{i}$ refers to an individual dimension in $\mathcal{A}$. For example, given the original search space $\mathcal{A}=(N_{aggregate}, N_{reflect}, N_{debate}, \dots)$, if $p_{reflect}$ is 0.8, then in each iteration of sampling this search dimension has a 20% chance of being pruned (i.e., turned off), and the remaining dimensions form the current search space, such that $\mathcal{A}$ becomes $(N_{aggregate}, N_{debate}, \dots)$. We hope this addresses your concerns.
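The pruning-plus-rejection-sampling step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-dimension agent counts, the budget value, and all names are illustrative assumptions.

```python
import random

def sample_workflow(selection_probs, budget, max_tries=1000, rng=random):
    """Sketch of the pruned rejection sampling described above.
    `selection_probs` maps each topology dimension (e.g. 'aggregate',
    'reflect', 'debate') to its influence probability p_{a_i}."""
    for _ in range(max_tries):
        # prune: keep dimension a_i only if Uniform(0, 1) <= p_{a_i}
        active = {d for d, p in selection_probs.items() if rng.random() <= p}
        if not active:  # guarantee at least one dimension stays activated
            active = {max(selection_probs, key=selection_probs.get)}
        # sample an agent count per active dimension (toy range 1..3)
        config = {d: rng.randint(1, 3) for d in active}
        # reject configurations exceeding the overall agent budget N(a)
        if sum(config.values()) < budget:
            return config
    raise RuntimeError("no valid configuration found within budget")
```

A subsequent (predefined) rule would then arrange the sampled agents into an ordered workflow.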
Summary: The paper proposes Multi-Agent System Search (MASS), a novel multi-stage framework designed to optimize multi-agent systems (MAS) by automating the design of prompts and topologies. The authors demonstrate that both prompt design and agent topology significantly impact the performance of MAS. The optimization process is divided into three stages: 1) block-level prompt optimization (local prompt optimization for each agent); 2) workflow topology optimization (determining the most effective agent configuration); and 3) workflow-level prompt optimization (global optimization of prompts conditioned on the selected topology). The MASS framework leverages a plug-and-play approach to optimize individual agent prompts and the overall workflow structure, resulting in a more effective MAS for complex tasks. The authors claim that MASS outperforms existing multi-agent systems, both manually-crafted and automated, across multiple benchmarks, including reasoning tasks (e.g., MATH), multi-hop understanding (e.g., HotpotQA), and code generation tasks (e.g., LiveCodeBench). Furthermore, the paper proposes guidelines for building effective MAS based on the insights gained from the optimization process.

## update after rebuttal

Thanks for the authors' response and revision. I've checked the authors' responses as well as the concerns & comments of all other reviewers. I agree that the revised paper has improved over the original version. However, it is still not enough to increase my rating to the next level, and I'd keep my rating as "weak accept" considering the level of significance and novelty of the paper. I still think it is a borderline paper - it might be publishable for ICML if there is space, but I won't push hard for its acceptance if other reviewers have strongly different opinions.

Claims And Evidence: The claims in this paper are largely supported by empirical evidence.
The authors compare MASS against several baselines (e.g., CoT, Self-Consistency, Self-Refine, Multi-Agent Debate, ADAS, and AFlow), showing that MASS leads to substantial performance improvements across all tasks. Results are provided in tables and figures (e.g., Table 1 and Figures 2-7), demonstrating the effectiveness of the framework.

Strengths: The paper presents a clear experimental setup with rigorous benchmarks. Results in Table 1 show substantial performance gains for MASS in comparison to both manual and automatic MAS systems. Ablation studies (Figures 5 & 6) effectively demonstrate the importance of each optimization stage in MASS.

Potential Issues: While the results are compelling, the generalization to larger-scale or more diverse real-world settings is not extensively explored. Future work could address this by applying MASS to a broader range of domains.

Methods And Evaluation Criteria: The methods presented are well-motivated and effective for the problem at hand. The approach is a clear improvement over prior work, particularly due to the integration of both prompt and topology optimization in a staged process. The pruned search space reduces the combinatorial complexity, allowing for more efficient optimization of MAS.

Evaluation Criteria: The paper uses a wide range of benchmarks to validate MASS, such as reasoning tasks, multi-hop understanding, and coding tasks. The comparative performance across tasks shows that MASS consistently outperforms existing baselines, both manual and automated.

Strengths: The evaluation is thorough and diverse, covering a wide array of task domains. Real-world applications such as LiveCodeBench and HotpotQA demonstrate the scalability and robustness of MASS.

Theoretical Claims: The theoretical claims seem sound. The authors claim that prompts and topologies play critical roles in MAS design, and they back this up with an in-depth analysis of the design space.
The paper formulates optimization problems and provides justifications for pruning the search space based on observed relationships between prompt design and system performance.

Correctness: The mathematical formulation of the optimization problem is clear, and the theory behind the multi-stage optimization process is well-articulated. The stage-by-stage optimization approach is logically sound, and the benefits of each stage (block-level prompt optimization, topology optimization, and workflow-level prompt optimization) are well-supported by experimental data.

Experimental Designs Or Analyses: The experimental design is solid, and the analysis appears robust. The authors compare MASS with a variety of baselines, providing detailed statistical results and ablation studies that validate the contributions of each part of the MASS framework.

Strengths: Ablation studies show that stage-wise optimization (starting from block-level to workflow-level) provides significant improvements. Cost-effectiveness analysis demonstrates the computational efficiency of MASS, including comparisons with baselines like AFlow and ADAS.

Possible Concerns: While the ablation studies are comprehensive, the real-world applicability of MASS (e.g., in extremely large-scale systems with thousands of agents) could be further explored in future experiments.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The paper makes important contributions to the field of automated multi-agent system design. It draws on existing work related to prompt optimization, multi-agent collaboration, and topology design. The paper positions MASS as a significant improvement over prior methods like ADAS, AFlow, and DyLAN.

Related Work: The paper acknowledges existing works in MAS optimization, such as DyLAN (Liu et al., 2024) and Archon (Saad-Falcon et al., 2024), but it goes beyond them by incorporating both prompt optimization and topology optimization in the design process.
The use of joint optimization (prompts and topologies) aligns with emerging trends in neural architecture search (NAS), where search space design is becoming increasingly important (e.g., Zhou et al., 2023).

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: Some weaknesses & areas for improvement:

1) Why specific topologies are more effective than others is not deeply explored beyond empirical evidence.
2) Scalability considerations: The paper does not extensively discuss the computational cost of running MASS. How MASS scales with increasing agent complexity remains unclear. More details on runtime and computational overhead across different tasks would be useful.
3) Baseline comparisons & ablations: a deeper discussion of why MASS outperforms AFlow across different tasks would add value. It would also be beneficial to include an ablation study evaluating the impact of different topology configurations.
4) While the paper provides strong empirical results, a discussion on limitations (e.g., cases where MASS underperforms) is missing.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for their insightful and positive comments, especially for the many valuable suggestions on a deeper discussion of the topology design space, the impact of actual configurations, and further extending MASS to real-world applications! We have included your suggestions in this response and will also incorporate them into the final manuscript. We hope that in light of the response, the reviewer will consider improving their score.

> While the results are compelling, the generalization to larger-scale or more diverse real-world settings is not extensively explored.

Thank you for suggesting the exploration of larger-scale, real-world agent applications. The scale investigated in our work (roughly O(10) agents) aligns with current state-of-the-art agent design methodologies. Importantly, this scale is characteristic of numerous deployed real-world MAS applications where complex interactions within smaller teams are common [1]. Therefore, while we recognize the value of scaling further and consider it an important avenue for future research, our current focus addresses a highly relevant regime. Extending MASS to systems with substantially more agents remains a key area for future investigation.

[1] Xia, C. S., Deng, Y., Dunn, S., & Zhang, L. (2024). Agentless: Demystifying LLM-based Software Engineering Agents.

> Why specific topologies are more effective than others is not deeply explored beyond empirical evidence. It would also be beneficial to include an ablation study evaluating the impact of different topology configurations.

We thank the reviewer for raising this very insightful question! The optimal topology does indicate certain patterns per task family, and there are topologies that demonstrate clear advantages over others on particular tasks.
By inspecting Figure 8, we notice that the "debating" topology brings significant gains to all multi-hop tasks that require factual knowledge (HotpotQA, MuSiQue, and 2WikiMQA), aligning with [2], which argues that debating elicits more truthful answers. The reasoning tasks MATH and DROP benefit from more exploration, where SC becomes more effective. Lastly, the coding tasks share a common pattern of reflection with tool use. However, even the best configurations within the same task family still show differences, indicating the necessity of automatic optimization. Therefore, whatever the underlying complexity of the task-dependent topology, the unique advantage of MASS is being able to identify the most influential topology automatically for any customized search space. We will incorporate this discussion into a new ablation subsection.

[2] Khan, A., Hughes, J., Valentine, D., Ruis, L., Sachan, K., Radhakrishnan, A., ... & Perez, E. (2024). Debating with more persuasive LLMs leads to more truthful answers. ICML 2024.

> Scalability considerations: The paper does not extensively discuss the computational cost of running MASS. How MASS scales with increasing agent complexity remains unclear. More details on runtime and computational overhead across different tasks would be useful.

We thank the reviewer for suggesting a token consumption report. Here, we include an additional table of the actual token cost for running MASS and how it compares to baselines, where we show that the training cost and the actual run-time of MASS are comparable to the training cost of the auto-agent baselines.
|Method|Training: Input Token|Output Token|Cost ($)|Runtime (mins)|Inference (per query): Input Token|Output Token|Cost ($)|Acc (%)|
|-|-|-|-|-|-|-|-|-|
|SC|||||1538|3013|0.0010|69.3|
|Reflect|||||2051|850|0.0004|71.3|
|Debate|||||6536|2483|0.0012|71.7|
|AFlow|11M|8M|3.89|67|2523|1481|0.0006|64.3|
|ADAS|23M|13M|5.61|55|7850|3335|0.0016|72.7|
|MASS|24M|11M|5.09|58|6645|3263|0.0014|81.0|

> Baseline comparisons & ablations: a deeper discussion of why MASS outperforms AFlow across different tasks would add value. While the paper provides strong empirical results, a discussion on limitations (e.g., cases where MASS underperforms) is missing.

We agree with the reviewer that a deeper discussion of AFlow can shed light on and add value to future designs. We've provided a discussion in Lines 323-382 and Lines 431-434, and we are happy to further elaborate: the core differentiation between MASS and AFlow sits in the design of the optimization dimension (i.e., the search space), and we observe that the importance of search space design outweighs that of the actual search algorithm, as has been reflected in much of the NAS literature. More precisely, MASS exploits a more effective prompt design space in conjunction with a general topology space, whereas AFlow conducts its search in a more constrained set of operators with a very limited prompt space tailored to certain tasks. Therefore, the human prior in defining these operators can provide implicit advantages for some tasks (e.g., 2WikiMQA in Table 1), where MASS shows lower but still comparable results.
Summary: This paper formulates multi-agent system design as a joint prompt and topology optimization problem. It introduces the MASS framework, a multi-stage search process that interleaves block-level prompt optimization, workflow topology optimization, and workflow-level prompt refinement, to efficiently navigate the vast MAS design space. By focusing on the most influential prompts and a pruned set of topologies, the method dynamically constructs multi-agent systems. Experiments across benchmarks for reasoning, long-context understanding, and coding demonstrate that MASS significantly outperforms both manually-crafted and automatically-generated alternatives.

Claims And Evidence:
1. The paper proposes the MASS framework that jointly optimizes both the agent prompts and the system topology. It addresses the interdependence of prompt design and agent connectivity, which prior work often treats in isolation.
2. The paper includes detailed ablation studies that clearly illustrate the contribution of each optimization stage, enhancing the credibility of the proposed method.

Methods And Evaluation Criteria:
1. My main concern is that the proposed method appears to lack sufficient novelty. The prompt optimizer leverages the existing MIPRO approach, and the topology optimization primarily relies on established topological designs.
2. The proposed method depends on many manual design choices, such as the topology search space, and does not explain why these particular topology structures were chosen. Moreover, these topology structures might not be applicable to all domains.

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses: The paper uses large base models, Gemini 1.5 and Claude 3.5 Sonnet, which makes it unclear whether the proposed method would be effective on smaller open-source models, such as Llama 3.1 8B, and other proprietary models. This limitation restricts the method's applicability.
The paper does not compare with some essential baselines, such as GPTSwarm (which also treats agents as optimizable graphs), MacNet (which also considers LLM agents on graphs), and more recent multi-agent debate methods. Moreover, the proposed method consumes a great number of tokens. The efficiency of the proposed method has not been compared, especially against more cost-efficient methods such as self-consistency.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: This paper is broadly related to multi-agent systems research and LLM research.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: No.

Other Comments Or Suggestions: There are still some minor errors in the manuscript; for example, it appears the x-ticks are missing in Figure 3.

---------------post rebuttal-----------------------------

I am not convinced that it is worthwhile to spend so much effort (in terms of the number of tokens consumed, which is the most straightforward measurement, rather than the money spent) to "optimize" the topologies for a performance improvement on only a single dataset. In fact, without clear evidence of generalization, **this approach risks being a form of "overfitting."** However, this may not be a limitation unique to this specific work, but rather a broader issue within this research direction. It might therefore be unfair to criticize only this paper for this limitation. With that in mind, I have adjusted my rating from 2 to 3 (weak accept, though rejection remains justifiable).

Questions For Authors:
1. Figure 2 suggests that prompt-optimized agents exhibit superior scalability relative to Debate, Reflect, and self-consistency. Could the authors provide additional evidence or discussion on whether this scalability advantage persists across other models and benchmarks?
2. Figure 8 shows that the agent topologies are noticeably more complex than the baseline approaches (e.g., CoT, self-consistency) presented in Table 1.
Could the authors elaborate on whether this discrepancy in complexity affects the fairness of the comparison? Moreover, if the sampling numbers for the Self-Consistency baseline were increased, would the proposed method still maintain its performance advantage?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for their constructive suggestions! We have included additional experimental results with open-source models and a graph optimization baseline, and provided clear comparisons of the token consumption. We hope that in light of our response, the reviewer will consider improving their score.

> Novelty of MASS

A key contribution of this work is highlighting critical, under-explored design dimensions in MAS: the significant impact of prompt engineering and the redundancy in conventional topology choices. Unlike prior approaches that often use manual prompts or emphasize scaling alone, we demonstrate that optimizing these elements yields substantial gains and interacts critically with other dimensions like agent scaling. We argue this insight is foundational for MAS, a developing field, as it clarifies the necessity for automated co-optimization, answering why methods like strong prompt optimization are essential.

Leveraging this understanding, we introduce MASS, a novel methodology for automated MAS design. While adaptable to various prompt optimizers (MIPRO was used here), MASS employs a distinctive three-stage, interleaved optimization strategy. This approach navigates the complex design space by sequentially refining the most influential components within pruned search space boundaries. Consequently, MASS achieves state-of-the-art performance, significantly outperforming existing automated design methods and the specialized prompt optimizer used in isolation (Fig 5, left).

> Justification of topology design choices

While there is some manual design involved in deciding the search perimeter, our topology search space aligns with well-established topological designs, including SC, reflect, and debate, which have been shown to be generalizable and effective across a wide spectrum of tasks in the broader literature and were also used in the search spaces of seminal previous works like ADAS and AFlow, which we also chose for fair comparison and generality.
However, the MASS framework itself does not depend on a specific topology space, and we can easily extend it to customized topology choices. We leave experimental results with MASS on other topology search spaces to future work.

> Open-source models

We extend the experiments to Mistral-Nemo-12B, where consistent gains demonstrate the applicability of MASS even to small open-source models.

|Method|MATH|DROP|HotpotQA|
|-|-|-|-|
|CoT|13.3|49.0|55.9|
|SC|22.0|57.6|58.9|
|Refine|14.3|48.6|52.5|
|Debate|26.0|61.4|59.5|
|MASS|43.7|68.4|62.6|

> Graph optimization baseline

Following the reviewer's suggestion, we report GPTSwarm and observe that graph optimization is more effective at improving inference efficiency (from a fully-connected graph to a sparse graph) than at enhancing task performance, whereas the prompt optimization component of MASS in particular leads to more significant contributions.

|Method|MATH|HumanEval|
|-|-|-|
|GPTSwarm (Pro)|76.0|85.0|
|MASS (Pro)|84.7|91.7|
|GPTSwarm (Flash)|61.0|73.0|
|MASS (Flash)|81.0|84.7|

> Cost-efficiency of the method

We present the token consumption comparison for gemini-flash below, where it is clear that MASS consumes compute comparable to ADAS and AFlow:

|Model|Training: Input Token|Output Token|Cost ($)|Runtime (mins)|Inference (per query): Input Token|Output Token|Cost ($)|Acc (%)|
|-|-|-|-|-|-|-|-|-|
|SC|||||1538|3013|0.0010|69.3|
|Reflect|||||2051|850|0.0004|71.3|
|Debate|||||6536|2483|0.0012|71.7|
|AFlow|11M|8M|3.89|67|2523|1481|0.0006|64.3|
|ADAS|23M|13M|5.61|55|7850|3335|0.0016|72.7|
|MASS|24M|11M|5.09|58|6645|3263|0.0014|81.0|

> Q1: Generalization on prompt scalability

We'd like to refer to Table 5, Page 16, where the gain from "Base" to "1PO" (the prompt optimization step) far exceeds, e.g., that from "1PO" to "2TO" (the topology search step), which aligns with the observation in Fig 2.
We additionally show Claude results below, which demonstrate that the observation is not specific to Gemini models.

|Claude|Avg.|
|-|-|
|Base|60.2|
|1PO|70.0|
|2TO|71.9|
|3PO|72.4|

> Q2: Fairness of comparisons

As shown in the cost comparison table above, we'd like to emphasize that we broadly controlled for cost, and all methods consume roughly the same token/dollar costs. We also provide detailed specifications of the baselines in App. B.2, all of which come with a fair amount of token consumption compared to MASS. Regarding further scaling SC: first, we note that SC is part of the MASS search space, so MASS can naturally benefit from SC scaling. Second, as we have shown in Figures 2 & 9, even when SC brings significant benefits, SC still saturates earlier than MASS-optimized topologies, which show better token-effectiveness. On the other tasks in Table 1, even when assigned a large budget, SC only makes limited gains on multi-hop tasks, whereas the Debate topology substantially advances performance. This observation further consolidates the necessity of automatic topology optimization in MASS.
Progressive Tempering Sampler with Diffusion
Accept (poster)
Summary: This paper proposes learning diffusion models at various temperatures using MCMC data at high temperatures. These models are then used to sample from unnormalized probability distributions. The authors employ a Taylor expansion of the target distribution over temperature to derive a temperature-dependent drift term, which is sequentially applied to generate samples at lower temperatures. Biases are corrected using importance resampling. The method is evaluated on three different target distributions commonly referenced in the literature.

Claims And Evidence: The authors claim to achieve significant improvements in target density evaluation efficiency compared to previous diffusion-based neural samplers. However, upon examining Tables 1 and 2, I am not convinced that this claim is fully substantiated. To me it looks like PT+DM performs better in many cases and possibly insignificantly worse in some cases.

Methods And Evaluation Criteria: The evaluation criteria are sensible, but it would be beneficial if the authors also reported the Evidence Lower Bound (ELBO). The method is evaluated on only a few problem types, which is less comprehensive than typical evaluations in related literature. Additionally, the 2-D Gaussian Mixture Model (GMM) problem is quite simplistic.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experimental design is logical, and some ablation studies are conducted.

Supplementary Material: I reviewed the supplementary material for more details about the experiments. While some descriptions are provided, they are insufficient for reproducing the results.

Relation To Broader Scientific Literature: The broader relation to scientific literature is established.

Essential References Not Discussed: Essential references are discussed.

Other Strengths And Weaknesses:

**Strengths:**
- The idea of combining diffusion models with parallel tempering is compelling.

**Weaknesses:**
- Important experimental details are missing.
For instance, for the LJ-55 problem, it is unclear what architectures are used for each method, how equivariance is ensured, and how many diffusion steps are employed.
- There is no documentation of the attempts made for methods that diverge on the LJ-55 problem.
- The experiments are limited to three tasks, one of which (GMM-2D) is overly simplistic. I suggest using more challenging alternatives like GMM-50D and Mixture of Students in 50D.
- The Taylor expansion with respect to temperature, where $\Delta T$ is significantly greater than 0, seems to be a crude approximation.
- The method introduces many additional hyperparameters (Tab. 4), and it is unclear how much tuning is required. It is also uncertain whether the same level of hyperparameter tuning was applied to other methods, particularly PT+DM.
- It is unclear what type of diffusion model was used for PT+DM. Overall, the experiments should be described in more detail.
- Figure 1 suggests that PTSD performs better than PT+DM, but Table 1 indicates that it is often outperformed by other methods.
- No error bars are reported.

Other Comments Or Suggestions:
- "PF-ODE" is inconsistently written in three different ways in the paper.
- Line 288: Consider providing a reference to the appendix and reporting the truncation threshold used.
- Lines 120 ff.: Although not part of the proposed method, could the authors elaborate on whether the swaps are performed with a certain probability after each MCMC update step?
- From where is the MW32 ground truth data obtained?
- L. 127 ff.: Please make clear that you are using the Variance Exploding SDE. The notation with $\dot{\sigma}$ is rather unusual, so please explain that you are using the same notation as in Song 2021.

Questions For Authors:
- Do you train your diffusion model using Hutchinson's trace estimator or using score matching?
- How does your procedure compare to training CMCD and other diffusion samplers on data from parallel tempering and then fine-tuning it on the target distribution? - How many samples are used for evaluation in Table 1? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and your constructive comments. We hope that our detailed responses resolve your concerns. Should you find our reply satisfactory, we kindly encourage you to raise your score. ## Q1: More details of the experiments Thank you for pointing this out. We will add these details in our camera-ready version. We now address the concerns about method details that you mentioned. ### No documentation of the attempts made on the LJ-55 problem We did not save the results for LJ-55 for some baselines, as they were significantly worse than the others. A similar pattern has already been observed by iDEM (Fig 9). We note that these baselines would in any case require significantly more energy evaluations than ours (and PT+DM). ### Details about additional hyperparameters in Tab 4 For most of the targets, we tune PT+DM so that the swap rate is ~30%, and we use the same schedule for PT+DM and our approach. We also specifically tune PT+DM on GMM40, as it requires fewer levels. So we in fact put similar effort into tuning the temperatures for both approaches. However, we agree that our approach also has many other hyperparameters to tune, and this can make the tuning more involved. We will update our camera-ready paper to reflect this limitation. We also want to emphasize that having more hyperparameters also means a **larger design space**, which has the potential to improve the performance of our approach further. ### More details about the diffusion model used for PT+DM To ensure a fair comparison, we employ exactly the **same** neural network architecture and training hyperparameters for PT+DM and PTSD. ## Q2: The experiments are limited to three tasks Following your suggestion, we evaluate our model on a more complicated task. As our main focus for the application is molecules, we evaluate our method on Alanine dipeptide (ALDP) in Cartesian coordinates. 
This task is highly challenging for neural samplers. As we can see in https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/ramachandran.png, our algorithm still obtains reasonable performance on this task, which strongly supports the effectiveness of our approach. ## Q3: The Taylor expansion seems to be a crude approximation when $\Delta T$ is not close to 0 We totally agree. However, we found that this crude approximation provides better exploration performance compared to using the derivative (with auto-diff) directly. We have provided a plot in https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/extrapolation_ablation.png to illustrate this observation. This is possibly due to two reasons: 1. The derivative may not be robust, as we only train the model with a finite grid of temperatures; 2. Our approximation eventually forms an auto-guidance, which may have the potential to reduce the network imperfection. ## Q4: Figure 1 suggests that PTSD performs better than PT+DM, but Table 1 indicates that it is often outperformed by other methods We made a mistake in the previous LJ55 results. We have now updated the table in https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/updated_main_table.png to reflect this. The energy function evaluation count decreased significantly, while the energy-MMD also improved. Note also the interatomic distance plot, which shows that our method beats BNEM in this regard. We can see that our approach delivers performance that is better than or on par with the other baselines. We also note that even when the baselines outperform our approach, the margin is very small. On the other hand, our method consistently delivers good results on these targets, and even works well on the challenging ALDP target. ## Q5: No error bars are reported Thank you for mentioning the missing error bars; we will add them in our camera-ready version. 
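To make the first-order temperature extrapolation discussed in Q3 concrete, here is a minimal, self-contained sketch. It uses a toy Gaussian target whose score is known in closed form; the function names and the toy target are our illustration, and the exact guidance formula (Eq (11) in the paper) may differ.

```python
import numpy as np

# Toy target at temperature T: N(0, T * I), whose score is s_T(x) = -x / T.
# This closed form stands in for the trained diffusion models.
def true_score(x, T):
    return -x / T

def extrapolated_score(x, T2, T1, T0):
    """First-order Taylor step of the score from temperature T1 down to T2,
    with the temperature derivative estimated by a finite difference
    between the two hottest available levels T0 > T1 (both "trained")."""
    s1, s0 = true_score(x, T1), true_score(x, T0)
    ds_dT = (s1 - s0) / (T1 - T0)      # finite-difference derivative in T
    return s1 + (T2 - T1) * ds_dT      # extrapolate toward the colder T2

x = np.array([1.0, -2.0, 0.5])
approx = extrapolated_score(x, T2=1.0, T1=2.0, T0=3.0)
```

On this toy target, the Taylor step from $T_1$ toward $T_2$ lands closer to the true colder score than simply reusing the score at $T_1$, which is the intuition behind using the crude extrapolation as guidance.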
## Reply to Other Comments Or Suggestions Thank you for pointing out the inconsistency of the PF-ODE writing, the missing references, and the clarity of the VE SDE writing. We will fix them. For the other questions: >How are swaps done? We use the standard PT swap; the acceptance rate is given in Eq (2) in our paper. >How do we sample the MW32 ground truth? We follow FAB by first drawing i.i.d. samples from the Double Well and then stacking 16 samples together. ## Responses to the Questions For Authors 1. We train our diffusion model by standard denoising score matching. 2. CMCD and other diffusion samplers cannot be directly trained from data without increasing the number of energy evaluations. In fact, every time the loss is computed, hundreds of energy evaluations are required. On the other hand, our pipeline does not need extra energy evaluations and hence is significantly more efficient than the others. 3. We use 10000 samples
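For reference, the swap step mentioned in the reply above follows the textbook parallel tempering (replica exchange) acceptance rule. The sketch below is a generic illustration of that rule, not the authors' code:

```python
import math

def swap_acceptance(beta_i, beta_j, energy_i, energy_j):
    """Metropolis acceptance probability for exchanging the states of two
    chains running at inverse temperatures beta_i and beta_j."""
    return min(1.0, math.exp((beta_i - beta_j) * (energy_i - energy_j)))

# Moving a high-energy state from the colder chain (beta=1.0) to the hotter
# chain (beta=0.5) is always accepted:
p = swap_acceptance(beta_i=1.0, beta_j=0.5, energy_i=10.0, energy_j=2.0)
```

Swaps that hand the colder chain a lower-energy state are always accepted, while the reverse move is accepted with an exponentially decaying probability; this trade-off is why PT schedules are typically tuned for a target swap rate.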
Summary: The paper introduces a new sampling algorithm—Progressive Tempering Sampler with Diffusion (PTSD)—which aims to efficiently sample from unnormalized densities by combining ideas from traditional parallel tempering (PT) and modern diffusion-based neural samplers. The method is evaluated on several multimodal targets, where it demonstrates significant improvements in sample quality and target density evaluation efficiency over existing neural samplers. Claims And Evidence: Yes, the claims look clear and are supported by basic math and some limited experiments. Methods And Evaluation Criteria: Yes, they do. Theoretical Claims: Yes, I checked the math, but the paper lacks proofs. Experimental Designs Or Analyses: Yes, they look valid. Supplementary Material: There is no supplement for this submission. Relation To Broader Scientific Literature: I would say the findings are kind of limited and not very interesting to the broader scientific audience. Essential References Not Discussed: Everything is cited. Other Strengths And Weaknesses: Weaknesses: * The paper is hard to read in general and is applied only to a niche audience. * The proposed PTSD method is kind of slow in the experiments conducted. Any specific reason for that? * I would also say that the method is very limited to simple models. The authors have to try it on bigger and more state-of-the-art models. * The experiments in general seem kind of simplistic, and the results in Table 1 are also not that good. Is there a reason for that? Because in its current form the paper is not strong either mathematically or experimentally. Other Comments Or Suggestions: No other comments. Questions For Authors: Please check the Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for your review. We hope that our detailed responses resolve your concerns. If any questions or concerns arise, please do not hesitate to let us know, and we will address them properly. However, should you find our reply satisfactory, we kindly encourage you to raise your score. ## Q1: the paper is hard to read in general and is applied only to a niche audience Thank you for your comment. We will be happy to update our paper if you could kindly provide more detailed information about the parts that you found hard to read. Additionally, we respectfully disagree with the argument that this paper is niche. Sampling from unnormalized densities is a long-standing task in many areas, including Bayesian inference, molecular simulation, physics, statistical chemistry, etc. There are many approaches trying to address this problem. Here is a non-exhaustive list of recent advancements: [1, 2, 3, 4, 5]. ## Q2: The proposed PTSD method is kind of slow in the experiments conducted. Any specific reason for that? This seems to be a misunderstanding. As we can see from Fig 6 and Tab 2 in our manuscript, our method is actually the most efficient approach in terms of energy function evaluations. It is an order of magnitude faster than the other neural sampler baselines. Could you specify where you found our method slow? ## Q3: Proposed method is limited to simple models Our method does not rely on any specific architecture. One can apply any architecture to our proposed pipeline. Also, for the problem we are addressing, we believe the model (e.g., EGNN) we use is indeed “SOTA”, as it balances running cost and performance well. The architecture we use is a standard choice for n-body systems and small molecules [4, 6, 7]. Therefore, it would be very helpful if you could clarify what kind of model you expect us to evaluate our approach on. ## Q4: The experiments in general seem kind of simplistic, and the results in Table 1 are also not that good. 
Is there a reason for that? Thank you for your comment. We used the wrong hyperparameter (model size) when we ran the results for Table 1, and hence LJ55 is slightly worse than expected. We have fixed this issue, and the updated results can be seen in https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/updated_main_table.png, where both the energy-MMD and data-W2 are improved. We also see that the number of energy function evaluations of our approach is **less** than that of PT+DM across all 3 tasks. The visualization of the interatomic distances in https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/LJ55_interatomic.png also reveals that our method does have advantages over BNEM in this regard, and is a strong competitor to PT+DM. Additionally, we provide results on **alanine dipeptide** in Cartesian coordinates, which is a challenging task where most of the baselines failed. This significantly highlights the capability of our approach. The experimental results can be found in https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/ramachandran.png. ## Q5: Paper lacks proofs Our approach relies on Taylor expansion and a finite-difference approximation for first-order derivatives. These are standard results in calculus. Could you please specify for which part you suggest we include a proof? [1] Noé, Frank, et al. "Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning." Science 2019. [2] Doucet, Arnaud, et al. "Score-based diffusion meets annealed importance sampling." NeurIPS 2022. [3] Midgley, Laurence Illing, et al. "Flow Annealed Importance Sampling Bootstrap." ICLR 2023. [4] Akhound-Sadegh, Tara, et al. "Iterated denoising energy matching for sampling from boltzmann densities." ICML 2024. [5] Vargas, Francisco, et al. "Transport meets Variational Inference: Controlled Monte Carlo Diffusions." ICLR 2024. [6] Hoogeboom, Emiel, et al. "Equivariant diffusion for molecule generation in 3d." ICML 2022. 
[7] Klein, Leon, Andreas Krämer, and Frank Noé. "Equivariant flow matching." NeurIPS 2023.
Summary: This paper focuses on the problem of sampling from unnormalized densities. The authors recognize the drawbacks of MCMC with Parallel Tempering as well as those of recent neural samplers (mostly based on the diffusion process). In particular, they propose a novel sampling method — Progressive Tempering Sampler with Diffusion — that trains diffusion models sequentially across temperatures. The main idea is to start by training the high-temperature diffusion models, which are used to generate lower-temperature samples, slightly refined with an MCMC approach. Such samples are then used to train the lower-temperature diffusion model. The method is compared against other samplers on standard benchmarks, where the authors focus on the number of target evaluations during training and sample-based metrics. Claims And Evidence: The claims regarding the advantages and drawbacks of neural samplers and Parallel Tempering, as well as the introduction of a novel method (more efficient in terms of the number of target evaluations during training), are well justified. The same holds for the theoretical findings. However, I think that the claimed superiority over neural samplers in terms of generated samples and better scalability is not well justified, mostly due to the limited empirical evidence. Methods And Evaluation Criteria: Yes, but they are limited. I would like to see a comparison against other neural samplers like PIS, DIS, or GFlowNets. Moreover, I’m concerned by the lack of standard metrics for sampling problems (logZ, NLL, etc.), which are missing from the experiments. In particular, the authors do not provide the results for FAB on LJ55, which was one of the first methods checked in this setting, and the results for TVD for iDEM differ from those in the original paper. Theoretical Claims: Yes, I've checked and haven't found any obvious drawbacks. 
Experimental Designs Or Analyses: Yes; for part of my concerns, please see the Methods And Evaluation Criteria section. Moreover, I generally think that a sample-quality evaluation is missing, e.g., plots of interatomic distances for LJ potentials (comparing other methods), logZ or NLL metrics, etc. **Ablation studies:** I think that studying the influence of the sequences of temperatures used during training and the influence of the buffer size might be beneficial for this work. Moreover, I would like to see a comparison of the models' memory costs. Supplementary Material: I’ve checked the whole supplementary material. Relation To Broader Scientific Literature: The key findings are based on existing knowledge and methods (PT and diffusion), but the proposed method is novel. Essential References Not Discussed: The references are discussed in general. Other Strengths And Weaknesses: **Strengths:** [1] Proposing a novel method combining PT with diffusion for more efficient training of samplers. [2] The method seems to be theoretically justified, and it significantly lowers the number of needed target energy evaluations. **Weaknesses:** [1] Missing results for some benchmarks (like FAB). [2] Missing significant metrics for sampling quality. [3] Lack of a comparison of the memory cost of training samplers, and lack of scaling PTSD to more complex problems (since it requires a lower number of target energy evaluations). [4] Given all of the mentioned weaknesses, I think that this paper (in its current form) has limited significance. Other Comments Or Suggestions: For comments and suggestions, please see the previous sections. Questions For Authors: For questions, please see the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and your constructive comments. We now reply to your concerns one by one. Should you find our reply satisfactory, we kindly encourage you to raise your score. ## Q1: Limited empirical evidence & lack of scaling PTSD to more complex problems To showcase the superiority and better scalability of our method, we conduct experiments on a more complex system, Alanine Dipeptide in Cartesian coordinates, a task that most of the neural samplers failed on. The results of these experiments are provided in https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/ramachandran.png. As we can see, our algorithm provides high-quality samples on this challenging task with very few energy function evaluations (using almost an order of magnitude fewer evaluations than FAB). We will include these results in our camera-ready version. ## Q2: Lack of comparison against PIS, DIS, or GFlowNets Thank you for suggesting these baselines. However, we highlight that PIS, DIS, and GFN essentially have the same property as DDS: they mostly use the same model architecture as DDS (parameterizing the network with a score term), and they require simulating the entire trajectory, which requires hundreds of energy calls to calculate the loss once. Therefore, we use DDS as a representative of this family of approaches. ## Q3: Lack of results for FAB on LJ55 The result for FAB on LJ55 is not shown because its training diverged in our experiments. Similar results were also observed in iDEM: from iDEM Fig 9, we can see that FAB is significantly worse than iDEM. ## Q4: Different results for iDEM on LJ55 We note that in iDEM, they obtain their results on LJ-55 by running a few steps of Langevin dynamics on top of the generated samples. This can be seen from their code: https://github.com/jarridrb/DEM/blob/main/configs/experiment/lj55_idem.yaml where they set “num_negative_time_steps: 10”. 
This means they run 10 steps of Langevin dynamics in https://github.com/jarridrb/DEM/blob/main/dem/models/components/sde_integration.py As this trick is applicable to our approach and all the baselines, we present the results without it. ## Q5: Lack of logZ, NLL, etc. Computing logZ and NLL requires a model density. Different baselines have different ways to calculate logZ and NLL: for CMCD and DDS, we need to discretize the SDE into a sequence of Gaussians; for diffusion models (ours and PT+DM), we can either use this approach or estimate the density with a change of variables on the PF-ODE. As different approaches have different discretizations or different underlying processes, this would make a fair comparison more difficult. At the end of the day, one ultimately cares about the quality of the samples. Therefore, we believe that sample-based metrics are more straightforward and comparable across methods. ## Q6: Lack of interatomic distances for LJ We have provided them in https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/LJ55_interatomic.png. We see that PT+DM is a strong baseline for LJ55, but PTSD is a close second in terms of visually matching the ground-truth distribution. We will add this visualization in our camera-ready version for a more comprehensive evaluation. ## Q7: Ablation studies Following your suggestion, we studied the effect of the number of temperatures in the GMM task, and report the results in https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/num_temp_ablation.png. We can see that the trend is monotonic: all of the metrics improve with more temperature levels. This indicates we can always use more compute to obtain better results with our method. ## Q8: Lack of a comparison of the memory cost of training samplers Our approach incurs a larger memory cost in two places. We now analyze these two costs one by one: 1. When we run MCMC on the samples from the buffer, if we parallelize them, we incur a larger memory cost compared to PT. 
In fact, we view this as an advantage rather than a limitation, as it indicates that our method enables efficient parallelization of MCMC into multiple independent chains, with the number of parallel chains determined by the buffer size. By contrast, standard PT can only parallelize over as many chains as the number of temperatures. Since the buffer size is normally much larger than the number of temperatures, our approach can in fact allow for more massive parallelization. 2. When we sample using our proposed temperature-guided score, we need to keep and call two models together in memory; we may also need to backprop through these two models. In total, this increases the memory cost to ~4x that of standard diffusion model sampling. We agree that this is a potential limitation of our approach, and we will include a discussion of it in our camera-ready version. Thank you for this insightful question. It allowed us to recognize an additional advantage and an additional limitation of our algorithm, which we believe has made our paper more solid and stronger. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal and the authors' work. Some of my concerns were addressed, but not all of them. I believe that the comparison against iDEM without including their approach of using Langevin steps on top is a little unfair. Because of that, the results from iDEM differ much from those in this manuscript. I think some way of comparing the NLLs is needed, because we know that pure sample-based metrics (e.g., $W_2^2$) behave differently than log-prob-like measures. For example, finding the means of the modes is enough in many cases to obtain a low value of the Wasserstein distance. I do not agree that we only care about the sample quality and that sample-based metrics are enough to evaluate it. Finally, the presented Ramachandran plots for PTSD look worse than the ones from the FAB original paper (Fig. 4), so I believe more extensive evaluation is needed. 
Once again, I thank the authors for their work, but I think the current version of the evaluation setting (e.g., irreproducibility of iDEM/FAB results, missing computational cost comparison) is not enough. I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for the response and the further constructive suggestions. Below, we try to address your remaining concerns: ## 1. LJ55 results with Langevin steps: Following your suggestions, we provide an extra comparison against iDEM that includes their approach of using Langevin steps. Specifically, we rerun the LJ55 evaluation by using 1000 Langevin dynamics steps on top of the samples for all of the methods. We include the interatomic distance histograms for the methods with Langevin dynamics in https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/LJ55_with_Langevin_interatomic.png. Below, we also list the TVD/MMD/W2 metrics with this evaluation method: | Method | TVD | MMD | W2 | |--------|-----|-----|-----| | iDEM | 0.41 | 0.11 | 15.8 | | BNEM | 0.07 | 0.003 | 15.7 | | PTSD | 0.28 | 0.05 | 16.8 | | PTDM | 0.14 | 0.017 | 15.8 | The overall comparative pattern remains the same. We will also include uncertainty intervals in the camera-ready version. However, we still note that comparing the methods without Langevin steps is the fairest way for three reasons: 1. These Langevin steps are not part of iDEM’s approach. 2. Langevin dynamics can be applied to all of the samplers efficiently. 3. Langevin steps can reduce the inherent differences between different samplers: the better the sampler is, the less improvement it can gain by running Langevin dynamics. Therefore, we will include both results (w and w/o Langevin) in our camera-ready version. ## 2. NLL Following your suggestions, we now provide NLL for our approach and baselines. 
To evaluate NLLs in a fair way, we used the same approach as iDEM: training diffusion models on samples generated by the different models, and defining the likelihoods using the resulting diffusion probability flow ODE, estimated using Hutchinson’s trace estimation trick and 2000 Euler integration steps. We include the results in https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/nll.png. We can see that PTSD is always on the Pareto frontier of energy function evaluations and data log-likelihoods among the compared methods, often with large margins. ## 3. Differences between our alanine dipeptide results vs. FAB For the ALDP experiment, we would highlight three significant differences between our method and FAB: 1. In our experiment, the results are produced using only $2.6\times 10^7$ energy function evaluations, while FAB uses $2.0 \times 10^8$, which is almost an order of magnitude more expensive (see Table 8 in [1]). As such, we think that these two models should be interpreted as targeting different energy evaluation ranges. 2. Our method samples ALDP in Cartesian coordinates, while FAB samples it in internal coordinates. Cartesian coordinates significantly complicate the task. 3. The FAB results are presented after importance resampling, whereas our samples are direct samples from the model. We updated the Ramachandran plots in https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/ramachandran.png, where the result of FAB is obtained from Figure 4 in [1]. We agree that a more detailed analysis of the results is useful, and we will include more detailed comparisons like NLL in the updated manuscript. Thank you again for your valuable and constructive review. We hope that our responses have effectively addressed your remaining concerns, and we hope that you will increase the rating accordingly if we have addressed your concerns successfully. [1] Midgley, Laurence Illing, et al. "Flow Annealed Importance Sampling Bootstrap." ICLR 2023.
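Hutchinson's trick referenced in the NLL evaluation above is a standard stochastic trace estimator, used to approximate the divergence term in probability flow ODE likelihoods. Below is a generic, self-contained sketch of the estimator itself with Rademacher probes; it is an illustration of the trick, not the authors' evaluation code.

```python
import numpy as np

def hutchinson_trace(matvec, dim, n_probes, rng):
    """Estimate tr(A) using only matrix-vector products v -> A @ v,
    via the identity tr(A) = E[v^T A v] for Rademacher probes v."""
    total = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=dim)
        total += v @ matvec(v)
    return total / n_probes

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [1.0, 3.0]])  # true trace = 5
est = hutchinson_trace(lambda v: A @ v, dim=2, n_probes=10_000, rng=rng)
```

In likelihood evaluation for a PF-ODE, `matvec` would be a vector-Jacobian product of the drift, so each probe costs a single backward pass rather than one pass per dimension.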
Summary: The article describes a method based on diffusion models to sample from unnormalized densities, such as $p(x) = \frac{\tilde{p}(x)}{Z}$. The approach is based on parallel tempering: many diffusion models are trained to match the distribution $p$ at various temperatures. The general idea is that at high temperature, it is easy both to produce samples from $p$ and to reproduce the distribution's density faithfully using the samples. Therefore, models are first trained at high temperature: samples are easily generated using MCMC, and the diffusion models are trained upon them. Then, lower-temperature diffusion models are trained using Eq. (11), starting from the weights learned with a previously trained machine (at a slightly higher temperature). The samples are corrected using importance sampling to remove the bias. Claims And Evidence: The paper does not claim any particular theoretical or practical results. It is mostly based on an expansion of the score function as a function of a temperature parameter in order to learn diffusion models for the unnormalized density at various temperatures. It shows that this approach seems to work on a set of benchmarks. Methods And Evaluation Criteria: The method is based on training various diffusion models to adjust to a probability density that is annealed from a "high temperature" regime, where it is easy to sample from and from which samples are first generated with Monte Carlo to train the first stage of the diffusion models. Then, using both a Taylor expansion of the score function and importance sampling, it progressively learns the set of diffusion models toward the target distribution, reaching $T=1$ for the annealing parameter, without the need to sample from the target distribution. The method is evaluated on a set of benchmarks and compared to other methods. Three different measures are used to compare the different models. Overall, the criteria seem adequate for the considered setting. 
There is no comment in the text about how hard these benchmarks are, such as a measured Monte Carlo relaxation time or any other quantitative criterion. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experimental design is sound. Supplementary Material: I went very briefly through the SM. It mostly adds more details on the benchmarks, the evaluation metrics, and the simulations. Relation To Broader Scientific Literature: The method is related to parallel tempering. An aspect which limits the use of PT is the presence of first-order transitions along the temperature path, which typically occurs when the data distribution has well-separated modes, which is quite common in ML. This then requires an extensive number of temperatures. This is discussed for instance in Béreux et al. "Fast, accurate training and sampling of Restricted Boltzmann Machines" (ICLR 2025). I would imagine that the temperature expansion which is used in the diffusion model is going to break down as well, rendering the method inoperative in this case? Essential References Not Discussed: I did not find that an important reference was missing. Other Strengths And Weaknesses: - The proposed method combining PT and diffusion models is original, interesting, and promising. - The main limitation might be the burden of training a large number of diffusion models; when the number of temperature levels becomes large, the computational cost, and how it compares to other methods, is not clearly discussed. Other Comments Or Suggestions: no additional suggestions. Questions For Authors: This work shows how to use tempering to learn an unnormalized probability distribution. I have some questions: * The authors use the method, for instance, on Lennard-Jones particles. Is there any metric related to the physics of the probability distribution that can be used in such a case to attest to the "goodness" of the generated samples? * How does the method depend on the number of temperatures? 
* The method seems to have a better W2 distance in general than the other methods; is there a reason why this should be the case? (The other indicators are not as different between the different methods.) * I have the impression that one weakness of the method is the lack of "proof" that the algorithm should converge toward the correct probability distribution in some limit. For instance, with MCMC, in theory we can always perform good sampling by increasing the number of MC steps. Would there be some similar result for the setting used in this work? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and your constructive comments. We hope that our detailed responses resolve your concerns. # Questions: ## Q1: Metrics related to physics We follow the metrics used in the previous literature, e.g., iDEM, NEM, DiKL, FAB, etc. These baselines do not evaluate any physical metrics. One potential metric is to estimate the expectation of some manually crafted function using samples generated by our approach, and compare it with the value estimated using ground-truth samples. We can view this as a "toy" physical property. In practice, we use a quadratic function of the configuration, i.e., we estimate $\mathbb E_{p(x)}[ x^T x ]$. We compare the estimated integral with the ground truth using the Mean Absolute Error (MAE) and normalise the error as a percentage of the ground-truth integral value. We estimate 90% confidence intervals with bootstrap resampling. As we can see, our method still delivers good results consistently. We will include this result in the camera-ready version. | Method | MW32 | LJ55 | |--------|--------------|----------------| | iDEM | 8.97 ± 0.61 | 28.7251 ± 0.02 | | FAB | 4.08 ± 0.14 | - | | DiKL | 3.19 ± 0.26 | - | | DDS | 9.89 ± 0.12 | - | | CMCD | 6.71 ± 0.26 | - | | BNEM | 10.61 ± 0.27 | 11.61 ± 0.05 | | PTSD | 1.49 ± 0.16 | 1.87 ± 0.07 | | PTDM | 4.32 ± 0.16 | 0.34 ± 0.04 | ## Q2: How does the method depend on the number of temperatures? More temperatures can (1) reduce the burden of training models for each temperature, and (2) make our temperature-guidance approximation more accurate. (1) is because, in practice, we always **fine-tune** our new diffusion model from the model trained at the previous temperature; using more temperature levels reduces the difference between levels. For (2), we can look at Eq (11) in our manuscript: as $T_1 \to T_2$, the guided score becomes accurate. 
However, similar to PT, running our approach with too many temperatures can make the pipeline less efficient. In practice, we found that matching the temperature schedule with that of standard PT usually leads to good performance. We provide an additional ablation study, varying the number of temperatures for training PTSD on the GMM problem: https://anonymous.4open.science/r/PTSD-rebuttal-icml2025/num_temp_ablation.png. ## Q3: Why is the W2 distance better, but the other metrics are not as different between different methods? This is an interesting observation. We suspect that this is because W2 directly measures the difference between sets of samples, while TVD and MMD are computed on the energy values of the samples, which loses more information. We can also see that our approach achieves better results in the table above, reflecting a trend consistent with W2. ## Q4: Convergence of the proposed method: > one weakness of the method is the lack of "proof" that the algorithm should converge toward the correct probability distribution in some limit. In fact, we can always get better results by either increasing the number of temperatures or increasing the local parallel tempering refinement steps. In these limits, our method can be viewed as just running annealed Langevin dynamics with a model trained along the sampling procedure. In this case, the only error will be the intrinsic model imperfection. Therefore, convergence is in fact guaranteed. # Other concerns: ## Q5: Potential failure case for target distributions with well-separated modes Thank you for raising this insightful concern. We agree that our algorithm could potentially require many temperature levels for good performance if standard PT suffers due to a phase transition or separated modes. However, we emphasize that this limitation is not unique to our approach: standard PT can also require more chains and task-specific design (for example, the trajectory PT in the case you mentioned). 
We will include this discussion in our camera-ready version, and we believe it will make our paper stronger. ## Q6: Computational cost > The main limitation might be the burden to train a large amount of diffusion models, when the number of temperatures levels become large [...] We agree that this can be slow. However, in practice, we always **fine-tune** our new diffusion model from the model trained on the last temperature. Using more temperature levels reduces the difference between levels, and hence the training cost per level is also reduced.
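For concreteness, the "toy" physical metric from Q1 (normalized MAE of $\mathbb{E}_{p(x)}[x^T x]$ with a 90% bootstrap confidence interval) can be sketched as follows. This is a minimal illustration, not the authors' code; the function names and the synthetic Gaussian stand-ins for model and ground-truth samples are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_quadratic(samples):
    # Monte Carlo estimate of E_p[x^T x] from a batch of configurations
    return float(np.mean(np.sum(samples ** 2, axis=1)))

def normalized_mae(model_samples, truth_samples, n_boot=1000, level=0.90):
    # Error of the model's estimate as a percentage of the ground-truth value,
    # with a bootstrap confidence interval at the requested level.
    gt = estimate_quadratic(truth_samples)
    point = abs(estimate_quadratic(model_samples) - gt) / abs(gt) * 100.0
    n = len(model_samples)
    boot = [
        abs(estimate_quadratic(model_samples[rng.integers(0, n, n)]) - gt)
        / abs(gt) * 100.0
        for _ in range(n_boot)
    ]
    lo, hi = np.quantile(boot, [(1 - level) / 2, (1 + level) / 2])
    return point, (float(lo), float(hi))

# Stand-ins for ground-truth samples and sampler output (illustrative only)
truth = rng.normal(size=(2000, 4))
model = rng.normal(scale=1.05, size=(2000, 4))
point, (lo, hi) = normalized_mae(model, truth, n_boot=200)
```

The same skeleton applies to any hand-crafted observable; only `estimate_quadratic` changes.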
HyperIMTS: Hypergraph Neural Network for Irregular Multivariate Time Series Forecasting
Accept (poster)
Summary: The paper introduces HyperIMTS, a hypergraph neural network for forecasting Irregular Multivariate Time Series (IMTS), which are challenged by irregular time intervals and unaligned observations. Existing models either use padded samples, disrupting sampling patterns, or set functions and bipartite graphs, which struggle with unaligned dependencies. HyperIMTS addresses these issues by treating observed values as nodes in a hypergraph, with temporal and variable hyperedges enabling message passing. This irregularity-aware message passing captures both temporal and variable dependencies in a time-adaptive manner, improving forecasting accuracy. Experimental results show that HyperIMTS achieves good performance at low computational cost. Claims And Evidence: There are no problematic main claims. Methods And Evaluation Criteria: The model designed in the paper is both reasonable and effective for addressing the IMTS modeling problem. The datasets used are commonly employed real-world datasets in IMTS modeling, which also provides practical guidance for solving real-world problems. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: The experiments in the paper are sufficient, with appropriate comparison methods and thorough analysis. The use of commonly employed real-world datasets enhances the credibility of the results. Supplementary Material: I have read the whole supplementary material. Relation To Broader Scientific Literature: This paper details efforts to advance IMTS forecasting across various scientific domains. The paper provides a new perspective on the forecasting problem of IMTS to some extent and effectively improves forecasting performance. Essential References Not Discussed: No. Other Strengths And Weaknesses: 1. **The motivation of the paper is not persuasive enough**: The analysis of existing methods' issues is not very convincing.
For example, the authors argue that padding is problematic because it increases the amount of data the model must process, but this is not a critical issue, especially if the padded data is highly accurate, as it could provide more useful information for IMTS with high missing rates. Additionally, the authors emphasize the requirement for alignment in existing methods but then focus, as an improvement, on the relationships between observations at the same time point $ t $; this approach also requires aligned observation points between sequences. 2. **Some expressions in the paper are vague or incomplete**: a) The definition of padding-based methods is unclear. According to the authors, padding includes directly filling in real values, RNN-based models that learn discrete hidden states, and ODE-based models that learn continuous hidden states. However, there are significant differences between these approaches, and classifying them together is somewhat oversimplified. b) The classification of existing methods before the related work section is not comprehensive. I recommend revising this description to avoid misleading researchers who are not familiar with this specific field. Other Comments Or Suggestions: No. Questions For Authors: The method discussed in the paper is non-graph-based, but it simultaneously considers inter- and intra-series relationships. What is the main difference, aside from the update of correlations, compared to models that focus on these relationships? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your careful reading and in-depth thinking. We address the concerns as follows: ## W1. Regarding highly accurate padding **A1.** We acknowledge that padding has pros and cons, which should be weighed up in IMTS forecasting. ### 1. **Error accumulation** Imputation errors can be amplified during subsequent predictions, as noted in previous works [3][4]. We conduct additional experiments to verify it: - Pretrain each model for imputation. - Pad IMTS samples into fully observed via the imputation model. - Train the forecasting model on imputed samples. Better results are in **bold**: ||MIMIC III|PhysioNet'12 |---|---|--- |GraFITi|0.535(**33ms**)|**0.377(34ms)** |GraFITi(Impute)|**↑0.520**(245ms)|↓0.465(75ms) |Raindrop|**0.671(75ms)**|**0.438(52ms)** |Raindrop(Impute)|↓0.962(211ms)|↓0.820(85ms) |Crossformer|**0.641(184ms)**|0.408(**80ms**) |Crossformer(Impute)|↓0.962(292ms)|**↑0.265**(118ms) Over half of the results show **performance degradation** after imputation, while consuming **longer training time.** Although the experiment uses only a subset of models and datasets, we can still conclude that **existing models are not robust enough to provide highly accurate padding values**. ### 2. **Efficiency** The goal of our work is to improve forecasting accuracy **while maintaining high efficiency**. However, the above experiments demonstrate significant efficiency degradation. ### 3. **Informative missingness** As mentioned in Section 1 of Raindrop [2]: > the **absence of observations can be informative** on its own and thus imputing missing observations is not necessarily beneficial Absence actually reflects the **sampling pattern** of variables. E.g., in dataset PhysioNet'12, ALP and ALT, both liver function markers, can be observed to have nearly identical timestamps across all samples, indicating a *scheduled* sampling pattern. ## W2. Alignment in padding-based methods V.S. 
Our approach **A2.** We apologize for any confusion with respect to alignment. Revisions are made to clarify differences: ||Padding-based|Ours |---|---|--- |*where* to align|data preprocessing & subsequent|only in time-aware similarity |*how* to align|padding|select shared timestamps only, or padding Alignment itself is not a problem, but *where* and *how* to align can **affect efficiency**. Our method aligns only in calculation of time-aware similarity by selecting shared timestamps, thereby eliminating the need to handle large padded data in other network modules. ## W3. Definition of padding-based methods **A3.** We apologize for not fully clarifying the definition and scope of padding. In this work, padding involves adding values, either zeros or predicted values (imputation), **in the sample space** before input time series are fed into the neural network, rather than within the neural network's latent space. E.g.: ||RNN-based|ODE-based |---|---|--- |*latent* space|discrete|continuous |*sample* space (input)|zero-padded|zero-padded Further details on zero-padded input can be found in models' `forward` functions within the anonymized code repository provided at the end of abstract. ## W4. More comprehensive classification of existing methods **A4.** Thanks for your helpful advice! We revise them with the following improvements: 1. **Definition and scope of padding-based**: Clarify the definition scope is in sample space for input time series, instead of latent space. 2. **Classify methods from different perspectives**: - **model architecture**: (1) RNN-based; (2) ODE-based; (3) GNN-based; (4) Set-based; (5) Transformer-based. - **sample space padding methods**: (1) time-aligned padding; (2) patch-aligned padding; (3) non-padding. 3. **Compare with the standard in existing works**: Clarify the differences in classification standard with existing researches [5]. ## Q1. 
Regarding model difference **A5.** - Our method is implemented as a **hypergraph attention network**, summarized in A3 within rebuttal to Reviewer VDUB above. - Main difference: 1. efficiency: light-weight architecture 2. performance: (1) aware of sample-specific alignment; (2) learn both relationships in parallel instead of sequentially, better capturing time-variable correlations. [1] Silva, Ikaro, et al; "Predicting In-Hospital Mortality of ICU Patients: The PhysioNet/Computing in Cardiology Challenge 2012"; Comput Cardiol (2010) [2] Zhang, Xiang, et al; "Graph-Guided Network for Irregularly Sampled Multivariate Time Series"; ICLR 2022 [3] Wu, Sifan, et al; "Adversarial Sparse Transformer for Time Series Forecasting"; NeurIPS 2020 [4] Ma, Qianli, et al; "Adversarial Joint-Learning Recurrent Neural Network for Incomplete Time Series Classification"; TPAMI 2020 [5] Shukla, Satya Narayan, et al; "A Survey on Principles, Models and Methods for Learning from Irregularly Sampled Time Series"; arXiv:2012.00168 --- **Thank you so much for helping us improve the paper! Please let us know if you have any further questions:-).**
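The "select shared timestamps only" alignment described in A2 can be sketched as a time-aware similarity between two variables. This is an illustrative sketch, not the authors' implementation; the cosine-similarity choice and the dict representation of observations are assumptions.

```python
def shared_timestamp_similarity(obs_a, obs_b):
    # Cosine similarity between two variables, computed only over the
    # timestamps that both variables actually observed, so no padded
    # values are ever created or processed.
    # obs_a, obs_b: dicts mapping timestamp -> observed value.
    shared = sorted(set(obs_a) & set(obs_b))
    if not shared:
        return 0.0  # no shared timestamps: fall back to a neutral score
    va = [obs_a[t] for t in shared]
    vb = [obs_b[t] for t in shared]
    dot = sum(x * y for x, y in zip(va, vb))
    na = sum(x * x for x in va) ** 0.5
    nb = sum(y * y for y in vb) ** 0.5
    return dot / (na * nb) if na and nb else 0.0
```

The point of the sketch is the alignment scope: intersection of timestamps is taken only inside this one function, so the rest of the network never touches padded data.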
Summary: Irregular Multivariate Time Series (IMTS) are challenging due to irregular time intervals within variables and unaligned observations across variables, making it difficult to model temporal and variable dependencies. Existing IMTS models either use padded samples (which can introduce inefficiencies and distort sampling patterns) or represent data as bipartite graphs or sets (which struggle to capture dependencies among unaligned observations). To address these limitations, the authors propose HyperIMTS, a Hypergraph Neural Network for IMTS forecasting. In this approach, observed values are treated as nodes in a hypergraph, with temporal and variable hyperedges facilitating message passing among all observations. This irregularity-aware message passing allows the model to capture variable dependencies in a time-adaptive manner, leading to improved forecasting accuracy. Experimental results show that HyperIMTS outperforms state-of-the-art models while maintaining low computational costs. ## update after rebuttal Thank you for your response. After reviewing the rebuttal, my original concerns remain largely unaddressed. First, I believe that set-based methods are still capable of capturing correlations, as the embeddings of independently processed observations are ultimately aggregated to produce the final output. Second, as acknowledged by the authors in the rebuttal, the proposed model relies on a shared feature medium (hyperedge), like a bipartite graph that depends on a shared temporal medium (time nodes). That said, I recognize the novelty and potential impact of this work on the time-series literature, particularly in its approach to handling irregular time series. Therefore, I maintain my positive score. Claims And Evidence: The authors state that "However, sets (Set-based methods) typically do not account for correlations among observations." Can you provide more explanation why the set-based method fails to capture correlations among observations? 
The authors state that "bipartite graphs are unable to propagate messages between variables without shared timestamps.". However, I think the proposed method has a similar limitation because hyperedge (feature information) is required to capture temporal dependencies. Methods And Evaluation Criteria: I think the proposed method makes sense for the suggested purpose. Theoretical Claims: N/A Experimental Designs Or Analyses: I reviewed the experimental results in this paper. The inclusion of numerous baselines enhances the robustness of the findings, and the proposed method demonstrates strong performance. Supplementary Material: I didn't review the supplementary materials. Relation To Broader Scientific Literature: I believe this method has a significant impact on the time-series literature, as it introduces a novel approach for handling irregular time series models and establishes a unified benchmark for irregular time series forecasting. Essential References Not Discussed: No. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your review and encouraging feedback. We address the main concerns as follows: ## Q1. Why do set-based methods fail to capture correlations among observations? **A1:** We summarize the reasons from three aspects, using SeFT [1] as an example: ### 1. **Experimental aspect** - **Complexity** As mentioned **in Section 3.3 of the SeFT paper**: > "By contrast, our approach computes the embeddings of set elements **independently**, leading to lower runtime and memory complexity of $\mathcal{O}(n)$". SeFT is compared with vanilla Transformers, which have $\mathcal{O}(n^2)$ complexity. - **Classification accuracy** In the same section, the paper mentioned: > "Furthermore, we observed that computing embeddings with information from other set elements (as the Transformer does) actually **decreases generalization performance** in several scenarios". We note that this conclusion only applies to the **classification** task, not the **forecasting** task. ### 2. **Task aspect** SeFT was originally designed for classification rather than forecasting. The **task assumptions** used for time series classification and forecasting differ slightly: ||Classification|Forecasting| |---|---|---| |dependency *between* observations|Not necessary|Yes |*when* observations happened|Yes|Yes In classification, a few critical observations can be sufficient to determine the class label without considering their event order (e.g., a low heartbeat can signify potential health risks for patients). For forecasting, future predictions depend on past observations, so their correlations are important. Empirically, including dependencies between observations has been found to lower classification accuracy for SeFT, as mentioned previously. ### 3. **Modeling aspect** Set-based methods view observations as set elements, which are **invariant to their order**.
The output of set functions doesn't change if observations are shuffled, which contrasts with models like RNNs or Transformers. ## Q2. Proposed method has similar limitation as bipartite graphs **A2:** We provide a detailed explanation of how our model learns variable dependencies without relying on shared timestamps. ### 1. **Preliminary** In our model: - **temporal dependencies**: learned via message propagations among observations *within the same variable* - **variable dependencies**: learned via message propagations among observations *between different variables*. ### 2. **Our model** Variable dependency learning is conducted through a **three-step** process. In Figure 2 of our work, variables $V_1$ and $V_3$ do not have shared timestamps. When propagating messages from observation $(t_1, V_1)$ (i.e., the blue "1" node) to $(t_2, V_3)$ (i.e., the orange "2" node), the process is: 1. **node to hyperedge**: node $(t_1, V_1)$ $\rightarrow$ variable hyperedge $V_1$ 2. **hyperedge to hyperedge**: variable hyperedge $V_1$ $\rightarrow$ variable hyperedge $V_3$ 3. **hyperedge to node**: variable hyperedge $V_3$ $\rightarrow$ node $(t_2, V_3)$ ### 3. **Bipartite graph** In contrast, the bipartite graph method propagates messages in a **two-step** manner. As depicted in Figure 1 (d), for variable dependencies, messages can only be propagated between $V_1$ and $V_2$ (*with* shared timestamps $t_1, t_2, t_3$), but not $V_1$ and $V_3$ (*without* shared timestamps). We describe the process of message propagation between $V_1$ and $V_2$ (*with* shared timestamps) for additional clarity: 1. **variable node to time node**: variable node $V_1$ $\rightarrow$ time node $t_1$ (or $t_2, t_3$) 1. 
**time node to variable node**: time node $t_1$ (or $t_2, t_3$) $\rightarrow$ variable node $V_2$ To summarize, bipartite graph relies on shared time nodes to propagate messages among variables, while our method HyperIMTS uses variable hyperedge interactions instead, which bypasses the requirement of shared timestamps. [1] Horn, Max, et al; "Set Functions for Time Series"; ICML 2020 --- **Lastly, thanks again for your careful review and appreciation! Feel free to let us know if you have any further questions or concerns :-).**
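The three-step propagation described in A2 (node to hyperedge, hyperedge to hyperedge, hyperedge to node) can be sketched with plain matrix operations. This is an illustrative simplification, not the authors' code: mean aggregation and a dense mixing matrix `W` stand in for the attention blocks of the actual model.

```python
import numpy as np

def three_step_propagation(X, B, W):
    # Simplified three-step hypergraph message passing:
    #   1) node -> hyperedge: mean over each hyperedge's incident nodes
    #   2) hyperedge -> hyperedge: dense mixing (stand-in for attention);
    #      this step lets messages cross variables without shared timestamps
    #   3) hyperedge -> node: average messages from a node's hyperedges
    # X: (n_nodes, d) node features; B: (n_nodes, n_edges) 0/1 incidence;
    # W: (n_edges, n_edges) edge-to-edge mixing weights.
    edge_deg = np.maximum(B.sum(axis=0, keepdims=True).T, 1)   # nodes per edge
    E = (B.T @ X) / edge_deg                                   # step 1
    E = W @ E                                                  # step 2
    node_deg = np.maximum(B.sum(axis=1, keepdims=True), 1)     # edges per node
    return (B @ E) / node_deg                                  # step 3
```

Because step 2 mixes hyperedges directly, two variables exchange information even when no row of `B` connects them through a common timestamp, which is the contrast with the two-step bipartite scheme.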
Summary: The paper introduces a hypergraph neural network designed for forecasting irregular multivariate time series, which are characterized by irregular time intervals within variables and unaligned observations across variables. Claims And Evidence: While the claims are generally supported, more comprehensive efficiency reporting and statistical analysis of performance differences would strengthen the evidence. Methods And Evaluation Criteria: The proposed methods align well with the IMTS forecasting problem. Theoretical Claims: The paper does not present formal theorems or proofs. Its theoretical foundation lies in the hypergraph structure and message-passing mechanisms, drawing from graph theory and neural network principles. Experimental Designs Or Analyses: The experimental design is robust and sound. Supplementary Material: I reviewed the supplementary material, which reinforces the main findings, particularly efficiency claims. However, it lacks detailed code snippets or full hyperparameter grids, limiting reproducibility. Relation To Broader Scientific Literature: HyperIMTS relates to prior work such as IMTS modeling, GNNs for time series, and HGNNs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: (1) Novel application of hypergraphs to IMTS forecasting, creatively combining existing ideas. (2) Avoids padding, reducing computational overhead, as shown in Figure 4. (3) Extensive comparison with 27 baselines across five datasets is a significant effort. Weaknesses: (1) Compared to the existing works, the hypergraph structure modeling may be challenging to implement and optimize practically. (2) As noted in Section 6, it cannot handle text or images, restricting applicability in some domains. (3) Resource-intensive compared to state-space models like Mamba. Other Comments Or Suggestions: None. Questions For Authors: see weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your review and valuable advice. We address the concerns as follows: ## W1. Efficiency reporting and performance analysis **A1:** Additional efficiency analyses can be found in **Appendix A.3**, and performance analyses on varying lookback lengths and forecast horizons are detailed in **Appendix A.2**, which can be summarized as: ### 1. **Insensitive to time length** Non-padding methods can handle longer time lengths with **smaller efficiency degradation**, compared to padding-based methods. - In Figure 6, non-padding methods **maintain high efficiency** for longer time lengths. - In Figure 7, although non-padding methods may not be the fastest on datasets with short lengths (47 timestamps for PhysioNet'12, 131 timestamps for Human Activity), they are **among the fastest for long input lengths** (971 timestamps for MIMIC-IV, 337 timestamps for USHCN). ### 2. **Best-performing on various lengths** - In Table 5, our method remains the **best-performing** at longer forecast horizons compared to the strongest baselines, where 5 out of 10 models are within 1 year [1-5]. - In Figure 5, our method has the **lowest MSE** in 13 out of 15 lookback length settings. ## W2. Codes and hyperparameter settings **A2:** We apologize for any clarity issues in presenting our code and hyperparameter settings. ### 1. **Code snippets** An anonymized repository link is available **at the end of the abstract, line 40**. An overview of the codes can be found in answer **A3** below. ### 2. **Hyperparameter settings** They are discussed in **Appendix A.4** and provided in the 'scripts' directory within the anonymized code repository. We follow the hyperparameter settings **from original papers or codes** for baselines, except for batch size (32), max epoch (300), and early stopping patience (10). 
The differences in critical hyperparameter settings for top-performing models on MIMIC-III are shown here: ||Ours|GraFITi|Warpformer|Crossformer| |---|---|---|---|---| |seg length|-|-|-|12| |hidden dimension|128|128|64|128 |attention heads|4|4|1|8| |drop out|-|-|0|0.05 ## W3. Challenging to implement and optimize practically **A3:** Code implementation of our model is provided **in `models/HyperIMTS.py` file within the anonymized code repository**. A summary of the code implementation is provided here: ### 1. **IMTS to hypergraph** `HypergraphEncoder` class, which outputs: - observation nodes: Eq.3, linear layers + ReLU. - temporal hyperedges: Eq.4, sinusoidal encoding. - variable hyperedges: Eq.5, learnable parameters. - incidence matrices: Eq.2, indicate for every hyperedge, which observation nodes are connected to it. ### 2. **Hypergraph learning** `HypergraphLearner` class, which is a **hypergraph attention network** similar to the backbone used in previous irregular time series models such as GraFITi [3] and Raindrop [6]: - node-to-hyperedge: Figure 3 (a), `MultiHeadAttentionBlock` class. - hyperedge-to-hyperedge: Figure 3 (b), `IrregularityAwareAttention` class. - hyperedge-to-node: Figure 3 (c), mainly includes self attention and linear layers with activations. ### 3. **Hypergraph to IMTS** Eq.15, A linear layer that maps from the hidden space to the sample space. ### 4. **Optimization** The core module `HypergraphLearner` consists of **only 2 layers with residual connections** around attention operations, achieving its best performance **within 50 epochs** across all datasets in our experiments. ## W4. Cannot handle text or images; Resource-intensive compared to SSMs These possible improvements are not included for the following reasons: ### 1. **Unfair comparisons** (text/images) Most SOTA time series models only accept time series as input. If our model takes more input data than others, it would be unfair for these baselines. ### 2. 
**Untested with HGNNs** (SSMs) Graph attention networks have been shown to be reliable in existing works [3][6]. However, SSMs have not yet been tested with HGNNs for time series, which might lead to unexpected issues like performance degradation. [1] Mercatali, Giangiacomo, et al; "Graph Neural Flows for Unveiling Systemic Interactions Among Irregularly Sampled Time Series"; NeurIPS 2024 [2] Shang, Zongjiang, et al; "Ada-MSHyper: Adaptive Multi-Scale Hypergraph Transformer for Time Series Forecasting"; NeurIPS 2024 [3] Yalavarthi, Vijaya Krishna, et al; "GraFITi: Graphs for Forecasting Irregularly Sampled Time Series"; AAAI 2024 [4] Zhang, Weijia, et al; "Irregular Multivariate Time Series Forecasting: A Transformable Patching Graph Neural Networks Approach"; ICML 2024 [5] Liu, Yong, et al; "iTransformer: Inverted Transformers Are Effective for Time Series Forecasting"; ICLR 2024 [6] Zhang, Xiang, et al; "Graph-Guided Network for Irregularly Sampled Multivariate Time Series"; ICLR 2022 --- **We hope the above response helps resolve your questions. Thanks again for your thorough review; we look forward to your reply!**
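The incidence matrices mentioned in A3 ("indicate for every hyperedge, which observation nodes are connected to it", Eq. 2) can be sketched as below. This is an illustrative construction under the assumption that each observed value becomes one node connected to exactly one temporal and one variable hyperedge; names and signatures are not from the authors' repository.

```python
import numpy as np

def build_incidence(observations, n_timestamps, n_variables):
    # Build the two incidence matrices for one IMTS sample.
    # observations: list of (t_idx, v_idx) index pairs, one per observed value.
    # Row i of each matrix is the hyperedge membership of observation node i.
    n_obs = len(observations)
    B_time = np.zeros((n_obs, n_timestamps))  # node -> temporal hyperedge
    B_var = np.zeros((n_obs, n_variables))    # node -> variable hyperedge
    for i, (t, v) in enumerate(observations):
        B_time[i, t] = 1.0
        B_var[i, v] = 1.0
    return B_time, B_var
```

Only observed `(t, v)` pairs produce rows, so the matrices grow with the number of observations rather than with the full `timestamps × variables` grid, which is where the non-padding efficiency comes from.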
Neural Genetic Search in Discrete Spaces
Accept (poster)
Summary: The paper proposes a novel test-time search method that integrates genetic algorithm evolutionary mechanisms into deep generative models, particularly for sequential discrete generation tasks. NGS defines crossover as a parent-conditioned generation process, wherein offspring are generated by restricting the vocabulary based on selected parents, and introduces mutation by occasionally removing this restriction to enhance diversity. Experiments are conducted on routing optimization, adversarial prompt generation for language models, and molecular design, and show its effectiveness. Claims And Evidence: Claims and evidence in the paper look good to me. Methods And Evaluation Criteria: 1. The problem is defined clearly to me. In section 2.2, I would suggest adding a flow chart to illustrate GA for audiences who are not familiar with it. 2. Line 139-Line 149, why should we put the restriction on vocabulary? For example, in language generation, changing/replacing one single word may make the sentence look better. If we restrict the vocabulary, it seems that some tasks such as text editing can no longer be done. 3. I feel the proposed method is essentially the same as a general GA, which may indicate that the paper lacks novelty. Also, will the proposed GA converge? 4. In results such as Table 1 and Table 2, I would suggest adding a bound/error bar to indicate whether the comparisons are significant. 5. Is there any chance that the algorithm is trapped in a loop? 6. I feel in Table 5, for both CVRP and TSP, the baseline is just Kim et al., 2025, which is too limited, as the authors mention many other works such as Kool et al., 2022; Ye et al., 2023; Kim et al., 2025. Theoretical Claims: No heavy theories are involved. Experimental Designs Or Analyses: In results such as Table 1 and Table 2, I would suggest adding a bound/error bar to indicate whether the comparisons are significant. Supplementary Material: Supplemental material is well-structured.
Relation To Broader Scientific Literature: I feel the paper just proposes a very general genetic algorithm by modifying some of its steps. To me it is clearly related to works that use genetic algorithms for data generation, but not that novel. Essential References Not Discussed: References look good to me. Other Strengths And Weaknesses: Please refer to my comments in previous sections. Other Comments Or Suggestions: 1. Please add more explanation to the caption of Figure 1. When I read that part, I was still confused about how GA works to generate offspring. 2. Line 84, right side: "replace" => "replaces". Questions For Authors: Please refer to my comments in previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ### GA illustration > 1. ... add a flow chart to illustrate GA Thanks for the valuable feedback. We’ve found that Fig 2 can also serve as a flow chart of a general GA. We’ll revise the caption to further explain the overall GA process and our distinguishing features: NGS leverages pretrained generative models within the GA framework. While conventional GAs typically rely on random initialization and problem-specific crossover and mutation, NGS begins with samples from the pretrained models and evolves the population by incorporating novel crossover and mutation mechanisms. ### Why restrict the vocabulary? The token restriction in our **crossover** serves the same purpose as crossover in traditional GAs: to focus the search on promising regions near high-quality parents. By restricting the vocabulary to tokens present in the parents, we guide the generation process toward promising regions near them. We agree that crossover alone would limit applicability for tasks like text editing. This is why **mutation** is essential in NGS. When mutation occurs, token restrictions are removed, allowing the model to introduce tokens not present in either parent. This is how “changing/replacing one single word” happens in our framework. In practice, we’ve found that this balance between restricted crossover and strategic mutation effectively explores the solution space. We’ll make this point clearer in the revised manuscript. ### Novelty > 3. exactly the same with general GA ... lacks novelty > ... just proposes a very general genetic algorithm Thanks for raising these important points about novelty. First, Genetic Algorithms (GAs) are meta-heuristics that define general search protocols, but their effectiveness depends heavily on how components like crossover and mutation are designed, often requiring significant domain expertise. NGS introduces a novel approach by learning these operators using a sequential generative model, as detailed in Section 3.3.2.
Second, our work is positioned within the deep learning domain, and our primary contribution is proposing NGS as a test-time search method for sequential generative models. To our knowledge, this is the first approach that integrates GAs directly into the generative process via neural models—unlike prior work that applies GAs to outputs post-generation. A novel combination of a traditional algorithm and a deep generative model should not be underrated. We appreciate the reviewer’s recognition of the method’s generality. We will clarify these points and better highlight our contributions in the revised manuscript. ### Convergence > 3. ... will the proposed GA converge? Unfortunately, we don’t have formal theoretical guarantees for the convergence. Please refer to our response to reviewer r57C (Theoretical guarantee) for detailed discussions regarding this point. > 5. ... trapped to a loop? NGS won’t get trapped in a loop as long as the pretrained policy has full support (which is usually true). Our stochastic mutation mechanism ensures that all $x\in\mathcal{X}$ have a positive probability of being generated. Specifically, $p_{\text{NGS}}(x) > 0$ whenever $p_{\theta^*}(x)>0$. This property prevents the algorithm from getting stuck exploring only a subset of the solution space. ### Bound/Error bar > 4. add a bound/error bar We provided the $\pm$ 1-sigma range over three independent runs in Fig 5. As depicted in the figure, the results are robust to randomness. We’ll include the comprehensive results with standard deviation for Tables 1, 2, and 5 in our camera-ready version. ### Baselines in Table 5 > in Table 5 ... the baseline is just Kim et al., 2025 We limit our baselines to the various search methods that can be used with the **same pretrained model** for fair evaluations. The works you mentioned rely on different pretraining schemes from what we used. Note that the “ACO” baseline represents [1] ’s performance. 
NGS significantly outperformed this method, which is notable because [1] already demonstrated superior performance compared to the other works you mentioned. For comprehensive comparisons against other algorithms, please refer to Kim et al. (2025). [1] Kim et al. “Ant colony sampling with GFlowNets for combinatorial optimization.”, AISTATS, 2025. ### Suggestions > add more explanations to the caption of Figure 1. Thanks for the suggestion! We will add the following: “Two parent chromosomes (Yellow, Blue) are crossed over by following one of the parents at each step of the sequential generation process. Mutation (Pink) occasionally occurs and can take any allowed action modeled by the generative model.” Also, please refer to Fig 3 for the detailed generation process. We will also improve the caption of Fig 3. > "replace" => "replaces". Thanks. We will fix it. --- **We appreciate your valuable input; please let us know if we can provide any further clarifications.**
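The crossover/mutation mechanism discussed in this thread (restrict each generation step to tokens present in either parent; with some probability, lift the restriction) can be sketched as below. This is an illustrative sketch, not the authors' implementation: `step_fn` is a hypothetical stand-in for the pretrained model's next-token sampler, and all names are assumptions.

```python
import random

def ngs_offspring(parent_a, parent_b, vocab, step_fn, length, p_mut=0.1, rng=random):
    # Parent-conditioned generation: at each step the candidate tokens are
    # restricted to those appearing in either parent (crossover); with
    # probability p_mut the restriction is lifted to the full vocabulary
    # (mutation), which is how tokens absent from both parents can appear.
    # step_fn(prefix, allowed): pick the next token from the `allowed` set,
    # conditioned on the prefix generated so far.
    parent_tokens = set(parent_a) | set(parent_b)
    child = []
    for _ in range(length):
        allowed = set(vocab) if rng.random() < p_mut else parent_tokens
        child.append(step_fn(child, allowed))
    return child
```

With `p_mut = 0` the offspring is confined to the parents' token union; any `p_mut > 0` gives every token in the vocabulary a positive probability at every step, which is the full-support property the authors invoke to argue the search cannot get stuck in a loop.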
Summary: Neural Genetic Search (NGS) incorporates ideas from genetic algorithms into generative models as a test-time compute method to improve the quality/diversity of generation. Crossover: generating each token autoregressively, masking all tokens not used in the parents. Mutation: a random chance of removing the masking token restrictions at each token, allowing for more diverse output. It evolves a population akin to a GA; given some reward function between 0 and 1, they use rank-based priority sampling to push toward higher rewards. This approach is problem-agnostic, as long as autoregressive token-based models are used and there is a reward. It's powerful because it doesn't need backpropagation. They evaluate NGS on molecule generation, language models, and routing problems, comparing it to beam search, Monte Carlo Tree Search, ant colony and other search algorithms. Claims And Evidence: Claims: - novel method (yes, as far as I can tell) - effective test-time search method based on the 3 sets of experiments (yes, they have good evidence) - NGS can replace conventional decoding with improved robustness (yes, they have good evidence) - NGS can be viewed as an automated genetic algorithm with learned genetic operators (yes, but they should clarify that this is for autoregressive models with tokens) Methods And Evaluation Criteria: Yes, the evaluation and problems tested are very diverse and convincing Theoretical Claims: There is no strong theory; it's a clever algorithm to improve diversity/quality based on a reward without backprop Experimental Designs Or Analyses: See "Methods And Evaluation Criteria" Supplementary Material: No Relation To Broader Scientific Literature: Fits well with the recent trend for test-time compute; it's very general and can be applied to various situations, basically any autoregressive model. They properly cite GA with generative models and test-time compute literature as far as I can tell.
Essential References Not Discussed: No Other Strengths And Weaknesses: . Other Comments Or Suggestions: . Questions For Authors: . Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your encouraging review. We greatly appreciate your recognition of the novelty and effectiveness of our proposed method, as well as your positive remarks on the diversity and rigor of our experiments. > NGS can be viewed as an automated genetic algorithm with learned genetic operators (yes, but they should clarify that this is for autoregressive models with tokens) We appreciate your suggestion to clarify the scope of our claim regarding NGS as an automated genetic algorithm with learned operators. We will revise the sentence in lines 62-64 (left) to "NGS can be viewed as an automated genetic algorithm that leverages sequential generative models to learn genetic operators, eliminating the need for extensive labor for algorithm design." --- Rebuttal Comment 1.1: Comment: Genetic algorithms and evolutionary methods are always perceived and rated negatively at AI conferences because they don't have any theory. But I believe that this work is useful and this one especially is interesting to the community, especially with the recent interest in test-time compute. I am writing this just to put things in perspective considering the low scores from the other reviewers. I will keep my score the same.
Summary: This paper proposes a test-time search method, Neural Genetic Search (NGS), for efficient searching of generative models in discrete spaces. NGS incorporates genetic algorithms into the generation procedure, where the generative model is used to generate offspring conditioned on parents. Experimental results across different domains showcase the effectiveness of NGS. Claims And Evidence: Most claims in the submission are supported by convincing evidence. However, there are several gaps in the proposed method and experiments to support the novelty and performance of this paper. Please refer to the details in the "Methods and Evaluation Criteria" and "Experimental Designs or Analyses" sections. Methods And Evaluation Criteria: Most parts of the proposed methods and evaluation make some sense. However, there remain several issues: - The method lacks theoretical guarantees or a rigorous understanding of the generated distribution. - The optimization target is not clear: is it to generate samples to maximize the evaluation criteria $r(x)$, or to generate reward-tilted distributions $\frac{1}{Z}p_0(x) e^{r(x)/\alpha}$? For each case, the methods to compare, the tasks, and the evaluations are different. For the case of reward maximization, the tasks and evaluations should be robust to reward hacking, and there exist model-based optimization methods (e.g., [1]) for this. For the case of the reward-guided generative model, the evaluations should consider some measures to verify that the generated samples remain valid with respect to the pretrained distribution, and there are many reward-guided sampling methods for this (e.g., [2-4]). However, the paper lacks discussion and comparison to these methods. - An analysis of how the pretrained model helps the parent-conditioned offspring generation and what makes an optimal policy in the generation would be helpful. [1] Designing Cell-Type-Specific Promoter Sequences Using Conservative Model-Based Optimization. NeurIPS 2024.
[2] Unlocking guidance for discrete state-space diffusion and flow models. ICLR 2025. [3] Practical and asymptotically exact conditional sampling in diffusion models. NeurIPS 2023. [4] Derivative-free guidance in continuous and discrete diffusion models with soft value-based decoding. arXiv 2024. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: - In the ablation study in Figure 6, for the mutation rate $\mu$ of NGS, a very small value (0.001) is always preferred. This seems to indicate that the mutation is harmful to the algorithm. - How does the performance scale with an increasing number of mutations? - The paper lacks comparison with model-based optimization methods and reward-guided methods (see "Methods and Evaluation Criteria"). Supplementary Material: I reviewed all parts of the supplementary material. Relation To Broader Scientific Literature: The paper is related to sequential generation, discrete space optimization, and evolutionary algorithms. Essential References Not Discussed: - Model-based optimization: [1] Designing Cell-Type-Specific Promoter Sequences Using Conservative Model-Based Optimization. NeurIPS 2024. - Reward-guided sampling: [2] Unlocking guidance for discrete state-space diffusion and flow models. ICLR 2025. [3] Practical and asymptotically exact conditional sampling in diffusion models. NeurIPS 2023. [4] Derivative-free guidance in continuous and discrete diffusion models with soft value-based decoding. arXiv 2024. Other Strengths And Weaknesses: The paper proposes an interesting method that combines evolutionary algorithms and discrete space generative models, and experiments on a variety of domains. I'm open to increasing my score if the authors can provide a convincing clarification on the optimization target and a more rigorous understanding of the algorithm, as well as a complete comparison to related baselines.
Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### Theoretical guarantees > The method lacks theoretical guarantees or a rigorous understanding of the generated distribution. Thanks for highlighting the importance of theoretical guarantees and analysis. We acknowledge that rigorous theoretical guarantees, such as formal convergence rates, are indeed valuable. However, deriving such guarantees for deep-learning-based search algorithms is exceptionally challenging. Moreover, in the genetic algorithm literature, existing guarantees are often contingent on idealized assumptions or specific problem classes, making them difficult to generalize widely. We will include this in the limitations in Section 6. The primary goal of this work is to provide a methodological and empirical contribution. We demonstrate NGS’s flexibility and effectiveness by presenting performance gains on four distinct routing problems, adversarial prompt generation for language models, and molecular optimization tasks. We believe these results clearly illustrate the practical benefits of our proposed approach across various domains. ### Optimization target and related works Our goal is to find a discrete object $x$ that maximizes a given objective function $f$ (lines 73-74). More specifically, we aim to find $x^*=\arg\max_{x \in \mathcal{X}} f(x)$, where $\mathcal{X}$ is the set of all possible objects. Our objective function $f$ can be ground truth, noisy simulation, or even proxy models (i.e., model-based optimization). - When the objective function is not ground-truth-based, we use diversity as an additional evaluation criterion; generating more diverse solutions leads to more robustness to reward hacking (e.g., our red-teaming experiments). - NGS can be incorporated into MBO methods that use conservative proxy models, e.g., the one proposed in [1]. Verifying the effectiveness of NGS within the MBO framework is an interesting future work. We will include this discussion in the conclusion.
Note that MBO methods are orthogonal to NGS, so they are not included as our baseline. Thanks for pointing out this important perspective. We will further emphasize our setup in the introduction and at the beginning of Section 3. [1] Designing Cell-Type-Specific Promoter Sequences Using Conservative Model-Based Optimization. NeurIPS 2024. ### Analysis on pretrained model > An analysis of how the pretrained model helps the parent-conditioned offspring generation and what makes an optimal policy in the generation would be helpful. To clarify how the pretrained model supports parent-conditioned offspring generation, we refer to Equations (2)–(4). Note that, for simplicity, we omit the explicit notation $\theta^*$ for the pretrained policy. In Eq. 2, our crossover mechanism works by masking out tokens not present in the parents. So $p_{\text{cross}}$ is essentially the pretrained model with certain tokens masked out based on selected parents $\textbf{s}^1$ and $\textbf{s}^2$. In Eq. 3, we combine $p_{\text{cross}}$ with the original pretrained model $p$ using a binary random variable $M_{s_{t},\textbf{s}^1,\textbf{s}^{2},\mu}$ that indicates whether mutation occurs. If mutation occurs, we sample the next token from the full pretrained model $p$; if not, we sample from the masked version $p_{\text{cross}}$. Using a pretrained generative model is advantageous: - Reward-awareness: Since the model is trained to generate high-reward solutions, it implicitly captures useful patterns for guiding offspring toward promising regions—not just blending parents, but extrapolating toward better candidates. - Validity: In contrast to hand-crafted crossover rules, the pretrained model has already learned how to construct valid solutions, so offspring remain feasible without requiring domain-specific heuristics. 
As will be discussed in the following question, balancing exploitation (resembling high-reward parents) and exploration (introducing new components via mutation) is important to find optimal solutions. ### Effect of mutation rate > ... in Figure 6, for the mutation rate $\mu$ of NGS, a very small value (0.001) is always preferred. > How does the performance scale with an increasing number of mutations? Mutation encourages exploration during the generation procedure by removing the token restriction from crossover. Using too low a value of $\mu$ makes it hard to escape from local optima. Although the results in Fig 6c for TSP seem to give better results with a lower mutation rate, this trend is not consistently observed in CVRP or the red-teaming LM task. We have included additional experiments analyzing the impact of different mutation rates ($\mu$) on the red-teaming LM task using the Llama-3.2 victim. As shown in the table below, our method remains robust across a range of $\mu$ values.

|$\mu$|Toxicity|Diversity|
|-|-|-|
|0.01|0.699 ± 0.029|0.790 ± 0.013|
|0.02|0.698 ± 0.009|0.791 ± 0.011|
|0.05|**0.712** ± 0.008|0.791 ± 0.015|
|0.1|0.681 ± 0.015|0.801 ± 0.010|
|0.2|0.659 ± 0.012|**0.804** ± 0.008|

--- Rebuttal Comment 1.1: Comment: Thank the authors for the rebuttal. I increased my score.
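As an editorial illustration of the crossover-with-mutation sampling discussed in this rebuttal (masking the pretrained model's next-token distribution to tokens present in the parents, and lifting the mask with probability $\mu$), here is a minimal sketch. This is not the authors' implementation; `step_probs` is a hypothetical stand-in for the pretrained policy, and the token-union mask is a simplification of the paper's per-step masking.

```python
import random

def sample_offspring(step_probs, parent1, parent2, mu, seq_len):
    # Tokens allowed under crossover: those appearing in either parent.
    allowed = set(parent1) | set(parent2)
    offspring = []
    for _ in range(seq_len):
        probs = step_probs(offspring)  # pretrained next-token distribution p
        if random.random() >= mu:
            # No mutation: restrict sampling to parent tokens (p_cross).
            probs = {t: p for t, p in probs.items() if t in allowed}
        # Renormalize the (possibly masked) distribution and sample one token.
        total = sum(probs.values())
        r, acc = random.random() * total, 0.0
        for t, p in probs.items():
            acc += p
            if r <= acc:
                offspring.append(t)
                break
    return offspring
```

With $\mu = 0$ the offspring is built entirely from parent tokens; with $\mu = 1$ it reduces to unconstrained sampling from the pretrained model.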
Sparse Video-Gen: Accelerating Video Diffusion Transformers with Spatial-Temporal Sparsity
Accept (poster)
Summary: The paper presents Sparse Video-Gen (SVG), a training-free framework aimed at accelerating the inference of video diffusion models. Diffusion Transformers (DiTs) are powerful for video generation but suffer from high computational costs due to the quadratic complexity of 3D full attention. The authors reveal two distinct sparse attention patterns in DiTs: Spatial Head and Temporal Head. SVG uses an online profiling strategy to identify these patterns for sparse attention and a novel hardware-efficient tensor layout transformation, along with customized kernel implementations. It achieves significant speedups (up to 2.28x and 2.33x on CogVideoX and HunyuanVideo respectively) while maintaining high-quality video generation. Claims And Evidence: Claims: The main claims are that SVG can accelerate video diffusion models, identify distinct sparse attention patterns, and achieve high-quality video generation at a faster speed. Methods And Evaluation Criteria: Yes Theoretical Claims: This paper does not contain theoretical claims. Experimental Designs Or Analyses: The analyses of the experimental results are detailed. Supplementary Material: Not specified in this review as the content of the supplementary material was not provided. Relation To Broader Scientific Literature: The paper is well-related to the broader scientific literature. It builds on previous work in efficient diffusion models, efficient attention, and sparse attention. The authors discuss how their work is orthogonal to existing techniques such as decreasing denoising steps, diffusion model compression, and linear/low-bit attention, and can be combined with them for further gains. They also contrast their approach with existing sparse attention methods in LLMs, highlighting that those methods do not leverage the inherent redundancy of video data. Essential References Not Discussed: No essential references were identified as missing from the paper.
The authors have covered a wide range of relevant works in the field of video generation, efficient attention, and diffusion models. Other Strengths And Weaknesses: ### **Strengths** 1. **Sufficient Model Evaluation**: SVG is evaluated on representative open-sourced video generation models, including CogVideoX and HunyuanVideo. ### **Weaknesses** 1. **Limited Technical Depth**: The paper appears to be mainly an empirical exploration. It lacks in-depth technical analysis beyond the proposed online profiling strategy and layout transformation. For instance, the theoretical analysis of the sparsity patterns mainly focuses on the computational savings and does not delve deeply into the underlying mathematical principles or theoretical insights that govern these sparse patterns in the context of video diffusion models. This makes the technical contribution seem rather shallow in the ICML context. 2. **Superficial Understanding of Sparsity**: In this paper, while they identify two types of attention heads based on sparse patterns (Spatial Head and Temporal Head), they do not provide a comprehensive exploration of why these patterns emerge at a fundamental level. There is no in-depth discussion on how the nature of video data, such as temporal continuity and spatial structure, precisely relates to these sparse patterns. Without a more thorough understanding of the root causes of sparsity, it is difficult to fully assess the generality and scalability of the proposed method. 3. **No video samples.** It is weird that a video generation paper does not have videos in the supplementary material. Overall, I hold a positive perspective on this paper. Addressing the aforementioned issues will significantly enhance the paper’s quality. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful feedback and the opportunity to discuss our work further. Your comments have been invaluable in refining our approach and clarifying our presentation. > Weakness 1: In-depth technical analysis and underlying mathematical principles or theoretical insights that govern these sparse patterns in the context of video diffusion models. Answer: Here we provide the theoretical analyses of SVG. Consider a video with N frames, each containing F tokens. Every token can be encoded by an indices pair $(i, j)$, where $0 \leq i < N, 0 \leq j < F$. In the attention map we flatten these two-dimensional indices into a 1D vector. The corresponding formula is $x = i \cdot F + j$. **Spatial Head**: For the spatial head, let $a_1 > 0$ denote the threshold for spatial closeness between tokens. The spatial attention mask is defined as: $f_{s}\left( (i_1, j_1), (i_2, j_2) \right) = 1 \text{ if } |i_1 - i_2| < a_1, \text{ otherwise } 0$ For flattened indices $x_1 = i_1 \cdot F + j_1$ and $x_2 = i_2 \cdot F + j_2$, $f_{s}(x_1, x_2) = 1$ is equivalent to: $| \lfloor \frac{x_1}{F} \rfloor - \lfloor \frac{x_2}{F} \rfloor | < a_1$ The resulting attention map takes the form of block-banded structures: the attention mask lies on the main diagonal and also on the neighboring $\pm(a_1 - 1)$ diagonals. **Temporal Head** For the temporal head, let $a_2 > 0$ denote the threshold for temporal closeness between tokens.
The temporal attention mask is defined as: $$ f_{t}\left( (i_1, j_1), (i_2, j_2) \right) = 1 \text{ if } |j_1 - j_2| < a_2, \text{ otherwise } 0 $$ For flattened indices $x_1 = i_1 \cdot F + j_1$ and $x_2 = i_2 \cdot F + j_2$, $f_{t}(x_1, x_2) = 1 \Leftrightarrow | (x_1 \bmod F) - (x_2 \bmod F) | < a_2 \Leftrightarrow | \left(x_1 - F \cdot \lfloor \frac{x_1}{F} \rfloor\right) - \left(x_2 - F \cdot \lfloor \frac{x_2}{F} \rfloor\right) | < a_2 \Leftrightarrow \exists k, -(N-1) \leq k \leq N-1, |(x_1 - x_2) - k F| < a_2$ The resulting attention map forms $2N - 1$ slanted diagonals that align along constant column index differences. These diagonals, often referred to as “slashes,” correspond to the token positions sharing similar temporal locations across frames. The corresponding points in the attention matrix yield a “slash-wise” pattern with width $a_2$. --- In addition, we would like to emphasize that our work also provides other technical depth beyond the proposed online profiling and layout transformation. Our contributions comprise four main aspects. (1) We identify the spatial and temporal sparse patterns, which unveil the potential for efficient acceleration. (2) We propose algorithmic innovations, including an online profiling strategy and a layout transformation that exploit these patterns. (3) We co-design highly optimized kernels with our algorithm to translate theoretical computation savings into on-hardware speedup. (4) We present thorough evaluations showing that SVG can achieve an end-to-end speedup of up to 2.33x while maintaining high visual quality with a PSNR of up to 29. > Weakness 2: Comprehensive exploration of why these patterns emerge at a fundamental level. Answer: In essence, these patterns arise because the pixels in a video often exhibit **high similarity** both within a single frame (spatially) and across consecutive frames (temporally).
As similar patches in a video share nearly identical embeddings in the Q/K/V space, the self-attention mechanism naturally assigns high attention scores to these similar patches. Spatial Head: Within a single frame, neighboring pixels (and patches) often share similar colors, brightness values, and gradual transitions. If a patch in one region of a frame has a high attention score to itself, it tends to exhibit high attention to other patches in the same frame that share similar visual patterns. Temporal Head: Consecutive frames in a video, especially in static or slowly changing scenes, exhibit high similarity for the same patch location across time. If a particular patch has a strong correlation with itself (i.e., high attention to its own embedding), it will also place high attention on patches in other frames that share similar visual features. Traditional video compression techniques have similarly leveraged spatial and temporal redundancy to reduce data size and complexity. For example, H.264 (used in MP4) uses intra-prediction and the discrete cosine transform (DCT) to reduce spatial redundancy, and uses inter-prediction, recording only the differences (motion vectors and residuals) between consecutive frames, to reduce temporal redundancy. These well-known strategies highlight that video data inherently contains a high degree of spatial and temporal redundancy—precisely the property our Sparse VideoGen approach leverages. > Weakness 3: No video sample. Video samples showcasing the results of our method can be accessed at the following anonymous link: https://drive.google.com/drive/folders/1jhEpZ69bKfyZWmoy63iS3FhECNnX-AZU?usp=sharing
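For illustration, the block-banded (spatial) and slash-wise (temporal) masks derived in the rebuttal above can be built directly from the flattened token indices. This is our own sketch with hypothetical function names, not code from the paper.

```python
import numpy as np

def spatial_mask(N, F, a1):
    # Token x attends to token y iff their frame indices differ by < a1,
    # yielding the block-banded pattern of a Spatial Head.
    frames = np.arange(N * F) // F
    return np.abs(frames[:, None] - frames[None, :]) < a1

def temporal_mask(N, F, a2):
    # Token x attends to token y iff their within-frame positions differ
    # by < a2, yielding the slash-wise pattern of a Temporal Head.
    pos = np.arange(N * F) % F
    return np.abs(pos[:, None] - pos[None, :]) < a2
```

For example, with `N = 3` frames of `F = 4` tokens and `a2 = 1`, `temporal_mask` lights up exactly the $2N - 1$ slashes at column-index offsets that are multiples of $F$.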
Summary: This paper proposes Sparse VideoGen for efficient (accelerated) video generation, a method which can be applied to existing video generative diffusion models which use diffusion transformers. The paper mainly targets video generative models which operate on the spatio-temporal latents. The key idea is to classify the inherent sparsity in the attention heads into either the spatial head or the temporal head based on their sparsity patterns, and to sparsify the attention operation based on the classified result. The authors notice that the sparsity patterns can be dynamic, i.e., the same head can exhibit varying sparsity patterns depending on the prompt or timestep. To handle this, the authors propose an online profiling strategy to determine the inherent sparsity pattern on-the-fly. Combined with the proposed hardware-aware tensor layout transformation and customized kernel implementations of the QK-norm and rotary positional embedding (RoPE), SVG achieves SoTA speedup of existing (spatio-temporal) video generative models. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence, e.g., lossless accuracy in 3.2 supported by PSNR values, quality and efficiency results in Table 1, and the sensitivity test on online profiling strategy ideas in Table 3. Methods And Evaluation Criteria: The proposed method and the evaluation criteria (videos generated from prompts provided in VBench; quality: PSNR, SSIM, LPIPS, VBench scores; efficiency: FLOPs, latency, speedup) make sense for the problem or application at hand. Theoretical Claims: The paper does not propose proofs or theoretical claims that require correctness checks -- the claim that 'sparse attention achieves lossless accuracy' has been sufficiently validated. Experimental Designs Or Analyses: The experimental designs and analyses seem sound and valid. Supplementary Material: The authors did not include any supplementary material.
Relation To Broader Scientific Literature: As the authors have already mentioned, sparsity in transformers offer a great opportunity to reduce redundant computation e.g., in LLMs. However, the transformer architecture is likely to show largely varying sparsity patterns based on the task / field at hand e.g., video data has fundamentally different sparsity patterns. The findings in this work may be valuable for other fields where transformer networks are prominent e.g., MLLMs. Essential References Not Discussed: I could not find any significant essential reference which was not discussed. Other Strengths And Weaknesses: Strengths \ S1. Strong performances in terms of efficiency and quality. \ S2. Comprehensive experiments to validate the efficacy of each contribution made by SVG. \ S3. Well written and easy to follow. Weaknesses - I could not find any notable weaknesses from the paper. Other Comments Or Suggestions: No other comments or suggestions. Questions For Authors: Once each attention head has been classified and sparsified to spatial / temporal heads, the overall pipeline may resemble video generative models which operate in the spatial domain i.e., pipeline consisting of interleaved spatial attention and temporal attention (e.g., SVD). How does the performance / efficiency compare to such models, which do not use 3D full attention? Such comparison would be valuable in understanding where SVG stands in the overall literature of video generative models. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging our contributions and the insightful comments. We respond to the questions below. We will revise the paper's final versions based on the reviewers' comments. > How does the performance / efficiency compare to models interleaving spatial attention and temporal attention (2D + 1D Attention)? Answer: Recent state-of-the-art models [1,2,3] have shifted toward using 3D full attention, which significantly improves performance at the cost of higher computational complexity. In contrast, although interleaved spatial and temporal attention [4,5,6] is computationally efficient, it suffers from **limited modeling capability**—especially in capturing complex spatiotemporal dependencies—leading to lower generation quality. Our method delivers video **generation quality on par with full 3D attention** models while operating at a **significantly lower computational cost**. Compared to 2D + 1D approaches, SVG achieves not only substantial improvements in visual fidelity but also stronger temporal consistency. In terms of efficiency, SVG sits between 2D + 1D and full 3D attention—achieving a compelling **balance of speed and quality**. For example, as shown in the table below, OpenSora-v1.2—a representative open-sourced model utilizing 2D+1D attention—achieves significantly lower VBench scores than Wan2.1, a 3D model of similar parameter size. By reducing attention redundancy, SVG effectively decreases attention FLOPs by 2.55x while preserving superior generation quality. Thus, SVG presents a practical and efficient alternative for high-quality video generation. | Model | Attention Pattern | #Params | Attention FLOPS | VBench | |---------------|-------------------|---------|-----------------|--------| | OpenSora-v1.2 | 2D+1D | 1.1B | 8.4×10^12 | 77.59 | | Wan2.1 | 3D | 1.3B | 7.4×10^13 | 82.91 | | Wan2.1 | SVG | 1.3B | 2.9×10^13 | 82.91 | [1] Kong, Weijie, et al. 
"Hunyuanvideo: A systematic framework for large video generative models." arXiv preprint arXiv:2412.03603 (2024). [2] Wang, Ang, et al. "Wan: Open and Advanced Large-Scale Video Generative Models." arXiv preprint arXiv:2503.20314 (2025). [3] Yang, Zhuoyi, et al. "Cogvideox: Text-to-video diffusion models with an expert transformer." arXiv preprint arXiv:2408.06072 (2024). [4] Zheng, Zangwei, et al. "Open-sora: Democratizing efficient video production for all." arXiv preprint arXiv:2412.20404 (2024). [5] Lin, Bin, et al. "Open-sora plan: Open-source large video generation model." arXiv preprint arXiv:2412.00131 (2024). [6] Ma, Xin, et al. "Latte: Latent diffusion transformer for video generation." arXiv preprint arXiv:2401.03048 (2024). --- Rebuttal Comment 1.1: Comment: Thank you very much for the response! The provided results are valuable in understanding that given similar number of parameters, leveraging SVG shows a high decrease in the attention FLOPs in comparison to using a full 3D attention, but still incurs higher FLOPs then the 2D+1D attention pattern -- which aligns with the response which mentions that "SVG sits between 2D + 1D and full 3D attention". I believe these results are valuable in understanding where SVG stands in the overall literature of video generative models, from the perspective of efficiency. Having read through other reviewers' comments and the corresponding author responses, I do not have further comments or concerns.
Summary: This paper addresses the critical challenge of computational inefficiency in Diffusion Transformers (DiTs) for video generation, caused by the quadratic complexity of 3D full attention operations. To tackle this issue, the authors propose Sparse VideoGen (SVG), a training-free inference optimization framework leveraging the inherent sparsity within 3D attention. Specifically, they reveal attention heads can be dynamically classified into two categories: Spatial Heads, dominated by spatially-related tokens within frames, and Temporal Heads, dominated by temporally-related tokens across frames. SVG employs an online profiling strategy to dynamically identify these sparse patterns during inference, coupled with a hardware-efficient tensor layout and customized kernel implementations. ### Post-rebuttal Thank the authors for the rebuttal. I have no further questions and will keep my original score, leaning towards accepting the paper. Claims And Evidence: The claims presented in the submission are supported by experimental evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate. Theoretical Claims: The paper does not contain explicit theoretical claims, formal proofs, or mathematical theorems that require verification. Therefore, there were no theoretical proofs to check. Experimental Designs Or Analyses: Here are two key questions highlighting potential experimental concerns: **Lack of Real Video Demonstrations**: The submission currently does not provide actual video demos, making it challenging to clearly evaluate the impact of sparse attention acceleration on generated video quality. Could the authors provide actual generated video examples (particularly corresponding to the experiments shown in Figure 1 and Figure 6)? These demos would clarify how much video quality is affected by the acceleration method. **Performance on Challenging Scenes**: How does the method perform in more challenging scenarios?
Specifically, could the authors clearly demonstrate the performance on difficult cases, such as those involving complex spatial-temporal dynamics (e.g., rapid movements, human interactions such as group dancing, fighting, or dynamic sports scenes)? Providing video demonstrations of challenging scenes—such as a football match, basketball gameplay, rope skipping, fast-paced dancing, or multi-person interactions—would strengthen the claims regarding spatial and temporal consistency. Supplementary Material: No supplementary material. Relation To Broader Scientific Literature: The key contributions of this paper are closely related to the broader literature on efficient inference techniques for generative models, particularly diffusion-based video generation. Essential References Not Discussed: I did not identify any essential related works that are missing from the current citations. Other Strengths And Weaknesses: **Additional Weakness**: Insufficient Clarity in Implementation Details. The manuscript currently lacks clarity regarding the implementation specifics of the proposed online profiling strategy. It is unclear exactly when this profiling occurs—does the profiling need to be computed at every diffusion step, or is it computed just once or intermittently? Moreover, does the profiling dynamically vary across different prompts, layers, or diffusion timesteps? Clarifying and validating this would strengthen the paper. Specifically, could the authors provide experimental evidence or analysis showing the stability or variability of profiling patterns under different conditions (e.g., across prompts, layers, or timesteps)? Other Comments Or Suggestions: I have no additional comments or suggestions. Questions For Authors: Please answer the questions in the experimental analyses and Implementation Details Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments and feedback. We respond to the questions below. > Experimental Designs Or Analyses 1: The submission currently does not provide actual video demos. Answer: As requested by the reviewer, we include actual video demonstrations at the following anonymous link: https://drive.google.com/drive/folders/1jhEpZ69bKfyZWmoy63iS3FhECNnX-AZU?usp=sharing. Our method achieves **nearly lossless** video quality. > Experimental Designs Or Analyses 2: How does the method perform in more challenging scenarios? Such as those involving complex spatial-temporal dynamics. Answer: As suggested by the reviewer, we additionally include some representative video results of complex spatial-temporal dynamics in the following anonymous link: https://drive.google.com/drive/folders/1xscj4RcaOE-PeAXq6yiiUFqnTJqS5kaV?usp=sharing. Our method maintains **high video fidelity (PSNR=28.624)** under these challenging cases, such as scenes involving rapid motion and dynamic multi-person interactions. > Weakness 1: Insufficient clarity regarding the implementation specifics of the proposed online profiling strategy. When this profiling occurs, and does the profiling dynamically vary across different prompts, layers, or diffusion timesteps? Answer: Thank you for the valuable feedback. In our implementation, online profiling is performed at **every diffusion step** and **every layer** right before the 3D full-attention computation. This ensures that SVG dynamically adapts to the different prompts and spatial-temporal patterns at each step. We would like to emphasize that dynamic online profiling is essential to our method. We find that attention patterns exhibit **substantial variation**, especially across different steps and prompts. Static profiling patterns—whether fixed across layers, steps, or prompts—will lead to **performance degradation**, as demonstrated in Table 1, measured by PSNR, SSIM and LPIPS. The results are averaged over 10 examples. 
Our SVG uses the dynamic profiling strategy, consistently outperforming all static configurations. Table 1 | | PSNR (↑) | SSIM (↑) | LPIPS (↓) | |------------------------|----------|----------|-----------| | Static Across Layers | 22.365 | 0.846 | 0.187 | | Static Across Steps | 28.007 | 0.866 | 0.131 | | Static Across Examples | 26.955 | 0.803 | 0.185 | | SVG (Ours) | **29.062** | **0.879** | **0.122** | Static profiling patterns lead to significant quality degradation due to inaccurate identification of attention patterns. To illustrate this, we analyze the variation in attention patterns across different layers, diffusion steps, and data examples (prompts) by computing cosine similarity. Specifically, we employ oracle profiling to identify attention patterns for each head across different configurations. For example, we calculate the variation across different layers by computing a tensor with shape [num_layers, num_heads]. Each element in this tensor is labeled either -1 (indicating a spatial head) or 1 (indicating a temporal head). By calculating the cosine similarity between columns of this tensor, we quantify the variation of attention patterns. As demonstrated in Table 2, attention patterns exhibit **substantial variation**, necessitating a dynamic online profiling method. Table 2 | | Across Layers | Across Steps | Across prompts | |-------------------|---------------|--------------|----------------| | Cosine Similarity | 0.2597 | 0.7158 | 0.7238 |
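For concreteness, the Table 2 computation described above (cosine similarity between columns of the ±1 spatial/temporal label tensor) can be sketched as follows; `pattern_similarity` is a hypothetical helper name, not the authors' code:

```python
import numpy as np

def pattern_similarity(labels_a, labels_b):
    """Cosine similarity between two head-label vectors.

    Each entry is -1 (spatial head) or +1 (temporal head), e.g. one
    vector per layer, per diffusion step, or per prompt. A value near
    1.0 means heads keep the same spatial/temporal roles, while a value
    near 0 means roughly half of the heads change type.
    """
    a = np.asarray(labels_a, dtype=float)
    b = np.asarray(labels_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identical role assignments give similarity 1.0; flipping half of the
# heads drives the similarity down to 0.0.
same = pattern_similarity([1, 1, -1, -1], [1, 1, -1, -1])  # 1.0
half = pattern_similarity([1, 1, -1, -1], [1, -1, -1, 1])  # 0.0
```

Under this measure, the low cross-layer similarity (0.26) versus the moderate cross-step/cross-prompt similarity (~0.72) reported in Table 2 is what motivates profiling at every layer and step.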
Summary: This paper proposes Sparse VideoGen, a training-free method to optimize and accelerate video diffusion DiTs through an online profiling strategy and hardware-friendly implementation. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: NA Relation To Broader Scientific Literature: diffusion model acceleration Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. The observation that the attention layers in video diffusion can be divided into two types—pure spatial and pure temporal attention—is interesting and insightful. 2. The proposed online profiling strategy is well-defined, as the attention type in DiTs is highly dynamic depending on input data and denoising steps. Such a strategy is necessary to handle various scenarios effectively. 3. The proposed sparse attention acceleration is demonstrated to be efficient, as shown in Table 1. 4. The presentation is clear and easy to follow. Weaknesses: I do not have significant concerns. However, I am curious whether the observation that attention layers can be divided into solely spatial or temporal types is applicable to all diffusion models. For example, can the attention layers in multi-view diffusion models also be accelerated using the proposed method? Other Comments Or Suggestions: I am not familiar with this field. However, I think the proposed training-free method is effective in accelerating the large video diffusion model. Therefore, I tend to give a weak accept. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions and questions. Below, we address each point raised. > Q1: Can the attention layers in multi-view diffusion models also be accelerated? Answer: Recent multi-view diffusion models such as MVDream [1] and CAT3D [2] have introduced 3D attention mechanisms to improve cross-view consistency and spatial coherence. MVDream takes an initial step in this direction but is limited by a relatively short context length (≤ 4k tokens), which diminishes the practical benefits of attention acceleration. CAT3D adopts finer-grained image representations and extends the attention span in 3D. It goes further than MVDream by applying 3D attention over longer sequences and finer feature maps. CAT3D also highlights the critical scalability issue: applying full 3D attention to high-resolution feature maps (e.g., 64×64) leads to prohibitively large sequence lengths—up to 32k tokens—making the approach computationally expensive. As a result, CAT3D restricts full 3D attention to lower-resolution feature maps (32×32 and below), where the overhead remains manageable. This design underscores the computational bottlenecks present in naive 3D attention and presents a clear opportunity for approaches like SVG. Since SVG can leverage the spatial and temporal redundancy, it could potentially enable efficient 3D full attention at larger resolutions, unlocking performance and scalability gains that current methods avoid due to cost. However, CAT3D does not open-source their weights. We are willing to validate the effectiveness of SVG on them when the weights are available. [1] Shi, Yichun, et al. "Mvdream: Multi-view diffusion for 3d generation." arXiv preprint arXiv:2308.16512 (2023). [2] Gao, Ruiqi, et al. "Cat3d: Create anything in 3d with multi-view diffusion models." arXiv preprint arXiv:2405.10314 (2024).
CateKV: On Sequential Consistency for Long-Context LLM Inference Acceleration
Accept (poster)
Summary: This paper proposes CateKV, which improves the inference efficiency by adaptively evicting and retrieving KV cache based on sequential consistency. Claims And Evidence: Please see **Other Strengths And Weaknesses** Methods And Evaluation Criteria: Please see **Other Strengths And Weaknesses** Theoretical Claims: Please see **Other Strengths And Weaknesses** Experimental Designs Or Analyses: Please see **Other Strengths And Weaknesses** Supplementary Material: I have reviewed the supplementary material. Relation To Broader Scientific Literature: Please see **Other Strengths And Weaknesses** Essential References Not Discussed: Please see **Other Strengths And Weaknesses** Other Strengths And Weaknesses: **Strengths**: 1. The paper is easy to follow, with clear writing and presentation. 2. The evaluation results are good and comprehensive. **Weaknesses**: 1. The main concern I have with this paper is the issues of scalability. In the paper, the largest model being evaluated is 9B. It would be better if the authors could provide results on larger models (30B/70B). 2. Some references [1-3] are missing in the related work for KV cache optimization. [1] Keyformer: KV Cache Reduction through Key Tokens Selection for Efficient Generative Inference, MLSys 2024. [2] Q-Hitter: A Better Token Oracle for Efficient LLM Inference via Sparse-Quantized KV Cache, MLSys 2024. [3] ALISA: Accelerating Large Language Model Inference via Sparsity-Aware KV Caching, ISCA 2024. Other Comments Or Suggestions: Please see **Other Strengths And Weaknesses** Questions For Authors: Please see **Other Strengths And Weaknesses** Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your valuable time and constructive feedback. In the following, we provide our responses to each question. > Weaknesses 1: The main concern I have with this paper is the issues of scalability. In the paper, the largest model being evaluated is 9B. It would be better if the authors could provide results on larger models (30B/70B). We have tried our best to expand experiments on three larger models (30B&14B) in the following. Due to limited resources, we are unable to finish experiments on 70B models during this rebuttal period, but promise to include the results of 70B models in the final version. |Methods|Cache|N-S1|N-S2|N-S3|N-MK1|N-MK2|N-MK3|FWE|N-MQ|N-MV|QA-1|QA-2|VT|Avg| |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-| |**Qwen2.5-32B**|100%|100.00|87.50|97.92|70.83|15.63|7.29|90.28|87.24|85.16|51.04|41.67|85.41|68.33| |SnapKV|41%|100.00|88.54|51.04|69.79|12.50|2.08|88.54|76.82|76.82|51.04|41.67|85.83|62.06| |PyramidKV|41%|100.00|87.50|46.88|66.67|8.33|1.04|84.02|66.93|67.71|48.96|41.67|84.79|58.71| |CateKV|41%|100.00|86.46|95.83|71.88|14.58|6.25|89.58|86.88|86.28|50.00|43.75|86.67|68.18| |**Yi-34B-200K**|100%|100.00|100.00|100.00|92.71|70.83|47.92|86.11|97.14|92.45|68.75|47.92|88.05|82.66| |SnapKV|41%|100.00|97.92|80.21|90.62|22.92|17.71|81.25|91.15|72.14|67.71|47.92|86.25|71.32| |PyramidKV|41%|100.00|100.00|68.75|91.67|26.04|12.50|82.99|91.15|79.43|69.79|47.92|86.46|71.39| |CateKV|41%|100.00|100.00|100.00|92.71|73.96|47.92|85.12|97.14|91.15|67.71|46.88|87.08|82.47| |**Phi-4-14B**|100%|100.00|97.92|100.00|100.00|97.92|100.00|98.96|98.96|99.22|80.21|67.71|100.00|95.08| |SnapKV|43%|100.00|100.00|7.29|100.00|93.75|3.13|99.31|97.66|99.22|82.29|66.67|100.00|79.11| |PyramidKV|43%|100.00|100.00|3.13|100.00|94.79|5.21|98.96|98.44|98.44|80.21|67.71|100.00|78.91| |CateKV|43%|100.00|98.96|100.00|97.92|98.96|100.00|99.31|98.44|99.48|78.13|67.71|99.79|94.89| The context length was set based on the maximum length supported by the 
model, with 128k for Qwen2.5-32B, Yi-34B-200K, and 16K for Phi-4-14B. The results in the above table demonstrate that CateKV scales effectively, delivering near-full-attention accuracy for 30B and 14B models, surpassing baseline methods such as SnapKV and PyramidKV. > Weaknesses 2: Some references [1-3] are missing in the related work for KV cache optimization. Thank you very much for recommending these references, and we will carefully incorporate the discussions of these papers in §2.1 (KV Cache Eviction Algorithm), which is initially summarized as follows: Keyformer proposes a score-based KV cache eviction algorithm that selectively retains only 'key' tokens with high attention weights, reducing cache size and memory bandwidth. Q-Hitter introduces a hybrid KV cache eviction criterion combining token importance (Heavy Hitters) and quantization-friendliness, enabling aggressive sparsification and low-bit quantization. ALISA designs a token-prioritization-based KV cache eviction algorithm using Sparse Window Attention (SWA) to dynamically reduce memory footprint, coupled with system-level optimizations for efficient caching-recomputation trade-offs. While these KV cache eviction methods demonstrate computational efficiency, they inherently incur information loss due to the reduction of attention across all heads or the quantization of data precision. In contrast, CateKV preserves the complete KV cache for adaptive heads, maintaining critical contextual information while still achieving efficiency through selective eviction in consistent heads. [1] Keyformer: KV Cache Reduction through Key Tokens Selection for Efficient Generative Inference, MLSys 2024. [2] Q-Hitter: A Better Token Oracle for Efficient LLM Inference via Sparse-Quantized KV Cache, MLSys 2024. [3] ALISA: Accelerating Large Language Model Inference via Sparsity-Aware KV Caching, ISCA 2024. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will increase my score from 2 to 3. 
--- Reply to Comment 1.1.1: Comment: Thank you very much for your support. We will carefully follow the advice to include the analysis and the references into the revision.
Summary: This paper proposes a novel KV cache algorithm that calculates the coefficient-of-variation of attention tokens at each layer. Specifically, it identifies consistent heads and adaptive heads, where most of the KV cache can be reduced from the consistent heads. In experiments, the authors demonstrate the effectiveness of the method on multiple long-context evaluation datasets. The performance is superior to existing KV-cache methods. Claims And Evidence: In Section 3.2, the authors claim "To save the computational cost, we set an observation window that contains the last query tokens of the input to identify head types and critical tokens," which I do not think is fully justified. Methods And Evaluation Criteria: Yes, except one point: 1. Why can the method not be applied to traditional LLM eval datasets? Why does long context matter here? Theoretical Claims: This paper does not include theoretical claims. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Figure 9, mostly observations on sequential consistency. Relation To Broader Scientific Literature: It can be used to accelerate industry LLM serving. Essential References Not Discussed: I am wondering why [1] is neither discussed nor compared in the paper. [1] Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs, ICLR 2024. Other Strengths And Weaknesses: Strengths: 1. The paper is easy to follow, with good demonstrations and examples. Other Comments Or Suggestions: N/A Questions For Authors: 1. Can CateKV be applied to traditional LLM eval tasks? 2. How does the method generalize without a reference dataset? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your valuable time and constructive feedback. In the following, we provide our responses point-by-point. > Claims: About the design of the observation window Here, we provide a more detailed explanation of the design of the observation window: 1. **The observation window helps avoid quadratic memory and computational costs**. In the pre-filling stage, the full attention matrix has size $n \times n$ (where $n$ is the context length). Analyzing the full attention matrix directly increases the computational complexity from $O(n)$ to $O(n^2)$, posing challenges in memory and computation, especially for long inputs. Using the full attention matrix for head classification led to an out-of-memory (OOM) error on a single A100 GPU for inputs longer than 8k tokens. 2. **The attention within the observation window is more representative**. Due to the causal mask, the last query tokens capture global information similar to what is needed during decoding. Including intermediate query tokens may cause erroneous guidance, as these tokens carry only partial information. The table below shows that increasing the observation window size from 64 to 4k with $r = 0.4$ degrades performance. |Methods|N-S1|N-S2|N-S3|N-MK1|N-MK2|N-MK3|FWE|N-MQ|N-MV|QA-1|QA-2|VT|Avg| |-|-|-|-|-|-|-|-|-|-|-|-|-|-| |CateKV ($L_{obs}=64$)|100.00|100.00|100.00|98.96|97.92|41.67|71.88|98.44|96.88|73.96|50.00|85.63|84.61| |CateKV ($L_{obs}=4000$)|100.00|100.00|98.96|97.92|97.92|19.79|66.67|99.21|97.92|70.83|48.96|76.25|81.20| We will enrich our explanation and add the above analysis to the submission to better justify our choice. Thank you very much for pointing out the insufficiency of this claim. > Q1: Applicability of CateKV to traditional LLM eval tasks and the role of long context We follow the research line of LLM inference under long contexts, where memory consumption and computation latency become significant bottlenecks. 
Regarding its practical value, applications such as RAG (retrieval-augmented generation), long-document understanding, and multi-turn interaction easily push LLMs into long-context inference scenarios. In this case, it is critical to accelerate LLM inference to avoid catastrophic response latency (it can be minute-level on an A100 with 80G memory under a 100k-token context). Naturally, the existing long-context acceleration methods, including CateKV, are also applicable to traditional short-context LLM evaluation datasets. We tested the performance of CateKV and baseline methods on MMLU and MMLU-Pro, with all methods retaining approximately 25% of the KV cache and utilizing a one-shot prompting approach. The results in the table below show that CateKV still maintains performance comparable to full attention. It is worth noting that in short-context scenarios, attention to recent tokens is more likely to retain complete contextual information, allowing simpler methods (such as StreamingLLM) to maintain much of the performance. |Methods|MMLU|MMLU-Pro| |-|-|-| |Llama3-8B|59.70|30.71| |StreamingLLM|58.98|30.04| |SnapKV|59.60|30.43| |PyramidKV|59.60|30.43| |CateKV|59.60|30.71| > Essential Reference: [1] Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs Thank you for the recommendation; we will incorporate a discussion of FastGen [1] in Section 2.2. Regarding the comparison, we need to clarify that in our practice, FastGen easily incurs OOM when going beyond 8K tokens. This is mainly because it uses the complete attention matrix during its pre-filling stage. We will add more explanation in the updated version. > Q2: How does the method generalize without a reference dataset? While CateKV uses a reference dataset for static identification, the method proposed in Section 3.2 can in fact operate without a reference dataset, yielding dynamic identification. 
In the following table, we show that the dynamic identification method exhibits performance similar to the reference-based static identification method, with both comparable to or surpassing full attention, highlighting the generalizability of CateKV. | Method | RULER-128K(Llama3) | Longbench(Llama3) | RULER-128K(Llama-3.1) | Longbench(Llama-3.1) | |-|-|-|-|-| | Full | 84.10 | 31.27 | 84.55 | 33.68 | | Dynamic ($r=0.4$) | 84.17 | 31.14 | 84.63 | 33.43 | | Static ($r=0.4$) | 84.61 | 31.48 | 84.66 | 33.70 | | Dynamic ($r=0.3$) | 83.49 | 30.61 | 81.59 | 32.89 | | Static ($r=0.3$) | 83.84 | 31.08 | 82.14 | 33.38 | However, the following drawbacks arise without a reference dataset: - At the pre-filling stage, the extra sample-wise computation needed for head identification increases computational overhead. - The need for a complete observation matrix in the pre-filling stage to assist classification prevents integration with sparsification-based pre-filling acceleration methods. We will include the above analysis in the submission to improve clarity.
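To make the head-identification idea above concrete, here is a toy sketch of how sequential consistency over an observation window could separate consistent heads from adaptive heads. This is our own illustrative proxy on synthetic attention maps, not CateKV's exact CV-based scoring, and `consistency_score` is a hypothetical helper name:

```python
import numpy as np

def consistency_score(attn_window, k):
    """Toy sequential-consistency score for one attention head.

    attn_window: [window, keys] attention weights of the last query
    tokens (the observation window). Each row is binarized to its
    top-k key positions; the score is the mean pairwise cosine
    similarity of those binary rows. A head that keeps selecting the
    same critical tokens scores near 1, while a head whose focus
    shifts per query scores near 0.
    """
    sel = np.zeros_like(attn_window)
    top = np.argsort(attn_window, axis=1)[:, -k:]   # top-k key indices per query
    np.put_along_axis(sel, top, 1.0, axis=1)        # binarized observation matrix
    rows = sel / np.linalg.norm(sel, axis=1, keepdims=True)
    sim = rows @ rows.T                             # pairwise cosine similarities
    w = sim.shape[0]
    return float((sim.sum() - w) / (w * (w - 1)))   # mean off-diagonal entry

rng = np.random.default_rng(0)
window, keys, k = 8, 32, 4
# Consistent head: every query row spikes on the same four keys.
consistent = rng.uniform(0, 0.1, (window, keys))
consistent[:, :4] += 1.0
# Adaptive head: each query row spikes on a different, disjoint set of keys.
adaptive = rng.uniform(0, 0.1, (window, keys))
for i in range(window):
    adaptive[i, 4 * i:4 * i + 4] += 1.0
```

Under this proxy, the consistent head scores near 1 and the adaptive head near 0; the paper's actual classifier uses a coefficient-of-variation criterion, and the reference dataset only makes this identification static instead of per-sample.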
Summary: This paper studies the long-context inference acceleration of LLMs through sequential consistency patterns. By observation of distinguishing activation of attention heads, the authors classified them into consistent heads and adaptive heads, which can be used to promote the decoding acceleration. Different from previous methods that considered the acceleration of prefilling stages or decoding stages independently, the proposed method discovers the sequential consistency estimated at the prefilling stages efficiently help the decoding both in memory and computation speedup. Extensive experiments show the effectiveness of the proposed method. Claims And Evidence: The claims presented in the submission are well-supported by both intuitive understanding and empirical evidence. The authors provided the empirical statistics regarding the sequential consistency in the main part and appendix. The results demonstrated the substantial effectiveness of the proposed methods. Methods And Evaluation Criteria: The authors designed a proper method to distinguish consistent heads and adaptive heads, and on that basis, the efficiency in KV cache reduction and compute speedup has been gained. The evaluation follows the common bench, which covers different dimensions including the context length, different backbones and performance against the acceleration. Theoretical Claims: Not Applicable. Experimental Designs Or Analyses: I have checked experimental parts and their analysis including the complementary results in the appendix. The experimental results are sound and adequately demonstrate the effectiveness of the proposed CateKV. The results demonstrate the consistent acceleration benefit and the gain when combined with the conventional acceleration methods. Supplementary Material: I have checked the details of implementation and additional results as well as the analysis in the appendix. 
Relation To Broader Scientific Literature: LLM inference acceleration is tightly related to many areas of the recent AI community, since it is a trend to integrate LLMs to enhance the performance of many tasks. Since inference is always time-intensive when the input or output of LLMs is very long, exploring the inference acceleration of LLMs is in high demand, and it is critical to many AI applications in the era of foundation models. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths - The proposed method captures essential patterns during the inference of LLMs, where the sparse computation inherent in transformers can be estimated along with the sequential consistency. The design is natural and simple yet effective. - The experiments and evaluations are comprehensive, covering a broad range of LLM inference benchmarks. The experimental results demonstrate the effectiveness of CateKV. Besides, the experiments combined with previous acceleration methods demonstrate the orthogonality of CateKV. - The paper is well-written, clearly structured, and easy to follow. Weakness: - It is unclear how to set the threshold to distinguish consistent heads from adaptive heads. Do we need some search-based strategy to find the best threshold, or some other way? - In Figure 4, it seems that CateKV has a similar trend to Quest, and both are worse than Top-k. What does this mean? Does it mean that the Top-k strategy is actually a good way to find the sparse compute structure? Please clarify this detail. - In Figure 6, why does DuoAttention have such a rapid drop in performance at the ratio of 0.4? Please provide more analysis and explain why CateKV can maintain its effectiveness. Other Comments Or Suggestions: - It would be better to add more discussion about why a soft attention weight matrix was not chosen for designing the method. Intuitively, this could be more precise and flexible. 
The authors could add more discussion on this point. Questions For Authors: - What is the difference between CateKV and DuoAttention? Can you provide a more qualitative comparison between the tokens selected by CateKV and those selected by DuoAttention? Such a result would be more convincing in reflecting the novelty of CateKV. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the supportive comments and the comprehensive evaluation of our method. In the following, we provide our responses point-by-point. > Weakness 1: About setting the best threshold We select the threshold based on a fixed ratio of consistent-to-adaptive heads, as while CV scores vary significantly across inputs, the proportion of heads exhibiting consistent patterns remains remarkably stable within the model architecture. In task-agnostic scenarios, determining the optimal threshold or adaptive head ratio for each sample is challenging. One potential approach is to incorporate attention weight recall as an additional criterion in the head classification process. If the attention weight recall exceeds a threshold (denoted as $t$), these heads can also adopt the consistent pattern. As shown in the table below (LLaMA-3-8B,$r=0.4$,$t=0.98$), this recall-threshold method enables input-adaptive ratio adjustment while maintaining performance (Avg 84.32% vs 84.61% CateKV). However, the marginal gains suggest this approach may not be universally optimal. We will further explore dynamic threshold adjustment methods in future work. |Methods|N-S1|N-S2|N-S3|N-MK1|N-MK2|N-MK3|FWE|N-MQ|N-MV|QA-1|QA-2|VT|Avg| |-|-|-|-|-|-|-|-|-|-|-|-|-|-| |CateKV ($r=0.4$)|100.00|100.00|100.00|98.96|97.92|41.67|71.88|98.44|96.88|73.96|50.00|85.63|84.61| |CateKV+recall threshold|100.00|100.00|100.00|98.96|97.92|42.71|71.88|98.18|96.35|71.87|50.00|83.96|84.32| |Average adaptive head ratio|34.38%|37.13%|37.16%|37.13%|38.67%|36.72%|35.54%|37.23%|37.20%|37.25%|37.89%|32.03%|36.53%| For task-aware scenarios, please refer to our response to Reviewer 3ByN'Weakness 2, where we discuss dynamic ratio selection in task-aware contexts. > Weakness 2: About the meaning of Topk curve. Figure 4 compares different methods' ability to approximate the ground-truth top-k attention weights during decoding. 
The top-k curve represents the upper bound of achievable recall, as it directly uses the actual highest attention weights (which are unknown in advance during real inference). This does not imply top-k is universally "better," but rather that it serves as the theoretical optimum. The key advantage of CateKV/Quest is that they avoid computing full attention, making them practical for real-world deployment. >Weakness 3 and Question: Comparison Between CateKV and DuoAttention 1. Differences between CateKV and DuoAttention: From a conceptual perspective, consistent heads emphasize the overall similarity of attention across different sequences, while Streaming Heads prioritize whether initial and recent tokens cover most of the attention. Thus, in terms of token selection, streaming heads only focus on the initial and recent tokens for attention, whereas consistent heads have a broader search space. To further illustrate this, we randomly selected 100 samples from WikiText and performed inference using LLama3-8B. Our analysis shows that approximately 72% of the tokens selected by consistent heads do not belong to the initial or recent tokens, demonstrating greater flexibility compared to streaming heads. 2. Analysis of the rapid drop: From the perspective of attention recall, the number of streaming heads within a model is inherently limited. As shown in Figure 4, a larger proportion of heads recall less than 0.8 of the attention weights under the streaming pattern. An overreliance on the streaming pattern results in significant information loss, which explains the rapid drop in the performance of DuoAttention when the ratio is below 0.4. In contrast, even with an aggressive increase in the proportion of consistent heads, the consistent pattern effectively preserves critical tokens, including those that capture global information from prior attention. These retained tokens enhance the robustness of CateKV, making it more effective than DuoAttention. 
> Suggestion: Using Soft Attention Weight Matrix for Method Design Utilizing a soft attention weight matrix is an intuitive approach. However, we believe that the values in the soft attention weight matrix reflect the importance of individual tokens, whereas the binarized observation matrix used in CateKV more effectively captures the relative importance of tokens. Since sparse attention methods aim to identify the top-k important positions, sequence similarity should be evaluated by the alignment of their top-k attention indices, which reflects a relative relationship. Additionally, the values of attention weights fluctuate with the query, and attention sinks in some heads complicate the use of a soft attention matrix for classification. We also conducted an experiment where we removed the binarization step from the identification process (a soft version). When $r = 0.5$, on the RULER-128K niah-multikey3 task, the performance dropped by 14.58 points (from 63.54 to 48.96) compared to full attention on LLama-3.1-8B. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' substantial effort in addressing my concerns. Overall, I think the idea is novel that utilizes the prefilling knowedge to promote decoding speedup. After considering the authors' rebuttal to my review and other reviewers' questions, I tend to maintain the acceptance of this submission and hope the authors carefully incorporate the suggestions. --- Reply to Comment 1.1.1: Comment: Thank you very much for your recognition and support of our work. We will carefully consider the reviewers' suggestions and incorporate them in the subsequent version.
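The binarized top-k view argued for in the rebuttal above can be illustrated with a small sketch (hypothetical helper, not the paper's code): two attention rows that rank the same keys highest agree perfectly under index alignment, even when their raw magnitudes differ sharply, e.g. due to an attention sink:

```python
import numpy as np

def topk_alignment(row_a, row_b, k):
    """Similarity of two attention rows measured by overlap of their
    top-k key indices, i.e. the relative (binarized) view: queries
    "agree" when they rank the same keys highest, regardless of the
    raw attention magnitudes."""
    a = set(np.argsort(row_a)[-k:].tolist())
    b = set(np.argsort(row_b)[-k:].tolist())
    return len(a & b) / k

# Two rows that pick the same top keys, but row_b has a sink-like
# spike on key 3 that distorts its magnitudes.
row_a = np.array([0.05, 0.05, 0.40, 0.50])
row_b = np.array([0.01, 0.02, 0.07, 0.90])

binary = topk_alignment(row_a, row_b, k=2)  # 1.0: identical top-2 key sets
soft = float(row_a @ row_b / (np.linalg.norm(row_a) * np.linalg.norm(row_b)))
# The soft cosine is dragged below 1.0 by the magnitude mismatch,
# while the binarized alignment is unaffected.
```

This mirrors the rebuttal's point: since sparse attention only needs the top-k positions, comparing index sets is robust to query-dependent magnitude fluctuations and attention sinks that confuse a soft similarity.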
Summary: CateKV introduces a hybrid KV cache optimization method that improves long-context LLM inference efficiency by leveraging sequential consistency in attention heads. The key insight is that certain attention heads exhibit stable attention patterns across both pre-filling and decoding stages, while others remain highly dynamic. Based on this observation, CateKV: (1) Classifies attention heads into two types: Consistent heads: Retain only a subset of KV pairs based on pre-filling attention patterns. Adaptive heads: Retain most KV pairs to ensure flexible attention computation. (2) Uses a coefficient-of-variation (CV)-based scoring algorithm to differentiate between consistent and adaptive heads. (3) Applies selective KV retention, where consistent heads store a minimal set of critical tokens, while adaptive heads retain most of their KV pairs. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theory in this paper. Experimental Designs Or Analyses: The experiments are well-structured but could be strengthened with: - Latency breakdown. Supplementary Material: Yes. All. Relation To Broader Scientific Literature: - Head-Level KV Compression Methods - Hybrid KV Retention Approaches Essential References Not Discussed: [1] MagicPIG: LSH Sampling for Efficient LLM Generation Other Strengths And Weaknesses: Strengths - Novel Sequential Consistency-Based KV Management. Unlike prior work that heuristically prunes KV caches (SnapKV, StreamingLLM) or dynamically retrieves KV pairs (Quest, ShadowKV), CateKV exploits consistent attention patterns across layers. Weaknesses - Assumption That Pre-Filling Attention Patterns Remain Consistent During Decoding. CateKV relies on the assumption that stable attention patterns persist, but certain adaptive behaviors in retrieval tasks (e.g., dynamically changing focus) may lead to inconsistencies. - Fixed Ratio of Adaptive vs. Consistent Heads. 
The paper fixes the adaptive head ratio (r = 0.4) without dynamically adjusting it based on task or context length, which may be suboptimal in some settings. - No Explicit Theoretical Bound on Eviction Performance. The CV-based classification method is empirically justified but lacks a formal guarantee on its worst-case token retention accuracy. Other Comments Or Suggestions: See Strengths And Weaknesses Questions For Authors: Thank you to the authors for their latest response. Although the underlying assumptions do not fully convince me, I will raise my score to a 3, considering that similar work has been accepted at top conferences. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your valuable time and constructive feedback. Below, we provide our responses point-by-point. > Experiments: Latency breakdown We greatly appreciate the reviewer's constructive suggestion and have followed the advice to present a fine-grained latency analysis ([here](https://anonymous.4open.science/r/CateKV-rebuttal-1A3F/latency_breakdown.jpg)) for your reference. >Missing References Thank you for your recommendation. We will carefully include MagicPIG and the references suggested by other reviewers in the discussions of the submission. >W1: Assumption that ... may lead to inconsistencies. Given a certain long-context input, we conjecture that the functionality of each head can be well characterized, even for heads whose focus may change. Informally, this resembles the functionality of human brain regions, which are categorized by the different roles they play. This pattern stability is well supported by the empirical findings in DuoAttention, our cross-model consistency analysis ([here](https://anonymous.4open.science/r/CateKV-rebuttal-1A3F/cross_model.jpg)), and our cross-task stability analysis ([here](https://anonymous.4open.science/r/CateKV-rebuttal-1A3F/cross_task.jpg)), which we believe is inherently related to the working mechanism of LLMs. Regarding the implication of this phenomenon, we would like to give a plausible explanation: during inference, LLMs rely on both - Heads that dynamically adapt to task-specific information, i.e., adaptive heads. For retrieval tasks with dynamic focus shifts, adaptive heads retaining more KV cache specifically handle such varying information. - Heads that consistently attend to globally important tokens, i.e., consistent heads. The figure in this [link](https://anonymous.4open.science/r/CateKV-rebuttal-1A3F/Similar_focus.jpg) demonstrates that the positions focused on by consistent heads exhibit strong similarity, indicating that some tokens are globally crucial for the model's understanding. 
>W2: Fixed Ratio of Adaptive vs. Consistent Heads. First, the reason we chose a fixed ratio here is that we focus on the task-agnostic scenario that previous works primarily considered. This ensures a fair comparison with the baselines under the same settings. Second, CateKV can easily adapt to task-aware scenarios by setting task-adaptive head ratios. The table in this [link](https://anonymous.4open.science/r/CateKV-rebuttal-1A3F/task_aware.jpg) exhibits different adaptive head ratios for tasks on RULER-128K (LLaMA-3.1-8B), where adjusting the ratio per task allows for lower ratios without sacrificing accuracy. We will add a more detailed discussion in the revised submission. >W3: No Explicit Theoretical ... retention accuracy. To address the concern, we present the following theoretical analysis of CateKV. **Lemma 1:** Let $G$ denote our CV-based function hypothesis, $F$ denote the real-value class defined by the binary cross-entropy loss composite on $G$, and $N$ denote the sample number of the reference dataset. Then, we have the Rademacher complexity bound, $\forall f\in F, P_{\text{head}}\left(\mathbb{E}[f]-\frac{1}{N}\sum_{n=1}^Nf_n \leq 2\mathcal{R}_N(F)+\sqrt{\frac{2\log{\frac{2}{\delta}}}{N}}\right)\geq 1-\delta$, where $\mathcal{R}_N(F)$ is the conditional Rademacher average. Let $P_1$, $P_2$ denote respectively the probabilities of correctly classified consistent heads and adaptive heads, and $\bar{P}\_1$, $\bar{P}\_2$ denote respectively the probabilities of misclassified consistent heads and adaptive heads, where we have $P\_1+P\_2=P\_{\text{head}}$ and $\bar{P}\_1+\bar{P}\_2=1-P\_{\text{head}}$. Then, we can decompose the probability in the above lemma with a fine-grained analysis as follows. **Theorem 1:** Let $\eta_1$, $\eta_2$ denote respectively the retention ratios of consistent heads and adaptive heads, and $\eta_1^*$, $\eta_2^*$ denote their optimal retention ratios correspondingly. 
Define the retention accuracy of the different cases $r_{i,j}=\eta_i^*\mathbb{1}[\eta_j > \eta_i^*]+\eta_j(1-\mathbb{1}[\eta_j > \eta_i^*])$ by comparing the retention budgets with the optimal budgets, and suppose the hypothesis that the query attention score correctly describes token importance in order holds with probability $\lambda$. Then, the token retention accuracy satisfies the following inequality, $P\_{\text{token}} = \lambda(r_{1,1}P\_1 + r\_{2,2}P\_2 + r\_{2,1}\bar{P}\_1 + r\_{1,2}\bar{P}\_2)\geq \lambda \left(\min(r\_{2,1}, r\_{1,2}) + \left[\min(r\_{1,1}, r\_{2,2})-\min(r\_{2,1}, r\_{1,2})\right]P\_{\text{head}}\right)$ In this theorem, three factors, i.e., $\lambda$, the budget control part, and the head identification accuracy $P_{\text{head}}$, critically affect the worst-case token retention accuracy in CateKV, which is also verified in the submission (e.g., Figures 3, 4, 7 and Tables 2, 3). Due to space limitation, we will give more details and remarks on this theorem in the reviewer-author discussion phase. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' thoughtful rebuttal to my questions. While the explanation and references regarding the assumption of consistent heads—that is, heads which consistently attend to globally "important" tokens—are noted, I remain unconvinced by this assumption. Specifically, I am uncertain about the actual "importance" of these tokens to understanding or reasoning over the entire context. Are they really important, or could their prominence be attributed to other underlying factors? Furthermore, in extreme cases, might the importance of these tokens diminish as the generation length increases? Since LLMs are not explicitly trained on such assumptions, it is unclear whether this observation will generalize to future LLMs or if it could be mitigated during pre-training. Because if these tokens are not truly important, or may become less important in future contexts, this assumption could even be detrimental to model performance. 
Overall, I will maintain my current score, though I remain open to further discussion. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer’s valuable feedback. First of all, we would like to append some insights on the theoretical bound, due to the space limitation in the first-round rebuttal. **Remark 1:** From Theorem 1, three critical factors matter w.r.t. the worst-token retention accuracy (i.e., the lower bound): - $\lambda$, whether we can find an effective and efficient measure to characterize the token correlation by score order that is correct with probability as large as possible. - budget control, whether we can set a proper budget that achieves maximal gain by reducing the most tokens when heads are correctly classified while simultaneously weakening the negative effect when heads are misclassified. - $P_{\text{head}}$, whether the CV-based method can classify the head type as accurately as possible during token reduction. These three factors, i.e., $\lambda$, the budget control part, and the head identification accuracy $P_{\text{head}}$, critically affect the worst-token retention accuracy in CateKV, which is also verified in the submission (e.g., Figures 3, 4, 7 and Tables 2, 3). Then, we would like to figure out **whether the reviewer's concern now remains on the rationality of the assumption**, so that we do not miss any other points. In the following, we give a three-fold discussion of the assumption. - **A similar phenomenon has been observed and leveraged in prior works (though differently from CateKV).** Specifically, in *H2O* [1], the authors discovered that a small number of influential tokens, called heavy-hitters (H2), are sustainably crucial during generation, and in the Observation Section of *SnapKV* [2], the authors mention that consistent patterns can be identified prior to generation and remain consistent throughout the process, though these works did not find the head discrepancy in this phenomenon. 
In contrast, *DuoAttention* [3] found some head discrepancy but did not consider token consistency. Prior works thus implicitly support the rationality of our assumption. - **The prominence of these tokens has several underlying factors, but it still matters.** Besides the intrinsic importance of some tokens as supporting facts for generation, other tokens like initial tokens, punctuation marks, delimiters, spaces, and tokens positioned at the beginning or end of a sequence [4] also induce consistent prominence, namely, attention sink. While these tokens may not necessarily hold intrinsic meaning, they are vital for the model to comprehend the overall context [4][5]. In other words, even though some tokens carry little semantic meaning, their function of relieving attention pressure on other, semantically meaningful tokens matters for LLM inference. - **Biologically plausible generalization of the consistent head assumption for future LLMs.** In long-context scenarios, the necessity of these globally important tokens becomes evident. Similar to the human brain, which selectively focuses on key pieces of information (such as chapter titles or conclusions in a narrative) to recall and maintain context, LLMs focus on crucial tokens that allow them to better understand and recall contextual information. This selective attention mechanism helps the model efficiently process extensive inputs and maintain performance in long-context scenarios. Thus, we believe that the head assumption observed in such contexts is a form of sparsity entangled with LLM pre-training, which is reasonable, necessary, and generalizable to future LLMs. In conclusion, we maintain that the assumption of consistent attention to certain tokens and heads, especially in long-context tasks, is a valid and beneficial strategy. This assumption is supported by both previous studies and our empirical observations. 
We do appreciate the reviewer's comments, even where they challenge our assumption; such debate is healthy for research, and we would like to hear further thoughts on the points above. PS. If the reviewer has further comments but cannot add a follow-up comment, directly editing the previous comment is also possible; we will keep checking for updates and interacting with the reviewer. Thank you very much. [1] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models [2] SnapKV: LLM Knows What You are Looking for Before Generation [3] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads [4] Active-Dormant Attention Heads: Mechanistically Demystifying Extreme-Token Phenomena in LLMs [5] Efficient Streaming Language Models with Attention Sinks
GMAIL: Generative Modality Alignment for generated Image Learning
Accept (spotlight poster)
Summary: The paper introduces GMAIL (Generative Modality Alignment for generated Image Learning). It is a framework for incorporating generated images into training pipelines while explicitly addressing the modality gap between real and generated images. Instead of treating synthetic and real images interchangeably, GMAIL fine-tunes a separate model for generated images and aligns it with a pre-trained model for real images. This alignment occurs in a shared latent space, enabling vision-language models (VLMs) like CLIP, LLaVA, and LLaMA3 to effectively utilize generated images for tasks such as image captioning, zero-shot image retrieval, and zero-shot image classification. Claims And Evidence: The primary claim is that treating generated images as a separate modality and aligning them with real images improves performance across various vision-language tasks. The evidence is compelling: 1. In Tables 1-5, consistent improvements across all evaluated metrics for multiple tasks. 2. In Table 6, ablation studies confirm the importance of the alignment component. 3. In Table 7, positive scaling trends show continued improvement with larger synthetic datasets. 4. In Table 13, quantitative similarity metrics show better alignment of real and generated image embeddings with their approach. 5. Figure 3 shows qualitative visualizations demonstrating the modality gap and how alignment bridges it. Methods And Evaluation Criteria: The methodology is well-explained and justifiable: 1. They utilize Stable Diffusion v2 to generate synthetic images from captions in existing datasets 2. A dual-encoder architecture separates generated and real image processing 3. Low-Rank Adaptation (LoRA) is used to fine-tune efficiently 4. Cross-modality alignment loss encourages similar embeddings for real and generated images with the same description The evaluation is comprehensive, covering: 1. Image captioning using standard metrics (BLEU, METEOR, CIDEr, SPICE, etc.) 2. 
Zero-shot image retrieval on COCO and Flickr30k 3. Zero-shot image classification across eight datasets 4. Long caption retrieval on ShareGPT4V 5. Visual question answering on ScienceQA Theoretical Claims: The theoretical foundation is sound: 1. Identifying a fundamental issue (modality gap between real and generated images) 2. Providing a clear theoretical explanation of why this gap leads to problems 3. Presenting a theoretical framework to address it The explanation of the modality gap and its effects is well-developed, particularly in Appendix C where they contrast single vs. dual modality approaches and explain why their cross-modality alignment is crucial. Experimental Designs Or Analyses: The experimental design is thorough and well-executed: 1. Multiple datasets of varying scales are used 2. The method is evaluated on diverse tasks to demonstrate generalizability 3. Appropriate baseline comparisons are included 4. The ablation studies isolate the contributions of different components Supplementary Material: The supplementary material is extensive and valuable, for example: 1. A detailed algorithm description provides clarity on implementation. 2. In Figures 4-9, additional visualizations of real and generated images. 3. Table 12 shows comprehensive ablation studies on LoRA rank and full fine-tuning. 4. In Table 9, comparisons with additional approaches (SigLIP). Relation To Broader Scientific Literature: The authors position their work well within the broader literature on diffusion models for image generation, generated image learning, and vision-language models. They appropriately cite related work in each area and clearly articulate how GMAIL advances beyond previous approaches, particularly in addressing the modality gap problem. Essential References Not Discussed: The paper is well-referenced, but a few additional references could strengthen it: 1. 
Work on domain adaptation and domain gap in computer vision, as this bears similarity to the modality gap problem 2. More recent work on synthetic data generation for computer vision tasks beyond those cited Other Strengths And Weaknesses: Additional Strengths 1. The GMAIL framework addresses a real-world problem that will become increasingly important as generated images become more prevalent in machine learning pipelines. 2. The approach demonstrates effectiveness across different backbone architectures (CLIP, Long-CLIP, LLaVA, LLaMA-3), suggesting it's a general solution rather than one tied to a specific model architecture. Additional Weaknesses 1. The experiments rely exclusively on Stable Diffusion v2. Testing with other generative models (e.g., Stable Diffusion 3, FLUX) would better demonstrate the generality of the approach, as different generators may introduce different types of artifacts. 2. There's no exploration of whether the alignment approach makes models more or less robust to adversarial examples or out-of-distribution inputs, which would be valuable for understanding practical limitations. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
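The Low-Rank Adaptation (LoRA) fine-tuning mentioned in the methods summary can be sketched minimally as follows; the shapes and the `scale` parameter here are illustrative assumptions, not GMAIL's actual configuration (its rank settings are the subject of the Table 12 ablations):

```python
import numpy as np

def lora_forward(x, W0, A, B, scale=1.0):
    """Linear layer with a LoRA-style low-rank update (illustrative sketch).

    W0: frozen base weight, shape (d_out, d_in)
    A:  trainable down-projection, shape (r, d_in)
    B:  trainable up-projection, shape (d_out, r), typically zero-initialized
    """
    # Effective weight is W0 + scale * (B @ A); only A and B are trained,
    # so the base model stays untouched and adaptation is parameter-efficient.
    return x @ (W0 + scale * (B @ A)).T
```

With `B` zero-initialized, the adapted layer initially matches the frozen base exactly, so training only moves the generated-image encoder away from the pre-trained weights as far as the low-rank update allows.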
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's feedback and the questions raised. Below, we address and resolve each of these questions in our responses. > Domain Adaptation Literature Our approach is conceptually related to domain adaptation techniques [a,b], where the goal is to align feature spaces between source and target domains. However, GMAIL focuses on a unique modality gap between synthetic and real images, both from the same caption domain, requiring a different solution strategy. While traditional DA aligns features across different datasets, our method aligns representations across modalities (real vs. generated) within the same semantic context, using caption anchors. [a] Tzeng et al., Adversarial Discriminative Domain Adaptation, CVPR 2017 [b] Ganin & Lempitsky, Unsupervised Domain Adaptation by Backpropagation, ICML 2015 > References to Data Synthesis for Vision Tasks Thank you for the suggestion! We’ve now added and discussed all three works in Related Work: - DatasetGAN (CVPR’21) [1]: Efficient labeled data generation via GANs and semantic segmentation. - SegGen (ECCV’24) [2]: Mask2Img diffusion pipelines for boosting segmentation. - DiffuMask (ICCV’23) [3]: Pixel-level annotations via text-to-mask-to-image synthesis. We emphasize that GMAIL is complementary to these efforts: - They focus on generating high-quality labeled datasets. - We focus on bridging modality gaps when using synthetic data for training vision-language models. [1] DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort. CVPR 2021 [2] SegGen: Supercharging Segmentation Models with Text2Mask and Mask2Img Synthesis. ECCV 2024 [3] DiffuMask: Synthesizing Images with Pixel-level Annotations for Semantic Segmentation Using Diffusion Models. ICCV 2023 > Generator Generality We have now conducted experiments using FLUX, which introduces a more powerful and differently parameterized generation pipeline compared to SD2. 
These additional results allow us to test GMAIL's robustness to shifts in generator-specific artifacts. The performance improvements remain consistent with FLUX, indicating robust alignment across varying artifact styles and photorealism levels. | Generator | B@4 ↑ | CIDEr ↑ | SPICE ↑ | |:----------------:|:---------------:|:----------------:|:---------------:| | FLUX | 37.2 | 117.82 | 23.4 | | FLUX + GMAIL (ours) | **39.54** | **122.36** | **24.15** | > Robustness to Adversarial or Out-of-Distribution (OOD) Inputs Thank you for the insightful observation. To evaluate the robustness of our alignment-based GMAIL framework, we conducted two evaluations. 1) We evaluate GMAIL and baselines on ImageNet-A and ImageNet-R — well-established OOD benchmarks that test model resilience under distribution shifts. Naively training with generated images slightly reduces robustness, but GMAIL improves OOD generalization by aligning synthetic features more closely with real ones. | Model | ImageNet-A ↑ | ImageNet-R ↑ | |:----------------:|:---------------:|:----------------:| | CLIP | 51.2 | 66.7 | | CLIP + Gen. Images | 49.8 | 67.3 | | GMAIL (ours) | **53.6** | **69.1** | 2) We tested models using AutoAttack on zero-shot classification over ImageNet-1k (100-class subset). GMAIL improves clean and adversarial robustness, suggesting that alignment avoids overfitting to synthetic artifacts, a key cause of fragility in other generative training schemes. | Model | Clean Accuracy ↑ | Robust Accuracy ↑ | |:----------------:|:---------------:|:----------------:| | CLIP | 75.6 | 43.7 | | CLIP + Gen. Images | 77.0 | 41.2 | | GMAIL (ours) | **78.1** | **46.3** |
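For reference, the zero-shot classification protocol underlying these robustness numbers follows the standard CLIP recipe: each class name is embedded via the text encoder, and the predicted class is the one whose text embedding has the highest cosine similarity with the image embedding. A minimal sketch with placeholder embeddings (not actual model outputs):

```python
import numpy as np

def zero_shot_classify(img_emb, class_text_embs):
    """Return the index of the class whose text embedding is most
    cosine-similar to the image embedding (standard CLIP-style protocol;
    the embeddings here are placeholders, not real encoder outputs)."""
    img = img_emb / np.linalg.norm(img_emb)
    txt = class_text_embs / np.linalg.norm(class_text_embs, axis=1,
                                           keepdims=True)
    return int(np.argmax(txt @ img))
```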
Summary: This paper introduces GMAIL, a novel framework designed to bridge the modality gap between generated and real images, a common issue that can cause mode collapse in training pipelines. The approach treats generated images as a distinct modality and aligns them with real images in the same latent space. By fine-tuning models on generated images while preserving a pre-trained model for real images, GMAIL achieves explicit alignment between the two modalities. This method results in significant performance gains across various vision-language tasks. Claims And Evidence: Yes. The claims made in the submission are fully supported. Methods And Evaluation Criteria: Overall, the proposed methods and evaluation criteria are very reasonable. Theoretical Claims: The theoretical claims are correct. Experimental Designs Or Analyses: There are a few questions I still have regarding the experiments: 1. Training on real images in the supplementary materials – The section mentions a comparison of COCO captioning performance using three strategies: real-only, mixed real-generated data, and GMAIL alignment. However, the corresponding table does not clearly indicate which settings correspond to real-only and mixed real-generated data. This is especially confusing for the only real data fine-tuning setting, as the entire paper discusses aligning real and generated data. If only real data is used, what exactly is being aligned? Moreover, why does this setting still show performance improvement? Besides, given that prior works (just for example, Tasks2Sim, From Fake to Real) have already demonstrated the effectiveness of mixed training, this paper should further experiment and discuss its impact on mix-pretrained models. 2. Embedding visualization – Typically, I would not consider real and generated images as different modalities. The visualization only shows the existence of a gap between them but does not prove that the gap is as significant as that between images and text. 
To substantiate the claim that generated data should be treated as a separate modality, the comparison should include embeddings from traditionally different modalities, such as images and text. Supplementary Material: I reviewed all the supplementary materials. Relation To Broader Scientific Literature: There is no relationship to broader scientific literature. Essential References Not Discussed: No. Other Strengths And Weaknesses: Overall, although mixing generated data and real data for training may not score highly in terms of novelty, this paper presents a well-structured and clearly articulated approach when specifically focusing on fine-tuning pre-trained models with generated images. The claims are clear, the methodology is well-explained, and the experiments are thorough. Additionally, the supplementary material provides ample and effective supporting experiments to further validate the proposed method. However, the experimental section has certain weaknesses. Please refer to the corresponding section for details. Other Comments Or Suggestions: Typo: line 84: "tje" should be "the". When mentioning "mode collapse", the authors cite "A Path Towards Autonomous Machine Intelligence, version 0.9.2", where I cannot find this phrase. Questions For Authors: No further questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
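The embedding comparison requested above boils down to averaging pairwise cosine similarities between matched embedding sets; a minimal sketch with placeholder arrays (not the paper's actual CLIP features):

```python
import numpy as np

def mean_pairwise_cosine(a, b):
    """Mean cosine similarity over paired rows a[i], b[i].

    a, b: (n, d) arrays of embeddings for the same n captions, e.g. real
    images vs. generated images vs. text (placeholder data only).
    """
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float((a * b).sum(axis=1).mean())
```

Comparing the Real-vs-Gen value against Real-vs-Text and Gen-vs-Text with such a measure is one way to judge whether the real/generated gap approaches the size of a genuine cross-modality gap.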
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's feedback and the questions raised. Below, we address and resolve each of these questions in our responses. > Clarification Thank you for catching this ambiguity. We've clarified both the notation and purpose in the Table below. | Training Data | Alignment | B@4 ↑ | CIDEr ↑ | SPICE ↑ | |:----------------:|:---------------:|:----------------:|:---------------:|:----------------:| | Real only | ✗ | 32.15 | 108.35 | 20.12 | | Mix | ✗ | 36.15 | 115.35 | 22.95 | | GMAIL (ours) | ✔ | 38.12 | 119.53 | 23.75 | - Real only: Standard CLIP fine-tuned solely on real images (baseline). - Mix: Real + generated data combined, no alignment applied. - GMAIL: Separate encoders, aligned via our method. Why does “real-only” improve? Because fine-tuning ClipCap on real data improves over frozen pre-trained CLIP. However, it underperforms compared to models using generated data, especially GMAIL, which preserves real performance while benefiting from synthetic expansion. > Comparison to Tasks2Sim Great point. We have added experiments comparing Tasks2Sim in the Table below. GMAIL outperforms prior mix-only methods due to explicit modality disentanglement and alignment, rather than blending synthetic and real data blindly. | Method | B@4 ↑ | CIDEr ↑ | SPICE ↑ | |:----------------:|:---------------:|:----------------:|:---------------:| | Tasks2Sim | 36.65 | 115.72 | 23.02 | | GMAIL (ours) | **38.12** | **119.53** | **23.75** | > Embedding Comparisons Thank you for the suggestion! We calculated the average Cosine Similarity Between Modalities including Real, Gen, and Text in the Table below. Real vs. Gen gap is smaller than image-text, but still substantial, justifying separate modality treatment. Although the cosine similarity scores for Real vs. Text (0.44) and Gen vs. Text (0.42) appear numerically close, both are substantially higher than the similarity between Real vs. Gen (0.25). 
This discrepancy shows a significant gap between real and generated images, indicating that each type of image is more semantically aligned with textual descriptions than they are with each other. | Pair | Cosine Similarity ↑ | |:----------------:|:---------------:| | Real vs. Gen | 0.25 | | Real vs. Text | 0.44 | | Gen vs. Text | 0.42 | > Citation of LeCun (2022) Thank you for pointing this out. We originally referred to the broader concept of representation collapse (often called "mode collapse") in the context of over-optimization, as discussed in LeCun's vision report starting from Section 4.3 ("Training Energy-Based Models," p. 20 onwards). Specifically, LeCun describes how certain training methods risk collapsing learned representations, thereby reducing their generalization capacity. > Typo We fixed it. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. It addresses most of my concerns. I will raise my rating to 'accept.'
Summary: This paper proposes a method to fine-tune a CLIP image encoder on synthetic image samples so that it can be used in training vision-language models with generated data. The "Gen-CLIP Flow" learns two CLIP image encoders, one for real images and the other for synthetic images. The "Alignment with Vision-Language Models" uses the learned CLIP image encoders above to extract features from real and synthetic images separately, which are later fed into the Vision-Language models. The proposed training method shows performance improvement on zero-shot image retrieval on COCO and Flickr30k, zero-shot image classification, and long caption retrieval. Claims And Evidence: Although it is good that we see improvement on many visual-language tasks, the effectiveness of the synthetic data training is not well grounded as the experimental settings are not reasonable. As shown in Table 8, the authors use more training steps for synthetic training than for the baseline, which means the model received more training signal than the baseline did. I suggest the authors use the same number of training steps for all experiments for a fair comparison. Methods And Evaluation Criteria: - Experiments on VLM benchmarks (e.g. MMMU) are missing, which would be more convincing for illustrating the influence of the visual-language alignment training. Theoretical Claims: No theory is involved. Experimental Designs Or Analyses: The experimental settings are not reasonable. - As shown in Table 8, the authors use more training steps for synthetic training than for the baseline, which means the model received more training signal than the baseline did. I suggest the authors use the same number of training steps for all experiments for a fair comparison. Supplementary Material: I checked the experimental setting and algorithm in the supplementary materials. 
Relation To Broader Scientific Literature: This paper is related to many works using data synthesis methods to boost model performance, especially in the field of computer vision and vision-language alignment. Essential References Not Discussed: The authors should discuss many more data synthesis methods for visual understanding, for example, DatasetGAN [1], SegGen [2], DiffuMask [3]. [1] DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort. CVPR 2021 [2] SegGen: Supercharging Segmentation Models with Text2Mask and Mask2Img Synthesis. ECCV 2024 [3] DiffuMask: Synthesizing Images with Pixel-level Annotations for Semantic Segmentation Using Diffusion Models. ICCV 2023 Other Strengths And Weaknesses: Other Strength: - The idea of utilizing synthesis to help understanding is elegant. Other Weakness: - The writing of this paper needs significant improvement. I find myself confused while reading the method part although the method is relatively simple. For example, the description of Gen-CLIP Flow is not clear enough. I suggest the authors add the necessary formulas or a diagram to aid understanding. Also, there are many typos. - Algorithm 1 is also confusing. Again, I suggest more formal and accurate statements in the paper instead of using words like "the aligned representation from f_g". Other Comments Or Suggestions: - In lines 188-190, "the model fine-tuned on generated images in the Gen-CLIP flow is deployed to process real images without further fine-tuning." Could the authors explain this statement more clearly? I suggest using the necessary formulas and notation. Questions For Authors: Please check the comments above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's feedback and the questions raised. Below, we address and resolve each of these questions in our responses. > Training Steps Thank you for this observation. We have conducted additional experiments with equal training steps across all compared methods and included the results in the Table below. Even with identical training steps, GMAIL consistently outperforms both baselines, confirming that gains stem from our alignment design, not from additional training. | Method | B@4 ↑ | CIDEr ↑ | SPICE ↑ | |:----------------:|:---------------:|:----------------:|:---------------:| | CLIP (real only) | 32.15 | 108.35 | 20.12 | | CLIP (gen only) | 35.76 | 113.42 | 22.63 | | GMAIL (ours) | **37.92** | **117.6** | **23.42** | > Experiments on MMMU Thank you! We included results on MMMU, which is a benchmark that requires multimodal reasoning and visual-language alignment. GMAIL improves visual grounding and alignment with LLMs, demonstrating generalization to reasoning-based VLM tasks. | Model | Test Overall (%) | |:----------------:|:---------------:| | LLaVA | 44.7 | | LLaVA + GMAIL | **48.3** | > Method Description We apologize for the confusion. We’ve made the following changes to improve clarity: Figure 1 is now redesigned and clarified with modality-specific flows and alignment stages. We added formal notation in Section 3.1 (Preliminaries) and Section 3.2 (Gen-CLIP Flow). We also introduced the key equation for alignment and clarified dual-encoder roles. > Algorithm 1 We’ve rewritten Algorithm 1 in Appendix B with clearer, formal steps and precise mathematical notation. We replaced vague language like “aligned representation​” with formal notation, and split training into two distinct phases: Gen-CLIP Flow with LoRA adaptation and Inference/Transfer to Vision-Language Models (CLIPCap, LLaVA, etc.) > Sentence in Line 188-190 We use a dual projection structure. 
During inference, real images are encoded via the original encoder $f_r$ (not $f_g$). This avoids degradation from overfitting to synthetic artifacts in $f_g$. The aligned model still benefits from synthetic training via shared projection space. > References to Data Synthesis for Vision Tasks Thank you for the suggestion! We’ve now added and discussed all three works in Related Work: - DatasetGAN (CVPR’21): Efficient labeled data generation via GANs and semantic segmentation. - SegGen (ECCV’24): Mask2Img diffusion pipelines for boosting segmentation. - DiffuMask (ICCV’23): Pixel-level annotations via text-to-mask-to-image synthesis. We emphasize that GMAIL is complementary to these efforts: - They focus on generating high-quality labeled datasets. - We focus on bridging modality gaps when using synthetic data to train vision-language models. > Typos We carefully proofread the entire paper and corrected all identified typos. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. The added experiments further prove the effectiveness of the proposal. I am happy to see the paper accepted if they have added the additional discussion to the paper.
Summary: The authors propose a framework to train multimodal LLMs on generated images. This work recognizes that generated images have a different distribution than real images, and that they should therefore, in those settings, be recognized as a different modality altogether to be useful for training. The authors adopt the code base of the open source multimodal LLM LLaVA and perform multiple experiments and evaluations, demonstrating that ingesting generated images as a separate modality improves performance on multiple multimodal tasks, including image captioning, zero-shot image retrieval, zero-shot classification and caption retrieval. Claims And Evidence: The authors claim that treating generated images as a separate modality from non-generated images brings multiple benefits when training multimodal LLMs. The alternative -- considering both generated and real images as the same modality -- risks misalignment between the image space and the decoder text space, and model collapse in general, since generated images differ in nuanced but significant ways. To support that claim the authors conducted multiple experiments and evaluations. Methods And Evaluation Criteria: - Generated images are produced with diffusion models, and then used to train a CLIP based encoder. The authors introduce a cross-modality loss to ensure that generated images share the same representation space with the real images, while keeping their respective characteristics. - The authors use a shared caption as an "anchor" between the two spaces, ensuring that generated and real images linked by that anchor are close, i.e., have a high cosine similarity, in the shared space. - The rest of the work is fairly standard, with the use of captioning evaluation metrics (BLEU, ROUGE, etc.) on common datasets (COCO, etc.) and standard evaluation practices for image/caption retrieval. 
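The caption-anchored alignment described above can be written, in its simplest form, as a loss that pushes the cosine similarity of each real/generated embedding pair toward one. This is an illustrative sketch; the paper's exact objective may differ:

```python
import numpy as np

def caption_anchored_alignment_loss(real_emb, gen_emb):
    """Average of 1 - cos(real_emb[i], gen_emb[i]) over pairs, where both
    embeddings in a pair come from the same caption anchor.
    (Illustrative form only, not necessarily GMAIL's exact objective.)
    """
    r = real_emb / np.linalg.norm(real_emb, axis=1, keepdims=True)
    g = gen_emb / np.linalg.norm(gen_emb, axis=1, keepdims=True)
    cos = (r * g).sum(axis=1)
    # Zero when every caption-anchored pair is perfectly aligned.
    return float((1.0 - cos).mean())
```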
Theoretical Claims: - See 'Claims and Evidence' Experimental Designs Or Analyses: - The experimental settings are correct, and the evaluation is done thoroughly. One could have appreciated, however, clear evidence of the risk of 'model collapse' when training on generated images, which is used as justification for this work. Instead, as shown in Table 6, models trained on generated images while treating them as an external modality are more performant, but conversely, models that don't follow that strategy seem to perform only slightly worse -- i.e., there is no evidence of such "model collapse". Supplementary Material: - The appendix provides clear details on the datasets and implementation details. Relation To Broader Scientific Literature: - The authors articulate their contributions relative to other works in the literature. Essential References Not Discussed: - Other Strengths And Weaknesses: - The overall paper is well written, and quite clear. The authors designed their experiments to respond to the claims made in the abstract/introduction, and indeed show improvements with their method. One could have hoped for a better analysis of the 'model collapse' observed when training on generated images without their method, which is used as justification. Other Comments Or Suggestions: - L084: "tje" -> "the" - Figure 1: Use a serif font like the body of text, and make it vectorized (\includegraphics{yourfigure.pdf}) so it works with accessibility readers. - Minor comment: The use of 'GMAIL' for the title of this work is a bit confusing, as it obviously sounds like Google's famous mail service. Is that intended? One could have preferred Gen-Real alignment, or something that overlaps less with other companies/services. Questions For Authors: Dear authors, thank you for your work: - Can you give more details about the model collapse when training on generated images without the cross-modality loss? Is that something you have witnessed first hand? Can you share results on that point, please?
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's feedback and the questions raised. Below, we address and resolve each of these questions in our responses. > Model Collapse We agree. To clarify the severity and nature of the issue, we added a direct comparison between models trained on: real images only (baseline), generated images without alignment, and generated images with our GMAIL alignment. Training directly on generated images without alignment reduces generalization to real-world test data. GMAIL mitigates this through modality-aware learning.

| Training Modality | Align | B@4 ↑ | CIDEr ↑ | SPICE ↑ |
|:----------------:|:---------------:|:------------------:|:---------------:|:------------------:|
| Real only | ✗ | 32.15 | 108.35 | 20.12 |
| Generated only | ✗ | 36.15 | 115.35 | 22.95 |
| Generated + GMAIL | ✔ | **38.12** | **119.53** | **23.75** |

Without alignment, performance improves on synthetic data but fails to transfer well to real data. GMAIL avoids overfitting to synthetic artifacts by learning aligned representations. > More Analysis Thank you for the suggestion. We now use the term “model collapse” to better reflect what we observe: a divergence from real-image generalization, rather than complete failure. We also show latent-space shifts. Without GMAIL, embeddings for the same image-caption pair differ significantly across real and synthetic inputs.

| Alignment | Avg. Cosine Similarity ↑ |
|:----------------:|:---------------:|
| ✗ (no GMAIL) | 0.25 |
| ✔ (GMAIL) | 0.89 |

> Embedding Space Collapse Without Alignment To complement the cosine similarity, we include a t-SNE visualization (Figure 3, Appendix), which shows that real vs. generated images cluster apart without GMAIL but overlap with GMAIL, supporting the hypothesis of latent-space misalignment and collapse risk.
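The average-cosine-similarity alignment metric reported in the rebuttal above can be sketched as follows. This is only an illustration of the metric over caption-anchored (real, generated) embedding pairs; the embeddings and pairing shown here are hypothetical placeholders, not the authors' actual pipeline.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def avg_alignment(real_embs, gen_embs):
    """Mean cosine similarity over caption-anchored (real, generated) pairs."""
    sims = [cosine(r, g) for r, g in zip(real_embs, gen_embs)]
    return sum(sims) / len(sims)

# Identical pairs give perfect alignment (1.0); orthogonal pairs give 0.0.
print(avg_alignment([[1.0, 0.0], [0.0, 1.0]],
                    [[1.0, 0.0], [0.0, 1.0]]))  # -> 1.0
```

In this framing, the reported jump from 0.25 (no GMAIL) to 0.89 (GMAIL) is simply this average computed over paired real/generated embeddings.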
> Scalability and Training Stability with GMAIL GMAIL introduces modest memory overhead but significantly improves convergence and stability, especially in large-scale training. We empirically found that GMAIL improves convergence speed and representation robustness with fewer training steps.

| Method | Synthetic Data | Steps ↓ |
|:----------------:|:---------------:|:---------------:|
| CLIP (gen only) | ✔ | 70k |
| GMAIL (ours) | ✔ | 50k |

> Effectiveness on Downstream Tasks We included experiments on alignment improvements to SigLIP models (Table 9) and visual QA on ScienceQA (Table 10) in the appendix. We also include results on the MMMU benchmark, which requires multimodal reasoning and vision-language alignment, in the table below. GMAIL improves visual grounding and alignment with LLMs, demonstrating generalization to reasoning-based VLM tasks.

| Model | Test Overall (%) |
|:----------------:|:---------------:|
| LLaVA | 44.7 |
| LLaVA + GMAIL (ours) | **48.3** |

> Naming This acronym was chosen for memorability, but we understand the concern. We’ve added a footnote clarifying that there is no relation to Google. > Font and Typos We fixed the typo in Line 84, and Figure 1 now uses CMU Serif, is fully vectorized (PDF), and screen-reader accessible.
Optimal Information Retention for Time-Series Explanations
Accept (poster)
Summary: This paper proposes the Optimal Information Retention Principle to improve explanations of deep models for time-series data by minimizing redundancy and maximizing completeness using conditional mutual information. The authors develop ORTE, a framework that learns a binary mask to filter irrelevant patterns while preserving key temporal features. ORTE leverages contrastive learning for precise filtering and ensures stable optimization. Experiments demonstrate superior accuracy and completeness over existing methods. Claims And Evidence: 1. This paper employs the Optimal Information Retention Principle to guide the identification of explanatory temporal patterns within time series, including Semantic Information Retention, Minimum Redundant Information Retention, and Maximum Effective Information Retention. In Section 2, three criteria are used to support these claims. 2. The experiments and framework validate the contribution claims made in the introduction. Methods And Evaluation Criteria: 1. The method is well-organized and clearly presented, making it easy for readers to follow its main idea. In Figure 2, the Optimal Information Retention Principle is aligned with three components. 2. This paper follows TimeX's evaluation protocol, using four synthetic datasets and four real-world datasets as benchmarks. It employs Area Under Precision (AUP), Area Under Recall (AUR), and Area Under the Precision-Recall Curve (AUPRC) as evaluation metrics. There are no issues with this section. Theoretical Claims: This paper has three criteria: Semantic Information Retention, Minimum Redundant Information Retention, and Maximum Effective Information Retention. In the appendix, the authors provide proofs of the equivalence between these criteria and the implementation loss functions. The proof of Theorem 1 appears to be well-structured, but this reviewer is uncertain about the proof of Theorem 2, particularly the relationship between Equations 32 and 33. 
Experimental Designs Or Analyses: The experimental protocols are well-designed, incorporating quantitative analysis, visualization, and ablation studies. The results and analyses are consistent and well-aligned. Supplementary Material: The appendix is reviewed, including Sections B, E, and G. Relation To Broader Scientific Literature: This paper proposes the Optimal Information Retention Principle to guide the design of explainable methods in the time series domain and introduces the ORTE framework. Experimental results demonstrate the effectiveness of this approach. Essential References Not Discussed: There are no other reference papers from this reviewer. Other Strengths And Weaknesses: Strengths: 1. This paper introduces a new framework, ORTE, for explainable time series models based on three criteria. The authors validate the effectiveness of these criteria through both theoretical and empirical analyses. 2. The paper is well-organized. Weaknesses: 1. In Figure 1, the description is not precise. Since the overlapping area represents mutual information, the green area should correspond to $H(X)$. 2. The implementation of Criterion 2.3 is unclear. In line 167, the assumption states: "Assuming $\hat{X}$ has no overlap with the portions of H(X | Y )". However, the statement, "For the second term, since X and $\hat{X}$ share effective information in $X_m$ , they can naturally regarded as the anchor samples and the positive samples, respectively, in contrastive learning." contradicts this assumption. Other Comments Or Suggestions: Please find the comments above. Questions For Authors: Please find the comments above. Ethical Review Concerns: There are no ethical review concerns in this paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Comment:** We extend our sincere appreciation to Reviewer Szo4 for providing valuable feedback and acknowledgment of our research. - **Theoretical Claims:** This paper has three criteria: Semantic Information Retention, Minimum Redundant Information Retention, and Maximum Effective Information Retention. In the appendix, the authors provide proofs of the equivalence between these criteria and the implementation loss functions. The proof of Theorem 1 appears to be well-structured, but this reviewer is uncertain about the proof of Theorem 2, particularly the relationship between Equations 32 and 33. **Reply:** Equations 32 to 33 describe how to derive the positive and negative samples and the corresponding loss function in contrastive learning. Specifically, we learn a mask matrix $M$ that can separate the effective features, and mask $X$ to get $X^m$. However, $X^m$ cannot be directly used as a positive sample, as this would introduce too many zero values and cause an out-of-distribution (OOD) issue. To address this, we use an MLP to map $X^m$ to $\widehat{X}$. Conversely, the inverse mask filters out redundant features, which can be used as negative samples that are not relevant to the task. The positive sample $\widehat{X}$ and the negative sample $\widehat{X}^-$ thus constructed can be used for the contrastive loss, namely, Equation 33. - **Other Strengths And Weaknesses:** - In Figure 1, the description is not precise. Since the overlapping area represents mutual information, the green area should correspond to $H(X)$. **Reply:** Thanks for your advice. We will fix this description in an updated version. - The implementation of Criterion 2.3 is unclear. In line 167, the assumption states: "Assuming $\widehat{X}$ has no overlap with the portions of $H(X | Y )$".
However, the statement, "For the second term, since X and $\widehat{X}$ share effective information in $X_m$ , they can naturally regarded as the anchor samples and the positive samples, respectively, in contrastive learning." contradicts this assumption. **Reply:** $\widehat{X}$ can be considered the product of mapping $X^m$ through the MLP, while $X^m$ is the valid feature retained after occlusion. Ideally, $H(X^m)$ should be consistent with $I(X; Y)$. When the information of $Y$ is excluded, the remaining redundant information of $X$ can be considered irrelevant to $\widehat{X}$, hence "no overlap with the portions of $H(X | Y)$". Furthermore, $\widehat{X}$ and $X^m$ are informationally consistent with respect to the label $Y$ that $X$ is oriented to, which is also the motivation for constructing positive samples, so $\widehat{X}$ can be used for contrastive learning with $X^m$. Accordingly, $X$ can be regarded as the anchor of contrastive learning. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I would keep the score. --- Reply to Comment 1.1.1: Comment: Thank you for your time and positive comments!
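The positive/negative construction described in this rebuttal (mask $X$ to get the effective part $X^m$, use the inverse mask for redundant negatives, and contrast both against the anchor $X$) might be sketched roughly as below. This is a generic illustration, not the paper's implementation: the MLP mapping from $X^m$ to $\widehat{X}$ is omitted, the InfoNCE-style loss and all shapes are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def split_by_mask(x, m):
    """X^m keeps effective features; the inverse mask keeps redundant ones."""
    positive = [xi * mi for xi, mi in zip(x, m)]
    negative = [xi * (1.0 - mi) for xi, mi in zip(x, m)]
    return positive, negative

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: pull the positive toward the anchor, push negatives away."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

x = [1.0, 2.0, 3.0, 4.0]       # full input (anchor)
m = [1.0, 1.0, 0.0, 0.0]       # binary mask over features
pos, neg = split_by_mask(x, m)  # pos = [1, 2, 0, 0], neg = [0, 0, 3, 4]
loss = contrastive_loss(x, pos, [neg])
```

Minimizing such a loss drives the masked-in features toward the anchor while separating the inverse-masked (redundant) features, which is the intuition the rebuttal describes.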
Summary: This paper proposes an explanation method called "ORTE" (the title of the paper) which uses the Optimal Information Retention principle to construct explanations for time series models. Specifically, given a time series classifier, they propose a mask-generating model to produce a binary mask for each input, such that three important information-theoretic conditions (Criteria 2.1 - 2.3) hold. These criteria essentially require (1) that the masked input be sufficient for prediction; (2) the masked input has the lowest possible mutual information with the full input; and (3) the complement of the masked input must contain as little information about the label as possible. Experiments demonstrate that the proposed explanations are close to the ground truth explanations, handily beating other methods in this area. Claims And Evidence: The claims made in the paper are mostly well supported by experiments. However, the claim that "We achieve state-of-the-art performance on eight synthetic and real-world time series datasets compared to the latest competitive baselines" is a bit misleading, as the method performs on par with others (TIMEX++) in experiments on real-world datasets (Figure 3). Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are standard and make sense for the problem at hand. However, the paper misses a discussion of the interpretability of the masks: while the proposed masks are broadly faithful to the underlying model, it is unclear whether they are also interpretable or meaningful to a human domain expert. It would be good to know, for example, whether using this method results in greater insights than using other competing methods (e.g., TIMEX++). A lack of discussion / experiments / visualizations on this aspect is a critical drawback of this paper. Theoretical Claims: I did not check the correctness of the proofs, as they are in the supplementary material.
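For reference, the three criteria summarized in this review can be written as mutual-information objectives over a binary mask $M$ with masked input $X_m = M \odot X$. This is a plausible formalization of the reviewer's plain-language description; the paper's exact notation may differ.

```latex
\text{(1) sufficiency:}\quad I(X_m; Y) \approx I(X; Y)
\qquad
\text{(2) minimal redundancy:}\quad \min_{M}\; I(X; X_m)
\qquad
\text{(3) uninformative complement:}\quad \min_{M}\; I(X \setminus X_m;\, Y)
```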
Experimental Designs Or Analyses: The experimental design is sound and valid. Supplementary Material: No. Relation To Broader Scientific Literature: This paper adds to the literature on time-series explainability using feature attribution, and proposes a method to improve explanation faithfulness. In particular, this improves upon another SOTA time-series explanation method, called TIMEX++, by augmenting the mask-generation procedure to include more constraints, such as the minimality of the information between the masked and the full input, and the sparsity of the masks. Essential References Not Discussed: The paper misses discussion of key literature in the broader explainability literature that performs masking in a manner similar to this paper. For example: 1. Ruth Fong et al. "Understanding deep networks via extremal perturbations and smooth masks". CVPR 2019 2. Dabkowski and Gal. "Real time image saliency for black box classifiers". NeurIPS 2017 3. Chen et al, "Learning to explain: An information-theoretic perspective on model interpretation", ICML 2018 4. Yoon et al, "Invase: Instance-wise variable selection using neural networks", ICLR 2019 5. Jethani et al., "Have we learned to explain?: How interpretability methods can learn to encode predictions in their interpretations", AISTATS 2021 6. Bhalla et al, "Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability", NeurIPS 2023 All of these papers introduce methods for generating per-sample masks, and include discussions of sufficiency and sparsity criteria similar to this paper's. The missing discussion, and the lack of contextualization of the current method against this larger literature, is another critical drawback of this paper. Other Strengths And Weaknesses: Additional Weaknesses: - The paper introduces 5 criteria in total (criteria 2.1-2.3) + mask sparsity + mask continuity, where the latter two are described more as implementation details.
It is unclear which of these criteria is sufficient for good empirical performance in the results shown. For example, what are the trade-offs of the mask continuity and sparsity constraints on the results? The paper lacks a discussion of these aspects. Additional Strengths: - The paper demonstrates good empirical performance, especially on recovering known explanations in synthetic settings. Other Comments Or Suggestions: N/A Questions For Authors: Could the authors please clarify the concerns raised regarding: 1. the human-interpretability of the mask explanations, and the impact of the "mask continuity" parameter? 2. the incomplete literature survey, and how the proposed method compares to the literature on masking explanations? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Comment:** We express our sincere gratitude to Reviewer w93i for providing a comprehensive review, insightful perspectives, and thought-provoking questions. - **Claims And Evidence:** However the claim that " We achieve ... on real-world datasets (Figure 3). **Reply:** Thank you for your advice. We will revise the presentation to faithfully reflect the experimental conclusions. - **Methods And Evaluation Criteria:** However, the paper misses a discussion of the interpretability of the masks... A lack of discussion / experiments / visualizations on this aspect is a critical drawback of this paper. **Reply:** Thank you again for your constructive suggestions, which will be very helpful for a discussion on interpretation and expert cognition. We evaluate the meaningfulness of mask-based explanations against expert explanations from both quantitative and qualitative perspectives. 1) For synthetic data, we followed the same experimental setup as **TIMEX** and **TIMEX++** with pre-defined Ground-Truth explanations. Quantitative analysis of the consistency between salient features and Ground-Truth can accurately evaluate the interpretation effect. A visualization of the explanation on the synthetic FreqShapes data is shown in Figure R4 (https://anonymous.4open.science/r/orte/anonymous.pdf). Compared with the strongest baseline **TIMEX++**, our method maintains stronger consistency with the Ground-Truth. Likewise, we provided corresponding visualizations and discussions in the supplementary material of the manuscript. 2) For annotated real-world data, ECG data has expert-predefined Ground-Truth to refer to. Table 2 of the manuscript provides a quantitative evaluation. To further explore the consistency between the explanation results and human domain-expert cognition, we visualize the interpretations of our method and the baseline methods, as shown in Figure R3 (https://anonymous.4open.science/r/orte/anonymous.pdf).
Figure R3 compares our method with **IG**, **TIMEX++**, and **Ground Truth** (QRS complex waves). In addition, we invited clinicians to analyze the interpretation results. The physicians suggested that the Ground Truth in the figure may include the P wave on the left, which does not belong to the QRS complex, and that a more accurate Ground Truth should be narrower; this further suggests that our method may provide insight for the interpretation of real data. 3) For the real-world data without annotations, on the one hand, we followed TIMEX and TIMEX++, progressively occluding the bottom $p$-percent of saliency features. Ideally, redundant features are assigned low saliency and all important features (i.e., completeness) are assigned high saliency, so occluding the low-saliency features affects the prediction performance of the algorithm only slightly. On the other hand, we followed the suggestion of reviewer QJfM and added an imputation experiment to support the requirements of redundancy and completeness in the interpretation results, as detailed in the reply to the Claims And Evidence section of reviewer QJfM. - **Other Strengths And Weaknesses:** the trade-offs of the mask continuity. **Reply:** 1) For mask continuity, following the TIMEX experimental setup, we investigated the influence of continuity, as shown in Figure R2 (c) (https://anonymous.4open.science/r/orte/anonymous.pdf). We tested AUPRC, AUP, and AUR on SeqComb-MV data. The results showed that continuity did not affect the algorithm significantly, so we chose a consistent setting. 2) For mask sparsity, Figure R2 (d) (https://anonymous.4open.science/r/orte/anonymous.pdf) shows a sensitivity analysis. In the interval $[0.001, 0.05]$, the interpretation of the algorithm is relatively stable.
However, mode collapse occurs when the sparsity penalty is too large, consistent with our expectation that effective features are then missed. - **Questions For Authors:** - Q1. the human-interpretability of the mask explanations, and the impact of the "mask continuity" parameter? **Reply:** 1. For the **human-interpretability of the mask explanations**, please refer to the reply of **Methods And Evaluation Criteria**. 2. For the **impact of the "mask continuity" parameter**, please refer to the reply of **Other Strengths And Weaknesses**. - Q2. the incomplete literature survey, and how the proposed method compares to the literature on masking explanations? **Reply:** We will further clarify our literature survey. Our baselines **TIMEX** and **TIMEX++** follow the masking-explanation paradigm; both learn saliency distributions via masks. The other baseline method, **Dynamask**, is also based on mask learning. In the updated version we will add a more extensive discussion, including time-series and other general methods you mentioned in Essential References Not Discussed. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal and for the additional experiments! - **Mask interpretability**: Regarding "progressively occluding the saliency features", perhaps the authors have conflated **interpretability** and **faithfulness** of masks. While interpretability involves assessing whether the masks communicate the rationale for model outputs in human-understandable terms, faithfulness is about assessing whether the provided explanation is consistent with model behaviour. Note that it is possible for masks to be interpretable but not faithful, and also faithful but not interpretable. Also please note that "occluding saliency features" measures faithfulness and not interpretability.
Having said that, the visualizations R3 and R4 are helpful to present, and provide an indication that the method may produce human-interpretable masks. - **Incomplete Literature survey**: The rebuttal has unfortunately not addressed the central issue, that masking-based methods are common in the literature, dating back to at least 2017 (Dabkowski & Gal). It is unfortunately still unclear how the proposed method compares to this existing literature, and whether it can be interpreted as a straightforward application of these methods to time-series data. For the latter reason, I cannot increase my score at this time. --- Reply to Comment 1.1.1: Comment: We appreciate your thorough review and valuable suggestions. We particularly value your insightful analysis regarding interpretability and faithfulness, as well as your positive feedback on visualizations R3 and R4. Regarding your latter concern about the **Incomplete Literature survey**, we respond as follows: Mask-based methods learn a mask matrix $M$; the Hadamard product of the input $X$ and $M$ represents the salient features. [1] proposed to learn a masking model by embedding category conditions into the U-Net structure to realize saliency detection. The sparsity and smoothness terms optimize the objective function of mask learning, similar to those in our paper. [2] maximizes the mutual information between the selected features and the conditional distribution of the response variables from an information-theoretic perspective, and develops a variational approximation to estimate this mutual information. Although it can effectively filter redundant information, it may cause the omission of important features, which relates to the completeness issue raised in our manuscript. INVASE [3], inspired by the actor-critic methodology, proposes a selector network for mask learning, optimized in combination with a prediction network and a baseline network.
Extremal perturbations [4], i.e., the perturbations with the largest impact on the network among all perturbations within a fixed region, are used to balance the optimization terms of the mask constraint. Moreover, this perturbation can be extended to intermediate activation layers to explore diverse properties of feature representations. [5] points out that explanation methods suffer from computational inefficiency, inaccuracy, or lack of faithfulness. The lack of robustness of the underlying black-box models, especially to the erasure of unimportant distractor features in the input, is a key reason why certain attributions lack faithfulness [6]. The Distractor Erasure Tuning method [6] adapts black-box models to be robust to distractor erasure, thus providing discriminative and faithful attributions. Other interpretive methods specifically designed for time series have also received extensive attention in recent years. For example, Dynamask [7] considers the time dependence and learns the influence of dynamic perturbation operators on the mask. TIMEX [8] specifically designs surrogate models in order to avoid the inductive bias of other methods on time series data, and identifies explanatory temporal patterns by aligning latent-space feature consistency and predictive-distribution consistency. As an advanced version of TIMEX, TIMEX++ [9] alleviates trivial solutions and distribution shift based on the information bottleneck. The above approaches provide practical solutions for mask-based explanations from different perspectives, including but not limited to heuristics, information theory, etc. However, they still lack a comprehensive consideration of information redundancy and completeness, which easily causes invalid information to be mixed in or effective information to be missing. On the other hand, differences in data modalities may cause inductive bias.
To this end, we propose the optimal information retention principle and derive the corresponding objective functions. Similar to [1], we adopt sparsity and smoothness constraints on the mask. Learning a Bernoulli distribution provides a probabilistic basis for the mask explanation, and contrastive learning is used to separate effective features from redundant ones. We propose the ORTE method as a practical solution for time series data, which can be adapted to various time series models. In addition, we propose adapt-STE to decouple the discrete mapping process and alleviate the differentiability limitation. 1. Dabkowski and Gal, "Real time image saliency for black box classifiers", NeurIPS 2017 2. Chen et al., "Learning to explain: An information-theoretic perspective on model interpretation", ICML 2018 3. Yoon et al., "INVASE: Instance-wise variable selection using neural networks", ICLR 2019 4. Fong et al., "Understanding deep networks via extremal perturbations and smooth masks", CVPR 2019 5. Jethani et al., "Have we learned to explain?: How interpretability methods can learn to encode predictions in their interpretations", AISTATS 2021 6. Bhalla et al., "Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability", NeurIPS 2023 7. Crabbé, J. et al., "Explaining time series predictions with dynamic masks", ICML 2021 8. Queen et al., "Encoding time-series explanations through self-supervised model behavior consistency", NeurIPS 2023 9. Liu, Z. et al., "TimeX++: Learning time-series explanations with information bottleneck", ICML 2024
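The straight-through idea behind the adapt-STE mentioned above can be sketched as follows. The rebuttal does not give adapt-STE's exact form, so this is a generic vanilla straight-through estimator, which the authors' variant presumably extends: binarize the soft mask in the forward pass, and treat the thresholding as the identity in the backward pass so gradients flow through unchanged.

```python
def ste_forward(soft_mask, threshold=0.5):
    """Forward pass: hard-threshold soft mask probabilities into a binary mask."""
    return [1.0 if p >= threshold else 0.0 for p in soft_mask]

def ste_backward(grad_output):
    """Backward pass: the straight-through estimator treats the non-differentiable
    thresholding as identity, so incoming gradients pass through unchanged."""
    return list(grad_output)

soft = [0.2, 0.7, 0.9, 0.4]
hard = ste_forward(soft)                      # -> [0.0, 1.0, 1.0, 0.0]
grads = ste_backward([0.1, -0.2, 0.3, 0.05])  # identical to the incoming gradients
```

In an autograd framework this is usually written as a custom forward/backward pair, letting the discrete binary mask be used at inference while the underlying soft mask remains trainable.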
Summary: This paper introduces a novel approach to explainability in time-series deep learning models through an information-theoretic lens. The authors propose the "Optimal Information Retention Principle" which outlines three key criteria for high-quality explanations via information retention: ‘semantic’, ‘minimum redundant’, and ‘maximum effective’. Based on these, they develop ORTE, a practical framework that learns binary masks to identify important patterns while filtering out redundant information. The framework incorporates contrastive learning to achieve a balance between low redundancy and high completeness in explanations. Experiments on both synthetic and real-world datasets demonstrate that ORTE outperforms existing SOTA explainability methods on similar metrics to those used by TIMEX/TIMEX++. Claims And Evidence: Overall, the core technical claims about ORTE's performance improvements are well-supported, but some broader claims about its practical advantages and generalizability would benefit from additional evidence. Well-supported: C1. Performance superiority: The claim that ORTE outperforms existing methods is convincingly supported by comprehensive experiments across multiple datasets. The quantitative results in Tables 1-2 show clear improvements over competitive baselines, particularly on AUPRC, AUP, and AUR metrics. C2. Theoretical foundation: The claim that information theory can provide optimization criteria for explanations is supported through detailed mathematical derivations and proofs in the paper and appendices. C3. Adaptability: The claim that ORTE works across different model architectures is demonstrated through additional experiments on CNNs and LSTMs in the appendix. Less convincing evidence: C4. Practical utility: the claim lacks a detailed comparison of computational demands versus simpler approaches, which would be important for real-world applications.
Introducing an (even theoretical) estimation of time/memory complexity would be welcome. C5. Real-world evaluations: while the occlusion experiments on real-world datasets without ground truth (Figure 3) show ORTE maintains high prediction AUROC, this is only a proxy measure and doesn't definitively prove the explanations are correctly identifying the truly important features. Other work has introduced evaluations by consulting subject-matter experts and comparing their interpretation of variables of importance with those generated by the proposed framework. Methods And Evaluation Criteria: The proposed method of using binary masks to identify important regions via a transformer-based mask generator -> contrastive separator -> predictive distribution aligner makes sense after the introduction of the three information retention criteria. The synthetic evaluations with ground truth are standard for time series interpretability and show the performance of the model against a wide range of competing methods. Although the real-world evaluation is somewhat lacking, the approach does not seem too far off. Some evaluation is missing regarding how features are masked: it seems the authors simply mask the bottom x-percentile, but this does not take into account temporal dependencies within each dimension of a multivariate time series; they should consider masking until a 1-std change or until the end of the series. Theoretical Claims: I reviewed the proofs provided in Appendices B.1 and B.2, which derive the objective functions from the information-theoretic principles. The mathematical derivations appear sound and follow established information theory principles. No errors were identified in these derivations, and the theoretical foundation appears solid.
Experimental Designs Or Analyses: Again, the synthetic experimental design is comprehensive and appropriate: - The use of synthetic datasets with ground truth explanations (FreqShapes, SeqComb-UV, SeqComb-MV, LowVar) provides a clear baseline for evaluating explanation quality. - The ablation studies effectively isolate the contribution of each component in the framework. - The visualizations provide qualitative evidence that complements the quantitative metrics. However, the experimental design could be improved: - While real-world datasets from different domains were included (ECG, PAM, Epilepsy, Boiler) the experiments themselves were not rigorously compared. - The authors could provide runtime/computational complexity comparisons across methods - A hyperparameter sensitivity analysis could be completed Supplementary Material: I reviewed all the appendices and spent particular time in: A: Related work B: Additional theoretical proofs and derivations D: Experimental details E: Additional experiments on CNN and LSTM models G: Visual Comparisons The supplementary material is thorough and provides valuable additional evidence for the paper's claims, particularly the generalizability across model architectures. Relation To Broader Scientific Literature: The paper situates itself well within existing TS explainability literature, particularly at the intersection of time-series analysis and information theory. The authors appropriately acknowledge: - Prior work on local explanations in time series (Dynamask, TIMEX, WinIT) - Information-theoretic approaches in deep learning (Information Bottleneck, contrastive learning) - The challenges specific to time-series explainability (temporal dependencies, out-of-distribution issues) The work advances the field by providing a unified information-theoretic framework specifically designed for time-series data, addressing the limitations of methods that were originally designed for images or text. 
Essential References Not Discussed: The work does address most relevant work; however, it excludes recent advances in applying multiple instance learning to time series tasks. Early et al. (2024) [https://arxiv.org/pdf/2311.10049] and Chen et al. (2024) [https://arxiv.org/abs/2405.03140] provide an introduction to these methods and to how they offer inherent interpretability for multivariate time series classification problems from an information-theoretic perspective. Earlier work by Tonekaboni et al. (2020) [https://arxiv.org/abs/2003.02821] and Ismail et al. (2020) [https://arxiv.org/abs/2010.13924], which helped set the stage for time series interpretability and provides other synthetic and real-world evaluations for improved experimentation, could further support this work. Other Strengths And Weaknesses: S1. The paper provides a unique and theoretically sound unified optimization framework for TS explainability. S2. The adapt-STE technique is a novel contribution that could be useful for other binary mask learning problems. S3. The synthetic evaluations show significant performance improvements compared with the previous SOTA. W1. The real-world evaluations, as previously mentioned. W2. No significant investigation into edge cases like dealing with irregularly sampled or missing data. Other Comments Or Suggestions: N/A Questions For Authors: Q1. How does the computational complexity of ORTE compare to methods like IG or TIMEX++? Q2. How sensitive is the method to the choice of hyperparameters, particularly those in the contrastive learning component? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Comment:** We sincerely appreciate Reviewer voUp for offering valuable insights and recognition of our work. - **Claims And Evidence:** - C4. Practical utility: claim lacks detailed comparison of computational demands versus simpler approaches, which would be important for real-world applications. Introducing (even theoretical) estimation of time/memory complexity would be welcome. **Reply:** We evaluated the computational requirements of interpretability methods on two real-world datasets (PAM and Epilepsy), as detailed in Table R1 (https://anonymous.4open.science/r/orte/anonymous.pdf). The computing infrastructure consisted of Ubuntu 18.04.6 LTS and an NVIDIA GeForce RTX 2080. Table R1 compares our method (**ORTE**) with four baseline approaches (**IG**, **Dynamask**, **TIMEX**, and **TIMEX++**) in terms of parameters (M), FLOPs (G), and inference runtime (s). Compared with **TIMEX** and **TIMEX++**, our inference time is slightly longer due to the *double*-STE used to regulate the discrete maps, but all three are of the same order, i.e., less than 1 second. **IG** and **Dynamask** require longer inference times, which is consistent with expectations, since both methods operate recursively on a single sample while our method requires only one forward pass. - C5. Real-world evaluations: while the occlusion experiments on real-world datasets without ground truth (Figure 3) show ORTE maintains high prediction AUROC, this is only a proxy measure and doesn't definitively prove the explanations are correctly identifying the truly important features. Other work has introduced evaluations by consulting subject-matter experts and comparing their interpretation of variables of importance with those generated by the proposed framework. **Reply:** Thanks for your advice. We consulted clinical experts and analyzed real-world **ECG** data, as shown in Figure R3 (https://anonymous.4open.science/r/orte/anonymous.pdf).
Figure R3 compares our method with **IG**, **TIMEX++**, and **Ground Truth** (QRS complex waves). Although **IG** accurately locates some points of the QRS, it misses more information compared with the Ground Truth. While **TIMEX++** highlights the salient features, it also mixes in redundant features, which makes it difficult to understand the algorithm's prediction. Our method maintains less redundancy while highlighting more complete salient features, which verifies the optimal information retention principle proposed in this paper. Notably, clinical experts identified a critical annotation discrepancy: the reference Ground Truth inadvertently incorporated proximal P-wave components (non-QRS elements) at leftward leads. This observation suggests that the benchmarks may require narrower physiological bounds, which further supports that our method may provide new insight for the interpretation of real data. - **Other Strengths And Weaknesses:** - W1. The real-world evaluations as previously mentioned. **Reply:** For the evaluation of real-world data, on the one hand, we followed your friendly advice and consulted clinical experts as a supplement to the explanation evaluation, as mentioned in the reply to C5 under **Claims And Evidence**. On the other hand, we followed the advice of reviewer QJfM and supplemented the insertion experiments, as detailed in the reply to the Claims And Evidence section of reviewer QJfM. - W2. No significant investigation into edge-cases like dealing with irregularly sampled or missing data. **Reply:** We investigated the edge case of missing data, as shown in Table R2 (https://anonymous.4open.science/r/orte/anonymous.pdf). We compare the proposed method **ORTE** with the strongest baseline **TIMEX++** on SeqComb-MV data. We set up three scenarios: random loss of 5% of data points, random loss of 15% of data points, and random loss of one variable.
The results show that our method performs well on AUPRC, AUP and AUR, which is consistent with the claims of the manuscript. - **Questions For Authors:** - Q1. How does the computational complexity of ORTE compare to methods like IG or TIMEX++? **Reply:** Please refer to the reply to C4 under **Claims And Evidence**. - Q2. How sensitive is the method to the choice of hyperparameters, particularly those in the contrastive learning component? **Reply:** We investigated the choice of the contrastive learning hyperparameter (i.e., $\alpha$), as shown in Figure R2(b) (https://anonymous.4open.science/r/orte/anonymous.pdf). When $0.1 \leqslant \alpha \leqslant 13$, AUPRC and AUP slowly rise and gradually plateau, while AUR gradually rises. AUP decreases significantly when $\alpha$ is larger, i.e., the explanation fails. A value close to 10 is recommended for our experimental applications. - **Essential References Not Discussed:** **Reply:** Thanks. We will supplement and analyze these references in the updated version. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for responding. After reviewing the updated responses, including the expansion of the computational complexity analysis, the hyperparameter reviews, and the additional experimental data, I have further confidence in my current rating, which I will keep. --- Reply to Comment 1.1.1: Comment: Thank you once again for your time and valuable suggestions. We will include the above discussions in the revised version.
Summary: The authors address redundancy and completeness in time-series explanation methods by deriving an Optimal Information Retention principle from information theory, which optimizes explanations by minimizing redundancy while maximizing completeness. Based on this principle, they propose ORTE, a novel explanation framework that ensures informative and non-redundant explanations. They validate their approach through empirical evaluations on eight synthetic datasets and four real-world datasets, demonstrating its effectiveness over existing explanation methods. Claims And Evidence: The paper proposes ORTE, an explanation framework designed to minimize redundancy and maximize completeness, verified through AURPC, AUP, and AUR on synthetic datasets with ground truth explanation labels. The evaluation methodology for synthetic datasets is well-structured and justified, as these metrics align with the paper’s theoretical objectives. Table 1 quantitatively supports the validity of ORTE in this controlled setting. For real-world datasets, the paper employs stepwise occlusion experiments (Figure 3) to assess explanation validity. The use of AUROC as a primary metric makes sense in measuring how explanations influence model predictions. However, there are some limitations in the evaluation approach. 1. Occlusion alone does not fully capture redundancy and completeness. The authors assume that a stable AUROC under extreme feature removal suggests explanation stability. However, if AUROC remains above 95% after occluding 99% of features, this may indicate a redundant explanation rather than a robust one. 2. Insertion experiments should complement occlusion. The paper does not include stepwise insertion experiments, which could provide a more balanced assessment. Insertion tests could verify completeness by measuring how much AUROC recovers when adding back features, while also assessing redundancy by identifying how many features are actually necessary. 
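The occlusion and insertion protocols discussed above can both be viewed as one perturbation-curve routine, sketched below. The zero baseline and the toy `keep_mass` score (standing in for dataset-level AUROC) are illustrative assumptions rather than the paper's pipeline.

```python
import numpy as np

def perturbation_curve(x, saliency, predict, percentiles, mode="occlude"):
    """Stepwise occlusion or insertion over the bottom-p% salient entries.

    mode="occlude": start from the full input and remove the bottom p%.
    mode="insert":  start from an all-masked input and add back the bottom p%
                    (a top-p% variant would probe completeness more directly).
    `predict` maps a series to a score; in the paper this role is played by
    the classifier's AUROC over a whole dataset.
    """
    scores = []
    for p in percentiles:
        threshold = np.percentile(saliency, p)
        low = saliency <= threshold
        perturbed = x.copy()
        if mode == "occlude":
            perturbed[low] = 0.0          # zero baseline (an assumption)
        else:
            perturbed[:] = 0.0
            perturbed[low] = x[low]
        scores.append(predict(perturbed))
    return scores

# Toy check with a "predict" that just measures salient mass kept: occlusion
# scores fall and insertion scores rise as p grows, since low-saliency mass
# is removed (or added back) first.
rng = np.random.default_rng(1)
x = rng.normal(size=(16, 3))
sal = np.abs(x)                            # pretend |x| is the saliency map
keep_mass = lambda z: float(np.abs(z).sum())
occ = perturbation_curve(x, sal, keep_mass, [25, 50, 75], mode="occlude")
ins = perturbation_curve(x, sal, keep_mass, [25, 50, 75], mode="insert")
assert occ[0] >= occ[-1] and ins[0] <= ins[-1]
```

Swapping `low` for its complement gives the top-p% insertion variant the review argues for.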
Methods And Evaluation Criteria: The validation strategy—conducted on eight synthetic datasets and four real-world datasets—is appropriate for the problem and application. The use of synthetic datasets with ground truth explanations allows for a well-controlled evaluation, and the metrics (AURPC, AUP, AUR) provide a reasonable measure of redundancy and completeness in this setting. However, for real-world datasets, the experiments are not sufficient to effectively demonstrate low redundancy and high completeness. The evaluation primarily relies on stepwise occlusion experiments, which, while informative, do not fully capture both aspects of explanation quality. As discussed in the Claims and Evidence section, integrating stepwise insertion experiments alongside occlusion would provide a more comprehensive assessment of redundancy and completeness. Theoretical Claims: The proofs for the two main criteria—minimum redundant retention and maximum effective retention—are provided in the supplementary material. Upon review, they appear to be correct with no immediate inconsistencies. Experimental Designs Or Analyses: As discussed in the Claims and Evidence section, integrating stepwise insertion experiments alongside occlusion would provide a more comprehensive assessment of redundancy and completeness, addressing current limitations in evaluating explanation quality. Additionally, a minor potential improvement could be to analyze the sensitivity of the hyperparameter controlling the noise level in negative samples used in contrastive learning. Since the method applies Gaussian noise to generate negative samples, adjusting the noise level could have some influence on the contrastive learning process. While this is not a critical issue, a brief sensitivity analysis could help ensure the robustness of the approach. 
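As a rough illustration of the negative-sample construction whose sensitivity is suggested above, Gaussian-noise-based positive/negative pairs might look like the following. The masking convention and where the noise is applied are assumptions for illustration; the paper's exact construction may differ.

```python
import numpy as np

def make_contrastive_pair(x, mask, sigma, rng):
    """Build a positive/negative pair from a boolean saliency mask.

    positive: keep the salient (masked-in) region and impute the rest with
              Gaussian noise, avoiding the OOD patterns of zero-padding.
    negative: perturb the salient region itself with noise of scale sigma,
              which should destroy the class-relevant pattern.
    sigma is the noise scale whose sensitivity could be analyzed.
    """
    noise = rng.normal(scale=sigma, size=x.shape)
    positive = np.where(mask, x, noise)       # noise imputation off-mask
    negative = np.where(mask, x + noise, x)   # noise perturbation on-mask
    return positive, negative

rng = np.random.default_rng(2)
x = rng.normal(size=(8, 2))
mask = np.zeros(x.shape, dtype=bool)
mask[:4] = True  # toy choice: first half of the time steps is "salient"
pos, neg = make_contrastive_pair(x, mask, sigma=1.0, rng=rng)
assert np.allclose(pos[mask], x[mask])        # salient region kept in positive
assert not np.allclose(neg[mask], x[mask])    # salient region perturbed in negative
```

A sensitivity sweep would simply repeat this over a grid of `sigma` values and track the downstream explanation metrics.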
Supplementary Material: The supplementary material was reviewed, particularly the proofs for the two main criteria—minimum redundant retention and maximum effective retention. These proofs appear to be correct with no immediate inconsistencies. Additionally, the experimental details provided further clarity on the methodology. The visual comparison and analysis sections were also examined, offering useful insights into the interpretability of the explanations. These visualizations help support the claims made in the main paper, reinforcing the evaluation of redundancy and completeness. Relation To Broader Scientific Literature: The paper builds on prior work in explainability by introducing ORTE, which optimizes redundancy minimization and completeness maximization. This aligns with information-theoretic approaches in explainability and contributes to time-series interpretability. While the Gaussian noise imputation technique for negative samples is not a major contribution, it serves as a simple way to avoid capturing OOD patterns, unlike the zero-padding approach mentioned in the related work section. Essential References Not Discussed: The paper appropriately cites works related to explainability, information-theoretic approaches, and time-series interpretability. Its main motivation—the lack of a unified optimization criterion across explanation methods—is closely related to the evaluation of interpretability techniques. To strengthen this connection, the authors could discuss ROAR ([1], NeurIPS 2019), which introduced a systematic framework for assessing interpretability methods and formalized insertion and occlusion (deletion) as key evaluation techniques. Referencing ROAR would further validate the effectiveness of ORTE's evaluation strategy. Reference:[1] Hooker, S., Erhan, D., Kindermans, P. J., & Kim, B. (2019). A benchmark for interpretability methods in deep neural networks. Advances in Neural Information Processing Systems, 32. 
Other Strengths And Weaknesses: Another strength is clear Motivation and Theoretical Grounding. The paper effectively frames the problem of unifying redundancy minimization and completeness maximization in explainability, providing a principled approach grounded in information theory. Other than Limited Experimental Evaluation for Real-World Datasets and Gaussian Noise Imputation for Negative Samples, there are no concerns. Other Comments Or Suggestions: 1. Typo. Line 208 (2.3) 2. Hyperlink issue. The hyperlinks for references and sections did not work on my end. This may be a local issue, but please double-check to ensure proper linking. Questions For Authors: - Comment on Figure 3. Beyond stating that the experimental results are better and more stable, what additional interpretations can be drawn from Figure 3? Specifically, how do the results relate to redundancy and completeness? A more detailed discussion on these aspects would enhance the interpretation of the figure. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Comment:** We sincerely appreciate Reviewer QJfM for the careful analysis of our work and valuable suggestions. - **Claims And Evidence:** 1. Occlusion alone does not fully capture redundancy and completeness. & 2. Insertion experiments should complement occlusion. **Reply:** Thanks again for your advice. We believe the insertion experiments can serve as a beneficial supplement to our occlusion experiments. We completed the insertion experiments by gradually inserting the bottom $p$-percent salient features, as shown in Figure R1 (https://anonymous.4open.science/r/orte/anonymous.pdf). Similar to the occlusion experiments, the insertion experiments compare our method (**ORTE**) with three baseline methods (**Dynamask**, **TIMEX**, and **TIMEX++**) and random insertion (**Random**) on three real-world datasets (**PAM**, **Epilepsy**, and **Boiler**). The results show that **ORTE** achieves a lower AUROC when inserting the bottom 75% of salient features. As the insertion percentage increases, the predicted AUROC gradually improves. When the insertion ratio reaches 97.5%, ORTE attains the highest AUROC. This indicates that the most interpretable or informative features are concentrated in the high-saliency features, further validating the manuscript's claims of low redundancy and high completeness. In addition, we notice that **Random** always maintains a high AUROC, because random selection does not distinguish feature importance and thus still includes informative time points or segments. - **Other Strengths And Weaknesses:** Other than Limited Experimental Evaluation for Real-World Datasets and Gaussian Noise Imputation for Negative Samples, there are no concerns.
**Reply:** 1) As replied in **Claims and Evidence**, the insertion experiments, serving as a complementary evaluation on real-world data, are able to jointly support the low-redundancy and high-completeness claims of this paper together with the occlusion experiments. 2) We investigated the impact of Gaussian noise intensity on negative sample construction, as shown in Figure R2 (a) (https://anonymous.4open.science/r/orte/anonymous.pdf). We tested eight noise intensities $\sigma \in \{0.1, 0.3, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0\}$ on the multivariate synthetic dataset (**SeqComb-MV**). The results show that AUP is lower at $\sigma \leqslant 1.0$, while AUPRC and AUR are higher, which indicates that insufficient noise intensity may not be beneficial for separating interpretation patterns from redundant information. When $1.5 \leqslant \sigma \leqslant 3.0$, AUP and AUPRC are higher, which indicates that the interpretation patterns of positive samples are effectively discriminable and do not produce OOD patterns. A value close to 2.5 is recommended for our experimental applications. Further analysis of Gaussian noise imputation for negative samples on other datasets will be included in the updated version. - **Questions For Authors:** Comment on Figure 3. ... how do the results relate to redundancy and completeness? **Reply:** For Figure 3, we stepwise occluded the bottom $p$-percent salient features, following TIMEX and TIMEX++. Ideally, the redundant features are assigned low saliency, and all the important features (i.e., completeness) are assigned high saliency. When the low-saliency features are occluded, the prediction performance of the algorithm is less affected. On the contrary, if the important features are identified as low saliency and are occluded, the prediction performance of the algorithm is significantly reduced.
On the one hand, this reflects the redundancy and completeness requirements on interpretation results; on the other hand, as pointed out in **Claims and Evidence**, due to the lack of ground-truth interpretations for real-world data, occlusion experiments still have limited robustness. To this end, the insertion experiment can serve as a powerful supplement. Specifically, when the low-saliency features are occluded or the high-saliency features are inserted, the proposed algorithm obtains higher prediction performance, which further verifies the claims of this paper. - **Essential References Not Discussed:** To strengthen this connection, the authors could discuss ROAR ([1], NeurIPS 2019) ... Referencing ROAR would further validate the effectiveness of ORTE's evaluation strategy. **Reply:** Thanks for the friendly suggestion; we will discuss the ROAR reference in the updated version. - **Other Comments Or Suggestions:** 1. Typo. Line 208 (2.3) & 2. Hyperlink issue. **Reply:** 1) We addressed this issue in the updated version. 2) We double-checked the hyperlinks to references and sections on multiple reading platforms. --- Rebuttal Comment 1.1: Comment: Thank you for adding the insertion experiment and hyperparameter sensitivity analysis. This additional evaluation complements the existing occlusion experiment and allows for a more balanced assessment. However, I believe that the current occlusion and insertion experiments still fall short of directly demonstrating low redundancy and high completeness. Specifically, since the experiments are conducted based on bottom-k% salient features—removing or adding the least salient features first—they only allow for an indirect evaluation of completeness. For a more direct assessment, it would be more appropriate to insert the top-k% salient features first, which would reveal how quickly the model performance recovers when only the most informative features are used.
For an explanation to have low redundancy, the model performance should change significantly when either low- or high-saliency features are removed or added. This is because, when information is concentrated and non-redundant, small perturbations to the key features will lead to sharp performance changes. In contrast, high completeness implies that the top salient features alone should be sufficient to recover the model's original performance. That is, the explanation must contain the majority of the essential information. Since the current experiments are based on bottom-k% insertion, we can evaluate completeness indirectly by observing how much the model's performance recovers in the end. From this perspective, the proposed method demonstrates the highest recovery performance (excluding the random baseline) in the insertion experiment, which suggests that it captures more informative features than the other methods. Furthermore, the area under the performance curve in the occlusion experiment is comparable to or slightly better than the baselines', which is also a positive signal. Based on these improvements and observations, I am updating my score (3->4). --- Reply to Comment 1.1.1: Comment: Thanks for your valuable suggestions and positive comments!
Cross-City Latent Space Alignment for Consistency Region Embedding
Accept (poster)
Summary: This paper deals with a critical issue in a popular trend in urban computing, namely the cross-city latent space alignment problem in region representation learning, which extracts useful features from different types of urban data for many urban prediction tasks. The issue is that although the pre-training process is free from task labels, making the actual predictions still needs a substantial amount of labels. Oftentimes the labels are not really accessible, so this erodes the principal motivation of region representation. The paper developed a method that can work without labels, by migrating patterns accumulated in another city to a city with no labels. A fundamental idea here is that the pre-trained latent embedding spaces of the two cities can be aligned in an unsupervised way, without ground truth or rules. The work achieved this by looking at both overall distribution alignment and pairwise relationship preservation. This is overall an interesting and novel idea to address a critical issue in urban computing. Claims And Evidence: The overarching claim is that the proposed one-stage method for both region representation learning and cross-city alignment can transfer the knowledge learned from a data-rich city to another one where labels are unavailable. The evidence for this claim is ample, including experimental results across multiple tasks, cities, and baselines. The ablation, such as "without one-stage", is convincing. The other claim is that looking only at embedding distributions, rather than using human-developed rules, is sufficient. This claim is well backed by the experiments too, and it is encouraging to see the advantages of this method with little intervention. Figure 8, visually presenting the aligned and unaligned latent spaces, is useful. Methods And Evaluation Criteria: The method is novel and may be one of the first unsupervised methods on this topic.
The method simultaneously learns city-specific region embeddings, cross-city distribution matching, and individual-level matching. The idea is intriguing and solid. The evaluation uses public datasets and well-established metrics for the problem at hand. Theoretical Claims: The theoretical foundation of the paper primarily lies in the one-stage method where three components are simultaneously learnt – city-specific region embeddings, cross-city distribution matching, and individual-level matching. The underlying theory is solid. I have checked the mathematical expressions of the model and training objective and found them error-free. Experimental Designs Or Analyses: The experiments are sound, with valid tasks, datasets, and evaluation metrics. The ablation is well designed, with key components replaced or removed. For example, abandoning the one-stage strategy yields worse performance. The parameter sensitivity test is generally well done and makes sense. Supplementary Material: This part has been reviewed; it involves descriptions of data and baselines, plus a number of visuals aiding the understanding of the rationale and results. They do make sense and are useful. Relation To Broader Scientific Literature: In an even broader sense, there is increasing work on self-supervised pre-training in urban contexts. Some of it claims to build foundation models. Nonetheless, many of these works only focus on specific cities - pre-training across multiple cities is scarce. This submission then opens an interesting path: bridges can be established for sharing the knowledge accumulated in different cities. This is interesting in a broader foundation-model sense. Essential References Not Discussed: I do not find such related works. Other Strengths And Weaknesses: 1) A promising and sound method is proposed for overcoming a critical challenge in urban computing that has not been fully explored.
2) The unsupervised method is novel, requiring neither data pairs across cities nor manual rules. 3) Solid experiments across cities and tasks. A point missing in the paper is what makes a good pair of cities for alignment. If the two cities are extremely different, will this method still work? This is worth further discussion. Other Comments Or Suggestions: There are some typos in the paper, and it needs careful proofreading. Questions For Authors: Refer to the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Response to W1:** We appreciate the reviewer's insightful question regarding the suitability of city pairs for knowledge transfer. At this stage of research in unsupervised cross-city alignment, we acknowledge that quantitatively defining the ideal criteria for city pair selection remains an open challenge. Testing numerous city pairs to identify patterns is non-trivial, and quantitatively characterizing a city's traits—given the multidimensional nature of urban data—adds further complexity. While such analysis is beyond the scope of this paper, we agree it is a valuable direction for future work. To demonstrate the robustness of our method even when transferring knowledge between highly dissimilar cities, we conducted additional experiments using three Chinese cities and New York City (NYC). Due to data availability, we evaluated carbon emission prediction (the only task with ground truth in both cities), with results of CoRE and the two best baselines (HBA and HSA) summarized below:

|Method|XA / NYC MAE|XA / NYC MAPE|NYC / XA MAE|NYC / XA MAPE|
|:-:|:-:|:-:|:-:|:-:|
|HBA|358.82|4.01|1274.9|59.47|
|HSA|441.67|4.7|1646.09|66.75|
|**CoRE**|**279.69**|**2.43**|**514.49**|**16.53**|

|Method|BJ / NYC MAE|BJ / NYC MAPE|NYC / BJ MAE|NYC / BJ MAPE|
|:-:|:-:|:-:|:-:|:-:|
|HBA|325.18|3.64|531.04|19.52|
|HSA|390.21|4.93|852.23|29.82|
|**CoRE**|**220.83**|**1.44**|**356.71**|**7.65**|

|Method|CD / NYC MAE|CD / NYC MAPE|NYC / CD MAE|NYC / CD MAPE|
|:-:|:-:|:-:|:-:|:-:|
|HBA|239.23|1.39|398.92|38.19|
|HSA|280.35|2.15|601.38|49.17|
|**CoRE**|**211.42**|**0.99**|**269.82**|**22.17**|

The results show that CoRE maintains strong performance despite significant differences between the Chinese cities and NYC, reinforcing its adaptability across diverse urban contexts. We acknowledge that further validation across more international city pairs would strengthen these conclusions, and we identify this as an important direction for future research.
**Response to W2:** We sincerely appreciate the reviewer’s careful reading and valuable feedback. We will perform a round of grammar and consistency checks to further improve the clarity and readability of the paper in the revision. --- Rebuttal Comment 1.1: Comment: I have carefully read the rebuttal of the authors. Some of my concerns are addressed and I will keep my score as accept. --- Reply to Comment 1.1.1: Comment: Thank you very much for taking the time to review our rebuttal. We sincerely appreciate your decision to accept the manuscript and will carefully incorporate your valuable feedback to further improve it.
Summary: This paper addresses a critical question in urban computing: Can we align the latent spaces of different cities to leverage knowledge from one city for analyzing others? To tackle this issue, the authors introduce a one-stage Consistency Region Embedding method (CoRE), which combines region embedding learning with cross-city latent space alignment to generate compatible and comparable region representations. Additionally, they achieve cross-city alignment via two interdependent pathways: latent manifold alignment and latent individual alignment, which address both global and individual perspectives. Finally, the authors present comprehensive experiments across three downstream tasks using datasets from three cities. Claims And Evidence: They claim that the proposed CoRE method ensures compatibility and comparability of region representations from different cities within aligned latent spaces. The experimental results provide evidence to support their claims. Methods And Evaluation Criteria: They achieve cross-city alignment of latent spaces through two interdependent pathways: latent manifold alignment and latent individual alignment. In the latent manifold alignment component, they create virtual parallel common anchors in both spaces, serving as bridges to align the data manifolds based on relative representations. Meanwhile, in the latent individual alignment component, they implement a cross-city attention pipeline that transfers pairwise region correlations between the two spaces, ensuring embedding consistency at the individual region level. The proposed method is not only innovative but also effectively addresses the challenges of cross-city alignment, demonstrating its value and potential in urban computing. Theoretical Claims: I have reviewed them and found no obvious errors. 
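The anchor-based relative-representation idea behind the latent manifold alignment described above can be sketched as follows. Cosine similarity and the orthogonal change of basis are illustrative assumptions; CoRE's virtual parallel anchors are learned, not hand-picked like this.

```python
import numpy as np

def relative_representation(Z, anchors):
    """Map embeddings Z (n, d) to cosine similarities against anchors (k, d)."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    An = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return Zn @ An.T

rng = np.random.default_rng(3)
Z_x = rng.normal(size=(5, 16))                 # "city X" region embeddings
Q, _ = np.linalg.qr(rng.normal(size=(16, 16))) # an arbitrary orthogonal map
Z_y = Z_x @ Q                                  # "city Y": same structure, rotated basis
rel_x = relative_representation(Z_x, Z_x[:3])  # first 3 regions as parallel anchors
rel_y = relative_representation(Z_y, Z_y[:3])
# Angles are invariant under the rotation, so the relative coordinates coincide.
assert np.allclose(rel_x, rel_y)
```

This coincidence of `rel_x` and `rel_y` is the sense in which anchor-relative coordinates make two independently trained latent spaces comparable without hand-crafted correspondence rules.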
Experimental Designs Or Analyses: This paper evaluates the proposed CoRE model across three cross-city urban prediction tasks using real-world datasets and compares its performance to various baselines. Table 1 and Figure 5 present the performance comparison results for the CA and CD datasets, along with findings from the ablation study and parameter analysis. The results are comprehensive and effectively support the authors' claims. Supplementary Material: I have read the appendix. Relation To Broader Scientific Literature: This paper addresses the challenge of aligning latent spaces across cities. On one hand, they align data manifolds of two spaces using the idea of relative representations; on the other hand, they ensure precise alignment of individual regions across cities through a cross-attention mechanism. The proposed approach eliminates the need for manual inter-city correspondence rules, enhancing its applicability to unsupervised latent space alignment tasks. Essential References Not Discussed: Related works are enough. Other Strengths And Weaknesses: Strengths: 1) The paper introduces CoRE, a novel framework that integrates region embedding learning with cross-city latent space alignment. This approach addresses the limitations of existing methods by enabling knowledge transfer across cities without relying on hand-crafted correspondence rules, which is a significant advancement in urban computing. 2) The authors conduct extensive experiments across three downstream tasks (GDP, population, and carbon emissions prediction) using real-world datasets from three cities. The results demonstrate that CoRE outperforms state-of-the-art baselines, showcasing its effectiveness in cross-city prediction tasks. 3) The paper not only provides a theoretical foundation for cross-city latent space alignment but also offers practical insights by eliminating the need for manual inter-city correspondence rules. 
This makes the method broadly applicable to unsupervised latent space alignment tasks in urban analytics. Weaknesses: 1) The paper focuses on socioeconomic predictions using three metrics (GDP, population, and carbon emissions). Expanding the evaluation to include more diverse benchmarks, such as traffic prediction or land use analysis, across different urban science settings would further demonstrate the robustness of the proposed method. Other Comments Or Suggestions: See strengths and weaknesses. Questions For Authors: Could you extend the evaluation of CoRE to other urban science tasks, such as traffic prediction, beyond the current focus on socioeconomic indicators (GDP, population, and carbon emissions)? If not, explain the reason. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Response to Q1:** We acknowledge that evaluating the proposed method across additional tasks would further strengthen its empirical validation. Following your suggestion, we conducted supplementary experiments on traffic flow prediction, using datasets from Xi'an (XA) and Chengdu (CD). Specifically, we trained the predictor using labeled data (i.e., region visit counts derived from taxi trajectories) and pre-trained region representations from one city (XA or CD). The trained predictor was then directly applied to predict traffic flow in the other city (CD or XA). The results are summarized below:

| Method | MAE (XA / CD) | MAPE (XA / CD) | MAE (CD / XA) | MAPE (CD / XA) |
|:-:|:-:|:-:|:-:|:-:|
| HBA | 148.62 | 6.42 | 155.53 | 11.47 |
| HSA | 150.01 | 5.96 | 149.18 | 10.18 |
| **CoRE** | **143.79** | **5.12** | **127.59** | **8.46** |

As shown, CoRE consistently outperforms the baselines, reinforcing its effectiveness. While we recognize that no single study can comprehensively demonstrate superiority across all urban computing scenarios, we believe the three primary tasks in our paper, along with this additional cross-city traffic prediction experiment, provide substantial evidence of our method's robustness. --- Rebuttal Comment 1.1: Comment: The authors' responses have addressed my concerns. I will keep my rating. --- Reply to Comment 1.1.1: Comment: Thank you very much for taking the time to review our rebuttal. We truly appreciate your positive feedback on our work. We will improve the manuscript based on your valuable feedback.
Summary: **Problem:**
- This paper studies the region representation learning problem, with a special focus on cross-city learning
- The cross-city representation learning problem is important because some cities have abundant labelled data, while many others do not
- Existing methods rely on heuristic translation functions to align disjoint embedding spaces
- These typically require hand-crafted rules and require the definition of similar pairs in the two cities, X and Y
- This is inherently a two-stage approach where translation is disconnected from learning

**Solution:**
- The authors propose CORE, which is a region embedding method that both learns and aligns semantic embedding spaces across cities

CORE has three key modules:
- A self-supervised module to learn the primitive embedding spaces of city X and city Y.
- A latent manifold alignment module that aligns the entire representation space across X and Y using anchor points and a reconstruction loss
- A latent individual alignment module that tries to align the representations of semantically similar regions across X and Y. They do this through cross-attention (X→Y and Y→X) and reconstruction of the empirical covariance of the region embeddings: $Z^X (Z^X)^T$.

Claims And Evidence: The authors primarily make claims around improved cross-city alignment. They construct experiments to test this hypothesis. Additionally, the authors perform an ablation study to verify the marginal contribution of each of the components they introduce. Methods And Evaluation Criteria: As mentioned above, the authors perform experiments to test the cross-city alignment hypothesis. They compare primarily to different variants of Yabe et al. 2020, which constitutes the majority of the baselines. Additionally, they compare to a few domain adaptation methods. I believe that other baselines should probably be included in their evaluation. Here are some recent examples:
- Yang, Guang, et al.
"CARPG: Cross-City Knowledge Transfer for Traffic Accident Prediction via Attentive Region-Level Parameter Generation." Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 2023. - Jin, Yilun, Kai Chen, and Qiang Yang. "Selective cross-city transfer learning for traffic prediction via source city region re-weighting." Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2022. Other cross-city learning methods that do require representation learning could also be considered: - Yao, Huaxiu, et al. "Learning from multiple cities: A meta-learning approach for spatial-temporal prediction." The world wide web conference. 2019. Theoretical Claims: No theoretical claims Experimental Designs Or Analyses: Yes, discussed above. Missing baselines Supplementary Material: I reviewed the entire supplement Relation To Broader Scientific Literature: The spatial data sparsity issue likely has many applications to areas of social science, civil engineering or environmental science Essential References Not Discussed: In addition to the missing baselines above, the authors could consider discussing other early works on region embeddings: - Jenkins, Porter, et al. "Unsupervised representation learning of spatial data via multimodal embedding." Proceedings of the 28th ACM international conference on information and knowledge management. 2019. Other Strengths And Weaknesses: **Strengths:** - The alignment modules are well explained and are well motivated - The ablation study is very useful for evaluating marginal impact - The improvement over the baselines studied indicates efficacy **Weaknesses:** - The authors test whether or not CORE can effectively perform cross-city prediction. However, I do think an obvious question is if this comes at the expense of within-city predictive performance. See below for additional thoughts/questions - Missing baselines - Figure 7 is not useful. 
The 3D barchart is very difficult to read or detect differences between hyperparameter settings. I would suggest using heatmaps instead Other Comments Or Suggestions: See thoughts about figure 7. I'm happy to see hyperparameter study but this figure is not useful. Questions For Authors: The authors test whether or not CORE can effectively perform cross-city prediction. However, I do think an obvious question is if this comes at the expense of within-city predictive performance. For example, in Table 1, the authors test the cross-city prediction setting of XA(X) → CD(Y) and CD(X) → XA(Y). The performance on this task is quite good (although there are likely additional baselines to try). My biggest concern is what happens to the individual city representations when predicting on data within that same city. For example, after using CORE, what is the predictive performance of GDP_X using Z_X and GDP_Y using Z_Y. Do these representations degrade? How do they compare to the larger body of region representation research? This is an important question and would warrant further study. Figure 2: Do you really mean GDP or do you mean income levels? Usually GDP is computed at state or national levels and is not tracked at low levels of spatial resolution. Table 1: The authors perform five trials and compute average MAE and MAPE over those trials. Why do they not present estimates of the standard deviations of these same statistics? I see they do some type of t-test; it would be helpful to get more details on how that t-test is constructed. Code Of Conduct: Affirmed. Overall Recommendation: 3
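For reference, the MAE and MAPE metrics discussed in this review can be computed as follows (a minimal sketch; the values are illustrative, not from the paper):

```python
# Hypothetical ground-truth and predicted regional indicator values
# (numbers are illustrative only).
y_true = [100.0, 200.0, 400.0]
y_pred = [110.0, 190.0, 380.0]

n = len(y_true)
# Mean Absolute Error: average absolute deviation, in the indicator's units
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
# Mean Absolute Percentage Error: scale-free, reported as a percentage
mape = 100.0 * sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / n

print(f"MAE = {mae:.2f}, MAPE = {mape:.2f}%")  # → MAE = 13.33, MAPE = 6.67%
```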
Rebuttal 1: Rebuttal: **Response to Q1\&W1:** We appreciate the reviewer raising this question about the trade-off between cross-city and within-city performance. To thoroughly investigate this aspect, we conducted new experiments comparing CoRE's within-city performance against established region embedding methods using human mobility data: HDGE [1] constructs flow graphs and applies graph embedding to learn region representations; ZE-Mob [2] models co-occurrence patterns between regions to learn representations. While we acknowledge that some recent methods incorporate multi-source data (e.g., POIs, imagery), we focused our comparison on mobility-based approaches due to data availability constraints in our experimental setup. We tested within-city performance (XA→XA, CD→CD, BJ→BJ) with three downstream tasks. We also included our proposed preliminary region embedding model (named CoRE w/o Alignment, Section 3.1). The results in terms of MAE are presented below:

| Method | GDP (XA/XA) | Population (XA/XA) | Carbon (XA/XA) | GDP (CD/CD) | Population (CD/CD) | Carbon (CD/CD) | GDP (BJ/BJ) | Population (BJ/BJ) | Carbon (BJ/BJ) |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| HDGE | 207.36 | 616.37 | 175.24 | 174.39 | 701.82 | 73.84 | 124.55 | 708.58 | 202.60 |
| ZE-Mob | 199.09 | 597.54 | 169.06 | 173.49 | 687.44 | 72.56 | 122.46 | 695.64 | 198.39 |
| CoRE w/o Alignment | 165.36 | 498.38 | 140.81 | 146.35 | 611.24 | 64.34 | 107.13 | 595.11 | 174.82 |
| CoRE | 172.91 | 508.78 | 143.53 | 148.31 | 621.58 | 65.88 | 107.82 | 608.95 | 176.29 |

The experimental results demonstrate that CoRE maintains strong within-city performance, with only marginal performance degradation compared to city-specific embeddings (CoRE w/o Alignment). Additionally, CoRE outperforms mobility-based baselines. More importantly, although it is true that adding the alignment module comes with a marginal sacrifice of within-city performance, the gain of our CoRE alignment method is something that cannot be overstated.
It tackles a fundamental challenge in region representation that the embeddings are of little use without task-specific ground truth data. CoRE tackles this problem by aligning the embeddings across multiple cities, and thus enabling predictor reuse. [1] Region representation learning via mobility flow, CIKM. [2] Representing urban functions through zone embedding with human mobility patterns, IJCAI. **Response to Q2:** In Figure 2, we refer to regional GDP levels, not income. These GDP estimates are derived from a high-resolution dataset [3] that provides GDP values at a 1km × 1km grid scale. To adapt this data to our study: we mapped the gridded GDP values to our regions, aggregated the values by proportionally summing GDP within each region, and classified regions into three economic tiers based on these aggregated values. This approach allows us to analyze GDP variations at a finer spatial resolution than traditional state- or national-level metrics. [3] Forecasting China’s GDP at the pixel level using nighttime lights time series and population images. GIScience \& Remote Sensing. **Response to Q3:** We appreciate the reviewer's question regarding our statistical analysis. We did indeed calculate standard deviations for all metrics. For instance, the MAE standard deviations of our CoRE for the CD/XA pair were 6.03, 19.72, and 3.34 across the three tasks. Due to space constraints in the table layout, we chose not to include these values in the original submission. The paired t-tests were conducted as follows: 1) Computing performance differences between our method and each baseline for each of the 5 trials, 2) Performing two-tailed significance tests on these differences, 3) Testing the hypothesis that the mean difference equals zero [4]. The consistently significant results (p < 0.01) across all comparisons provide strong evidence that CoRE's improvements are statistically meaningful, not due to random variation. 
We will add these details to the revised manuscript. [4] Hull, David. Using statistical testing in the evaluation of retrieval experiments. SIGIR. **Response to W2:** We additionally conducted comparisons with CrossTReS and CARPG. While we considered the approach by Yao et al., we found it unsuitable for our unsupervised setting as it requires labeled data in both source and target cities. The results of CrossTReS, CARPG, and CoRE in terms of MAE are presented below:

| Method | GDP (XA / CD) | Population (XA / CD) | Carbon (XA / CD) | GDP (CD / XA) | Population (CD / XA) | Carbon (CD / XA) |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| CrossTReS | 205.48 | 767.26 | 153.38 | 208.17 | 776.82 | 148.05 |
| CARPG | 190.41 | 653.27 | 136.02 | 186.83 | 641.94 | 143.88 |
| **CoRE** | **178.93** | **620.15** | **127.45** | **170.25** | **580.98** | **136.56** |

The experimental results demonstrate that CoRE outperforms these baselines. Similar results are observed in the other four city pairs. These extensive comparisons further validate CoRE's effectiveness in cross-city knowledge transfer. We will incorporate these results into the revised manuscript. **Response to W3:** We will replace 3D barcharts with heatmaps in the revision. --- Rebuttal Comment 1.1: Comment: I appreciate the additional insights provided in the rebuttal. In particular, the within-city result sheds further light into the behavior of CoRE. I would encourage the authors to include this discussion in subsequent manuscripts. Additionally, the comparison against recent methods (CARPG and CrossTReS) further validates the claims in the paper. After reading the authors' response I raise my score to a 3. --- Reply to Comment 1.1.1: Comment: Thank you very much for taking the time to review our rebuttal. We sincerely appreciate your decision to raise the score. We will incorporate the within-city results and new baselines into the manuscript based on your valuable feedback.
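The paired t-test protocol the authors describe (per-trial differences, then a two-tailed test on their mean) can be sketched as follows. The trial values are hypothetical; to stay dependency-free, the sketch computes the t statistic by hand and compares it against the standard two-tailed critical value for df = 4 at alpha = 0.01 rather than computing a p-value.

```python
import math

# Hypothetical per-trial MAE values for a baseline and for CoRE (5 trials,
# matching the paper's protocol; the numbers are illustrative only).
baseline = [150.1, 149.2, 151.0, 150.5, 149.7]
core = [143.8, 144.5, 143.1, 144.0, 143.6]

# 1) per-trial performance differences
d = [b - c for b, c in zip(baseline, core)]
n = len(d)

# 2) paired t statistic: mean difference over its standard error
mean_d = sum(d) / n
var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)   # sample variance
t_stat = mean_d / math.sqrt(var_d / n)

# 3) two-tailed critical value for df = n - 1 = 4 at alpha = 0.01
t_crit = 4.604
print(t_stat > t_crit)  # → True (the improvement is significant at p < 0.01)
```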
Summary: This paper tackles the challenge of transferring urban region embeddings across cities. Instead of the commonly used two-stage approach—first learning city-specific embeddings and then mapping among them—the authors propose a unified one-stage framework called CoRE. The idea is to learn region embeddings from human mobility data while aligning the latent spaces of different cities in one joint process. In this paper, two key mechanisms are designed: one to embed regions from different cities into distinct latent spaces, and another to align their latent space manifolds while ensuring fine-grained compatibility. Experiments conducted on datasets from three cities show that CoRE consistently outperforms several strong baselines on different downstream tasks. Claims And Evidence: This paper has three major claims: the first is that a unified framework works better than previous two-stage approaches, which ensures compatibility and comparability of region representations from different cities within aligned latent spaces; the second is that cross-city alignment of latent spaces can account for the complex interplay between cities without relying on explicit anchoring pairs of region representations across cities based on human-crafted rules; the third is that by properly aligning the latent spaces, task predictors trained in a data-rich city can be directly applied to another city with minimal degradation in performance. This paper's claims have been supported by well-designed and well-executed experiments, in my opinion. The experimental results clearly indicate that CoRE outperforms various baselines as shown in section 4.2.2, including methods based on hand-crafted anchoring pairs, unsupervised translation techniques, and standard domain adaptation approaches. The improvements in MAE and MAPE are not only consistent across city pairs but are also statistically significant (section 4.3).
The ablation study shows that both the manifold alignment and the individual alignment modules are crucial for the overall performance (section 4.4). Visual analysis using t-SNE plots demonstrates that region embeddings from different cities become more tightly clustered and semantically consistent after alignment, which visually supports the authors' claims (appendix A.4). The downstream tasks are not limited to a single problem but span three distinct tasks (predicting GDP, population, and carbon emissions), demonstrating the method's strong applicability across domains for cross-city knowledge transfer (Table 1 and Table 2). Overall, I am very positive about the claims made in the paper, and the authors have provided good evidence to support them. I am very satisfied with both the claims and evidence part. Methods And Evaluation Criteria: The authors start by constructing a mobility graph using human mobility data and then apply a graph attention network (GAT) to learn the initial embeddings. This choice is well justified as GATs are effective at capturing complex relationships in spatial data (section 3.1.2). To bridge the gap between the latent spaces of different cities, the authors introduce a set of virtual parallel common anchors. By computing relative representations of each city's region embeddings with respect to these anchors, they align the global structure of the latent spaces using an L2 loss (section 3.2). Beyond aligning the overall structure, the method also leverages a cross-attention mechanism to ensure that pairwise correlations between regions are maintained across cities (section 3.3). This cycle-consistency approach forces the fine-grained structure of the embeddings to be preserved. The proposed CoRE is evaluated on predicting socioeconomic indicators (GDP, population, carbon emissions), which are realistic and meaningful tasks in urban analytics (Table 1 and Table 2).
The use of MAE and MAPE is standard and appropriate for regression problems like these. The authors compare CoRE against a variety of strong baselines, including rank-based methods, hierarchical approaches, and domain adaptation techniques (Section 4.2.2 and section 4.2.3). This comprehensive set of comparisons adds credibility to the results. The paper also includes experiments that explore the impact of the number of virtual anchors and the weights for the alignment losses (section 4.4.1 and section 4.4.2). This analysis helps demonstrate the robustness of the approach. Overall, I believe that the authors have done well with the methods and the evaluation section. The evaluation supports the methodological claims in several ways, and I think it is sufficient to support all of them. Theoretical Claims: Overall, I am satisfied with the theoretical part. Some minor questions: 1. In Equation 1, how are the graphs Gx and Gy constructed for cities X and Y? This part is not clearly explained in the paper. A one- or two-sentence explanation would help readers understand it. 2. In section 3.3, for the destination region rj given ri, what are the meanings of rj and ri here? 3. How is Equation 7 calculated? Will the transpose be calculated first or later? 4. Equation 11 is a commonly used attention mechanism; the authors do not need to spend space explaining this part. Experimental Designs Or Analyses: I have explained my thoughts on experimental designs and analysis in both Claims and Evidence and Methods and Evaluation Criteria. Overall I believe the experiment section is good enough to support the claims of the paper. I am satisfied with this part. Supplementary Material: The supplementary section provides more information on data, baselines, performance comparison with baselines, visualization of results and further ablation studies. I am good with the supplementary section.
Relation To Broader Scientific Literature: I can see the paper's potential impact not only in this ML application domain but also in machine learning more broadly, especially on questions related to latent space alignment. Latent space alignment is a common problem in many fields, for example, in e-commerce recommendation across different categories. The paper's proposed methods could be deployed with ease in other ML application fields with similar settings. Essential References Not Discussed: I think that the authors have provided enough references. Other Strengths And Weaknesses: Overall, I think that this is a strong paper. The paper is well written, the algorithms are very well and thoroughly designed, and this paper is on a very interesting topic. I enjoyed reading the paper. I have explained my thoughts in detail in my previous comments. Some further suggestions: The paper uses data only from cities in China. I am very curious to see how the algorithm would perform across cities in different countries (e.g., NYC to Beijing). Some experiments on this would be very interesting. Figure 3 contains a bit too much information; it took me a lot of time to understand this part. Why are the population results in Figure 6 different from the GDP and carbon results? Some explanation is needed for this part. Other Comments Or Suggestions: See Other Strengths and Weaknesses Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
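The anchor-based relative representation idea discussed in this review can be illustrated with a small numpy sketch. This is our own toy example, not the paper's code: it shows the key property that anchor-relative coordinates are invariant to orthogonal rotations of the raw embedding space, which is what makes otherwise incomparable latent spaces comparable.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, dim, n_anchors = 6, 8, 3

Z = rng.normal(size=(n_regions, dim))   # region embeddings of one city
A = rng.normal(size=(n_anchors, dim))   # shared virtual anchors

def relative_rep(Z, A):
    """Cosine similarity of each region embedding to each anchor."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    return Zn @ An.T                    # (n_regions, n_anchors)

# Apply the same orthogonal rotation to embeddings and anchors
# (the QR decomposition of a random matrix yields an orthogonal Q).
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))

rel = relative_rep(Z, A)
rel_rot = relative_rep(Z @ Q, A @ Q)

print(np.allclose(rel, rel_rot))  # → True: anchor-relative coordinates
                                  # survive rotations of the raw latent space
```

Because two cities' embedding spaces can differ by exactly this kind of arbitrary transformation, describing each region by its relation to shared anchors gives a common coordinate system that an L2 loss can then align.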
Rebuttal 1: Rebuttal: **Response to W1:** We appreciate the reviewer's valuable suggestion regarding cross-country evaluation. To demonstrate the generalizability of our method, we conducted additional experiments using data from three cities of China and New York City (NYC) - cities with significantly different urban characteristics. Due to data availability constraints, we evaluated performance on carbon emission prediction (the only task with available ground truth in both cities). The results of CoRE and the two best baselines (HBA and HSA), measured by MAE and MAPE, are presented below:

| Method | MAE (XA / NYC) | MAPE (XA / NYC) | MAE (NYC / XA) | MAPE (NYC / XA) |
|:-:|:-:|:-:|:-:|:-:|
| HBA | 358.82 | 4.01 | 1274.9 | 59.47 |
| HSA | 441.67 | 4.7 | 1646.09 | 66.75 |
| **CoRE** | **279.69** | **2.43** | **514.49** | **16.53** |

| Method | MAE (BJ / NYC) | MAPE (BJ / NYC) | MAE (NYC / BJ) | MAPE (NYC / BJ) |
|:-:|:-:|:-:|:-:|:-:|
| HBA | 325.18 | 3.64 | 531.04 | 19.52 |
| HSA | 390.21 | 4.93 | 852.23 | 29.82 |
| **CoRE** | **220.83** | **1.44** | **356.71** | **7.65** |

| Method | MAE (CD / NYC) | MAPE (CD / NYC) | MAE (NYC / CD) | MAPE (NYC / CD) |
|:-:|:-:|:-:|:-:|:-:|
| HBA | 239.23 | 1.39 | 398.92 | 38.19 |
| HSA | 280.35 | 2.15 | 601.38 | 49.17 |
| **CoRE** | **211.42** | **0.99** | **269.82** | **22.17** |

Our findings show that CoRE maintains strong performance even when transferring knowledge between these geographically and culturally distinct cities. This suggests our method's robustness in handling cross-country urban computing tasks. We acknowledge that further validation across more international city pairs would strengthen these conclusions, and we identify this as an important direction for future research. **Response to W2:** We thank the reviewer for the comment. Figure 3 shows how we align the hidden data spaces of two cities (like Chengdu and Xi'an) by using a few common anchor points as reference. Instead of comparing raw values directly, we compare how each data point relates to the anchors — like checking how far each region is from landmarks.
First, we calculate these relative similarities in each city, and then use those to build a new space where everything is described based on those anchor relationships. We do this again to create a second, more refined version of that space. Finally, we align the two cities by minimizing the difference between their refined spaces, helping the model learn structural similarities even when the cities have different data distributions. We hope this explanation helps clarify the question. **Response to W3:** We thank the reviewer for the insightful comment. The reason the population results look different from GDP and carbon in Figure 6 is mostly because population patterns are naturally more uneven and complex across urban areas. People tend to live in places based on various personal and social factors—like job locations, housing affordability—which creates irregular and less predictable spatial distributions. GDP and carbon emissions usually reflect clearer, more structured patterns tied to specific economic or industrial areas. Because of this complexity, population prediction has relatively more fluctuation and benefits greatly at first from adding more anchors. In contrast, GDP and carbon emissions data don't need as many anchors to achieve stable predictions, which explains their relatively smoother performance trends when increasing N_a. **Response to minor questions:** We thank the reviewer for pointing out areas that could benefit from further clarification. First, regarding how the graphs are constructed for each city: the mobility graphs are created using human mobility data, where each node represents a region within the city. Edges between nodes represent how frequently people travel from one region to another. These frequencies are normalized based on the total number of trips originating from each region. Second, in Section 3.1.3, the terms r_i and r_j refer to urban regions.
Specifically, r_i is the origin region and r_j is the destination region in a recorded trip. The model is trained to predict how likely people are to travel from one region to another based on these patterns. Third, for the computation described in Equation 7, the transpose of the region data is calculated first. After that, the model performs a matrix multiplication with a set of common anchor points. This operation generates a table that shows how similar each anchor is to every region in the city, helping align data across different cities. We will revise the manuscript to clearly explain these parts, so future readers can understand them more easily.
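The mobility-graph construction the authors describe (nodes are regions; edge weights are trip counts normalized by each origin region's total outgoing trips) can be sketched as follows, with hypothetical trip records:

```python
import numpy as np

# Hypothetical taxi trips as (origin_region, destination_region) pairs.
trips = [(0, 1), (0, 1), (0, 2), (1, 2), (1, 0), (2, 0), (2, 0), (2, 1)]
n_regions = 3

# Edge weights: trip counts between each ordered pair of regions.
counts = np.zeros((n_regions, n_regions))
for origin, dest in trips:
    counts[origin, dest] += 1

# Normalize each row by the total number of trips leaving that region.
W = counts / counts.sum(axis=1, keepdims=True)
print(W)  # row 0 is [0, 2/3, 1/3]; every row sums to 1
```

Each row of `W` is then a transition distribution over destination regions, which is the form of adjacency a GAT-style encoder can consume.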
Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA
Reject
Summary: The paper introduces RoLoRA, a federated fine-tuning framework for Large Language Models (LLMs) based on alternating optimization of LoRA adapters. RoLoRA optimizes both up-projection and down-projection matrices in LoRA adapters alternately, ensuring better convergence and adaptability in federated learning. Theoretical analysis demonstrates that RoLoRA achieves exponential convergence to the global optimum in a linear setting, outperforming methods that freeze down-projections. Empirical results on RoBERTa-Large and Llama-2-7B across GLUE, commonsense reasoning, and other tasks show that RoLoRA maintains robustness against increasing client numbers and reduced fine-tuning parameters while reducing communication overhead by half compared to standard LoRA fine-tuning. Claims And Evidence: The work states "... we explore the necessity of learning down-projection matrices and propose a federated fine-tuning framework with computational and communication advantages." in Lines 44 to 47 (second column). However, I am not entirely certain how RoLoRA is saving on either computation or communication. Methods And Evaluation Criteria: From Algorithm 1, it seems that the authors are considering 1 round made up of two stages where the first stage trains A matrices and the second stage trains B matrices. In that case, saying that "Please note that in all tasks, we compare the performance of the three methods under the same number of communication rounds." in Lines 321-323 (second column) is questionable, since RoLoRA does not abide by the traditional definition of a federated round. I would appreciate some clarification on the communication cost per round of (a) server to all clients, and (b) one client to server. It seems like it should be exactly the same as the original LoRA in the FedAvg setting. However, lines 185 to 187 state "In each communication round, the number of trainable parameters in the model is effectively halved compared to FedAVG of LoRA."
Theoretical Claims: 1. Why is LoRA rank $r$ fixed to 1 and not generalized like the other dimensions are, for section 5 "Analysis"? I think keeping $r=1$ is skewing the results of Theorem 5.4, especially those related to the sufficient number of samples $m$. 2. For Theorem 5.4, would $q$ grow with the model dimension $d$? (And potentially with $r$ as well?) If so, how do we justify the sufficient number of samples $m$ rising with $d$? Experimental Designs Or Analyses: 1. In Figure 3, the variance of RoLoRA seems to be higher than that of the baselines; why is that so? And why is that not the case for larger models, as shown in Table 2? 2. In Figure 3, it seems like the curves are cut off before all the baselines have converged. Can the authors please produce plots for, say, a total round count of 100? Supplementary Material: I have skimmed through the appendices with proofs, checking the intuition of the proofs. Relation To Broader Scientific Literature: The contributions of RoLoRA can advance the understanding and application of federated learning techniques, particularly in the context of fine-tuning large language models. By demonstrating the benefits of alternating optimization and the importance of learning both projection matrices, the authors provide insights and practical methods for improving model performance in federated settings, which can potentially pave the way for privacy-preserving finetuning of LLMs. However, the ideas seem pretty close to what FLoRA [1] has discussed, while the findings are opposite to those of FLoRA. (More on this in the next section of this review.) [1] FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations (Wang et al., NeurIPS 2024) Essential References Not Discussed: FLoRA [1] and its baselines should be mentioned in detail in this paper.
In FLoRA, the correct way to aggregate the A and B LoRA matrices is to first multiply them for each client, while this work adds the A and B matrices separately and then updates the model with the multiplication of those added A and B matrices. Seems like FedIT [2] (mentioned in FLoRA) is similar to RoLoRA as well. I would appreciate clarification on how RoLoRA is different from FedIT and FLoRA. I would also recommend that the authors add them as baselines. [1] FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations (Wang et al., NeurIPS 2024) [2] Towards Building the Federated GPT: Federated Instruction Tuning (Zhang et al., ICASSP 2024) Other Strengths And Weaknesses: Strengths: The paper is well-written and the work is well-motivated. The solution is elegant. Weakness #1: The biggest weakness of this work would be the lack of comparison against and discussion of FLoRA [1]. I have discussed it in more detail under the "Essential References Not Discussed" part of the review. [1] FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations (Wang et al., NeurIPS 2024) Weakness #2: The related work section can be improved. Instead of just mentioning all the related works, I would appreciate it if the authors listed how RoLoRA differs from all those works. Other Comments Or Suggestions: Comment #1: Figure 1 can be improved with some more details. The boxes can be labeled with what they are (e.g., the big blue box being W, the smaller ones being A and B, the cloud icon being a server, etc.). Comment #2: A minor comment is that the use of "Large" language model to describe RoBERTa-Large might be questionable. It is rather a medium-sized language model. Questions For Authors: The theoretical analysis is insightful and backed by the empirical results; however, I would appreciate some comments about what differences or challenges non-linearity adds to RoLoRA.
Particularly, if we repeat the two-layer non-linear NN experiment on a one-layer linear NN, do we see a bigger accuracy improvement between LoRA/FFA-LoRA and RoLoRA? Code Of Conduct: Affirmed. Overall Recommendation: 3
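The aggregation discrepancy this review raises can be made concrete with a small numpy example (our own toy illustration, not code from either paper): averaging the full products A_i B_i, as FLoRA advocates, is not the same as multiplying the separately averaged A and B, which introduces spurious cross-client terms.

```python
import numpy as np

# Two clients' rank-1 LoRA factors (a deliberately adversarial 2x2 toy case).
A1, B1 = np.array([[1.0], [0.0]]), np.array([[1.0, 0.0]])
A2, B2 = np.array([[0.0], [1.0]]), np.array([[0.0, 1.0]])

# FLoRA-style aggregation: average the full products A_i @ B_i.
avg_of_products = (A1 @ B1 + A2 @ B2) / 2

# FedIT/FedAvg-style aggregation: average A and B separately, then multiply.
product_of_avgs = ((A1 + A2) / 2) @ ((B1 + B2) / 2)

print(avg_of_products)   # diag(0.5, 0.5): the intended average update
print(product_of_avgs)   # all entries 0.25: spurious cross-client terms appear
print(np.allclose(avg_of_products, product_of_avgs))  # → False
```

Note that RoLoRA sidesteps this discrepancy differently: because one factor is frozen and shared by all clients in each round, averaging the single trainable factor is exact, with no cross terms.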
Rebuttal 1: Rebuttal: Thanks for the detailed review. We address your concerns below: **Federated Round Definition, Communication and Computation Efficiency.** Thanks for the helpful comments. In our paper, each communication round refers to one upload/download of either matrix A or B. RoLoRA updates and transmits only one matrix per round, halving both communication and computation compared to FedIT—see [table](https://anonymous.4open.science/r/random-3C54/R4-2.png). We apologize for any confusion from the use of "iteration" in Algorithm 1, which was for alignment with Algorithm 2 and theoretical clarity. All experiments were rigorously benchmarked using communication rounds. We will revise the manuscript to clearly distinguish rounds from iterations and add a footnote in Algorithm 1 to clarify this. **Rank-1 Analysis and Sample Complexity.** Thanks for the insightful questions. Regarding sample complexity, Theorem 5.4 shows that $q$ scales with $d$ for rank-1, and generalizes to $O(dr)$ for higher ranks. This follows from the $\epsilon$-net argument (Eq. 34), where the covering number grows exponentially with dimension $dr$ [1, Sec. IV.A], leading to the updated $q$ (Line 755). The sample complexity can be justified by noting that in models like $Y=Xab^\top$, the $2d$ unknown parameters necessitate a sample complexity proportional to $d$. The rank-1 case highlights FFA-LoRA's core limitation—its inability to align down-projection—which persists empirically in higher ranks (see Sec. 5.2), in language tasks. We further discuss the generalization with Reviewer JAAe under Rank-1 Limitation and Generalizability, and note the core issue of constrained expressivity remains unchanged in the higher-rank case. [1] Nayer, S., & Vaswani, N. (2022). Fast and sample-efficient federated low rank matrix recovery from column-wise linear and quadratic projections. IEEE Trans. Inf. Theory **Higher variance in Fig.3.** Thanks for the question. The higher variance in Fig.
3 before convergence is expected due to different initializations, which can lead to varying optimization trajectories. After convergence, RoLoRA shows low variance, consistent with its stable performance in Tables 1 and 2. FFA-LoRA's variance differs across Tables 1 and 2 mainly due to task difficulty—it's low on simpler tasks (e.g., GLUE, BoolQ) but higher on complex ones (e.g., PIQA, SIQA) due to sensitivity to initialization, aligning with its weaker performance in Table 1 and 2. **Convergence curve for 100 Communication rounds.** The [figure](https://anonymous.4open.science/r/random-3C54/convergence-100.png) shows the 100-round extension of Fig. 3, where RoLoRA consistently converges faster and achieves the highest accuracy. Fig. 3 aimed to compare convergence under a fixed sample budget, not full convergence. As Table 2 shows, this budget is sufficient for all methods with 3 clients, but only RoLoRA fully converges with 50 clients—highlighting its efficiency in low-resource settings. **Differences from FLoRA and FedIT.** Thanks for the questions. FedIT is equivalent to our FedAVG with LoRA baseline ("LoRA" in our paper), and we provide extensive comparisons. We'll explicitly cite and discuss FedIT in the revision. We already discuss FLoRA (Lines 31, 113, 162). FLoRA shares RoLoRA's motivation but differs in approach: FLoRA aggregates full matrix products of LoRA-A and LoRA-B, while RoLoRA freezes one matrix for efficient, exact updates (Eq. 3-4). See Sec. 3.2 for discussion. We've added [table](https://anonymous.4open.science/r/random-3C54/flora.png) comparing RoLoRA and FLoRA under IID setting. In the 3-client setting, we ran 500 rounds and scaled rounds down proportionally with more clients to keep the total sample budget fixed. RoLoRA consistently outperforms FLoRA across tasks and client counts. 
While FLoRA eventually converges (e.g., 83.3% on MNLI after 4000 rounds), it does so much more slowly, highlighting RoLoRA's faster convergence and better scalability. Please also see the [table](https://anonymous.4open.science/r/random-3C54/cost-compare.png) comparing the communication cost and time cost. RoLoRA and FFA-LoRA have the lowest communication/time costs, while FLoRA is much more expensive. **Fig. 1 Improvement and Use of Large in RoBERTa-Large.** Thanks for the suggestions. We've updated the [figure](https://anonymous.4open.science/r/random-3C54/overview.png) and will refer to RoBERTa-Large simply as a language model, without implying scale beyond its name. **Related Works.** We agree and will revise the related work section to clearly highlight RoLoRA's key differences from prior methods. **RoLoRA on Linear Model.** As suggested, we ran an experiment removing the ReLU from the NN on MNIST—see [figure](https://anonymous.4open.science/r/random-3C54/non-linear.png). Across both linear and non-linear settings, all methods perform similarly, with RoLoRA showing a modest improvement in the non-linear case, likely due to its better utilization of the added expressiveness from ReLU.
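To make the exact-aggregation point in the rebuttal concrete, here is a minimal numpy sketch (illustrative only, not the authors' code; dimensions, seeds, and variable names are our own) showing that averaging LoRA factors separately differs from averaging their products, whereas sharing one frozen factor makes the two coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_clients = 6, 2, 3

# Per-client LoRA factors: B_i (d x r) up-projection, A_i (r x d) down-projection.
Bs = [rng.normal(size=(d, r)) for _ in range(n_clients)]
As = [rng.normal(size=(r, d)) for _ in range(n_clients)]

# Averaging factors separately (FedAvg of LoRA) is an inexact update:
naive = np.mean(Bs, axis=0) @ np.mean(As, axis=0)
exact = np.mean([B @ A for B, A in zip(Bs, As)], axis=0)
assert not np.allclose(naive, exact)  # avg(B) @ avg(A) != avg(B @ A)

# If one factor is shared/frozen across clients, averaging the other
# factor reproduces the exact averaged product -- the property that
# alternating which factor is trained each round exploits.
A_shared = As[0]
avg_then_mul = np.mean(Bs, axis=0) @ A_shared
mul_then_avg = np.mean([B @ A_shared for B in Bs], axis=0)
assert np.allclose(avg_then_mul, mul_then_avg)
```

The second pair of updates agrees by linearity of the average, which is why freezing one matrix per round yields exact server-side aggregation.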
Summary: The paper introduces RoLoRA, a federated fine-tuning framework that employs alternating optimization of LoRA adapters to enhance model expressiveness and robustness. By theoretically and empirically demonstrating the necessity of learning both down-projection and up-projection matrices, the authors show that RoLoRA outperforms existing methods (e.g., FedAVG of LoRA, FFA-LoRA) in terms of accuracy, communication efficiency, and robustness under varying client numbers and parameter budgets. The theoretical analysis on a linear model and extensive experiments on RoBERTa-Large and Llama-2-7B validate the framework’s advantages. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: This paper proposes a federated learning (FL) algorithm specifically designed for LoRA fine-tuning of LLMs. Therefore, existing federated LLM fine-tuning algorithms and broader federated learning literature on LLMs are relevant to this work. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The alternating optimization strategy for LoRA adapters is innovative, addressing the limitations of prior methods that either aggregate matrices inaccurately or freeze critical parameters. 2. The convergence analysis on a simplified linear model, while idealized, offers meaningful insights into the necessity of training both projection matrices. The exponential convergence guarantee strengthens the method’s credibility. Weaknesses: My primary concern is that the algorithm analysis is conducted only on a few special cases, such as the signal vector of the LoRA module, a linear regression objective, the homogeneous setting, and the Freezing-A scheme.
While these analyses provide valuable insights, there remains a significant gap between these cases and the broader federated LLM fine-tuning framework proposed in this paper. A more comprehensive theoretical analysis covering the full scope of the proposed method would strengthen the work. In Lemma 5.3, the assumption that $\delta^{(t)} \le \delta^{(t-1)} \le \cdots \le \delta^{(0)}$ appears overly restrictive and may be difficult to satisfy in practice. Additionally, the lemma's description does not specify which algorithm is used to obtain the stated bound. Moreover, since the error bound applies only to $a$, should $b$ be frozen in this scenario? Clarifying these aspects would enhance the rigor and applicability of the theoretical analysis. Other Comments Or Suggestions: No. Questions For Authors: Please refer to the weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the thoughtful review. We appreciate the recognition of RoLoRA's contributions and the strengths of our alternating optimization strategy, theoretical insights, and empirical results. We address the main concerns below: **Limited Theoretical Scope.** We appreciate the reviewer's thoughtful feedback regarding the scope of our theoretical analysis. First, we would like to clarify that our convergence analysis, including comparisons between RoLoRA and FFA-LoRA, also covers the heterogeneous setting (Appendix A.5 at Line 1430). The analysis mirrors the homogeneous case: RoLoRA reduces the angle to the ground truth $\mathbf a$, ensuring convergence, while FFA-LoRA's loss remains tied to its initial angle. This has been discussed at Line 281 in the main text. While our theoretical analysis focuses on the rank-1 LoRA, linear regression setting, this case remains foundational and highly non-trivial. By directly comparing the solutions of RoLoRA and FFA-LoRA under this simplified yet fundamental scenario, we rigorously demonstrate the inherent limitations of FFA-LoRA in representing low-rank updates. This provides critical insights into the expressiveness of FFA-LoRA even in its most basic form, establishing a baseline for understanding its behavior. Our empirical validation on a neural network and MNIST shows the theorem can be extended to higher-rank and non-linear settings. Critically, the results on realistic LLM tasks bridge theory and practice, showing that the phenomena observed in our theoretical framework persist in non-linear, non-convex settings. This alignment mirrors methodologies in prior works [1, 2], which adopt simplified setups to distill core principles before validating them in real-world contexts. A full theoretical comparison of federated LLM algorithms across all settings is currently infeasible.
For neural networks, while convergence analyses exist, they rely on loss landscape assumptions (e.g., smoothness, PL-conditions) that preclude direct comparison of optima between algorithms. Instead, our work focuses on a tractable case where direct comparisons are feasible, allowing us to uncover provable limitations that also hold in real-world scenarios. [1] Collins, Liam, et al. "Exploiting shared representations for personalized federated learning." (ICML 2021). [2] Collins, Liam, et al. "Fedavg with fine tuning: Local updates lead to representation learning." (NeurIPS 2022). **Assumptions on decreasing angles.** Thank you for pointing this out. To clarify, although we assume a decreasing sequence of angles in Lemma 5.3, this result is used as an **intermediate** step in the proof of Theorem 5.4. In particular, we demonstrate in the proof of Theorem 5.4 at Line 1027 that the angle decreases in the first iteration. Building on this, we apply an inductive hypothesis to show that the decreasing trend holds for all subsequent iterations (see Line 1039). In summary, the decreasing angle sequence is not assumed in the proof of the main result (Theorem 5.4), but is instead derived as part of the argument. **Algorithm used for theoretical analysis.** Lemma 5.3 is tied to the update rule in Alg. 2. We will revise the lemma's description to make this connection more explicit. **Error bound on a.** Thank you for the insightful question. The error bounds in Lemma 5.3 and Theorem 5.4 are derived based on Alg. 2, which updates both $\mathbf{a}$ and $\mathbf{b}$. The reason the bound applies specifically to the angle distance of $\mathbf{a}$ is that $\mathbf{b}$ is fully optimized at each client, while $\mathbf{a}$ is updated via gradient descent. This ensures that $\mathbf{b}$ is always optimal with respect to the current $\mathbf{a}$, allowing us to focus the analysis on the convergence behavior of $\mathbf{a}$.
We designed RoLoRA for the linear regressor using alternating minimization (for $\mathbf{b}$) and gradient descent (for $\mathbf{a}$) to decouple their updates and eliminate the potential impact of insufficient local updates of $\mathbf{b}$ on the convergence of $\mathbf{a}$. We will clarify this motivation and its implications in the revised manuscript. We hope these clarifications address the reviewer's concerns. Please feel free to reach out if the reviewer has any further concerns, and we would be glad to discuss them with the reviewer.
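A toy sketch of the alternating scheme described in this rebuttal may help readers: the following is an illustrative re-implementation in the spirit of Algorithm 2 (not the authors' code; all dimensions, step sizes, and round counts are made up) for a noiseless rank-1 federated linear regression, where each client solves for $\mathbf{b}$ in closed form and $\mathbf{a}$ takes an averaged gradient step followed by normalization:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, n_clients, n_rounds, lr = 8, 200, 3, 50, 0.1

# Noiseless rank-1 ground truth: Y_i = X_i a* b*^T, with unit-norm a*.
a_star = rng.normal(size=d); a_star /= np.linalg.norm(a_star)
b_star = rng.normal(size=d); b_star *= 2.0 / np.linalg.norm(b_star)
Xs = [rng.normal(size=(n, d)) for _ in range(n_clients)]
Ys = [X @ np.outer(a_star, b_star) for X in Xs]

a = rng.normal(size=d); a /= np.linalg.norm(a)
for _ in range(n_rounds):
    # Each client fully minimizes over b given the shared a (closed form
    # least squares), then the server averages the local b's.
    bs = []
    for X, Y in zip(Xs, Ys):
        z = X @ a
        bs.append(Y.T @ z / (z @ z))
    b = np.mean(bs, axis=0)
    # One averaged gradient step on a with b fixed, then renormalize to
    # unit length, mirroring the normalization step for a.
    grads = [X.T @ (X @ np.outer(a, b) - Y) @ b / n for X, Y in zip(Xs, Ys)]
    a -= lr * np.mean(grads, axis=0)
    a /= np.linalg.norm(a)

# The angle between a and (+/-) a* shrinks to numerically zero.
angle_err = 1.0 - abs(a @ a_star)
assert angle_err < 1e-3
```

Because $\mathbf{b}$ is re-solved exactly against the current $\mathbf{a}$ every round, the only quantity whose convergence needs tracking is the angle of $\mathbf{a}$, which matches the decoupling motivation stated above.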
Summary: This work explores a better approach for training LoRAs in federated learning. Given a LoRA $B A$, the naive approach is to aggregate the updates of both weights simultaneously as in $\mathbb{E}_i[B_i] \mathbb{E}_i[A_i]$ for clients $i$ (which is not equal to $\mathbb{E}_i[B_i A_i]$). In this paper, the authors study the following method: train $B$ and aggregate, train $A$ and aggregate, and repeat (or do it in the opposite order). For linear regressors, $A$ is trained for one step and normalized after aggregation. The authors demonstrate that this algorithm performs better experimentally—even when taking the communication budget into account—and theoretically as well. They provide a high-probability analysis of this algorithm (with respect to normal initialization), and show that the LoRA $BA$ converges to the optimal LoRA $B^* A^*$ to an arbitrary error that depends on the number of iterations. On the other hand, other competitive approaches, such as LoRA-FFA, can only converge within an error proportional to $\lVert B^*\rVert$. ## update after rebuttal I thank the authors for clarifying most of my concerns. I stand by my recommendation for acceptance. Claims And Evidence: The authors claim that their method performs better than the baselines and converges to arbitrary error as the number of iterations increases. They also claim that FFA-LoRA is less robust than vanilla FedAvg. Furthermore, they claim that their method is robust across various tasks, number of clients, and number of parameters while still demonstrating communication efficiency. Indeed, the experiments and the analysis are comprehensive and support these claims well in my opinion. The authors mention in the introduction that their alternating algorithm is inspired by prior works in multitask linear representation learning, but its application in the context of LoRAs is novel. 
I initially thought that this could have been explored before in the literature, but the novelty seems to be true based on a quick literature review (see essential references section below), at least within the context of LoRAs and not taking into account the prior works mentioned by the authors. Methods And Evaluation Criteria: The baselines (FFA-LoRA and FedAvg) make sense. I'm not aware of any baselines that might be more competitive, let alone have convergence guarantees. The evaluation is done on various language tasks and is comprehensive enough to demonstrate the effectiveness of RoLoRA. Theoretical Claims: From a first reading, the proof seems to be correct in general, but I'm not sure about the exact constants, etc. The steps in the proof are explained equation by equation, which is great for verifying the correctness. Experimental Designs Or Analyses: The experimental design is sound. The experiments are federated learning setups of well-known language tasks (evenly splitting the dataset among clients). This is good for demonstrating the shortcomings of the baselines that RoLoRA aims to fix. The analysis is done under a setup that is realistic to some extent (normal initialization of LoRA and bounded optimum $\lVert B^* \rVert$). However, the analysis is done for rank-1 LoRAs and the optimal output is linear wrt some optimal LoRA. It would be interesting to know whether the analysis can be extended to the general case for rank > 1 and whether the assumption of bounded optimum still suffices as it is when fine-tuning a LoRA wrt an arbitrary loss. Also, the authors assume (in the proof) that the angles $\delta_t$ form a decreasing sequence, but is this always a valid assumption? It might not hold in the stochastic setting. This assumption should be stated clearly along with Assumption 5.1 / A.12. Supplementary Material: I checked all of the supplementary material. I read the proofs without checking the algebra in detail.
The approach seems to be sound, as far as I'm concerned. I did not read Sec A.4.1 carefully (proof of Proposition 5.5). Relation To Broader Scientific Literature: The proposed method is a straightforward adjustment to the way LoRAs are trained in FL. This is relevant to important applications in practice, including next-word prediction. An efficient method that performs significantly better than the baselines is highly appreciated. In addition, the analysis is extensive and can benefit future researchers working on alternating optimization of LoRAs, whether in FL or not. Essential References Not Discussed: Despite thinking that a similar method might have been proposed before, I did not really find a similar algorithm after doing a quick literature search. Perhaps the closest work that hasn’t been cited by the authors is [1]. Still, they only alternate the minimization within the same round and do not alternate the aggregations themselves. Another work that is not directly relevant but interestingly shares some similarity in their algorithm is [2] (if you take the gradient wrt aggregated local parameters and normalize the global update, it is similar). Based on the above, I believe that the authors have not really missed any essential references, but I have put these references here for their interest. [1] *Federated Matrix Factorization: Algorithm Design and Application to Data Clustering*. Wang & Chang. 2020. [2] *Partially Personalized Federated Learning: Breaking the Curse of Data Heterogeneity*. Mishchenko et al. 2023. Other Strengths And Weaknesses: The authors offer extensive experimental results and analysis. Most papers offer one part and make the other part unsatisfactory or leave it out altogether. This is a strength of this paper. The authors also explain the inspiration of their method, which is good transparency. The method is simple yet effective and does not sacrifice the communication budget.
Many algorithms proposed in the literature are unnecessarily complex with marginal improvements. It is not the case here. The authors also did not stop at demonstrating the superiority of RoLoRA experimentally, but offered an extensive analysis of their method on the simple problem, and further showed that freezing down-projections is provably less robust. In terms of weaknesses, I would say that the proof is sometimes repetitive and the notation is not the best. Still, these might be stylistic choices that are ultimately tangential to the merit of this work. Other weaknesses can be found in the Experimental Design & Analyses section above. Other Comments Or Suggestions: I believe the expression for $\tilde{b}$ from line 5 in Algorithm 2 should be written clearly somewhere, even though it might be simple to derive, as it is used directly in the proofs in the appendix, e.g., equation (14). It is not even shown in Table 3. The proof of Proposition 5.5 is a bit too long. I believe there could be a way to make it more compact, but I cannot offer any concrete suggestions other than reusing steps in the proof and putting them under a lemma or something. Questions For Authors: - Some steps in the proof might not be directly generalizable to rank > 1. For example, how would you generalize (17)? Because, in general, $\lVert A^{-1} \rVert $ is not necessarily smaller than $\lVert A \rVert^{-1} $. - What’s the difference between Lemma A.11 and the first part of the proof of Theorem 5.4 in Sec A.4? - How does equation (87) follow? - Performance in Table 12 is pretty close for all methods, including the normal LoRA method. Why do you think that is the case? Code Of Conduct: Affirmed. Overall Recommendation: 4
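An aside on the first question above: in the operator norm, $\lVert A^{-1}\rVert = 1/\sigma_{\min}(A)$, which can far exceed $\lVert A\rVert^{-1} = 1/\sigma_{\max}(A)$. A minimal numpy check (the matrix here is an arbitrary ill-conditioned example chosen for illustration):

```python
import numpy as np

# An ill-conditioned diagonal matrix: sigma_max = 10, sigma_min = 0.1.
A = np.diag([10.0, 0.1])

inv_norm = np.linalg.norm(np.linalg.inv(A), 2)  # = 1 / sigma_min = 10
norm_inv = 1.0 / np.linalg.norm(A, 2)           # = 1 / sigma_max = 0.1

assert np.isclose(inv_norm, 10.0)
assert np.isclose(norm_inv, 0.1)
assert inv_norm > norm_inv  # ||A^{-1}|| can greatly exceed ||A||^{-1}
```

The gap between the two quantities is exactly the condition number, which is why controlling the smallest singular value matters in the rank-$r$ extension.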
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and detailed assessment of our paper. We are encouraged by the overall positive evaluation and would like to clarify several points. **Rank-1 Limitation and Generalizability to higher ranks.** Thanks for pointing this out. Although our analysis is conducted under the rank-1 setting, it remains highly nontrivial. In particular, we provide a direct and rigorous comparison between the solutions of RoLoRA and FFA-LoRA, clearly demonstrating the limitations of FFA-LoRA in this fundamental case. This already yields valuable theoretical insights into the core differences between the two methods. Prior works on similar algorithms and settings (see Sec 4.1 and 4.2 in [1], and Appendix A.3.1 and A.3.2 in [2]) have successfully analyzed both rank-1 and higher-rank cases using comparable techniques. This suggests that the rank-1 and higher-rank analyses share underlying structures and intuition. We believe our proof can be extended to higher ranks with additional technical work, but doing so would not change our main conclusion regarding FFA-LoRA's limited expressiveness. We outline key steps toward this extension and leave it for future work. *Orthonormalization in Algorithm.* For the rank-1 case, it is only required to normalize the updated $a$ to unit length (Line 12 of Algorithm 2). To maintain orthonormality in the higher-rank case, we need to include a QR step $A^+=AR$, where $A$ is updated through GD (cf. Line 11 of Algorithm 2), ${A^+}^\top A^+ = I_r$, and $R$ is upper-triangular. *Error metric.* For the rank-$r$ case, we define the subspace distance of two $d\times r$ matrices (with orthonormal columns) as follows: ${\rm SD}(A, A^*) = \|(I_d - A A^\top) A^*\|_F$. This is a direct generalization of the rank-1 case.
Geometrically speaking, ${\rm SD}(A, A^*) = \sqrt{\sum_{r'=1}^r \sin^2(\theta_{r'})}$, where $\theta_1,\cdots, \theta_r$ are the principal angles between the column spaces of $A$ and $A^*$. *Generalizing Eq. (17).* For rank-$r$, we similarly bound $\|\bar{B} - G\|_{\rm op}$ by ${\rm SD}(A, A^*)$, where $G = B(A^*)^\top A$. The main technical step is controlling $\|(A^\top X_i^\top X_i A)^{-1}\|_{\rm op}$, handled via Bernstein's inequality and an $\epsilon$-net (see [3], Sec. IV-E). *Condition-Number Dependencies.* In the rank-1 case, the convergence and sample complexity depend on the norm of the ground-truth signal vector $b^*$, as shown in Eqs. (13, 44, 123). For higher-rank settings, this dependency generalizes to the operator norm (i.e., the largest singular value). However, the operator norm alone does not fully characterize the problem complexity. Instead, the condition number, which is the ratio of the largest and smallest singular values, is critical. The condition number reduces to 1 when rank = 1. [1] Jain et al., "Low-rank matrix completion using alternating minimization," STOC 2013. [2] Thekumparampil et al., "Statistically and computationally efficient linear meta-representation learning," NeurIPS 2021. [3] Nayer, S., & Vaswani, N. (2022). Fast and sample-efficient federated low rank matrix recovery from column-wise linear and quadratic projections. IEEE Trans. Inf. Theory. **Assumptions on decreasing angles.** Thank you for pointing this out. While Lemma 5.3 assumes a decreasing angle sequence, it's only an intermediate step for Theorem 5.4. In Theorem 5.4's proof (Line 1027), we show the angle decreases in the first step and use induction to extend this to all iterations (Line 1139). Stochastic setting: Our analysis is deterministic; extending to the stochastic case is left for future work. We'll clarify this in the revision. **Difference between Lemma A.11 and the first part of the proof of Theorem 5.4.** Thank you for the careful reading. Eq. (123) and Eq. (166) are essentially the same—Eq. (123) assumes a decreasing angle, while Eq. (166) proves this for the first iteration. Setting $t=0$ in Eq. (123) gives Eq. (166). We restated the proof for clarity and completeness but will revise it to be more concise. **Eq. (87).** Thank you for pointing this out. Eq. (87) follows by normalizing Eq. (86). The reference to Eq. (87) on Line 929 was a typo—it should be Eq. (88). We'll correct this in the revision. **Performance in Table 12.** Thank you for the observation. LLaMA-2-7B is already strong on MMLU due to its pretraining. Since MMLU focuses on factual recall, not task-specific adaptation, PEFT methods like LoRA, RoLoRA, or FFA-LoRA offer limited gains. This is also observed in Sec 5.3 of [4]. In contrast, we show clear improvements on task-specific adaptation tasks such as GLUE and commonsense reasoning tasks. [4] Guo, Pengxin, et al. "Selective Aggregation for Low-Rank Adaptation in Federated Learning." (ICLR 2025). **Related works.** Thanks for the helpful suggestions and for taking the time to check for related work. We will cite them in the related work section.
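The subspace-distance identity used in this rebuttal can be verified numerically. Below is an illustrative numpy sketch (not the authors' code; sizes and seeds are arbitrary) checking that $\|(I_d - AA^\top)A^*\|_F$ matches $\sqrt{\sum_i \sin^2\theta_i}$ computed from principal angles, and that a QR-style step keeps columns orthonormal:

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 10, 3

def orthonormalize(M):
    # QR-style step: keep the factor with orthonormal columns.
    Q, _ = np.linalg.qr(M)
    return Q

A = orthonormalize(rng.normal(size=(d, r)))
A_star = orthonormalize(rng.normal(size=(d, r)))
assert np.allclose(A.T @ A, np.eye(r))  # columns are orthonormal

# Subspace distance: SD(A, A*) = ||(I_d - A A^T) A*||_F.
sd = np.linalg.norm((np.eye(d) - A @ A.T) @ A_star)

# Principal angles: the singular values of A^T A* are cos(theta_i),
# so SD should equal sqrt(sum_i sin^2(theta_i)).
cosines = np.linalg.svd(A.T @ A_star, compute_uv=False)
sd_angles = np.sqrt(np.sum(1.0 - cosines**2))
assert np.isclose(sd, sd_angles)

# SD is invariant to a basis change: A and A @ Rot span the same subspace.
Rot = np.linalg.qr(rng.normal(size=(r, r)))[0]
assert np.isclose(np.linalg.norm((np.eye(d) - A @ A.T) @ (A @ Rot)), 0.0)
```

The last assertion illustrates why a subspace metric (rather than an entrywise one) is the natural error measure for the rank-$r$ extension: it ignores rotations of the factor that leave the column space unchanged.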
Summary: This paper introduces RoLoRA, a federated fine-tuning framework that employs alternating optimization for LoRA-based adaptation. RoLoRA addresses the expressiveness limitations of FFA-LoRA (Sun et al. '24) in low-parameter settings while preserving communication efficiency. The authors provide theoretical proof of RoLoRA’s convergence in a single-layer linear regression model with rank-1 LoRA and highlight the drawbacks of FFA-LoRA, which freezes the LoRA module’s down-projection matrix (A), restricting model expressiveness. Empirical results demonstrate that RoLoRA consistently outperforms baselines and achieves significantly faster convergence. Claims And Evidence: The paper makes two key claims: **1. FFA-LoRA has reduced expressiveness due to frozen down-projection matrices.** The authors argue that FFA-LoRA’s inability to update the down-projection matrix (A) limits model expressiveness, making optimization harder. Their theoretical analysis, using a simplified linear regression model, suggests that freezing A prevents the model from fully converging to an optimal solution. Equation (10) and Proposition 5.5 illustrate that, under this restriction, the global objective remains dependent on the initialization of A, leading to suboptimal performance. However, while this analysis demonstrates that freezing A introduces significant limitations, it does not rigorously prove that updating only the up-projection matrix (B) is fundamentally insufficient for optimization in all scenarios. **2. RoLoRA improves robustness while maintaining efficiency.** RoLoRA introduces alternating updates to the A and B matrices, addressing the expressiveness limitations of FFA-LoRA while preserving its computational and communication efficiency. Unlike FFA-LoRA, which is sensitive to initialization and fine-tuning parameter budgets, RoLoRA achieves more stable optimization and better generalization across different settings.
Empirical results across linear models, toy neural networks, and large language models (RoBERTa-Large, Llama-2-7B) demonstrate that RoLoRA consistently achieves higher accuracy and faster convergence than both FFA-LoRA and other baselines. Methods And Evaluation Criteria: The alternating update strategy in RoLoRA is well-motivated, and the paper evaluates it across various tasks and models. However, the evaluation lacks key comparisons: - Missing Baselines: The study does not include important FL+LoRA methods, such as FlexLoRA (Bai et al., 2024), which is crucial for a direct performance and efficiency comparison. - Limited to IID Data: The experiments assume IID data distributions, without evaluating non-IID settings, which are common in real-world federated learning. Since LoRA-based fine-tuning can be particularly sensitive to data heterogeneity, testing RoLoRA under non-IID conditions would better assess its robustness. - Lack of Differential Privacy Analysis: The study does not analyze differential privacy (DP), despite privacy being one of the key promises of FL. Since inexact model updates can be more problematic in DP-constrained environments, assessing RoLoRA’s performance under privacy-preserving conditions would strengthen its claims. **Reference:** Bai, Jiamu, et al. “Federated fine-tuning of large language models under heterogeneous tasks and client resources.” arXiv preprint arXiv:2402.11505 (2024). Theoretical Claims: The authors provide a convergence analysis for RoLoRA in a single-layer linear regression model with rank-1 LoRA and demonstrate that FFA-LoRA’s objective function is influenced by the initialization of the down-projection matrix (A). They also analyze a heterogeneous federated setting, but the results primarily focus on angle convergence rather than proving global optimality. 
However, the theoretical claims have several limitations: - Limited Scope of Convergence Analysis: The convergence proof is restricted to a simplified linear regression model with rank-1 LoRA, which does not directly extend to more complex neural networks or real-world federated learning (FL) settings. The analysis does not account for data heterogeneity, model heterogeneity, or non-convex objectives, which are common in FL. - Expressiveness Limitation Not Fully Proven: While the paper argues that FFA-LoRA is less expressive due to freezing the down-projection (A), it does not rigorously prove that learning A is strictly necessary for effective fine-tuning in all fewer-parameter settings. The results suggest a limitation but do not quantify its impact beyond the linear setting. - RoLoRA’s Superiority Over FFA-LoRA Not Mathematically Established: The theoretical results support the authors’ intuition that alternating updates improve optimization, but they do not formally establish that RoLoRA consistently outperforms FFA-LoRA in general federated learning scenarios. A stronger theoretical claim would require proving that FFA-LoRA cannot reach the same optima under certain conditions. While the theoretical insights provide useful intuition, they do not constitute a rigorous proof that RoLoRA is strictly superior to FFA-LoRA in all cases. Extending the analysis to non-convex settings, heterogeneous data distributions, and multi-layer models would strengthen the claims, although I do understand the challenges. Experimental Designs Or Analyses: While the empirical results demonstrate RoLoRA’s effectiveness, additional experiments could further strengthen its claims and generalizability: - Robustness Across Different Rank Sizes: Testing RoLoRA with varying rank sizes would help assess its stability and effectiveness in low-parameter settings, particularly where parameter efficiency is critical. 
- Impact of Learning the Down-Projection: A more detailed empirical analysis of the role of the down-projection matrix (A) would clarify how much expressiveness is lost when A is frozen and whether learning it is always beneficial. - Evaluation Under Differential Privacy Constraints: Since privacy is a key motivation for federated learning, assessing RoLoRA’s performance under differential privacy (DP) would provide insight into its robustness in privacy-preserving scenarios. - Experiments in Heterogeneous Client Settings: Evaluating RoLoRA under non-IID data distributions and client heterogeneity would better reflect real-world FL conditions, where clients have different data distributions and computational constraints. Supplementary Material: The appendix provides detailed proofs of the theoretical claims. However, the presentation is difficult to follow, making it challenging to fully grasp the logical flow of the arguments. Providing a high-level explanation of the proof structure and key intuitions would significantly improve readability. I randomly checked several equations and inequalities and found them to be correct. While the theorem statements appear well-formed and the conclusions seem reasonable, I cannot confidently verify the overall correctness of the proof due to its complex presentation. A clearer breakdown of the key logical steps would enhance accessibility and confidence in the results. Relation To Broader Scientific Literature: The idea is simple yet the analysis is non-trivial. However, the contribution remains limited without empirical studies in more realistic FL settings, such as differential privacy and non-IID data. Essential References Not Discussed: The paper should discuss Koo et al. (2024), “Towards Robust and Efficient Federated Low-Rank Adaptation with Heterogeneous Clients”, which employs a similar alternating LoRA approach even in non-IID federated settings. 
Other Strengths And Weaknesses: Additional Concerns: **No experiments on inexact-update settings:** RoLoRA is presented to address the issue of inexact model updates in a more robust and efficient way. However, there is no evaluation under differential privacy or highly heterogeneous client settings. **Weak link between theoretical analysis and claims:** Theoretical results do not conclusively prove the necessity of down-projection learning or the weakness of FFA-LoRA in fewer-parameter settings, but merely provide intuition on the constraints of the freezing-A scheme. Other Comments Or Suggestions: L023: theoretical analysis -> a theoretical analysis L106: We adopts -> We adopt Please reorganize the proof for better readability. Questions For Authors: - Can you provide a stronger theoretical justification or empirical validation that freezing A is always suboptimal, especially in low-parameter regimes? - Have you tested RoLoRA in non-IID settings, where inexact model updates are more problematic? - Have you considered evaluating RoLoRA with DP mechanisms (e.g., DP-SGD, DP-FedAvg)? - Can you provide additional experiments to confirm its stability in extreme low-rank settings, where parameter efficiency is crucial? - Can you provide ablation studies comparing performance when A is learned versus when it is frozen? - How does RoLoRA compare to FlexLoRA in terms of communication efficiency, convergence, and model accuracy? (I do understand the memory complexity issue, but you could perform a small-scale experiment) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed and constructive review. We address the key concerns below: **FFA-LoRA's Reduced Expressiveness and RoLoRA's Theoretical Superiority.** Thank you for the constructive comments. We show in Proposition 5.5 (proof in Appendix A.4.1) that FFA-LoRA is suboptimal in our linear setting. This result holds for any unit vector $\mathbf{a}$ and corresponding $\mathbf{b}$ obtained by fully minimizing the local loss, as in Lines 5 and 7 of Algorithm 2. The same expected loss bound applies to RoLoRA. As discussed in Lines 271-280, substituting RoLoRA's reduced angle $\epsilon$ at Line 259 into Eq. (11) shows its expected loss can be made arbitrarily small, unlike FFA-LoRA, whose loss is limited by the initial angle. **Scope of Convergence Analysis.** Our convergence analysis also extends to the heterogeneous setting (Appendix A.5), following the same logic as the homogeneous case. RoLoRA reduces the angle between $\mathbf{a}$ and $\mathbf{a}^*$, ensuring convergence to the global optimum, while FFA-LoRA's loss remains limited by its initial angle, as discussed at Line 281. Though based on a simplified linear model, the proof is non-trivial. The linear setting allows direct comparison due to its unique global minimum, unlike neural networks, where only convergence to local minima can be shown and final losses across methods are not directly comparable. **Non-IID robustness.** Thank you for the constructive comments. We provide a theoretical analysis under a heterogeneous linear setting (Appendix A.5) and evaluate RoLoRA's robustness to non-IID data using a two-layer neural network (Fig. 2). Additionally, we ran experiments on a language model under non-IID conditions—see the **[table](https://anonymous.4open.science/r/random-3C54/noniid.png)**. RoLoRA consistently outperforms LoRA, FFA-LoRA, and FlexLoRA on MNLI and QQP across varying data heterogeneity and client counts.
**Re-Organization of Proof.** Thank you for the thoughtful feedback. We already provide a high-level overview of the main proof at Line 271. To improve clarity, we will revise the appendix to include a more detailed outline and highlight key intuitions behind the technical steps. **Differential Privacy.** Thanks for the suggestion. We integrated NbAFL [1] and the results are in the **[table](https://anonymous.4open.science/r/random-3C54/dp.png)**. We use $\epsilon = 10, \delta = 10^{-6}$. In this setting, RoLoRA outperforms the others across the MNLI and QQP tasks. [1] Wei, Kang, et al. "Federated learning with differential privacy: Algorithms and performance analysis." IEEE Transactions on Information Forensics and Security 15 (2020). **Extreme low-rank stability.** Thank you for raising this point. Our experiments already include rank 1, 2, 4, and 8 settings (see Figs. 4 and 6-8 in Appendix B.2.2 at Line 1743), demonstrating RoLoRA's robustness under tight parameter budgets. Tables 1 and 6 show results with increasing clients at ranks 4 and 8. We have also added experiments with rank 2 and different numbers of clients; see the **[table and figure](https://anonymous.4open.science/r/random-3C54/rank2.png)**, where RoLoRA still shows strong convergence and competitive accuracy. **Ablation study of the role of A.** Thank you for the insightful suggestion. To address this, we conducted an experiment comparing the performance of FFA-LoRA, RoLoRA, and different mixing strategies under the setting with 50 clients. In these strategies, for example, 20%RoLoRA+80%FFA-LoRA means we finetune with RoLoRA (where A is learned) for the first 20% of communication rounds, followed by FFA-LoRA (where A is frozen) for the remaining 80%. The results are shown in the **[figure](https://anonymous.4open.science/r/random-3C54/ablation-A.png)**. We observe that finetuning with RoLoRA generally leads to faster convergence and higher final accuracy, highlighting the benefits of learning A, especially in early training.
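As a rough illustration of the learn-A-then-freeze schedules discussed in the ablation above, here is a hypothetical toy sketch (not the paper's code or setting): rank-1 LoRA fitting a linear target $W^* = \mathbf{b}^* \mathbf{a}^{*\top}$, where keeping a random A frozen caps the final loss by the initial angle to $\mathbf{a}^*$, while learning A for an early fraction of rounds shrinks that angle. All names and hyperparameters here are illustrative.

```python
# Hypothetical toy sketch: rank-1 "LoRA" on a linear target W* = b* a*^T.
# learn_a_fraction = 0.0 mimics a freeze-A (FFA-LoRA-style) schedule;
# learn_a_fraction = 0.2 mimics learning A early, then freezing it.
import numpy as np

def train(num_rounds, learn_a_fraction, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    d = 8
    a_star = np.eye(d)[0]                 # target right factor (unit vector)
    b_star = np.ones(d)                   # target left factor
    W_star = np.outer(b_star, a_star)     # rank-1 target
    a = rng.normal(size=d)
    a /= np.linalg.norm(a)                # random unit initialization for A
    b = np.zeros(d)                       # B initialized at zero
    switch = int(num_rounds * learn_a_fraction)
    for t in range(num_rounds):
        grad = np.outer(b, a) - W_star    # gradient of 0.5*||b a^T - W*||_F^2
        b = b - lr * grad @ a             # B is always trained
        if t < switch:                    # A is trained only before the switch
            a = a - lr * grad.T @ b
            a /= np.linalg.norm(a)        # keep a a unit vector
    return float(np.linalg.norm(np.outer(b, a) - W_star))

loss_frozen = train(200, learn_a_fraction=0.0)   # A never learned
loss_learned = train(200, learn_a_fraction=0.2)  # A learned early, then frozen
```

In this toy, the frozen-A run converges to a residual proportional to the sine of the initial angle between $\mathbf{a}$ and $\mathbf{a}^*$, while the schedule that learns A early ends with a strictly lower loss, mirroring the angle argument in the response above.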
**Comparison with FlexLoRA.** Thank you for your comment. Our paper already compares RoLoRA and FlexLoRA across ranks and client counts (Table 6, Line 1705). We have added convergence results in a 50-client setting (**[figure](https://anonymous.4open.science/r/random-3C54/convergence-100.png)**) and a **[table](https://anonymous.4open.science/r/random-3C54/cost-compare.png)** comparing communication and time costs. The results show RoLoRA's clear advantages in large-scale, low-resource settings. **Related works.** Thank you for taking the time to check for related work. We will discuss Koo et al. (2024) in the related work section. --- Rebuttal Comment 1.1: Comment: Thanks for the response, which has resolved my major concerns. I have raised my score from 2 to 3. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our work and for raising your score. Thanks again for your time and effort in reviewing our paper.
TopInG: Topologically Interpretable Graph Learning via Persistent Rationale Filtration
Accept (poster)
Summary: In this paper, the authors propose a GNN interpretation framework named TopInG, which applies topological data analysis and persistent homology to separate the graph into rationale and noise subgraphs. The method is theoretically sound, and the experiments show improvements in interpretation performance. Claims And Evidence: The claims in the paper are generally well supported. However, the claim that TopInG performs better on tasks with variable rationale subgraphs is only empirically verified on a synthetic dataset. This claim would be more convincing if more diverse real-world datasets were included and post-hoc interpretation methods were also compared. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense. The dataset selection could be improved by adding more diverse real-world datasets. Theoretical Claims: No, I didn't check the correctness of the proofs or theoretical claims, as I'm not familiar with TDA theory. Experimental Designs Or Analyses: Overall, the experimental design is sound. The datasets used include both synthetic and real-world datasets. The evaluation metrics, including ROC-AUC and accuracy, are common and appropriate choices. However, in the experiment shown in Fig. 3, it is questionable why post-hoc methods were not included. Additionally, the real-world datasets are quite similar. Adding more diverse datasets could be beneficial. Supplementary Material: No, I didn't review the supplementary material. Relation To Broader Scientific Literature: The paper is closely related to GNN interpretability and TDA. Applying persistent homology from TDA to GNN interpretability is novel. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper opens a new direction in the area of GNN interpretability by applying topological data analysis. 2.
The theoretical foundation of the proposed method is solid, and the authors provide theoretical guarantees that their method can find a unique optimum. Weaknesses: 1. The experiments are limited to synthetic datasets and simple real-world datasets and do not demonstrate the method’s potential for real-world applications. Other Comments Or Suggestions: 1. ‘variiform', 'varriform' seem like typos, should it be 'variform'? 2. Fig. 1 is hard to understand, even with the caption. Ideally, it should convey the core idea of the proposed method, and the caption should include all necessary definitions of the notations used in the figure. For example, $\mathcal F$ and $\mathcal T$ are not defined in either the figure or the caption. Questions For Authors: 1. The description of how the dataset BA-HouseOrGrid-nRnd is generated is a bit confusing. Is it correct that label 1 means the graph contains (a) only houses, (b) only grids, or (c) an equal number of houses and grids? Does label 0 mean the graph contains (a) neither houses nor grids or (b) an unequal number of houses and grids? 2. Are the assumptions in Theorem 3.5 realistic for real-world graphs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review and invaluable feedback. Below, we provide detailed responses to all your comments or concerns. ### Q1: BA-HouseOrGrid-nRnd Generation is a Bit Confusing Thank you for pointing this out! We clarify the label definitions as follows: * Label 0: The graph contains neither houses nor grids. * Label 1: The graph contains (a) only houses, (b) only grids, or (c) an equal number of houses and grids. We do not consider cases where houses and grids appear simultaneously in different quantities, as this would introduce significant combinatorial complexity. To ensure a balanced dataset and avoid potential bias in model training or evaluation, we first guarantee that the number of graphs with label 0 and label 1 is equal. Furthermore, within label 1, we generate an equal number of graphs for conditions (a), (b), and (c), so as to maintain a stable and unbiased distribution across subcategories. ### Q2: Are the Assumptions in Theorem 3.5 Realistic for Real-world Graphs? We consider the assumptions made in Theorem 3.5 to be reasonably realistic for a range of real-world graphs, especially in structured domains such as molecular graphs. For instance, in datasets like MUTAG and BENZENE, there often exists a relatively small rationale subgraph (e.g., a specific functional group) embedded within a larger graph, which aligns well with our assumption of a distinguishable and minimal rationale $G_X$. Our theoretical assumptions, while idealized, significantly relax the strict conditions required by previous works, making our method theoretically better suited for handling variable rationale structures. Empirical evidence strongly supports the practical validity of these assumptions. Specifically, previous works (e.g., GSAT, GMT, DIR) assume that within each category, the corresponding rationale substructure is globally invariant across the entire data distribution, a much stronger and less practical condition compared to ours.
This stricter requirement explains why existing methods struggle with variform rationales. **Regarding more real-world datasets** We acknowledge requests for additional real-world datasets. However, graph interpretability requires not just graph labels, but also ground-truth rationale annotations identifying causal subgraphs. The scarcity of such comprehensively labeled datasets is a recognized challenge in this field (Zhang et al., 2024). Our experiments follow established benchmarking practices. Furthermore, we have tested out-of-distribution (OOD) generalization by training TopInG on BA-HouseOrGrid-nRnd with a fixed motif count (n=4) and testing on varying counts (n=2,3,5,6). Despite the distribution shift, TopInG maintains stable prediction and interpretability performance without degradation, as shown in Figure 14 (Appendix). To further mitigate the concern, here we conducted an additional experiment on MNIST-75sp [1]. This dataset contains noisy rationale subgraphs with more complex and varied topologies. [1] Knyazev, B., Taylor, G. W., and Amer, M. R. Understanding attention and generalization in graph neural networks. In Advances in Neural Information Processing Systems, pp. 4204–4214, 2019. Our results, with details in the table below, show that TopInG achieves the best interpretation and good prediction scores compared to existing SOTA methods. This suggests that our topological signal contributes meaningfully to explanation quality, even when tested on a real dataset with more complex and noisy varying rationale subgraphs. |method| Interpretation Performance (AUC)| Prediction Performance (Acc.)| | -- | -- | -- | |DIR| 32.35| 88.51| |MAGE| 67.50| 89.43| |GSAT| 80.47| 96.20| |GMT-Lin| 82.98| 96.01| |TopInG| 84.50| 95.20| **Regarding the typo of 'variiform'** Thank you for pointing this out. We will correct the spelling.
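The labeling rule clarified in Q1 above can be written down compactly. A hypothetical sketch follows (the function name and the exclusion convention are illustrative only, not the actual dataset-generation code):

```python
def label(num_houses, num_grids):
    """Label rule for BA-HouseOrGrid-nRnd as clarified in Q1 (illustrative)."""
    if num_houses == 0 and num_grids == 0:
        return 0      # neither motif present
    if num_houses == 0 or num_grids == 0 or num_houses == num_grids:
        return 1      # only houses, only grids, or an equal number of both
    return None       # mixed, unequal counts: excluded from the dataset
```

Under this rule, `label(3, 0)` and `label(2, 2)` are positive, `label(0, 0)` is negative, and mixed unequal cases such as `label(2, 3)` fall outside the dataset.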
**Post-hoc methods missing from the Fig. 3 experiment** We prioritized comparing interpretable methods (DIR, GSAT, GMT), since our work specifically addresses the variform rationale challenge observed in these approaches. Post-hoc methods show diverse performance on the BA-HouseOrGrid dataset, suggesting this is not necessarily a common challenge across post-hoc methods. We tested MAGE on the dataset BA-HouseOrGrid-nRnd for n=(2,3,4) and observed that MAGE has consistent performance close to 100%. This indicates that post-hoc methods like MAGE are not very sensitive to the variation of rationale structures. **Regarding the notations in Fig. 1** To improve its readability and self-containment, we will revise the figure caption to explicitly define all notations used, including: * F, which denotes the filtration applied to the graph. * T, which refers to the resulting topological invariant (e.g., persistence diagram).
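To make the F/T notation defined above concrete, here is a hedged, minimal sketch (illustrative only; the paper's filtration is learned, not fixed): 0-dimensional persistence of an edge filtration computed with union-find, where F is the sequence of edges sorted by weight and T is the resulting list of (birth, death) pairs of connected components.

```python
# Minimal sketch of 0-dimensional persistent homology on a weighted graph.
# All nodes are born at filtration value 0; an edge that merges two
# components kills one of them at the edge's weight.
def zeroth_persistence(num_nodes, weighted_edges):
    """weighted_edges: iterable of (weight, u, v) tuples."""
    parent = list(range(num_nodes))

    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    pairs = []
    for w, u, v in sorted(weighted_edges):   # the filtration F
        ru, rv = find(u), find(v)
        if ru != rv:                         # merge: one component dies at w
            parent[ru] = rv
            pairs.append((0.0, w))
    survivors = num_nodes - len(pairs)       # components that never die
    pairs += [(0.0, float("inf"))] * survivors
    return pairs                             # the topological summary T

# Triangle graph: two edges merge components; the third closes a 1-cycle,
# which 0-dimensional persistence ignores.
pairs = zeroth_persistence(3, [(0.9, 0, 2), (0.2, 0, 1), (0.5, 1, 2)])
```

Higher-dimensional features (the cycles discussed in the reviews) require tracking 1st homology as well, which this 0-dimensional sketch deliberately omits for brevity.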
Summary: This paper presents a framework, TopInG, designed to enhance the interpretability of GNNs by enabling them to automatically identify key subgraphs that influence prediction outcomes. It uses TDA to characterize the growth process of key subgraphs and employs a topological distance loss to help the GNN distinguish between relevant and irrelevant parts. Overall, the framework makes the GNN more transparent, stable, and trustworthy, in addition to achieving accurate predictions. Claims And Evidence: The authors make several key claims that are partially supported by experimental results and theoretical analysis, but there are still some issues. TopInG can effectively handle subgraphs with variable structures: although the authors tested the robustness of the method using the synthetic dataset BA-HouseOrGrid-nRnd, this dataset only simulates a limited range of variations and does not fully explore more complex real-world scenarios. Methods And Evaluation Criteria: The TopInG method proposed by the authors is based on Topological Data Analysis (TDA) and uses persistent homology to identify stable rationale subgraphs, incorporating topological differences as a loss function. This approach is theoretically innovative, and experimental results show that it performs well across multiple datasets. However, there are some limitations: on the MUTAG dataset, TopInG's prediction performance is lower than that of GSAT. The paper identifies the reasons for this and improves performance using TopInG-0, but does not fully discuss the limitations and applicability of the method on simpler structured datasets. Additionally, there are shortcomings in the evaluation of explanation quality: using AUC to measure explanation quality is limited in scope and fails to comprehensively reflect the accuracy of the explanations. Theoretical Claims: The TopInG framework shows some potential in handling subgraphs with variable structures.
The theorem assumes $|E_X| < |E_\epsilon|$, i.e., that the number of edges in the rationale subgraph is always smaller than that of the complement graph. This may not hold true in certain real-world scenarios. Additionally, the authors mention that computational complexity limits the use of higher-order homology (beyond the second order), but they do not analyze the impact of this limitation on the theoretical guarantees. Experimental Designs Or Analyses: The experimental design is reasonable, and the choice of datasets, baseline comparisons, and result analysis support the effectiveness of TopInG. However, there is a lack of systematic transfer-learning validation across different types of datasets, making it difficult to assess the method's adaptability in diverse application scenarios. Supplementary Material: No. Relation To Broader Scientific Literature: The contributions of TopInG are closely related to existing research in the fields of TDA, explainable graph learning, and optimal transport. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The proposed topological discrepancy has a strong mathematical basis, elevating the comparison of graph structures to the topological level. This is an important contribution to the theory of graph learning. - The method is successfully applied to different backbone networks (GIN and CINpp), demonstrating the framework's generality and independence from specific neural network architectures. Weaknesses: - Persistent homology computation is computationally intensive on large-scale graphs. Nevertheless, the paper acknowledges this issue. - The method requires simultaneous optimization of multiple loss components and the adjustment of several hyperparameters, which may increase engineering complexity and tuning burden in practical applications. Other Comments Or Suggestions: There are also studies on rationalization in the field of NLP. Would it be possible to discuss some of them in the related work?
[1] Breaking Free from MMI: A New Frontier in Rationalization by Probing Input Utilization [2] Is the MMI Criterion Necessary for Interpretability? Degenerating Non-causal Features to Plain Noise for Self-Rationalization [3] Enhancing the Rationale-Input Alignment for Self-explaining Rationalization [4] D-Separation for Causal Self-Explanation [5] Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint [6] MGR: Multi-generator Based Rationalization [7] FR: Folded Rationalization with a Unified Encoder Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review and invaluable feedback. Below, we provide detailed responses to all your comments or concerns. ### Regarding the weakness "Persistent homology computation is computationally intensive on large-scale graphs." For larger graphs, we have mentioned several promising directions for speeding up the computation in Appendix D. For example: (1) development of efficient GPU implementations; (2) using carefully designed GNN models to approximate persistent homology computations. ### Regarding more real-world dataset evaluation We acknowledge requests for more real-world datasets. However, graph interpretability requires not just graph labels, but also ground-truth rationale annotations identifying causal subgraphs. The scarcity of such comprehensively labeled datasets is a recognized challenge in this field (Zhang et al., 2024). Our experiments follow established benchmarking practices. Furthermore, we have tested out-of-distribution (OOD) generalization by training TopInG on BA-HouseOrGrid-nRnd with a fixed motif count (n=4) and testing on varying counts (n=2,3,5,6). Despite the distribution shift, TopInG maintains stable prediction and interpretability performance without degradation, as shown in Figure 14 (Appendix). To further mitigate the concern, we conducted an additional experiment on MNIST-75sp [1], which contains noisy rationale subgraphs with richer topologies. [1] Knyazev, B., Taylor, G. W., and Amer, M. R. Understanding attention and generalization in graph neural networks. In Advances in Neural Information Processing Systems, pp. 4204–4214, 2019. Our results (table below) show that TopInG achieves the best interpretation and good prediction scores compared to existing SOTA methods. This suggests our topological signal contributes meaningfully to explanation quality, even when tested on a real dataset with more complex and noisy varying rationale subgraphs.
|method| Interpretation Performance (AUC)| Prediction Performance (Acc.)| | -- | -- | -- | |DIR| 32.35| 88.51| |MAGE| 67.50| 89.43| |GSAT| 80.47| 96.20| |GMT-Lin| 82.98| 96.01| |TopInG| 84.50| 95.20| ### Regarding "The method requires simultaneous optimization of multiple loss components and the adjustment of several hyperparameters, which may increase engineering complexity and tuning burden in practical applications." We appreciate the reviewer's concern regarding the potential complexity introduced by optimizing multiple loss components and tuning the associated hyperparameters. While this may appear to increase the engineering burden in theory, in practice we fixed most hyperparameters across all datasets without extensive tuning. Despite this, TopInG consistently achieved strong performance, suggesting that the method is not heavily sensitive to hyperparameter choices, and that the additional loss components can be integrated in a stable manner. We believe this demonstrates that while the design includes multiple components, the practical tuning burden is low, and the framework remains accessible for real-world applications. ### Regarding "There are also studies about rationalization in the field of NLP. Would it be possible to discuss some of them in the related work?" We appreciate the reviewer's suggestion to incorporate discussions of rationalization studies from the Natural Language Processing (NLP) domain into our related work section. Incorporating insights from NLP rationalization studies can inspire advancements in graph-based interpretability methods. Rationalization has been extensively studied in the NLP literature, emphasizing causal feature selection, faithful rationale-input alignment, and stable training dynamics. Recent advances include alternatives to the maximum mutual information (MMI) criterion, highlighting input utilization and causal feature isolation [1,2,4].
Other studies enhance rationale fidelity by enforcing semantic alignment with original inputs [3], and propose methods to stabilize joint rationale-predictor training through asymmetric learning rates, multi-generator ensembles, and unified encoders [5,6,7]. However, unlike NLP domains where rationales often manifest as contiguous text spans, graph domains often exhibit "variform rationale subgraphs" that can differ significantly in form, size, and topology even among instances of the same class. We will include a brief discussion in the related work section to acknowledge related NLP studies.
Summary: The paper introduces TopInG, a topologically interpretable graph neural network (GNN) framework leveraging persistent homology to identify stable rationale subgraphs for model explanations. Key contributions include: (1) a novel *rationale filtration learning* technique that models graph generation processes to capture persistent topological features (0th/1st homology) across scales, and (2) a *topological discrepancy* loss measuring structural differences between rationale and non-rationale subgraphs. The method addresses limitations of prior interpretable GNNs in handling variable subgraph structures (e.g., differing sizes, topologies) by encoding multi-scale topological persistence. Experiments show TopInG outperforms state-of-the-art baselines (e.g., GSAT, DIR) in prediction accuracy (e.g., 50.21±3.22 Acc on MUTAG) and interpretation quality (e.g., 100% AUC on synthetic datasets), particularly in scenarios with diverse rationale subgraphs. Theoretical guarantees ensure the learned rationale subgraphs align with ground-truth causal mechanisms under optimal conditions. Claims And Evidence: **Key claims lacking convincing evidence:** 1. **Handling variable subgraphs via topological persistence**: While persistent homology captures multi-scale features, the paper’s synthetic datasets (e.g., BA-HouseOrGrid-nRnd) exhibit controlled variability. Real-world datasets like MUTAG lack complex topological structures (e.g., cycles), limiting validation of 1st homology utility. The claim is **not fully supported** for diverse real-world graphs. 2. **Superior interpretation quality (100% AUC on synthetic data)**: Perfect AUC scores suggest overfitting to synthetic tasks. For example, BA-2Motifs has trivial 0th homology (no cycles), making topological analysis redundant. The **evaluation lacks noisy or adversarial examples** to test robustness. 3. 
**Theoretical guarantees**: The topological discrepancy loss is framed as a lower bound of the Wasserstein distance under the Gromov-Hausdorff metric: $\mathcal{L}_{\text{topo}} \leq W_{\text{GH}}(\mathbb{P}, \mathbb{Q})$. However, the proof assumes optimal filtrations and ignores graph noise, limiting practical relevance. **Methodological flaws**: - **Dataset limitations**: Experiments rely heavily on synthetic graphs with simplistic topologies (e.g., MUTAG lacks cycles). The variiform rationale challenge is tested on BA-HouseOrGrid-nRnd, which may not generalize to real-world heterogeneity. - **Baseline comparisons**: GSAT and GMT-LIN are compared under constrained backbones (GIN/CINpp), but their hyperparameters (e.g., GSAT's $r$) are not exhaustively tuned, risking unfair comparisons. - **Ablation gaps**: The impact of topological discrepancy vs. standard regularizers (e.g., Gaussian) is not rigorously isolated. Methods And Evaluation Criteria: The proposed methods (topological filtration, discrepancy loss) align with the goal of capturing stable rationale subgraphs but **rely heavily on synthetic datasets (e.g., BA-2Motifs) with simplistic topologies (no cycles, trivial 0-homology)**, limiting validation of 1st-homology utility. Evaluation metrics (AUC/ACC) are standard but **perfect synthetic AUC scores suggest overfitting**, and **real-world datasets (MUTAG) lack topological complexity** to test claims. Backbone choices (CINpp) suit topological data, but comparisons to GIN-based baselines may not fully stress-test scalability. Theoretical Claims: **Theorem 3.5** assumes minimal subgraphs $G^*_X$ and ignores graph noise, making its uniqueness guarantee **theoretically valid but practically fragile**. The proof relies on idealized filtrations and does not address overfitting risks when \(\|E_X\| \ll \|E_\epsilon\|\) (common in real graphs), weakening its real-world applicability.
Experimental Designs Or Analyses: See comments below Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: See comments below Other Comments Or Suggestions: - **Content Ambiguities**: - "variiform rationale subgraphs" lacks clear prior definition. (abs, and remark 3.6) - **Dataset Descriptions**: - SPmotif0.5/0.9 and BA-HouseOrGrid-nRnd lack clarity on construction/split details. Questions For Authors: 1. The paper emphasizes handling variiform rationale subgraphs but relies heavily on synthetic datasets (e.g., BA-2Motifs) with trivial 0-homology and no cycles. How would TopInG perform on real-world datasets with richer topological structures (e.g., molecular graphs with complex cycles)? If no such experiments were conducted, does this not undermine the claim that 1st-homology analysis is critical for real-world interpretability? 2. The perfect AUC scores on synthetic datasets suggest potential overfitting, especially since BA-2Motifs lacks cycles. Did the authors test on synthetic graphs with adversarial noise or conflicting topological features (e.g., cycles irrelevant to labels)? If not, how can the method's robustness to such challenges be assured? 3. Theorem 3.5 assumes minimal subgraphs (\(G^*_X\)) and ignores graph noise. How does the method handle real-world graphs where noise or spurious edges dominate (\(\|E_\epsilon\| \gg \|E_X\|\))? Does the bimodal prior for edge filtration collapse under such conditions, as observed in GSAT? 4. The comparison with GSAT and GMT-LIN uses constrained backbones (CINpp/GIN). How sensitive are TopInG's gains to backbone choice? For instance, would the method maintain performance on Transformers or MPNNs, which lack explicit topological inductive biases? 5. The paper mentions future work on efficient persistent homology computation but does not address scalability challenges (e.g., large graphs).
What specific strategies would address this, given that persistent homology’s \(O(n^3)\) complexity (Hofer et al., 2020) could limit practical deployment? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: # Clarifications and Corrections We thank the reviewers for their detailed feedback and appreciate the opportunity to clarify a few misunderstandings and highlight concerns already addressed in our Appendix: * BA-2Motifs does include cycles in its rationale subgraphs. All datasets used in our work contain complex topological structures. * MUTAG graphs contain cycles, even if the rationale subgraphs may not. Appendix E.5 contains illustrative examples of every dataset we used. * "Overfitting" refers to poor generalization from training to test data, which does not fit the context, since all scores we reported are based on test data. We discuss the more relevant topic of out-of-distribution (OOD) generalization in Q1 below. * The formula $\mathcal{L}_{\text{topo}} \leq W_{\text{GH}}(\mathbb{P}, \mathbb{Q})$ is not from our submitted manuscript. ## Methodological Notes * GSAT's hyperparameter r is tuned following the author-recommended strategy: initially set to 0.9 and gradually decayed to 0.7 (Appendix E.3, line 892). * Additional ablation studies are provided to support the impact of topological discrepancy vs. Gaussian regularization (Appendix E.3, Table 5). * Backbone choices are discussed in Appendix E.3, line 914. ## Q1: To Evaluate on More Real-World Datasets We acknowledge requests for more real-world datasets. However, graph interpretability requires not just graph labels, but also ground-truth rationale annotations identifying causal subgraphs. The scarcity of such comprehensively labeled datasets is a recognized challenge in this field (Zhang et al., 2024). Our experiments follow established benchmarking practices. Furthermore, we have tested OOD generalization by training TopInG on BA-HouseOrGrid-nRnd with a fixed motif count (n=4) and testing on varying counts (n=2,3,5,6). Despite the distribution shift, TopInG maintains stable prediction and interpretability performance without degradation, as shown in Figure 14 (Appendix).
To further mitigate the concern, we conducted an additional experiment on MNIST-75sp [1], which contains noisy rationale subgraphs with richer topologies. [1] Knyazev, B., Taylor, G. W., and Amer, M. R. Understanding attention and generalization in graph neural networks. In Advances in Neural Information Processing Systems, pp. 4204–4214, 2019. Our results (table below) show that TopInG achieves the best interpretation and good prediction scores compared to existing SOTA methods. This suggests our topological signal contributes meaningfully to explanation quality, even when tested on a real dataset with more complex and noisy varying rationale subgraphs. |method| Interpretation Performance (AUC)| Prediction Performance (Acc.)| | -- | -- | -- | |DIR| 32.35| 88.51| |MAGE| 67.50| 89.43| |GSAT| 80.47| 96.20| |GMT-Lin| 82.98| 96.01| |TopInG| 84.50| 95.20| # Q2: Test on Graphs with Conflicting Topologies TopInG is evaluated on two datasets, SPMotif (synthetic) and MUTAG (real-world), which include non-informative or spurious cycles irrelevant to labels. The experimental results (Table 1) confirm that our model is robust. # Q3: Regarding Theorem 3.5 Our theoretical assumptions in Theorem 3.5, while idealized, significantly relax the strict conditions required by previous works, making our method theoretically better suited for handling variable rationale structures. Empirical evidence strongly supports the practical validity of these assumptions. Specifically, previous works (e.g., GSAT, GMT, DIR) assume that within each category, the corresponding rationale substructure is globally invariant across the entire data distribution, a much stronger and less practical condition compared to ours. This stricter requirement explains why existing methods struggle with variform rationales. We would greatly appreciate it if the reviewer could clarify the intended meaning of "overfitting risks" in the context of $|E_X| \ll |E_\epsilon|$.
As we mentioned above in "Clarifications", "overfitting" does not fit the context. Additionally, the inequality $|E_X| \ll |E_\epsilon|$ is covered by the weaker assumption $|E_X| < |E_\epsilon|$ used in our theorem. Moreover, in our experiments, nearly all datasets tested exhibit the property that rationale subgraphs are significantly smaller than the full graphs (e.g., $|E_\epsilon| > 10|E_X|$). TopInG consistently performs well across these scenarios, further supporting that the assumptions underlying Theorem 3.5 do not limit its practical utility. # Q4: How Sensitive are TopInG's Gains to Backbone Choices? As discussed in Appendix E.3, TopInG performs more robustly with backbones that preserve topological structures (e.g., CIN). Some backbones like GCN and GAT can struggle with interpretability or prediction, as noted in (Bui et al., 2024). # Q5: What Specific Strategies Would Address the Scalability Challenges? For larger graphs, we have mentioned several promising directions for speeding up the computation in Appendix D, line 807.
Summary: This work proposed a new framework which applies TDA tools to interpret the persistent rationale subgraph in graph learning problems, showing effective performance on motif classification tasks, evaluated on Single Motif, Multiple Motif, and Real Dataset benchmarks. Claims And Evidence: The authors provide experiments, theoretical analysis, ablations, and visualizations to validate the proposed framework. Methods And Evaluation Criteria: The proposed method applies a good design for a TDA-based GNN interpretation framework. However, it would benefit from further evaluation of real-world generality. Theoretical Claims: The proofs in the appendix were checked; no significant issues were found. Experimental Designs Or Analyses: The experimental designs look valid, but additional datasets for validating Theorem 3.5 would be desired, e.g., datasets where rationales are hierarchical. Supplementary Material: Appendices A to E were reviewed. Relation To Broader Scientific Literature: Related to interpretable graph neural networks (GNNs), topological data analysis (TDA), and robust learning with variform rationales. Essential References Not Discussed: PersGNN [1], related to applying TDA to GNN tasks, should be added to the references. [1] Swenson, Nicolas, et al. "PersGNN: applying topological data analysis and geometric deep learning to structure-based protein function prediction." arXiv preprint arXiv:2010.16027 (2020). Other Strengths And Weaknesses: Strengths: This work proposed a new framework for interpretable graph neural networks, integrated with TDA tools, to interpret the persistent rationale subgraph in graph learning problems. The experimental results and theoretical analysis sound robust, evaluated on motif-like datasets. Weaknesses: 1. Evaluation on datasets where rationales are hierarchical is missing. 2. The runtime efficiency of the proposed framework seems missing.
Other Comments Or Suggestions: It would be more persuasive to provide an evaluation on datasets where rationales are hierarchical. Questions For Authors: An evaluation of the runtime efficiency of the proposed framework seems to be missing. It would be helpful to know the efficiency and how it compares with related methods. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review and invaluable feedback. Below, we provide detailed responses to all your comments or concerns. ### Q1: Evaluation on Datasets with Hierarchical Rationales We appreciate the reviewer's insight into the importance of hierarchical rationales in interpretable graph neural networks. Indeed, this is an emerging area with significant potential. However, graph interpretability requires ground-truth rationale annotations, and such comprehensively labeled datasets are scarce (Zhang et al., 2024). Our experiments follow established benchmarking practices. Furthermore, we tested out-of-distribution (OOD) generalization by training TopInG on BA-HouseOrGrid-nRnd with a fixed motif count (n=4) and testing on varying counts (n=2,3,5,6). TopInG maintains stable performance across these distribution shifts, as shown in Figure 14. **Regarding more real-world datasets** To further demonstrate the practical applicability of TopInG on real-world tasks, we conducted additional experiments on the MNIST-75sp dataset [1], which is widely used for evaluating graph-based models on visual tasks. In this dataset, each MNIST image is converted into a superpixel graph, where nodes correspond to superpixels and edges represent spatial adjacency. Nodes with nonzero pixel values provide ground-truth explanations, making it suitable for evaluating interpretability methods. This dataset consists of 60,000 graphs for training and 10,000 for testing, with an average of 70.57 nodes and 590.52 edges per graph. Notably, the ground-truth explanatory subgraphs vary in size across samples, adding to the challenge. We trained all models for 30 epochs on a single RTX 4090 GPU (over 26 hours). For TopInG, we used default hyperparameters and only reduced the coefficient of the topological loss, without extensive tuning.
The table below presents both interpretation and prediction performance:

| Method | Interpretation Performance (AUC) | Prediction Performance (Acc) |
| -- | -- | -- |
| DIR | 32.35 | 88.51 |
| MAGE | 67.50 | 89.43 |
| GSAT | 80.47 | 96.20 |
| GMT-Lin | 82.98 | 96.01 |
| TopInG | 84.50 | 95.20 |

As shown, TopInG achieves the best interpretability performance (AUC = 84.50) while maintaining competitive prediction accuracy. This result demonstrates TopInG's strong potential for real-world graph-based tasks involving complex, variable-sized explanations. [1] Knyazev, B., Taylor, G. W., and Amer, M. R. Understanding attention and generalization in graph neural networks. In Advances in Neural Information Processing Systems, pp. 4204–4214, 2019. ### Q2: Runtime Efficiency of the Proposed Framework We appreciate the reviewer's concern regarding the runtime efficiency of our framework. In Appendix D, we provided a theoretical analysis of the runtime complexity. In brief, the time complexity can be as fast as O(n log n), without considering GPU acceleration. Practically, we provide representative results on two datasets to give a sense of our method's efficiency. On BA-2Motifs, each training epoch takes approximately 0.45 ± 0.12 minutes, and on SPMotif (which is a more complex and larger dataset), the runtime is approximately 9.20 ± 2.35 minutes per epoch. All experiments were conducted on a single RTX 4090 GPU, and importantly, our method consistently converges within 20 epochs across all datasets. Although our method is relatively slower per epoch due to the incorporation of TDA, this added cost is justified by the significant performance gains. Unlike many baseline models that typically require 50 to 100 epochs to converge, TopInG achieves convergence within 20 epochs, while consistently achieving perfect or near-perfect AUC scores on multiple datasets. We hope this additional information clarifies the practical efficiency of our approach.
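For readers unfamiliar with the bound, the O(n log n) figure corresponds to the standard 0-dimensional persistence computation: sort the edges of the filtration once, then sweep them with a union-find structure, pairing component births with the edge values at which they merge. A minimal sketch (illustrative only, not TopInG's actual implementation; the toy filtration below is made up):

```python
def zero_dim_persistence(n, edges, node_birth):
    """(birth, death) pairs of connected components for a graph filtration.

    edges: list of (filtration_value, u, v) tuples; node_birth[i] is the
    filtration value at which vertex i appears. The sort dominates: O(E log E).
    """
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    birth = list(node_birth)
    pairs = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # edge closes a cycle; no 0-dim component dies
        if birth[ru] > birth[rv]:
            ru, rv = rv, ru  # elder rule: the younger component dies
        pairs.append((birth[rv], w))
        parent[rv] = ru
    return pairs

# three vertices born at time 0; the triangle's last edge only closes a cycle
print(zero_dim_persistence(3, [(1, 0, 1), (2, 1, 2), (3, 0, 2)], [0, 0, 0]))
# → [(0, 1), (0, 2)]
```

Higher-dimensional persistence is more expensive in general, which is consistent with the rebuttal's caveat that O(n log n) is the best case.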
### Regarding PersGNN: Thank you for providing the reference. We will mention this work in the revised version. As a related work, PersGNN applies TDA to analyze the structure of protein graphs by combining persistent homology with GNNs to capture both local and global structural features for improved protein function prediction. While our work focuses on leveraging TDA to enhance the interpretability of GNN models through persistent rationale filtration, PersGNN primarily aims to improve prediction performance on protein structure-function relationships via a novel TDA-enhanced GNN architecture. This distinction highlights complementary applications of topological methods in graph learning.
Discovering a Zero (Zero-Vector Class of Machine Learning)
Accept (spotlight poster)
Summary: The authors propose a mathematical framework for representing data classes as vectors in a vector space. The goal is to enable operations such as addition (class union) and scalar multiplication (class complement) to improve machine learning classification. The main contributions are: the introduction of the Zero-Vector Class, a novel approach to represent class boundaries, the application of this framework to enhance neural network learning, and improvements in classification accuracy and continual learning. The study shows that the Zero-Vector Class allows networks to learn the true data manifold, leading to clearer decision boundaries and better performance on classification tasks. Claims And Evidence: The authors claim that the Zero-Vector Class improves classification by enabling neural networks to learn the true data manifold. The evidence includes mathematical proofs establishing a vector space for class representation, experimental results on MNIST and CIFAR-10, and comparisons between models trained with and without the Zero-Vector Class. The main claims are: the Zero-Vector Class refines decision boundaries, it enables unary class learning, and it facilitates continual learning. The experiments support these claims by demonstrating reduced misclassification in empty feature space, improved performance in single-class training, and effective knowledge transfer in continual learning. However, the scalability claim is less supported, as results on CIFAR-10 indicate challenges in higher-dimensional spaces, suggesting the need for further validation on more complex datasets. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem of class representation in a vector space. The introduction of the Zero-Vector Class and the use of set operations on classes align with the goal of improving classification and continual learning. The evaluation criteria are also strong. 
Theoretical Claims: No major flaws were found in the provided proofs, but additional empirical verification on complex datasets would strengthen the theoretical claims. Experimental Designs Or Analyses: From my point of view, the experimental design appears sound. 1. The paper evaluates the Zero-Vector Class using well-known datasets (including MNIST and CIFAR-10) to test its effectiveness in classification. 2. The authors compare models trained with and without the Zero-Vector Class to evaluate its impact on decision boundaries and classification performance. 3. The introduction of the Occupancy Factor and Purity Factor provides a structured way to evaluate the clarity and accuracy of class boundaries. 4. The continual learning experiments demonstrate the potential of the Zero-Vector Class for modular and scalable learning without retraining entire models. Supplementary Material: I reviewed the supplementary material, focusing on the additional theoretical derivations, experimental details, and evaluation metrics. Relation To Broader Scientific Literature: The paper's key contributions align with and extend existing concepts in machine learning and embedded representations. Representing data classes as vectors in a vector space is reminiscent of embedding techniques, which map high-dimensional data into lower-dimensional spaces while preserving meaningful semantic relationships. This approach facilitates operations like addition and scalar multiplication, akin to how embeddings capture semantic similarities in natural language processing. Essential References Not Discussed: The paper's key contributions are grounded in established concepts within machine learning, particularly vector space models and support vector machines (SVMs). Could you reference foundational works that have significantly influenced this domain, like word2vec (https://arxiv.org/abs/1301.3781)? Other Strengths And Weaknesses: Strengths: 1.
The paper exhibits notable strengths in its originality and potential significance. 2. The authors introduce a novel mathematical framework that represents data classes as vectors in a vector space. 3. This offers a fresh perspective on class representation in machine learning. 4. The Zero-Vector Class allows neural networks to learn the true data manifold rather than just decision boundaries. Weaknesses: 1. The scalability of the Zero-Vector Class to high-dimensional datasets is not thoroughly explored. 2. Limited discussion of the computational complexity and efficiency of the proposed method. Other Comments Or Suggestions: 1. Overall, the paper is well-written. 2. While the method is evaluated on MNIST and CIFAR-10, a discussion of its applicability to higher-dimensional datasets would enhance its impact. Questions For Authors: 1. How does the Zero-Vector Class perform on high-dimensional datasets, and have you tested it beyond MNIST and CIFAR-10? 2. What are the computational costs of integrating the Zero-Vector Class, and how does it compare in efficiency to standard classification methods? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your careful and constructive review, which motivated us to perform additional experiments on ImageNet-1K embeddings and to analyze computational complexity explicitly. Your feedback has significantly strengthened our paper. # Response on Scalability and Computational Efficiency We first address computational complexity to support the discussion on scalability. ## Q2. Computational Cost To analyze this clearly, we divide the computational requirements for any $c$-class classifier into two distinct parts: 1. Pre-softmax logit computation 2. Softmax computation Let a **Zero-Exclusive Network** be a classifier that does not use the zero-vector class during training. Suppose there are $n$ training points, and let the computation needed to compute pre-softmax logits be $O(f(n))$—for instance, $f(n)$ might represent millions of computations. Computing the softmax function then takes $O(n \cdot \text{softmax}(c))$ computations, where $\text{softmax}(c)$ represents the complexity of softmax over $c$ classes. Now consider modifying the Zero-Exclusive Network by adding an extra node at the output to classify the zero-vector class; we call this the **Zero-Inclusive Network**. In this scenario, we incorporate the zero-vector class in training. The addition of the new node introduces extra calculations dependent on the number of nodes in the preceding layer. Letting $L$ denote the number of nodes in the layer preceding this new node, the additional computation required is $O(nL^2)$. 
The computational complexities during training and inference are summarized in the following table:

| Classifier | Training Cost | Inference Cost |
| --- | --- | --- |
| Zero-Exclusive Network (base classifier) | $O(f(n)) + O(n \cdot \text{softmax}(c)) \approx O(f(n))$ | $O(f(n)) + O(n \cdot \text{softmax}(c)) \approx O(f(n))$ |
| Zero-Inclusive Network ($k$ zero-vector data points in training) | $O(f(n+k) + (n+k)L^2) + O((n+k) \cdot \text{softmax}(c+1)) \approx O(f(n+k))$ | $O(f(n) + nL^2) + O(n \cdot \text{softmax}(c+1)) \approx O(f(n))$ |

Typically, $O(f(n))$ dominates $O(nL^2)$ since $L$ corresponds only to the final internal layer. There is a trade-off between computational overhead and purity improvement. This trade-off depends significantly on the intended applications leveraging the full potential of the Zero-Vector framework. We will include this complexity analysis in the additional page for the final submission. ## Q1. Scalability We acknowledge scalability as a critical concern. Our motivation for exploring high-dimensional embeddings comes from the success of embedding-based models like VQ-VAE, Stable Diffusion, and MAE, where internal dimensions (512–1024) capture complex distributions effectively. To test this, we conducted experiments on embeddings (dim: 768) from a pretrained MAE on ImageNet-1K:
- Selected 10 random ImageNet classes and obtained embeddings using MAE.
- Generated corresponding Zero-Vector Class data within this embedding space.
- Trained two classifiers:
  - **Zero-Inclusive Network** (trained with Zero-Vector Class data)
  - **Zero-Exclusive Network** (trained without Zero-Vector Class data)

Results:
- **Purity Factor:**
  - Zero-Exclusive Network purity fluctuated significantly (0.1–0.9), indicating substantial misclassification of empty regions in the feature space.
- Zero-Inclusive Network purity remained consistently high and stable (0.7–1.0), indicating effective suppression of misclassification in empty regions. - **Accuracy:** - Both networks reached similar test accuracy (~82%), showing no degradation from the Zero-Vector Class. These experiments demonstrate the practical scalability of our framework to high-dimensional feature spaces, directly addressing the reviewer’s concern. [Click here to view additional results on ImageNet-1K embeddings.](https://drive.google.com/file/d/1lB6JvipbRdom8n-8U7JFScrIG1LXMhKV/view?usp=sharing) Similarly, in NLP Transformer models, next-token prediction is a classification over a fixed vocabulary, naturally allowing inclusion of the Zero-Vector Class as an extra token. While NLP experiments are left for future work, typical embedding sizes (~512) are smaller than those already tested (e.g., 784 for MNIST), supporting the method’s practical feasibility in NLP tasks. # Regarding References This work explores a novel direction, and we found it challenging to identify direct precursors. Thank you for suggesting foundational works like word2vec and SVMs—we agree these are relevant in spirit and will incorporate such references in the final version. We welcome any further suggestions as well. We believe these clarifications resolve the concerns raised. If you find these points convincing, we would appreciate a reconsideration of your Overall Recommendation. Thank you again for your valuable review and time.
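For concreteness, the data construction used throughout this thread (uniform samples from the input or embedding space, appended under one extra class label) can be sketched in a few lines. This is an illustrative reconstruction, not our experimental code; the dataset, sampling range, and class count below are toy values:

```python
import numpy as np

def add_zero_vector_class(X, y, num_classes, k, low=-1.0, high=1.0, rng=None):
    """Augment (X, y) with k uniform samples labeled as the zero-vector class.

    The zero-vector class gets label `num_classes`, so a zero-inclusive
    classifier needs num_classes + 1 output nodes.
    """
    rng = np.random.default_rng(rng)
    Z = rng.uniform(low, high, size=(k, X.shape[1]))  # max-entropy samples
    zero_labels = np.full(k, num_classes)             # the one new label index
    X_aug = np.vstack([X, Z])
    y_aug = np.concatenate([y, zero_labels])
    return X_aug, y_aug

# usage: 100 two-class points in 2-D plus 50 zero-vector-class samples
X = np.random.default_rng(0).normal(size=(100, 2))
y = np.repeat([0, 1], 50)
X_aug, y_aug = add_zero_vector_class(X, y, num_classes=2, k=50, rng=0)
```

Everything downstream (architecture, loss, optimizer) stays identical to baseline training apart from the extra output node, which is what keeps the integration cost low.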
Summary: This paper proposes a mathematical framework for handling classes in datasets as vectors in a vector space, including a Zero-Vector Class which can be regarded as the absence of a class. They introduce all theoretical foundations and discuss two applications of their framework, namely "clear learning" and "unary class learning". The approach is validated using the MNIST and CIFAR-10 datasets. Claims And Evidence: The main content of this work is the theoretical foundation of the Zero-Vector Class. The authors provide well-supported evidence for their claims in the form of experiments and visualizations of their theoretical concept. However, there are some assumptions that lack evidence, e.g. the assumption that the Zero-Vector Class acts like a uniform distribution and an additive identity. That said, the empirical experiments show that the learning approaches work. Methods And Evaluation Criteria: The proposed methods make sense for the problem at hand. They demonstrate the theoretical assumptions, but also reveal limitations in scalability, which is left for future work. Theoretical Claims: See above. Experimental Designs Or Analyses: The experiments and analyses are sound. However, the training procedure for the individual neural networks is not described in much detail. Supplementary Material: The code for the models is provided as supplementary material, which I checked sparingly. Relation To Broader Scientific Literature: The authors contribute to a broad range of literature on the theoretical foundations of learning algorithms. In the future, more experiments should be conducted on more diverse benchmarks beyond MNIST and CIFAR-10. Essential References Not Discussed: There are no essential references not discussed that I know of. Other Strengths And Weaknesses: The idea is very interesting and a great contribution. The theoretical assumptions and limitations concerning scalability should be investigated more in the future.
Other Comments Or Suggestions: Small errors:
- line 043: "many techniques (...) that allow" instead of allows
- the alternation between upper and lower case is irritating (e.g. line 044: "the Neural Network", in the next column: "If a neural network exhibits..", "Logit" vs. "logit")
- line 037 (right column): "of the combined class's" should be "classes'" I think
- line 139: "need not be an.." should be "does not need to be.."
- line 132 (right column): "in the left-hand side" should be "on the left-hand side"
- Beginning of Section 3: there is an article missing "set of Valid Logit.."
- When using equations the punctuation is missing
- On the right column of page 4 the equations are not numbered
- Instead of "refer the subsection" it should be "see subsection .."
- line 234 (right column): "the PDF of [0] is considered" instead of "consider"
- line 310: "the learning does not make much sense"
Questions For Authors: What does the training procedure for clear learning look like? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your careful and meticulous review, especially your identification of small errors with exact line numbers; this level of detail is invaluable for improving the manuscript's clarity and quality. # Clarification on Training Procedure for Clear Learning Thank you for raising this important point. We acknowledge the training procedure was not fully detailed in the original submission and appreciate the opportunity to clarify. Clear Learning follows a standard supervised classification setup using Cross-Entropy loss and SGD optimization. The only change is the inclusion of the Zero-Vector Class. This is done by uniformly sampling points from the input space, labeling them as the Zero-Vector Class, and adding them to the training data. The classifier’s output layer includes one extra node to represent this class. All other components—architecture, loss function, optimizer, and hyperparameters—remain identical to baseline training. This makes the method easy to integrate into existing pipelines with minimal changes. We used standard network architectures appropriate for each task: - For MNIST and the 2D datasets (e.g., Figure 6), we used fully connected networks. - For CIFAR-10, a conventional convolutional neural network. - For the equation discovery task, a multidimensional Taylor-series-based classifier. Given the range of tasks, we’ve included implementation code in the supplementary material and will release the full codebase upon acceptance. Training details will also be clarified in the final revision. # Assumptions about Zero-Vector Class (Mathematical Rigor & Clarity) We appreciate your comment regarding the assumption that the Zero-Vector Class behaves like a uniform distribution and an additive identity. While our main goal was to convey the intuition leading to the framework, we agree that formalization is valuable.
Your feedback helped us clarify this aspect more rigorously. As described in Section 2 (Characterization of Logits) of the main paper, if the data PDF is $f(x)$, a predefined threshold $\alpha_{f}$ can determine class membership. Thus the tuple $(f(x), \alpha_{f})$ defines that class, say class-$f$. More generally, let $f_T(x)$ be the threshold of the PDF $f(x)$; then the tuple $(f(x), f_T(x))$ defines that class. Consider a black-box operator that takes a function as input and returns a threshold function suited for that PDF. For example, $(f(x) + k, f_T(x) + k)$ defines the same class-$f$ for any real-valued constant $k$. All such tuples preserve the decision boundary and thus represent the same underlying class. Let $\mathcal{M}$ be the set of monotonically non-decreasing functions. Then each tuple in the set $[f(x)] := \{ (M \circ f(x),\ M \circ f_T(x)) \mid M \in \mathcal{M} \}$ also defines the same class-$f$. Here, $M \circ f(x)$ denotes $M(f(x))$, written this way to reduce bracket clutter. **Definition of set $V$:** $V = \{ [f(x)], [g(x)], [h(x)], \dots \}$ **Addition:** $\forall_{[f(x)], [g(x)] \in V}\; [f(x)] + [g(x)] := [f(x) + g(x)]$ **Scalar multiplication:** $\forall_{\lambda \in \mathbb{R}}\ \forall_{[f(x)] \in V}\; \lambda [f(x)] := [\lambda f(x)]$ The remaining vector space properties follow as in the main paper. Here, we focus on the additive identity (the zero-vector). Let $[I(x)]$ be the identity of this vector space. Then, for any vector $[f(x)] \in V$: $[f(x)] + [I(x)] = [f(x)]$ $\Rightarrow$ $[f(x) + I(x)] = [f(x)]$ $\Rightarrow$ $[(f + I)(x)] = [f(x)]$, which implies: $\{ (M \circ (f + I)(x),\ M \circ (f + I)_T(x)) \mid M \in \mathcal{M} \} = \{ (M \circ f(x),\ M \circ f_T(x)) \mid M \in \mathcal{M} \}$ If $I \in \mathcal{M}$, then the above holds trivially. Hence, the additive identity must be a monotonically non-decreasing function. Since every vector in $V$ corresponds to a class (i.e., a PDF), there must exist a PDF representing the zero-vector class that is monotonically non-decreasing.
We now analyze its form: ## Case 1: PDF of the Zero-Vector is Constant A constant PDF corresponds to a uniform distribution. Hence, the zero-vector class corresponds to a uniform distribution over its support. ## Case 2: PDF is Strictly Monotonically Increasing Let $A$ be the volume spanned by a PDF that is strictly monotonically increasing and has zero probability outside the volume $A$. Since a PDF must integrate to one over its support, as $A \to \infty$, the PDF appears approximately constant within any finite, localized region. Thus, samples will appear uniform within any localized region. From both Case 1 and Case 2, it follows that data sampled from the zero-vector appears uniform within any finite local region. Hence, the Zero-Vector Class effectively behaves as a uniform distribution. We invite you to look at the computational cost and scalability discussed in response to reviewer **gWUe**. If our clarifications have addressed your concerns, we would be grateful if you would consider updating your overall recommendation. Thank you again for your time and thoughtful review. --- Rebuttal Comment 1.1: Comment: Thank you very much for the further clarification and considering my comments. I will revise my score. --- Reply to Comment 1.1.1: Comment: We deeply appreciate your acceptance of our work and your thoughtful suggestions, which have significantly improved the clarity and rigor of our manuscript—particularly around the training procedure and formalization. Given the foundational nature of this concept, we feel a strong responsibility to ensure it reaches and informs the broader ML community, helping shape future research around clearer, more interpretable representations. As we’ve fully addressed your thoughtful feedback, we kindly invite you to consider increasing your recommendation to further support this direction and its broader impact. Thank you again for your time, insight, and constructive review.
Summary: This paper introduced a novel technique to regularize neural networks' decision boundaries by introducing the so-called "zero-vector class". The paper established some mathematical properties of the zero-vector class concept and derived a simple method to improve neural networks on different tasks. Claims And Evidence: The paper claims to have invented the notion of "zero-vector class" and applied it to classification problems. The concept is useful in many situations, such as single-class learning and continual learning, and improving the decision boundary. The paper studied the empirical benefits in the Appendix, which I have not been able to check. Methods And Evaluation Criteria: The main part of the paper consists of building the formal framework of Valid Logit Functions and the vector space they constitute. A few empirical claims are discussed in the paper, but their results were presented in the Appendix, which I have not checked in full. Theoretical Claims: The theoretical claims are that the valid logit functions can be partitioned into equivalence classes and that these classes form a vector space under certain conditions. This is proved in the main paper and I believe it is correct. Experimental Designs Or Analyses: This paper has some experimental results showing the empirical properties of adding a "zero class" in the classification task. Even though it is not an overwhelmingly better method compared to not adding the class, it did change the neural network's decision boundary in some ways and improved the purity factor defined in the paper. However, it is unclear whether this approach could scale when the domain of tasks changes.
Supplementary Material: see above Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: see above Other Comments Or Suggestions: n/a Questions For Authors: In the paper, the authors assumed that we could sample the "zero class" simply by drawing from a uniform distribution. However, it is quite common that the data distribution we care about lies on a low-dimensional manifold (or a neighborhood of that manifold) from which we can hardly sample efficiently. For example, we cannot easily sample random images other than Gaussian noise. I am curious how the authors would consider sampling from the "zero class" in those cases. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough review and for positively highlighting the theoretical contributions of our work. # Q. Regarding Sampling from Low-Dimensional Manifolds Your concern about sampling from low-dimensional manifolds is valid. Directly sampling from such manifolds is generally impractical. Crucially, however, our method explicitly does **not require manifold sampling**. Instead, we uniformly sample from the entire input space—a process that is simple and guarantees maximal entropy. Why is this effective? Uniform sampling guarantees maximal entropy and thus explicitly represents regions of the input space associated with high uncertainty. These samples intentionally represent areas the classifier should not confidently assign to known classes. By introducing the Zero-Vector Class, we encourage the classifier to place decision boundaries precisely around true data manifolds, effectively learning the manifold structure indirectly. This is clearly demonstrated in Figure 6 (bottom row): without the Zero-Vector Class (bottom left), the classifier's boundaries incorrectly extend into empty regions. In contrast, with the Zero-Vector Class (bottom right), decision boundaries closely follow the true data distribution. This result validates our simple yet powerful sampling approach. # Regarding the Strength of Empirical Results We respectfully disagree with the assertion that improvements from our method are marginal. The gains are substantial, measurable, and practically significant. Figure 6 visually demonstrates how the Zero-Vector Class dramatically improves decision boundary alignment with the true manifold. Moreover, quantitative evidence strongly supports our method: - In Figure 18 (MNIST), the purity factor for classes 3 and 8 drops below 5% without the Zero-Vector Class, meaning the classifier confidently misclassifies 95% of the region. With the Zero-Vector Class, purity exceeds 95%, implying the classifier misclassifies only 5% of the region.
- A similar pattern occurs in Figure 29 (CIFAR-10). High purity explicitly indicates decision regions closely align with true data manifolds. Achieving this high purity directly enables a variety of applications (e.g., single-class learning, continual learning, equation discovery), which are otherwise infeasible with traditional classification methods. # Scalability to High-Dimensional Data We appreciate your question about scalability, and explicitly evaluated it through additional experiments inspired by the practical success of embedding-based models (e.g., VQ-VAE, Stable Diffusion, and Masked Autoencoder [MAE]). Specifically, we: - Selected embeddings (768-dimensional) from 10 random classes in ImageNet-1K using a pretrained MAE model. - Generated corresponding Zero-Vector Class data uniformly in this embedding space. - Trained two classifiers: - **Zero-Inclusive Network** (with Zero-Vector data) - **Zero-Exclusive Network** (without Zero-Vector data) Results demonstrate clear scalability and effectiveness: - **Purity Factor:** Zero-Exclusive Network purity fluctuates dramatically (0.1–0.9), indicating misclassification of empty space. Zero-Inclusive Network purity remains consistently high and stable (0.7–1.0). - **Accuracy:** Both classifiers achieve comparable accuracy (~82%), confirming no accuracy penalty from the Zero-Vector Class. These experiments clearly validate our method’s scalability to high-dimensional embedding spaces used by modern neural networks. [Click here to view additional results on ImageNet-1K embeddings.](https://drive.google.com/file/d/1lB6JvipbRdom8n-8U7JFScrIG1LXMhKV/view?usp=sharing) For NLP applications, Transformer models typically operate in embedding spaces with dimensions (~512) lower than those we have already successfully tested (e.g., MNIST: 784). 
Since next-token prediction in NLP is formulated as a vocabulary classification task, incorporating the Zero-Vector Class as an additional vocabulary token is straightforward and promising. Explicit NLP experiments will be pursued in future work. We believe these clarifications resolve the concerns raised. If you find these points convincing, we would appreciate a reconsideration of your Overall Recommendation. Thank you again for your valuable review and time, and the insightful question. --- Rebuttal Comment 1.1: Comment: Thank you for providing the additional results on Image-1K embeddings. After reviewing the updated findings, I have revised my score to reflect the improvements. However, as someone who is not very familiar with this specific line of work, I advise readers to interpret my evaluation with appropriate caution. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful reconsideration and for updating your evaluation based on the additional ImageNet-1K experiments. We greatly appreciate your engagement with our clarifications and results, which helped us strengthen the manuscript significantly. We view the review and rebuttal process as a core part of scientific progress, and we’re confident that readers will interpret all evaluations—including yours—as essential contributions to that process. We believe this work presents a foundational perspective on class representation in machine learning, with broad relevance to tasks such as anomaly detection, continual learning, and the interpretability of decision boundaries. Given its conceptual importance, we feel a responsibility to ensure this idea is made accessible to the wider ML community—especially at a venue like ICML, where emerging directions often shape future research. If you find that the revised results and responses have addressed your original concerns, we would sincerely appreciate your further endorsement. 
Your support could meaningfully increase the visibility of this work and encourage broader exploration of its ideas. Thank you again for your time and thoughtful feedback.
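The Zero-Inclusive training discussed in this thread amounts to augmenting a dataset with uniformly sampled points labeled as one extra class. A minimal, self-contained sketch of that augmentation step (the function name and interface are hypothetical illustrations, not the authors' code):

```python
import random

def add_zero_vector_class(xs, ys, n_classes, k, dim, lo=0.0, hi=1.0, seed=0):
    """Append k points sampled uniformly from [lo, hi]^dim, all labeled with
    the new class index n_classes (the Zero-Vector Class)."""
    rng = random.Random(seed)
    zero_xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    return xs + zero_xs, ys + [n_classes] * k

# Toy usage: four 2-D points from classes {0, 1}, plus three uniform points.
# A Zero-Inclusive classifier would then use n_classes + 1 = 3 output nodes.
xs = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
ys = [0, 1, 0, 1]
aug_xs, aug_ys = add_zero_vector_class(xs, ys, n_classes=2, k=3, dim=2)
# aug_ys == [0, 1, 0, 1, 2, 2, 2]
```

The only architectural change this implies is one extra output node, consistent with the overhead analysis given in the rebuttal.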
Summary: The paper introduces a novel mathematical framework for understanding class representations in machine learning by defining classes as vectors in a vector space. The core idea revolves around the concept of a Zero-Vector Class, which corresponds to a data class with a uniform distribution. This conceptualization enables various applications, including clear learning, unary class learning, set operations on classes, continual learning, data generation, and equation discovery. The framework is demonstrated through experiments on the MNIST and CIFAR-10 datasets. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand Theoretical Claims: I did not check the correctness of any proofs for theoretical claims Experimental Designs Or Analyses: I check the soundness/validity of any experimental designs or analyses. Supplementary Material: I review the supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper are not related to the broader scientific literature Essential References Not Discussed: I am unfamilar with this area and not sure whether there are essential references that should be discussed Other Strengths And Weaknesses: **Strengths**: 1. Novel Theoretical Contribution: The paper presents an innovative way of treating classes as vectors, enabling set operations on class regions in a mathematically rigorous manner. The introduction of the Zero-Vector Class provides a fresh perspective on classification and decision boundaries. 2. Improved Decision Boundaries (Clear Learning): The paper argues that incorporating the Zero-Vector Class leads to clearer and more realistic decision regions in neural networks, avoiding misclassification in empty feature space regions. 3. 
Facilitates Unary Class Learning: The proposed approach allows neural networks to learn a single class in isolation, which can be useful in anomaly detection and other real-world applications where only positive examples are available. 4. Continual Learning Without Retraining: The Zero-Vector Class allows for incremental learning without catastrophic forgetting, as new classes can be integrated without redefining previous decision boundaries. 5. Potential for Data Generation and Equation Discovery: The method can be used to generate synthetic data by following gradients of logit functions. It can also help discover mathematical equations representing classes through Taylor-series-based learning. **Weaknesses**: 1. Mathematical Rigor & Clarity: While the paper provides proofs and derivations, some sections could be presented in a more structured and rigorous manner. Certain notations and definitions (e.g., the validity of the equivalence sets and operations on them) could benefit from clearer formalization. 2. Scalability to High-Dimensional Data: The framework's applicability to more complex datasets (e.g., ImageNet or NLP tasks) remains unclear. The Zero-Vector Class is primarily tested in lower-dimensional spaces; its efficiency in high-dimensional feature spaces needs further exploration. 3. Computational Efficiency: The paper does not discuss the computational overhead of incorporating Zero-Vector Classes in training. The potential trade-off between training time and classification accuracy should be analyzed. Other Comments Or Suggestions: See weaknesses Questions For Authors: 1. Stability of Zero-Vector Class Training: Given that Zero-Vector Class data is sampled from a uniform distribution, does this introduce instability or difficulties during training? Did you observe any vanishing/exploding gradients or convergence issues? 2. The paper draws parallels between Zero-Vector Class training and Energy-Based Models (EBMs). 
How does the proposed method differ in terms of learned representations, training stability, and generalization? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer for their detailed and constructive review. The questions raised are insightful and have helped us further refine and improve our work. # Q1. Stability of Zero-Vector Class Training We observed no instability during training with the Zero-Vector Class. Training with the Zero-Vector Class uses the same network architecture and loss function as standard training. Empirically, we find training stability improved when incorporating the Zero-Vector Class: - Purity Factor plots (Figure 18 for MNIST, Figure 29 for CIFAR-10) are **more stable** during training when the Zero-Vector Class is included. - Test Accuracy plots (Figure 18 for MNIST, Figure 29 for CIFAR-10) are stable irrespective of whether the Zero-Vector Class is used in training. - No vanishing/exploding gradients or convergence issues were observed; training metrics either improved or remained stable. # Q2. Differences from Energy-Based Models (EBMs) We clearly distinguish our method from EBMs as follows: - **Representation:** EBMs learn a single global energy function representing data likelihood.\ Our method explicitly learns separate class logits, modeling each class individually via a standard classification network. This results in distinct, interpretable per-class distributions. - **Training Stability:** EBMs typically require specialized training procedures (e.g., Langevin dynamics sampling), leading to potential instability or complex hyperparameter tuning. \ Our method uses standard supervised classification (cross-entropy loss) with the Zero-Vector Class, providing inherent training stability without extra sampling or specialized optimization. - **Generalization:** EBMs are susceptible to mode collapse, potentially failing to capture the complete data distribution.\ Our method explicitly learns the underlying data distribution through class logits (**Equation 27**), converging to the true distribution as the number of data points increases. 
This ensures comprehensive generalization to the true data manifold, thus avoiding mode collapse. # Scalability and Computational Efficiency ## Computational Efficiency We analyze computational complexity in two parts: 1. Pre-softmax logits: complexity $O(f(n))$ for $n$ data points. 2. Softmax calculation: $O(n \cdot \text{softmax}(c))$ for $c$ classes. Adding the Zero-Vector Class introduces one extra output node, adding overhead $O(nL^2)$, where $L$ is the preceding layer size. Complexities summarized: | Classifier | Training Cost | Inference Cost | |------------|---------------|----------------| | Zero-Exclusive | $O(f(n))+O(n\cdot\text{softmax}(c))\approx O(f(n))$ | $O(f(n))+O(n\cdot\text{softmax}(c))\approx O(f(n))$ | | Zero-Inclusive ($k$ extra points) | $O(f(n+k)+(n+k)L^2)+O((n+k)\cdot\text{softmax}(c+1))\approx O(f(n+k))$ | $O(f(n)+nL^2)+O(n\cdot\text{softmax}(c+1))\approx O(f(n))$ | Typically, $O(f(n))$ dominates, making overhead manageable, especially given improvements in purity. ## Scalability Inspired by successful embedding-based models (VQ-VAE, Stable Diffusion, MAE; typical dimensions: 512–1024), we explicitly tested scalability on embeddings (dimension:768) from Masked Autoencoder (MAE, ImageNet-1K): - Selected 10 random ImageNet classes, generated embeddings and corresponding Zero-Vector data. - Trained Zero-Inclusive (with Zero-Vector data) and Zero-Exclusive classifiers. Results confirm scalability: - **Purity Factors:** 0.7–1.0 for Zero-Inclusive Network vs. 0.1–0.9 for Zero-Exclusive Network, demonstrating improved boundaries. - **Accuracy:** Both networks achieved similar accuracy (~82\%), confirming no accuracy degradation. [Click here to view additional results on ImageNet-1K embeddings.](https://drive.google.com/file/d/1lB6JvipbRdom8n-8U7JFScrIG1LXMhKV/view?usp=sharing) In NLP Transformers, next-token prediction tasks classify over vocabulary, naturally permitting Zero-Vector Class integration. 
Typical NLP dimensions (~512) are lower than those already tested (e.g., MNIST:784), confirming practical feasibility. Explicit NLP experiments remain future work. # Response on Mathematical Rigor & Clarity: Our goal was to clearly convey the core intuition behind our framework and support it empirically. We agree formalization is valuable and have provided rigorous mathematical explanations in response to Reviewer **dsGt**, which we encourage you to read for more details. # Regarding Essential References: Given the novelty of this approach, identifying directly related prior work was challenging for us as well. We hope this contribution opens a new line of exploration. We sincerely appreciate your detailed review and hope our responses clarified key concerns. Feedback from other reviewers was positive; if our clarifications resolved your concerns, we would appreciate your reconsideration of the Overall Recommendation. Thank you for your valuable evaluation and time. --- Rebuttal Comment 1.1: Comment: Thank you for all your detailed responses. I will revise my score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your reconsideration and the increased evaluation score. Your insightful comments have meaningfully shaped and improved the clarity and depth of our manuscript. We genuinely believe this work introduces a foundational concept with far-reaching implications for the ML community. Given its importance, we feel a strong responsibility to ensure the idea is clearly communicated and widely understood, helping guide future research along scientifically grounded and impactful directions. In direct response to your earlier concerns, we conducted additional experiments and provided detailed clarifications—addressing high-dimensional scalability (via ImageNet embeddings), computational complexity, training stability, and the distinctions from Energy-Based Models (EBMs). 
If you find that these clarifications and results fully resolve your earlier concerns, we kindly ask you to consider revising your recommendation to a clear Accept. Your support would play a key role in helping this contribution reach the community it is intended to serve. Thank you again for your thoughtful and constructive feedback.
Theoretical guarantees on the best-of-n alignment policy
Accept (poster)
Summary: This paper provides a theoretical analysis of the best-of-$n$ policy $\pi^{(n)}$, which is a simple inference-time method for aligning language models in which $n$ samples are drawn from a reference policy $\pi_{ref}$ and the highest-ranking one according to a reward function is selected. The authors first disprove a commonly used analytical expression for $KL (\pi^{(n)} || \pi_{ref})$. They demonstrate that the formula $\widetilde{KL}\_n := \log(n) - \frac{n-1}{n}$, which is widely cited in the literature, is actually just an upper bound on the true KL divergence. They theoretically characterize when this bound is tight and when it can be arbitrarily loose. Additionally, the paper develops a new estimator for $KL (\pi^{(n)} || \pi_{ref})$ that more accurately tracks the true value, as demonstrated by experiments. The authors also analyze the win rate of $\pi^{(n)}$ against $\pi_{ref}$. This analysis shows that the win rate is upper bounded by $\frac{n}{n+1}$ and characterizes when this bound is tight. The authors also compare best-of-$n$ with another rejection sampling approach called rewind-and-repeat, ultimately showing the superiority of best-of-$n$ in terms of win rate vs. KL divergence tradeoffs. The paper concludes that the actual win rate vs. KL divergence tradeoffs for best-of-$n$ are better than what has been reported in the literature when using the incorrect formula, and that very good tradeoffs are achievable with $n < 1000$. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No, but they seem to be correct by intuition. Experimental Designs Or Analyses: I reviewed all experiments mentioned in the main text. My concern is with the experiment in Figure 4, where the authors test language models on benchmarks. The reward is set as the log-likelihood of the reference model, which already correlates with the policies, thus reducing generality. 
I suggest the authors provide more explanation about this choice, justify this approach, or discuss a more general reward function. Supplementary Material: No Relation To Broader Scientific Literature: This paper is significant to the field of language model alignment for several reasons: - It corrects a fundamental misunderstanding about the KL divergence of best-of-$n$. It also provides theoretical justification for why best-of-$n$ performs so well empirically, showing that its win rate vs. KL divergence tradeoffs are actually better than previously thought. - It offers theoretical bounds and a more accurate estimator for KL divergence that can be used to better evaluate alignment methods. These results can give hints on choosing the best $n$ in practice. Essential References Not Discussed: No Other Strengths And Weaknesses: ### Strengths - **Theoretical guarantees:** The paper provides mathematical analysis, establishing bounds on both KL divergence and win rates. It corrects a misunderstanding in the literature by showing the formula $\widetilde{KL}\_n := \log(n) - \frac{n-1}{n}$ is only an upper bound for $KL (\pi^{(n)} || \pi_{ref})$. - **Practical estimator:** The proposed KL divergence estimator has practical value for researchers evaluating alignment methods, as it more accurately captures the true KL divergence, as demonstrated by experiments. - **Comparative analysis:** The comparison with rewind-and-repeat provides context on why best-of-$n$ performs well in practice through the lens of tradeoffs between KL divergence and win rates. ### Weaknesses - **Limited empirical validation:** While the paper includes examples, it lacks extensive empirical validation on real language models and datasets. More comprehensive experiments would strengthen the claims. Additionally, in the experiment in Figure 4, the reward is highly correlated with the reference policy, which represents a constrained case. 
Other Comments Or Suggestions: No Questions For Authors: Regarding the empirical validation on benchmarks, what will the results be if the reward model is no longer the log likelihood of $\pi_{ref}$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your insightful review and encouraging comments. > My concern is with the experiment in Figure 4, where the authors test language models on benchmarks. The reward is set as the log-likelihood of the reference model, which already correlates with the policies, thus reducing generality. I suggest the authors provide more explanation about this choice, justify this approach, or discuss a more general reward function. We have now experimented with other tasks and other rewards (e.g., generation length) to showcase that this is not just an issue that pertains to the log-likelihood reward. See https://anonymous.4open.science/r/bon. We would like to reiterate that anytime there is a non-trivial chance of collision between the outcomes in a set of size $n$, the analytical formula for the KL divergence overestimates the true KL. --- Rebuttal Comment 1.1: Comment: Thank you for the additional experiment results -- they look good to me. I'll keep my positive score.
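The collision-induced overestimation described in this rebuttal can be reproduced exactly for a small discrete reference policy. Assuming outcomes are sorted by strictly increasing reward (so best-of-n returns the reward-maximum of $n$ i.i.d. draws), the best-of-n pmf is $F(i)^n - F(i-1)^n$ with $F$ the CDF, which lets one compare the true KL against the analytical formula (an illustrative sketch, not the paper's estimator):

```python
import math

def best_of_n_pmf(p, n):
    """Exact pmf of the best-of-n policy when the outcomes of p are listed
    in strictly increasing reward order: P(i) = F(i)^n - F(i-1)^n."""
    pmf, prev, cdf = [], 0.0, 0.0
    for pi in p:
        cdf += pi
        pmf.append(cdf**n - prev**n)
        prev = cdf
    return pmf

def kl(q, p):
    """KL(q || p) over a shared finite support."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

p = [0.5, 0.5]   # reference policy where collisions among n samples are likely
n = 2
pi_n = best_of_n_pmf(p, n)             # [0.25, 0.75]
true_kl = kl(pi_n, p)                  # ≈ 0.1308
formula = math.log(n) - (n - 1) / n    # ≈ 0.1931, strictly larger
```

With a non-trivial chance that two of the $n$ samples coincide, as here, the analytical formula strictly exceeds the true KL, matching the looseness the rebuttal describes; with many low-probability outcomes the two values agree closely.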
Summary: This paper discusses the best-of-n policy, a method used for inference-time alignment of generative models. The key idea is to draw n samples from a reference policy, rank them based on a reward function, and select the highest-ranking sample. The passage critiques a commonly used analytical expression in the literature, which claims that the KL divergence between the best-of-n policy and the reference policy is equal to log(n)−(n−1)/n. The authors disprove this claim, showing that it is instead an upper bound on the actual KL divergence. They also explore the tightness of this bound and propose a new estimator for the KL divergence that provides a tighter approximation. Additionally, the authors analyze the win rate of the best-of-n policy against the reference policy, showing that it is upper bounded by n/(n+1). They derive bounds on the tightness of this characterization and conclude by analyzing the tradeoffs between win rate and KL divergence. Their findings suggest that very good tradeoffs can be achieved with n < 1000. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Assumption 2.2: It is posited that the output space of the language model is finite. This assumption simplifies the theoretical analysis; however, in practical applications, the output space of generative models is typically infinite (as in natural language generation tasks), which may limit the universality of the theory. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make good sense for the problem or application at hand. Theoretical Claims: I have examined all the theoretical proofs to ensure their accuracy. Experimental Designs Or Analyses: The experiment did not involve validation with real generative models, resulting in uncertainty regarding the practical application effectiveness of the theoretical results. 
In particular, the output space of real generative models is typically large and complex, necessitating further verification of the applicability of the theoretical results in these scenarios. Supplementary Material: I have reviewed all the supplementary material to ensure their accuracy. Relation To Broader Scientific Literature: This paper refutes the commonly used expression for KL divergence in the literature, demonstrating that it merely serves as an upper bound to the actual KL divergence. Furthermore, a novel estimator for KL divergence is proposed. Essential References Not Discussed: Most of the essential references have already been cited within this paper. Other Strengths And Weaknesses: Strengths: 1. This paper not only analyzes the KL divergence but also delves into the win rate of the best-of-n strategy, proving an upper bound of the win rate to be n/(n+1). Furthermore, the authors derive a trade-off relationship between the win rate and KL divergence, demonstrating how adjusting n can achieve a better balance between performance and model alignment in various scenarios. 2. This paper introduces a novel estimator for KL divergence and validates its effectiveness through numerical experiments. This estimator provides a more accurate reflection of the KL divergence between the best-of-n strategy and the reference strategy. Weaknesses: 1. This paper does not provide detailed guidance on the specific implementation and optimization of the best-of-n strategy in practical applications. Other Comments Or Suggestions: I do not have any other comments or suggestions. Questions For Authors: I do not have any important questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your insightful review and encouraging comments. > This assumption simplifies the theoretical analysis; however, in practical applications, the output space of generative models is typically infinite (as in natural language generation tasks), which may limit the universality of the theory. We would like to mention that (1) in practice, language models have a finite token vocabulary and generation eventually stops, so the number of possible generated sequences is finite (even though it could be very large), which is covered by our theory; (2) this assumption could be relaxed using limit arguments, and we are happy to include a relaxation of it; (3) this result has already been extended to models with potentially continuous outputs (which would not even be countable) by Mroueh (2024) (subsequent to our work), so the extension beyond what is proved here is certainly possible. Mroueh, Youssef. "Information theoretic guarantees for policy alignment in large language models." arXiv preprint arXiv:2406.05883 (2024). > The experiment did not involve validation with real generative models, resulting in uncertainty regarding the practical application effectiveness of the theoretical results. In particular, the output space of real generative models is typically large and complex, necessitating further verification of the applicability of the theoretical results in these scenarios. We would like to mention that we already showed such a case with prompts from the AlpacaEval dataset and Gemma2 IT 9B in Figure 4. We have now expanded the scope of experiments with more tasks and rewards in https://anonymous.4open.science/r/bon > This paper does not provide detailed guidance on the specific implementation and optimization of the best-of-n strategy in practical applications. 
While we agree with the reviewer that we do not provide an optimization of best-of-n, our results imply that with n ~ 100–1000, the win rate against the base model will already be saturated without the need for excessively large n, which also implies that a KL divergence of ~5 is enough to reach a good policy. This implies that RLHF practitioners can aim to keep the KL divergence of their aligned models <10 and achieve good policies, which also significantly helps mitigate reward overoptimization.
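The saturation argument above rests on the $n/(n+1)$ win-rate ceiling, which is attained when rewards are continuous (no ties). A quick Monte Carlo check of that value (an illustration, not from the paper's experiments):

```python
import random

def win_rate_best_of_n(n, trials=50_000, seed=1):
    """Estimate P(reward of the best of n i.i.d. draws exceeds the reward of
    one independent reference draw) for a continuous reward; exact: n/(n+1)."""
    rng = random.Random(seed)
    wins = sum(
        max(rng.random() for _ in range(n)) > rng.random()
        for _ in range(trials)
    )
    return wins / trials

# n = 100 already yields a win rate of ~0.99, so pushing n into the tens of
# thousands buys almost nothing beyond n ~ 100-1000.
estimates = {n: win_rate_best_of_n(n) for n in (1, 4, 100)}
```

Since the ceiling approaches 1 hyperbolically in n, almost all of the achievable win rate is reached well before n = 1000, consistent with the recommendation above.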
Summary: This paper revisits the best-of-n alignment policy: given n samples from a reference language model, pick the sample that scores highest under a reward (alignment) function. A long-used formula in prior papers, $D_{\mathrm{KL}}=\log(n)-(n-1)/n$, has been cited as the KL divergence of the best-of-n policy from the reference. The authors show that although this expression frequently appears, it is only an upper bound—not an exact value—in many realistic scenarios. They derive exact or tighter bounds, provide a new practical KL estimator, and show that in some regimes (particularly if a small number of high-probability completions exist under the reference), the conventional formula can substantially overestimate the real KL drift. In addition to these theoretical clarifications, they also: (1) Analyze blockwise best-of-n (where re-ranking occurs multiple times during generation). (2) Compare best-of-n to “rewind-and-repeat,” another rejection-sampling-like policy. (3) Show how their refined analysis can help practitioners accurately measure or cap KL divergence in alignment pipelines—important for settings where controlling distribution shift is crucial (e.g., compliance, RLHF with KL constraints). Claims And Evidence: The paper’s key claims are relatively well substantiated by proofs and by numeric examples carefully chosen to highlight the regimes where the old formula fails. Methods And Evaluation Criteria: The real data experiments are limited and do not show large-scale or broad tasks. The variance of the proposed KL estimator in practical, large n sampling is not deeply explored. Theoretical Claims: The main theorems (e.g. Theorem 3.1, Theorem 3.4, Theorem 5.3) use standard tools from information theory (KL divergence definitions, integrals, combinatorial arguments) and appear consistent. The proofs in the appendices logically match the statements in the main text. 
**Potential concerns**: The theorems assume finite support or at least that the model probabilities for outcomes can be well-defined in a somewhat discrete sense. Real LMs often have large vocabularies, but the authors argue that for bounding or approximate computation, this finite-support assumption can be relaxed in practice. Experimental Designs Or Analyses: The chosen synthetic examples clearly demonstrate the paper’s main theoretical points.The real prompts confirm that skewed probability distributions are common in real queries. **Potential Weaknesses**: The real data experiments are limited and do not show large-scale or broad tasks. The variance of the proposed KL estimator in practical, large n sampling is not deeply explored. Supplementary Material: The appendices contain extended proofs (e.g. for blockwise best-of-n, Lemma 6.1, Theorem 6.3). Additional experimental details and further discussion of alternative rejection sampling strategies are also present. Everything in the supplement aligns well with the main text. Relation To Broader Scientific Literature: - **Alignment & KL Regularization**: The paper ties into RLHF and reward-based alignment (Christiano et al., Ouyang et al., etc.). Many works track or constrain $KL(\pi||\pi_{ref} )$ as a measure of “distribution drift.” - **Controlled Decoding**: Ties well to prior re-ranking or decoding approaches like beam search, blockwise best-of-n, or other re-ranking methods. - **Relation to Rejection Sampling**: They connect best-of-n and “rewind-and-repeat” as forms of rejection sampling, referencing literature on approximate generation, safe decoding, or iterative expansions. - **Practical Gains**: The new results clarify that in a “$\delta$-bound” setting (where the reference rarely repeats outcomes) the old formula is indeed tight, but in the presence of a few large-prob outcomes, the real KL can be drastically smaller. 
Essential References Not Discussed: The authors already cite many highly relevant alignment and preference-based optimization works (including those that have used or mentioned the log(n) formula for best-of-n). No glaring omission is evident. Overall, the bibliography covers standard RLHF, preference optimization, and controlled decoding references. Other Strengths And Weaknesses: **Strengths** - Clarification of a widely cited formula: The old rule was occasionally treated as exact, which could be misleading in some alignment or compliance scenarios. - Practical new estimator: Offers a run-time method to measure “how much drift best-of-n actually used,” potentially letting practitioners adapt n on the fly or confirm they have not exceeded a KL budget. - Extensions: The coverage of blockwise decoding and comparisons to alternative rejection sampling (rewind-and-repeat) broadens applicability. **Weaknesses** - Empirical scope: The paper mostly uses small or contrived examples. Large-scale, high-n tasks in more varied domains could strengthen claims about real-world use. - Variance analysis: The new estimator’s variance or confidence intervals in complex distributions is left for future work. Other Comments Or Suggestions: - (1) While it is likely that in many real tasks with big n and small probabilities per token, the difference is small, showing this in a broader study would underscore the conditions under which the new analysis provides a “significant” improvement. - (2) Because the paper is about “how far the policy can deviate,” it might be worth an explicit mention that tighter estimates can help compliance or risk-management teams ensure that the model does not drift too far from a safe baseline. - (3) The authors’ theoretical perspective might help design an adaptive best-of-n procedure that halts once an approximate KL threshold is reached, or once a certain minimum reward is achieved. 
Questions For Authors: **Estimator Variance** - How large can the variance of the proposed KL estimator get in real usage (especially if $\epsilon_n$ is extremely small)? - Would you recommend running it repeatedly and averaging over many sequences, or do you envision an online, per-sample approach? **Blockwise vs. Full-Sequence** - Have you observed in practice (beyond the toy examples) that blockwise best-of-n greatly outperforms single-step best-of-n in reward–KL tradeoffs, or do you expect diminishing returns? **Real-World Deployments** - Can you give a concrete scenario (e.g. in compliance or enterprise alignment) where the old formula’s overestimation of KL meaningfully hampered performance or forced overly conservative constraints? A real case study would clarify the direct practical payoff. **Infinite/Very Large Vocabularies** - Your proofs rely on finite support or a finite set of “possible outcomes” in each context. For large-vocabulary LMs, do you see a direct extension or an approximate argument? Would it still hold to treat large but finite vocab sizes similarly? Code Of Conduct: Affirmed. Overall Recommendation: 3
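For readers unfamiliar with the blockwise variant asked about above: blockwise best-of-n re-ranks during generation, drawing n candidate continuations at each block boundary and keeping the one with the highest partial reward. A schematic sketch of that mechanism (the `sample_block` and `reward` interfaces are hypothetical stand-ins for a language model and a partial-reward scorer):

```python
import random

def blockwise_best_of_n(sample_block, reward, n, num_blocks, seed=0):
    """Greedy blockwise best-of-n: at each of num_blocks boundaries, draw n
    candidate continuations of the current prefix and keep the best-scoring."""
    rng = random.Random(seed)
    seq = []
    for _ in range(num_blocks):
        candidates = [seq + sample_block(rng) for _ in range(n)]
        seq = max(candidates, key=reward)
    return seq

# Toy stand-ins: each block is a single random "token" and the partial
# reward is the running sum, so larger tokens are preferred at each step.
seq = blockwise_best_of_n(lambda rng: [rng.random()], sum, n=8, num_blocks=5)
```

Because selection happens num_blocks times, this explores a far larger effective candidate set than single-shot best-of-n with the same per-step n, which helps explain why similar reward-KL tradeoffs can be reached with a much smaller n.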
Rebuttal 1: Rebuttal: Thanks for your insightful review and encouraging comments. > The real data experiments are limited and do not show large-scale or broad tasks. The goal of the experiments was to show cases where there could be a gap between the analytical formula and the true KL. We have now experimented with other prompts for machine translation and other rewards, namely output length, and still show that the KL estimates could be loose in more tasks. See https://anonymous.4open.science/r/bon > The variance of the proposed KL estimator in practical, large n sampling is not deeply explored. Would you recommend running it repeatedly and averaging over many sequences, or do you envision an online, per-sample approach? Thanks for this very important question. The proposed estimator is upper bounded by the analytical formula, and hence by log(n). It is also non-negative, i.e., lower bounded by 0. Hence, a crude upper bound on its standard deviation is $\log(n)$. Thus, if the estimator is averaged out over $M = O(\log(n) \log\frac{1}{\delta})$ responses, the standard deviation could be driven down to below $\delta$. Given that we are generally interested in $n < 1000$, the dependence on $n$ is manageable. Having said that, given each of the $M$ batches has $n$ iid samples (total of $M \times n$ iid samples), we can use a bootstrapped estimator and should be able to remove the dependence on $\log(n)$. We will think more about the variance estimation and include it in the next iteration. We would like to also mention that the variance of the estimator in Figure 2 is exactly zero given that all outcomes have the same likelihood. We have now also computed and plotted the standard deviation of the estimator in Figure 3 in the new PDF. See https://anonymous.4open.science/r/bon. > The theorems assume finite support or at least that the model probabilities for outcomes can be well-defined in a somewhat discrete sense. Real LMs often have large vocabularies ... 
The finite support assumption could be relaxed in practice by limit arguments given that PMF is bounded, and we will include this relaxation in the next version. We would like to also mention that subsequent to our work, Mroueh (2024) in fact has proved a variant of Theorem 3.1 under much weaker assumptions that even applies to continuous distributions such as diffusion process. Mroueh, Youssef. "Information theoretic guarantees for policy alignment in large language models." arXiv preprint arXiv:2406.05883 (2024). > Showing this in a broader study would underscore the conditions under which the new analysis provides a “significant” improvement. A concrete scenario (e.g. in compliance or enterprise alignment) where the old formula’s overestimation of KL meaningfully hampered performance? We would like to emphasize that the message of this paper is not just about providing a new estimator for KL divergence. We also prove that the existing formula is an upper bound and hence the works that use it still give guarantees on the win rate vs KL divergence of best-of-n. > Because the paper is about “how far the policy can deviate,” it might be worth an explicit mention that tighter estimates can help compliance or risk-management teams ensure that the model does not drift too far from a safe baseline. Thanks for the suggestion. Will explicitly mention it. > The authors’ theoretical perspective might help design an adaptive best-of-n procedure that halts once an approximate KL threshold is reached, or once a certain minimum reward is achieved. Best-of-infinity with halting when a certain threshold on reward is reached is akin to rewind-and-repeat which we analyze in Section 6, where we show that the resulting tradeoff between win rate and KL is less favorable compared to best-of-n. However, the resulting tradeoff between win rate and cost (number of decoding trials) is more favorable than best-of-n, which is important in test-time scaling. 
Hence, it remains to be seen how to best design a method that achieves Pareto optimal tradeoffs between compute, reward, and KL divergence (which captures preservation of capabilities other than what is captured by reward), which might involve combining the rewind-and-repeat with best-of-n for some finite n as the reviewer suggests. We will add a discussion to this end in the concluding remarks as an area for future work. > Have you observed in practice (beyond the toy examples) that blockwise best-of-n greatly outperforms single-step best-of-n in reward–KL tradeoffs, or do you expect diminishing returns? Blockwise best-of-n has been studied comprehensively by (Mudgal et al., 2024). While the reward vs KL tradeoffs are generally not better than best-of-n (due to the fact that best-of-n is already almost optimal), blockwise best-of-n generally allows to achieve similar reward vs KL tradeoffs with ~10x smaller n, which means a 10x reduction in test-time compute. Mudgal, Sidharth, et al. "Controlled Decoding from Language Models." ICML, 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed and thoughtful rebuttal, as well as for addressing the concerns I raised in the review. I appreciate the additional information and clarifications you have provided, and I acknowledge the efforts to improve the paper based on the feedback. --- 1. *Real Data Experiments*: I’m glad to see that you’ve conducted additional experiments with various tasks, such as machine translation and output length as a reward. While I appreciate these further experiments, I still believe that more large-scale, varied real-world tasks would strengthen the empirical validation of your claims, particularly for more complex scenarios. It would be beneficial to see how your findings generalize across a wider range of tasks, especially with larger models. --- 2. *KL Estimator Variance*: Your explanation of the variance of the KL estimator and the use of bootstrapping is insightful. 
I appreciate the inclusion of standard deviation plots in the new version, which help clarify the behavior of the estimator in practice. However, I still think that further exploration of the variance, particularly in high n scenarios, would provide a clearer understanding of the estimator's reliability in real-world settings. As mentioned, this could be explored in future iterations to ensure the estimator’s stability across different distributions. --- 3. *Finite Support Assumption*: Thank you for the clarification regarding the finite support assumption and the reference to Mroueh’s (2024) work. It’s reassuring to know that there is progress on relaxing these assumptions for continuous distributions, and I look forward to seeing the extension of your results in that direction in the next version. This would further bolster the applicability of your work to large-vocabulary language models. --- 4. *Concrete Scenario and Practical Impact*: I appreciate your emphasis on the theoretical contribution, which shows that the old formula is an upper bound. I also welcome the decision to explicitly mention the impact of tighter estimates for compliance and risk-management teams, which would make the practical value of your work even more apparent. As you mention, understanding where the old formula’s overestimation hampers performance in real cases is an important next step, and I look forward to seeing such discussions in future versions of the paper. --- 5. *Adaptive Best-of-n Procedure*: I find the comparison to rewind-and-repeat insightful, and I agree that combining both methods for a Pareto-optimal tradeoff between compute, reward, and KL divergence could be a promising direction for future work. I look forward to the addition of this discussion in the concluding remarks, as it presents a valuable avenue for enhancing best-of-n’s applicability in real-world use cases. --- 6. 
*Blockwise Best-of-n*: Thank you for the clarification regarding blockwise best-of-n and the work by Mudgal et al. (2024). The reduction in test-time compute is indeed a compelling reason for using blockwise best-of-n, especially in large-scale scenarios. Although, as you note, the tradeoffs may not always be better than single-step best-of-n, it seems that the approach could still yield practical benefits in terms of compute efficiency. --- Overall, I appreciate the detailed response and the additional insights provided. The clarifications regarding the variance of the estimator, blockwise best-of-n, and the relaxation of finite support assumptions strengthen the paper’s contributions. While I still believe that more extensive real-world experiments and further exploration of variance would be valuable, I recognize the theoretical and practical significance of your work, and I maintain my overall score. --- Reply to Comment 1.1.1: Comment: We greatly appreciate the reviewer taking the time to engage with our paper and responses and providing an additional round of feedback. We are fully in agreement with your assessment and will provide individual responses below. > Real Data Experiments We agree with the reviewer that more large-scale experiments would strengthen the findings, especially for practitioners. We plan to perform additional experiments on all prompts from Anthropic Helpfulness, Harmlessness, and Reddit text summarization to broaden the scope of the empirical study in the next version of this paper. Due to the limited time, we couldn't get to this during the rebuttal period. Please let us know if you have any other suggestions. > KL Estimator Variance We agree that better understanding the variance of the estimator is important. As promised we will include theoretical bounds on the variance of the KL estimator in the next version of the paper and will explore understanding it better to the extent possible. 
> Finite Support Assumption As promised, we will include this relaxation in the next version of the paper. > Concrete Scenario and Practical Impact As promised, we will include these discussion points in the next version of the paper. > Adaptive Best-of-n Procedure Thanks again for this suggestion. We will include these discussion points in the next version of the paper. > Overall, I appreciate the detailed response and the additional insights provided. The clarifications regarding the variance of the estimator, blockwise best-of-n, and the relaxation of finite support assumptions strengthen the paper’s contributions. While I still believe that more extensive real-world experiments and further exploration of variance would be valuable, I recognize the theoretical and practical significance of your work, and I maintain my overall score. Thanks again for your insightful feedback, and for engaging with us in multiple rounds of discussions. We are committed to addressing these shortcomings to the full extent in the next version of the paper.
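The two decoding strategies compared in this thread can be sketched side by side. The following is a toy simulation (made-up uniform sampler and identity reward, not the paper's Section 6 analysis): best-of-n always pays a fixed cost of n draws, while rewind-and-repeat keeps drawing until a reward threshold is met, trading a random cost for a guaranteed minimum reward.

```python
import random

def best_of_n(sample, reward, n):
    # fixed cost: draw n i.i.d. responses, return the highest-reward one
    ys = [sample() for _ in range(n)]
    return max(ys, key=reward), n

def rewind_and_repeat(sample, reward, phi, max_tries=1000):
    # random cost: resample until the reward clears the threshold phi
    for t in range(1, max_tries + 1):
        y = sample()
        if reward(y) >= phi:
            return y, t
    return y, max_tries

random.seed(0)
sample = lambda: random.randrange(10)  # toy "policy": uniform over rewards 0..9
reward = lambda y: y

y_bon, cost_bon = best_of_n(sample, reward, n=5)
y_rr, cost_rr = rewind_and_repeat(sample, reward, phi=9)
print(y_bon, cost_bon, y_rr, cost_rr)
```

Averaging the returned costs over many trials illustrates the compute-vs-reward tradeoff the authors describe: rewind-and-repeat's expected cost grows as the threshold `phi` approaches the maximum reward, while best-of-n's cost stays at `n`.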
Summary: The paper provides theoretical analyses for the widely used Best-of-N (BoN) policy, especially focusing on the KL divergence from the reference model and its win-rate. They first point out that the conventionally used formula for the KL divergence does not hold and actually gives only an upper bound. They evaluate the gap between the true KL divergence and the upper bound, showing the non-triviality of the gap when N is sufficiently large (Section 3). They also propose an alternative estimator for the KL divergence, which is still a conjecture, but experimental results indicate the plausibility of the conjecture (Section 4). Next, they analyze the win-rate of the BoN policy against the reference model, and provide an upper bound and its tightness evaluation (Section 5). Meanwhile, they also analyze the rewind-and-repeat model $\pi_{\Phi}$ with a threshold $\Phi$, as a variant of the BoN model. In such cases, both the win-rate and KL divergence can be exactly calculated (Section 6). Finally, they discuss the conventional evaluation of the win-rate vs KL divergence tradeoff, and they conclude that the previous results are too pessimistic and can be improved with their results from Section 3 and Section 5 (Section 7).

Claims And Evidence:
- Claim 1: The conventionally used formula for the KL divergence does not hold and actually gives only an upper bound, especially when N is sufficiently large.
  - Theorems 3.1 and 3.4 analyze the tightness of the formula, and Theorem 3.6 shows the looseness when N is sufficiently large.
- Claim 2: The proposed estimator for the KL divergence might give a tighter upper bound than the conventionally used formula.
  - Empirically checked in Figures 2-4.
  - However, the claim is theoretically still a conjecture.
- Claim 3: They analyzed the win-rate and show that it can be approximated by $\frac{N}{N+1}$ as expected in previous literature when N is sufficiently small.
  - Proved by Theorems 5.3 and 5.4.
  - Question: How can equation (34) be derived?
- Claim 4: In the case of the rewind-and-repeat model $\pi_{\Phi}$ with a threshold $\Phi$, which can be seen as a variant of the BoN model, both the KL divergence and win-rate can be explicitly evaluated in terms of the probability distribution of the reference model.
  - Provided by Lemma 6.2 and Theorem 6.3.
- Claim 5: The previous (empirical) results on the tradeoff curve between the win-rate and KL divergence are too pessimistic and can be improved by their results.
  - Theoretically obtained as corollaries of Theorems 3.1, 3.4, 5.3, 5.4 (Theorems A.6, A.7).
  - Empirically checked in Figures 7-8.

Overall, their claims are solid and well-supported by theoretical and empirical results.

Methods And Evaluation Criteria: The proposed estimator for the KL divergence has some evidence both theoretically and empirically, but the actual guarantee is still a conjecture. However, this is not a fault of the paper since it is explicitly stated in Conjecture 4.4.

Theoretical Claims: I roughly checked all proofs in the Appendix.

Experimental Designs Or Analyses: Their experiments seem to be well-designed.

Supplementary Material: See Theoretical Claims.

Relation To Broader Scientific Literature: One of the key contributions of this paper is that it fixes the conventional misunderstanding of the KL divergence between the BoN and reference models. The other analyses are also valuable to the community of inference-time alignment.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: See Claims and Evidence.

Other Comments Or Suggestions: N/A

Questions For Authors: See Claims and Evidence.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
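As a quick numerical illustration of the $\frac{N}{N+1}$ approximation in Claim 3: for i.i.d. continuous (tie-free) rewards, the best of $N$ draws beats one fresh reference draw with probability exactly $N/(N+1)$, since among $N+1$ i.i.d. draws the maximum is equally likely to be any of them. A short Monte Carlo check (a toy sketch, not from the paper) confirms this:

```python
import random

random.seed(0)

def win_rate_mc(n, trials=20000):
    # Monte Carlo: how often does the best of n i.i.d. rewards
    # beat one independent reference draw?
    wins = 0
    for _ in range(trials):
        best = max(random.random() for _ in range(n))
        wins += best > random.random()
    return wins / trials

n = 4
print(win_rate_mc(n), n / (n + 1))  # the estimate lands close to n/(n+1)
```

Ties in discrete reward settings break this exact identity, which is part of what makes the paper's win-rate analysis subtler.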
Rebuttal 1:
Rebuttal: Thanks for your insightful review and encouraging comments.

> How can equation (34) be derived?

Eq. (34) is derived by combining Theorem 5.3, i.e., Eq. (30), and Corollary 5.5, i.e., Eq. (33). We will clarify this.

> The proposed estimator for the KL divergence has some evidence both theoretically and empirically, but the actual guarantee is still a conjecture. However, this is not a fault of the paper since it is explicitly stated in Conjecture 4.4.

While we acknowledge this weakness of our work, we would like to mention two points:
* We have tested this conjecture in tens of thousands of randomly generated numerical examples with varying support sizes and n and have not found a counterexample.
* If we replaced $\epsilon_n$ with $\epsilon_\infty$, then the estimator would lead to an actual upper bound on the KL divergence (Corollary 4.2), which at least suggests that for large $n$ the estimator gives an upper bound.
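Numerical checks of the kind mentioned above can be sketched in a few lines. The toy script below (not the authors' code; it assumes a small discrete reference distribution whose outcomes are listed in increasing reward order, with distinct rewards) computes the exact KL divergence of the best-of-n policy and compares it against the widely used analytical expression $\log(n) - \frac{n-1}{n}$, which the paper proves is only an upper bound:

```python
import math

def bon_kl_exact(p, n):
    """Exact KL(pi_BoN || pi_ref) for a discrete reference distribution p,
    listed in increasing reward order with distinct rewards. Best-of-n keeps
    the highest-reward draw, so pi_BoN(y_i) = F(i)^n - F(i-1)^n (F = CDF)."""
    kl, F, prev = 0.0, 0.0, 0.0
    for pi in p:
        F += pi
        q = F ** n - prev ** n  # probability that best-of-n outputs this outcome
        prev = F
        if q > 0:
            kl += q * math.log(q / pi)
    return kl

n = 8
p = [0.05, 0.15, 0.3, 0.5]            # toy reference distribution
exact = bon_kl_exact(p, n)
formula = math.log(n) - (n - 1) / n   # conventional analytical expression
print(exact, formula)                 # the exact KL is strictly smaller here
```

For `n = 1` the best-of-n policy coincides with the reference policy and the exact KL is 0, while the formula gives `log(1) - 0 = 0`, so the bound is tight there.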
LADA: Scalable Label-Specific CLIP Adapter for Continual Learning
Accept (poster)
Summary: This paper presents LADA (Label-specific ADApter), an approach for continual learning with vision-language models like CLIP. LADA enhances scalability and performance by generating discriminative, label-specific features. Unlike existing methods that partition parameters across tasks, LADA appends lightweight memory units to the frozen CLIP image encoder, enabling task-agnostic knowledge aggregation while preventing catastrophic forgetting through feature distillation. LADA ensures efficient training without updating the frozen parameters of CLIP. Experiment results show that LADA achieves state-of-the-art performance in continual learning.

Claims And Evidence: The authors claim that “Our method is scalable and efficient, adding only small sets of learnable memory units for novel tasks, without requiring gradient propagation to the frozen CLIP image encoder.” However, the evidence provided in Table 4 primarily focuses on ablation studies without direct comparisons to other methods. To better support this claim, the authors are encouraged to include efficiency metrics of existing methods for a more comprehensive evaluation.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: There are no theoretical claims.

Experimental Designs Or Analyses: The experimental design is well-structured, and the analyses are generally strong. However, additional comparisons in ablation studies, particularly in terms of computational efficiency, would further support the paper’s claims.

Supplementary Material: The provided source code is a valuable addition for reproducibility.

Relation To Broader Scientific Literature: This work eliminates the need for auxiliary prompt or adapter selection, offering a unified and scalable solution for continual learning scenarios.

Essential References Not Discussed:
1. The related work section should include a dedicated subsection discussing continual learning (CL) methods based on CLIP that utilize parameter-efficient fine-tuning (PEFT) techniques, such as prompting, adapters, and LoRA.
2. While many early CL methods using PEFT involve a selection process for determining the appropriate prompt or adapter for a given task sample, recent efforts aim to avoid this selection step. The authors should discuss related works such as CODA-P (CVPR’23), APG (ICCV’23), and EvoPrompt (AAAI’24).
3. The paper lacks discussion and comparison with prior CL methods that leverage prototypes for pseudo-feature replay. Some methods, such as HiDe-Prompt (NeurIPS’23), employ a single prototype per class, while others, such as CPP [1], utilize multiple prototypes per class. A discussion on these approaches would enhance the contextualization of LADA within the broader CL landscape.

[1] Steering prototypes with prompt-tuning for rehearsal-free continual learning. WACV 2024.

Other Strengths And Weaknesses:
Other strengths:
1. The paper is generally easy to follow.
2. The proposed method demonstrates superior performance in the Cross-domain Task-Agnostic Incremental Learning setting.

Other weaknesses:
1. Some notations are not clearly introduced. For example, what does the $\mathbf{i}$ represent in Eq. (5)?
2. More explanation is needed for the statement “a linear weighting between the logits produced by LADA and the corresponding text features”. Clarifying this process is crucial for understanding how final predictions for seen classes are made.

Other Comments Or Suggestions: Missing “:” at the end of the line above Eq. (9).

Questions For Authors: My primary concern is the lack of discussion on related works. Please refer to the section “Essential References Not Discussed” for further details.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Dear Reviewer oKy3,

Thank you for your detailed review. We address your concerns one by one in the following.

> **W1 Claims And Evidence:** the evidence provided in Table 4 focuses on ablation without comparisons to other methods. **Experimental Designs Or Analyses:** computational efficiency would further support the paper’s claims.

**A1** We conducted a computational efficiency experiment. We compare our method with other methods using a consistent input batch size of 64 in the full-shot setting. Time comparisons for MoE-Adapters and ours were conducted on a single NVIDIA 4090 GPU. The results are summarized in the table below:

|Method|Train Params (M)|Max GPU Memory (GB)|Time (s/batch)|
|-|-|-|-|
|LWF|149.6|31.42|--|
|ZSCL|149.6|25.67|--|
|MoE-Adapters|59.8|21.83|0.337|
|Ours|**11.2**|**18.51**|**0.289**|

> **W2 Essential References Not Discussed:**
>
> 1. The related work should discuss CL methods based on CLIP that utilize PEFT techniques.
> 2. The authors should discuss related works such as CODA-P, APG, and EvoPrompt.

**A2** We appreciate this suggestion and will add a dedicated subsection titled **"Continual Learning Methods Based on PEFT"** to the related work section of the revised manuscript:

Parameter-efficient fine-tuning (PEFT) has been widely adopted in continual learning to enhance representation learning for specific tasks. Prompt-based methods, such as L2P [1], expand the prompt pool as new tasks are learned, relying on auxiliary losses. **CODA-Prompt** [2] proposes an end-to-end prompt selection method to increase plasticity. **APG** [3] introduces a prompt generator to reduce the gap between pretraining and future tasks, while **EvoPrompt** [4] proposes an adaptive and continuous prompting approach to alleviate issues of selection mismatches and limited prompt shareability. MoE-Adapters [5] insert a mixture of adapters into the image encoder, activating a few for each task. However, these methods typically involve an auxiliary parameter selection step during inference to extract image features using the expected prompts or adapters, which can lead to misassignments and degrade classification performance.

> **W3 Essential References Not Discussed:** The paper lacks discussion and comparison with prior CL methods that leverage prototypes for pseudo-feature replay, such as HiDe-Prompt and CPP.

**A3** We are grateful for the reviewer’s suggestion. We will incorporate a discussion of these approaches in the related work of the revised manuscript:

Prototype-based methods leverage pseudo-feature replay to mitigate forgetting in continual learning. **HiDe-Prompt** [6] optimizes hierarchical components with task-specific prompts using a single prototype per class, while **Contrastive Prototypical Prompt** [7] employs task-specific prompt-tuning with contrastive learning and multiple prototypes per class to address semantic drift and prototype interference. However, as highlighted in our introduction, while backward forgetting has been effectively addressed, these approaches often overlook the impact of pseudo-feature replay on the model’s original generalization ability, leading to forward forgetting.

> **W4** What does the **i** represent in Eq. (5)?

**A4** We apologize for the oversight. The notation **i** represents the image embedding, as defined in line 115 of the original manuscript. We will explicitly clarify this directly at Eq. (5) in the revised version.

> **W5** More explanation is needed for the statement “a linear weighting between the logits produced by LADA and the corresponding text features”. Clarifying this process is crucial for understanding how final predictions for seen classes are made.

**A5** We regret the confusion. The linear weighting is a fixed hyperparameter (set to 1.0 in experiments), not learned, balancing LADA logits and text features equally. This will be clarified in the revision.

> **W6 Questions For Authors:** My primary concern is the lack of discussion on related works.

**A6** We have rewritten the related work in **A2** and **A3**. We commit to integrating these revisions into the manuscript.

[1] Wang, Zifeng, et al. Learning to prompt for continual learning. *CVPR* 2022.
[2] Smith, James Seale, et al. Coda-prompt: Continual decomposed attention-based prompting for rehearsal-free continual learning. *CVPR* 2023.
[3] Tang, Yu-Ming, et al. When prompt-based incremental learning does not meet strong pretraining. *ICCV* 2023.
[4] Kurniawan, Muhammad Rifki, et al. Evolving parameterized prompt memory for continual learning. *AAAI* 2024.
[5] Yu, Jiazuo, et al. Boosting continual learning of vision-language models via mixture-of-experts adapters. *CVPR* 2024.
[6] Wang, Liyuan, et al. Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality. *NeurIPS* 2023.
[7] Li, Zhuowei, et al. Steering prototypes with prompt-tuning for rehearsal-free continual learning. *WACV* 2024.

---

Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, which addresses most of my concerns, particularly regarding the clarification of technical details and the discussion of related work. Therefore, I maintain my original recommendation of "Weak accept".

---

Reply to Comment 1.1.1:
Comment: Thank you for your constructive feedback and for acknowledging our rebuttal. We appreciate your thoughtful review and are glad that our clarifications addressed your concerns.
Summary: This paper primarily proposes an adapter-based method initialized with class cluster centers for CLIP-based Cross-Domain Task-Agnostic Incremental Learning (X-TAIL). This approach enables class discrimination within a unified feature space using class-specific parameters without requiring task-specific parameter selection. Additionally, it models old class features using a Gaussian Mixture Model (GMM) and obtains old class features by sampling from the GMM distributions. The approach achieves state-of-the-art performance in X-TAIL.

## Update after rebuttal
After carefully reviewing the authors' responses and the other reviewers' comments, I appreciate that my main concerns have been adequately addressed. I am now inclined to recommend acceptance.

Claims And Evidence: Yes, their claims are clear and convincing.

Methods And Evaluation Criteria: The proposed method makes sense for the problem.

Theoretical Claims: The paper does not include formal theoretical claims or proofs.

Experimental Designs Or Analyses: The experimental design and analysis follow the setup established in previous works introducing this problem. The study utilizes ten different domain datasets as ten sequential tasks for continual learning. This experimental design is well-founded and appropriate.

Supplementary Material: I reviewed the supplementary material, including the definitions of metrics, implementation details, and additional experimental results.

Relation To Broader Scientific Literature: This work builds upon prior research in incremental learning and pre-trained model fine-tuning. Unlike prompt tuning in incremental learning, there is no need to select additional parameters. Class features are trained using k-means cluster center initialization instead of the random initialization of the vanilla adapter.

Essential References Not Discussed: There are several essential references related to Distribution-Preserved Training that are either briefly mentioned or not cited in the paper. [1] proposes a method that augments prototypes using Gaussian noise. While it is mentioned in the paper, its similarity to the proposed approach, where GMM sampling is used to augment prototypes, warrants a more detailed discussion. [2], [3], and [4] all adopt the strategy of sampling old class features from a class-specific Gaussian distribution, yet they are not cited. The key distinction of this work is the use of a Gaussian Mixture Model (GMM) instead of a simple Gaussian distribution, which should be explicitly discussed. Additionally, the method [5], which is highly relevant to the training of the Label-Specific CLIP Adapter, is only cited but not discussed. Although [5] focuses on a CLIP-based few-shot adapter, its overall structure, initialization strategy, training approach, non-linearity design, and final feature extraction process share significant similarities with the proposed method. A more detailed discussion is needed to clarify these connections.

[1] Zhu, F., Zhang, X. Y., Wang, C., Yin, F., & Liu, C. L. (2021). Prototype augmentation and self-supervision for incremental learning. CVPR (pp. 5871-5880).
[2] Tang, Y. M., Peng, Y. X., & Zheng, W. S. (2023). When prompt-based incremental learning does not meet strong pretraining. ICCV (pp. 1706-1716).
[3] Zhang, G., Wang, L., Kang, G., Chen, L., & Wei, Y. (2023). SLCA: Slow learner with classifier alignment for continual learning on a pre-trained model. ICCV (pp. 19148-19158).
[4] Huang, L., Cao, X., Lu, H., & Liu, X. (2024, September). Class-incremental learning with CLIP: Adaptive representation adjustment and parameter fusion. ECCV (pp. 214-231). Cham: Springer Nature Switzerland.
[5] Zhang, R., Zhang, W., Fang, R., Gao, P., Li, K., Dai, J., ... & Li, H. (2022, October). Tip-Adapter: Training-free adaption of CLIP for few-shot classification. ECCV (pp. 493-510). Cham: Springer Nature Switzerland.

Other Strengths And Weaknesses: The novelty of this paper is questionable. The key module, the Label-Specific CLIP Adapter, bears a strong resemblance to the CLIP-based few-shot learning method Tip-Adapter [5]. Tip-Adapter initializes the adapter by using multiple features from same-class images to set the corresponding class-specific weights in the adapter. It also explores fine-tuning this initialized adapter to further improve performance. The adapter structures are highly similar, and the output formulation of Tip-Adapter is identical to Equation (6) in this paper. Additionally, both methods employ the same technique to convert inner products into non-negative values. The overall framework follows a similar strategy, where the final classification result is obtained by weighting the adapter's output and the text encoder's output. The primary difference lies in the experimental setup: this paper focuses on full-shot datasets, allowing the use of k-means to obtain class sub-centers for initialization, whereas Tip-Adapter, designed for few-shot scenarios, directly initializes the adapter using a limited number of sample features. Additionally, this paper incorporates a continual learning setting, introducing the freezing of old class classifiers, a widely used technique in the continual learning domain. However, this adaptation is a general strategy rather than a novel methodological contribution.

Other Comments Or Suggestions: In the ablation study results, the improvement of BF achieved by the presented modules appears to be limited. Further analysis is needed to clarify and justify the effectiveness of these modules. Additionally, on page 5, in the "Overall Framework" section, the authors mention that "the final prediction is obtained by applying a linear weighting between the logits produced by LADA and the corresponding text features." It is unclear whether this linear weighting is a hyperparameter and how it is determined. Further clarification on this aspect would be beneficial.

Questions For Authors: Similarity to Tip-Adapter: The proposed Label-Specific CLIP Adapter appears to be highly similar to Tip-Adapter, with key similarities in initialization, structure, and final classification formulation. The primary differences seem to be (1) using k-means clustering for adapter initialization instead of directly leveraging a few-shot feature set, and (2) incorporating a continual learning setting with frozen classifiers for old classes. Given these similarities, could the authors clarify the novelty of their approach beyond these modifications?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
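For concreteness, the cache-adapter formulation the review discusses (keys holding class features or k-means sub-centers, an exponential map turning cosine similarity into non-negative weights, and one-hot label values) can be sketched roughly as follows. This is an illustrative toy with hypothetical shapes and random stand-in features, not code from either paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# hypothetical sizes: d-dim features, C classes, m memory units (sub-centers) per class
d, C, m, beta = 16, 3, 4, 5.0

# stand-in class prototypes; the real methods would use image features / k-means centers
protos = rng.normal(size=(C, d))
keys = l2norm(np.repeat(protos, m, axis=0) + 0.3 * rng.normal(size=(C * m, d)))
values = np.repeat(np.eye(C), m, axis=0)  # one-hot label of each memory unit

def adapter_logits(f):
    # cache read: cosine affinity -> non-negative weight -> per-class label vote
    affinity = l2norm(f) @ keys.T
    weights = np.exp(-beta * (1.0 - affinity))  # maps similarity into (0, 1]
    return weights @ values

print(adapter_logits(protos[0]))  # class 0 collects the largest vote
```

The final prediction in both methods then linearly combines these cache logits with the text-encoder logits; per the review, whether that weight is a tuned hyperparameter is one of the open questions.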
Rebuttal 1:
Rebuttal: Dear Reviewer vJJv,

Thank you for your detailed review. We address your concerns one by one in the following.

> **W1 Other Strengths And Weaknesses:** Label-Specific CLIP Adapter bears a strong resemblance to the CLIP-based few-shot learning method Tip-Adapter [5]. **Questions For Authors:** Similarity to Tip-Adapter.

**A1** We agree with the reviewer that the adapter module in LADA shares similarities with Tip-Adapter. **However, our key contribution lies in generalizing this module for continual learning tasks—a scenario where Tip-Adapter fails to perform effectively.** We demonstrate that the adapter (which we term *label-specific memory*) can naturally capture task-specific data distributions and mitigate forgetting. To maintain scalability and simplicity, we initialize the memory cache using k-means clustering and adopt Tip-Adapter's output formulation. In our view, adapting Tip-Adapter to continual learning is nontrivial: LADA introduces significant improvements to address both backward and forward forgetting.

Moreover, LADA seamlessly extends to few-shot learning, matching Tip-Adapter's original setting. Under the 16-shot benchmark (using CLIP ViT-B/16, following [6]), LADA outperforms Tip-Adapter-F by **1.6%** (see table below). Notably, it achieves this with only **1/4** of the feature embeddings required by Tip-Adapter, underscoring its efficiency.

||Aircraft|Caltech101|DTD|EuroSAT|Flowers|Food|Pets|Cars|Sun397|Average|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Tip-Adapter-F|44.6|95.7|70.8|85.9|96.2|86.8|92.6|82.3|76.0|81.2|
|Ours|**49.6**|**96.4**|**71.5**|**86.8**|**97.4**|**87.7**|**94.0**|**84.8**|**76.8**|**82.8**|

The performance results for Tip-Adapter-F are sourced from recent work [6].

**Summary of contributions**

**Contribution 1:** We design a text encoder fine-tuning framework which enhances the classification ability and naturally retains the transfer ability of the CLIP model.

**Contribution 2:** We propose LADA, a lightweight CLIP adapter that condenses task-agnostic knowledge to transform CLIP representations into label-specific features, eliminating the need for parameter selection as seen in previous mainstream methods.

**Contribution 3:** We propose Distribution-Preserved Training, which controls the influence of prototypes by incorporating their contribution weights into the loss function. We not only address backward forgetting but also tackle forward forgetting, a challenge overlooked in prior methods.

> **W2 Essential References Not Discussed:** There are several essential references related to Distribution-Preserved Training that are either briefly mentioned or not cited in the paper.

**A2** We thank the reviewer for highlighting these relevant papers, and we will include citations and discussions. Our approach aligns with a standard Gaussian distribution when $\lambda_2=1$, similar to the prototype augmentation method in [1]. When $\lambda_2 > 1$, our loss function (Equation 10) leverages the mixture weights $\pi$ from the GMM to perform more sophisticated augmentation. As shown in Table 4, when $\lambda_2=1$, the *Transfer* performance is lower compared to when $\lambda_2 > 1$. This indicates the effectiveness of DPT: while prior methods in [1-4] do not address *forward forgetting* in pre-trained models, our approach explicitly tackles this issue.

> **W3 Other Comments Or Suggestions:** The improvement of BF achieved by the presented modules appears to be limited.

**A3** In Tab. 2 and Tab. 3, our basic fine-tuning framework (BF) already outperforms several baseline methods, demonstrating the challenges prior approaches face in task-agnostic settings. In the full-shot setting, these modules improve *Transfer* by an average of **2.3%**, *Average* by **2.0%**, and *Last* by **1.3%** compared to the BF framework alone.

> **W4 Other Comments Or Suggestions:** It is unclear whether this linear weighting is a hyperparameter.

**A4** We regret the confusion. The linear weighting is a fixed hyperparameter (set to 1.0 in experiments), not learned, balancing LADA logits and text features equally. This will be clarified in the revision.

We hope this rebuttal addresses your concerns. Please let us know if further refinements are needed!

[1] Zhu, Fei, et al. Prototype augmentation and self-supervision for incremental learning. *CVPR* 2021.
[2] Tang, Yu-Ming, et al. When prompt-based incremental learning does not meet strong pretraining. *ICCV* 2023.
[3] Zhang, Gengwei, et al. SLCA: Slow learner with classifier alignment for continual learning on a pre-trained model. *ICCV* 2023.
[4] Huang, Linlan, et al. Class-incremental learning with CLIP: Adaptive representation adjustment and parameter fusion. *ECCV* 2024.
[5] Zhang, Renrui, et al. Tip-Adapter: Training-free adaption of CLIP for few-shot classification. *ECCV* 2022.
[6] Zanella, Maxime, et al. Low-rank few-shot adaptation of vision-language models. *CVPR* 2024.
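The GMM-based pseudo-feature replay discussed in A2 (drawing old-class features from a per-class mixture whose components carry weights $\pi$) can be sketched in a few lines. This is a toy illustration with made-up 2-D statistics, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical per-class GMM: mixture weights pi_k, component means mu_k,
# and a shared diagonal standard deviation
pis = np.array([0.6, 0.4])
mus = np.array([[0.0, 0.0], [4.0, 4.0]])
std = 0.5

def sample_pseudo_features(n):
    # pick a component per sample by its mixture weight, then add Gaussian noise
    comps = rng.choice(len(pis), size=n, p=pis)
    return mus[comps] + rng.normal(scale=std, size=(n, mus.shape[1]))

feats = sample_pseudo_features(1000)
print(feats.mean(axis=0))  # close to the mixture mean 0.6*mu0 + 0.4*mu1
```

With `pis = [1.0]` this degenerates to the single-Gaussian augmentation of [1]; multiple weighted components are what distinguish the GMM variant.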
Summary: Instead of partitioning parameters across tasks, this paper proposes LADA, which appends lightweight, label-specific memory units to the frozen CLIP image encoder, enabling discriminative feature generation by aggregating task-agnostic knowledge. The method achieves state-of-the-art performance in continual learning settings on several datasets. Claims And Evidence: Yes Methods And Evaluation Criteria: It makes sense in general Theoretical Claims: Most proofs were checked. Experimental Designs Or Analyses: Most experiments were checked. Please refer to Other Strengths and Weaknesses. Supplementary Material: Yes Relation To Broader Scientific Literature: The method appends lightweight, label-specific memory units to the frozen CLIP image encoder, enabling discriminative feature generation by aggregating task-agnostic knowledge. Essential References Not Discussed: In [1], RANPAC also projects features into a higher-dimensional feature space, which has been proven to be beneficial for continual learning. It has not been demonstrated that the effect of LADA is better than that of nonlinear random projection [1]. [1] McDonnell, Mark D., et al. "Ranpac: Random projections and pre-trained models for continual learning." NeurIPS, 2023. Other Strengths And Weaknesses: 1. It is recommended to add a flowchart of the LADA method or end-to-end loss function in the Overall Framework section, which will make the method better understood. 2. LADA projects features into a higher-dimensional feature space, which has been proven to be beneficial for continual learning [1]. However, in the ablation experiment, it has not been demonstrated that the effect of LADA is better than that of nonlinear random projection [1]. I think [1] should be included in the references. 3. Experiments on some important datasets are necessary, such as CIFAR100. It is recommended to supplement them. [1] McDonnell, Mark D., et al. "Ranpac: Random projections and pre-trained models for continual learning." NeurIPS, 2023. Other Comments Or Suggestions: Please refer to Other Strengths and Weaknesses. Questions For Authors: Please refer to Other Strengths and Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer EaVq, Thank you for your detailed review. We address your concerns one by one in the following. > **W1**: **Essential references not discussed of [1]** and the second point in Other Strengths And Weaknesses of **comparisons with RanPAC [1]** **A1**: We acknowledge the reviewer’s comments and address the concerns about the omission of RanPAC [1] as an essential reference, as well as the need for a comparison between RanPAC and our proposed method. 1. Core Idea of RanPAC RanPAC [1] projects features into a higher-dimensional space, which has been shown to benefit continual learning. It tackles multicollinearity in CLIP’s original feature dimensions by employing **random Gaussian projection matrices** and **nonlinear activation functions** (e.g., ReLU). Specifically, it extracts CLIP features using a frozen encoder, projects them via a Gaussian matrix $W$, applies nonlinear transformations, and uses ridge regression on 0-1 encoded labels to train classifiers. This approach leverages second-order statistics to reduce feature redundancy and enhance classifier accuracy. 2. Key Differences and Advantages of LADA - **Feature Optimization** RanPAC optimizes features using static **random Gaussian projection matrices** to reduce multicollinearity in CLIP features, but it lacks adaptability to task-specific nuances, often failing to capture fine-grained or diverse class characteristics effectively. In contrast, LADA introduces a **label-specific memory** that dynamically amplifies the CLIP features most relevant to each class. This process condenses features into sparse, high-dimensional representations tailored to individual tasks, enhancing discriminability without modifying the frozen CLIP encoder. By leveraging this class-specific adaptability, LADA achieves superior plasticity and stability.
- **Cross-Task Adaptation** RanPAC’s dependence on **fine-tuning the CLIP encoder for the first task** incurs substantial computational overhead and risks eroding CLIP’s pre-trained general knowledge, making it impractical for continual learning across diverse tasks. In contrast, LADA takes a fundamentally different approach by **freezing the CLIP encoder entirely** and training a lightweight **label-specific adapter**. 3. Empirical Validation To compare LADA with RanPAC, we integrate RanPAC into our framework (denoted as BF+RanPAC+DPT, where BF refers to baseline fine-tuning and DPT to Distribution-Preserved Training) and evaluate it against our method (BF+LADA+DPT). In the X-TAIL 16-shot setting, the results demonstrate that BF+LADA+DPT outperforms BF+RanPAC+DPT by **1.4% in Transfer**, **1.4% in Average**, and **1.0% in Last** metrics, averaged across the 10 tasks. ||Aircraft|Caltech101|DTD|EuroSAT|Flowers|Food101|Mnist|Pets|Cars|Sun397|***Average***| |-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |**Transfer**| |BF+**RanPAC**+DPT|--|74.4|35.4|34.6|63.4|83.3|36.9|87.6|64.7|60.8|60.1| |Ours|--|**75.0**|**36.1**|**35.9**|**66.3**|**83.7**|**42.1**|**88.0**|**65.3**|**61.4**|**61.5**| |**Average**| |BF+**RanPAC**+DPT|47.0|86.7|61.0|71.4|83.3|84.3|59.5|89.4|68.6|62.2|71.3| |Ours|**49.1**|**91.0**|**61.3**|**71.6**|**84.4**|**85.0**|**62.8**|**89.7**|**69.2**|**62.9**|**72.7**| |**Last**| |BF+**RanPAC**+DPT|47.1|90.3|68.3|87.2|96.5|85.4|93.4|**93.7**|84.2|74.4|82.1| |Ours|**49.6**|**93.7**|**69.3**|**86.9**|**96.7**|**86.9**|**93.8**|**93.7**|**84.6**|**76.0**|**83.1**| [1] McDonnell, Mark D., et al. "Ranpac: Random projections and pre-trained models for continual learning." NeurIPS, 2023. > **W2**: It is recommended to add a flowchart of the LADA method or end-to-end loss function in the Overall Framework section **A2**: We appreciate the reviewer’s suggestion.
The overall framework and end-to-end loss function of LADA are available in an [anonymized link](https://anonymous.4open.science/r/ICML25-LADA-4B78/README.md). We will include them in the revised manuscript. > **W3** Experiments on some important datasets are necessary, such as CIFAR100. **A3** Thank you for your suggestion. CIFAR-100 is indeed an important dataset. However, due to overlapping class names with other datasets, it was not included in the original X-TAIL dataset to ensure accurate evaluation. Nevertheless, we have conducted additional experiments **including CIFAR-100 in X-TAIL** to further demonstrate the robustness of our approach. Under the 16-shot setting, our method still achieves state-of-the-art results, with improvements of **4.3%**, **2.0%**, and **1.0%** on the Transfer, Average, and Last metrics, respectively. Please refer to the [anonymized link](https://anonymous.4open.science/r/ICML25-LADA-4B78/README.md) for detailed experimental results. We hope this rebuttal addresses your concerns. Please let us know if further refinements are needed! --- Rebuttal Comment 1.1: Comment: Thank you for the responses, which have addressed most of my concerns. I have re-rated the paper to “weak accept”. --- Reply to Comment 1.1.1: Comment: Thank you sincerely for your thoughtful review and for carefully considering our responses. We truly appreciate your recognition of our efforts in addressing the concerns, and we are grateful for your updated rating.
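For context on the comparison in A1 above, the RanPAC-style pipeline it describes (a fixed random Gaussian projection, a nonlinear activation, then ridge regression on 0-1 encoded labels) can be sketched on toy features. All sizes and values here are our own placeholders, not taken from either method's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for frozen-encoder features (real CLIP features are 512/768-d).
n, d, M, n_classes = 200, 16, 256, 4
feats = rng.normal(size=(n, d))
labels = rng.integers(0, n_classes, size=n)

# 1) Fixed random Gaussian projection into a higher-dimensional space.
W = rng.normal(size=(d, M)) / np.sqrt(d)
# 2) Nonlinear activation (ReLU).
H = np.maximum(feats @ W, 0.0)
# 3) Ridge regression on one-hot (0-1 encoded) labels, in closed form.
Y = np.eye(n_classes)[labels]
lam = 1e-2
beta = np.linalg.solve(H.T @ H + lam * np.eye(M), H.T @ Y)

# Classification = argmax over the regressed class scores.
preds = np.argmax(H @ beta, axis=1)
train_acc = (preds == labels).mean()
```

The projection `W` is fixed after initialization and only the closed-form ridge solution is updated as classes arrive, which is what makes the scheme cheap but, as argued in A1, static with respect to task-specific structure.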
Summary: The paper proposes a task incremental learning approach for CLIP encoders. In contrast to using adapters for the image encoder, it learns task (or class) specific memories represented by learnable vectors. The classification is performed via dot products of these task specific memories and CLIP image embeddings. The training mechanism attempts to prevent forgetting by multiple techniques including freezing memories of old tasks, separating samples from old tasks from the new ones, suitable modeling of distribution of clusters of past memories etc. The method also performs fine tuning for the text encoder for the new classes with mechanisms to prevent forgetting. The method was tested on multiple standard datasets, compared against relevant baselines and evaluated in different settings to illustrate capability of learning new classes as well as retaining classification performances on the old ones. Claims And Evidence: --- Methods And Evaluation Criteria: I found the method description sloppy and confusing. 1. It is not mentioned in the introduction or at the beginning of the method section that the proposed method also finetunes the text encoder. Line 121 starts a new section titled "A simple baseline" which confused me as to whether this is a baseline method that was compared against or part of the overall approach -- I think it is the latter. It should have been clearly mentioned that finetuning the text encoder is part of the proposed approach. 2. The finetuning of the text encoder does not clearly mention the full form of the loss function used. If I understood correctly, Eqns 3 and 4 are used for current and past tasks respectively and the total loss is a summation of the two. But it is not clear in the text. 3. Similarly, Line 206 suddenly, and without much justification, introduces GMM modeling of the p^i_j vectors. This again would confuse the reader: is Eqn 8 or 10 used for the previous task? Here too, the writeup does not clearly state the full loss function.
4. The introduction is unnecessarily long and not well written. Theoretical Claims: --- Experimental Designs Or Analyses: Sufficient experimentation (with analysis) was performed to show the strength of the method. Evaluation in 3 settings (transfer, last and avg) as well as illustration of performance consistency over learning steps in Fig 2 are both good attempts to convince the reader of the merits of this approach. An analysis or ablation on why GMM modeling was needed would be good to justify this choice. Supplementary Material: --- Relation To Broader Scientific Literature: Will rely on other reviewers for novelty assessment. Essential References Not Discussed: --- Other Strengths And Weaknesses: --- Other Comments Or Suggestions: Is "CLIP Adapter" the right term for the technique proposed? Would it not give the impression that the method is using adapters for image encoder before one reads the intro? Something like Task Memories for Scalable Continual Learning with CLIP Encoders might be more reflective of the method? Just a thought. Questions For Authors: --- Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 4dD8, Thank you for your detailed review. We address your concerns one by one in the following. > **W1** It should have been clearly mentioned that finetuning the text encoder is part of the proposed approach. **A1** We sincerely apologize for the lack of clarity and fully agree with your assessment that the role of text encoder fine-tuning should have been explicitly stated early in the paper. We will make the following modifications in the next version. 1. **Introduction (Section 1):** - We will add a paragraph in the introduction explicitly stating that our proposed approach jointly optimizes both the image features and text encoders. Specifically, we will emphasize that we first establish a text encoder fine-tuning approach for continual learning by combining parameter freezing and distillation techniques. 2. **Methodology (Section 3):** - We will explicitly outline the full pipeline, including text encoder fine-tuning for continual learning, before introducing implementation details. Also, an [overall framework](https://anonymous.4open.science/r/ICML25-LADA-4B78/README.md) will be added, illustrating both the text encoder fine-tuning module and the Label-Specific CLIP Adapter module. - The section titled **"A Simple Baseline via Text Encoder Fine-tuning"** will be renamed to **"Text Encoder Fine-tuning Framework for Continual Learning"** to clarify its role as an ablated variant of our approach. We will add a transition sentence: "To address continual learning in the X-TAIL scenario, we first introduce a baseline framework with text encoder fine-tuning only." > **W2** The finetuning of the text encoder does not clearly mention the full form of the loss function used. If I understood correctly, Eq. 3 and 4 are used for current and past tasks respectively and the total loss is a summation of the two. **A2** We apologize for the confusion.
In the revised manuscript, we will add the following sentence in Section 3.2 after Equation 4: **"The total training loss is the sum of the current task loss (Eq. 3) and the distillation loss (Eq. 4), enabling joint optimization of both current and past task objectives."** > **W3** Is Eq. 8 or 10 used for the previous task? Here too, the writeup does not clearly state the full loss function. **A3** We will clarify that the total loss combines Eq. 7 (current task) and **Eq. 10** (Distribution-Preserved Training loss for the **previous task**) in the next version. > **W4** The introduction is unnecessarily long and not well written. **A4** Thank you for the constructive feedback. We will revise the introduction by: 1. Merging paragraphs 2-3 to concisely present the dual challenges of *memory stability* (preserving past knowledge) and *learning plasticity* (adapting to new tasks) in continual learning. 2. Removing detailed discussions of prior methods (e.g., regularization-based approaches) and shifting them to the Related Work section. > **W5** An analysis or ablation on why GMM modeling was needed would be good to justify this choice. **A5** We acknowledge that the choice of GMM should be explicitly justified. We will add the following discussion in the next version of the paper. **Choice of GMM**: Distilling only cluster centers loses fine-grained distribution information. GMM models the **full feature distribution** of past tasks, which controls the influence of prototypes by incorporating their contribution weights $\pi$ into the loss function (Eq. 10). **Impact of GMM**: Table 3 shows that DPT based on GMM boosts Transfer and Average performance. Table 4 shows that increasing the number of image prototypes $\lambda_2$ further improves Transfer performance. > **W6** Is "CLIP Adapter" the right term for the technique proposed? Would it not give the impression that the method is using adapters for the image encoder before one reads the intro? **A6** Thank you for raising this concern.
The term "CLIP-Adapter" has been recognized in vision-language research [1] to denote lightweight feature adaptation after the frozen CLIP backbone extracts representations, not input-level modifications. This naming convention emphasizes post-encoder refinement for downstream tasks, aligning with its technical definition and community usage. We will clarify this distinction explicitly in the revised manuscript to avoid potential misinterpretation. [1] Gao, Peng, et al. Clip-adapter: Better vision-language models with feature adapters. *International Journal of Computer Vision* 132.2 (2024): 581-595. > **W7** Will rely on other reviewers for novelty assessment. **A7** Please refer to **A1** and **Summary of contributions** of **reviewer vJJv**. We hope this rebuttal addresses your concerns. Please let us know if further refinements are needed!
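The loss composition clarified in A2 and A3 (a current-task term plus a preservation term for past tasks) can be sketched as follows. The L2 distillation form and all toy tensors are our assumptions for illustration; the paper's Eq. 4 and Eq. 10 may take different forms:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def current_task_loss(logits, labels):
    """Cross-entropy on the current task (stand-in for Eq. 3 / Eq. 7)."""
    p = softmax(logits)
    return float(-np.log(p[np.arange(len(labels)), labels] + 1e-12).mean())

def distillation_loss(frozen_feats, tuned_feats):
    """Keep fine-tuned text features close to the frozen ones; an L2
    penalty is one common choice (stand-in for Eq. 4 / Eq. 10)."""
    return float(((frozen_feats - tuned_feats) ** 2).mean())

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 5))                  # current-task logits
labels = rng.integers(0, 5, size=8)
frozen = rng.normal(size=(5, 16))                 # text features before tuning
tuned = frozen + 0.01 * rng.normal(size=(5, 16))  # after fine-tuning

# Total loss = current task loss + distillation loss, as stated in A2/A3.
total = current_task_loss(logits, labels) + distillation_loss(frozen, tuned)
```

The distillation term stays small as long as fine-tuning does not move the old-class text features far from their frozen values, which is exactly the forgetting-prevention behavior the rebuttal describes.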
In-Context Adaptation to Concept Drift for Learned Database Operations
Accept (poster)
Summary: The paper introduces an online adaptation framework to tackle concept drift in learned database operations. The topic is related to the application of machine learning. The proposed FLAIR leverages in-context adaptation through dynamic context memory and Bayesian meta-training, enabling models to adjust to evolving data distributions without retraining. The framework comprises a Task Featurization Module for standardizing inputs and a Dynamic Decision Engine pre-trained on synthetic data to generalize across tasks. Experiments on benchmarks like STATS and JOB-light demonstrate FLAIR’s competitive performance in severe concept drift. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The paper introduces an online adaptation framework to tackle concept drift in learned database operations. The topic is related to the application of machine learning. The proposed FLAIR leverages in-context adaptation through dynamic context memory and Bayesian meta-training, enabling models to adjust to evolving data distributions without retraining. The framework comprises a Task Featurization Module for standardizing inputs and a Dynamic Decision Engine pre-trained on synthetic data to generalize across tasks. Experiments on benchmarks like STATS and JOB-light demonstrate FLAIR’s competitive performance in severe concept drift. Essential References Not Discussed: NA Other Strengths And Weaknesses: # Strengths 1. The topic of concept drift is related to the machine learning field. 2. Solving the application of database with concept drift has the potential to boost real-world applications. 3. The proposed FLAIR is evaluated on several datasets against other baselines. # Weaknesses 1. In the proposed method, the Bayesian meta-training step relies on synthetic datasets sampled from predefined priors. 
Although it seems to improve the performance, the synthetic data may fail to capture real-world distribution complexities, leading to gaps in generalization. The paper lacks validation of how synthetic priors align with actual drift patterns encountered in dynamic databases. 2. Besides, the current framework assumes immediate availability of execution results for context memory updates. In a real system, complex queries or distributed systems may introduce latency or partial feedback, limiting the framework’s applicability in real-time or large-scale environments. Hence, the authors should consider more complex experimental scenarios. 3. The experiments focus on moderate-scale datasets. There is no analysis of the computational overhead, memory footprint, or latency when applied to high-velocity data streams or petabyte-scale databases. # Remark To me, this paper would be a better fit for the international conferences on databases, where the audience would benefit more than the machine learning field. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer C5eR, We sincerely thank you for the constructive feedback that helps improve our work. We address the concerns you raised below. The full new results are provided in: https://anonymous.4open.science/r/ICML25-7F63/ICML25.pdf **[Q1-Synthetic Priors]** We clarify that FLAIR's DDE is meta-trained using tasks from BNNs and SCMs, which model complex dependencies and causal structures fundamental to the data stored in real-world databases. The priors span diverse functional forms reflecting causal relationships in real query-data-output relations and are trained in a fully Bayesian manner over a hyperparameter space broader than any single-task point estimate. We also adopt an Occam’s razor bias as in [1], favoring lower-parameter models, which aligns with Bayesian and cognitive principles. We validate the generalizability of the synthetic priors empirically: we meta-train FLAIR once and for all and test it across 4 core analytics tasks (e.g., CE, AQP, classification), reflecting distinct data-query patterns and dynamics. The strong and consistent performance across all these tasks, even superior to strong baselines fine-tuned on real-world data, confirms the generalizability of our synthetic priors. As suggested, we further validate this with new experiments simulating insert-heavy (IH), delete-heavy (DH), and update-heavy (UH) drifts. Results below (details in Table A) show robust performance, especially in update-heavy cases. This confirms the alignment between our priors and actual drift patterns.
|Data|Mild (IH)|Mild (DH)|Mild (UH)|Severe (IH)|Severe (DH)|Severe (UH)|
|-|-|-|-|-|-|-|
|STATS|2.85|2.96|3.76|3.35|3.26|3.96|
|Job-light|1.68|1.59|1.92|6.26|6.58|7.21|
**[Q2-Latency/Partial Feedback]** We agree with you that the immediate availability of execution results may not always hold.
To address this, as suggested, we extend our evaluation to consider more complex scenarios and design two settings: (1) Delayed feedback: context memory updates every k steps (5% and 10% of its size) rather than in real time. (2) Partial feedback: 5% and 10% of context pairs are randomly dropped to mimic missing or partial feedback. The results below (details in Table G) show that: (1) partial feedback performs slightly better, suggesting that recency of execution results matters more than completeness; (2) FLAIR is resilient in both cases, which confirms its applicability in real systems.
|Data|Drift|Delay 5%|Delay 10%|Partial 5%|Partial 10%|FLAIR|
|-|-|-|-|-|-|-|
|STATS|Mild|4.79|5.32|4.52|4.88|4.49|
||Severe|5.62|5.92|5.53|5.61|5.47|
|Job-light|Mild|2.45|2.56|2.40|2.49|2.36|
||Severe|8.10|8.26|8.06|8.12|7.95|
**[Q3-Larger Datasets]** We share your view that scalability is a key consideration beyond effectiveness. As discussed in Appx G, FLAIR is designed with scalability in mind, with complexity scaling linearly w.r.t. input and context size. Its runtime involves only a feedforward pass over compact context memory for adaptation, without backward gradient updates. We evaluated FLAIR on the real and widely accepted benchmarks STATS and JOB-light, which show low inference latency (Sec 4.3) and stability with context memory scaling (Fig 11, Appx J1). To further evaluate the scalability of FLAIR, we add new experiments on a larger dataset, TPC-H [2] (10 GB, 86M records, complex many-to-many joins). Results in Table H show that FLAIR outperforms baselines in all cases, with 3.5% and 7.2% gains under mild and severe drift. Across all benchmarks, FLAIR maintains low latency (~10 ms/query), a compact memory footprint (~5 MB), and stable storage overhead, showing its suitability for high-throughput, large-scale systems.
|Data|Throughput (q/s)|Memory Footprint (MB)|Latency (ms/q)|Storage Overhead (MB)|
|-|-|-|-|-|
|STATS|109.17|4.97|9.16|47.68|
|Job-light|121.21|5.02|8.25|47.23|
|TPC-H|93.28|5.11|10.82|47.72|
**[Remark]** We clarify that our work sits at the intersection of machine learning and data systems. The key insights of our work lie in addressing two fundamental ML challenges: how to enable on-the-fly adaptation without retraining and how to achieve context-aware prediction, as stated in lines 036–042. These challenges are central to real-time analytics on dynamic structured data for high-stakes applications such as healthcare and stock trading. Thus, we focus on typical structured data analytics tasks, addressing these challenges with contributions grounded in ML, i.e., in-context adaptation and Bayesian meta-training, achieving promising performance. We believe this work contributes to the broader ML community and will resonate with a wide ML audience working on concept drift, structured data analytics, and real-time systems, complementing ongoing efforts in these areas. We are grateful for the chance to discuss our work's potential and wish to thank you again for your valuable input. We hope to have addressed your concerns and would highly appreciate your consideration for re-evaluating your initial rating. [1] Tabpfn. ICLR 2023 [2] https://www.tpc.org/
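To make the "adaptation by forward pass only" point above concrete, here is a toy caricature: a FIFO context memory of recent (query, execution result) pairs read out by nearest neighbours. The k-NN readout is our own stand-in for FLAIR's meta-trained decision engine, not the paper's architecture; it only illustrates why conditioning on fresh execution feedback tracks drift without any retraining:

```python
from collections import deque
import numpy as np

class ContextMemory:
    """FIFO memory of recent (query_features, execution_result) pairs
    with a k-NN readout (illustrative stand-in, not FLAIR's engine)."""

    def __init__(self, capacity=8):
        self.buf = deque(maxlen=capacity)

    def update(self, q_feat, result):
        self.buf.append((np.asarray(q_feat, dtype=float), float(result)))

    def predict(self, q_feat, k=3):
        """Forward pass only: no gradient updates, just a context lookup."""
        if not self.buf:
            return 0.0
        q = np.asarray(q_feat, dtype=float)
        dists = [np.linalg.norm(q - f) for f, _ in self.buf]
        idx = np.argsort(dists)[:k]
        return float(np.mean([self.buf[i][1] for i in idx]))

mem = ContextMemory(capacity=8)
# Stream of (query, true result) pairs whose mapping drifts abruptly at t=8.
for t in range(16):
    q = np.array([t % 4, (t // 4) % 2], dtype=float)
    truth = q.sum() if t < 8 else 10 * q.sum()
    _ = mem.predict(q)    # predict before seeing the result
    mem.update(q, truth)  # fold execution feedback into the context

# After the drift, predictions reflect the new regime immediately,
# because the memory only holds post-drift pairs.
post_drift_pred = mem.predict(np.array([3.0, 1.0]))
```

Because adaptation is just a lookup plus a forward computation, cost scales with context size rather than with any retraining schedule, which mirrors the latency and memory figures reported in the table above.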
Summary: This paper addresses the challenge of concept drift in database operations. Its primary contribution is the introduction of an in-context adaptation framework to tackle this issue. The proposed method, FLAIR, comprises two essential components: a Task Featurization Module and a Dynamic Decision Engine. The effectiveness of FLAIR is demonstrated through both theoretical analysis and experimental evaluation. Claims And Evidence: 1. The intuition behind the in-context adaptation framework is outlined in Theorems 3.1 and 3.2, which I find reasonable. 2. The experimental results demonstrate that the proposed method performs well in real-world applications. However, I am concerned that the baseline methods used for comparison are somewhat outdated and may not specifically address the distribution shift problem. I would expect a comparison with more recent approaches designed to tackle distribution shift. Methods And Evaluation Criteria: I am not well-versed in database operation problems, but the evaluation criteria appear reasonable to me. However, I am confused about how the authors define mild drift and severe drift. Theoretical Claims: I do not consider this paper to have theoretical innovation. However, since its primary focus is on experimental analysis, I believe the current theoretical discussion is sufficient to illustrate and validate the authors' intuition. I don't find any issues in the proof. Experimental Designs Or Analyses: In Figure 7, the authors only compare with some very classical methods. I am confused by the information conveyed in this figure. Supplementary Material: I went through the proof. Relation To Broader Scientific Literature: This paper introduces the concept of "concept drift." However, from my perspective, it appears to be no different from the distribution shift problem. Further clarification is needed.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 5u5N, We thank you for your recognition of our work and your insightful comments. Below, we provide detailed responses to the specific concerns you raised. Full tables/figures are in: https://anonymous.4open.science/r/ICML25-7F63/ICML25.pdf > expect a comparison with more recent approaches. As suggested, we have added recent baselines: SOLID [1], a context-aware fine-tuning adaptation method, and DtACI [2], an adaptive conformal inference (ACI)–based fine-tuning method. Results below (details in Table F) show that FLAIR outperforms both under drift. This is because SOLID’s residual-based detection and DtACI’s ACI mechanism miss subtle, continuous query-driven drift in databases, reducing accuracy, especially in tail cases. In contrast, FLAIR’s in-context adaptation ensures timely and better adaptation to continuous drift. |Data|Method|Mild|Severe| |-|-|-|-| |STATS|SOLID|5.12|5.56| ||DtACI|5.48|6.86| ||FLAIR|4.49|5.47| |Job-light|SOLID|2.41|8.36| ||DtACI|2.54|8.12| ||FLAIR|2.36|7.95| > how the authors define the mild drift and severe drift? Thank you for pointing this out. We clarify the drift setting below: Mild Drift: randomly select 50% of records from the database and independently permute their column values, altering data distribution and inter-column correlations. Severe Drift: randomly select 60% of records, independently permuting their columns, and performing random insertions, deletions, and updates, which affects 10% of the total data (keeping the total data size constant). This setting follows the recent work [3]. We will refine the descriptions in Figure 10 and Appx H6 for clarity. > confused with information conveyed in Figure 7 We appreciate your valuable feedback. We selected classical methods in Figure 7 for their interpretable decision boundaries, which provide an intuitive understanding of model behavior under concept drift. 
To enhance the comparison, we have added a recent method, Type-LDD [4], a drift-aware classifier via knowledge distillation, in Figure C. While Type-LDD surpasses classical methods, FLAIR outperforms it due to Type-LDD’s delayed adaptation from its detect-then-adapt strategy. > the concept of 'concept drift' appears to be no different from the distribution shift problem We agree that clearly distinguishing our core focus, concept drift in databases, from the broader notion of distribution shift is important, and we would like to further clarify this as suggested. Distribution shift typically refers to a mismatch between training and test data distributions, often without inherent temporal evolution. In contrast, concept drift in our work refers to the ongoing, temporal evolution of data and queries in operational database settings. In databases, this form emerges naturally: runtime queries continuously trigger insert/delete/update operations, incurring cumulative changes in both data and queries. These drifts are ongoing and unpredictable, requiring models to adapt in real time rather than being retrained repeatedly on new data distributions. To make this precise, we refine our problem definition as follows: **Definition** (Concept Drift in Databases) Let $\mathbf{d}_t$ denote the underlying data of a database at time t, and $\mathbf{q}_t$ denote a user query at time t. Given the data-query pair $(\mathbf{d}_t,\mathbf{q}_t)$, let $\mathbf{y}_t$ represent the corresponding prediction output (e.g., row counts in cardinality estimation). Concept drift occurs at time t if the joint distribution of queries, data, and predictions changes, i.e., $P_t(\mathbf{q},\mathbf{d},\mathbf{y})\ne P_{t+1}(\mathbf{q},\mathbf{d},\mathbf{y})$. Here, drift can arise from two distinct but interrelated sources: (1) Query drift, from evolving user behavior. (2) Data drift, caused by frequent insert/delete/update operations changing underlying data distributions.
Notably, changing data not only changes the marginal distribution $P(\mathbf{d})$, but also affects the conditional distribution $P(\mathbf{y}|\mathbf{q},\mathbf{d})$, i.e., the same queries may yield different outputs over time. This suggests that concept drift in databases involves shifts in the joint distribution of queries, data, and predictions, and their interaction. We'll further clarify this distinction and update our problem formulation accordingly. We hope our responses above have sufficiently addressed your concerns and can improve your evaluation of our work. [1] Calibration of Time-Series Forecasting- Detecting and Adapting Context-Driven Distribution Shift. KDD 2024 [2] Conformal Inference for Online Prediction with Arbitrary Distribution Shifts. JMLR 2024 [3] Detect, Distill and Update: Learned DB Systems Facing Out of Distribution Data. SIGMOD 2023 [4] Type-LDD: A Type-Driven Lite Concept Drift Detector for Data Streams. TKDE 2024
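The mild-drift protocol described earlier in this rebuttal (independently permuting column values within a random 50% of records) can be sketched directly; it preserves every column's marginal as a multiset while breaking inter-column correlations. All function and parameter names below are ours, purely for illustration:

```python
import numpy as np

def mild_drift(table, frac=0.5, rng=None):
    """Independently permute each column within a random `frac` of rows,
    altering the joint distribution while keeping per-column value sets."""
    if rng is None:
        rng = np.random.default_rng(0)
    table = table.copy()
    rows = rng.choice(len(table), size=int(frac * len(table)), replace=False)
    for col in range(table.shape[1]):
        table[rows, col] = rng.permutation(table[rows, col])
    return table

rng = np.random.default_rng(0)
# Toy table with two perfectly correlated columns.
data = np.column_stack([np.arange(1000), np.arange(1000) * 2])
drifted = mild_drift(data, frac=0.5, rng=rng)

corr_before = np.corrcoef(data[:, 0], data[:, 1])[0, 1]
corr_after = np.corrcoef(drifted[:, 0], drifted[:, 1])[0, 1]
# Marginals are unchanged, but the correlation structure has drifted.
```

On this toy table the column correlation drops from 1.0 to roughly the fraction of unshuffled rows, which is exactly the kind of change in $P(\mathbf{d})$ and $P(\mathbf{y}|\mathbf{q},\mathbf{d})$ the definition above formalizes.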
Summary: This paper focuses on the issue of concept drift in dynamic database environments, which is an interesting and challenging research problem. To address this problem effectively, an online adaptation framework, FLAIR, has been developed. Sufficient experiments and analysis show the performance of the proposed method. I have some suggestions for further improvement. Claims And Evidence: Yes, sufficient experiments on different datasets and baselines have been conducted to evaluate the performance of the proposed method, and detailed analysis of the results also verifies the efficiency. Methods And Evaluation Criteria: The proposed method has been clearly introduced and evaluated, and the evaluation criteria have been chosen appropriately, but I have some suggestions for this part, as shown below: 1. In the dynamic database, the data may have incremental updates, leading to concept drift. However, the users’ queries may also change as time goes on. So, does this work consider the situation where both the data and queries change? 2. It seems the proposed method does not have a drift detection process, so how does it identify the impact of concept drift on model performance, which is very important for learning adaptation? 3. For the proposed FLAIR, a meta-trained part is embedded, which has been frozen, so how is the efficiency of this part ensured when concept drift occurs? I understand you have extracted features from data and queries, but the adaptation process of the meta-trained model should be explained clearly in Figure 3. Besides, training and fine-tuning are common solutions for concept drift adaptation, while the proposed method dynamically adapts to new concepts guided by the context memory during inference; have you compared the performance of these three learning strategies? 4. A runtime complexity analysis of the proposed method is required.
Theoretical Claims: The authors give a theoretical analysis of the model's generalization error bound with sufficient proof and discussion. However, since the target of the proposed method is to handle concept drift in a dynamic database environment, I have not seen the authors analyze the bound under concept drift specifically. That is to say, the theoretical analysis should incorporate a term for concept drift that occurs dynamically. Experimental Designs Or Analyses: I have reviewed the experimental designs and analyses. Two benchmark datasets have been chosen for the experiments; I still suggest the authors add more datasets for model evaluation. Besides, the parameter settings of the proposed method and all the baselines should be introduced in this paper, and an experiment for parameter analysis is needed. Supplementary Material: I have reviewed the supplementary material, especially the theoretical analysis and the analysis of the experimental results. Relation To Broader Scientific Literature: This paper addresses the issue of concept drift in dynamic database environments, which has attracted considerable attention in recent years. Concept drift learning is a challenging research topic in data mining: concept drift occurs in dynamic data, and many previous studies develop drift detection and adaptation methods. This paper develops a method for concept drift adaptation in the scenario of a dynamic database, which is an interesting topic in this area, and sufficient comparison experiments verify the performance. Essential References Not Discussed: The key contribution is an online adaptation framework, called in-context adaptation, for learned database operations under concept drift. Many database-related references have been listed; however, the related work on concept drift learning is insufficient. I suggest adding more references on concept drift with detailed discussion. Other Strengths And Weaknesses: 1. Is this research work in the supervised learning setting? 
I know the definition of concept drift, $P_{t}(y|x) \neq P_{t+1}(y|x)$, is based on the supervised learning setting; so, does the data of each category in the database (at each time point) have ground-truth labels? 2. For Definition 2.1, I think the definition should show the difference between concept drift that occurs in a dynamic data stream and in a dynamic database. Please explain and refine the definition. Other Comments Or Suggestions: Based on the comparison of learning efficiency in Figure 4, a runtime complexity analysis of the proposed method is required. Questions For Authors: I suggest the authors clarify the setting of this work: supervised learning (regression, classification, or both?); the regression and classification experiments should be analyzed separately. Code Of Conduct: Affirmed. Overall Recommendation: 4
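The drift definition the review cites can be illustrated with a minimal toy sketch (synthetic data, not from the paper): the input distribution stays fixed, but the conditional $P_t(y|x)$ flips at a drift point, so a model fit to the pre-drift concept sees its error rate jump.

```python
import numpy as np

# Toy illustration (not from the paper): under concept drift the conditional
# P_t(y|x) changes over time, so a model fit to the old concept sees its
# error rate jump even though the input distribution is unchanged.
rng = np.random.default_rng(0)
DRIFT_POINT = 500

def labels(x, t):
    # Before the drift the concept is y = 1[x > 0]; afterwards it flips.
    return (x > 0).astype(int) if t < DRIFT_POINT else (x <= 0).astype(int)

model = lambda x: (x > 0).astype(int)  # trained on the pre-drift concept

errors = []
for t in range(1000):
    x = rng.normal(size=32)            # inputs drawn the same way throughout
    errors.append(np.mean(model(x) != labels(x, t)))

pre, post = np.mean(errors[:DRIFT_POINT]), np.mean(errors[DRIFT_POINT:])
print(f"error before drift: {pre:.2f}, after drift: {post:.2f}")
```

The marginal $P(x)$ never changes here, which is exactly why drift of this kind cannot be caught by monitoring inputs alone.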
Rebuttal 1: Rebuttal: Dear Reviewer uD1S, We sincerely thank you for your positive review and insightful comments. We address your concerns below and attach new results here: https://anonymous.4open.science/r/ICML25-7F63/ICML25.pdf > Q1: does the work consider both data and query change? Yes, we considered both data and query change in Appx H6 (line 1287). We evaluated the query change of SELECT/UPDATE/INSERT/DELETE with varied ratios, and among them, UPDATE/INSERT/DELETE incur data change. We further evaluate our pretrained model on insert/delete/update-heavy (IH/DH/UH) query changes. Results below (details in Table A) show that FLAIR adapts consistently well to both query and data changes across different settings. |||Mild|||Severe|| |-|-|-|-|-|-|-| |Data|IH|DH|UH|IH|DH|UH| |STATS|2.85|2.96|3.76|3.35|3.26|3.96| |Job-light|1.68|1.59|1.92|6.26|6.58|7.21| > Q2: how to identify the impact of concept drift? Unlike reactive detect-then-adapt methods, FLAIR uses the dynamic context memory to adapt on the fly via the dynamic decision engine, avoiding the detection overhead. Incremental drift tests (Figure 5 in Sec 4.4) and our new abrupt drift tests below (details in Table B) confirm that FLAIR achieves the best performance without explicit detection. |Data|PG|ALECE|DDUp|FT|FLAIR| |-|-|-|-|-|-| |STATS|176.38|12.63|6.91|5.75|4.15| |Job-light|19.41|16.24|6.65|6.25|3.26| > Q3: how to ensure efficiency of meta-trained part? have you compared common learning strategies? DDE is meta-trained only once to approximate $q_{\theta}(y|x,\mathcal{C})$ across diverse tasks. During inference, it remains frozen and adapts efficiently via a forward pass using context $\mathcal{C}$, built from recent query-output pairs, thereby enabling training-free adaptation. As suggested, we update Figure 3 for clarity (see Figure A) and add training from scratch (RT) besides the fine-tuning (FT) and distillation (KD) baselines (details in Table C). 
Results show that RT and FT underperform in adaptation to new concepts, as they fit outdated data, while FLAIR achieves timely and better adaptation. |Data|Drift|RT|FT|FLAIR| |-|-|-|-|-| |STATS|Mild|4.97|5.35|4.49| ||Severe|5.59|5.02|5.47| |Job-light|Mild|3.25|2.45|2.36| ||Severe|8.21|8.09|7.95| > Q4: runtime complexity analysis FLAIR's initialization complexity is $O(\delta \sum n_i)$ with $O(N_v)$ for updating modified records. Query encoding cost $O(N_J+N_F)$ is negligible. TFM incurs $O(d_a\delta^2)$ (self-attention) and $O(d_a\delta)$ (cross-attention). DDE scales linearly as $O(d_a\varrho)$ with context size $\varrho$. Please see Appx G for more details. > theoretical analysis should embed concept drift We clarify that we model concept drift through insertions and deletions, which are key drivers of drift in databases, and derive worst-case bounds. As suggested, we will make the link to concept drift clearer in Sec 3.4. **[Additional Dataset]** We add TPC-H [1] (10GB, 86M records, 8 tables, 61 attributes). Experimental settings follow Appx H6. Results below show that FLAIR outperforms all the baselines, especially in tail and severe cases (details in Table D). |Drift|FT|PG|ALECE|DDUp|FLAIR| |-|-|-|-|-|-| |Mild|3.75|36.15|8.97|6.58|3.62| |Severe|6.11|88.75|38.05|9.65|5.67| **[Parameter Analysis]** As suggested, we clarify key parameter settings: the bin number is 40, the context memory size is 80, the task encoder has 8 attention layers with 8 heads, and the DDE has 12 layers with 4 heads. Detailed settings for FLAIR and baselines will be added to Appx H4. We add analysis on bin number $\delta$ (see Figure B), showing the trade-off between accuracy and training time. Appx J provides more sensitivity analysis on context memory. **[Related Work]** As suggested, we expand related work with more references and discussion (see Table E). 
We group existing methods into: Lazy: retrain models after drift detection, high cost; Incremental: adapt gradually, respond slowly; Ensemble: maintain model pool to cover varied concepts, resource-heavy. In contrast, FLAIR proposes a new in-context adaptation paradigm, enabling timely adaptation without retraining. **[Setting Clarification]** We clarify that our work is in the supervised setting. For CE and AQP, labels come from query execution (Appx H1). For regression/classification, ground truth is available pre- and post-drift, with results in Appx J1/J2, respectively. We'll clarify this in Sec 4.5. **[Definition Refinement]** As suggested, we refine Definition 2.1 to distinguish drift in databases from drift in data streams. In databases, predictions depend on both query and data, and the data evolves due to query-driven operations. We thus refine the input into two distinct but interrelated parts: query and data, both can incur drift in databases. Due to space limit, please see our response to Reviewer 5u5N-Q4 for the updated definition. We hope our clarifications and new results have addressed your concerns and can improve your evaluation of our work. [1] https://www.tpc.org/ --- Rebuttal Comment 1.1: Comment: I have read the author's reply. Thank you very much. The author has explained and answered my comments in detail and provided experimental analysis to verify the effectiveness and innovation of the work. I will improve the score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your invaluable feedback and for raising the score. Your insightful comments are truly inspiring and have greatly enhanced the quality of our work. We sincerely appreciate your time and effort.
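The training-free, in-context adaptation described in the rebuttal to Q3 (a frozen meta-trained engine conditioned on a context memory of recent query-output pairs) can be sketched as follows. The names and the kernel-regression predictor are illustrative stand-ins, not FLAIR's actual API; the point is only that adaptation happens by refreshing the memory, not by gradient updates.

```python
import numpy as np

# Toy sketch of training-free in-context adaptation: a frozen predictor
# conditions on a small context memory of recent (query, output) pairs, so
# refreshing the memory after a drift adapts predictions with no retraining.
class ContextMemory:
    def __init__(self, size):
        self.size, self.pairs = size, []

    def add(self, x, y):
        self.pairs.append((x, y))
        self.pairs = self.pairs[-self.size:]  # keep only the most recent pairs

def frozen_predictor(x, memory, bandwidth=0.5):
    # Stand-in for the meta-trained engine: a kernel-weighted average over the
    # context, i.e. a single forward pass conditioned on the memory contents.
    xs = np.array([p[0] for p in memory.pairs])
    ys = np.array([p[1] for p in memory.pairs])
    w = np.exp(-((xs - x) ** 2) / bandwidth)
    return float(w @ ys / w.sum())

mem = ContextMemory(size=8)
for x in np.linspace(0, 1, 8):
    mem.add(x, 2.0 * x)               # old concept: y = 2x
before = frozen_predictor(0.5, mem)

for x in np.linspace(0, 1, 8):
    mem.add(x, -2.0 * x)              # drifted concept: y = -2x
after = frozen_predictor(0.5, mem)
print(before, after)
```

Note that the predictor's parameters never change between the two calls; only the context memory does, which is what makes the adaptation detection-free and retraining-free.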
Feedforward Few-shot Species Range Estimation
Accept (poster)
Summary: The paper introduces FS-SINR ("few-shot spatial implicit neural representations") for few-shot species range estimations, which is trained on citizen science location data. A key feature of FS-SINR is that once it has been trained, it can be used during inference to predict the range even of previously unseen species in a feed-forward way. The overall approach of FS-SINR is that during inference it takes as input a set of spatial locations (called "context locations") -- together with (optional) metadata, e.g. a textual or image description -- and then it feeds all info through a Transformer, which outputs a species encoding. To assess presence / absence, a query location x is embedded separately (using the same embedding function f as is used for the context locations), and then the species and query location embeddings are multiplied and fed to a sigmoid for binary decision making. The experimental results show that FS-SINR obtains state-of-the-art results on this task (in two different and relevant metrics), especially in the low-data regime. ## update after rebuttal I thank the reviewers for their responses to mine and the other reviewers' comments. There is consensus on accepting the paper and I also keep my accept recommendation. Nice paper! Claims And Evidence: Yes, most claims are supported by clear and convincing evidence (such as the statement that FS-SINR obtains SOTA results on two benchmarks, which is clearly shown e.g. in Fig. 3, and also the results appear extra reliable given that the results holds across two relevant metrics, as well as due to the strong emphasis that has been put on making the baselines as good / comparable as possible, as detailed in Appendix A.2). The one claim I did not find quite as fully convincing is (abstract): "..., in a fraction of the compute time, ...". 
I did not see much more discussion on compute time in the paper, except around L185 that reads: "Our approach is computationally efficient in that once the species embedding is generated it can then be efficiently multiplied by the embeddings for all locations of interest to generate a prediction for a species’ range." <-- That makes sense to me, but given the statement around 'fraction of compute time', I was hoping to see some actual runtime numbers, ideally in the paper, of the proposed method vs baselines. Methods And Evaluation Criteria: Yes, the methods / evaluation criteria definitely make sense for the problem at hand. Examples that support my assessment: * The evaluation is done on not only one, but two, of the common species range estimation (SRE) datasets, which makes claims e.g. about FS-SINR being a new SOTA more reliable. * Also, not only one but two evaluation criteria are used (mean average precision (MAP), but also distance-weighted MAP), both of which are standard for SRE (to the best of my knowledge), and in both of which FS-SINR is the best. * Good that many / most results (e.g. Fig 3) contain error bars with a few seeds, further reinforces the findings. * Finally, several relevant baselines are used for the comparisons (and these are well-detailed in Appendix A2). Theoretical Claims: No, theoretical claims were not made. Experimental Designs Or Analyses: Yes, I did so for all things in the main paper. I found the experimental designs / analyses to be sound and valid. 
Many of the reasons for this assessment are already explained under "Methods And Evaluation Criteria", but here are some additions: * **OBS:** Before I carefully checked Figure A3 (appendix), I had written the following as an experimental design I was _missing_, but then it is actually covered in the appendix, so it instead **furthers the soundness and validity of the experimental designs and analyses** (thus disregard the stuff in quotation marks as something to address in the rebuttal -- it is something good!): "In Sec. 3.2.1, it is discussed how the flexible FS-SINR framework allows for leveraging e.g. image / text info, assuming such info is also available in training. That's great. But what I would have wanted to see somewhere (e.g. appendix) was how the _amount_ of such metadata training affects inference. In other words, what happens if such metadata is used for 10% of the "training instances", 20%, ... 100% (and of course also with 0%). Right now it is as I understand it 50% of the time that text is used, and image is used 90% of the time (?). Similarly -- and also relating to the sentence "Note, we train FS-SINR such that it can use arbitrary subsets of these input tokens during inference" (right before "4. Experiments) -- it would have been nice to see how the 'extra-info-agnostic' approach currently used compares to a variant with 'always-extra-info'. One would perhaps expect that a model that _always_ obtains the metadata would outperform one that is agnostic to the amount of metadata (?)." * I liked the use of qualitative analyses as well, as they furthered my intuitive understanding of FS-SINR, e.g. - Fig. 4 clearly shows the impact that a text input can have - Fig. 7 provides good intuitive insight to the effect of increasing the number of context locations in a few-shot setting. 
+ A small negative, however, is that in my view it is not entirely easy to see that the caption statement "As we increase the number of context locations, the predictions become closer to the expert ranges" always holds. Perhaps it would be possible to compute some quantitative metric for each case, to see that this metric actually improves? (In particular, sometimes things seem to saturate between 5 and 10 locations.) Supplementary Material: Yes, I checked some parts, with a particular focus on these parts (which were good in my view): * The results around Figure A3 (see the "OBS" comment in the previous box response). * Also other relevant figures in terms of quantitative results, e.g. Fig. A2. It looks good to me. * Also had a look at various additional qualitative results, which looked good in my view. Relation To Broader Scientific Literature: I think the authors succeed in clearly placing this work relative to the broader scientific literature, and in particular how it improves upon / relates to the existing scientific literature on species range estimation (including that older non-DL-based approaches were included, not just modern DL-based variants). The FS-SINR approach most clearly builds upon the SINR approach by Cole et al. (2023), and also the LE-SINR extension (Hamilton et al. 2024), but improves upon LE-SINR in that FS-SINR can incorporate images (i.e. not just text, as LE-SINR) as metadata, and also that FS-SINR, different from LE-SINR, does not require retraining a classifier for each new species observation added. The authors by the way note that other approaches, here based on (Lange et al., 2023) and (Snell et al., 2017), also do not require inference time retraining -- so FS-SINR is not entirely novel in that regard -- but those methods on the other hand perform much worse, as shown in the results chapter. 
Other Strengths And Weaknesses: STRENGTHS * Great that an extensive Limitations as well as Impact Statement chapter were included. These are important parts that should not be neglected, and I think they provide insightful comments. * Building on the above, the problem of species range estimation is clearly a highly important one. So the topic of the paper itself is very relevant given the climate and ecological crises we are in. WEAKNESSES * It would have been good if the paper had looked into some form of uncertainty quantification. I would assume that is a very important thing to be aware of (i.e. a model's rough uncertainty of predictions). A simple starting point may be to train individual models and look at ensembles and individual-model deviations from ensemble predictions. * It was unclear to me what "FS-SINR" was an abbreviation of at first. I had to look up the SINR abbreviation by checking the Cole et al 2023 reference. Consider writing it out more clearly in the paper. Other Comments Or Suggestions: * It may be good to state whether "species" refers to animal species or plant species (or both). I believe the latter is the focus when talking about species range estimation, but would still be good to know. * I think perhaps there is a typo in eq. (1), where the capital S should be a lower-case s, in consistency with the previous notation? * Double word "the the" at Line 349. Questions For Authors: * Do you have an idea as to why, based e.g. on Fig. 3, results are better using FS-SINR-text than FS-SINR-image+text (yes, I saw the discussion on text being more informative than image in the main paper, but it seems to me the _combination_ text+image should be at least as good as text-only (?))? * Can you please provide some info on inference runtimes of FS-SINR vs other methods, to show that the claim on 'fraction of compute-time' holds? Code Of Conduct: Affirmed. Overall Recommendation: 4
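The feed-forward inference interface summarized in this review (a species embedding built once from the context locations, then reused for every query location via a dot product and a sigmoid) can be sketched as follows. The encoder and pooling here are toy stand-ins, not the authors' model: `f` replaces the learned location encoder and mean pooling replaces the Transformer.

```python
import numpy as np

D = 16  # embedding dimension (toy choice)

def f(loc):
    # Stand-in for the shared location encoder: sinusoidal position features.
    lon, lat = loc
    freqs = np.arange(1, D // 4 + 1)
    return np.concatenate([np.sin(freqs * lon), np.cos(freqs * lon),
                           np.sin(freqs * lat), np.cos(freqs * lat)])

def species_embedding(context_locs):
    # Stand-in for the Transformer over context tokens: simple mean pooling.
    return np.mean([f(l) for l in context_locs], axis=0)

def presence_prob(query_loc, z_species):
    # Query location embedded with the same f, dot product, then sigmoid.
    return float(1.0 / (1.0 + np.exp(-f(query_loc) @ z_species)))

context = [(0.10, 0.90), (0.20, 0.80), (0.15, 0.85)]  # few observed sightings
z = species_embedding(context)                         # computed once per species
probs = [presence_prob(q, z) for q in [(0.10, 0.90), (2.50, -1.00)]]
print(probs)
```

The efficiency property discussed in the review is visible in the structure: `z` is computed once per species, and scoring every evaluation cell afterwards is just a dot product plus a sigmoid, with no per-species retraining.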
Rebuttal 1: Rebuttal: We thank **nqXQ** for their careful reading of the paper and constructive suggestions. **[nqXQ-1] Quantification of computational efficiency.** Below we report inference timings for different models (with 1 location + text), reported as the time taken in seconds to generate all evaluation species weights for LE-SINR and FS-SINR on the CPU. We observe that FS-SINR can generate the species embedding vector in as little as 2% of the time taken for LE-SINR which has to perform test time optimization to train a linear classifier for each species. In addition to not requiring any training for held-out species, FS-SINR also has fewer overall parameters compared to the SINR baseline (8.2M vs. 11.9M, see L214 Col2). We will expand on these timing results in the final revised text. |Model|Time| |-|-| |LE-SINR|631.3 | |FS-SINR|14.3 | **[nqXQ-2] Results as more context locations are added to Fig 7.** The quantitative evaluation in Fig 3 shows that as more context locations are added we observe an increase in agreement with the expert-derived range maps. However, we agree with nqXQ that the results do start to plateau and additional context locations provide diminishing returns (at least when averaged across all species in the IUCN and S&T evaluation sets). As requested, we report the AP for each of the species from Fig 7 below. We observe an increase in performance when increasing the number of context locations for these species. |Context locs. |Common Kingfisher |European Robin |Black and White Warbler | |-|-|-|-| |1 |0.59 |0.70 |0.49 | |2 |0.63 |0.72 |0.52 | |5 |0.79 |0.74 |0.59 | |10 |0.82 |0.78 |0.68 | **[nqXQ-3] Uncertainty quantification.** This is a really interesting suggestion. Taking inspiration from Poggi et al. [a] we include a sparsification-based uncertainty evaluation where data is progressively removed based on the uncertainty estimates derived from an ensemble of three FS-SINR models (see Sec 4.1 in [a]). 
These results are on S&T using range text with different numbers of context locations. We report Sparsification Error AUC (SEAUC) and Area Under the Random Gain (AURG). AURG is positive and increases when more context locations are provided demonstrating that the ensemble is better at estimating its uncertainty than random chance and becomes more accurate as more context locations are provided. |Context locs. |MAP|SEAUC|AURG| |-|-|-|-| |0 |0.66 |0.68 |0.03 | |1 |0.68 |0.71 |0.03 | |5 |0.71 |0.75 |0.04 | |10 |0.73 |0.77 |0.04 | |20 |0.74 |0.79 |0.05 | We also visualize the predicted mean and variance for the ensemble model using the species from Fig A16. We observe high variance in locations where the individual models in the ensemble differ (e.g. South America in 2nd row). The result can be found here: https://postimg.cc/VdNTgjD0 [a] Poggi, On the uncertainty of self-supervised monocular depth estimation, CVPR 2020 **[nqXQ-4] Performance of text model -- with and without images.** As discussed on L294 Col1, text describing a range is inherently more informative than images. However, in Table 1 we observe that images are still a valuable supervision source when no other meta-data is available (i.e. row 10 vs 4). Adding images with text does not hurt performance on S&T (row 11 vs 9), but does result in a drop in the more challenging IUCN dataset. This same pattern is apparent in Fig 3. The potential explanation here is that images provide sufficiently weaker signal, and greater opportunity to overfit to incorrect spurious features, thus negatively impacting performance. However, it is worth noting that even in the case of IUCN, FS-SINR with image and text still outperforms the recent state-of-the-art LE-SINR (see Fig 3). **[nqXQ-5] FS-SINR abbreviation.** Thanks for the suggestion. We will clarify this at the start of the paper. **[nqXQ-6] Plant or animal species?** We will clarify this. As noted on L241 Col1, we use the same training data as Cole et al. 
2023 which contains observations for 47,375 species of plants *and* animals. **[nqXQ-7] Typos.** Thanks for flagging these two typos, we will fix them.
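The sparsification-based uncertainty check described in the rebuttal follows a generic recipe (Sec 4.1 of Poggi et al.): rank predictions by ensemble uncertainty, progressively drop the most uncertain ones, and compare the remaining error against the curves obtained from an oracle ordering (by true error) and a random ordering. A minimal sketch with synthetic errors (illustrative only, not the authors' code):

```python
import numpy as np

# Synthetic setup: heteroscedastic noise so that a variance-style
# uncertainty signal genuinely correlates with the true error.
rng = np.random.default_rng(0)
n = 1000
noise_scale = rng.uniform(0.1, 2.0, n)   # per-sample noise level
residuals = rng.normal(0.0, noise_scale) # ensemble-mean prediction residuals
errors = np.abs(residuals)
uncertainty = noise_scale                # proxy for ensemble variance

def sparsification_curve(err, ranking):
    # Drop the top-k most "uncertain" samples and track the mean error of
    # what remains, for every k (computed via suffix sums).
    sorted_err = err[np.argsort(-ranking)]
    return np.cumsum(sorted_err[::-1])[::-1] / np.arange(len(err), 0, -1)

curve = sparsification_curve(errors, uncertainty)   # rank by uncertainty
oracle = sparsification_curve(errors, errors)       # rank by true error
random_curve = np.full(n, errors.mean())            # random removal baseline
aurg = np.mean(random_curve - curve)                # >0: better than random
print(f"AURG: {aurg:.3f}")
```

A positive AURG, as in the rebuttal's table, means removing points the ensemble flags as uncertain lowers the remaining error faster than removing points at random; the oracle curve bounds how much better the ranking could get.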
Summary: This paper outlined a new approach for few-shot species range estimation. The goal is to outline geospatial regions where an animal is likely to live based on previous observations of occurrence. The authors' approach builds upon Spatially Implicit Neural Representation (SINR) models designed to estimate species range based on location alone. Their new FS-SINR approach leverages a Transformer-based head and a novel set of 'context' locations, giving the model examples of where a new out of distribution organism might be found, in addition to the desired query location at inference. They benchmark their approach against the original SINR model, an active learning approach, and another SINR-based model that encodes free-text descriptions of species range sourced from the internet. They tested performance in few-shot and zero-shot situations, measuring performance using the IUCN and S&T baselines articulated in the original SINR paper. They registered marginally improved performance against the existing models. While the headline MAP numbers are comparable, their model does not require retraining for every new species under observation at test time---FS-SINR achieves that performance improvement without requiring expensive new training cycles. ## Update after rebuttal Thanks to the authors for their responses to all the reviewer comments. My assessment remains unchanged and seems in line with that of the other reviewers. Claims And Evidence: The claims made by the authors seem sound. Their results and claims---namely that their model performs well in the few-shot case---are supported by their experiments and well-articulated in the paper. Methods And Evaluation Criteria: - The proposed methods seem appropriate as does the chosen evaluation data. The authors compared against the three closest model types and a few generic baselines. They also experimented with different multi modal inputs to assess the added value of including images, free-text metadata, etc. 
- The authors should specify the spatial resolution of their model. Did they aggregate the presence data in tiles along the lines of the original SINR paper? Or did they use a different strategy and/or spatial scale? Theoretical Claims: N/A Experimental Designs Or Analyses: - At line ~260 they describe holding out species in the union of the IUCN and S&T baselines 'unless otherwise stated.' I haven't been able to spot any notes of exceptions. Are there particular instances where an animal from that union was included in training? - I am a little confused about the zero-shot experiments described in section 4.3 (starting on page 6). Was the experiment prompting the models with the name of an unseen species and some combination of additional metadata? Or something else? Some of that information shows up in the caption of Table 1, but it is not effectively laid out. A little clarification in the beginning of the section would be helpful. - In section 4.4 the authors reference the 'ecologically relevant breakdown' of their results in appendix D. Space permitting, it would really strengthen the paper to include some of that material in the main body. Part of the value of their model appears to be robustness to the domain biases and distribution shifts they articulate in that appendix. At least a paragraph summarizing the major findings from the appendix seems appropriate. Supplementary Material: The authors included an extensive supplement with lots of example plots. I read the text, especially appendix D, but did not get to look at all the figures in detail. Relation To Broader Scientific Literature: This vein of work is of keen interest to ecologists and conservation biologists. The results in appendix D, as they pertain to underlying sample biases, are especially relevant given increasing recognition that sample bias impacts our understanding of ecological patterns and processes (e.g. 
Hughes et al., 2021; https://doi.org/10.1111/ecog.05926) Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: On page 7, the authors should provide a direct reference to appendix D containing the 'ecologically relevant breakdown' of results. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank **HT4t** for their helpful questions. By addressing these comments we believe that the description of the data processing and the evaluation protocol in the revised paper will be much clearer. **[HT4t-1] Spatial resolution of the model and data aggregation.** We use the same training and evaluation data as SINR (Cole et al. 2023), where the main difference is that by default the evaluation species are not included in our training data. As in SINR, FS-SINR uses continuous coordinates as input so there is no spatial scale explicitly defined for training, i.e. it is implicit. For the evaluation locations, we follow the same pre-processing steps as SINR, i.e. H3 cells at resolution five, which results in 2M total cells, each with an average area of ~250km^2. We will clarify this in the revised text. **[HT4t-2] Held out data comment on L260.** In Table 1 there are results for models that also train on the evaluation species (i.e. TST), but otherwise by default these evaluation species are held out of the training data. **[HT4t-3] Zero-shot experimental protocol.** We follow the same zero-shot evaluation protocol as LE-SINR (Hamilton et al. 2024). Specifically, for FS-SINR we either provide habitat (HT), range text (RT), or an image (I) for each evaluation species as input. Additionally, we also can evaluate our model when no meta-data is provided as we simply use the output of the class token as the species embedding vector (see Fig 2). In this case, the class token will be the same for all species. With the exception of the taxonomic rank text (TRT) variant which is inspired by the hierarchical species name encoding from Sastry et al., no specific species name text is provided to any of our models. We will update the relevant text to make this clearer. **[HT4t-4] Summarize ecological findings.** On L50 Col2 we note that a significant proportion of species only have a small number of location observations available. 
This indicates that, beyond common charismatic species, the task of species range estimation is a few-shot learning problem. We believe that the primary impact of our work relevant to ecology researchers will be better species range estimates in this few-shot regime compared to existing work (e.g. ~10% MAP compared to SINR in Fig 3 given only 10 observations). In Appendix D we also provide additional analysis that we believe will be of interest to ecologists, e.g. the spatial distribution of errors in Fig. A22, performance by continent in Fig. A23, performance versus range size in Fig. A25, and performance versus taxonomic group in Fig. A26. We will update the text to make these observations more clear and to point to relevant results in the Appendix. Thanks for the suggestion. **[HT4t-5] Hughes et al., 2021.** Thanks for flagging this paper. We will reference it in Sec 5 as it is an excellent reference related to the discussion of biases in natural world data.
Summary: The authors work within the problem setting of species-range estimation, where, given a latitude--longitude pair and a target species, the task is to determine the probability of being able to find that species at that location. Motivated by the large number of species for which only sparse sightings have been recorded, the authors propose a method for few-shot species-range estimation. Their formulation enables them to generalize to new species at inference time without retraining, and to condition on non-location data to improve coverage. ## update after rebuttal The authors' willingness to address outstanding issues is appreciated. Following their rebuttal commitment to add experiments, improve contextualization of results, and update the title, the recommendation is updated from Weak Accept (3) to Accept (4). The marginal quantitative improvement over prior work and the unresolved issues with image-conditioning performance prevent a higher recommendation (Strong Accept), but the paper is solid, well-written, and relevant to the ICML audience, and should be accepted. --- The plot image provided in the *Reply Rebuttal Comment* is also interesting. The authors should consider examining instances where image conditioning is most quantitatively harmful. Are the images consistently 1) of non-animal "evidence of an organism" or do they 2) closely resemble (either in human or model perceptual space) species observed during training that have distinct ranges? Claims And Evidence: Yes, the claims are supported by the presented evidence. Methods And Evaluation Criteria: Yes, the benchmark is appropriate for the chosen task. Theoretical Claims: N/A. The authors do not make claims that might necessitate a formal proof. Experimental Designs Or Analyses: Yes, the experimental design appears sound. Supplementary Material: Yes, I reviewed the supplementary material in its entirety, though sections of the Appendix were skimmed. 
Relation To Broader Scientific Literature: The work is not the first to evaluate few-shot generalization in species-range estimation, which makes the title somewhat confusing, but the authors generally contextualize their work fairly. Essential References Not Discussed: No, the works referenced appear fairly comprehensive. Other Strengths And Weaknesses: ### Strengths 1. The paper is well-written and was easy to follow. 2. The authors' extensive Appendix ablations are appreciated. 3. The task is interesting and well-motivated by data scarcity. ### Weaknesses 1. Performance improvements over the LE-SINR (Hamilton et al., 2024) text-conditioned baseline in both the few-shot and zero-shot evaluations appear somewhat marginal. The proposed approach does outperform it, and the modeling decisions enable simpler generalization to new species, but this may limit the contribution. The characterization of the few-shot performance as "impressive" on L360 should either be better contextualized or removed. 2. The addition of image conditioning appears on average quantitatively harmful, but the authors do not seem to acknowledge this. The result is unintuitive and is deserving of scrutiny. Questions include: 1) Does the issue persist with different visual encoders? 2) Are the same results observed when the model is trained without any text conditioning? 3) What happens if the model is instead trained on text embeddings of captions automatically extracted from the images (using another model)? Other Comments Or Suggestions: 1. The title does not seem to clearly reflect the contribution of the paper, as the authors do not claim to introduce the few-shot setting, which was also evaluated in Hamilton et al. (2024). The authors should consider updating it to highlight what makes the work unique. 2. The $t$ in $\mathcal{C}^t$ doesn't appear to be defined until Appendix 2.4. Consider mentioning it after first usage. 3. 
It appears an error to describe the model as "invariant to the number . . . of the context locations," which seems should necessarily affect the output. 4. The authors should additionally report model performance in the non-few-shot setting. 5. Can the model make use of multiple images simultaneously? The lack of numbering on $t_j$ and $a_j$ on L183 lends the impression no. 6. The authors should use `\citet` for citations that are referred to as sentence objects. 7. The "Prototype" baseline is referenced on L245, pages before it is defined. 8. The authors should additionally define "MAP" on L237 as the figure caption is likely to be read before L265. 9. The "We" referenced at the beginning of L436 is confusing. It should be clarified that it does not refer exclusively to the paper authors like it is used to a couple sentences later. 10. The bibliography is very well formatted. 11. The figures in the supplementary materials should be moved around so that they are located near where they are referenced. They're now often separated by multiple pages. L002 (and throughout): Few-shot -> Few-Shot L154: Give -> Given L209: (e.g., images) -> (i.e., images) L238: very low-setting setting -> ?? L258: currently best -> best currently L267: Baslines -> Baselines L340: (ours) -> (Ours) L582: "heads". -> "heads." Questions For Authors: The authors are encouraged to in particular engage with Weakness #2. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank **JzC8** for their helpful suggestions. **[JzC8-1] Performance compared to LE-SINR.** We outperform the recent LE-SINR in both the few-shot (Fig 3) and zero-shot (Table 1) settings even though we do not require any training on the evaluation species as in LE-SINR, thus making us much faster (see **[nqXQ-1]**). These differences can be as large as 4% MAP (e.g. row 8 vs 9 in Table 1). We will update the text on L360 Col2 to better characterize the performance improvement in a more measured way. **[JzC8-2] Value of adding images.** We discuss the image results on L294 Col1 where we say: “Perhaps unsurprisingly, in general we observe that image information is not as informative as text. This can be explained by the fact that a range text description provides much more context than an image of a previously unseen species”. Models trained on image data and no text (row 10) perform substantially better than models trained with no images or text (row 4) on both datasets (see Table 1). This shows that images are a valuable source of supervision when no other signal is available. In practice, detailed text describing the ranges of most species does not exist, but we may have images. However, even weaker text (e.g. habitat description in row 6) is better than using images, and combining images and text at best results in the same performance (row 11 vs 9 for S&T) or slightly degrades it (row 11 vs 9 for IUCN). We will update this discussion in the revised text. **[JzC8-3] Using a different visual encoder.** As suggested, we conducted additional experiments using a different visual encoder -- a DINOv2 backbone. Perhaps unsurprisingly, this model performs worse than the features we obtain from the iNaturalist-trained EVA-02 ViT (L205 Col2) in the paper. We will add these new results to the revised paper.
|Context locs.|IUCN EVA|IUCN DINO|SNT EVA|SNT DINO|
|-|-|-|-|-|
|0|0.19|0.13|0.38|0.28|
|1|0.45|0.40|0.49|0.44|
|5|0.62|0.56|0.66|0.63|
|10|0.65|0.60|0.70|0.67|
|20|0.66|0.61|0.71|0.68|

**[JzC8-4] Results without using any text information.** Below we also report results for a model trained using *only* images and no text. During training the model sees between 1 and 5 images per species. Again this performs worse than when using text information, but providing images at inference helps. As noted above, text supervision is simply much more informative than images. However, using image data is still helpful when no other data is available (e.g. row 10 vs 4 in Table 1), and for most species we do not have text describing their ranges, but we can more easily obtain images.

|Eval. images|IUCN|S&T|
|-|-|-|
|1|0.17|0.35|
|2|0.19|0.38|
|3|0.20|0.39|
|4|0.21|0.39|
|5|0.21|0.40|

**[JzC8-5] Results using automatically generated text captions.** This is an interesting suggestion. Current vision-language models with open weights (e.g. BLIP2) are not yet capable of generating detailed descriptions for fine-grained species images, and instead provide relatively coarse captions, e.g. “a horse standing in a field”. We generated text captions for our images using BLIP2 and evaluated an already trained FS-SINR on these captions. Here we used no prompt as well as two different prompts to produce captions. “No prompt” produced captions like *"a small bird is perched on a branch of a tree with flowers on it"*, “What species is this?” produced captions like *"rufous-bellied hummingbird"*, and “Where is this?” produced captions like *"the savannas of south africa"*. Results on IUCN below show that in all cases captions are worse than using the original images they are generated from.
|Context locs.|Image|No prompt|What species?|Where is this?|
|-|-|-|-|-|
|0|0.19|0.07|0.09|0.11|
|1|0.45|0.27|0.30|0.24|
|5|0.62|0.53|0.56|0.52|
|10|0.65|0.60|0.62|0.58|
|20|0.66|0.64|0.65|0.62|

**[JzC8-6] Can the model make use of multiple images simultaneously?** Yes, this is possible. Conditioning on one image at inference time, as in the paper, results in an MAP of 0.19 on IUCN (row 10 Table 1). If we instead use four images at inference, we obtain an MAP of 0.21. Thanks for the interesting suggestion. **[JzC8-7] Results in the non-few-shot setting.** As requested, we conducted additional experiments beyond the 50 context locations used in the paper. Interestingly, even though this FS-SINR model was never trained on more than 20 context locations, adding more locations at inference time does not degrade performance. In the table below, shown for IUCN, we observe that performance saturates at around 50 context locations and the evaluated models gain a very small boost going from 20 to 1000 locations.

|Context locs.|20|50|500|1000|
|-|-|-|-|-|
|SINR (No Text)|0.61|0.64|0.65|0.65|
|LE-SINR (Text)|0.64|0.66|0.67|0.67|
|FS-SINR (Text)|0.67|0.68|0.68|0.68|

**[JzC8-8] Additional suggestions, minor comments, reordering images, and typos.** Thanks for flagging these. We will address them in the revised text.

---

Rebuttal Comment 1.1: Comment: The authors' additional evaluation is appreciated, though the primary concern of image-conditioning performance remains unanswered. **[JzC8-1]** Yes, as stated in the original review, it is clear that the proposed method outperforms LE-SINR. That said, it does not seem appropriate to characterize differences "as large as 4% MAP" as "impressive." These differences are much too marginal to name the work "Few-Shot Species Range Estimation." As the authors acknowledge, they do not introduce the setting (there is even a Related-Work section bearing the same name).
As such, it seems like an implicit misrepresentation---a title is not exactly a claim, but should be representative of a work. If a paper were named "Object Detection," you'd expect it to either introduce the task or be the grand paper that solves the problem; this work is neither. Instead, the real contribution appears to be in that the method is feedforward and that it can be conditioned on a set of context locations during inference. As such, the paper seems much more aptly named "Feedforward Few-Shot Species Range Estimation" or "Contextualized Few-Shot Species Range Estimation." The authors are strongly suggested to consider updating the title to make it representative of their contribution. **[JzC8-2]** It is a plausible conclusion that "image information is not as informative as text." What does not make sense, however, is that adding images is on average quantitatively harmful. It is unclear why the authors appear unwilling to address this. Copied below are two results tables with added columns to highlight the effect of adding images to both text and no-text FS-SINR. Red indicates that adding images harmed performance (negative). 
Table A3 (IUCN):

|# Context|Image|NoText\Image|Image − NoText\Image|Text+Image|Text|Text+Image − Text|
|-|-|-|-|-|-|-|
|0|0.19|0.05|$\color{green}{+0.14}$|0.46|0.52|$\color{red}{-0.06}$|
|1|0.45|0.48|$\color{red}{-0.03}$|0.55|0.57|$\color{red}{-0.02}$|
|2|0.54|0.56|$\color{red}{-0.02}$|0.59|0.60|$\color{red}{-0.01}$|
|3|0.58|0.60|$\color{red}{-0.02}$|0.61|0.62|$\color{red}{-0.01}$|
|4|0.60|0.62|$\color{red}{-0.02}$|0.62|0.63|$\color{red}{-0.01}$|
|5|0.62|0.63|$\color{red}{-0.01}$|0.63|0.64|$\color{red}{-0.01}$|
|8|0.64|0.65|$\color{red}{-0.01}$|0.64|0.65|$\color{red}{-0.01}$|
|10|0.65|0.66|$\color{red}{-0.01}$|0.65|0.66|$\color{red}{-0.01}$|
|15|0.66|0.67|$\color{red}{-0.01}$|0.66|0.67|$\color{red}{-0.01}$|
|20|0.66|0.67|$\color{red}{-0.01}$|0.66|0.67|$\color{red}{-0.01}$|
|50|0.67|0.67|0.00|0.67|0.68|$\color{red}{-0.01}$|

Table A4 (S&T):

|# Context|Image|NoText\Image|Image − NoText\Image|Text+Image|Text|Text+Image − Text|
|-|-|-|-|-|-|-|
|0|0.38|0.18|$\color{green}{+0.20}$|0.64|0.64|0.00|
|1|0.49|0.50|$\color{red}{-0.01}$|0.66|0.66|0.00|
|2|0.57|0.58|$\color{red}{-0.01}$|0.67|0.67|0.00|
|3|0.61|0.61|0.00|0.68|0.68|0.00|
|4|0.64|0.64|0.00|0.69|0.69|0.00|
|5|0.66|0.65|$\color{green}{+0.01}$|0.70|0.70|0.00|
|8|0.69|0.68|$\color{green}{+0.01}$|0.71|0.71|0.00|
|10|0.70|0.69|$\color{green}{+0.01}$|0.71|0.72|$\color{red}{-0.01}$|
|15|0.71|0.70|$\color{green}{+0.01}$|0.72|0.72|0.00|
|20|0.71|0.71|0.00|0.72|0.72|0.00|
|50|0.72|0.71|$\color{green}{+0.01}$|0.73|0.73|0.00|

Image conditioning improves zero-shot performance but not consistently in any few-shot scenario, even in the absence of text. Why is this? All image-conditioning evaluations should be moved to the related-work section if not core to the work. **[JzC8-3]** The visual-encoder evaluation is appreciated. From this (where performance becomes even worse relative to the NoText\Image model), it appears that the model is most likely being overfit to the image embeddings, harming generalization.
The authors should consider evaluating after a single epoch (adjusting hyperparameters). **[JzC8-4]** The image-only evaluation is appreciated. The authors should not, however, make claims such as the following without making it explicit they are referring solely to the zero-shot setting: > However, using image data is still helpful when no other data is available (e.g. row 10 vs 4 in Table 1), and for most species we do not have text describing their ranges, but we can more easily obtain images. Is this setting ecologically meaningful? In what practical setting would an ecologist have an image but 1) no idea where the image was taken and 2) no ability to describe the image in text? It is reasonable to expect that the zero-shot task may become valuable in the future, but it seems now too toy to be the sole justification for including image conditioning in the main paper, when it is otherwise harmful. **[JzC8-5]** The experiment is interesting. Human-written captions should be considered for a future evaluation. **[JzC8-6]** This seems a duplicate of **[JzC8-4]**; the evaluation is appreciated. **[JzC8-7]** This evaluation, and the observation that performance saturates at 50 samples, are interesting and the authors should consider including them in the paper. --- Reply to Comment 1.1.1: Comment: We thank **JzC8** for carefully reading our rebuttal and engaging in the discussion. We provide additional responses below. **[JzC8-1]** We are very happy to update the title of the paper to better reflect our main contribution, e.g. “Feedforward Few-Shot Species Range Estimation”. We agree that other works, discussed in our related work section, have already performed evaluation on the problem of few-shot species range estimation. However, the two most relevant papers have either explored active learning using more difficult to obtain “presence-absence” data (Lange et al. NeurIPS 2023) or focused on demonstrating the impact of text more generally (Hamilton et al. 
NeurIPS 2024). Our paper compares to these works, in addition to previously untested baselines, in a like-for-like way. However, we agree that it is very important that readers do not get the wrong impression from our title. As suggested, we will also update the text to characterize the performance improvement from our approach using more measured language. Thanks for these suggestions. **[JzC8-2]** We commented on the limitations, potential reasons for overfitting, and worse results of adding image supervision in response **[nqXQ-4]**. Apologies, as we should have linked to this in our initial response to this question. We acknowledge that images are not very helpful overall and will ensure that this point is clear in the text, i.e. we will expand the discussion on L294 Col1 and L373 Col1 where we currently discuss that images are not as informative as text. To better understand the value of image information, we performed an additional analysis where we compared per-species performance for a model with text and either with or without images. Results can be found here: https://postimg.cc/NyqzzJ8m Values greater than zero indicate that the model with images performs better for a given species, and below zero indicates it performs worse. We observe that while both models have the same overall average, the individual per-species performance differs. This can be attributed to the fact that images help for some species, but actually hurt for others. As noted by **JzC8**, in general, images do not help when we evaluate different model variants and datasets. We do not think the quantitative image results detract from our main contribution. There are some interesting results that we believe may be of interest to researchers in this space. For example, Fig 5 illustrates plausible guesses for some images, including non-species images. The “Blue Duck” in Fig A21, illustrates one of the issues with images. 
Our images are sourced from iNaturalist, the current best large-scale dataset for species classification, where the criteria for inclusion is that an image must contain “evidence of an organism”. The example in Fig A21 only contains footprints and a human hand. While our model manages to localise predictions to some coastal regions, this information may actually harm performance when more informative location data is also provided, as the model may struggle to ignore the image token entirely. **[JzC8-3]** We agree that overfitting is another possible cause of the slightly lower performance with images. We noted this in our original response **[nqXQ-4]** where we said: “The potential explanation here is that images provide sufficiently weaker signal, and greater opportunity to overfit to incorrect spurious features, thus negatively impacting performance“. Training the image encoder for a single epoch is an interesting suggestion, we will explore this further for the final version of the paper. **[JzC8-4]** As suggested, we will clarify that this statement only holds in the *zero-shot* setting. More generally, we will also update the text to put the image results into context. Regarding the real-world validity of the zero-shot setting, there may be rare cases where a specimen exists but no location information is available (e.g. old museum specimens). However, we agree that this would not be common. The main purpose of these results is to demonstrate that other forms of meta-data, beyond text, are applicable with our model. There is growing interest in learning joint embedding spaces for different modalities in ecological applications (e.g., Sastry et al. 2025), and future extensions of our work could make use of other forms of data which may be more ecologically relevant such as confirmed absences of species, environmental conditions, satellite imagery, or genetic information. **[JzC8-5]** We agree, these results are interesting. 
They point to some of the limitations of current captioning models. **[JzC8-7]** Thank you, we will add this to the paper.
Summary: This paper introduces FS-SINR, a novel Transformer-based approach for few-shot species range estimation that can predict ranges for previously unseen species without requiring retraining. The model architecture combines a location encoder for processing geographic coordinates, a frozen GritLM text encoder for species descriptions, a frozen EVA-02 ViT image encoder for species images, a Transformer encoder that processes combined input tokens, and a species decoder that generates final range predictions. The model uses learned token type embeddings to handle different input modalities and is trained using a modified version of the SINR loss function. On the IUCN and S&T benchmark datasets, FS-SINR achieves state-of-the-art performance, particularly in low-data scenarios (1-10 observations), and can make effective predictions even with a single context location. The model can also generate zero-shot predictions using only text or image inputs, with performance improving when multiple types of context information are combined. Unlike previous approaches, it requires no retraining for new species. The authors note several limitations, including that predictions are deterministic rather than probabilistic, performance depends on the quality and availability of text/image metadata, the approach is subject to biases in training data distribution, and currently only handles presence data rather than confirmed absences. The paper validates these claims through extensive experiments and ablation studies comparing different model components and training strategies.

## update after rebuttal

I thank the authors for their responses. As there is consensus on accepting the paper, I will keep my accept recommendation. Good work! Claims And Evidence: Most claims are well supported. There are certain claims that lack substantial support.
Examples:

- "During training, we supply FS-SINR with 20 context locations per training example, though we find that the model performance is very robust to the number of context locations provided during training."
- There is no quantification of the computational efficiency claimed throughout the paper.
- The paper does not explain how the insights from few-shot species prediction will have downstream impact on biodiversity analysis.
- The initial part of the paper makes claims about support for images, but the experiments reveal limited success. It is unclear if images can be used in practical settings.
- It is unclear how bias in training data from North America and Europe affects predictions globally.

Methods And Evaluation Criteria: Overall, the method and evaluation make sense. I did not understand why the second term in the loss function is needed. It is possible that some species co-exist with each other. Wouldn't that loss term discourage learning that behavior? It is unclear how the model adapts to new species which are out of distribution relative to the training data. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: The overall experiment design is sound. I found the zero-shot results difficult to follow; they could be presented better. The figures also are unreadable in black and white and could be improved. It is unclear why SINR outperforms the proposed method when the species data is in the training set. Why doesn't the proposed method's performance scale with the number of observations? Given the loss function used, I would like to see how well the model captures species co-occurrence. It is unclear why only precision is used as a metric, and recall is ignored. Supplementary Material: Did not review Relation To Broader Scientific Literature: The paper's contributions build upon and advance several lines of prior work in species range estimation and machine learning.
In terms of few-shot learning for range estimation, it improves upon traditional methods like SINR (Cole et al., 2023) which require model retraining for new species, and LE-SINR (Hamilton et al., 2024) which introduced text-based range estimation but still needs retraining. It's the first method to enable feed-forward range prediction for new species without retraining, showing better performance in low-data scenarios (1-10 samples) than Active SINR (Lange et al., 2023). While previous works have explored using different data types separately - SINR with only locations, LE-SINR with text, Dollinger et al. (2024) with satellite imagery, and Teng et al. (2023) with species images - this paper provides a unified framework that can flexibly combine all these modalities. In terms of architectural innovation, it introduces transformers to species range estimation, building on recent work using attention mechanisms for geographic tasks (Russwurm et al., 2024), whereas most previous methods relied on MLPs or CNNs. Essential References Not Discussed: It is unclear how the proposed method compares against traditional Bayesian approaches, such as Golding et al. (2016) [1] Golding, N. and Purse, B.V., 2016. Fast and flexible Bayesian species distribution modelling using Gaussian processes. Methods in Ecology and Evolution, 7(5), pp.598-608. Other Strengths And Weaknesses: Covered everything above Other Comments Or Suggestions: The symbols used in Section 3.1 can be simplified Questions For Authors: Included my questions above, none critical to the decision Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank **QZrb** for their constructive comments. **[QZrb-1] Robustness to number of training locations.** An ablation of the number of training context locations is provided in Fig. A2. We will update L250 Col1 to more clearly point to this result. As can be seen, fewer context locations (i.e. 5) performs worse, but the difference between 20 and 50 is small. **[QZrb-2] Quantifying efficiency.** Please see response **[nqXQ-1]**. **[QZrb-3] Downstream biodiversity analysis.** Please see response **[HT4t-4]**. **[QZrb-4] Value of images.** Please see response **[JzC8-2]**. **[QZrb-5] Data bias.** As noted on L422 Col1, there are spatial biases in the training data we use, which has more data from North America (NA) and Europe (EU) (Fig. A24). The IUCN evaluation data is not as biased, and from the results in Fig. A23 we can see that we obtain better performance for species in NA and EU, but performance in the less sampled South America is still strong. Addressing training biases is an important question which we leave for future work. **[QZrb-6] Co-occurrence and second term in eqn 2.** This loss is borrowed from prior work (Cole et al. 2023). Without the second term in eqn. 2, the loss is trivially optimized by predicting that every species is present in every location (see Sec 3.2 in Cole et al. 2023). It is true that the second term penalizes the model for predicting that multiple species are present at the same location. However, the large value we use for $\lambda$ (see L596) means that this penalty is much weaker than the one for failing to predict that an observed species is present. Thus, if two different species occur near the same location, the model is encouraged over different batches to predict that they are both present. **[QZrb-7] Performance on out of distribution species.** To clarify, by default no data for any of the species in the evaluation datasets is observed during training (L261 Col1). 
By training on tens of thousands of species, FS-SINR can generalize to previously unseen species at test time. The results in Fig. A23 provide an indication of performance on less common regions. Evaluating truly out of distribution species, i.e. ones that bear no relationship to the training data, is an interesting question but would likely require new evaluation datasets. We opted to use the standard IUCN/S&T datasets so that we could compare fairly to existing work. **[QZrb-8] Performance when training on evaluation species.** In Table 1 we compare to SINR when data from the evaluation species are observed during training (i.e. TST in row 1 vs 3). It is not surprising that SINR performs better as it learns a unique embedding vector for each species (even evaluation ones), whereas FS-SINR must learn the mapping from a small number of locations to a species’ range. These results could be considered as positive as they demonstrate that FS-SINR is not overfitting by simply memorizing the text for each species. Text supervision is very sparse compared to the informative location observations used by SINR to learn its per-species encoding. **[QZrb-9] Performance wrt number of observations.** In Fig 3 we observe that for nearly all methods tested, performance improves as more observations are provided. The largest improvements are observed when going from few (e.g. 10) to many observations but begins to plateau as the number approaches 50. This is consistent with results from existing work where we see more data provides diminishing returns (e.g. Cole et al.). **[QZrb-10] Evaluation metric - recall?** We report performance using both mean average precision (MAP) (Fig 3) *and* a distance weighted variant of it (Fig A27). MAP is the standard metric from existing work (e.g. Cole et al. and Hamilton et al.). As a reminder, average precision is the area under the precision-recall curve, which is based on precision *and* recall across a range of thresholds. 
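As a concrete companion to the metric description above, here is a minimal sketch of (mean) average precision as it is typically computed per species and then averaged; the toy labels and scores below are invented for illustration and this is not the authors' evaluation code:

```python
def average_precision(labels, scores):
    """AP for one species: precision evaluated at the rank of each
    positive when predictions are sorted by decreasing score, averaged
    over the positives (the area under the precision-recall curve)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            hits += 1
            ap += hits / rank
    return ap / sum(labels)


def mean_average_precision(per_species):
    # per_species: list of (labels, scores) pairs, one pair per species.
    return sum(average_precision(l, s) for l, s in per_species) / len(per_species)
```

A perfect ranking yields an AP of 1.0 for a species, and a positive pushed down the ranking lowers the precision at its rank, so the metric reflects recall as well as precision across all thresholds.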
**[QZrb-11] Comparison to other approaches.** The Gaussian Process (GP) approach in Golding et al. is designed for presence-absence data, but we can adapt it to our presence-only setting using pseudo-negatives. We train a GP classifier using an RBF kernel and a logit link function, as well as a Random Forest (RF) classifier. Both are implemented in sklearn, using the raw coordinates as input, with a separate classifier trained per species. We find that they do not perform as well as FS-SINR, which is trained jointly on multiple species. This is especially noticeable on the more challenging IUCN dataset. The results can be found here: https://postimg.cc/0zxWzDFB **[QZrb-12] Misc.** We will improve the readability of the figures and simplify the notation.
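The role of the large $\lambda$ discussed in **[QZrb-6]** can be illustrated with a simplified, single-location sketch of an "assume negative" objective in the style of Cole et al. (2023); the function name, the default $\lambda$ value, and the exact form here are illustrative assumptions rather than the paper's implementation:

```python
import math


def assume_negative_loss(y_hat, s_obs, lam=2048.0):
    """Loss at one location where species s_obs was observed.
    y_hat: predicted presence probabilities for every species there.
    The observed-species term is up-weighted by lam (value assumed
    here), so failing to predict an observed species present costs far
    more than the weak penalty on predicting other species present."""
    loss = -lam * math.log(y_hat[s_obs])       # up-weighted positive term
    for s, p in enumerate(y_hat):
        if s != s_obs:
            loss -= math.log(1.0 - p)          # weak "assume negative" term
    return loss
```

With a large $\lambda$, suppressing a co-occurring species saves only a tiny amount of loss compared to what is lost by predicting an observed species absent, which is consistent with the rebuttal's point that co-occurrence can still be learned across batches.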
PieClam: A Universal Graph Autoencoder Based on Overlapping Inclusive and Exclusive Communities
Accept (poster)
Summary: The document presents PieClam (Prior Inclusive Exclusive Cluster Affiliation Model), a novel graph autoencoder that enhances the existing BigClam framework by utilizing overlapping inclusive and exclusive community structures for node representation learning. PieClam introduces a new log cut distance for measuring graph similarity and demonstrates universality in approximating any graph structure with a fixed number of communities, surpassing prior models in link prediction and anomaly detection tasks. Experiments validate the model's effectiveness across benchmark datasets, showing that it achieves competitive performance while offering a generative capability through prior distribution learning in the community affiliation space. Claims And Evidence: The claims presented in the manuscript are well-supported by the experiments conducted and the theoretical proofs provided. The authors assert that PieClam can accurately reconstruct graphs and achieve competitive link prediction and anomaly detection outcomes. Evidence for these claims includes:

- Comprehensive experimental results on multiple datasets, including Squirrel and Texas, which confirm the model's performance against established baselines such as AA, VGAE, GAT, LINKX, and Disenlink.
- The introduction of log cut distance, a new metric aimed at quantifying similarity between graphs, which is substantiated by both empirical and theoretical discussions in the paper.
- Theoretical proofs (Theorems 3.7 and 3.8) confirming the universality of the model, indicating its capability to approximate various graph structures efficiently.

While most claims are convincingly backed by data, further clarity on the implications of using the log cut distance in sparse graphs would enhance the overall argument. Methods And Evaluation Criteria: The methods employed in the manuscript, including the structure of PieClam and the evaluation criteria for performance assessment, are appropriate for the problem domain.
The authors effectively leveraged benchmark datasets for rigorous testing and comparison, enhancing the robustness of their findings. Specific elements to note include:

- The incorporation of both inclusive and exclusive communities broadens the model's applicability to various graph types, making it adaptable for real-world scenarios.
- The use of the Lorentz inner product for decoding, allowing a flexible representation of nodes in the affiliation space, is a strong methodological choice that lends depth to the investigation.
- The section on experimental design and analyses confirms that the evaluation strategies align well with the objectives of the study, providing a solid foundation for claiming model effectiveness.

Theoretical Claims: The theoretical claims, particularly regarding the model's universality and the correctness of the proofs, are adequately presented. Notable points include:

- The completeness of the proofs for Theorems 3.7 and 3.8 highlights the rigorous mathematical grounding of the claims surrounding PieClam's ability to approximate any graph utilizing a fixed parameter budget.
- However, further discussion surrounding any assumptions made during the proof processes would strengthen the rigor of this section and provide additional assurance of the theoretical underpinnings.
- Engagement with these theoretical proofs can help peer reviewers assess the significance and flexibility of the proposed method against existing literature.

Experimental Designs Or Analyses: The experimental framework and analyses appear sound, demonstrating the validity of the proposed model. Key considerations include:

- The selection of datasets (such as Squirrel and Texas) is relevant and conducive to assessing graph autoencoding performance across various scenarios.
- Detailed accounts of the experimental setup, including the usage of Nvidia GPUs and the iterative optimization process, enhance reproducibility and clarity regarding the methodological rigor.
A deeper exploration of potential biases in the datasets or evaluation methods may warrant further scrutiny to ensure the validity of the conclusions drawn. Supplementary Material: The manuscript includes supplementary materials that enhance understanding, particularly in detailing architectural designs and additional experimental details. Key areas of review in the supplementary content include:

- Model architecture specifics that elucidate how inclusivity and exclusivity are processed within the learning framework.
- Appendices providing confidence intervals and additional experimental results that support the conclusions drawn in the main text.

Engagement with these materials indicates the authors have provided a comprehensive context for the results and interpretations put forth. Relation To Broader Scientific Literature: The contributions made by PieClam are significant within the landscape of graph representation learning. The use of overlapping community structures represents a notable advancement from predecessor models like BigClam and continues to resonate with core themes in community detection literature. The model's introduction aligns with recent efforts to refine graph generative models and embed node representation learning in a probabilistic framework. Notably, the discussion of related works supports the authors' claim while presenting a complete narrative on how PieClam fits within the broader scientific discourse. Essential References Not Discussed: The manuscript could benefit from including related works that have addressed the integration of features in graph models or explored universal properties of graph representations. For instance:

- Recent advancements in normalizing flows for better feature representation could establish a clearer context for PieClam's approach.
- Works exploring the strengths and weaknesses of using similar graph distance measures in sparse networks should also be cited.
Inclusion of these references would provide a more rounded understanding of how PieClam positions itself within contemporary research on graph models. Other Strengths And Weaknesses: Strengths of the paper include: The originality of combining inclusive and exclusive communities to enhance node representation. The clear presentation and methodological framework that makes the findings accessible and reproducible. Weaknesses may involve: The potential narrow focus on node affiliations without sufficiently addressing node features in edge conditions, which may limit applicability across diverse graph types. A more elaborative discussion on model limitations would be beneficial. Other Comments Or Suggestions: A careful proofread of the text could address minor typographical issues observed in sections discussing model evaluations. The authors could consider including additional visualizations to further illustrate the model's capabilities and performance comparisons. Questions For Authors: How do you envision incorporating node features into the edge probability calculations in future iterations of PieClam? Could you elaborate on the potential applications of the log cut distance metric for sparse graphs, and your plans for addressing current limitations? What specific challenges did you encounter during the implementation of the prior distribution learning, and how did you overcome them? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and in-depth evaluation of our paper.

>**Claims And Evidence:**
>>...further clarity on the implications of using the log cut distance in sparse graphs would enhance the overall argument.

*Response.* Thank you for this suggestion. We already briefly discussed this shortcoming in the Conclusion (Page 8): "Another limitation of our analysis is that the log cut distance is mainly appropriate for dense graphs. Future work will extend this metric to sparse graphs..." but we agree that this discussion may be too short. **If accepted, we will extend the discussion about sparsity, stating that the analysis is mostly appropriate for dense graphs, but the method is empirically appropriate also for sparse graphs.** Regarding applicability of the method to real-world sparse graphs, the method was tested on sparse graphs in anomaly detection and link prediction benchmarks, where we obtained competitive performance, demonstrating that the method performs well empirically also on sparse graphs.

>**Theoretical Claims:**
>>However, further discussion surrounding any assumptions made during the proof processes would strengthen the rigor of this section...

*Response.* As a principle in mathematical writing, all assumptions are clearly and explicitly stated in the body of the theorem. No additional assumptions are allowed to be taken during the proof, as this would undermine the correctness of the theorem. We believe we have complied with this basic principle. However, if the reviewer found a hidden assumption in the body of the proof, please let us know where specifically this is made so we can fix it.

>**Essential References Not Discussed:**
>>The manuscript could benefit from including related works that ... explored universal properties of graph representations. ... Works exploring the strengths and weaknesses of using similar graph distance measures in sparse networks should also be cited.
*Response.* We will add paragraphs to the Extended Related Work appendix about this. We will cite papers about universality of GNNs as functions from graphs to vectors (graph classification or regression). For example:

- S. Chen, S. Lim, F. Memoli, Z. Wan, and Y. Wang. Weisfeiler-Lehman meets Gromov-Wasserstein. ICML, 2022.
- S. Chen, S. Lim, F. Memoli, Z. Wan, and Y. Wang. The Weisfeiler-Lehman distance: Reinterpretation and connection with GNNs. Proceedings of the 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML), 2023.
- J. Böker, R. Levie, N. Huang, S. Villar, and C. Morris. Fine-grained expressivity of graph neural networks. NeurIPS, 2023.
- L. Rauchwerger, S. Jegelka, and R. Levie. Generalization, Expressivity, and Universality of Graph Neural Networks on Attributed Graphs. 2025.

We will also add a short discussion about sparse graph similarity measures that can potentially be used to define a version of the log cut distance appropriate for sparse graphs. For example:

- **Stretched graphons:** C. Borgs, J. T. Chayes, H. Cohn, and N. Holden. Sparse exchangeable graphs and their limits via graphon processes. Journal of Machine Learning Research, 2018.
- **$L^p$ graphons:** C. Borgs, J. Chayes, H. Cohn, and Y. Zhao. A theory of sparse graph convergence I: Limits, sparse random graph models, and power law distributions. Transactions of the American Mathematical Society, 2019.
- **Graphings:** L. Lovász. Large networks and graph limits, volume 60. American Mathematical Society, 2012.
- **Graphops:** A. Backhausz and B. Szegedy. Action convergence of operators and graphs. Canadian Journal of Mathematics, 74(1):72–121, 2022.

>**Weaknesses:**
>>The potential narrow focus on node affiliations without sufficiently addressing node features in edge conditions, which may limit applicability across diverse graph types...
*Response.* First, we already addressed this issue in the Conclusion: *"One limitation of PieClam is that, for attributed graphs, it only models the node features through the prior in the community affiliation space, but not via the conditional probabilities of the edges (given the community affiliations). Future work will deal with extending PieClam to also include the node (or edge) features in the edge conditional probabilities."* Still, we often observe in experiments that not using the features can still lead to competitive performance with methods that do use the features. We wrote in section D3 of the appendix on page 25: *"Note that, since the prior is not used in classification, there is no added benefit from utilizing node features when training the prior, as proposed in 2.4. Therefore, the link prediction algorithm relies solely on the graph’s topological structure and the prior functions only as regularization."* **Nevertheless, we will extend this discussion in the camera-ready version of the paper if accepted.**
Summary: The paper introduces PieClam, a universal graph autoencoder that extends traditional community affiliation models by incorporating both inclusive and exclusive communities. The method uses a novel decoder based on the Lorentz inner product to overcome the triangle inequality limitations of previous models, thereby accurately representing graphs with structures like bipartite components. The paper also proposes a generative extension through a learned prior. The authors perform experiments on tasks such as anomaly detection and link prediction and show competitive performance against state-of-the-art baselines.

Claims And Evidence: The authors claim that PieClam is a universal autoencoder that can approximate any graph (with a fixed parameter budget per node) and that it outperforms or competes with existing methods on several benchmarks. The paper supports these claims with both theoretical results (specifically, Theorems 3.7 and 3.8) and extensive experimental evaluations on synthetic datasets and real-world tasks. While the theoretical proofs appear sound under the presented assumptions, the evidence on sparse graphs is a bit less comprehensive. Further empirical validation in more diverse settings would improve the support for the universality claims.

Methods And Evaluation Criteria: I believe that the proposed method is well motivated and builds on prior models by extending the affiliation space to include exclusive communities. The use of the Lorentz inner product for decoding is a novel move that helps address challenges in previous approaches. The evaluation (synthetic experiments, anomaly detection, and link prediction) uses appropriate metrics and baselines. A minor comment: additional discussion on scalability (especially for large or sparse graphs) and hyperparameter sensitivity would be valuable.
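As an aside for readers, the Lorentz-product decoder discussed above can be sketched roughly as follows. This is an illustrative sketch only; the `1 - exp(-·)` link function and the split into inclusive/exclusive coordinates are assumptions carried over from the BigClam family, not the authors' exact implementation.

```python
import numpy as np

def lorentz_inner(f_u, f_v, num_inclusive):
    """Indefinite inner product: inclusive coordinates raise the affinity
    between two nodes, exclusive coordinates lower it (hypothetical split)."""
    inc = np.dot(f_u[:num_inclusive], f_v[:num_inclusive])
    exc = np.dot(f_u[num_inclusive:], f_v[num_inclusive:])
    return inc - exc

def edge_probability(f_u, f_v, num_inclusive, eps=1e-6):
    """BigClam-style decoder with the plain inner product swapped for the
    Lorentz one; clipping below keeps the exponent non-negative."""
    s = max(lorentz_inner(f_u, f_v, num_inclusive), eps)
    return 1.0 - np.exp(-s)
```

With only inclusive coordinates this reduces to the usual BigClam decoder; two nodes sharing a strong exclusive coordinate are pushed toward disconnection, which is how bipartite-like structure can be expressed without a triangle inequality constraint.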
Theoretical Claims: The paper provides rigorous theoretical proofs of universality using the log cut distance framework. I checked the proofs for the main theoretical results (Theorems 3.7 and 3.8), and while they are largely convincing, some assumptions (e.g., regarding graph density) might limit their direct applicability to real-world sparse graphs. Clarifications on these limitations would improve the overall presentation.

Experimental Designs Or Analyses: The experimental design is thorough, including both synthetic and real-world datasets. The comparisons with several baselines in anomaly detection and link prediction are informative. Nonetheless, it would be useful to see more detailed analyses of the model’s performance with respect to computational efficiency and robustness to hyperparameter choices, especially since the training involves simultaneous optimization of node embeddings and a prior.

Supplementary Material: The supplementary material is detailed, providing full proofs, additional experimental details, and extended discussions on related work. This material helps clarify many technical aspects.

Relation To Broader Scientific Literature: The work is very relevant to the scientific literature.

Essential References Not Discussed: A couple of references that are relevant:
1. Chanpuriya et al., 2020. Node Embeddings and Exact Low-Rank Representations of Complex Networks
2. Jia et al., 2019. CommunityGAN: Community Detection with Generative Adversarial Nets

Other Strengths And Weaknesses:
Strengths:
– Novel use of community affiliation models to include exclusive communities
– Theoretical guarantees of universality add significant depth to the contribution.
– Competitive empirical performance on multiple benchmarks
Weaknesses:
– The analysis appears more suited to dense graphs; applicability to very large, sparse graphs is less clear.
– The training procedure (involving simultaneous optimization of node embeddings and the generative prior) might be computationally demanding.
– Additional experiments on hyperparameter sensitivity and scalability would further strengthen the paper.

Other Comments Or Suggestions: Check my previous section.

Questions For Authors: Check my previous section.

Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and in-depth evaluation of our paper.

>**Claims And Evidence:**
>>...the evidence on sparse graphs is a bit less comprehensive. Further empirical validation in more diverse settings would improve the support for universality claims.

*Response.* The experiments in the current version of the paper (anomaly detection and link prediction) are already on sparse graphs. For example, the Texas graph has 183 nodes and 279 edges. We ran additional experiments on link prediction on the OGB-ddi dataset. We will add more experiments in the final version.

| Model | Hits@20 test | AUC |
|---|---|---|
| **IeClam** | **90.72 ± 2.35** | 99.89 ± 0.00 |
| NCN | 76.52 ± 10.47 | 99.97 ± 0.00 |
| NCNC | 70.23 ± 12.11 | 99.97 ± 0.01 |
| GraphSAGE | 49.84 ± 15.56 | 99.96 ± 0.00 |
| GCN | 49.90 ± 7.23 | 99.86 ± 0.03 |
| SEAL | 25.25 ± 3.90 | 97.97 ± 0.19 |
| PEG | 30.28 ± 4.92 | 99.45 ± 0.04 |
| BUDDY | 29.60 ± 4.75 | 99.81 ± 0.02 |
| Node2Vec | 34.69 ± 2.90 | 99.78 ± 0.04 |
| MF | 23.50 ± 5.35 | 99.46 ± 0.10 |
| Neo-GNN | 20.95 ± 6.03 | 98.06 ± 2.00 |
| GAT | 31.88 ± 8.83 | 99.63 ± 0.21 |
| CN | 17.73 | 95.20 |
| AA | 18.61 | 95.43 |
| RA | 6.23 | 96.51 |
| Shortest Path | 0 | 59.07 |

>**Methods And Evaluation Criteria:**
>>...additional discussion on scalability (especially for large or sparse graphs) and hyperparameter sensitivity would be valuable.

*Response.* Regarding scalability, we will add to the camera-ready version of the paper a discussion that shows that both the memory and time complexity of all Clam models are linear in the number of edges.
We already wrote about the computational complexity of BigClam on Page 3, Column 1, Line 144: *"In order to implement the above iteration with $O(|E|)$ operations at each step, instead of $O(N^2)$, the loss can be rearranged as..."*

Moreover, we wrote about BigClam, Page 3, Column 1, Line 158: *"We observe that the optimization process is a message passing scheme"*

We agree that we have not sufficiently clarified that all other Clam models are trained with a message passing scheme, and share their complexity with BigClam: linear in the number of edges. We only wrote on Page 4, Column 1, Line 208: *"This loss can be efficiently implemented on sparse graph by the formulation..."*

**We will clarify in the camera-ready version that all Clam models have linear complexity with respect to the number of edges.**

Regarding hyperparameter sensitivity, in Appendix D6 we provide an ablation study where we show the impact of certain parameters on the performance of the model. We can extend this appendix to study sensitivity to more hyperparameters.

>**Theoretical Claims:**
>>...some assumptions (e.g., regarding graph density) might limit their direct applicability to real-world sparse graphs.

*Response.* We already briefly discussed this in the Conclusion (Page 8): "Another limitation of our analysis is that the log cut distance is mainly appropriate for dense graphs. Future work will extend this metric to sparse graphs..." but we agree that this discussion may be too short. **If accepted, we can extend the discussion about sparsity, mainly stating that the analysis is mostly appropriate for dense graphs (but the method is appropriate also for sparse graphs).** Regarding applicability of the method to real-world graphs, the method was tested on sparse graphs in anomaly detection and link prediction benchmarks, where we obtained competitive performance, demonstrating that the method performs well empirically also on sparse graphs.
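For concreteness, the $O(|E|)$ rearrangement of the BigClam-style loss referenced in this exchange can be sketched as follows. This is a sketch under the standard BigClam log-likelihood, not the authors' code: the sum over non-neighbors is obtained from the global sum of embeddings minus the node itself and its neighbors, so each gradient pass touches only edges.

```python
import numpy as np

def bigclam_step_gradients(F, adj_list, eps=1e-9):
    """Gradient of the BigClam log-likelihood w.r.t. each row of F.
    The non-neighbor term sum_{v not in N(u)} F_v is rewritten as
    total - F[u] - sum_{v in N(u)} F_v, so the whole pass costs O(|E|)
    instead of O(N^2): effectively a message-passing scheme."""
    total = F.sum(axis=0)                      # computed once, O(N)
    grads = np.zeros_like(F)
    for u, nbrs in enumerate(adj_list):
        non_nbr = total - F[u] - F[nbrs].sum(axis=0)
        g = -non_nbr                           # penalty from non-edges
        for v in nbrs:                         # attraction from edges
            s = max(F[u] @ F[v], eps)
            g += F[v] * np.exp(-s) / (1.0 - np.exp(-s) + eps)
        grads[u] = g
    return grads
```

Since the per-node prior in PieClam only adds an $O(N)$ regularization term on top of this, it would not change the asymptotic cost, consistent with the response above.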
>**Experimental Designs Or Analyses:**
>>Nonetheless, it would be useful to see more detailed analyses on the model’s performance with respect to computational efficiency... especially since the training involves simultaneous optimization of node embeddings and a prior.

*Response.* Note that the prior is a node-wise computation, so it is dominated by the rest of the log likelihood terms in the optimization. We wrote on Page 4, Column 2, Line 216: *"Observe that the PieClam loss is similar to the IeClam loss, only with the addition of the prior acting as a per node regularization term"* We will extend the text in the camera-ready version and write that, as a result, adding the prior does not asymptotically increase the complexity of the model.

>**Essential References Not Discussed:**

We will properly refer to the suggested papers in the camera-ready version of the paper if accepted.

>**Weaknesses:**

*Response.* See our responses above.
Summary: The submitted manuscript introduces PieClam, a graph autoencoder that learns node embeddings by maximizing the log-likelihood of the decoded graph. It extends the well-known BigClam method in two ways. First, PieClam incorporates a learned prior on the node distribution in the embedding space, shifting from a simple pairwise interaction approach to a full-fledged graph generative model. Second, while it retains BigClam’s focus on sets of nodes with high connectivity (inclusive communities), it also introduces the notion of exclusive communities, i.e., groups of nodes that exhibit strong disconnection. This dual capacity to identify both inclusive and exclusive communities is enabled by a new graph similarity measure called the log cut distance, through which the authors demonstrate that PieClam is a universal autoencoder capable of approximating any graph distribution within a uniform bound. Empirical results are provided for tasks such as graph anomaly detection and link prediction.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: Yes. The paper’s focus on discovering both inclusive and exclusive communities naturally lends itself to tasks that test how well a model captures nuanced graph structure. Evaluating on anomaly detection and link prediction is appropriate because these tasks directly assess whether the learned embeddings and generative assumptions effectively capture both the presence and absence of edges.

Theoretical Claims: The theoretical claims appear to be correct, although I have not personally verified the proofs in detail.

Experimental Designs Or Analyses: I carefully reviewed the experimental setup and the corresponding analyses, which appear reasonable.
Supplementary Material: I reviewed sections A-D of the supplementary material.

Relation To Broader Scientific Literature: PieClam extends ideas from BigClam’s community affiliation framework by adding a learned prior to make it a generative model, drawing on concepts used in SBMs that define latent distributions for edge formation. In embracing both inclusive and exclusive communities, it generalizes beyond models that preserve the triangle inequality and can be applied to bipartite graphs. PieClam’s log cut distance builds upon well-known cut-based measures in graph theory and helps establish it as a universal autoencoder.

Essential References Not Discussed: The reference list is sufficient.

Other Strengths And Weaknesses:
Strengths
1. The paper demonstrates notable originality by extending BigClam into a generative framework and enables a richer representation of graph structure compared to BigClam.
2. The theoretical analysis is thorough and convincingly supports the approach, underscoring its mathematical soundness.
3. The manuscript is well written, clearly organized, and technically solid, making it accessible to both experts and newcomers in the field.
Weaknesses
1. A key concern lies in the equivariance of the encoder architecture. While it is straightforward to design universal autoencoders when equivariance is not enforced, the paper does not clarify how its approach balances equivariance against universality.
2. The experimental evaluation would benefit from additional benchmarks, particularly for link prediction on OGB datasets, to more rigorously establish the method’s effectiveness and generalizability.
3. The computational complexity of the proposed approach remains unspecified, leaving questions about its scalability and feasibility for large datasets.
4. The paper does not fully explain how the model generalizes to unseen data.

Other Comments Or Suggestions: NA

Questions For Authors: Please respond to the aforementioned weaknesses.
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and in-depth evaluation of our paper.

>**Weaknesses:**
>>1. A key concern lies in the equivariance of the encoder architecture...

*Response.* Thank you for this comment. **PieClam (and all other Clam models) are in fact equivariant to node re-indexing.** This is due to the fact that maximizing the log likelihood via gradient descent in Clam models can be formulated as a message passing algorithm. We wrote this on Page 3, Column 1, Line 158 (about BigClam): *"We observe that the optimization process is a message passing scheme"* But we indeed did not link this to equivariance, and did not repeat this claim about all other Clam models. **If accepted, we will add an explanation about the equivariance of Clam methods to node re-indexing.**

>>2. The experimental evaluation would benefit from additional benchmarks...

*Response.* We ran additional experiments on link prediction on the OGB-ddi dataset. We will add more experiments in the final version.

| Model | Hits@20 test | AUC |
|---|---|---|
| **IeClam** | **90.72 ± 2.35** | 99.89 ± 0.00 |
| NCN | 76.52 ± 10.47 | 99.97 ± 0.00 |
| NCNC | 70.23 ± 12.11 | 99.97 ± 0.01 |
| GraphSAGE | 49.84 ± 15.56 | 99.96 ± 0.00 |
| GCN | 49.90 ± 7.23 | 99.86 ± 0.03 |
| SEAL | 25.25 ± 3.90 | 97.97 ± 0.19 |
| PEG | 30.28 ± 4.92 | 99.45 ± 0.04 |
| BUDDY | 29.60 ± 4.75 | 99.81 ± 0.02 |
| Node2Vec | 34.69 ± 2.90 | 99.78 ± 0.04 |
| MF | 23.50 ± 5.35 | 99.46 ± 0.10 |
| Neo-GNN | 20.95 ± 6.03 | 98.06 ± 2.00 |
| GAT | 31.88 ± 8.83 | 99.63 ± 0.21 |
| CN | 17.73 | 95.20 |
| AA | 18.61 | 95.43 |
| RA | 6.23 | 96.51 |
| Shortest Path | 0 | 59.07 |

>>3. The computational complexity of the proposed approach remains unspecified, leaving questions about its scalability and feasibility for large datasets.
*Response.* We wrote about the computational complexity of BigClam on Page 3, Column 1, Line 144: *"In order to implement the above iteration with $O(|E|)$ operations at each step, instead of $O(N^2)$, the loss can be rearranged as..."*

As written above, Clam models are basically trained with message passing algorithms, so they have efficient computational complexity: linear in the number of edges. On Page 4, Column 1, Line 208, we wrote: *"This loss can be efficiently implemented on sparse graph by the formulation..."*

We agree that we have not clarified enough that all other Clam models, including PieClam, share their computational and memory complexity with BigClam. **We will clarify in the camera-ready version (if accepted) that all Clam models have linear complexity with respect to the number of edges.**

>>4. The paper does not fully explain how the model generalizes to unseen data.

*Response.* Studying how and why models generalize to unseen data is one of the key and most fundamental (mostly open) questions in learning theory, and specifically statistical learning. In this paper we do not provide a generalization analysis, e.g., in a PAC learnability setting, VC dimension, Rademacher complexity, etc. We leave such analysis to future work, and note that most papers that first propose a deep learning architecture do not provide a generalization analysis. If your question is meant to be more practical, i.e., how the training and test data are defined, and how testing is defined, this depends on the problem.

1. In anomaly detection, the setting is explained in Section 4.3: *"In unsupervised node anomaly detection, one is given a graph with node features, where some of the nodes are unknown anomalies. The goal is to detect these anomalous nodes, without supervising on any example of normal or anomalous nodes, using only the structure of the graph and the node features"* Namely, here generalization is better interpreted as a form of "transfer learning".
One fits the Clam model to the whole graph data (solving the task of reconstruction/autoencoding), and then uses the trained community affiliation features to solve a different task: evaluating the likelihood of nodes according to the learned probabilistic graph model.

2. In link prediction, as explained in Section 4.4: *"In supervised link prediction, one is given a graph where for some of the dyads it is unknown if they are edges or not. The goal is to predict the connectivity of the omitted dyads."* So, here the training set for the Clam model is the known dyads (the given known edges and non-edges) and the test set is the unknown dyads, where the task is to decide if each unknown dyad is an edge or a non-edge.
Does learning the right latent variables necessarily improve in-context learning?
Accept (poster)
Summary: This paper investigates whether explicitly inferring task-relevant latent variables improves in-context learning (ICL) in Transformer models. The authors introduce an explicit model that enforces structured inference of latent variables and compare it with a standard implicit model that learns ICL end-to-end without explicit latent variable inference. The authors find that explicitly inferring task latents does not improve generalization. The explicit model effectively extracts latent variables but struggles to utilize them for robust predictions. This suggests that the challenge in ICL is not just learning task latents but also correctly leveraging them in downstream prediction.

Claims And Evidence: I am unsure whether the chosen setting and the assumption that a single latent variable can effectively summarize all context information is the best way to explain how Transformers behave. In particular, enforcing this assumption by preventing x_query from attending to other context tokens may impose an artificial constraint that does not align with how Transformers naturally process information.

Methods And Evaluation Criteria: See previous section.

Theoretical Claims: No theory in this paper.

Experimental Designs Or Analyses: To some extent, see the "Claims And Evidence" section.

Supplementary Material: I just skimmed through it, looking at details of the data set and model in sections C and B.

Relation To Broader Scientific Literature: NA

Essential References Not Discussed: NA

Other Strengths And Weaknesses: The paper explores a potentially interesting question regarding the role of explicit latent variable inference in in-context learning. However, in its current form, the study feels **incomplete** in several ways. First, the assumption that a **single latent variable can fully summarize context information independently of \( x_{query} \)** seems **unnatural**.
Transformers process information dynamically, and restricting direct attention from \( x_{query} \) to other context tokens may **artificially constrain** the model’s behavior rather than isolating a meaningful causal mechanism. This setup may not fully capture how in-context learning operates in standard architectures. Additionally, while the negative result is valuable, the study **does not sufficiently rule out alternative explanations**. For example:
- **Implicit models may also recover task-relevant latents** in their final-layer representations, but this is not systematically tested.
- The failure of the explicit model to leverage inferred latents for prediction is **not fully explained**—is this due to architectural constraints, optimization dynamics, or a fundamental limitation of explicit inference?
- The **bottleneck structure itself may limit information flow**, rather than revealing a true failure of latent inference to improve generalization.

These concerns are further detailed in the **"Questions for Authors"** section. Addressing them could significantly **strengthen the clarity and impact of the paper.**

Other Comments Or Suggestions: I find it difficult to pinpoint a clear takeaway from this paper. While it presents several interesting observations, the main message remains unclear.

Questions For Authors: Thank you for your insightful work on the role of latent variable inference in in-context learning. I have a few questions regarding your methodology and findings:

Recovering True Latents in Implicit Models
1. In your experiments, you show that the explicit model successfully extracts the correct task latents. However, have you considered whether a standard Transformer (implicit model) might also recover similar latent representations, perhaps in the final-layer embeddings or through a linear probe analysis? This would help clarify whether the explicit model is truly unique in this regard.

Dependence of Latents on Query Input
2.
The explicit model prevents the query x_query from directly attending to the context. However, does this design fully isolate the necessity of learning the correct latent variables? If the true latent is inherently dependent on x_query (e.g., in tasks where the latent determines the relationship between query and context), wouldn’t this architectural constraint potentially hinder optimal inference?

Failure to Leverage Learned Latents for Prediction
3. One of your key findings is that even when the explicit model correctly infers task latents, it does not generalize better. Could this be an issue with optimization dynamics (e.g., the choice of Adam or gradient descent) rather than an inherent failure to use the latents? Given that the last layer acts as a simple classifier, why does training another classifier on the same inferred latents yield better results? This suggests a mismatch between latent inference and prediction that is not fully explained.

Code Of Conduct: Affirmed. Overall Recommendation: 2
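The architectural constraint debated in this review (the query not attending to the raw context) amounts to an attention mask of roughly the following shape. This is an illustrative sketch; the token layout and function name are assumptions for exposition, not the paper's code.

```python
import numpy as np

def explicit_model_mask(num_context, num_query):
    """Boolean attention mask (True = may attend). Context tokens attend
    among themselves to produce the latent z; query tokens attend only to
    themselves (and, downstream, to the inferred latent), never to the
    raw context -- enforcing y_q independent of the context given x_q, z."""
    n = num_context + num_query
    mask = np.zeros((n, n), dtype=bool)
    mask[:num_context, :num_context] = True   # context <-> context
    idx = np.arange(num_context, n)
    mask[idx, idx] = True                     # each query token -> itself
    return mask
```

An implicit model would instead also set the query rows over the context columns to True, which is the standard full-attention ICL setup the reviewer alludes to.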
Rebuttal 1: Rebuttal: We thank the reviewer for providing valuable and constructive feedback.

> Single latent variable can fully summarize context information independently of $x_{query}$ seems unnatural

It is important to note that the dimensionality of the latent variable in explicit models is kept sufficiently large to encode the true latents for all the tasks considered (*except GP*). The independence assumption of the latent variable and $x_{query}$ is standard across the field:
- Deep learning models learn parameters $\theta$ of a neural network from training data $\mathcal{D}$ to generalize to test observations $x_{query}$. This is the setup for vision, natural language and tabular tasks, where $\theta$ is kept independent of $x_{query}$.
- The same assumption is also used in representation and unsupervised learning.
- Multiple works on ICL also claim this to be the underlying mechanism (Eq. 1 in [5,6]).

Since most machine learning approaches train a model *independent of the test set*, they follow this independence assumption. In contrast, test-time training methods can be seen as not following this assumption [4].

> restricting direct attention from x_{query} to other context tokens may artificially constrain the model’s behavior rather than isolating a meaningful causal mechanism

In all the tasks (*except GP regression*), the correct causal mechanism is to infer the true latents solely from the context and then leverage them for prediction. This indeed constrains the hypothesis class, and theoretical results suggest that reducing the hypothesis class leads to tighter bounds as long as the true solution is realizable [1], which it is, as the independence assumption is satisfied in the true data generating distribution, i.e. $y_q \perp \mathcal{D} \,|\, x_q, z$.

> Implicit models may also recover task-relevant latents

Indeed they can, but the search space is larger.
Given $T$ tokens and $L$ layers, it is not clear from which combination of the $T \times L$ representations one should decode latents. Different places to probe can provide vastly different inferences, whereas in the explicit model there is only one natural place to probe, as it is precisely trained to infer the latents. We refer the reviewer to Figures 4(b) and 9, where we investigate this with probes in both implicit and explicit models and show that the counterfactual performance is worse in implicit models, showing that either the latents are not sufficiently encoded or are just difficult to find.

> failure of the explicit model to leverage inferred latents for prediction

We conjecture that this is due to optimization dynamics or a lack of inductive biases, since our prediction model is expressive enough to represent a linear mapping, but is not able to do so when combined with the problem of inferring latents, in OOD settings. Note that this is similar to standard studies in OOD generalization that highlight failure cases of deep learning methods.

> bottleneck structure itself may limit information flow, rather than revealing a true failure of latent inference

Most works on inductive biases limit information flow and can aid generalization if this is reflected in the data too, e.g. modular systems [2] limit information flow by blocking non-activated experts, information bottlenecks through KL bounds [3], etc. In our work, we consider tasks where inferring the correct latents *should* block additional information flow from the context to the query. Could the reviewer clarify if they meant something else by a true failure of latent inference?

> why does training another classifier on the same inferred latents yield better results?

In one case we keep the last layer fixed and learn just the latent inference (known prediction), while in the other, we learn both latent inference and prediction jointly. Our experiments demonstrate that the latter leads to sub-optimalities, i.e.
joint training of prediction parameters and latent variable inference causes problems.

We hope that our response has resolved the reviewer's concerns and would be happy to provide further clarifications.

[1] Shalev-Shwartz, Shai, and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[2] Andreas, Jacob, et al. "Neural module networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
[3] Tishby, Naftali, and Noga Zaslavsky. "Deep learning and the information bottleneck principle." 2015 IEEE Information Theory Workshop (ITW). IEEE, 2015.
[4] Sun, Yu, et al. "Test-time training with self-supervision for generalization under distribution shifts." International Conference on Machine Learning. PMLR, 2020.
[5] Müller, Samuel, et al. "Transformers can do Bayesian inference." arXiv preprint arXiv:2112.10510 (2021).
[6] Han, Seungwook, et al. "Emergence of Abstractions: Concept Encoding and Decoding Mechanism for In-Context Learning in Transformers." arXiv preprint arXiv:2412.12276 (2024).

---

Rebuttal Comment 1.1: Comment: Thank you for the clarification.

Regarding the x_query and the explicit model: I understand the setup you're referring to. My question (in a simplified setting to make my point) is this: suppose the optimal algorithm is k-nearest neighbors. If we don’t allow x_query to be compared to other in-context examples, doesn’t that make it harder for the model to recover the k-NN algorithm? The reason I think this matters is that this kind of comparison between tokens seems like a central capability that enables transformers to perform so well.

Another point (and this might be due to my misunderstanding) is about the "true latent variable." If the model is implementing some algorithm in its forward pass, like gradient descent (as in Von Oswald et al., 2023) or others, then isn’t the ICL capability not really about learning the correct latent variables?
Could you clarify this? In summary, the main message of the paper is still unclear to me. The only concrete takeaway I see is that ICL is not due to learning some latent variable plus a classifier on top. But out of the many possible explanations for why ICL works, this paper seems to rule out just one specific type. Therefore, I'm keeping my score as is.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for engaging in discussion and hope that our response alleviates their concerns.

> suppose the optimal algorithm is k-nearest neighbors. If we don’t allow x_query to be compared to other in-context examples, doesn’t that make it harder for the model to recover the k-NN algorithm?

We completely agree that if the optimal prediction were k-NN (or any non-parametric model), then the comparison between tokens would be needed, and hence an implicit model might do better (e.g., GP regression). Conversely, if the true underlying model were parametric, the optimal algorithm could be to infer the parameters, and allowing comparison between tokens can make learning this solution harder. Thus, the two different choices above incentivize different solution structures; which one is preferable depends on the task.

Importantly, a very broad class of problems does have an underlying true model that can be described with some latent variables, e.g.:
- inferring the latent rules of games like chess,
- inferring semantics like objects and relationships from visual scenes,
- image classification: there is a true mapping that defines whether something is a cat or not,
- properties of molecules: principles of science (chemistry and physics) govern these properties,
- properties of galaxies through underlying parameters (often the target in simulation-based inference),

and so on. This is the class of problems that we focus on: for tasks with true underlying latent variables, is explicitly inferring those latents useful for better prediction?
We do not claim that all tasks require or are modeled through some underlying latents, but a lot of them are, and those form the basis of our hypothesis and analysis. Even further, if the optimal algorithm is k-NN, then the explicit model could also infer the same solution by learning the Voronoi tessellation (https://en.wikipedia.org/wiki/Voronoi_diagram) corresponding to the observations.

> If the model is implementing some algorithm in its forward pass, like gradient descent (as in Von Oswald et al., 2023) or others, then isn’t the ICL capability not really about learning the correct latent variables?

We thank the reviewer for bringing this point up and realize that there may have been a potential misunderstanding. We first note that [1] does model the latent variables ($W$ in their notation), but it does so **in a complex and distributed manner** by modeling this latent variable inference (*through gradient descent!*) and prediction ($\hat{y}_\theta$ in their notation) jointly with an implicit model. Note that if ICL is implementing some algorithm (gradient descent, Bayesian posterior estimation, etc.), then there needs to be some object (*a latent variable*) that is being optimized or whose posterior is being inferred. Essentially, [1] shows that the implicit model in linear transformers for linear regression can be seen as composing context aggregation and prediction (which is by design in explicit models, but can be inferred separately in this specific case of implicit models) such that the context aggregator infers the latents through gradient descent.

We note that, in contrast, we do not make any assumptions about what algorithm a context aggregator can use to infer the latents; we only test whether they do. Since we do not make strong assumptions about the task, it is not easy (*maybe even impossible!*) to break down the implicit model into context aggregation and prediction, as [1] does for the specific case of linear transformers and linear regression.
We thank the reviewer for their insight and hope our response answers their questions. We would greatly appreciate an increase in rating if the concerns have been addressed. [1] Von Oswald, Johannes, et al. "Transformers learn in-context by gradient descent." International Conference on Machine Learning. PMLR, 2023.
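As a self-contained illustration of the distinction discussed in this thread (explicit latent inference vs. an implicitly optimized latent vs. non-parametric comparison between tokens), here is a minimal NumPy sketch for in-context linear regression. This is our own sketch, not code from the paper or the rebuttal; all names and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 64
w_true = rng.normal(size=d)            # the task's latent variable
X = rng.normal(size=(n, d))            # in-context inputs
y = X @ w_true                         # noiseless in-context labels
x_q = rng.normal(size=d)               # query input

# Explicit route: infer the latent from the context alone, then predict.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
pred_explicit = x_q @ w_hat

# Implicit/GD route (cf. Von Oswald et al., 2023): the same latent is
# still the object being optimized, just via gradient descent on the
# context loss rather than by a dedicated inference module.
w_gd = np.zeros(d)
for _ in range(2000):
    w_gd -= 0.01 * (2.0 / n) * X.T @ (X @ w_gd - y)
pred_gd = x_q @ w_gd

# Non-parametric route: compare the query directly to context tokens.
k = 5
nearest = np.argsort(np.linalg.norm(X - x_q, axis=1))[:k]
pred_knn = y[nearest].mean()
```

For a parametric task like this one, the explicit and GD routes recover the same latent and agree with the true prediction, while the k-NN route is generally worse; the situation reverses for genuinely non-parametric tasks such as GP regression, which is exactly the trade-off the reply describes.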
Summary: This paper delves into the mechanism by which Transformers do In-Context Learning (ICL). A common belief is that Transformers do ICL through some statistical shortcuts and hence cannot generalize well on OOD tasks. The authors test this hypothesis by minimally modifying the architecture to encourage the model to explicitly aggregate the information in the context to learn the task representation before performing the final prediction of the label of the query. By modifying the architecture, the authors are able to compare a usual Transformer (the so-called implicit model) and the modified one (the explicit model). They conduct a series of experiments on various ICL tasks, including linear regression, non-linear regression and classification, and some reasoning tasks. They showed that:
1. The explicit model does not outperform the implicit model on various tasks, in both ID and OOD cases.
2. The explicit model can learn the task representation very well, so the reason it is poor in OOD cases is that it did not learn the final prediction part well. Replacing the final prediction function with an oracle leads to much better performance.

In general, the goal, experiments, and results are pretty well presented in this paper. The experiments are sufficient and clear. I like the style of this paper. Although they focused on a small question, they studied it deeply.

Claims And Evidence: I have one suggestion.
1. The OOD task for linear/nonlinear regression/classification relies only on sampling the query input from a distribution with a larger variance. This is not very sufficient, since this type of "query shift" can be well tolerated even by a single-layer linear self-attention model [1]. So I would suggest trying more complex distribution shifts and seeing what happens for both models.

[1] Trained Transformers Learn Linear Models In-Context.
Methods And Evaluation Criteria: /
Theoretical Claims: /
Experimental Designs Or Analyses: /
Supplementary Material: /
Relation To Broader Scientific Literature: /
Essential References Not Discussed: /
Other Strengths And Weaknesses: /
Other Comments Or Suggestions: /
Questions For Authors: /
Ethical Review Concerns: /
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and appreciate that they found the paper to be a clear, well-written, in-depth analysis of the subject.

To alleviate their concerns regarding OOD generalization, we refer them to Figures 2(c) and 5, where we test for compositional generalization instead of just shifting the query. In these tasks, novel combinations of underlying latents are provided during inference, as opposed to solely changing the query (see **Training and Evaluation** on Page 5).

Finally, we provide additional analysis where we test for OOD generalization in linear regression and classification by changing the distribution of the underlying weight vectors $w$ to be sampled from a normal distribution with larger variance. Our results, provided in Table 1 here (https://anonymous.4open.science/r/explicit-implicit-rebuttal-B263/explicit-implicit-rebuttal.pdf), indicate that it is not the case that explicit models are able to generalize better than implicit ones in such OOD scenarios. Even further, we see that while known prediction is a bit better, it still lags behind the implicit models, because this setting is OOD for the context aggregator while maintaining an in-distribution $x_{query}$.

We hope that our response has resolved the reviewer's concerns and would be happy to provide further clarifications.
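The two kinds of shift discussed in this rebuttal (OOD queries in the paper vs. the OOD latents added in Table 1 of the linked PDF) can be sketched as follows. This is an illustrative reconstruction with hypothetical scale factors, not the authors' actual data pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_task(d=8, n=32, w_scale=1.0, query_scale=1.0):
    """One in-context linear-regression episode.

    w_scale > 1.0     -> OOD latent: weight vector drawn from a wider normal.
    query_scale > 1.0 -> OOD query: the query input shifts, w stays ID.
    """
    w = rng.normal(scale=w_scale, size=d)        # underlying task latent
    X = rng.normal(size=(n, d))                  # context inputs
    y = X @ w                                    # context labels
    x_q = rng.normal(scale=query_scale, size=d)  # query input
    return X, y, x_q, x_q @ w

X, y, x_q, y_q = sample_task()                   # in-distribution episode
ood_query = sample_task(query_scale=3.0)         # the paper's OOD setting
ood_latent = sample_task(w_scale=3.0)            # the rebuttal's added setting
```

Note that only the latent-shifted episodes change what the context aggregator must infer; the query-shifted ones leave the context distribution untouched, which is why the rebuttal treats the two settings separately.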
Summary: This paper notes that when we do in-context learning, it is likely that the network is, in some sense, learning about the structure of the task. This paper considers task spaces that are explicitly low-dimensional, such as linear regression, where the in-context examples give information about the underlying regression. To encourage the network to use this low-dimensional structure, they add a bottleneck to the transformer. However, they find that this does not improve performance. Thus, the paper is in many ways presenting a "negative result".

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: N/A (no such theoretical claims).

Experimental Designs Or Analyses: I am satisfied that the evidence they present supports their claims.

Supplementary Material: No.

Relation To Broader Scientific Literature: Good connections drawn in the Introduction. Related work is in the Appendix, which I don't really mind, but some might object to.

Essential References Not Discussed: None to my knowledge.

Other Strengths And Weaknesses: My central issue with this paper is that it is, ultimately, a negative result. That can be fine if the negative result is sufficiently interesting, surprising, and convincingly argued. However, I don't think that's the case here. In particular, while the authors clearly expected to improve in-context learning by introducing a bottleneck, I believe that this would be a very rare view in the field. That's basically because transformer-based LLMs work really, _really_ well. In particular, they work really well at in-context learning, and they work really well at a broader array of tasks that share character with in-context learning (even completing the next token using a pre-trained model requires bringing in lots of information from the context). Now, perhaps _the_ essential component of transformers is self-attention.
And of course, self-attention is the _opposite_ of a bottleneck, as it allows each token to attend to any previous token. So I'm really not sure who would expect introducing a bottleneck to improve performance. Indeed, whenever we introduce bottlenecks of any form into attention (sliding window, quantised KVs, etc.), we get worse performance.

Additionally, the interpretability results aren't that interesting as:
* They're restricted to a network with bottlenecks that no one (presumably, not even the authors) would use in practice.
* Their experiments are limited to settings with known latent variables (interpretability is most interesting when we don't know the latent variables).

Now, if you buy that the result is expected, then I'm really not sure that the result is suitable for ICML. I would instead recommend that the authors consider a venue such as TMLR, which has two key criteria for acceptance:
"Are the claims made in the submission supported by accurate, convincing and clear evidence?"
"Would some individuals in TMLR's audience be interested in the findings of this paper?"
The paper does clearly meet these thresholds. But ICML requires something more (which was the entire point of setting up TMLR).

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, but strongly disagree that our paper is a negative result that is "not sufficiently interesting, surprising or convincingly argued". Besides the fact that several reviewers (1Akw, 1soF, TjQ5) found our motivation (detailed below) and study interesting and convincingly argued, ICML has a track record of accepting negative results [1].

> improve in-context learning by introducing a bottleneck, … very rare view in the field.

We believe the reviewer has misunderstood the motivation of our work. Our goal is not to provide a new architecture but to investigate the hypothesis that Transformers suffer from sub-optimal performance primarily due to insufficient latent variable inference. To systematically test this hypothesis, which is an open debate (Lines 19-52 RHS provide ample citations on both sides), we use an architecture biased towards latent variable inference using a bottleneck (Lines 52-89 LHS). If Transformers' performance is linked to explicitly inferring task latents, then biasing the model towards this solution ought to improve OOD generalization. If it doesn't, we can conclude that explicit latent variable inference isn't sufficient to improve ICL. We do not suggest that our architectures, or bottlenecks, are the answer; they are simply minimal interventions to test the importance of correct latent variable inference in ICL.

Even though our goal is not to introduce new architectures, it is not a "rare view" in ML to improve generalization via bottlenecks, for which ample evidence exists:
- MoE architectures [2] use only a subset of parameters.
- Perceiver models [3] introduce bottlenecks through learned latent variables.
- Retrieval- and memory-augmented methods [4] introduce bottlenecks by selecting a subset of data for context.
- Information bottlenecks [5] are well studied for improving generalization.
- Parametric assumptions (e.g.,
training a neural network on data and then throwing away the data) introduce a bottleneck, as opposed to methods like kNNs which retain entire datasets (note that the bottleneck here is the trained model, which has strictly less information than the data it was trained on).

In our own study, explicit models with a known prediction function (which is more bottlenecked than both the explicit and implicit models) outperform both on various tasks. In light of these works, we believe that the story is more nuanced than "introducing bottlenecks (or inductive biases) reduces performance".

> transformer based LLMs work really well.

We agree with the reviewer that they do, but this should not disincentivize research into improving or analyzing them.

> limited to settings with known latent variables

Our goal was never to provide a method for interpretability. To investigate the hypothesis that latent variable inference is not the problem, we need tasks where we can evaluate whether we are inferring the latents well. We rely on counterfactual predictions, a commonly used interpretability tool, as the metric to evaluate the extent to which the models infer task latents (hence the requirement for ground-truth latents). Our analysis highlights the difficulty of inferring task latents from implicit models, even though they perform well. This study thus contributes to a body of empirical evidence that allows us to conclude that improving task latent inference by itself is not the key to improved ICL generalization.

> interpretability is most interesting when we don't know the latent variables.

We disagree, because in such cases a metric for what is more interpretable is either unavailable, or there is a noisy, potentially incorrect proxy for it, which leads to mis-interpretations [6]. Thus, to rigorously test our hypotheses, we relied on tasks with known latent variables.
We hope that our detailed response has addressed the reviewer’s concerns, and we would be happy to engage in further discussion to understand and resolve further questions. We hope that our response sufficiently highlights why the hypothesis we study is well motivated and interesting.

[1] Karl, F., Kemeter, L. M., Dax, G., & Sierak, P. (2024). Position: embracing negative results in machine learning. arXiv preprint arXiv:2406.03980.
[2] Liu, Aixin, et al. "DeepSeek-V2: A strong, economical, and efficient mixture-of-experts language model." arXiv preprint arXiv:2405.04434 (2024).
[3] Jaegle, Andrew, et al. "Perceiver IO: A general architecture for structured inputs & outputs." arXiv preprint arXiv:2107.14795 (2021).
[4] Lewis, Patrick, et al. "Retrieval-augmented generation for knowledge-intensive NLP tasks." Advances in Neural Information Processing Systems 33 (2020): 9459-9474.
[5] Tishby, Naftali, and Noga Zaslavsky. "Deep learning and the information bottleneck principle." 2015 IEEE Information Theory Workshop (ITW). IEEE, 2015.
[6] Doshi-Velez, Finale, and Been Kim. "Towards a rigorous science of interpretable machine learning." arXiv preprint arXiv:1702.08608 (2017).
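To make the bottleneck argument in this exchange concrete, here is a schematic NumPy sketch of the structural difference between the two models: in the implicit model the query attends directly to every context token, while in the explicit model all context information must pass through a single low-dimensional latent $z$ before reaching the predictor. This is our own simplified sketch (random untrained weights, hypothetical widths), not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
d_tok, d_z, n_ctx = 16, 4, 10   # token width, bottleneck width, context size

def attention(Q, K, V):
    """Single-head softmax attention."""
    A = Q @ K.T / np.sqrt(K.shape[1])
    A = np.exp(A - A.max(axis=1, keepdims=True))
    return (A / A.sum(axis=1, keepdims=True)) @ V

ctx = rng.normal(size=(n_ctx, d_tok))   # context tokens
q = rng.normal(size=(1, d_tok))         # query token

# Implicit model: the query can attend to all context tokens directly.
out_implicit = attention(q, np.vstack([ctx, q]), np.vstack([ctx, q]))

# Explicit model: context is first compressed into one latent z; the
# predictor then only sees (x_query, z) -- the bottleneck.
W_down = rng.normal(size=(d_tok, d_z))  # stand-in context aggregator
z = np.tanh(ctx.mean(axis=0) @ W_down)  # single latent summarizing context
W_up = rng.normal(size=(d_z, d_tok))
z_tok = (z @ W_up)[None, :]             # latent re-embedded as one token
out_explicit = attention(q, np.vstack([z_tok, q]), np.vstack([z_tok, q]))
```

Whether the explicit route generalizes better then depends entirely on whether the task's true structure factors through such a latent, which is exactly the hypothesis the rebuttal says the paper is testing rather than advocating.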
Summary: The paper addresses a key question in in-context learning: whether explicit latent variable learning leads to better generalization, especially out-of-distribution (OOD). The conclusion is that the explicit bottleneck architecture does not help in terms of generalization.

Claims And Evidence: Yes, the paper's experiments are well designed to test the intended hypothesis.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: NA

Experimental Designs Or Analyses:
1. Is there any difference in the result when using an encoder (as in the paper) vs. using a decoder? Why did the authors choose to use an encoder in particular?
2. For the regression problem, why did you choose the OOD task to be scaled x, instead of scaled w? Since the latent variable is bottlenecked, the explicit model should arguably be better under any OOD shift of the latent variable. Could you check other OOD tasks for the regression problem, especially ones that are OOD in the latent variable?
3. Could you explain how many layers of Transformer are used in the implicit and explicit models (including the context model and the predictor head)? Would the depth of the Transformer itself have any impact on OOD generalizability? For instance, the deeper model may be better at OOD than the shallower model.
   - I see from the appendix that you consider two options for the predictor: one is a 4-layer Transformer, the other an MLP. Could you state clearly which model you used to report the results? And why do you think a shallower Transformer for context extraction is sufficient for the explicit model to generalize?
4. In the ablation study on "explicit models learn to infer the correct latent variable, but not how to use it", the authors suggest that forcing the model to explicitly predict the correct latent variable makes the explicit model generalize better. I wonder if the effect of "dimension" in the output also matters.
If I understand correctly, y_q is one-dimensional, while the latent variable is 1-dimensional for linear regression and 2-dimensional for the other tasks. Would it be possible to also set the nonlinear & sinusoidal tasks to 1 dimension in x when conducting the Figure 3 experiment? This could potentially rule out errors due to dimensionality.
5. Also in Figure 3, why does the classification task not benefit from predicting the latent variable directly?

Supplementary Material: I checked the experimental setup.

Relation To Broader Scientific Literature: This paper discusses the key hypothesis that learning the latent variable is not beneficial to the overall OOD performance of the transformer. This is an interesting observation for understanding how transformers learn to solve these synthetic problems.

Essential References Not Discussed: Not that I am aware of.

Other Strengths And Weaknesses:
Strengths: The paper uses extensive experiments designed to investigate the hypothesis of whether learning an explicit latent variable helps the transformer's generalization ability.
Weakness: It is in general hard to draw a fair comparison in the OOD experiments. For instance, as I mentioned in the experiment section, the authors choose 8 layers for the implicit Transformer, 4 layers for the explicit Transformer when extracting the latent z, and an MLP / 4-layer Transformer for the predictor. However, as far as I know, the depth of the model strongly affects the model's OOD performance on the regression task. It is not justified why the explicit model is designed like this (with 4 layers allocated to context extraction), and it would be beneficial if more explicit model designs were tested to accompany the results, for instance, a context extractor with the same depth as the implicit model.

Other Comments Or Suggestions: NA

Questions For Authors: Update after the rebuttal, and after the direct message from the author to AC: I appreciate the authors for conducting extensive experiments varying the number of parameters.
*I apologize for initially overlooking the additional results provided in the attached files*. I have now carefully reviewed the full set of results and summarized the key information below:

* For linear regression: the result is mixed.

| Implicit Model | Matching Explicit Model in terms of param | OOD Query (which is better) | OOD Latent (which is better) |
|---|---|---|---|
| 4 layers | N/A | N/A | N/A |
| 6 layers | (explicit-MLP) L_context=4, L_prediction=4 | explicit | implicit |
| 8 layers | (explicit-MLP) L_context=4, L_prediction=8 | explicit | implicit |
| 8 layers | (explicit-MLP) L_context=6, L_prediction=4 | explicit | implicit |
| 8 layers | (explicit-Tsf) L_context=4, L_prediction=4 | explicit | implicit |

* For linear classification, sinusoid, MLP regression, MLP classification: implicit is better.

| Implicit Model | Matching Explicit Model in terms of param | OOD Query (which is better) | OOD Latent (which is better) |
|---|---|---|---|
| 4 layers | N/A | N/A | N/A |
| 6 layers | (explicit-MLP) L_context=4, L_prediction=4 | implicit | implicit |
| 8 layers | (explicit-MLP) L_context=4, L_prediction=8 | implicit | implicit |
| 8 layers | (explicit-MLP) L_context=6, L_prediction=4 | implicit | implicit |
| 8 layers | (explicit-Tsf) L_context=4, L_prediction=4 | implicit | implicit |

* For RAVEN: the result is mixed.

| Implicit Model | Matching Explicit Model in terms of param | OOD (which is better) |
|---|---|---|
| 4 layers | N/A | N/A |
| 6 layers | (explicit-MLP) L_context=4, L_prediction=4 | implicit |
| 8 layers | (explicit-MLP) L_context=4, L_prediction=6 | explicit |
| 8 layers | (explicit-Tsf) L_context=4, L_prediction=4 | explicit |

* For gene, the result is not interpretable, since no explicit runs have a similar number of parameters to the implicit runs.

That said, under a similar total number of parameters, the implicit model generally performs better, though there are a few exceptions. The following concern still remains, and I hope to see results averaged over more trials to clarify this point:

> Additionally, I am somewhat confused. In the paper, the OOD task for linear regression is defined as OOD query. However, the new results suggest that for OOD query, the explicit model performs better when the latent parameters are learned explicitly under a similar parameter count. Specifically, in Figure 2, the blue block (representing the explicit model as Transformer) should correspond to the Explicit-Tsf model with L_context=4 and L_prediction=4. However, this seems to lead to a conflicting conclusion with what is now presented. I hope I'm not misunderstanding or misinterpreting anything here; perhaps this result is based on a single run, and the variance might explain the inconsistency. If that's the case, it would be helpful to clarify whether these results are averaged over multiple runs or represent individual trials.

In addition, I would like to see more fine-grained explicit model configurations that match the number of parameters of the implicit models. Among the results provided during the rebuttal, only a few explicit setups are comparable in parameter count: one setup roughly matches the 6-layer implicit model, and one or two setups roughly match the 8-layer implicit model.

Overall, I see promising signals that the implicit model outperforms in general, especially on ID tasks. However, I would also encourage the authors to investigate fairer comparisons by matching ID performance, e.g., tuning the parameter count so that both models achieve similar ID results, and then evaluating on OOD.
Currently, several settings (GP, MLP regression) have the implicit model achieving better ID performance than the explicit model, and the source of the OOD gap needs to be investigated.

Lastly, I would suggest that the authors moderate the strength of their conclusions. Since it is difficult to cover all possible setups, it would be helpful to explicitly limit the scope of the findings and include a discussion paragraph acknowledging that the results are task- and architecture-specific.

At this point, I fully understand the authors' efforts and frustrations during the intense rebuttal phase. I have updated my score to 4, and I hope the authors will seriously consider the comments above. Once again, I sincerely appreciate the authors' engagement and detailed responses throughout the discussion.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the value of our work and providing constructive criticism. Throughout this comment, we will refer to additional experiments that are provided here: https://anonymous.4open.science/r/explicit-implicit-rebuttal-B263/explicit-implicit-rebuttal.pdf

> Is there any difference in the result when using an encoder (as in the paper) vs. using a decoder?

The only difference is that a decoder would use a causal mask, as opposed to our setting. Since we feed tokens as [$(x_1, y_1), (x_2, y_2), \ldots (x_n, y_n)$], where $(\cdot)$ defines a token, we cannot leverage a causal mask (*refer to our response to Reviewer 1Akw*). Our training loss, however, is an unbiased estimator of a similar next-token-prediction decoder loss (modeling $y_{i+1}$ given [$(x_1, y_1), (x_2, y_2), \ldots (x_i, y_i), (x_{i+1}, \phi)$] for all $i$ in parallel), but more expressive, since context points can attend in the anti-causal direction as well. We chose this setup as it provides more supervision than feeding a token as either $x$ or $y$, and it is a common choice in many related works [1,2].

> Could you check the other OOD task for the regression problem, especially the ones that have OOD on the latent variable?

We thank the reviewer for this great suggestion and note that we already conduct similar experiments with OOD latents in Figures 2(c) and 5, where we test for compositional generalization instead of OOD queries. In these tasks, novel combinations of underlying latents are provided during inference, as opposed to solely changing the query (see **Training and Evaluation** on Page 5). Inspired by the reviewer's suggestion, we conduct additional analysis where we test for OOD generalization in linear regression and classification by changing the distribution of the underlying weight vectors $w$ to be sampled from a normal distribution with larger variance.
Our results, provided in Table 1 of the additional experiments, indicate that it is not the case that explicit models are able to generalize better than implicit ones in such OOD scenarios. Even further, we see that while known prediction is a bit better, it still lags behind the implicit models, because this setting is OOD for the context aggregator while maintaining an in-distribution $x_{query}$.

> Could you explain how many layers of Transformer are used in the implicit and explicit model

We use 8 layers in our experiments, where the explicit model splits prediction and context aggregation evenly (4 layers each). We conduct additional experiments where we compare an 8-layered implicit model to an explicit model with 8-layered context aggregation, and refer the reviewer to Figures 1 and 2 in the additional experiments. Our results indicate that even with a larger context aggregation model, the same results hold.

> I wonder if the effect of "dimension" in the output also matters

We point the reviewer to Figure 6 in the main paper, which studies the role of dimensions. In general, the trend is consistent with increasing the complexity of the task, whether through the dimensionality of the input, the latents, or the output. We also refer to Figure 4 of the additional experiments, where we study the role of the size of the output dimensions in linear regression.

> why is the classification task not beneficial from predicting the latent variable directly?

For the linear classification task, we believe that all the models have saturated their performance (note > 97%). For nonlinear classification, when we fix the prediction function, there is an infinite set of latents that could lead to the same functional form, but the solution space is quite entangled and convoluted (note that this refers to permutation and scaling symmetries that leave the functional form unchanged).
In contrast, it might be easier to explore alternate solutions by changing the prediction function to have a smoother landscape of possible latent variables.

We hope that our response has resolved the reviewer's concerns and would be happy to provide further clarifications.

[1] Hollmann, Noah, et al. "TabPFN: A transformer that solves small tabular classification problems in a second." arXiv preprint arXiv:2207.01848 (2022).
[2] Müller, Samuel, et al. "Transformers can do Bayesian inference." arXiv preprint arXiv:2112.10510 (2021).

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed response. After reading it and reviewing the additional results, I have decided to maintain my score.

Reason for not a higher score: While I appreciate the added experiment comparing implicit and explicit models, I still find it difficult to draw a fully fair comparison in the OOD setting. In the new explicit setup, both the context aggregator and the predictor are implemented as Transformers, which increases the total number of parameters compared to the implicit baseline. To better control for model capacity, one reasonable comparison would be to keep the context aggregator the same as in the implicit model and use a lightweight predictor such as an MLP. This would reduce the parameter overhead and make the setup more directly comparable to the implicit model.

On the other hand, the current explicit setup (where the aggregator is identical to the implicit model and the predictor is the known function) can be viewed as an upper bound on performance when the latent variable is provided. From Figure 2 in the additional experiments, this setup achieves comparable or slightly better performance than the implicit model.
More broadly, if the explicit model consists of two components (aggregator + predictor) and the goal is to investigate generalization within Transformer architectures, then how the model capacity is divided between these components can significantly affect performance. Evaluating only a single configuration (e.g., a 50/50 split) does not rule out the possibility that other allocations (e.g., 60/40, 70/30, etc.) may yield better results. This sensitivity is likely task-dependent, as different tasks may benefit from different capacity allocations between aggregation and prediction. I acknowledge that conducting such a sweep is non-trivial and resource-intensive, which makes it difficult to draw a strong negative conclusion from the current results. Reason for not a lower score: The paper presents extensive experiments and raises several insightful questions that contribute meaningfully to the understanding of task structure and generalization. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for engaging in discussion and hope that our response answers the questions and clarifies the concerns raised. Through our rebuttal experiments on a larger context aggregation model, we validated the hypothesis regarding explicit latent variable inference and OOD generalization. While in our original setup we did test with both a transformer and a lightweight MLP as predictors (refer to Figure 2 in the original paper), we agree with the reviewer that a more in-depth study of the sensitivity of task performance to the complexity of different model parts is important. To alleviate the reviewer's concern, we run a large-scale analysis with different numbers of layers in the context aggregator and prediction module for both the implicit as well as the explicit models (both MLP and Transformer predictors). 
The results are highlighted here: https://anonymous.4open.science/r/explicit-implicit-rebuttal-B263/explicit-implicit-suppl.pdf, and show that across a suite of different tasks, explicit models do not show an improved performance over implicit ones, thereby further validating our hypothesis and making our claim stronger. We thank the reviewer for their insight and would greatly appreciate an increase in rating if their concerns have been addressed.
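To make the implicit/explicit distinction discussed in this thread concrete, here is a toy sketch (not the models from the paper): in the explicit setup, a context aggregator first infers the task latent from the context pairs, and a prediction function then maps the latent and the query to an output. The sketch below uses noiseless linear regression, with least squares standing in for a learned Transformer aggregator and the known linear map playing the role of the "known prediction" setting; all names here are illustrative assumptions.

```python
import numpy as np

def explicit_icl_predict(X_ctx, y_ctx, x_query):
    """Explicit in-context prediction for linear regression:
    (1) aggregate the context pairs into a latent estimate theta_hat,
    (2) apply the known prediction function f(theta, x) = <theta, x>.
    Least squares stands in for a learned context aggregator."""
    theta_hat, *_ = np.linalg.lstsq(X_ctx, y_ctx, rcond=None)  # aggregator
    return x_query @ theta_hat                                 # known predictor
```

An implicit model would instead map (X_ctx, y_ctx, x_query) to the output directly, with no intermediate latent exposed; the thread's question is whether forcing the intermediate latent helps OOD.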
Summary: This paper investigates whether explicitly inferring latent variables of an underlying task improves in-context learning performance in transformers. They find that explicit modeling of latent variables does not necessarily improve performance compared to standard implicit models. They also find that while the explicit model does learn latent variables, the main problem with generalization is that its prediction function is not properly trained. Claims And Evidence: Yes, the authors are careful to make claims that are backed by sufficient evidence. Methods And Evaluation Criteria: Yes, there are several appropriate datasets the authors use to show the generality of their findings. Theoretical Claims: I did not check any proofs, but also did not see any theoretical claims in the paper. Experimental Designs Or Analyses: The experimental design is solid. The experiments are clearly explained, and the details I had questions about while reading I was able to find in the appendix. Supplementary Material: Yes, I read the entire appendix. Relation To Broader Scientific Literature: This work finds that causing the transformer architecture to explicitly infer latent variables of few-shot prompts is not sufficient for good ICL generalization. There have been a few works that suggest LLMs infer latent variables while doing ICL [1,2]. But this work showed little difference in performance between the standard transformer and the explicit model. This is an interesting data point among previous and related findings. For instance, while [3] & [4] provide evidence suggesting that LLMs trained on natural text seem to implicitly model latent variables, one contribution of this paper might be that it provides a way to perhaps understand the failure modes of these models - that it can similarly be attributed to a poor "prediction function" which seemed to be the problem with the failure cases of the explicit model in this work. 
There are some nice contributions of this work that might help us understand the architectural and algorithmic shortcomings of the models we use. ___ [1] Xie, et al. An explanation of in-context learning as implicit bayesian inference. ICLR 2022. (https://openreview.net/forum?id=RdJVFCHjUMI) [2] Wang, et al. Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning. NeurIPS 2023. (https://proceedings.neurips.cc/paper_files/paper/2023/hash/3255a7554605a88800f4e120b3a929e1-Abstract-Conference.html) [3] Hendel, et al. In-Context Learning Creates Task Vectors. EMNLP 2023. (https://aclanthology.org/2023.findings-emnlp.624/) [4] Todd, et al. Function Vectors in Large Language Models. ICLR 2024. (https://openreview.net/forum?id=AwyxtyMwaG) Essential References Not Discussed: This work is well-placed among previous and concurrent work. I wanted to point out two works that I think are related, but were not cited, which study the idea of in-context learning centering around latent variables [2,5]. [2] supports a Bayesian-inference view of in-context learning built on an internal latent variable. The work in [5] shows an example of how simple implicit transformer models seem to learn latent variables like others suggest is happening in larger LLMs. Maybe a discussion of how [5] relates to this paper's study of implicit vs. explicit modeling of latent variables would be helpful to provide more context to how we might interpret both results. ___ [2] Wang, et al. Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning. NeurIPS 2023. (https://proceedings.neurips.cc/paper_files/paper/2023/hash/3255a7554605a88800f4e120b3a929e1-Abstract-Conference.html) [5] Han, et al. Emergence of Abstractions: Concept Encoding and Decoding Mechanism for In-Context Learning in Transformers. (https://arxiv.org/abs/2412.12276) Other Strengths And Weaknesses: This is a solid paper. 
The proposed setting of explicitly modeling latent variables is nice, and the experiments are carefully designed to test specific hypotheses. The results are thoughtfully presented without overclaiming and are supported by evidence across a variety of tasks. The main weakness I'd say of the paper is that since prediction function failure was the main reason the explicit model didn't work, there could be more discussion about how to improve this during training beyond the current text: (i.e., "supplemented with significant inductive biases in the prediction function"). Could you provide some examples of what this might look like for different tasks? Does next-token prediction (e.g. language modeling) have sufficient inductive biases that might mediate some of these problems, or do you think it's purely architectural? Other Comments Or Suggestions: Here's a list of minor typos I found while reading through: - Line 131: "shown gray" -> shown in gray - Line 782: "Hodgkin-Hoxley" -> Hodgkin-Huxley? ___ Note after Rebuttal period: As before the rebuttal, I am leaning towards accepting this paper. This paper does have its limitations, but the things I learned from this paper I think outweigh any reservations I may have had. The rebuttal by the authors answered my questions, and I feel they have also addressed the concerns of the other reviewers as well. Questions For Authors: My main question is related to the learning of a good prediction function. 1. Do you have any intuition as to why the prediction function is not trained well enough even though it appears like the latents are being modeled properly? Is it because the ID training data is sufficiently different from the OOD data? Perhaps it depends on the task. 2. It's a bit surprising that explicit training does not learn a "good" prediction function. Is there some way to algorithmically compare the learned prediction function to the optimal one? This might be a nice way of characterizing the failure modes. 
Would freezing the first half of the explicit model after a while and training only the predictor function approach the optimal one? (It seems like they should be mutually reinforcing in a sense) 3. In Lines 136-142 (left), you mention you do not train with next-token prediction. Is there a reason this wouldn't work? Do you think this limits your results/claims in any way? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the value of our work and providing constructive criticism. > Additional References We appreciate the reviewer bringing two relevant papers to our attention and will include them in the final version. - Wang et al. (2023) shows that inferring latents ($\theta^d$ in their setup) is helpful but relies on a finite set of tasks and latents that are shared across different batches of observations. In contrast, our approach considers an uncountable set of tasks. They also assume data beyond the current context that belongs to the same task, while we only consider the current context as defining the latent. - Han et al. (2024) show that performance on in-distribution (ID) tasks highly depends on how well the task latent $\theta$ is encoded. However, they also consider only a finite set of tasks and do not focus on OOD generalization. In their language, we show that if the *concept decoding* part of the model doesn't have the proper inductive biases, it will only learn how to use the *encoded* concepts ID. While we show that explicit models do not outperform implicit ones even though they correctly infer the latents, it could mean either that the implicit model uses a different mechanism or that it infers the latents in a potentially distributed, uninterpretable way depending on the task. Answering this is important future work, and the reviewer points out relevant works that aim to do so in certain settings. > why the prediction function is not trained well enough even though it appears like the latents are being modeled properly Our hypothesis is that a learnable prediction function doesn't have appropriate inductive biases, either coming from its architecture or the training data. For example, an MLP prediction function can learn to be linear in the training regime, while being arbitrarily nonlinear outside, leading to suboptimal OOD performance. A similar argument can be made for other tasks. 
We will add a discussion about this in the draft. > algorithmically compare the learned prediction function to the optimal one We refer to Figure 11 for an example of a learned prediction function evaluated away from the training distribution. In addition, we refer to Figure 4 (https://anonymous.4open.science/r/explicit-implicit-rebuttal-B263/explicit-implicit-rebuttal.pdf) which illustrates the performance of explicit models with MLP prediction OOD on the sinusoid task. > you mention you do not train with next-token prediction. Is there a reason this wouldn't work? Do you think this limits your results/claims in any way? We could have trained our models using next-token prediction by feeding in tokens as [$(x_1), (y_1), (x_2), (y_2),\ldots (x_n), (y_n)$], where arguments inside $(\cdot)$ constitute a token. In this case, we could have done an augmented version of next-token prediction where we only consider losses corresponding to $x_i$ tokens (predicting the next query point is non-informative since the samples are *iid*). However, this requires the model to additionally learn which $y$ corresponds to which $x$. We instead feed [$(x_1, y_1), (x_2, y_2), \ldots (x_n, y_n)$] so the model does not need to learn that information; it is provided as input. However, this prohibits learning via next-token prediction *solely computationally and not algorithmically*. In practice, our training loss is an unbiased estimator of predicting $y_{i+1}$ given [$(x_1, y_1), (x_2, y_2), \ldots (x_i, y_i), (x_{i+1}, \phi)$] for all $i$ in parallel, while being more expressive by allowing anti-causal communication within the context. This choice of modeling is common across a number of related works [1,2]. > Additional insights into inductive biases to improve prediction function We believe that there are a number of directions for inductive biases that could be worth pursuing. 
One direction is the architectural design of the prediction function, with or without task-specific knowledge baked in (e.g. using $\sin(\theta x)$ as the predictor for sinusoidal regression leads to perfect OOD generalization). Alternately, one could also look at optimization strategies (eg. alternate optimization instead of jointly optimizing, freezing one part of the network, etc.) that could lead to better convergence of the prediction function. As the reviewer rightly points out, next token prediction also provides an inductive bias towards this goal. We defer an in-depth analysis into the inductive biases as well as improving the prediction function as future work. We hope that our response has resolved the reviewer's concerns and would be happy to provide further clarifications. [1] Hollmann, Noah, et al. "Tabpfn: A transformer that solves small tabular classification problems in a second." arXiv preprint arXiv:2207.01848 (2022). [2] Müller, Samuel, et al. "Transformers can do bayesian inference." arXiv preprint arXiv:2112.10510 (2021).
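The input layout described in the answer above on next-token prediction, packing context pairs $(x_i, y_i)$ followed by a query token $(x_{n+1}, \phi)$, can be sketched as follows. Treating the placeholder $\phi$ as a zero pad is an assumption for illustration, not necessarily the authors' choice.

```python
import numpy as np

def pack_context(xs, ys, x_query, phi=0.0):
    """Pack context pairs and the query into one token sequence:
    each context token concatenates (x_i, y_i); the final query token
    is (x_query, phi) with phi a placeholder for the unknown target."""
    ctx_tokens = np.concatenate([xs, ys], axis=-1)            # (n, d_x + d_y)
    pad = np.full((1, ys.shape[-1]), phi)                     # placeholder phi
    query_token = np.concatenate([x_query[None, :], pad], axis=-1)
    return np.concatenate([ctx_tokens, query_token], axis=0)  # (n+1, d_x + d_y)
```

With this packing, the model never has to learn which $y$ belongs to which $x$, since the pairing is encoded in each token.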
FactTest: Factuality Testing in Large Language Models with Finite-Sample and Distribution-Free Guarantees
Accept (poster)
Summary: • Introduces FACTTEST, a framework that statistically evaluates LLM factuality with theoretical guarantees to detect hallucinations • Formulates hallucination detection as hypothesis testing, controlling Type I errors (incorrectly classifying hallucinations as truthful) at user-specified significance levels • Leverages Neyman-Pearson classification techniques to define a score function measuring output correctness and determine appropriate thresholds using calibration datasets • Provides strong Type II error control (mistakenly rejecting truthful responses) under mild conditions when the score function effectively captures output correctness • Extends the framework to handle covariate shifts through density ratio estimation and rejection sampling • Demonstrates across multiple QA benchmarks that FACTTEST enables models to abstain from answering uncertain questions, improving accuracy by over 40% compared to base models • Shows FACTTEST outperforms fine-tuned baselines while using less training data Claims And Evidence: • The Type I error control claim is well-supported by both theoretical analysis (Theorem 2.1) and empirical validation (Table 2, Figure 1) showing error rates consistently below specified significance levels. • Performance improvement claims are generally substantiated: Table 1 shows significant accuracy gains (e.g., 39.74% to 83.90% on FEVER with OpenLLaMA-3B), though the headline "40% improvement" represents best-case scenarios rather than average gains. • Comparison with fine-tuned models is supported by Figure 3, demonstrating FACTTEST-t outperforms R-Tuning on HotpotQA and FEVER. On the other hand, the improvement is significantly smaller on other datasets. • Black-box API applicability is validated in Table 3, though only tested on GPT-4o mini. 
Methods And Evaluation Criteria: • The evaluation datasets (ParaRel, HotpotQA, WiCE, FEVER) represent a reasonable mix of question-answering and multiple-choice formats, spanning different knowledge domains. • The metrics (accuracy, Type I error, Type II error) directly align with the paper's goals and theoretical framework, providing clear performance indicators. • Comparing against both non-training methods (base models, SelfCheckGPT) and training-based approaches (R-Tuning) offers comprehensive benchmarking. • Testing across model scales (3B to 13B parameters) and architectures helps demonstrate generalizability. • The evaluation on black-box APIs is particularly valuable for real-world applicability where model internals may be inaccessible. Theoretical Claims: I did not look too closely at proofs. A cursory glance indicates they seem correct; the primary limitation is that Type II error control depends on score function quality, which is appropriately acknowledged in the paper. Experimental Designs Or Analyses: • **Type I error control experiments** (Table 2, Figure 1): Thoroughly validated across multiple datasets and models. The calibration process correctly maintains error rates below specified α thresholds, with results properly disaggregated by score function and model size. • **Comparison with fine-tuned models** (Figure 3): The experimental design fairly acknowledges data usage differences - FACTTEST-t uses only half the training data compared to R-Tuning and Finetune-All, strengthening performance claims. • **Covariate shift experiments** (Figure 4): Limited to a single dataset (ParaRel-OOD), which is adequate for proof-of-concept but insufficient to fully validate generalizability across different distribution shift types. • **Black-box API validation** (Tables 3): Sound approach of using open-source models to calculate certainty scores for closed models, though limited testing on question-answering tasks only. 
• **Limitations**: - No statistical significance testing for accuracy improvements - Temperature settings could affect uncertainty estimation - Limited analysis of score function selection impact on overall performance - No explicit runtime analysis for practical deployment considerations Supplementary Material: I did not review the supplementary materials. Relation To Broader Scientific Literature: • **Hallucination detection methods**: FACTTEST bridges the gap between three existing approaches: - Retrieval-based methods that require external knowledge bases - Training-based approaches like R-Tuning that need extensive fine-tuning - Uncertainty estimation techniques that lack theoretical guarantees • **Statistical learning theory**: Builds directly on Neyman-Pearson classification work • **Selective prediction**: Advances the line of work on LLM "know when they don't know" capabilities by providing formal statistical guarantees. • **LLM factuality benchmarks**: Uses established datasets (ParaRel, HotpotQA, FEVER, WiCE) that have been employed in previous factuality research. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** • Works with any uncertainty quantification method as the score function, making it adaptable as better estimation techniques emerge. • Can be applied without fine-tuning, providing immediate benefits to existing models. • Could be incorporated into existing LLM systems. **Weaknesses:** • **Limited to classification/short-form settings**: All experiments focus on question-answering or multiple-choice tasks with short responses. No evaluation on long-form generation where hallucinations often manifest differently and may require different detection approaches. • Multiple generations required for uncertainty estimation could limit practical deployment in latency-sensitive applications. • Performance likely sensitive to threshold choices, but limited analysis of this sensitivity. 
• Potential for false negatives when correct answers differ syntactically from reference answers. • The framework's binary approach may oversimplify factuality, which often exists on a spectrum. • Qualitative analysis of when/why the method fails would strengthen understanding of its limitations. Other Comments Or Suggestions: N/A Questions For Authors: 1. The framework was evaluated only on QA and multiple-choice tasks. Have you investigated its applicability to long-form generation where hallucinations often manifest differently (e.g., factual inconsistencies within paragraphs)? For example, testing on the dataset from "Long-form factuality in large language models". 2. What is the runtime overhead of FACTTEST compared to base models? The requirement for multiple generations (5-15) for uncertainty estimation could create latency issues in practical applications. 3. In many real-world scenarios, comprehensive calibration datasets with known ground truth may not be available. How might FACTTEST be adapted for open-domain settings where correct answers for calibration are limited? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed reviews and the questions you raise to help improve our paper! > ***W1: statistical significance testing*** We've conducted bootstrap analysis for 95% confidence intervals. Results available in: https://anonymous.4open.science/r/ICML_rebuttal-8905/icml2025__FactTest-2.pdf. Confidence intervals for all datasets will be included in our revision. > ***W2: Temperature could affect UQ*** Temperature affects uncertainty quantification but **doesn't limit FactTest**, which controls type I error for any score regardless of temperature. Experiments with different temperatures (see link above) show that while accuracy varies, type I error remains below α. > ***W3: Score function selection impact*** For any score function $\hat\eta$, the type I error is always controlled below $\alpha$ (Theorem 2.1). Type II error is nearly optimal with an inflation depending on $\epsilon_\eta=\inf_{H\text{ increasing}}\|H\circ\hat\eta-\eta\|_\infty$, which is the deviation between the oracle score $\eta$ and the used score $\hat\eta$ up to an increasing transformation $H$ (Theorem 2.2). > ***W4: Multiple generations and latency*** FactTest works with any score function, including single-generation ones. FactTest-cls in Table 14 (Section D.6) directly predicts answer correctness without multiple generations, with negligible runtime overhead. We expect more efficient score functions developed in the future to further enhance FactTest. > ***W5: sensitive to threshold choices*** I am afraid there is a misunderstanding of our type I and II error guarantee. Our method selects a threshold that guarantees type I error control below α with probability ≥1-δ for any user-specified α,δ. Moreover, our power analysis indicates that the selected threshold always possesses nearly optimal type II error as long as the score function captures the correctness of generated answers. Therefore, **the performance of the selected threshold is always guaranteed**. 
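For readers unfamiliar with how such a calibrated threshold can be chosen, the following is a minimal sketch (not the authors' implementation) of the order-statistic rule from the Neyman-Pearson umbrella algorithm (Tong et al., 2018), which underlies this style of guarantee: given certainty scores of known-incorrect calibration answers, pick the smallest order statistic whose binomial tail bound certifies the type I error level.

```python
import math

def calibrate_threshold(null_scores, alpha=0.05, delta=0.05):
    """Choose a threshold from certainty scores of known-incorrect
    (hallucinated) calibration answers so that, with probability at least
    1 - delta over the calibration draw, the type I error (accepting an
    incorrect answer) stays below alpha."""
    s = sorted(null_scores)  # ascending; higher score = more confident
    m = len(s)
    for k in range(1, m + 1):
        # Upper bound on P(type I error of threshold s[k-1] > alpha):
        # the upper tail of Binomial(m, 1 - alpha) at k.
        tail = sum(math.comb(m, j) * (1 - alpha) ** j * alpha ** (m - j)
                   for j in range(k, m + 1))
        if tail <= delta:
            return s[k - 1]
    raise ValueError("calibration set too small for requested alpha/delta")
```

At test time, an answer would be emitted only if its certainty score exceeds the calibrated threshold; otherwise the model abstains.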
> ***W6: False negatives*** FNs occur when score functions poorly separate correct/incorrect samples. This requires better score functions, **orthogonal to our contribution of providing statistical guarantees for any score function**. > ***W7: binary approach may oversimplify factuality*** We currently use binary accept/reject decisions at significance level α. A natural extension is to output the answer together with the largest confidence level $1-\alpha$ for which the answer is rejected, providing a spectrum of factuality. > ***W8: when/why fails*** While FactTest controls type I error for any score function, poor score functions increase type II error. With a constant score function, FactTest would reject all answers to maintain type I error bounds. > ***Q1: long-form*** Extending to long-form generation is a future direction. We see two approaches: 1. Document-level analysis: We could apply our framework with score functions designed to measure overall factuality, treating the entire response as a single unit. 2. Claim-level analysis: Formulate this as a multiple testing problem and extend FactTest from controlling the false positive rate to the false discovery rate. For an answer with $m$ claims $c_1,...,c_m$, we would have: $H_{0,j}$: Claim $c_j$ is not correct. $H_{1,j}$: Claim $c_j$ is correct. > ***Q2: runtime overhead*** The inference runtime of FactTest is shown in https://anonymous.4open.science/r/ICML_rebuttal-8905/icml2025__FactTest-2.pdf Single-generation score functions (e.g., FactTest-cls) add negligible overhead. > ***Q3: Open-domain settings with limited calibration data*** If the correct answers for in-distribution questions $P_{q,M(q),a}$ are limited, one possibility is to incorporate OOD questions $\tilde P_{q,M(q),a}$ for which correct answers are available, and then apply our method in section 3 to address the distribution shift in the calibration samples. 
Our method in section 3 only requires the oracle rule (optimal score function), of judging whether an answer is correct or not for a question, remains the same, $P(y=1|q,M(q))=\tilde P(y=1|q,M(q))$. Then it remains to estimate the density ratio of incorrect answers $\frac{dP_{q,M(q)|y=0}}{d\tilde P_{q,M(q)|y=0}}$, which equals to $$\begin{align} \frac{dP_{q,M(q)|y=0}}{d\tilde P_{q,M(q)|y=0}}=\frac{dP_{q,M(q),y=0}\tilde P_{y}(0)}{d\tilde P_{q,M(q),y=0}P_y(0)}=\frac{dP_{q,M(q)}\tilde P_y(0)}{d\tilde P_{q,M(q)}P_y(0)}\propto\frac{dP_{q,M(q)}}{d\tilde P_{q,M(q)}}. \end{align}$$ Since the multiplicative constant $\frac{\tilde P_y(0)}{P_y(0)}$ in the density ratio only affect the efficiency of rejection sampling, provided the range $B$ of uniform random variables is large enough compared to $\frac{\tilde P_y(0)}{P_y(0)}$, we can estimate the density ratio $\frac{dP_{q,M(q)}}{d\tilde P_{q,M(q)}}$ based on merely unlabeled question-generated answer pairs $(q,M(q))$, which doesn't rely on the correct answers at all. Therefore, we believe our method is still applicable even if correct answers for in-distribution questions are limited.
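The rejection-sampling step referenced above can be illustrated generically. This is a standard rejection-sampling sketch, not the authors' code; `density_ratio` stands in for an estimated ratio (e.g., from a domain classifier over question-answer pairs), and `B` is the assumed upper bound on that ratio.

```python
import random

def rejection_resample(samples, density_ratio, B, seed=0):
    """Keep each calibration sample z with probability density_ratio(z) / B,
    so the kept samples follow the target (shifted) distribution.  B must
    upper-bound the density ratio for the acceptance test to be valid."""
    rng = random.Random(seed)
    return [z for z in samples if rng.random() * B <= density_ratio(z)]
```

For instance, with uniform inputs on $[0,1]$ and ratio $2z$ (bound $B=2$), the kept samples follow the tilted density $2z$, so only the ratio up to a multiplicative constant matters, as the rebuttal notes.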
Summary: The paper proposes a framework to provide a statistical guarantee of the correctness of an output generated by LLM. The methodology leverages hypothesis testing and provides guarantees about type I and type II errors. Experiments are conducted on question-answering datasets. ## Update after rebuttal: I have increased my score after authors addressed my concerns. Claims And Evidence: No issues identified Methods And Evaluation Criteria: No issues identified Theoretical Claims: No issues identified Experimental Designs Or Analyses: No issues identified Supplementary Material: Yes. ablation study Relation To Broader Scientific Literature: The work is related to hallucination detection and statistical guarantees for a prediction. Essential References Not Discussed: Please refer to mentioned baselines in the strength and weakness section. Other Strengths And Weaknesses: Strengths - The work focuses on providing statistical guarantees while checking the factuality of the generated content. - The proposed method performs well for covariate shifts as well. Weaknesses - The guarantees are dependent on training data pairs. The generalizability of the proposed method could be more convincing with experiments on the OOD dataset. - Baselines are limited. To make the experiments more comprehensive, the baselines [1,2,3,4,5] could be included. [1] Chen, Chao, et al. "INSIDE: LLMs' internal states retain the power of hallucination detection." arXiv preprint arXiv:2402.03744 (2024). [2] Vashurin, Roman, et al. "Benchmarking uncertainty quantification methods for large language models with lm-polygraph." arXiv preprint arXiv:2406.15627 (2024). [3] Lin, Zhen, Shubhendu Trivedi, and Jimeng Sun. "Generating with confidence: Uncertainty quantification for black-box large language models." arXiv preprint arXiv:2305.19187 (2023). [4] Farquhar, Sebastian, et al. "Detecting hallucinations in large language models using semantic entropy." Nature 630.8017 (2024): 625-630. 
[5] Azaria, Amos, and Tom Mitchell. "The internal state of an LLM knows when it's lying." arXiv preprint arXiv:2304.13734 (2023) Other Comments Or Suggestions: Please refer to previous section Questions For Authors: Please refer to previous section Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback and suggestions. We are glad that you acknowledge our work’s motivation, method and theoretical claims. Here we provide responses and additional experimental results to address your concerns. > ***W1: guarantees are dependent on training data pairs*** In Section 3, **we specifically address the covariate shift setting**, which allows the distribution of testing question-answer pairs to differ from the distribution of calibration pairs, provided that the oracle rule (optimal score function) for judging whether an answer is correct remains consistent. We have **already incorporated experiments on OOD datasets** in Section 5.4, where Figure 4 demonstrates that FactTest-O (our extension for out-of-distribution domains) effectively controls Type I error and significantly outperforms baseline methods in terms of accuracy. These results empirically validate our theoretical extension to covariate shifts, showing that our framework maintains its statistical guarantees even when applied to question distributions different from those in the calibration set. > ***W2: More baselines*** Thank you for suggesting additional related works. We believe there may be a misunderstanding about the nature of our contribution. FactTest is **fundamentally a meta-framework that works with any score function** (including uncertainty quantification methods), deriving a statistically guaranteed threshold to determine whether to reject an answer, rather than proposing a new uncertainty scoring mechanism itself. Regarding the specific papers mentioned: [1] proposes a metric to evaluate self-consistency that could **serve as a score function within our FactTest framework rather than as a competing baseline**. [2] is a **benchmarking paper that does not propose a new UQ or hallucination detection method**, but rather evaluates existing approaches with new metrics. 
[3] proposes UQ methods for black-box hallucination detection which could **serve as score functions** for FactTest. We have implemented their UDEG approach as a score function within our framework (FactTest-udeg), with results shown below. [4] is **already implemented** in FactTest as score function, which we denote as FactTest-se in our experiments. [5] trains a classifier to predict statement truthfulness. We have implemented this as SAPLMA and compared it with FACTTEST. Additionally, we show how it can be used as a score function within our framework (FactTest-saplma). Here we provide experiments of FactTest compared with [5]. We also implement [5] and udeg in [3] as a score function within FactTest to further demonstrate how these UQ methods could work within our framework. | Dataset | Base Model | SAPLMA | FactTest-saplma | FactTest-kle15 | | ------- | ---------- | ------ | --- | -------------- | | ParaRel | 36.66 | 67.33 (0.24) | 84.77 (0.04) | 78.45 (0.03) | | HotpotQA | 25.72 | 25.13 (0.02) | 31.91 (0.04) | 55.35 (0.03)| | Dataset | Base Model | FactTest-udeg5 | FactTest-udeg10 | FactTest-udeg15 | | ------- | ---------- | ------ | --- | -------------- | | ParaRel | 36.66 | 44.8 (0.04) | 36.53 (0.04) | 36.71 (0.04) | These results further demonstrate how existing uncertainty quantification methods can be integrated into our framework, with FactTest providing statistical guarantees on Type I error control while maintaining or improving accuracy. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification about the nature of contribution of the paper, along with additional results. I will raise my score from 2 to 3. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer for the rebuttal comment and raising the score. If there are any additional concerns or suggestions, please let us know, and we will be happy to make further revisions.
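As an illustration of the "any score function plugs in" point made in this thread, a minimal self-consistency-style score function might look as follows. This is a toy stand-in for the scores discussed above (semantic entropy, SelfCheckGPT, SAPLMA, UDEG), not one of them; any such score could then be thresholded by a calibrated cutoff to control Type I error.

```python
from collections import Counter

def consistency_score(sampled_answers):
    """Score a question by agreement among k sampled answers:
    the fraction of samples belonging to the modal answer.
    Higher values suggest the model is more certain of its answer."""
    counts = Counter(sampled_answers)
    return counts.most_common(1)[0][1] / len(sampled_answers)
```

For example, four samples of which three agree yield a score of 0.75.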
Summary: The paper proposes FactTest, a framework to assess if an LLM can be factual with high probability correction guarantees. FactTest treats hallucination detection as a statistical hypothesis-testing problem. By doing this, it rigorously controls the maximum allowed Type I error rate ensuring hallucinations are not incorrectly classified as factual content at user-specified significance levels.. Additionally, under mild conditions, FactTest provides strong control over Type II errors, preventing truthful responses from being mistakenly rejected. The framework is distribution-free, making no assumptions about underlying data distributions or the number of human annotations. Moreover, FactTest is model-agnostic and applies equally well to black-box and white-box language models. It is also robust against covariate shifts, maintaining its effectiveness despite changes in input distributions. Extensive experiments conducted by the authors on question-answering benchmarks demonstrate that FactTest effectively detects hallucinations, enabling LLMs to abstain from answering uncertain questions, resulting in accuracy improvements of over 40%. Claims And Evidence: A lot of the theoretical claims are beyond my expertise. I have focussed some issues regarding the evaluation in the section below. Methods And Evaluation Criteria: I find the evaluation to be less convincing. I've listed my concerns below. - In table 1, using selfcheckgpt-NLI to reduce hallucinations using a threshold of 0.5 severely weakens the baseline. The selfcheckgpt paper does not claim that the NLI classifier is calibrated. A threshold of 0.5 could potentially optimize for increased coverage (instead of reduced risk) hence leading to the values not being comparable. - If the authors claim FactTest can reduce both type 1 and type 2 errors, why are the only comparisons with the baseline using accuracy? 
Why not use a metric like AUC-PR [1,2], AUCROC [2], or AU risk-coverage curve [3], like several other papers that measure uncertainty estimates for factuality? These metrics measure both Type I and Type II errors.
- The other tables that measure Type I and Type II errors don't compare to any baselines, so it's hard to know if FactTest is actually more reliable at measuring hallucinations than other approaches.

[1] Manakul, Potsawee, Adian Liusie, and Mark JF Gales. "SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models." arXiv preprint arXiv:2303.08896 (2023).
[2] Fadeeva, Ekaterina, et al. "Fact-checking the output of large language models via token-level uncertainty quantification." arXiv preprint arXiv:2403.04696 (2024).
[3] Kamath, Amita, Robin Jia, and Percy Liang. "Selective question answering under domain shift." arXiv preprint arXiv:2006.09462 (2020).

Theoretical Claims: At a very surface level. Many of the theoretical claims made about the connection between NP classification and PAC-style conformal prediction are beyond my expertise.

Experimental Designs Or Analyses: The experimental design seems valid.

Supplementary Material: I glanced over the supplementary material.

Relation To Broader Scientific Literature: The authors have missed more recent papers on estimating uncertainty for factuality.

Essential References Not Discussed: Mentioned in evaluation.

Other Strengths And Weaknesses: As mentioned, I've only commented on my concerns regarding the empirical evaluations (the improper metrics and lack of proper comparisons against baselines). I am happy to engage in more discussion with the authors about the concerns.

Other Comments Or Suggestions: -

Questions For Authors: Questions listed as concerns regarding evaluation.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We are glad that you acknowledge the validity of our framework and experimental design. Here we provide responses to your concerns one by one.

> ***W1: Threshold of SCGPT***

We acknowledge that using a threshold of 0.5 for SelfCheckGPT may not be optimal. In our framework, SelfCheckGPT is more suitable as a score function, where thresholds are derived with explicit control over Type I error, rather than as a direct baseline. FactTest aims to determine thresholds that guarantee control over Type I error for any score function. In fact, Table 13 in Section D.6 demonstrates the integration of SelfCheckGPT as a score function in our framework. For a more appropriate comparison, we have implemented SAPLMA [1] as a baseline, a classifier specifically designed for hallucination detection that predicts whether an answer is correct. The accuracy (%) and Type I error (numbers in parentheses) performance are:

| Dataset | Base Model | SAPLMA | FactTest-kle15 |
| -------- | ---------- | ------------ | -------------- |
| ParaRel | 36.66 | 67.33 (0.24) | 78.45 (0.03) |
| HotpotQA | 25.72 | 25.13 (0.02) | 55.35 (0.03) |

[1] Azaria, Amos, and Tom Mitchell. "The internal state of an LLM knows when it's lying."

> ***W2: Why accuracy, why not AUC-PR, AUCROC, or AURCC***

The primary aim of FactTest is rigorous control over Type I error by explicitly setting a rejection threshold for any score function (including UQ) with statistical guarantees. When determining whether an answer is correct using a given score function, a threshold must be selected to distinguish correct from incorrect responses, with each threshold corresponding to a specific Type I error rate. Rather than reducing Type I error, FactTest determines the threshold for any score function that can statistically ensure the Type I error stays below the user-specified $\alpha$.
This operating point is crucial for high-stakes applications where accepting a hallucinated answer even rarely can be unacceptable.

- Why not AUC-PR, AUCROC, or AURCC: These metrics evaluate the overall effectiveness of uncertainty scores across all possible thresholds. They do not capture performance at the fixed operating point (a specific threshold at a user-specified $\alpha$) required by our method. These metrics would be more appropriate for evaluating the underlying score functions rather than the thresholding mechanism itself. Users could use such metrics to compare different score functions before applying our FactTest framework.
- Why accuracy: Since we determine the threshold at a user-specified $\alpha$ and use it to reject answers considered incorrect, we naturally report Type I error (to verify our theoretical guarantees) and accuracy on willingly answered questions (to demonstrate practical utility). This approach directly measures the performance at our chosen operating point rather than averaging across all possible thresholds.

> ***W3: Type I and Type II don't compare to any baselines***

Our framework aims to control Type I error under a user-specified level $\alpha$, meaning the threshold determined by FactTest depends on $\alpha$. UQ methods typically output scores without rejection thresholds, while baselines like R-Tuning use fixed rejection rules, resulting in only one Type I error rate. Our Type I error figures (Figure 1) demonstrate performance with different $\alpha$ values, showing that Type I error can almost always be controlled below $\alpha$ using FactTest. Adding a baseline like R-Tuning would simply show a horizontal line, as it doesn't offer the flexibility of varying threshold levels that FactTest provides.
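To make the operating-point idea in W2 concrete, here is a minimal sketch of how a rejection threshold could be calibrated for an arbitrary score function. This is an illustrative empirical-quantile version only: the function name is ours, and a plain quantile without a finite-sample correction is a simplification rather than FactTest's exact procedure.

```python
import numpy as np

def calibrate_threshold(incorrect_scores, alpha):
    """Return a threshold tau such that roughly an alpha-fraction of
    calibration answers known to be hallucinated score above tau.
    Accepting only answers with score > tau then keeps the empirical
    Type I error near alpha (hypothetical helper; no finite-sample
    correction is applied here)."""
    return np.quantile(np.asarray(incorrect_scores), 1.0 - alpha)

# Toy calibration set of certainty scores for hallucinated answers.
hallucinated = np.arange(100) / 100.0
tau = calibrate_threshold(hallucinated, alpha=0.05)
empirical_type1 = np.mean(hallucinated > tau)
print(tau, empirical_type1)
```

On this toy set, about 5% of the hallucinated calibration answers exceed the calibrated threshold, matching the requested level.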
> ***W4:more recent papers on uncertainty*** We appreciate this feedback and will incorporate discussion of more recent works in our next version to ensure comprehensive coverage of related methods, including but not limited to: "[1] proposes a computationally efficient method that leverages semantic diversity to detect hallucinations. [2] revisits standard uncertainty metrics and highlights the limitations of naive entropy-based methods. [3] provides an effective and computationally efficient method to quantify predictive uncertainty in LLMs." [1] Semantic Entropy Probes: Robust and Cheap Hallucination Detection in LLMs [2] Rethinking Uncertainty Estimation in Natural Language Generation [3] Improving Uncertainty Estimation through Semantically Diverse Language Generation --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications! I still recommend weak acceptance
Efficient Source-free Unlearning via Energy-Guided Data Synthesis and Discrimination-Aware Multitask Optimization
Accept (spotlight poster)
Summary: The authors propose the DSDA source-free unlearning framework to address the challenge of inaccessible original training data. DSDA consists of two key components: (1) AEGDS generates synthetic data using Langevin dynamics, and (2) DAMO balances unlearning objectives by resolving gradient conflicts. Extensive experiments demonstrate that DSDA outperforms existing source-free unlearning methods in terms of both efficacy and efficiency. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method is well-suited to source-free unlearning. The evaluation metrics and datasets are widely used. Theoretical Claims: Yes. I checked the proofs for the key theorems (Theorem 3.1 on Langevin dynamics and Theorem 3.3 on multitask gradient optimization). Experimental Designs Or Analyses: Yes. The weaknesses are detailed below. Supplementary Material: I reviewed the proofs and implementation details. Relation To Broader Scientific Literature: This paper contributes to the under-explored challenge of source-free unlearning, a topic with increasing relevance due to privacy concerns and regulatory demands. Essential References Not Discussed: The paper thoroughly cites and discusses the related works necessary to understand its key contributions. Other Strengths And Weaknesses: Strengths 1. The authors propose an innovative and practical solution to a pressing and under-explored problem in machine learning. 2. The findings on feature distribution shifts after unlearning provide compelling evidence for the need for discriminative feature alignment. The observation that traditional unlearning methods disrupt intra-class compactness and inter-class separability highlights a previously overlooked issue. 3. Extensive experiments on multiple benchmark datasets strongly support the proposed method.
Additionally, the ablation studies illustrate the individual contributions of AEGDS, alignment objectives, and multitask optimization. 4. The paper is well-organized, with clear theoretical foundations and thorough experimental validation. Weaknesses 1. The authors claim that the synthetic data closely approximates the original data distribution using feature distribution overlap and visual verification. However, further incorporating quantitative metrics (such as FID) to evaluate the soundness of the synthetic data could strengthen the paper. 2. The framework includes several hyperparameters, especially the weighting factors for multitask optimization, yet the paper provides limited discussion on their potential impact on performance. A deeper analysis of hyperparameter sensitivity would be valuable for understanding the framework's robustness. Other Comments Or Suggestions: n/a Questions For Authors: Please refer to the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:
Rebuttal: ***W1: Incorporating quantitative metrics (such as FID) to evaluate the soundness of synthetic data could strengthen the paper.***

We respectfully disagree with the reviewer's suggestion. We emphasize that our goal is to generate synthetic data that approximates the original distribution while preserving privacy. Unlike model inversion methods, which aim to maximize image fidelity, our approach prioritizes unlearning effectiveness without reconstructing identifiable samples. Therefore, evaluating the synthetic data with metrics like FID, which emphasize visual quality, is not necessary for our objectives.

***W2: Parameter sensitivity experiments w.r.t. weighting factors.***

The weighting factors in our DAMO method are set to ensure that the three losses are on a similar scale. To analyze their sensitivity, we conduct experiments on the one-class unlearning task on CIFAR-10. We fix the weight of the retain loss $w_R$ at 0.1 and vary $w_F$ and $w_{Disc}$ within {0.01, 0.1, 0.5, 1}, observing the unlearned model's $A_r$ and $A_f$. The results in the table below show that when $w_F$ and $w_{Disc}$ are within the range of 0.1–0.5, the model's performance remains stable. However, an excessively large $w_F$ or a small $w_{Disc}$ leads to a noticeable decline in $A_r$.

| $w_R$ | $w_F$ | $w_{Disc}$ | $A_r$ | $A_f$ |
| ---- | ---- | ---- | ---- | ---- |
| 0.1 | 0.01 | 0.01 | 71.68 | 0.00 |
| 0.1 | 0.1 | 0.01 | 74.60 | 0.00 |
| 0.1 | 0.5 | 0.01 | 74.23 | 0.00 |
| 0.1 | 1 | 0.01 | 73.14 | 0.00 |
| 0.1 | 0.01 | 0.1 | 76.92 | 0.00 |
| 0.1 | 0.1 | 0.1 | 79.25 | 0.00 |
| 0.1 | 0.5 | 0.1 | 78.07 | 0.00 |
| 0.1 | 1 | 0.1 | 77.03 | 0.00 |
| 0.1 | 0.01 | 0.5 | 74.38 | 0.00 |
| 0.1 | 0.1 | 0.5 | 78.35 | 0.00 |
| 0.1 | 0.5 | 0.5 | 78.05 | 0.00 |
| 0.1 | 1 | 0.5 | 77.52 | 0.00 |
| 0.1 | 0.01 | 1 | 72.46 | 0.00 |
| 0.1 | 0.1 | 1 | 77.67 | 0.00 |
| 0.1 | 0.5 | 1 | 78.02 | 0.00 |
| 0.1 | 1 | 1 | 77.26 | 0.00 |
Summary: This paper proposes a new framework, DSDA, for machine unlearning without access to the training data. Specifically, DSDA first crafts synthetic data via energy-based models with Langevin dynamics and then performs unlearning using the generated synthetic data. The authors observe that simply formulating the unlearning problem as $\lambda\_1 \mathcal{L}\_R + \lambda\_2 \mathcal{L}\_F$ would cause retain-class samples to become dispersed and disrupt feature distributions; therefore, DSDA includes a feature alignment objective $\mathcal{L}\_{Disc}$ to improve intra-class compactness and inter-class separability of retain classes. Furthermore, since the objective now involves $\mathcal{L}\_R, \mathcal{L}\_F$ and $\mathcal{L}\_{Disc}$, gradient conflicts may happen during the unlearning process. Hence, DSDA employs a multitask optimization strategy to make sure the update vector is close to the joint gradient. Experiments on CIFAR-10, CIFAR-100, and PinsFaceRecognition datasets across CNN and ViT demonstrate the effectiveness of the proposed framework DSDA compared to baselines.

Claims And Evidence: In general, the claims made in the submission are clear and convincing. However, regarding the observation that simply formulating the unlearning problem as $\lambda\_1 \mathcal{L}\_R + \lambda\_2 \mathcal{L}\_F$ would cause retain-class samples to become dispersed and disrupt feature distributions, I recommend the authors provide more evidence to support it. In the submission, an empirical result on CIFAR-10 when forgetting class 1 is provided. However, $\lambda\_1, \lambda\_2$ may affect the results, as may the choice of forgotten class, i.e., scenarios where classes have hierarchical categories may affect the distributions and be more difficult.

Methods And Evaluation Criteria: Yes, the proposed framework DSDA is important for improving the practicality of machine unlearning, as training data may not be available, and DSDA does not need to collect auxiliary data.
Theoretical Claims: The proof seems correct. Experimental Designs Or Analyses: The experimental setting is suitable and correct. Please refer to Claims And Evidence for the concerns on the analysis of observations. Supplementary Material: Proofs and implementation details. Relation To Broader Scientific Literature: Compared to existing source-free machine unlearning methods, the key contribution of this work is that there is no need to collect auxiliary data and no need to train an auxiliary model or retrain the model. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The proposed DSDA crafts synthetic data via energy-based models with Langevin dynamics, providing a fresh perspective on source-free machine unlearning. However, this mechanism may be limited on high-resolution data. In addition, the proposed framework is limited to class-wise forgetting. Other Comments Or Suggestions: - It would be better to explain why contrastive alignment [1-2] was not adopted for the discriminative feature alignment objective, as the former can also achieve the objective and can drop the weight factors $\alpha$. - [3] discusses the gradient conflict issue in machine unlearning and employs an optimization method to resolve the issue. It would be better to explain the rationale behind the choice of the solution to gradient conflicts in the submission. - It would be better to acknowledge and discuss limitations such as class-wise forgetting in the paper. I am willing to raise my score if the authors can address my main concerns. ---- > [1] Liu, Qingxiang, et al. "Personalized Federated Learning for Spatio-Temporal Forecasting: A Dual Semantic Alignment-Based Contrastive Approach." arXiv preprint arXiv:2404.03702 (2024). > > [2] Tan, Yue, et al. "Is heterogeneity notorious? Taming heterogeneity to handle test-time shift in federated learning." Advances in Neural Information Processing Systems 36 (2023): 27167-27180. > > [3] Wu, Jing, et al.
"Erasediff: Erasing data influence in diffusion models." arXiv preprint arXiv:2401.05779 (2024). Questions For Authors: - Regarding the objective $\mathcal{L}_F$, what if DSDA applies random labelling? Would it help with feature alignment? - What is the data range for the crafted data? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:
Rebuttal: ***W1: This mechanism may be limited on high-resolution data.***

Thank you for the comment. However, we respectfully disagree. Our method is not restricted by data resolution, as demonstrated by our experiments on three datasets with varying resolutions: CIFAR-10 (32×32), CIFAR-100 (32×32), and PinsFaceRecognition (224×224). The consistent performance across both low- and high-resolution scenarios validates the broad applicability of our method.

***W2&C3: The proposed framework is limited to class-wise forgetting.***

We appreciate the reviewer's insight. Our method is designed for class-wise unlearning, as it aligns with common unlearning scenarios and is more practical in real-world applications. Most existing unlearning studies also focus on class-level or concept-level unlearning, as instance-level unlearning poses inherent challenges: removing a specific instance does not guarantee that the model will completely forget similar patterns, as retain data with similar features may still contribute to strong performance on the forgotten data. While our method is effective in removing class-level information, we recognize its limitations in finer-grained unlearning, such as instance-wise or attribute-wise unlearning, which we will clarify in our final version.

***C1: Why not adopt contrastive alignment for the discriminative feature alignment objective?***

The contrastive alignment methods in [1-2] employ a metric learning loss that is fundamentally similar to our $L_{disc}$ without the weight factors $\alpha$. However, our method introduces $\alpha$ to adaptively adjust gradient magnitudes based on how far a similarity score deviates from its optimal values. This adaptive weighting mechanism provides two key advantages. First, it prioritizes large updates for highly misaligned features, accelerating convergence and improving optimization efficiency.
Second, it prevents excessive penalization of already well-aligned features, reducing the risk of overcorrection. As a result, our method achieves more precise and stable discriminative feature alignment, ultimately enhancing unlearning effectiveness. ***C2: Explain rationale behind the choice of the solution to gradient conflicts.*** The method in [3] primarily balances two losses and requires a predefined distinction between the main and auxiliary objectives, limiting its applicability to more complex multi-objective optimization. In contrast, our approach can effectively balance three or more objectives without the need for a manually designated main objective, enabling a more flexible and adaptive optimization process. This allows for the seamless integration of the Discriminative Feature Alignment Objective and ensures a better trade-off among multiple competing objectives in unlearning. ***Q1: Regarding the objective LF, what if DSDA apply random labelling? Would it help with feature alignment?*** We appreciate the reviewer’s thought-provoking question. However, random labeling may not be an effective alternative for the forget objective $L_F$. As shown in Figure 2(b), most forget data samples are predicted as specific classes rather than randomly distributed across all classes. Additionally, prior research [1] has shown that fine-tuning a trained model with randomly labeled forget data can introduce unintended shifts in the decision boundaries of retain classes, ultimately degrading the model’s utility on the retain data. Therefore, the forget objective $L_F$, which explicitly optimizes unlearning through gradient ascent, is more controlled and effective in achieving the desired unlearning behavior. [1] Chen, Min, et al. "Boundary unlearning: Rapid forgetting of deep networks via shifting the decision boundary." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. 
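As an illustration of the kind of conflict handling discussed in C2 above, a generic projection-based scheme can be sketched as follows. This is our own PCGrad-style example for intuition only, not the paper's exact DAMO update rule; the function name is ours.

```python
import numpy as np

def combine_gradients(grads):
    """Combine per-objective gradients (e.g., for the retain, forget,
    and alignment losses). Whenever two gradients conflict (negative
    inner product), project one onto the normal plane of the other
    before summing, so no objective's update directly opposes another.
    PCGrad-style illustration only, not the paper's DAMO rule."""
    adjusted = []
    for i, g in enumerate(grads):
        g = np.array(g, dtype=float)
        for j, h in enumerate(grads):
            h = np.array(h, dtype=float)
            if i != j and np.dot(g, h) < 0:
                # Remove the component of g that opposes h.
                g = g - (np.dot(g, h) / np.dot(h, h)) * h
        adjusted.append(g)
    return np.sum(adjusted, axis=0)

# Two conflicting toy gradients: their inner product is negative.
update = combine_gradients([[1.0, 0.0], [-1.0, 1.0]])
print(update)  # [0.5 1.5]
```

Unlike schemes that designate a main objective, this symmetric projection treats all objectives equally, which matches the rebuttal's point about balancing three or more losses without a manually chosen primary one.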
***Q2: What is the data range for the crafted data?*** The crafted data follows a similar range to the preprocessed original data. For example, in the case of CIFAR-10, we apply a common normalization setting with CIFAR_MEAN = (0.5071, 0.4865, 0.4409) and CIFAR_STD = (0.2673, 0.2564, 0.2762), resulting in a transformed data range of approximately [−2,2]. Our synthetic data is generated within a slightly extended range of [−2.5,2.5], ensuring compatibility with the model while maintaining sufficient variability for effective unlearning. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The authors addressed my concerns, so I increased my final score to 4.
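As a quick check of the normalization arithmetic in the Q2 answer above: per-channel standardization (x - mean) / std maps raw pixel values in [0, 1] to roughly [-1.9, 2.0] for the quoted CIFAR statistics, consistent with the stated range of approximately [-2, 2]. This is a standalone sketch; the variable names are ours.

```python
# CIFAR normalization statistics quoted in the rebuttal above.
CIFAR_MEAN = (0.5071, 0.4865, 0.4409)
CIFAR_STD = (0.2673, 0.2564, 0.2762)

# Extremes of (x - mean) / std over raw pixel values x in [0, 1].
lows = [(0.0 - m) / s for m, s in zip(CIFAR_MEAN, CIFAR_STD)]
highs = [(1.0 - m) / s for m, s in zip(CIFAR_MEAN, CIFAR_STD)]
print(min(lows), max(highs))  # roughly -1.90 and 2.02
```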
Summary: The paper addresses the challenge of source-free unlearning for image classification ML models, where the training data cannot be accessed after initial model training. The paper proposes a novel framework called DSDA, which utilizes Langevin dynamics, Runge–Kutta methods and gradient-based multitask optimization to achieve source-free unlearning. The paper demonstrates that DSDA achieves superior efficiency and effectiveness through experiments on three datasets. The paper also conducts ablation studies for key components and presents visualization analysis of the synthetic data. Claims And Evidence: The authors provide adequate theoretical and experimental evidence for their claims. Methods And Evaluation Criteria: The proposed method and evaluation criteria are well-suited and relevant for the problem. Theoretical Claims: Yes. The theoretical claims regarding the AEGDS and DAMO are well-supported. Experimental Designs Or Analyses: Yes. The experimental design is adequate, including comparisons with sota methods, ablation studies and visualizations. Supplementary Material: I read the mathematical proofs and detailed experimental setups. Relation To Broader Scientific Literature: The paper contributes to the emerging field of source-free unlearning, which aims to remove specific data influence from a trained model without access to the original training dataset. The findings on feature distribution changes after unlearning contribute to the growing body of work on explainable AI and model interpretability. Essential References Not Discussed: The authors well discussed existing works on machine unlearning. Other Strengths And Weaknesses: Strengths: 1. The paper addresses a practical and critical problem of source-free unlearning.
Unlike existing methods that either require access to training data or incur high computational costs, the proposed framework introduces an efficient alternative by leveraging energy-guided data synthesis and multitask optimization. 2. The incorporation of Langevin dynamics-based sampling for synthetic data generation is an innovative approach that bridges model inversion, generative modeling, and unlearning. With the proposed AEGDS method, DSDA directly reconstructs data distributions without external generators, enhancing both privacy protection and computational efficiency. 3. The paper includes extensive experiments across multiple datasets and compares DSDA with sota baselines, demonstrating its superiority in both unlearning effectiveness and efficiency. Weaknesses: 1. A more detailed discussion on the potential limitations of the synthetic data (e.g., its ability to generalize to different data distributions) would enhance the robustness of the argument. 2. The legend in Figure 5 (a) appears to be small, which may affect readability. Additionally, the color contrast could be improved to enhance clarity. Other Comments Or Suggestions: See Weaknesses for details. Questions For Authors: Please respond to the weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ***W1: A deeper discussion on the synthetic data's limitations would strengthen the argument.*** We appreciate the reviewer’s suggestion. Our experiments on three datasets with 10, 100, and 105 classes demonstrate that the synthetic data effectively supports unlearning across diverse distributions. As shown in Section 4.4, the visualized feature distributions further confirm that the generated data closely approximates real data across various classes. However, we acknowledge that for certain distributions, such as highly sparse or imbalanced data, our method may not fully capture fine-grained structural details. Addressing this limitation is an important direction for future research. ***W2: The legend in Figure 5 (a) appears to be small, which may affect readability. Additionally, the color contrast could be improved to enhance clarity.*** We thank the reviewer for pointing out this issue. We will optimize Figure 5 (a) in our final version.
Summary: The authors present a well-structured and innovative approach to source-free unlearning, where a trained model must forget specific data without access to the original training dataset. To achieve this, the authors propose a novel two-stage framework DSDA. Firstly, the proposed AEGDS generates a synthetic dataset as a substitute for the original one using Langevin dynamics, enhanced with Runge–Kutta and momentum-based acceleration. Secondly, based on findings about the unlearned feature distribution, the authors introduce a novel discrimination-aware unlearning objective and perform balanced optimization to achieve unlearning. The authors conduct adequate experiments on three datasets and multiple unlearning tasks. Results show that DSDA outperforms existing source-free methods and is comparable to general methods in terms of efficiency and effectiveness. Claims And Evidence: Yes, the claims are well supported. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem or application at hand. Theoretical Claims: Yes (e.g. Theorems 3.1 and 3.3). Experimental Designs Or Analyses: Yes. Detailed in "Other strengths and weaknesses". Supplementary Material: I reviewed the supplementary material. Relation To Broader Scientific Literature: The paper also contributes to the fields of model inversion and multitask learning. Essential References Not Discussed: There are no related works not discussed. Other Strengths And Weaknesses: Strengths: 1. The authors present and address the pressing problem of source-free unlearning, a critical challenge in real-world machine unlearning scenarios where access to training data is restricted. The proposed framework offers a novel and efficient solution, surpassing existing source-free methods that rely on knowledge distillation. 2.
The authors provide insightful visualizations of the feature space to illustrate the impact of unlearning, highlighting how traditional methods disrupt feature distributions. These insights empirically justify the need for discriminative feature alignment, strengthening the theoretical motivation. 3. The metrics in the experiment are comprehensive, including accuracy, efficiency and defensive capability. 4. The authors conduct ablation experiments to isolate the contributions of each component of DSDA, clearly demonstrating the importance of each part of the framework in achieving the overall performance improvements. 5. The writing is clear and well-structured. In particular, the authors clearly introduce the core components AEGDS and DAMO with intuitive figures, algorithmic pseudocode, and detailed explanations, ensuring that the methodology is easy to understand and implement. Weaknesses: 1. Because the Method section is written with many symbols and equations, adding a notation table would make the paper clearer. Other Comments Or Suggestions: N/A. Questions For Authors: 1. As the proposed AEGDS generates synthetic data that approximates the original training distribution, I am concerned that the synthetic data could cause potential privacy linkage. 2. The authors mention weighting factors for multitask optimization, but provide limited discussion on their impact. Could they provide insights or experiments showing how these factors affect the framework's performance and balance between unlearning objectives? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:
Rebuttal: ***W1: Adding a notation table will make the paper clearer.***

We thank the reviewer for pointing out this issue. We will add a notation table in our final version.

***Q1: Could the synthetic data cause potential privacy linkage?***

The reviewer raises a critical concern. However, our work addresses this concern through an empirical analysis presented in Section 4.4. Specifically, we visualize the generated synthetic samples and demonstrate that they are visually indistinguishable from random noise, making it impossible for human observers to extract any meaningful information. This observation confirms that the synthetic data does not retain identifiable features of the original training data, thereby mitigating potential privacy risks.

***Q2: Parameter sensitivity experiments w.r.t. weighting factors.***

The weighting factors in our DAMO method are set to ensure that the three losses are on a similar scale. To analyze their sensitivity, we conduct experiments on the one-class unlearning task on CIFAR-10. We fix the weight of the retain loss $w_R$ at 0.1 and vary $w_F$ and $w_{Disc}$ within {0.01, 0.1, 0.5, 1}, observing the unlearned model's $A_r$ and $A_f$. The results in the table below show that when $w_F$ and $w_{Disc}$ are within the range of 0.1–0.5, the model's performance remains stable. However, an excessively large $w_F$ or a small $w_{Disc}$ leads to a noticeable decline in $A_r$.
| $w_R$ | $w_F$ | $w_{Disc}$ | $A_r$ | $A_f$ |
| ---- | ---- | ---- | ---- | ---- |
| 0.1 | 0.01 | 0.01 | 71.68 | 0.00 |
| 0.1 | 0.1 | 0.01 | 74.60 | 0.00 |
| 0.1 | 0.5 | 0.01 | 74.23 | 0.00 |
| 0.1 | 1 | 0.01 | 73.14 | 0.00 |
| 0.1 | 0.01 | 0.1 | 76.92 | 0.00 |
| 0.1 | 0.1 | 0.1 | 79.25 | 0.00 |
| 0.1 | 0.5 | 0.1 | 78.07 | 0.00 |
| 0.1 | 1 | 0.1 | 77.03 | 0.00 |
| 0.1 | 0.01 | 0.5 | 74.38 | 0.00 |
| 0.1 | 0.1 | 0.5 | 78.35 | 0.00 |
| 0.1 | 0.5 | 0.5 | 78.05 | 0.00 |
| 0.1 | 1 | 0.5 | 77.52 | 0.00 |
| 0.1 | 0.01 | 1 | 72.46 | 0.00 |
| 0.1 | 0.1 | 1 | 77.67 | 0.00 |
| 0.1 | 0.5 | 1 | 78.02 | 0.00 |
| 0.1 | 1 | 1 | 77.26 | 0.00 |
World Model Implanting for Test-time Adaptation of Embodied Agents
Accept (poster)
Summary: This paper proposes a world model implanting framework to augment LLM-based agents. The world models are learned using domain/task-specific data to capture the in-domain characteristics, serving as domain experts. With these world models, this paper introduces a prototype-based retrieval method together with an attention-based knowledge integration to allow test-time adaptation. This framework also applies a meta-learning objective to improve the adaptability to unseen tasks/domains. Experiments are conducted on the VirtualHome and ALFWorld benchmarks. Claims And Evidence: This paper claims that the proposed WorMI framework is able to generalize to unseen domains in zero-shot or few-shot manners by leveraging the most relevant information from the seen world models. However, how does this method guarantee that the knowledge of the seen world models will benefit the target task? If the target domain/task is far outside the training distribution, how does this paper handle this case? Methods And Evaluation Criteria: The proposed method applies off-the-shelf models to extract abstract state features. However, it might be risky to discard crucial visual/structural information about the environment. In addition, without being grounded in the visual environment, how does this paper ensure the LLM provides accurate actions without potential hallucination issues? Theoretical Claims: No theoretical claims are provided in the main paper. Experimental Designs Or Analyses: I have checked the Experiments section. For the meta-learning, this paper does not provide an ablation study or analysis to assess the effectiveness of this objective. The improvement in generalizability contributed by meta-learning remains unclear. Supplementary Material: I have read the appendix, and the supplementary material also contains the source code.
Relation To Broader Scientific Literature: The main contribution compared with previous LLM-based embodied agents is the integration of the domain-specific models, which provide domain knowledge to augment the LLM. Essential References Not Discussed: To the best of my knowledge, the references are sufficiently covered. Other Strengths And Weaknesses: Overall, this paper is well-written and easy to follow. For the weaknesses, please refer to the issues raised above. Other Comments Or Suggestions: Please refer to the issues raised above. Questions For Authors: Please refer to the issues raised above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your detailed comments. We will include the following experimental results and clarifications in the final version. > Q1. How does this method guarantee that the knowledge of the seen world models will benefit the target task? If the target domain/task is highly out-of-the-training distributions, how does this paper handle this case? Our method guarantees that the knowledge of seen world models benefits the target task by combining two key components. First, the prototype-based retrieval selects only those world models that were trained on data with distributions similar to the target environment, ensuring that the most relevant domain-specific information is leveraged. Second, our compound attention model integrates the selected world models with the reasoning model on two levels, world-level integration and reasoning-level alignment, while meta-learning is used to adapt effectively to diverse world model combinations. This process not only creates a robust, domain-specific composite model but also aligns the integrated knowledge with the reasoning model to enhance decision-making. This approach is validated by our results in Table 1, where our method outperforms baselines even in unseen domains, confirming that the aligned knowledge is indeed beneficial for the target task. Moreover, if the environment is completely unrelated to any available world model, the agent can still utilize the reasoning model’s common knowledge via world-to-reasoning alignment, maintaining at least a baseline level of performance. Please refer to our response to Reviewer SQ6Q, Q4 for details on the adversarial world models experiments, which involve models that are highly out-of-training distribution. Of course, in highly out-of-distribution scenarios that lie well beyond our assumptions, the success rate may drop significantly. > Q2. It might be risky to discard crucial visual/structural information about the environment. 
In addition, without being grounded in the visual environment, how does this paper ensure the LLM provides accurate actions without potential hallucination issues? In VirtualHome and ALFWorld, the environment provides text-based observations that include visual and structural information for performing tasks. These text-based outputs have been widely used in prior work [1, 2, 3, 4, 5], and the agent can select suitable actions without discarding crucial spatial or visual context. Moreover, our WorMI framework is designed to be modular with respect to input modalities: the reasoning model and individual world models can each handle different data forms. The table below shows the performance of multi-modal WorMI, which employs a VLM as its reasoning model, using both text and image states in VirtualHome. Multi-modal WorMI exhibits only a slight performance drop compared to WorMI, demonstrating its applicability to multi-modal setups. Additionally, there is certainly room for improvement of multi-modal WorMI, as we did not have enough time to optimize the hyperparameters. Even so, our approach still demonstrates superior performance compared to the baselines. We will include these experimental results in the final version.

| Model | SR (↑) | PS (↓) |
|------------------|--------|--------|
| Multi-modal WorMI| 57.65% | 17.21 |
| WorMI | 66.12% | 15.17 |

[1] Huang, Wenlong, et al. "Language models as zero-shot planners: Extracting actionable knowledge for embodied agents." ICML 2022.
[2] Song, Chan Hee, et al. "Llm-planner: Few-shot grounded planning for embodied agents with large language models." ICCV 2023.
[3] Hazra, Rishi, Pedro Zuidberg Dos Martires, and Luc De Raedt. "Saycanpay: Heuristic planning with large language models using learnable domain knowledge." AAAI 2024.
[4] Singh, Ishika, et al. "Progprompt: Generating situated robot task plans using large language models." ICRA 2023.
[5] Yoo, Minjong, et al.
"Exploratory retrieval-augmented planning for continual embodied instruction following." NeurIPS 2024.

> Q3. For the meta-learning, this paper does not provide an ablation study or analysis to assess the effectiveness of this objective. The improvement of generalizability contributed by meta-learning remains unclear.

The table below shows an ablation study on meta-learning for unseen tasks and scenes. WorMI-M is a variant that learns world model composition sequentially instead of using meta-learning. Compared to WorMI, WorMI-M exhibits lower performance. This indicates that meta-learning equips our framework to better handle world model combinations it may not have encountered during training, thereby improving generalizability. We will include these results in the final version.

| Model | SR (↑) | PS (↓) |
|----------|--------|--------|
| WorMI-M | 53.31% | 18.23 |
| WorMI | 66.12% | 15.17 |

--- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing the rebuttal. I've read the authors' response and comments from other reviewers. I have no further questions at this time. I will increase my original rating to 3. --- Reply to Comment 1.1.1: Comment: We deeply appreciate the reviewer’s insightful and constructive feedback. We are encouraged by the comments noting that our paper is well-written and easy to follow. The reviewer's insights on the experiments with highly out-of-distribution data, multi-modal evaluation, and the ablation study on meta-learning are extremely valuable. We will include these suggestions in the final version. Thank you once again!
Summary: The paper introduces WorMI, a framework designed to improve the adaptability of embodied AI agents across diverse and unseen domains at test time, without requiring extensive retraining or additional data collection. Experiments on the VirtualHome and ALFWorld benchmarks demonstrate that WorMI outperforms state-of-the-art LLM-based embodied agents in zero-shot and few-shot adaptation. The framework also supports continual model implanting, allowing new world models to be added or removed flexibly. Claims And Evidence: WorMI’s effectiveness in zero-shot and few-shot adaptation is strongly supported. In zero-shot tests, WorMI outperforms all baselines on unseen tasks and scenes. The paper also claims advantages of the prototype-based world model retrieval mechanism, which selects relevant domain-specific models at test time. This claim is backed by an ablation study: using WorMI’s prototype retrieval to pick a few pertinent world models yields better results than naive alternatives. The authors further claim that WorMI’s design generalizes across unseen domains. This is reflected in the experimental setup: in VirtualHome and ALFWorld, “unseen” test scenarios involve new tasks and/or environments not encountered in training. Methods And Evaluation Criteria: The WorMI framework logically addresses the problem of test-time adaptation for embodied agents. It does so by allowing the agent to dynamically retrieve and integrate domain-specific knowledge at inference time, rather than requiring retraining. However, there are several limitations of the proposed method. It introduces computational overhead due to retrieving and integrating multiple world models at test time, which may not scale well in resource-constrained settings. The framework heavily depends on large language models, making its performance sensitive to the LLM’s limitations, such as biases and hallucinations. 
Adaptation to dynamic environments is limited, as the approach primarily focuses on structured, predefined world models rather than handling real-time environmental changes. The lack of explicit robustness tests makes it unclear how well WorMI handles incorrect retrieval or misleading world models. Theoretical Claims: The paper includes a key theoretical component related to prototype-based world model retrieval. It provides a proof that the distance between prototype sets can serve as a bounded proxy for the distance between full datasets. While the proof shows a bound, it does not quantify how much information is lost by using prototypes instead of full embeddings. The paper does not provide experimental results comparing prototype retrieval vs. full retrieval to validate this bound in practice. Experimental Designs Or Analyses: The experimental setup appears mostly sound and well-motivated for the problem of test-time adaptation in embodied AI. The study evaluates WorMI on VirtualHome (a 3D household task simulation) and ALFWorld (a text-based embodied environment). These benchmarks are reasonable choices because they provide structured tasks, unseen environments, and diverse domain shifts. However, both are simulations, meaning results may not fully translate to real-world robotics applications. The paper compares WorMI against four baselines, covering a range of adaptation approaches. These baselines are appropriate and diverse, making the comparisons meaningful. The experiments are well-described and appear repeatable, as dataset splits, baselines, and architectures are clearly outlined. Supplementary Material: The supplementary material provides additional theoretical justifications, implementation details, and experimental analyses for the WorMI framework. Relation To Broader Scientific Literature: The concept of world models has been widely explored in robotics and reinforcement learning. 
Prior work, such as World Models by Ha & Schmidhuber (2018), introduced learned world models to enable agents to simulate future states and make more efficient decisions. The WorMI framework extends world models by allowing multiple world models to be retrieved and fused dynamically at test time. Unlike prior methods where a single learned world model guides decision-making, WorMI selects the most relevant world models per task, combining them with an LLM for reasoning. Test-time adaptation aims to help models generalize to new environments without retraining. WorMI introduces a retrieval-based approach to test-time adaptation. Instead of updating a model’s parameters during test time, it retrieves and fuses world models dynamically, reducing the need for computationally expensive online updates. WorMI follows a modular learning paradigm by treating each world model as a domain-specific knowledge module that can be combined dynamically. (Modular Multitask Reinforcement Learning with Policy Sketches) Essential References Not Discussed: Some test-time adaptation papers should be added to the related work section for a more comprehensive discussion: Test-time adaptation: Tent: Fully Test-Time Adaptation by Entropy Minimization Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization Test-Time Training with Self-Supervision for Generalization under Distribution Shifts Other Strengths And Weaknesses: Strengths: The paper presents an innovative approach to test-time adaptation by dynamically retrieving and fusing world models to enhance embodied agent reasoning. The empirical results demonstrate strong improvements over baselines in both zero-shot and few-shot adaptation scenarios. Weaknesses: Computational or memory efficiency is not analyzed, even though WorMI requires additional computation for retrieval and fusion. The study does not provide qualitative insights into failure cases, making it unclear why adaptation fails in some scenarios. 
The paper claims that WorMI is scalable, but it only evaluates retrieval from a small number of world models. It does not test how well the approach scales when hundreds of world models must be retrieved dynamically. There is no explicit analysis of what happens when the retrieval mechanism selects irrelevant world models, which could be a key failure mode in complex environments. The ablations confirm that both prototype retrieval and compound attention contribute positively, but they do not explore simpler alternatives like selecting the most relevant world model without fusion, which could reveal whether the full attention mechanism is necessary. Other Comments Or Suggestions: The paper is well-written, with clear explanations and a logical structure. The writing is concise and easy to follow, making the technical contributions accessible. The figures are well-designed and effectively illustrate key concepts, particularly the retrieval mechanism and world-wise compound attention. Questions For Authors: How well does WorMI scale when retrieving from a large number of world models (e.g., 50–100 instead of ≤6)? Have you tested how retrieval accuracy or computational cost changes as the number of world models increases? What happens when the retrieval mechanism selects an irrelevant or suboptimal world model? Is there a mechanism for detecting and correcting retrieval errors at test time? Have you considered evaluating WorMI in a real-world robotic setting instead of simulation? What are the main challenges in transferring the approach from VirtualHome and ALFWorld to physical environments? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your detailed comments. We will include references, the following experimental results, and additional clarifications in the final version.

> Q1. Experimental results for inference time and memory usage

The table below shows the inference times and memory usage of the baselines and WorMI. A detailed explanation is provided in our response to Reviewer zfbw, Q3.

| |Inference Time|Memory|
|-|-|-|
|LLM+FT, ZSP|298ms|21877MiB|
|LLM-Planner|401ms|21877MiB|
|SayCanPay|609ms|46230MiB|
|WorMI(K=2,N=4)|339ms|30020MiB|
|WorMI(K=2,N=6)|348ms|33445MiB|
|WorMI(K=3,N=6)|385ms|33445MiB|

> Q2. Clarification on the limitations due to utilizing LLMs

WorMI leverages domain-specific world models to reduce hallucination and bias instead of relying solely on the LLM’s own knowledge. As shown in Table 4, it remains robust even with a smaller LLM while keeping expansion costs low. However, completely eliminating LLM weaknesses remains challenging. These limitations are explicitly addressed in the main text.

> Q3. Experimental results for real-time environmental change

WorMI composes pre-trained models at test time, forming a composite world model that adapts to unseen or dynamic environments. Below are the results under real-time environment changes, such as shifting object locations or state changes over time.

|Model|SR (↑)|PS (↓)|
|-|-|-|
|LLM-Planner|45.35%|20.16|
|SayCanPay|41.33%|21.91|
|WorMI|58.36%|19.33|

> Q4. Robustness and scalability tests for prototype-based retrieval

In Table 3(a), we compare a random selection strategy (WorMI-R) to our prototype-based retrieval approach, illustrating that incorrect retrieval slightly degrades performance. To investigate this more thoroughly, we conducted two additional experiments. First, we increased the proportion of adversarial world models by replacing some with untrained Llama-3.2-1B models. With the prototypes unchanged, the retrieval could not distinguish adversarial from valid models.
At moderate proportions of adversarial world models, performance remains stable, but it drops sharply when their proportion becomes too high.

|Adv. ratio|SR(↑)|PS(↓)|
|-|-|-|
|0%|66.12%|15.17|
|16%|66.67%|14.96|
|33%|58.58%|16.00|
|50%|38.03%|21.42|

We also scaled the number of world models from N=6 to N=12. As shown in the table below, WorMI-R suffers more from incorrect retrieval as N grows, whereas our prototype-based retrieval remains relatively resilient.

|Model|SR(↑)|PS(↓)|
|-|-|-|
|WorMI-R(N=6)|62.04%|16.96|
|WorMI(N=6)|66.12%|15.17|
|WorMI-R(N=12)|51.17%|19.22|
|WorMI(N=12)|66.51%|14.90|

Overall, these results show that while suboptimal retrieval or misleading world models can reduce performance, WorMI’s compound attention integrating world-to-world knowledge remains robust unless most models are adversarial. We will include these results, along with an experiment using more world models, in the final version.

> Q5. Experimental results comparing prototype retrieval vs. full retrieval

The table below compares prototype retrieval and full retrieval (WorMI-P). There is almost no performance difference, but prototype retrieval significantly reduces inference time.

|Model|SR(↑)|PS(↓)|Inference Time|
|-|-|-|-|
|WorMI-P|66.54%|15.04|811ms|
|WorMI|66.12%|15.17|385ms|

> Q6. Qualitative analysis

One notable failure case arises when there is no relevant world model for the unseen domain. For instance, if the instruction is “Place the slipper in the closet”, but the agent has never encountered a slipper in any seen domain, the success rate drops and the agent might attempt actions for a similar object (e.g., “towel”). Because world-to-reasoning alignment training encourages reliance on the knowledge derived from the world model, the agent struggles in scenarios where it must rely solely on the reasoning model’s common knowledge.
In future work, we plan to explore more flexible architectures and training methods for controlling the degree of utilizing world models.

> Q7. Comparison with using only the most relevant world model

The table below shows a performance comparison between WorMI and the variant that uses only the single most relevant world model without fusion (WorMI-F). For seen tasks and scenes, a single world model suffices as it captures the domain knowledge. In contrast, for unseen tasks and scenes, combining multiple world models via world-to-world integration is advantageous since no single model fully represents the unseen domain.

|Model|SR(↑)|PS(↓)|
|-|-|-|
|Unseen tasks & scenes|
|WorMI-F|42.75%|20.54|
|WorMI|66.12%|15.17|
|Seen tasks & scenes|
|WorMI-F|82.72%|11.16|
|WorMI|85.78%|10.76|

> Q8. Challenges for real-world settings

We consider multi-modality critical for real robotic systems, which often rely on various sensor inputs. Our WorMI framework is designed to support this by allowing the reasoning model and individual world models to come from different modalities. We add a multi-modal experiment in our response to Reviewer MES3, Q3.

--- Rebuttal Comment 1.1: Comment: The authors have addressed most of my questions. I will keep my current rating for this paper. --- Reply to Comment 1.1.1: Comment: We are truly grateful for the reviewer's insightful and constructive feedback. We are encouraged by the comments noting that our approach is innovative, demonstrates strong experimental improvements, and that our experimental setup is both sound and well-motivated. The reviewer's suggestions regarding experiments on resource usage, prototype-based retrieval, and real-time environmental changes have been very helpful. We have addressed all of the reviewer's comments in our author response and will incorporate these suggestions into the final version. If you have any further questions or comments, please feel free to ask. Thank you once again!
Summary: This paper presents World Model Implanting (WorMI), a framework to improve the test-time adaptation of embodied AI agents. This work assumes access to a set of world models which are pre-trained on a set of datasets. During the adaptation phases, they select a subset of models that are most relevant to the input state (done via prototype-based retrieval), and then they perform "world-wise" hierarchical cross attention that is the input to the reasoning model. Claims And Evidence: Yes. All the claims have experiments to support them. Methods And Evaluation Criteria: Yes. Theoretical Claims: No significant theory (mostly an empirical work). One exception is the bound for prototype-based similarity that is discussed in the appendix. I have not given much importance to that part of the paper. Experimental Designs Or Analyses: Yes, I have checked the soundness and validity of the experiments. No critical issues in experimentation. Supplementary Material: Yes, I did review the supplementary material (the entire appendix) to get to know more details of implementation. However, as I mention later, I was unable to find the details of the "world models" and the object detection module used. Relation To Broader Scientific Literature: This work is very much related to the field of Embodied AI, and agents being able to adapt in their environment is a very critical problem to address since they are bound to fail to some new unseen environments/tasks. Essential References Not Discussed: Not to the best of my knowledge. Other Strengths And Weaknesses: **Strengths**: 1. The paper shows really strong few-shot results in unseen environments as well as on unseen tasks in Alfred and VirtualHome embodied environments. **Weaknesses**: 1. I find the notion of the world model in the paper to be misleading as the term is predominantly used to mean a transition dynamics model with an optional reward model. 
However, there are no details regarding the world model training or its description apart from one small paragraph at the beginning of Sec 3.2. 2. The writing of the paper needs to be improved. Lots of missing definitions and missing details (mentioned above as well as in other comments) make me question the reproducibility of this work. Other Comments Or Suggestions: No specific typo that I could find. Questions For Authors: 1. How are the "world models" trained? 2. What does $I$ in $D_j = \{(I, s_t, a_t, s_{t+1})\}$ denote? 3. Is it possible to compare the inference time for WorMI and the baselines? I'm curious to see how much time the selection of the relevant $K$ models takes. 4. For Figure 5(a), what are WM {1-6} trained on? 5. I am curious if WorMI can adapt to finding objects that are not typically in the "desired" location. For instance, for the task of "Place breadslice in microwave," the agent's attention is over the kitchen as it is the most likely place to be -- however, if the microwave (for whatever reason) is in the living room/bedroom, does the agent have the capability to explore? 6. What object detection module $\phi_D$ is used? If it's a learned model -- how many of the errors in the final performance are due to either misrecognition or missed recognitions? 7. [Clarification question]: For the results in Table 3(a), are the models still trained with world-wise compound attention and only differ in the number of models used to perform the retrieval? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your detailed comments. We will include the following experimental results and clarifications in the final version.

> Q1. How are the "world models" trained?

Following the reviewer’s comment, we clarify that the world models imitate the environment by capturing dynamics and affordances. Each world model is fine-tuned from LLaMA-3.2-1B using text-based states $s_t$, actions $a_t$, and instructions $I$. Specifically, we use three auxiliary tasks: (1) predict $s_{t+1}$ from $(s_t, a_t)$ to learn transition dynamics; (2) identify feasible actions $a_t$ from $s_t$ to capture affordances; (3) predict $a_t$ from $(s_t, I)$ to account for instruction conditioning. The dataset is collected only from seen scenes, and instructions are sampled solely from seen tasks. Training prompts for each world model build on the environment prompts in the appendix, with added instructions for each auxiliary task.

> Q2. What does $I$ in $D_j=\{(I,s_t,a_t,s_{t+1})\}$ denote?

$I$ denotes the embodied task instruction. We will add this description at Line 146 of the main text.

> Q3. Comparison of inference time and memory efficiency

The table below shows the inference times and memory usage of the baselines and our WorMI. We use LLaMA-3.2-11B for reasoning and LLaMA-3.2-1B for world models, with the same configurations for all baselines. LLM-Planner’s long in-context prompts increase inference time, and SayCanPay’s use of three models with repetitive action log probability computations further slows performance. WorMI uses smaller world models and selects only $K$ world models, with each domain-specific world model kept much smaller than the reasoning model, easing scalability.

|Model|Inference Time|Memory|
|-|-|-|
|LLM+FT, ZSP|298ms|21877MiB|
|LLM-Planner|401ms|21877MiB|
|SayCanPay|609ms|46230MiB|
|WorMI(K=2,N=4)|339ms|30020MiB|
|WorMI(K=2,N=6)|348ms|33445MiB|
|WorMI(K=3,N=6)|385ms|33445MiB|

> Q4. For Figure 5(a), what are WM {1-6} trained on?
WM1 to WM6 represent six pre-trained world models, with each model trained on a dataset from a distinct domain, as demonstrated in Figure 5(b) showing Domain 1 through Domain 6.

> Q5. Exploration capability of WorMI

WorMI is capable of exploration even if the object is located outside its seen domain. For instance, if the TV is usually in the living room but is found instead in the bedroom or kitchen, it checks that likely location, and if the TV is missing there, it searches other rooms. This works because, beyond each world model’s domain-specific knowledge, the world-to-reasoning alignment also leverages the reasoning model's general knowledge, letting the agent systematically explore instead of relying exclusively on its seen domains.

> Q6. Clarification of the object detection module $\phi_D$

We use the environment’s built-in object detection, which is error-free. Therefore, errors due to missed or incorrect object recognition do not affect our method or any of the baselines. This setup aligns with text-based simulation environments such as VirtualHome and ALFWorld, where object descriptions are provided without recognition errors, and is consistently applied in other comparison studies as well [1, 2, 3, 4, 5]. In addition, for real-world scenarios requiring visual input, our framework can integrate an object detection module $\phi_D$ by using a vision-language model as its reasoning component, extending the approach beyond purely text-based domains. The table below shows the performance of multi-modal WorMI, which employs a VLM as its reasoning model, using both text and image states in VirtualHome. Multi-modal WorMI exhibits only a slight performance drop compared to WorMI, demonstrating its applicability to multi-modal setups. Additionally, there is certainly room for improvement of multi-modal WorMI, as we did not have enough time to optimize the hyperparameters.
Even so, our approach still demonstrates superior performance compared to the baselines. |Model|SR (↑)|PS (↓)| |-|-|-| |Multi-modal WorMI|57.65%|17.21| |WorMI|66.12%|15.17| [1] Huang et al. "Language models as zero-shot planners: Extracting actionable knowledge for embodied agents." ICML 2022. [2] Song et al. "Llm-planner: Few-shot grounded planning for embodied agents with large language models." ICCV 2023. [3] Hazra et al. "Saycanpay: Heuristic planning with large language models using learnable domain knowledge." AAAI 2024. [4] Singh et al. "Progprompt: Generating situated robot task plans using large language models.", ICRA 2023. [5] Yoo et al. "Exploratory retrieval-augmented planning for continual embodied instruction following.", NeurIPS 2024. > Q7. Clarification for the configuration in Table 3(a) Yes. All variants in Table 3(a) use the same world-wise compound attention during training. They only differ in which world models are selected at test time. WorMI-E uses all models, while WorMI-R randomly picks a subset, and WorMI does prototype-based retrieval.
Summary: This paper presents WorMI, a framework enabling embodied agents to adapt to new domains at test time by combining large language models with domain-specific world models. It introduces prototype-based world model retrieval and world-wise compound attention to effectively integrate knowledge from multiple models. Experiments show WorMI outperforms existing methods in zero-shot and few-shot scenarios, demonstrating robust adaptation to unseen domains. The framework's design allows for scalable, efficient deployment in real-world settings where adaptability and data efficiency are crucial. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The paper introduces WorMI, a framework that advances embodied AI by enabling test-time adaptation through dynamic composition of domain-specific world models with LLMs, building upon and extending prior work in model composition, knowledge retrieval, and meta-learning. The novel prototype-based retrieval and compound attention mechanisms in WorMI efficiently select and integrate relevant models, addressing limitations in computational efficiency and adaptability found in previous approaches, and demonstrate superior performance in zero-shot and few-shot scenarios across established benchmarks. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1) The paper presents a novel framework (WorMI) that creatively combines prototype-based retrieval with a world-wise compound attention mechanism for embodied AI adaptation. This approach addresses a significant challenge in the field by enabling dynamic composition of world models at test time, representing a meaningful advancement over previous methods. 2) The work tackles a crucial problem for real-world embodied AI applications—adaptation to new domains without extensive retraining.
The demonstrated performance improvements, particularly in zero-shot and few-shot scenarios, suggest practical impact and scalability. 3) The paper is well-structured. Weaknesses: 1) While the combination of methods is novel, the individual components (prototype-based retrieval, attention mechanisms, meta-learning) are not entirely new. Some readers might argue that the innovation is incremental rather than groundbreaking. 2) The framework's performance is inherently dependent on the underlying language model, which could be seen as a limitation since it inherits any weaknesses of the LLM. 3) What specific challenges have you identified in deploying WorMI in real robotic systems with sensorimotor embodiments? Other Comments Or Suggestions: See the weaknesses Questions For Authors: See the weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your detailed comments. We will include the following experimental results and clarifications in the final version. > Q1. While the combination of methods is novel, the individual components (prototype-based retrieval, attention mechanisms, meta-learning) are not entirely new. Some readers might argue that the innovation is incremental rather than groundbreaking. To the best of our knowledge, our model implanting framework is the first to enable selective addition and removal of domain-specific world models in an agent’s policy at test time. The novelty lies in how we orchestrate them to enable a plug-and-play composition of world models. Rather than training a single monolithic model or simply combining multiple models externally, we propose a framework where each world model is fully implanted in the reasoning model only when relevant to the current domain. Our approach is novel in its three-fold strategy. First, prototype-based retrieval efficiently selects only the most relevant world models for an unseen domain, reducing overhead. Second, our compound attention mechanism fuses domain-specific knowledge and aligns it with LLM reasoning at test time. Finally, meta-learning enables seamless adaptation to new models or domains without retraining components. By combining these three ideas, we achieve a flexible test-time architecture that is easily extensible and maintainable, providing a clear step forward over existing methods that rely solely on in-context learning or static model ensembles. > Q2. The framework's performance is inherently dependent on the underlying language model, which could be seen as a limitation since it inherits any weaknesses of the LLM. WorMI leverages domain-specific world models to reduce hallucination and bias instead of relying solely on the LLM’s own knowledge. As shown in Table 4, it remains robust even with a smaller LLM while keeping expansion costs low. 
However, completely eliminating LLM weaknesses remains challenging. These limitations are explicitly addressed in the main text. > Q3. What specific challenges have you identified in deploying WorMI in real robotic systems with sensorimotor embodiments? We consider multi-modality critical for real robotic systems, which often rely on various sensor inputs. Our WorMI framework is designed to support this by allowing the reasoning model and individual world models to come from different modalities. The table below shows the performance of multi-modal WorMI, which employs a VLM as its reasoning model, using both text and image states in VirtualHome. Multi-modal WorMI exhibits only a slight performance drop compared to WorMI, demonstrating its applicability to multi-modal experimental setups. Additionally, there is certainly room for improvement in multi-modal WorMI, as we did not have enough time to optimize the hyperparameters. Even so, our approach still demonstrates superior performance compared to the baselines. We will include these experimental results in the final version.

| Model | SR (↑) | PS (↓) |
|------------------|--------|--------|
| Multi-modal WorMI | 57.65% | 17.21 |
| WorMI | 66.12% | 15.17 |
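As a rough, generic illustration of the prototype-based retrieval step discussed in this exchange (the function name, embedding dimensionality, and cosine-similarity choice are our assumptions, not WorMI's actual implementation), ranking candidate world models against an unseen domain's embedding might look like:

```python
from math import sqrt

def retrieve_world_models(query, prototypes, top_k=2):
    """Generic prototype-based retrieval sketch (hypothetical names, not
    WorMI's code): rank world models by the cosine similarity between the
    unseen domain's embedding and each model's prototype embedding, and
    return the top_k model names."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = sqrt(sum(a * a for a in u))
        nv = sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    ranked = sorted(prototypes, key=lambda name: cos(query, prototypes[name]),
                    reverse=True)
    return ranked[:top_k]
```

With toy 2-D prototypes, a query embedding close to one prototype retrieves that world model first; only the retrieved subset would then be implanted into the reasoning model, which is what keeps the retrieval overhead low.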
One Leaf Reveals the Season: Occlusion-Based Contrastive Learning with Semantic-Aware Views for Efficient Visual Representation
Accept (poster)
Summary: The work proposes occlusion-based contrastive learning combined with a masked image modeling (MIM) approach. It compares against iBOT, MAE, I-JEPA and other relevant SSL methods, achieving competitive results with less training time. Claims And Evidence: clearly stated and confirmed by the results. One of them is the use of occlusion in MIM training, which is well designed for the MIM framework rather than relying on manual augmentation selection. The OCL is able to extract better high-level concepts. Methods And Evaluation Criteria: correct and appropriate; methods based on contrastive learning and MIM are commonly used, but using occlusion for training is novel. Standard benchmarks such as ImageNet and architectures such as ViT-L/16 are used. Additionally, tasks such as linear probing and fine-tuning are reported, with further benchmarks on COCO and ADE20K. Theoretical Claims: NA Experimental Designs Or Analyses: following good practices and other works. Common benchmarks such as ImageNet, ADE20K and COCO, and tasks such as fine-tuning and linear probing. Additional ablations on the time needed to train are provided and look promising. Generalization capabilities are also tested on different variants of ImageNet. Supplementary Material: Not Relation To Broader Scientific Literature: provided extensively, including SimCLR, iBOT, I-JEPA and MIM. Essential References Not Discussed: it is discussed properly Other Strengths And Weaknesses: an interesting and well-written work with a clear presentation of the claims and a methodology that is easy to follow. Other Comments Or Suggestions: a discussion of how much time is needed for occlusion in this setup? Not the whole pretraining, but I would like to see what fraction of pretraining time is spent on occlusion compared to standard augmentations. Questions For Authors: how much time is needed for occlusion in this setup during training compared to non-occluded images? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your sincere review, especially for summarizing the strengths of our work, including 1) interesting and well-written work, 2) clear presentation of the claims, confirmed by the results, 3) a novel occlusion training method that is easy to follow, and 4) good experiments with common benchmarks, additional promising ablations, and generalization validations. In response to your concerns, we have provided a detailed explanation below and revised our manuscript accordingly. 1. **(Comment & Question) occlusion time.** Thanks so much for your sincere and constructive suggestions. Different from the standard augmentations used in traditional contrastive pre-training, we implement the occlusion operations with torch modules under CUDA acceleration. **Quantitatively, our whole model requires 12.03 G floating-point operations (FLOPs), while the occlusion operation accounts for only 0.12 G FLOPs, about 1%.** This confirms that occlusion operations take up very little computation and runtime while bringing great efficiency improvements. Besides, **to better demonstrate the efficiency of our model, we construct the table below following MixMAE to compare with previous methods on ImageNet-1K classification with the ViT-B model.** All methods are evaluated by Aug., Epoch (Ep.), FLOPs/G, Parameters (Param.)/M, linear probing (LIN) and fine-tuning (FT). The resolution of images is fixed to 224×224. Aug. indicates the use of handcrafted view data augmentation during pre-training. FLOPs/G reflects the runtime and computational resources of pre-training. Param./M is calculated for the encoders of the pre-training model, following MixMAE. $\dagger$ denotes results copied from MixMAE. Top-1 accuracy (Acc) is used as the metric.

| | Aug. | Ep. | FLOPs (G) | Param. (M) | LIN | FT |
|-------------------------------------------------------|:----:|:-----:|:---------:|:----------:|:-----:|:-----:|
| **Masked Image Modeling** | | | | | | |
| BEiT $\dagger$ | w/o | 800 | 17.6 | 87 | - | 83.2 |
| MAE $\dagger$ | w/o | 1,600 | 17.5 | 86 | 68.0 | 83.6 |
| CAE $\dagger$ | w/o | 1,600 | 17.5 | 86 | 70.4 | 83.9 |
| I-JEPA | w/o | 600 | 17.5 | 86 | 72.9 | - |
| **Contrastive Learning** | | | | | | |
| DINO | w/ | 1,600 | 74.7 | 171 | 78.2 | 82.8 |
| MoCo v3 | w/ | 600 | 74.7 | 171 | 76.7 | 83.2 |
| **Masked Image Modeling with Contrastive Learning** | | | | | | |
| SiameseIM | w/ | 1,600 | 16.3 | 88 | 78.0 | 84.1 |
| ccMIM | w/o | 800 | 39.1 | 86 | 68.9 | 84.2 |
| ConMIM | w/ | 800 | 17.5 | 86 | - | 85.3 |
| MixMAE $\dagger$ | w/o | 600 | 15.6 | 88 | 61.2 | 84.6 |
| iBOT $\dagger$ | w/ | 1,600 | 17.5 | 86 | 79.5 | - |
| OCL | w/o | 800 | 12.0 | 86 | 74.2 | 83.4 |

**From the table, our method requires only 12.0 G FLOPs, improving on the second-best method (MixMAE at 15.6 G) and reducing computational costs by approximately 23%.** This verifies that our occlusion operation yields a considerable enhancement of efficiency. **We have revised our paper and added more detailed descriptions about occlusion time.** Thanks for your sincere suggestion. Thank you once more for generously dedicating your time to provide a thoughtful review. Your feedback is tremendously valuable, and we are open to hearing from you at any time. If you find our response satisfactory, we would greatly appreciate your assistance in improving our rating score.
Summary: The paper introduces Occluded Image Contrastive Learning (OCL), a novel self-supervised learning (SSL) paradigm for efficient visual representation. OCL combines the strengths of Masked Image Modeling (MIM) and Contrastive Learning (CL) by using random masking to create diverse views within an image and contrasting them within a mini-batch. The key innovation lies in generating fine-grained semantic differences through masking, which reduces conceptual redundancy and avoids the need for hand-crafted data augmentations or auxiliary modules. The authors demonstrate that OCL is highly scalable, achieving competitive results on downstream tasks like ImageNet classification, object detection, and segmentation, while significantly reducing pre-training time and computational resources. Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed method is well-motivated. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: I believe the experiments are not convincing enough. Supplementary Material: No Supplementary Material. Relation To Broader Scientific Literature: The paper builds on existing work in Masked Image Modeling (MIM) (e.g., MAE, BEiT) and Contrastive Learning (CL) (e.g., SimCLR, MoCo v3), but distinguishes itself by combining the strengths of both paradigms without relying on hand-crafted augmentations or auxiliary modules. Essential References Not Discussed: This paper has considered most of the relevant works as far as I know. Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and easy to follow. 2. The ablation studies are comprehensive. Weaknesses: 1. In line 134, only a low masking ratio is adopted, and the contrastive learning requires two branches, which may incur much higher computational costs. However, no effective training epochs or running time are provided in Table 4 for fair comparison. 2. Although more computational cost is consumed, the fine-tuning and linear-probing results are only comparable to previous SOTA contrastive learning and MIM methods. 3. The detection and segmentation results on the COCO and ADE20K datasets are also lower than previous SOTA methods. Other Comments Or Suggestions: It would be better to provide the training wall-clock time or computational costs of the proposed method. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your review, especially for summarizing the strengths of our work, including 1) a well-motivated method, 2) a well-written paper that is easy to follow, and 3) comprehensive ablation studies. In response to your concerns, we have provided a detailed explanation below and revised our manuscript accordingly. 1. **(Weakness 1) computational costs.** Thanks very much for your detailed review. Traditional contrastive paradigms such as MoCo v3 and DINO use student-teacher dual networks as two branches to process distinct views, leading to almost double the computation cost for one image. However, **we do not use two networks as two branches; instead, we use two different parts of one image as the two branches.** Thus, the two branches of contrastive learning in our method bring no additional computation costs. Combined with the masking strategy, we can further reduce the amount of computation per image. Moreover, our model does not depend on an additional transformer decoder to reconstruct the image, leading to less computation. **In summary, these contributions significantly reduce the computational costs of the pre-training paradigm and improve efficiency.** - To better demonstrate the efficiency of our model, we construct the table following MixMAE. https://s2.loli.net/2025/03/31/RfKjqYMtAxoVQHC.png (Same Table as in Reviewer 2yVR, sorry for limited chars.)
https://s2.loli.net/2025/03/31/VqHwXUrMgpcGTt1.png - Furthermore, **we have also conducted ablation experiments on mask ratio and discussed it in section 3.2.1. MASKED RATIO of our manuscript.** The results demonstrate that a lower masking ratio (0.4) optimally balances computational efficiency and visual representations. https://s2.loli.net/2025/03/31/VfcqPNk7SXuoIjO.jpg - Besides, **we also provide the runtime with FLOPs and parameters for Table 4** to better illustrate the ablation of the MLP head from the efficiency perspective.

| MLP Head | FLOPs (G) | Param. (M) | Pre-training Hours | Eff. Bsz. | LIN | FT |
|--|--|--|--|--|--|--|
| w/o | 42.3 | 303 | 559 | 1,800 | 77.4 | 85.7 |
| 2-layer | 43.6 | 305 | 600 | 1,800 | 76.7 | 85.6 |
| 3-layer | 43.7 | 306 | 611 | 1,800 | 76.6 | 85.6 |

**We have added the discussion about the computational costs of the two branches in our contrastive paradigm, added a table of pre-training details of the ViT-B model to show the runtime and effective pre-training epochs, and revised Table 4 to add FLOPs and Parameters.** Thanks for your constructive suggestions. 2. **(Weakness 2) fine-tuning and linear-probing results.** Many thanks for your sincere review. We are sorry for the misunderstanding of our theme in this paper. Our goal is to provide a new pre-training paradigm for efficient visual representation with affordable training time and reasonable computation cost. **Actually, our method reduces computational costs while maintaining performance.** From the FLOPs comparison in the table above, our 12.0 G FLOPs improve significantly on the second-best MixMAE at 15.6 G, **improving efficiency by about 23%.** With such a significant improvement in efficiency, our method achieves fine-tuning and linear-probing results comparable and competitive with previous SOTA contrastive learning and MIM methods.
**We have added a more detailed discussion about the fine-tuning and linear-probing results to clarify the misunderstanding.** Thanks for your sincere review. 3. **(Weakness 3) detection and segmentation results.** Thanks for your comments. We find this concern quite similar to Weakness 2. We provide a variety of downstream tasks to show that our pre-trained model has generalization ability competitive with previous SOTA methods. **However, our core contribution lies in proposing a simple pre-training paradigm that balances efficiency and performance, no bells and whistles.** We hope it helps us train larger vision models and replicate the success of LLMs. **We have also added a discussion about the relationship between downstream tasks and our theme.** Thanks for your sincere review. We appreciate your time and thorough review. Your feedback is highly valuable to us, and we welcome further communication from you. If you are satisfied with our response, we would be grateful for your support in enhancing our rating score.
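To make the "two parts of one image as two branches" idea from this rebuttal concrete, here is a minimal plain-Python sketch (function and variable names are ours; the paper's actual implementation is a CUDA-accelerated torch module): a global random mask keeps a fraction of the patch indices, which are then split into two non-overlapping views.

```python
import random

def two_disjoint_views(num_patches, keep_ratio=0.4, seed=0):
    """Illustrative sketch (not the authors' code): randomly keep
    `keep_ratio` of the patch indices, then split the kept indices into
    two non-overlapping halves -- one per contrastive branch."""
    rng = random.Random(seed)
    perm = rng.sample(range(num_patches), num_patches)  # random order, no repeats
    n_keep = int(num_patches * keep_ratio)
    half = n_keep // 2
    return perm[:half], perm[half:2 * half]

# ViT-B/16 on a 224x224 image yields 14x14 = 196 patches
view_a, view_b = two_disjoint_views(196)
```

Because both views are drawn from one permutation, they can never share a patch, which is what lets a single masked forward pass serve both contrastive branches without a second network.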
Summary: This work proposes a contrastive self-supervised training method that relies on masking. After a global masking step, two nonoverlapping sets of patches are selected to create two views which are then aligned to each other and contrasted to all views of other samples in the minibatch. Noticeably, no contrastive MLP-head is used. Despite its simplicity regarding architecture and augmentations, the proposed OCL method performs reasonably well both in fine-tuning and linear probing evaluations. Additionally, the authors report a significant decrease in training duration compared to multiple other efficient methods, e.g. MAE, I-JEPA. The method applies a loss (T-distributed spherical metric) that introduces an additional concentration parameter. ---- update after rebuttal: The authors have addressed some of my concerns and answered the questions. Therefore, I will keep the positive score. Claims And Evidence: Claims are that OCL is a highly efficient SSL method that performs nearly on-par with SOTA methods. Evidence is provided in various benchmark evaluations. Methods And Evaluation Criteria: Imagenet classification evaluation is the common way to evaluate self-supervised methods. Additionally, semantic segmentation and object detection performance is evaluated on ADE20k and COCO, respectively. Robustness is evaluated on a few ImageNet-like benchmarks (A,R and S). Overall, a solid set of evaluation experiments. Ablation and sensitivity studies are mainly performed on ImageNet1k. Theoretical Claims: no theoretical claims. Experimental Designs Or Analyses: I checked the performance and ablation experiments and did not find any issues. Supplementary Material: Yes. All three pages. Relation To Broader Scientific Literature: OCL is another small step towards improved SSL training efficiency and less reliance on hand-crafted augmentations. Essential References Not Discussed: none. Other Strengths And Weaknesses: Strengths: - Efficent and simple. 
The result without MLP head is quite astounding. - Only needs very basic augmentations, like MAE. Masking is performed randomly, which is a strength compared to methods that rely on specific masking strategies tailored to ImageNet, e.g. I-JEPA. - Like other joint embedding methods, it is shown that the learned features already have a higher level of semantic abstraction than standard MIM methods. Weakness: - Results are good, but not ground-breaking. The main argument for OCL is its reported efficiency. The number of reported epochs varies between 800 (line 380) and 1,600 (Table 6). Unless the authors already take the 2 generated views into account and report effective epochs, this does not fit. Furthermore, there is not a single table that holds both runtime and training epochs. - Given the similarity in effective masking ratio to MAE and the need to create 2 views, the source of the speedup is not that easy to explain. See questions. - Using the strategy from MoCo v3 to freeze the patch layer reduces the number of learnable parameters and has shown benefits with regard to overfitting. On the other hand, it suffers from the same problem as e.g. max-pooling: it often works well, but sometimes fails. No study about the effect of randomly initialized untrainable patch creation is performed. Other Comments Or Suggestions: none. Questions For Authors: Why do you apply this global mask, and why is it applied to all samples in a mini-batch? Does it have efficiency reasons or does it benefit learning? Looking at the masking ratios and the number of epochs, I cannot spot the source of the speedup compared to MAE. Compared to I-JEPA it is even harder to find the edge of OCL. You mention the simplicity of the architecture and no MLP-head; is an effective epoch (samples times views) of OCL that much faster to train than an epoch of MAE or I-JEPA? There are studies about the concentration parameter, but no ablation about the loss itself.
How are the loss and the ability to omit MLP heads related? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful and detailed review. Your positive feedback serves as a great source of encouragement for our team, especially recognizing the strengths: 1) an efficient and simple framework, 2) not relying on specific masking strategies, and 3) a higher level of semantic abstraction than standard MIM methods. Furthermore, we have diligently responded to each of your questions and incorporated your feedback into our manuscript revisions: 1. **(Weakness 1) efficiency regarding epochs and view generation.** Thanks very much for your detailed and thoughtful review. We sincerely apologize for the typo in Table 6; we actually used 800 epochs for pre-training, consistent with Table 7, Table 8, and the description in line 380 of the paper. **We have corrected this typo and double-checked the entire manuscript to make sure there are no further typos.** Besides, we are sorry for the misunderstanding about the efficiency of view generation. **We do take into account the time it takes to generate the two views.** As illustrated in Algorithm 1, we generate the two views in the torch module MiCLAutoencoderViT (code in models_mae.py) with CUDA acceleration, instead of image pre-processing on the CPU as in DINO and MoCo v3. Thus, our method accounts for the efficiency of view generation. **We have clarified this misunderstanding in our revised manuscript.** 2. **(Weakness 2) runtime and training epochs.** Many thanks for your kind suggestions. To better demonstrate the efficiency of our model, we construct the table following MixMAE. https://s2.loli.net/2025/03/31/RfKjqYMtAxoVQHC.png (Same Table as in Reviewer 2yVR, sorry for limited chars.)
**From the table, our method requires only 12.0 G FLOPs, improving on the second-best method, MixMAE (15.6 G), by about 23%.** Concerning DINO and MoCo v3, they leverage student-teacher dual networks to pre-train, leading to higher FLOPs and parameter counts for the pre-training encoder. **We have revised the paper and added this table to illustrate the efficiency of our method more clearly.** 3. **(Weakness 3 & Question 2) the sources of the speedup.** We sincerely appreciate your kind and insightful suggestions. The acceleration of our method primarily stems from two key factors: - First, **the masking strategy significantly reduces the number of tokens processed by the ViT for a single image.** With a ratio of 0.4, only a fraction of the tokens participate in pre-training, dramatically decreasing the computational load of the ViT. - Second, **the contrastive framework eliminates the need for additional modules to reconstruct the image.** Traditional MIM methods and most MIM-CL hybrid approaches employ a transformer decoder for image reconstruction, which incurs substantial computational overhead. In contrast, our OCL requires neither a decoder nor an MLP head, thereby reducing pre-training computation and enhancing efficiency. As shown in the table in Answer 2, **our method requires significantly less training time per epoch than MAE and I-JEPA.** Specifically, OCL achieves 12.0 G FLOPs, undercutting MAE's 17.5 G by approximately 31% through elimination of the decoder. Moreover, I-JEPA's higher FLOPs (29.8 G) result from its dual encoders (context and target) used to generate different views. **We have revised the paper to provide a more detailed explanation of these acceleration mechanisms.** 4. **(Weakness 4) frozen patch layer.** Sincere thanks for your thoughtful and helpful suggestions. We did experience training failures.
However, **as we were more concerned with the efficiency of the model's pre-training**, we simply changed the random seed and retrained. **We will continue to explore solutions to model failures in future research.** 5. **(Question 1) global masking strategy.** Thank you for the valuable suggestions. **The global masking strategy is designed to enhance representation learning by addressing semantic redundancy in images.** As highlighted by MAE and discussed in line 32 of our paper, images inherently carry redundant semantics. Global masking prunes unnecessary patches, forcing the model to abstract high-level semantic patterns. **We have revised the paper to provide a detailed explanation of the global masking strategy.** 6. **(Question 3) the relationship between MLP head and loss.** Thanks for the valuable suggestions. Though distinct in mechanism, both the T-SP loss and the MLP head enhance visual representations. **We have added additional ablation experiments and related discussion to validate the relationship between the MLP head and the T-SP loss.**

| T-SP Loss | MLP Head | LIN | FT |
|--|--|--|--|
| √ | √ | 77.0 | 85.5 |
| × | √ | 72.4 | 85.1 |
| √ | × | 77.9 | 85.8 |
| × | × | 61.3 | 82.6 |

Thanks again for your valuable time and careful review. Your feedback is immensely valuable to us. If you find our response satisfactory, could you please consider helping us improve our rating score?
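For readers unfamiliar with the within-minibatch contrastive objective discussed in this thread, the following is a generic InfoNCE-style sketch of the kind of loss the review describes (the two views of the same image are positives, views of other images in the batch are negatives). It is a standard plain-Python illustration, not the paper's exact T-distributed spherical (T-SP) loss.

```python
from math import exp, log, sqrt

def info_nce(z1, z2, tau=0.1):
    """Generic InfoNCE-style within-minibatch contrastive loss (illustrative,
    not the paper's T-SP loss). z1, z2: lists of B embedding vectors, where
    z1[i] and z2[i] are the two views of image i."""
    def normalize(v):
        n = sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    z1 = [normalize(v) for v in z1]
    z2 = [normalize(v) for v in z2]
    loss = 0.0
    for i, a in enumerate(z1):
        # cosine similarities of view a against every second view in the batch
        logits = [sum(x * y for x, y in zip(a, b)) / tau for b in z2]
        # cross-entropy with the matching view (index i) as the target
        denom = sum(exp(l) for l in logits)
        loss += -log(exp(logits[i]) / denom)
    return loss / len(z1)
```

Aligned view pairs drive the loss toward zero while mismatched pairs inflate it, which is the qualitative behaviour such losses share regardless of the exact similarity metric used.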
Quantum Algorithms for Finite-horizon Markov Decision Processes
Accept (poster)
Summary: This paper presents four quantum algorithms for time-dependent, finite-horizon Markov Decision Processes (MDPs) in both the exact dynamics setting and the generative model setting: 1. In the exact dynamics setting, the algorithm QVI-1 achieves a quadratic speedup in the action space size (A) for computing the optimal policy and V-value. QVI-2 provides a sub-quadratic quantum speedup in the state space (S). 2. In the generative model setting, the algorithms QVI-3 and QVI-4 achieve speedups in terms of A (sub-quadratic) and the estimation error $\epsilon$ (quadratic). Claims And Evidence: The submission claims the correctness and efficiency of the proposed quantum algorithms. Their correctness and complexity bounds are rigorously proved in the manuscript. Meanwhile, the authors also establish a quantum lower bound for finite-horizon MDPs, demonstrating that QVI-3 and QVI-4 are nearly minimax optimal. Methods And Evaluation Criteria: Overall, the proposed methods and evaluation in this submission appear to make sense: - The proposed quantum algorithms, including the input (query) models and the input/output (in the form of pseudo code), are clearly stated. They incorporate standard quantum subroutines such as Quantum Mean Estimation and Quantum Maximum Searching to accelerate existing classical algorithms for time-dependent, finite-horizon MDPs. - The correctness and computational complexity are established through rigorous mathematical proof. Technical overviews of the theoretical results are provided for every quantum algorithm, and they look plausible. Theoretical Claims: I read the technical overview of the quantum algorithms. The quantum accelerations are identified as follows: 1. In QVI-1, the optimal action is obtained by taking the maximum over the whole action space in the Bellman recursion, which is accelerated by Quantum Maximum Search (QMS, Theorem 3.3) 2.
In QVI-2, the quantum speedup is achieved by an improved estimation of $P^T_{h|s,a} \hat{V}_{h+1}$ through the Quantum Mean Estimation with Binary Oracles (QMEBO, Theorem 3.6, proven by the authors) 3. In QVI-3, the Quantum Mean Estimation (QME, Theorem 4.2) is used to improve an $\epsilon$-approximation of $P^T_{h|s,a} \hat{V}_{h+1}$. This can be further accelerated by QMS as in QVI-1. 4. QVI-4 focuses on the computation of Q-values. It adapts the total variance technique (Wang et al., 2021) to the time-dependent, finite-horizon setting. Quantum speedups are achieved by a sequence of QME subroutines. The technical summary is clear and easy to follow. The results appear to be technically solid, although I was not able to verify the proofs in the appendices line by line. Experimental Designs Or Analyses: No numerical experiment is presented in this submission. Supplementary Material: The appendices are technical proofs of the main results. They are well organized. I checked the lower bound part (B.3). The quantum lower bound proof relies on a reduction from finite-horizon MDPs to infinite-horizon MDPs, which makes sense to me. Relation To Broader Scientific Literature: The pursuit of quantum speedups in stochastic control and reinforcement learning has been an active area of research, with finite-horizon MDPs serving as a standard problem in this domain. - This submission investigates several quantum algorithms for time-dependent, finite-horizon MDPs and provides rigorous complexity analyses. These algorithms present evidence of quantum speedups for control and RL problems, which is promising for the entire community. - In addition, the proof of quantum query lower bounds for finite-horizon MDPs advances our understanding of the computational limits of quantum computers in control and RL. Essential References Not Discussed: A few references are highly relevant to quantum accelerations of optimal control/RL but are not discussed by the author.
- The authors claim that "However, the quantum algorithm and analysis there cannot be applied to general finite-horizon MDPs." This paper (https://arxiv.org/abs/2206.04741) proposes amplitude estimation to estimate the value function of a policy in a finite MDP with a finite horizon. - (Cornelissen, 2018) is not the only work that improves policy gradient. This paper (https://arxiv.org/abs/2411.01391) provides a super-quadratic improvement in policy gradient estimation based on the quantum numerical linear algebra (ODE solvers + LCU) technique. Other Strengths And Weaknesses: Strengths: - The quantum lower bounds for finite-horizon MDPs are new results in the literature. - The quantum mean estimation with binary oracles (QMEBO) subroutine is clearly stated and its efficiency is proved (Theorem 3.6). It can be of independent interest in future research. Weaknesses: - The efficient construction of the query input model (Definitions 3.2 and 4.1) appears highly nontrivial. Is it possible that these input models might be more expensive than the quantum algorithms themselves and thereby nullify the quantum speedups? Other Comments Or Suggestions: - Part 1 in Theorem 4.2: please use \left( and \right) in the big-O notation. - While the paper claims the algorithms QVI-3 and QVI-4 achieve the "minimax optimal", this concept is never clearly defined in the main text. It would be nice to discuss the minimax optimality for self-consistency. Questions For Authors: 1. Throughout this paper, I couldn't find a clear explanation of the algorithm name 'QVI.' Could you clarify what 'QVI' stands for? 2. What are the possible applications/extensions of the techniques presented in the paper? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer FX1s for the thoughtful and constructive feedback. Below, we address the reviewer's concerns regarding references, the typo in Theorem 4.2, the definition of "minimax optimal," the clarity of the algorithm name "QVI," the cost of constructing quantum oracles, and the potential applications and extensions of our techniques. The statement in Section 1 that "the quantum algorithm and analysis there cannot be applied to general finite-horizon MDPs" refers to (Naguleswaran et al., 2006), which only focuses on a specific class of MDPs—deterministic shortest path problems—and does not generalize to broader finite-horizon MDPs. We agree that (Wiedemann et al., 2022) addresses general finite-horizon MDPs, and we have properly cited this work in Section 1. However, their algorithm is inefficient, with a quantum sample complexity exponential in the state space for obtaining a near-optimal policy. We also thank the reviewer for recommending [1], which provides a super-quadratic improvement in policy gradient estimation. However, [1] did not directly show how such a method can be applied to solve MDP problems. It would be an interesting direction to explore whether this powerful technique can be used to improve the results in (Cornelissen, 2018) for solving MDP problems. We apologize for the typo in Theorem 4.2 and will correct this in the revised manuscript. Additionally, we acknowledge the lack of clarity in defining "minimax optimal." To clarify, we plan to replace "minimax optimal" with "(asymptotically) optimal" and formally define that an algorithm is (asymptotically) optimal if its query/sample complexity matches the corresponding lower bound up to constant factors [3].
Accordingly, we propose to revise the claim that "**QVI-3** and **QVI-4** are nearly minimax optimal" to "**QVI-3** and **QVI-4** are nearly (asymptotically) optimal (up to log terms) in computing near-optimal V/Q values and policies, provided the time horizon $H$ is a constant." For example, **QVI-3**'s quantum sample complexity in computing near-optimal V values and policies is $\tilde{O}\left( \frac{S \sqrt{ A } H^{3}}{\epsilon} \right)$ (Theorem 4.4) while the quantum lower bound is $\Omega\left( \frac{S\sqrt{ A }H^{1.5}}{\epsilon \log^{1.5}(\epsilon^{-1})} \right)$ (Theorem 4.7), differing by a factor of $H^{1.5}$ and a log term. We apologize for the lack of clarity in the algorithm name "QVI." We plan to revise the sentence "..., we propose a quantum value iteration algorithm QVI-1, ..." in the first paragraph of our summarized contributions in Section 1 to explicitly introduce the term "QVI". "QVI" stands for Quantum Value Iteration, where “VI” refers to the classical Value Iteration algorithm and “Q” indicates our quantum adaptation using subroutines like **QMS** and **QME**. We appreciate the concern about the cost of constructing the quantum oracles in Definitions 3.2 and 4.1. For the quantum generative model in Definition 4.1, as noted in our response to Reviewer Xd2s, **the classical generative model G and the quantum generative model $\mathcal{G}$ have similar costs at the elementary gate-level** if the classical circuit of the classical generative model is accessible. Furthermore, assuming that the classical generative model can be called in constant time and that we have access to quantum random access memory (QRAM) in [2], the time complexities of **QVI-3** and **QVI-4** are the same as the sample complexities of **QVI-3** and **QVI-4** up to log terms, so the reported speedups for **QVI-3** and **QVI-4** remains valid. 
Similarly, the time complexity of **QVI-1** and **QVI-2** is not degraded by the construction of the quantum oracle $O_{\mathcal{QM}}$ (Definition 3.2) either. Specifically, if the classical oracle $O_{\mathcal{M}}$ in Definition 3.1 is a computer program and we have its source code, we can efficiently convert the classical circuit of $O_{\mathcal{M}}$ into a quantum circuit that implements the quantum oracle $O_{\mathcal{QM}}$. Unlike the quantum generative model, the output of the quantum oracle $O_{\mathcal{QM}}$ is not a superposition. Therefore, even without access to QRAM, the time complexities of **QVI-1** and **QVI-2** are the same as the quantum query complexities of **QVI-1** and **QVI-2** up to log terms, as long as the classical oracle $O_{\mathcal{M}}$ can be called in constant time. We thank the reviewer for encouraging more discussion on the applications and extensions of our techniques. Our quantum algorithms have potential use in robotics (e.g., path planning) and operations research (e.g., inventory management). Our theoretical techniques could be extended to partially observable or multi-agent MDPs. [1] Clayton, Connor, et al. "Differentiable Quantum Computing for Large-scale Linear Control.". [2] Giovannetti, Vittorio, Seth Lloyd, and Lorenzo Maccone. "Quantum random access memory.". [3] Cormen, Thomas H., et al. "Introduction to algorithms".
Summary: In this work, the authors propose quantum algorithms for solving time-dependent, finite-horizon Markov Decision Processes (MDPs). The goal is to estimate the optimal policy that maximizes the expected reward over the finite time horizon, given a finite and discrete state and action space. Equivalently, this task can be viewed as maximizing the V-value function, which represents the sum of all future rewards for a given policy and initial state. The authors present quantum algorithms for estimating the optimal policy and value function in two settings: the exact dynamics setting, where the agent has full knowledge of the transition probabilities for all state-action pairs at each time step, and the generative model setting, where the agent can only sample transition states for specific state-action pairs using a generative model. In both cases, the proposed quantum algorithms achieve a polynomial advantage over the best known classical algorithm in terms of query complexity to the oracle giving the transition probabilities. In the exact dynamics setting, the authors leverage the quantum maximum searching algorithm to achieve a quadratic improvement in query complexity, reducing it from O(A) to O(√A), where A is the size of the action space. Additionally, they propose a second algorithm that provides a quadratic speedup with respect to the state space dimension when computing an epsilon-approximation of the optimal policy and value function. In the generative model setting, the proposed quantum algorithms achieve polynomial speedups in query complexity with respect to the action space, time horizon interval, and approximation error for the optimal policy, value function (V-value), and state-action value function (Q-value). These speedups are obtained using the quantum mean estimation algorithm. 
Finally, the authors establish a lower bound for quantum algorithms in the generative model setting, indicating that their proposed algorithms are nearly optimal. Claims And Evidence: To the best of my knowledge, all the claims appear to be well-supported. In particular, the quantum speedups rely on well-established algorithms, such as Quantum Maximum Search and Quantum Mean Estimation, and it seems reasonable that these contribute to a computational advantage. Methods And Evaluation Criteria: The paper does not include any numerical experiments to support its claims, as it is purely theoretical. However, benchmarking the classical and quantum algorithms based on query complexity to the transition probability function seems reasonable. Additionally, the implementation of the quantum oracle for this function appears well-founded, making the comparison fair. Theoretical Claims: I have skimmed the proofs and all seem fine. The claims are consistent with what one would expect based on previous literature. Experimental Designs Or Analyses: The paper does not contain any experiment. Supplementary Material: no Relation To Broader Scientific Literature: The paper discusses its relation to previous work in the introduction, highlighting that quantum algorithms have already been proposed for infinite-horizon problems with time-invariant value functions [3]. In contrast, this work focuses on finite-horizon and time-dependent scenarios. Additionally, the authors explicitly state where the key query advantage arises from the application of previously known algorithms (Quantum Maximum Search [1] and Quantum Mean Estimation [2]). Essential References Not Discussed: I am not sure about essential, but https://arxiv.org/pdf/2212.09328 contains results about comparing finite-horizon approximations of infinite-time settings, which I think are important for this discussion. 
Other Strengths And Weaknesses: Strength: I think the results in the paper are solid and it is an interesting combination of well-known quantum algorithms for MDPs. Weaknesses:
- I am concerned about the novelty of the results. As the authors acknowledge, the use of quantum maximum search and quantum mean estimation for solving MDPs was already introduced in [3] for the infinite-horizon case. However, they do not clearly explain the challenges in adapting these techniques to the finite-horizon setting or what novel contributions they introduced to make this adjustment. Also, Appendix B of https://arxiv.org/pdf/2212.09328 establishes a rather tight relationship between finite and infinite time horizons. Furthermore, if I am not mistaken, that paper and related works actually prove results for finite horizons and then show this also implies good performance for infinite horizons.
- Regarding the time-dependent case, for finite times, one can reduce this to the time-independent case by expanding the state space to encode the timestamp. If one does this and applies previous results, do we get something different?
- The speedup achieved is only polynomial (quadratic) compared to classical algorithms, using rather well-known techniques. Additionally, in the exact dynamics setting, the improvement is only relative to the best-known classical algorithm. Establishing a lower bound for classical algorithms, as was done in the generative model setting, would strengthen the results.
- The presentation of the results lacks clarity. In particular, a high-level overview of the proposed algorithms and their workings would significantly improve the readability and understandability of the paper.

Other Comments Or Suggestions: no other comments Questions For Authors: 1) What are the difficulties of extending the results in [3] to the finite horizon case considered in this paper? 2) Could you establish lower bounds on the query complexity for classical methods in the exact dynamics setting? 
3) Could you provide a high-level explanation of your algorithms in the main text, in addition to the pseudocode? 4) Regarding the time-dependent case, for finite times, one can reduce this to time independent case, by expanding the state space to encode the timestamp. If one does this and applies previous results, do we get something different? Code Of Conduct: Affirmed. Overall Recommendation: 3
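For context on where the classical costs discussed in this review come from, here is a minimal sketch (illustrative only, not code from the paper) of standard backward value iteration for a time-dependent finite-horizon MDP. The inner expectation over S next states and the maximum over A actions are exactly the steps that quantum mean estimation and quantum maximum search are claimed to accelerate:

```python
import numpy as np

def finite_horizon_vi(P, R, H):
    """Classical backward value iteration for a time-dependent finite-horizon MDP.

    P: array of shape (H, S, A, S), P[h, s, a, s'] = transition probability.
    R: array of shape (H, S, A), deterministic rewards.
    Returns optimal values V[h, s] and a greedy policy pi[h, s].
    """
    S, A = P.shape[1], P.shape[2]
    V = np.zeros((H + 1, S))          # terminal condition V[H] = 0
    pi = np.zeros((H, S), dtype=int)
    for h in range(H - 1, -1, -1):    # backward in time
        for s in range(S):
            # Each Q[a] needs an expectation over S next states
            # (the step targeted by quantum mean estimation) ...
            Q = R[h, s] + P[h, s] @ V[h + 1]
            # ... and the best action needs a maximum over A entries
            # (the step where quantum maximum search gives O(sqrt(A))).
            pi[h, s] = int(np.argmax(Q))
            V[h, s] = Q[pi[h, s]]
    return V[:H], pi
```

This makes the per-stage cost structure explicit: H · S Bellman backups, each paying O(S) for the expectation and O(A) for the maximization classically.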
Rebuttal 1: Rebuttal: We appreciate the reviewer’s concern regarding the perceived lack of novelty in our quantum algorithms. While our algorithms leverage **QMS** and **QME** as in prior work (Wang et al., 2021), their analysis for infinite-horizon MDPs cannot be readily applied to time-dependent, finite-horizon MDPs. A key contribution of our work lies in the design and analysis of **QVI-4**. Note that in our setting, the value operator in Definition 2.1 is time-dependent and lacks the contraction property, unlike the infinite-horizon case, which allows iterative policy/value updates via fixed points. This lack of contraction poses significant challenges in designing **QVI-4**. In SolveMdp1 (Wang et al., 2021), the $\epsilon_{k}$-optimal V/Q-values and policy can be directly used to initialize the next epoch $k+1$. This is not feasible in our case. To address this, we propose another initialization strategy in **QVI-4**, setting $V_{k+1,h}^{(0)}= V_{k,h}$ for all $h\in[H]$ and initializing $V_{k,H}= V_{k,H}^{(0)}=\mathbf{0}$. We then use induction on $k$ to prove the correctness of **QVI-4** (Lemma B.4), showing non-trivial technical contributions in extending quantum algorithms to finite-horizon MDPs. We appreciate the reviewer’s question about the lower bound for classical methods in the exact dynamics setting. We can derive a classical lower bound of $\Omega(S^2 A)$ in the exact dynamics setting by adapting the method from [2] to finite-horizon MDPs. Consider two hard instances $M_{1}$ and $M_{2}$ similar to those in [2]. In $M_{1}$, we can derive that the optimal value for $s\in \mathcal{S}\_{U}$ is $V^{\*}\_{h}(s)=\frac{H-1-h}{2}$, while in $M_{2}$, the optimal value for $s\in \mathcal{S}\_{U} \setminus \{ \overline{s} \}$ remains $V^{\*}\_{h}(s)=\frac{H-1-h}{2}$ but $V\_{h}^{\*}(\overline{s})=H-1-h$. 
To achieve $\frac{H-1}{4}$-optimal $V\_{0}$ with high probability, any algorithm must distinguish $M_{1}$ from $M_{2}$, requiring a search for two discrepancies in an array of size $\Omega(S^{2} A)$. Therefore, the classical lower bound for computing an $\epsilon$-optimal $V_{0}$ for the time-independent, finite-horizon MDP is $\Omega(S^{2} A)$ for $\epsilon \in (0,\frac{H-1}{4})$. This implies that the classical lower bound for obtaining an $\epsilon$-optimal policy for the time-dependent, finite-horizon MDP in the exact dynamics setting is $\Omega(S^{2} A)$. This classical lower bound can also be derived using the reduction technique in Appendix B.3. We acknowledge the reviewer’s concern on the clarity of a high-level overview of the proposed algorithms. Sections 3 and 4 already include high-level overviews (also noted by Reviewer FX1s), and we plan to make them more prominent. Besides, **QMEBO** uses binary oracles to encode $P_{h|s,a}$ and $\hat{V}\_{h}$, transfers this information into amplitudes via controlled-rotation unitary operators, and then applies amplitude estimation to compute an estimate of $P_{h|s,a}^{T} \hat{V}\_{h}$. We thank the reviewer for pointing out the missing reference [1]. While [1] studies quantum speedup for policy gradient methods in finite-horizon MDPs and explores connections to infinite-horizon MDPs, our work differs from [1] in three key ways. First, [1] only provides quantum speedups for estimating the gradients of V values. It does not produce a bound on the overall complexity for their algorithms to converge and obtain a near-optimal policy. In contrast, we directly quantify the complexity for computing a near-optimal policy. Second, regarding the approximation between finite-time and infinite-time, note that [1] uses the approximation to solve infinite-horizon MDPs. Perhaps due to this reason, their result (Theorem 3.1 and 4.1) requires the finite-horizon MDP to also have a discount factor away from 1. 
In contrast, our paper is not interested in infinite-horizon MDPs. Instead, we only use them to derive a lower bound. As a result, for our solution to finite-horizon MDPs, we do not require a discount factor away from 1. Therefore, our results are new and more broadly applicable than [1]. We agree that a time-dependent MDP can be converted to a time-independent MDP by expanding the state space to $\mathcal{S}' = \mathcal{S} \times [H]$. However, this transformation does not reduce the complexity. Even including [1], the only known quantum algorithm for general finite-horizon MDPs remains that of (Wiedemann et al., 2022), whose sample complexity to obtain an $\epsilon$-optimal policy is $O(A^{3SH/2}H/\epsilon)$ for the time-dependent setting and $O(A^{3S/2}H/\epsilon)$ for the time-independent setting. Thus, even after reduction, their complexity with the enlarged state space $S'$ remains $O(A^{3SH/2}H/\epsilon)$, which is inferior to our algorithms. [1] Jerbi, Sofiene, et al. "Quantum policy gradient algorithms.". [2] Chen, Yichen, and Mengdi Wang. "Lower bound on the computational complexity of discounted markov decision problems.". --- Rebuttal Comment 1.1: Comment: I thank the authors for a strong and clear response, and I am inclined to raise the score. I have an additional question. The authors state: "We thank the reviewer for pointing out the missing reference [1]. While [1] studies quantum speedup for policy gradient methods in finite-horizon MDP and explores connections to infinite-horizon MDP, our work differs in three key differences. First, [1] only provides quantum speedups for estimating the gradients of V values. It does not produce a bound on the overall complexity for their algorithms to converge and obtain a near-optimal policy. In contrast, we directly quantify the complexity for computing a near-optimal policy. Second, regarding the approximation between finite-time and infinite-time, note that [1] uses the approximation to solve infinite-horizon MDP. 
Perhaps due to this reason, their result (Theorem 3.1 and 4.1) requires the finite-horizon MDP to also have a discount factor away from 1. In contrast, our paper is not interested in infinite-horizon MDP. Instead, we only use it to derive a lower bound. As a result, for our solution to finite-horizon MDP, we do not require a discount factor away from 1. Therefore, our results are new and more broadly applicable ..." Is the estimation of gradients not the dominating cost in the process? I appreciate the categorical difference between unit discount and arbitrary close to unit. But is this an important difference beyond this? --- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer LjjN for the constructive feedback and for the positive recognition of our work throughout the review process. We are especially grateful for the reviewer’s decision to raise the score. Below, we address the additional questions regarding (i) the computational cost of estimating the gradient of the value function in policy gradient algorithms, and (ii) the distinction between unit discount and near-unit discount settings. While we agree that estimating the gradient of V value in the policy gradient methods is computationally expensive, we emphasize that their convergence properties are also important. Note that the underlying optimization problem is typically non-convex (see, for example, Lemma 11.5 in [2]). In that case, these algorithms may only converge to local optima or stationary points, which may be far from the global optimum and thus fail to obtain a near-optimal policy. In contrast, our QVI algorithms are designed to consistently obtain a near-optimal policy, offering a significant advantage in solution quality. Second, even reaching a stationary point with policy gradient methods remains computationally intensive. 
Specifically, we believe the overall sample complexity of their algorithms with respect to the error term $\epsilon$ is much worse than that of our **QVI** algorithms. As shown in Lemma 11.8 of [2], for a $\beta$-smooth value function $V_{\pi_\theta}$ with stochastic gradients (variance bounded by $\sigma^2$), the stochastic gradient ascent (SGA) algorithm converges to an $\epsilon$-approximate stationary point in expectation after $K\geq O\left( \frac{\sigma^{2}}{\epsilon^2} \right)$ iterations. As noted in [2], the variance of the stochastic gradients is normally huge in practice, which further increases the computational burden. When combined with the quantum techniques for estimating the gradient of $V_{\pi_{\theta}}$ in [1], the total quantum sample complexity of obtaining a stationary point for a finite-horizon MDP with a reward function bounded by $|R|_{\max}=1$, time horizon $H$, and discount factor $\gamma\in(0,1)$ scales as $O\left( \sqrt{ d } \frac{D H^{3}\sigma^{2}}{\epsilon^{3} (1-\gamma)} \right)$ (numerical gradient estimation) or $O\left( d ^{\xi(p)} \frac{B_{p} H^{2} \sigma^{2}}{\epsilon^{3} (1-\gamma)} \right)$ (analytical gradient estimation). Note that both quantum sample complexities show an $O\left( \frac{1}{\epsilon^3} \right)$ dependence on the error term $\epsilon$. This highlights the significant computational cost of policy gradient methods, even for achieving a suboptimal stationary point. In contrast, **QVI-3** and **QVI-4** demonstrate a much more favorable $O\left( \frac{1}{\epsilon} \right)$ dependence on $\epsilon$, which implies that our algorithms require far fewer quantum samples to achieve a high-quality near-optimal policy, especially as the desired precision increases ($\epsilon \rightarrow 0$). We thank the reviewer for the insightful follow-up question regarding the difference between the unit discount and arbitrarily close-to-unit discount settings. 
Although we agree with the reviewer that there should not be a significant theoretical distinction between these two settings for finite-horizon MDPs, the results in [1] show an $O\left( \frac{1}{1-\gamma} \right)$ dependence on $\gamma$, which explodes as $\gamma \to 1$. Specifically, in a finite-horizon MDP with a fixed horizon $H$, the cumulative reward is computed over a finite number of steps, and the value function is inherently bounded by the horizon $H$ and the reward function (bounded by $|R|_{\max} = 1$ in our case). As a result, the introduction of a discount factor $\gamma$ is not necessary to ensure the boundedness of the value function, unlike in the infinite-horizon MDP setting, where $\gamma < 1$ is typically introduced to keep the infinite sum of discounted rewards finite. In fact, setting $\gamma=1$ is more common for finite-horizon MDPs in many applications, such as robotics, game playing, and resource management, where the goal is to maximize total reward over a fixed time period, e.g., completing a task within a set number of steps. Note that our **QVI** algorithms can easily be extended to solve finite-horizon MDPs with a discount factor $\gamma<1$ without changing the quantum query/sample complexity. In contrast, as discussed above, the quantum sample complexities of reaching a stationary point for a finite-horizon MDP using the policy gradient method in [1] show an $O\left( \frac{1}{1-\gamma} \right)$ dependence on $\gamma$. This implies that their result requires the finite-horizon MDP to have a discount factor $\gamma\in(0,1)$ and incurs significantly higher sample complexity as $\gamma \to 1$. Therefore, our results are more broadly applicable than [1]. [1] Jerbi, Sofiene, et al. "Quantum policy gradient algorithms.". [2] Agarwal, Alekh, et al. "Reinforcement Learning: Theory and Algorithms".
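The boundedness claim above can be made explicit with a one-line derivation (our own sketch, using the convention from the discussion that rewards are bounded by $|R|_{\max}=1$ and the stage-$h$ value sums rewards over steps $h,\dots,H-1$):

```latex
V^{\pi}_{h}(s)
  \;=\; \mathbb{E}\!\left[\, \sum_{t=h}^{H-1} r_t(s_t, a_t) \;\middle|\; s_h = s \right]
  \;\le\; \sum_{t=h}^{H-1} |R|_{\max}
  \;=\; H - h
  \;\le\; H,
\qquad\text{whereas}\qquad
\sum_{t \ge 0} \gamma^{t}\, |R|_{\max} \;=\; \frac{|R|_{\max}}{1-\gamma}.
```

So the finite-horizon value is already bounded by $H$ with $\gamma = 1$, while the infinite-horizon sum on the right is finite only when $\gamma < 1$, which is why the discount factor is dispensable in the finite-horizon setting.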
Summary: In this submission, the authors presented quantum algorithms for finite-horizon Markov decision processes (MDPs). These quantum algorithms cover MDPs in the exact dynamics setting and in the generative model setting. Polynomial speedups are achieved. In addition, lower bounds on the query complexity for the generative model setting are also proved in this submission, although not tight. The quantum algorithms are based on well-known quantum techniques including quantum maximum finding and quantum mean estimation. ## update after rebuttal I appreciate the authors' rebuttal comments to address my concerns. Most of my concerns have been addressed and I have increased my score. Claims And Evidence: Theoretical proofs are provided for the claims. Methods And Evaluation Criteria: N/A Theoretical Claims: I checked the proofs and they appear to be correct. Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: The problems this submission studies may find applications in quantum machine learning. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I appreciate the authors' effort on studying quantum speedups for an important machine learning problem. In the generative model setting, the classical lower bounds presented in this submission show a clear separation between quantum and classical algorithms. The quantum lower bounds rule out the possibility of exponential speedups. There are a few weaknesses: 1. Technically, the quantum algorithm is presented in a form that simply replaces the maximum finding and mean estimation of the classical algorithm with well-known quantum subroutines. In this sense, these quantum algorithms lack technical contribution in both the quantum and classical regimes. 2. Although it is nice to see the lower bounds, they are still not tight. (But I think this is a minor weakness.) 3. I am not fully convinced about the quantum generative model shown in Definition 4.1. 
In general, QSample is considered a harder model than the classical sampling model. So it might not be fair to compare the quantum generative model with the classical model. It would be nice if some justification for this comparison could be provided. Other Comments Or Suggestions: N/A Questions For Authors: How to justify the comparison between the quantum and classical generative models, particularly for MDPs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer Xd2s for the detailed feedback on our submission. Below, we address the reviewer’s concerns regarding the technical contributions and the justification of the comparison between quantum and classical generative models, while proposing feasible revisions to strengthen our submission. We acknowledge the reviewer’s concern that our quantum algorithms may appear to simply replace classical subroutines with well-known quantum ones, such as quantum maximum searching **QMS** (Durr & Hoyer, 1999) and quantum mean estimation **QME** (Montanaro, 2015). However, we would like to highlight the significant technical contributions of our work, which go beyond straightforward substitutions. First, a key technical contribution of our work is the development of the Quantum Mean Estimation with Binary Oracles (**QMEBO**, Algorithm 3) subroutine in the exact dynamics setting (Section 3.2). QMEBO adapts quantum mean estimation for binary oracles, achieving a speedup in the state space $S$ (from $O(S)$ to $O(\sqrt{ S } /\epsilon)$) for near-optimal policies. This is a non-trivial adaptation that enables the application of quantum techniques to MDPs. Second, while our algorithms do leverage the well-known **QMS** and **QME**, it is nontrivial to infuse these two existing quantum subroutines into existing reinforcement learning frameworks. For example, as pointed out in Section 4.2, when integrating the total variance technique and **QME**, we cannot directly apply **QME2** in the same way as its classical counterpart, because **QME2** requires prior knowledge of $\sigma\_{k,h}^{s,a}$ and an upper bound on $\sigma\_{k,h}^{s,a}$ to estimate $\mu\_{k,h}^{s,a}$ to an error of $\frac{\epsilon \sigma\_{k,h}^{s,a}}{2H^{1.5}}$. 
To address this, we propose to use **QME1** to obtain an estimate $(\hat{\sigma}\_{k,h}^{s,a})^{2}$ of $(\sigma\_{k,h}^{s,a})^{2}$ with an error $4b$ and use **QME2** to estimate $\mu\_{k,h}^{s,a}$ with an error proportional to $\overline{\sigma}\_{k,h}^{s,a}=\sqrt{ (\hat{\sigma}\_{k,h}^{s,a})^{2}+4b}$. This integration demonstrates the technical depth required to adapt quantum subroutines to our setting, contributing to the field of quantum reinforcement learning. We thank the reviewer for raising the important question about the fairness of comparing quantum and classical generative models for MDPs. In fact, the comparison is fair because implementing the quantum generative model $\mathcal{G}$ (Definition 4.1) has a comparable overhead to the classical generative model $G$ (Eq. 3, Section 4). In the classical generative model setting, it is assumed that we have access to a classical generative model/simulator $G$ that draws samples from the distribution $P_{h|s,a}$. The classical generative model makes particular sense when the environment is a computer program, and thus we can use the same computer program to simulate the drawing of samples. Specifically, if the simulator is a computer program and we have its source code, then for the classical generative model $G$ we can produce a Boolean circuit $\mathcal{C}$ that acts as the simulator, i.e., draws samples from the distribution $P_{h|s,a}$. For the quantum generative model, we use the fundamental result in quantum computation ([1], Nielsen & Chuang, 2010, Section 1.4.1) that any classical circuit $\mathcal{C}$ with $N$ logic gates can be efficiently converted into a quantum circuit $\mathcal{Q}$ with $O(N)$ logic gates, capable of computing on quantum superpositions of inputs. Moreover, the conversion is efficient and can be achieved by simple conversion rules at the logic gate level by using the Toffoli gate. 
The authors in (Wang et al., 2021, arXiv:2112.08451) confirmed this by explicitly constructing the quantum generative model for infinite-horizon MDPs from a circuit of the corresponding classical generative model in Appendix A. We believe such a construction method can be readily extended to the case of finite-horizon MDPs. Thus, the classical generative model $G$ and the quantum generative model $\mathcal{G}$ **have comparable costs at the elementary gate level**, making the comparison between the quantum sample complexity with the classical sample complexity fair. Furthermore, the time complexities of **QVI-3** and **QVI-4** are the same as the quantum sample complexities of **QVI-3** and **QVI-4** up to log terms under the assumptions that the classical generative model can be called in constant time and that we have access to quantum random access memory (QRAM) proposed in [2]. Roughly speaking, QRAM is a memory that stores the classical values and that allows them all to be read at once in quantum superposition. To address the reviewer's concern, we propose to add a paragraph in Section 4 to explicitly justify the comparison between the quantum and classical generative models. [1] Bennett, Charles H. "Logical reversibility of computation.". [2] Giovannetti, Vittorio, Seth Lloyd, and Lorenzo Maccone. "Quantum random access memory.".
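To illustrate the gate-level conversion invoked here (a textbook construction, not code from the paper): the Toffoli gate maps basis states (a, b, c) to (a, b, c XOR ab), so with a zeroed ancilla it computes AND reversibly, and it is its own inverse. This is the building block behind turning a classical Boolean circuit with N gates into a quantum circuit with O(N) gates:

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) on classical basis states: flips c iff a = b = 1."""
    return a, b, c ^ (a & b)

def reversible_and(a, b):
    """Compute a AND b reversibly, using one Toffoli and a zeroed ancilla."""
    _, _, out = toffoli(a, b, 0)   # ancilla starts at 0 and ends at a & b
    return out
```

Since each irreversible classical gate costs O(1) Toffoli gates under this embedding, the quantum circuit has size within a constant factor of the classical one, which is the sense in which the two generative models have comparable gate-level cost.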
Summary: This paper explores quantum algorithms designed to improve the efficiency of solving finite-horizon Markov Decision Processes (MDPs) in two settings: exact dynamics and generative models. The main contribution is the introduction of quantum value iteration (QVI) algorithms. Claims And Evidence: The claims made in the paper are generally supported by formal proofs, especially in the complexity and correctness of the proposed quantum algorithms. There are no immediately obvious problematic claims. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are suited to the problem. Theoretical Claims: The theoretical claims in the paper, particularly concerning the correctness and complexity of the quantum algorithms, appear solid. The proofs provided for the algorithms seem logically consistent. Experimental Designs Or Analyses: The manuscript does not include explicit experimental evaluations but focuses on the theoretical aspects of the algorithms. This is appropriate for the topic, as the aim is to establish the theoretical foundations of quantum speedups in MDPs. The only concern might be the lack of empirical validation for the proposed algorithms, though this is often the case for early theoretical work in quantum algorithms. Supplementary Material: There is no supplementary material, only an appendix containing detailed proofs and further technical details about the quantum algorithms. Relation To Broader Scientific Literature: The paper connects well with prior work in quantum reinforcement learning and quantum algorithms for MDPs. Essential References Not Discussed: NA Other Strengths And Weaknesses: The paper is original in its focus on finite-horizon MDPs and provides a clear exploration of quantum speedups in this domain. The algorithms are innovative and mathematically rigorous. 
One weakness, however, is the lack of empirical validation of the proposed algorithms in real-world applications, which could have provided a clearer understanding of their practical impact. Other Comments Or Suggestions: The paper could benefit from a clearer distinction between the classical methods it compares against. While it mentions improvements over classical algorithms, specific examples or comparisons to concrete classical algorithms could strengthen the case for quantum superiority. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer tc3r for the thorough and constructive feedback on our submission. Below, we address the reviewer’s concerns and provide clarifications to strengthen our submission. We acknowledge the reviewer’s concern regarding the lack of empirical validation in our current submission. As a primarily theoretical contribution, our work focuses on establishing the correctness and complexity of the proposed quantum value iteration (QVI) algorithms for finite-horizon MDPs in two distinct settings, as evidenced by the formal proofs provided in the appendix. We recognize the importance of empirical evaluation in demonstrating practical impact; however, the current state of quantum hardware poses significant challenges for implementing and testing quantum algorithms at scale. Specifically, the limited qubit counts, high error rates, and restricted access to large-scale quantum computers make it difficult to empirically validate the proposed speedups at this stage. We are committed to pursuing empirical validation in future work. In addition, to address the reviewer’s suggestion to make the quantum speedup more concrete, we propose to include an illustrative example of a toy MDP, such as the Inventory Management Problem, in Sections 3 and 4 to demonstrate the difference in query complexity between classical value iteration and the QVI algorithms.
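A rough sketch of the kind of toy comparison proposed here (the inventory-problem sizes below are hypothetical, constants and log factors are suppressed, and only the O(A) to O(sqrt(A)) improvement from quantum maximum search is modeled, not the full QVI complexities):

```python
from math import ceil, sqrt

def classical_vi_queries(S, A, H):
    """Transition-probability queries for exact backward value iteration:
    each of the H * S * A Bellman backups reads S next-state probabilities."""
    return H * S * A * S

def qms_vi_queries(S, A, H):
    """Same backup schedule, but the maximum over A actions is located with
    quantum maximum search, costing roughly sqrt(A) action evaluations
    instead of A (constants suppressed)."""
    return H * S * ceil(sqrt(A)) * S

# Hypothetical toy inventory problem: stock levels as states, order sizes as actions.
S, A, H = 100, 64, 20
print(classical_vi_queries(S, A, H))   # classical query count
print(qms_vi_queries(S, A, H))         # with the sqrt(A) speedup: 8x fewer here
```

With these illustrative sizes the classical count is 12,800,000 against 1,600,000 for the quantum variant, a factor of sqrt(64) = 8, which is the kind of concrete contrast such a worked example could convey.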
Visual and Domain Knowledge for Professional-level Graph-of-Thought Medical Reasoning
Accept (spotlight poster)
Summary: The paper introduces a novel dataset specifically designed for professional-level medical reasoning in medical visual question answering. It leverages a decade-long collection of MRI and clinical data related to Hypoxic-Ischemic Encephalopathy (HIE), enriched with expert annotations and insights. The authors generate clinical question-answer pairs and MRI interpretations to facilitate comprehensive diagnosis and neurocognitive outcome prediction. Furthermore, the paper proposes the innovative Clinical Graph of Thoughts (CGoT) model that integrates domain-specific medical knowledge with LVLMs. The reported results, including a 15% absolute gain on key neurocognitive outcome tasks, underscore the dataset’s potential and the model’s promising performance. Claims And Evidence: The results presented in the paper convincingly support the conclusions drawn by the authors. The experimental findings are robust and clearly demonstrate that the proposed approach leads to significant improvements over baseline methods. Detailed evaluations on the dataset illustrate that the clinical reasoning and predictions are well-grounded in the data. The evidence provided aligns with the claims, ensuring that the novel contributions are both meaningful and reproducible. Overall, the alignment between claims and supporting experimental outcomes is strong and well-articulated. Methods And Evaluation Criteria: The methods and evaluation criteria used in this paper are well-suited for the problem of clinical diagnostic reasoning. The paper describes a thorough experimental setup and uses appropriate metrics to assess performance, ensuring that the impact of the proposed methods can be reliably measured. The benchmark dataset, constructed from real clinical data, provides a realistic and challenging testbed for evaluating diagnosis and prognosis tasks. The inclusion of both visual and clinical data in the evaluation strengthens the overall design. 
In summary, the approach is methodologically sound and the evaluation criteria are fitting for the intended application. Theoretical Claims: The paper does not involve formal theoretical proofs or derivations, which is appropriate given its focus on clinical application and experimental demonstration. Experimental Designs Or Analyses: The experimental design is well-constructed and thoroughly detailed, providing confidence in the validity of the analyses performed. The paper clearly explains the procedure for data collection, annotation, and the subsequent generation of clinical question-answer pairs. The evaluation is comprehensive, with proper consideration given to both visual interpretation and neurocognitive outcome prediction tasks. Each experiment is designed to test specific aspects of the proposed Clinical Graph of Thoughts (CGoT) model, and the results are transparently presented. Overall, the experimental design and analysis section is robust and leaves little room for ambiguity regarding the scientific claims. Supplementary Material: The supplementary material has been carefully reviewed and does not raise any issues. Relation To Broader Scientific Literature: The paper makes a significant contribution by providing a benchmark that is closely aligned with real-world clinical applications. By incorporating a dataset sourced from relevant clinical settings and focusing on professional-level medical reasoning, the work fills an important gap in the literature on medical visual question answering. It relates well to prior studies in clinical diagnosis while also expanding the scope by integrating both imaging and clinical data for neurocognitive outcome prediction. The proposed CGoT model builds upon existing concepts in LVLMs and adapts them with domain-specific insights, further positioning the work within the broader scientific discourse. Overall, the paper successfully bridges the gap between academic research and clinical practice. 
Essential References Not Discussed: No Other Strengths And Weaknesses: One potential concern is that the paper mentions the dataset is sourced from Massachusetts General Hospital, which might conflict with the anonymity requirements of a double-blind review process. This detail could inadvertently reveal the institution behind the dataset, thereby compromising the anonymity of the submission. Aside from this, the paper is strong in its methodological design and clinical relevance. The combination of long-term data collection, expert annotations, and integrated clinical reasoning within the model is highly commendable. Addressing the anonymity issue explicitly would help mitigate any ethical or procedural concerns. Other Comments Or Suggestions: Figure 3 could be improved as the current design allows for some overlap between images and text, which hampers clarity. A clearer layout that avoids any overlap will improve both readability and the overall presentation of the results. The authors might consider revising the figure with better spacing or annotations to ensure that all visual elements are distinct and easy to interpret. Enhancing the visual quality here would further support the clarity of the paper's contribution. Overall, this is a minor presentation issue that, once fixed, will polish an otherwise well-structured paper. Questions For Authors: The paper mentions that domain prompts are defined by doctors with varying years of experience, specifically distinguishing between low-experience and high-experience physicians. Could you elaborate on whether this difference in clinical expertise leads to significant performance variations or impacts the model’s outputs? How have you ensured that the variation in clinical experience among the doctors does not bias the dataset or the evaluation of the Clinical Graph of Thoughts (CGoT) model? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback; we have addressed your concerns below. >Q1. One potential concern is that the paper mentions the dataset is sourced from Massachusetts General Hospital, which might conflict with the anonymity requirements of a double-blind review process. This detail could inadvertently reveal the institution behind the dataset, thereby compromising the anonymity of the submission. Aside from this, the paper is strong in its methodological design and clinical relevance. The combination of long-term data collection, expert annotations, and integrated clinical reasoning within the model is highly commendable. Addressing the anonymity issue explicitly would help mitigate any ethical or procedural concerns. Thanks for recognizing the value of our dataset and method. We further address the anonymity issue here. While the dataset used in this study originates from Massachusetts General Hospital (MGH) and was collected and shared under appropriate IRB approvals, **this does not imply that the research was conducted by or on behalf of MGH; it only means that we were able to collect HIE-specific data from one of the best hospitals in the world, with world-leading clinical expertise in HIE.** The use of the MGH dataset was independent of any institutional affiliation and does not reveal the authors' identities. Moreover, before submission, we carefully reviewed the manuscript to ensure that any potentially identifying references to the authors' institutions were removed or anonymized to comply fully with double-blind review requirements. We will restore the appropriate acknowledgments and dataset provenance upon acceptance, as per standard publishing practice. >Q2. Figure 3 could be improved as the current design allows for some overlap between images and text, which hampers clarity. A clearer layout that avoids any overlap will improve both readability and the overall presentation of the results. 
The authors might consider revising the figure with better spacing or annotations to ensure that all visual elements are distinct and easy to interpret. Enhancing the visual quality here would further support the clarity of the paper's contribution. Overall, this is a minor presentation issue that, once fixed, will polish an otherwise well-structured paper. We thank the reviewer for the thoughtful suggestion regarding Figure 3. We agree that improving the layout and removing the overlap between images and text will enhance the clarity and readability of the figure. In the revised version, we will update Figure 3 to ensure that all visual elements are clearly separated, with improved spacing and annotations. We appreciate the reviewer's feedback on this presentation issue. >Q3. The paper mentions that domain prompts are defined by doctors with varying years of experience, specifically distinguishing between low-experience and high-experience physicians. Could you elaborate on whether this difference in clinical expertise leads to significant performance variations or impacts the model’s outputs? How have you ensured that the variation in clinical experience among the doctors does not bias the dataset or the evaluation of the Clinical Graph of Thoughts (CGoT) model? The initial annotations were performed by a junior yet experienced fellow and subsequently reviewed through consensus checks with senior experts. This process resulted in a single, unified annotation set. By consolidating annotations through expert agreement and consensus, we aimed to minimize inter-reader variability and reduce potential bias introduced by individual experience levels in both the dataset and the evaluation of the CGoT model. We will include more details of the annotation protocol in the revised manuscript to enhance transparency.
Summary: The authors introduce the HIE-Reasoning dataset and a Clinical Graph of Thought (CGoT) model for professional-level Medical Visual Question Answering (MVQA) focused on Hypoxic-Ischemic Encephalopathy (HIE). The dataset, built from a decade of MRI and clinical data, includes 749 expert-annotated question-answer pairs and aims to simulate complex clinical reasoning. CGoT integrates visual and textual clinical knowledge into LVLMs, outperforming baselines by ~15% on neurocognitive outcome prediction. Evaluations reveal limitations in existing LVLMs for such tasks. Claims And Evidence: - The HIE-Reasoning dataset is a pioneering effort, shifting MVQA from basic perception to clinically relevant reasoning, with tasks like neurocognitive outcome prediction. Methods And Evaluation Criteria: - CGoT’s graph-of-thought approach, mimicking clinical workflows, creatively decomposes complex tasks into manageable steps, enhancing interpretability and performance. - Fig. 1 contrasts existing MVQA with HIE-Reasoning but lacks examples of “clinically irrelevant” questions for clarity. - Sec. 5.2: Med-Flamingo’s failure is noted, but no discussion of why or how CGoT avoids similar pitfalls (e.g., hallucination). Theoretical Claims: N/A Experimental Designs Or Analyses: - The dataset’s decade-long curation and expert validation ensure high clinical fidelity, while CGoT’s significant gains (e.g., 71.73% vs. 56.60% on outcome prediction) validate its efficacy. - Ablation studies (Tables 3, 4) lack depth—e.g., no analysis of visual knowledge components (ADC vs. $Z_{ADC}$) or alternative reasoning structures beyond task omission. How robust is CGoT to variations in graph design? - Sec. 4.2.1: How are $Z_{ADC}$ thresholds (-2) justified beyond citing Bao et al. (2023)? Sensitivity to this choice is unexplored. Supplementary Material: N/A Relation To Broader Scientific Literature: - Good to include recent work, but miss some related evaluation work [1,2]. 
[1] Yan Q, He X, Yue X, et al. Worse than random? An embarrassingly simple probing evaluation of large multimodal models in medical VQA. arXiv preprint arXiv:2405.20421, 2024. [2] Xia P, Chen Z, Tian J, et al. Cares: A comprehensive benchmark of trustworthiness in medical vision language models. Advances in Neural Information Processing Systems, 2024, 37: 140334-140365. Essential References Not Discussed: As said in Relation To Broader Scientific Literature Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable suggestions and address the concerns below within 5000 characters.

### Q1. Fig. 1 lacks examples of "clinically irrelevant" questions.

"Clinically irrelevant questions" include general or superficial queries about images, such as modality ("What type of scan is this?") or organ identification ("What organ is shown?"). Although valid in general VQA, **these questions lack direct clinical relevance or diagnostic utility**.

### Q2. How does CGoT avoid hallucination pitfalls like Med-Flamingo?

- **Structured Reasoning Pathway:** CGoT decomposes tasks into clinically meaningful subtasks aligned with expert workflows, **increasing interpretability and traceability**.
- **Clinical Knowledge Grounding:** Reasoning steps explicitly use specialist-curated clinical knowledge (e.g., ADC maps, injury scores), **anchoring predictions in verifiable evidence**.
- **Modular Task and Output Verification:** Each subtask undergoes ground-truth validation, reducing hallucinations via fine-grained feedback. Intermediate outputs (injury scores, consistency signals) allow both users and the model itself to validate predictions and detect inconsistencies early in the pipeline.

### Q3. Ablation studies on visual knowledge components (ADC vs. ZADC).

**Table 3-1. Ablation study on visual knowledge components**

| Raw ADC | ZADC | Brain Anatomy | Lesion Grading | Lesion Anatomy | Rare Locations | MRI Injury Score | Neuro Outcome | Interpretation Summary |
|-|-|-|-|-|-|-|-|-|
|✔️|✔️|✔️| 62.41%, 0.0703 | 43.57%| 41.47%|49.62%| 71.73%| 53.68%|
|| ✔️| ✔️| 58.64%, 0.0849 | 42.78%| 41.22%| 47.37%| 68.11%| 51.07%|
|✔️|| ✔️| 46.62%, 0.1152 | 26.96%| 25.10%| 30.83%| 60.44%| 34.55%|
|✔️| ✔️| | 62.41%, 0.0703 | 39.90%| 37.98% | 39.85%| 64.05%| 47.97%|

Each visual knowledge type (ADC, ZADC, brain anatomy) plays a complementary role: raw ADC provides signal details, ZADC identifies abnormal regions, and brain anatomy supplies anatomical priors. 
Removing any component degrades performance, but removing ZADC has the most critical effect, as it provides the probability of abnormal brain regions—which is crucial for MRI injury interpretation.

### Q4. Justification and sensitivity of ZADC (–2).

- **Justification (z < –2):** The threshold of –2 aligns with clinical conventions and prior studies on HIE abnormal-region probabilities [A], indicating ADC values below two standard deviations from the normal atlas—often interpreted as abnormally reduced diffusion in neonatal HIE ADC maps—and serves as a marker for potential brain injury regions.

**Table 3-2. Ablation study on varying ZADC threshold values**

| Threshold | Lesion Grading | Lesion Anatomy | Rare Locations | MRI Injury Score | Neuro Outcome | Interpretation Summary |
|--|--|--|--|--|--|--|
| z<–1.8| 58.64%, 0.0782|43.04%| 41.42%| 47.14%| 68.23%| 51.42%|
| z<–2.2| 60.90%, 0.0867|42.58%| 38.19%| 37.59%| 70.93%| 50.67%|
| z<–2| 62.41%, 0.0703|43.57%| 41.47%| 49.62%| 71.73%| 53.68%|

Across ZADC threshold variations, CGoT outperforms baselines, demonstrating robust and effective performance. The drop at z < −2.2 in MRI injury scoring stems from an overly strict threshold that misses mild injuries. NRN includes mild cases (0 and 1), which are often excluded by this threshold, leading to missed low-grade injuries. z < −2 better captures these signals, yielding optimal performance.

### Q5. Alternative reasoning structures.

**Clinically Grounded.** CGoT is predefined and grounded in well-established diagnostic and prognostic workflows used in neonatal care. Its structure captures a real clinical reasoning pipeline [A,B], where each edge and reasoning step reflects real-world clinical logic. Modifying or removing these edges would compromise both interpretability and clinical validity. **Quantitative Analysis.** Table 4 (main paper) demonstrates that omitting intermediate tasks significantly reduces neurocognitive outcome prediction accuracy, confirming all subtasks are essential. 
Alternative structures (e.g., bypassing injury scoring or directly predicting outcomes from lesions) resulted in lower accuracy and clinical implausibility (Table 4). Intermediate reasoning nodes and current reasoning structures are necessary; the model is robust and tolerant to minor errors in these steps, as shown in response to Reviewer #fHCe (Tables 2-1 and 2-2). Future explorations of adaptive structures must maintain clinical interpretability. Ablation studies on node features (Tables 3-1, 3-2) confirm that all types of knowledge inputs contribute meaningfully to performance, reinforcing the thoughtful and necessary design of CGoT. [A] Mining multi-site clinical data to develop machine learning MRI biomarkers: application to neonatal hypoxic ischemic encephalopathy. 2019. [B] NICHD Magnetic Resonance Brain Imaging Score in Term Infants With HIE: A Secondary Analysis of a Randomized Clinical Trial. ### Q6. Add related evaluation work [1,2]. We will add and discuss the new related work in the later version. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. It has addressed most of my comments and I will maintain the original score.
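As background on the ZADC thresholding discussed in Q4 of the rebuttal above, the z < −2 rule amounts to flagging voxels whose ADC value lies more than two standard deviations below a normal atlas. Below is a minimal sketch of that thresholding step; the array names and shapes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def zadc_abnormal_mask(adc, atlas_mean, atlas_std, threshold=-2.0):
    """Flag voxels whose ADC falls more than |threshold| standard deviations
    below a co-registered normal atlas (a proxy for reduced diffusion).

    `adc`, `atlas_mean`, `atlas_std` are assumed to be arrays of identical
    shape; in practice they would be registered 3-D volumes.
    """
    z = (adc - atlas_mean) / atlas_std  # voxel-wise z-score (ZADC)
    return z < threshold                # boolean mask of candidate injury

# Toy 1-D "volume": one voxel sits far below the atlas mean.
adc = np.array([1.00, 0.95, 0.40, 1.05])
mask = zadc_abnormal_mask(adc, atlas_mean=1.0, atlas_std=0.1)
print(mask)  # only the 0.40 voxel is flagged (z = -6 < -2)
```

Loosening the threshold (e.g., −1.8) grows the mask and raising its magnitude (e.g., −2.2) shrinks it, which matches the sensitivity behavior reported in Table 3-2.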
Summary: The paper introduces HIE-Reasoning, a professional-level medical visual question answering (MVQA) benchmark focused on neonatal Hypoxic-Ischemic Encephalopathy (HIE). The authors propose the Clinical Graph-of-Thought Model (CGoT), which integrates visual and clinical domain knowledge into a structured reasoning framework, significantly enhancing the interpretability and accuracy of clinical decisions and prognoses. Claims And Evidence: See Strengths And Weaknesses part Methods And Evaluation Criteria: See Strengths And Weaknesses part Theoretical Claims: See Strengths And Weaknesses part Experimental Designs Or Analyses: See Strengths And Weaknesses part Supplementary Material: None Relation To Broader Scientific Literature: See Strengths And Weaknesses part Essential References Not Discussed: See Strengths And Weaknesses part Other Strengths And Weaknesses: **Strengths** 1. This paper provides a new HIE-Reasoning benchmark tailored for professional-level medical reasoning, which consists of six MRI-related tasks. 2. This paper introduces a CGoT framework, which integrates clinical knowledge and clinical reasoning steps with LVLMs, enhancing both model performance and transparency. 3. The paper is well-structured and easy to follow. **Weaknesses** 1. This paper lacks an analysis of the computational complexity, efficiency, and resource requirements of the proposed CGoT compared to SOTA baselines, including the inference time, memory usage, and API costs. This would be helpful in assessing its scalability and practical applicability. 2. The performance of CGoT on more complex tasks (e.g., predicting 2-year neurocognitive outcomes) is heavily dependent on the results of previous tasks (e.g., MRI injury scores) (shown in Table 4). In other words, solving these more complex tasks requires first addressing related prerequisite tasks, which reduces efficiency and limits practical applicability. 3. The paper lacks comparisons with important baselines. 
The proposed CGoT incorporates clinical knowledge into the LVLM, guiding it to address more complex medical questions via a clinician-like diagnostic process. However, existing methods such as MDAgent [1] and MedAgents [2] also emulate real-world medical decision-making processes for multimodal medical reasoning tasks using LLMs and LVLMs. These should be included as strong baselines to better validate the effectiveness of CGoT. 4. The proposed HIE-Reasoning benchmark requires a more detailed description, particularly regarding the distribution of open-ended and multiple-choice questions. **Reference** [1] MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making, NeurIPS, 2024. [2] MedAgents: Large Language Models as Collaborators for Zero-shot Medical Reasoning, ACL, 2024. Other Comments Or Suggestions: 1. Several typos are present in the paper. For example, in Line 048 and Line 090, "Fig. 1" and "Fig. 2" should be revised to "Figure 1" and "Figure 2". In Line 124, "proposed" should be revised to "propose". In Line 255, "Sec" should be revised to "Section". In Line 207, "MRI Interpretation Summary." should be revised to "Task 6. MRI Interpretation Summary.". 2. "MRI" needs to be introduced with its full name at the beginning of the paper. Questions For Authors: 1. How to evaluate the model performance on open-ended questions in the HIE-Reasoning benchmark? 2. How to identify the clinical knowledge (i.e., visual and textual knowledge) most relevant to each specific question? Was this knowledge preprocessed at the beginning? 3. In Table 3, why does retaining only GoT or clinical knowledge lead to worse performance compared to removing both simultaneously? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the constructive comments. We address your concerns below within 5000 characters.

### Q1. Computational Complexity

| Model | Input Tokens | Output Tokens | Inference Time | API Cost |
|-|-|-|-|-|
| Gemini|10,580| 446| 6.50s| $0.00080 |
| CGoT-Gemini| 36,256 | 1,146 | 21.09s | $0.00272|
| MedAgent-Gemini| 354,801| 10,497| 210.75s| $0.02976|
| MDAgent-Gemini| 236,193| 9,228| 142.75s| $0.02048|

*Table 1-1: Case-wise computational analysis on the Gemini-1.5-flash backbone.*

CGoT introduces modest overhead compared to Gemini-1.5-Flash but achieves 15% accuracy improvement with clinical transparency. CGoT significantly reduces computational cost compared to SOTA agents (MedAgent, MDAgent) through structured reasoning that mirrors clinical workflows. We will add this new analysis in a later version.

### Q2. Dependency on Prerequisite Tasks

Prerequisite tasks before final decisions align with standard clinical practice. Our approach enhances both transparency and effectiveness.

1. **Intermediate steps are necessary and the model is robust to minor errors in these intermediate steps:**

**Intermediate steps are necessary**: *Table 2-1: Removing MRI injury scoring from the reasoning chain leads to a sharp performance drop in outcome prediction.*

| MRI Injury Score | 2-year Outcome |
|------------------|---------------|
| ✘ | 50.94 |
| ✔ | 71.73 |

**Robust to small errors in intermediate steps**: We applied random ±1-level perturbations (across 4 severity levels) to the MRI injury score in varying percentages of cases (0–30%). Results show graceful degradation, indicating robustness to small inaccuracies.

*Table 2-2. CGoT is robust to minor errors of intermediate steps.*

| Perturbation Ratio (%) | 2-year Outcome (%) |
|------|-----|
| 0 | 71.73 |
| 10 | 67.83 |
| 20 | 66.22 |
| 30 | 62.72 |

2. **Clinically Grounded:** CGoT mimics clinical diagnostic workflows by modeling the stepwise reasoning clinicians use. 
Its intermediate tasks are clinically meaningful and enhance interpretability. Step-by-step inference in chain-of-thought reasoning [A] and inference scaling [B] have been shown to enhance interpretability and improve performance on other complex reasoning tasks. Table 4 (main paper) shows that incorporating such intermediate tasks improves the final outcome prediction compared to relying solely on end-to-end black-box models (Table 2, main paper).

### Q3. Comparisons with MDAgent and MedAgent

| Model | Lesion Grading (%)| Lesion Anatomy (%) | Rare Lesion Locations (%) | MRI Injury Score (%) | Neurocognitive Outcome (%) | Interpretation Summary (%) |
|---|---|---|----|-----|------|-----|
| MDAgent| 28.57, 0.1508| 42.61 | 38.29 |47.85 | 48.81| 51.22|
| MedAgent| 30.95, 0.1674| 41.88| 37.57| 45.80| 54.32| 49.17|
| **CGoT** | **62.41, 0.0703** | **43.57** | **41.47** | **49.62**|**71.73**| **53.68** |

*Table 3-1: Performance comparisons.*

CGoT outperforms both baselines by incorporating structured clinical reasoning with domain-specific visual and textual knowledge, enabling superior performance on complex clinical tasks.

### Q4. Question Types

|Total QA Pairs|Open-Ended|Multiple Choice |
|--|---|---|
| 749 | 399| 350|

*Table 4-1: Question distribution.*

- **Open-ended:** Lesion percentage, lesion anatomy, rare lesion localization, MRI interpretation summaries.
- **Multiple-choice:** Lesion severity ratings, MRI injury scoring, outcome classification.

### Q5. Evaluation Metric

(1) ROUGE-L, capturing content overlap and fluency by comparing generated answers to expert references, commonly used in medical summarization tasks with LLMs [C]. (2) F1 score for questions related to brain regions, to assess the correctness and completeness of the injured regions (Sec. 5).

### Q6. 
Identify Clinical Knowledge Relevance

Clinical experts (radiologists, neonatologists, and neurologists) curated, identified, and validated all relevant clinical knowledge during dataset construction to align with real-world clinical reasoning.

### Q7. Analysis of Components in Table 3 (main paper):

Retaining only GoT or only clinical knowledge performs worse than removing both due to misaligned or incomplete reasoning. GoT without clinical context may propagate irrelevant patterns, while clinical knowledge without GoT ignores task dependencies. When both are removed, the model defaults to a purely end-to-end approach, which avoids the confusion caused by partially constrained reasoning. This highlights the importance of combining knowledge and clinical reasoning.

### Q8. Paper Revision

We will carefully review and revise the paper.

[A]. W., J., et al. "Chain-of-thought prompting elicits reasoning in large language models." NeurIPS 2022. [B]. G., D., et al. "Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning." arXiv 2025. [C]. T., L., et al. "Evaluating large language models on medical evidence summarization." NPJ digital medicine 6.1 (2023): 158. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. The authors have mostly addressed my comments and I will maintain the original score.
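As background on the ROUGE-L metric cited in Q5 of the rebuttal above: ROUGE-L scores the longest common subsequence (LCS) between a generated answer and an expert reference. A minimal self-contained sketch of the F1 variant, assuming simple whitespace tokenization (not the authors' exact evaluation code):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists (DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    """LCS-based ROUGE-L F1 between two whitespace-tokenized strings."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

# All 5 candidate tokens appear in order in the 6-token reference:
# P = 5/5, R = 5/6, F1 = 10/11 ≈ 0.909
print(rouge_l_f1("severe injury in basal ganglia",
                 "severe injury in the basal ganglia"))
```

Because LCS only requires tokens to appear in the same order (not contiguously), the metric rewards answers that preserve the reference's content and ordering while tolerating inserted words.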
T1: Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling
Accept (poster)
Summary: This paper examines the SFT + RL training pipeline for enhancing reasoning in LLMs. The authors propose several techniques to improve model performance, including SFT with critiques, auxiliary entropy bonuses, high-temperature sampling, Exponential Moving Average (EMA) stabilization, and On-policy KL normalization. These methods aim to refine reasoning quality, stabilize training, and optimize exploration in the RL framework. Claims And Evidence: The paper claims that SFT with critiques provides a strong foundation for RL training, but it lacks experimental evidence to support this assertion. Without concrete results, it is unclear how much critiques contribute to improving reasoning performance. Additionally, all proposed techniques—including SFT with critiques, auxiliary entropy bonuses, high-temperature sampling, EMA stabilization, and On-policy KL normalization—require a thorough ablation study to isolate their individual effects and validate their impact on model performance. Methods And Evaluation Criteria: While the proposed methods are conceptually sound, the paper lacks sufficient discussion on their effectiveness. There is minimal empirical analysis or justification for how each technique contributes to overall model improvement. A deeper exploration, supported by quantitative results and comparisons, is needed to validate their impact on reasoning performance. Theoretical Claims: This is not a theory paper. Experimental Designs Or Analyses: The paper has serious issues in its experimental design. While multiple techniques are proposed, there is insufficient experimental validation to demonstrate the effectiveness of each method. A more rigorous evaluation, including ablation studies and comparative analysis, is necessary to substantiate the contributions of these techniques. Supplementary Material: I hoped the authors would provide a detailed explanation of the training process, but none is given. 
Relation To Broader Scientific Literature: This paper has limited contributions in terms of the techniques it discusses. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths 1. Explores supervised fine-tuning (SFT) with critiques + reinforcement learning (RL) as a means to enhance reasoning in LLMs. Weaknesses 1. Lack of Empirical Validation – The paper proposes multiple techniques (e.g., SFT with critiques, auxiliary entropy bonus, high-temperature sampling, EMA, On-policy KL normalization) but lacks sufficient experimental results to support their effectiveness. An ablation study is needed. 2. Lack of a detailed description of the proposed method. Other Comments Or Suggestions: None Questions For Authors: Lack of clarity on how critiques are used for SFT training: 1. If critiques are incorporated into SFT, what is the learning signal? 2. Is the model being explicitly trained to generate critiques alongside answers? 3. Is there a more powerful model used to generate critiques alongside answers? 4. What is the prompt used to generate the critiques? 5. How do the authors ensure the correctness of the critiques? Can the authors provide an ablation study for the techniques proposed in the paper? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your kind review and feedback! We would first like to emphasize that the primary contribution of this work is to propose a framework designed to effectively scale the reinforcement learning (RL) training of large language models (LLMs). This approach significantly enhances reasoning capabilities and offers a unique perspective on understanding inference scaling. Extensive experimental results on math and reasoning benchmarks clearly demonstrate the effectiveness of our method, and we hope you recognize both our contribution and the efficacy of our proposed framework. Additionally, we fully agree that comprehensive ablation studies are essential. To thoroughly validate each component's effectiveness, we have included extensive ablation analyses in our paper (see Tables 1, 2, 3, and Figure 2). For your convenience, we re-present these results here:

1. Effects of high temperature

| Temperature | min-*p* | MATH500 | AIME | Omni-MATH-500 |
|-------------|---------|---------|------|---------------|
| 0.9 | 0 | 78.2 | 19.1 | 32.0 |
| 1.1 | 0 | 84.6 | 29.0 | 37.8 |
| 1.2 | 0 | 86.4 | 29.3 | 38.6 |
| 1.3 | 0 | 84.6 | 24.3 | 36.4 |
| 1.2 | 0.05 | 78.8 | 11.5 | 31.6 |

2. Effects of penalty design

| | Penalty | step40 | step80 | step120 | step160 |
|--------------|---------|--------|--------|---------|---------|
| OverLongRatio | ✔️ | 0% | 2.6% | 1.6% | 0.7% |
| | ✘ | 0% | 4.1% | 16.3% | - |
| Accuracy(%) | ✔️ | 78.6 | 80.1 | 81.2 | 81.2 |
| | ✘ | 79.0 | 79.2 | 76.4 | - |

3. Effects of long CoT for SFT. It can be observed that RL with LongCoT (T1) significantly outperforms RL with short CoT (Qwen2.5-Instruct). 
| Model | MATH500 | AIME | Omni-MATH-500 |
|-----------------------------|---------|------|---------------|
| (RL) Qwen2.5-14B-Instruct | 78.9 | 13.7 | 30.1 |
| (SFT) T1-SFT (Qwen2.5-14B) | 77.2 | 10.3 | 28.5 |
| (RL) T1 (Qwen2.5-14B) | 87.4 | 30.5 | 38.6 |
| -- | -- | -- | -- |
| (RL) Qwen2.5-32B-Instruct | 82.8 | 13.6 | 33.1 |
| (SFT) T1-SFT (Qwen2.5-32B) | 83.4 | 24.9 | 34.6 |
| (RL) T1 (Qwen2.5-32B) | 92.4 | 50.6 | 49.6 |

For the questions:

Q1: If critiques are incorporated into SFT, what is the learning signal?
- A1: In this work, we employ a prompt engineering pipeline (as detailed in Section 2.2.1) to generate responses accompanied by reflections. Critiques serve as intermediate outputs within this pipeline and are integrated into the final response rather than being presented separately. Consequently, our training employs traditional Supervised Fine-Tuning (SFT) with next-token prediction.

Q2: Is the model explicitly trained to generate critiques alongside answers?
- A2: No, critiques are inherently part of the generated response rather than explicitly trained as a separate output.

Q3: Is there a more powerful model used to generate critiques alongside answers?
- A3: We utilize Qwen-72B-Instruct and Gemini-1.5-Pro to initially generate Chains-of-Thought (CoT). Gemini-1.5-Pro is specifically employed to generate critiques. Finally, the o1-mini model (without access to its hidden CoT) integrates the generated CoT and critiques into the final response. Notably, except for o1-mini, these models demonstrate significantly poorer math reasoning performance compared to T1. Furthermore, since the o1-mini model's CoT is inaccessible, using responses from it without the explicit long CoT results in considerably weaker performance relative to T1.

Q4: What is the prompt used to generate the critiques?
- A4: The synthetic responses are generated through a prompting pipeline with three prompts. 
We will add them into the appendix in the next version due to the space limit of the rebuttal. To generate critiques, the key point is to ask the LLM to “analyze what's wrong in every given wrong response and try to find the key point in every correct response”. Q5: How do the authors ensure the correctness of the critiques? - A5: We do not explicitly verify the correctness of critiques. Instead, we expect the model to learn and initialize self-reflection behavior in the SFT stage and prioritize accuracy and effectiveness improvements during the Reinforcement Learning (RL) stage. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification and for sharing the ablation study experiment. Q1: How are critiques integrated into the final response? Are they simply concatenated? New question: It seems that a significant part of this project/paper’s contribution comes from its open-source nature. Could you elaborate on that? --- Reply to Comment 1.1.1: Comment: Thanks for your kind response! ### Q1: How are critiques integrated into the final response? Are they simply concatenated? A1: The critiques are not merely concatenated; instead, we employ an additional LLM to rewrite and integrate the attempts along with their critiques into a coherent final response. In our preliminary experiments, we initially attempted to directly concatenate all responses and critiques without further processing. However, we found that the resulting output was neither fluent nor effective. Consequently, we adopted an LLM-based integration approach. Specifically, as detailed in Section 2.2.1, we first concatenate multiple intermediate responses $({y_i}^N)$ and their associated critiques $({c_i}^N)$. This concatenated text is then provided to an LLM along with a "rewrite" instruction. The input prompt to the LLM is formatted as follows: --- prompt = """ Show me a new solution to the given problem according to the reference solutions. 
The new solution should contain the meaningful failed attempts in the above answers, and you can learn from the mistakes and finally reach the correct results.
Keep all the above attempts to show your intelligent reasoning and reflection.
You can do some cross-verification across different `correct responses` or use different methods to solve the problem and ensure correctness.
You can use the failed attempts in the above answers to learn from the mistakes and finally reach the correct results.

[Problem]
[Response-1]
[Critique-1]
...
[Response-N]
[Critique-N]
"""

---

In this way, the LLM can gracefully integrate the attempts and critiques into a fluent final response with reasoning and reflection.

### Q2: It seems that a significant part of this project/paper's contribution comes from its open-source nature. Could you elaborate on that?

A2: All experiments in our paper are conducted using open-source models (Qwen2.5 and GLM-4-9B). While the original training data is publicly accessible, we have performed additional cleaning and curation. We will release our trained models, curated datasets for both supervised fine-tuning (SFT) and reinforcement learning (RL), and comprehensive training details to facilitate further research and advancement within the open-source community.

We hope that our response helps address your concerns. Should you have any additional questions or require further clarification, we would be pleased to discuss them further.
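The assembly of the N intermediate responses and critiques into the rewrite prompt described above can be sketched as follows. This is a minimal illustration: the helper name `build_rewrite_prompt` and the condensed instruction text are our own, not from the authors' pipeline; only the bracketed section labels follow the rebuttal.

```python
# Hypothetical sketch of building the rewrite prompt from N intermediate
# responses and their critiques.  Illustrative only.

def build_rewrite_prompt(problem, responses, critiques):
    """Interleave each response with its critique, then prepend a
    condensed rewrite instruction (the full instruction is quoted above)."""
    instruction = (
        "Show me a new solution to the given problem according to the "
        "reference solutions, keeping the meaningful failed attempts."
    )
    parts = [instruction, f"[Problem]\n{problem}"]
    for i, (resp, crit) in enumerate(zip(responses, critiques), start=1):
        parts.append(f"[Response-{i}]\n{resp}")
        parts.append(f"[Critique-{i}]\n{crit}")
    return "\n\n".join(parts)

prompt = build_rewrite_prompt(
    "Compute 2 + 2.",
    ["Attempt: 5", "Attempt: 4"],
    ["Arithmetic slip: 2 + 2 is not 5.", "Correct."],
)
```

The resulting string is then handed to the integrating LLM as a single input.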
Summary: The paper proposes an RL-based method to improve the reasoning / inference-scaling capabilities of LLMs. The paper discusses the importance of exploration and proposes an RL objective incorporating an entropy bonus to encourage diverse sampling. The paper also discusses specific tricks, such as the format penalty, the over-long length penalty, the size of the rollout, the sampling temperature, and min-p, and their impact on RL training. The experiments showing the correlation between generation length and accuracy are very interesting. They show results on scaling inference thinking steps, where the SFT model has a flat curve, but RL enables the model to use the inference-scaling budget more effectively and improve on MATH, AIME, etc. It also proposes the hypothesis that not just correct steps matter; wrong steps that led to a correct solution also matter. Overall, the paper presents very strong RL results on widely used benchmarks; its empirical analyses are clear and thought-provoking. It proposes a novel thesis and new ways to understand RL reasoning training and inference scaling.

## update after rebuttal

My position still stands as accept.

Claims And Evidence:

Claim 1: Exploration matters for RL learning and RL stability. The paper shows empirical evidence that higher temperatures in RL sampling (1.1 and 1.2) give better training results in Table-2. Also, a greater rollout budget gives significantly better training results, as in Figure-4. However, what I failed to find is an ablation on the entropy penalty in the loss function, which seems to be a core claim of the paper.

Claim 2: Inference scaling is key and closely connected to RL reasoning learning. The paper proposes a novel way to measure inference scaling by truncating the thinking budget. Figure-7 shows a consistent pattern: for RL models, a longer thinking budget improves reasoning accuracy, and truncating the thinking process leads to performance loss for reasoning models.
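The token-level entropy bonus discussed in Claim 1 can be made concrete with a small sketch. Assumption: the bonus is proportional to the Shannon entropy of each next-token distribution and is added to the RL objective; the paper's exact weighting is not reproduced here.

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a single next-token distribution.
    An entropy bonus adds a term proportional to this to the RL objective,
    rewarding policies that keep their sampling distribution diverse."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# A uniform distribution over 4 tokens is maximally diverse (entropy ln 4),
# while a peaked distribution contributes almost no bonus.
uniform = [0.25, 0.25, 0.25, 0.25]
peaked = [0.97, 0.01, 0.01, 0.01]
assert token_entropy(uniform) > token_entropy(peaked)
```

In practice this quantity would be averaged over the tokens of each sampled response and scaled by a coefficient before being added to the loss.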
Methods And Evaluation Criteria: The proposed method and evaluation criteria are widely used and accepted, and make sense.

Theoretical Claims: Yes, the claim that adding an entropy penalty to the RL loss encourages exploration makes sense. The connection it draws between inference-time scaling and reasoning training makes sense.

Experimental Designs Or Analyses: Yes, I checked the main paper result, Table-1. To obtain stable numbers for small test sets such as AIME, it averages the result across many runs.

Supplementary Material: NA

Relation To Broader Scientific Literature: It relates to the warm-start RL training in the DeepSeek-R1 technical report.

Essential References Not Discussed: An ablation on the token-entropy loss seems to be missing, and its impact is not discussed. An analysis of the importance of the CoT initialization is also missing.

Other Strengths And Weaknesses: Other strength: 1. Proposes to analyze the inference-scaling effect in LLMs by truncating the reasoning trace and using a summarization model to give the final response. It unveils convincing patterns that reasoning models benefit from a longer thinking budget, and is well connected to inference-time scaling.

Other Comments Or Suggestions: NA

Questions For Authors: Have you tried other RL algorithms, e.g., GRPO? Is the improvement you obtained specific to the algorithm being used? Is the warm-start needed? The paper claims that one distinction it makes is considering both the correct and wrong steps of reasoning, but there does not seem to be enough evidence backing the claim. Also, having a cold-start baseline would be helpful to illustrate the importance of your proposed CoT initialization strategy. And if cold start doesn't work, maybe the correct responses lie in a region difficult to reach from the base model; this would further illustrate the strength of the warm-up strategy the authors proposed. Is there a plan to release the training code/setup?

Code Of Conduct: Affirmed.
Overall Recommendation: 4

Ethical Review Concerns: NA
Rebuttal 1: Rebuttal: Thanks for your valuable feedback!

### Q1: Have you tried other RL algorithms?

We also conduct experiments using GRPO, running the experiments on Qwen2.5-14B with K=16 for efficiency. The results are as follows:

| | AIME | Omni-math-500 | MATH500 |
| --- | --- | --- | --- |
| T1 w/ RLOO | 23.8 | 36.8 | 83.6 |
| T1 w/ GRPO | 23.4 | 37.4 | 83.8 |

It can be observed that T1 with RLOO and T1 with GRPO show similar performance, which demonstrates that the proposed techniques bring consistent improvement across existing RL algorithms, including both RLOO and GRPO.

### Q2: Is the warm-start needed?

A2: Yes, a warm-start is important for reinforcement learning (RL) training, as it provides a strong initial foundation for reasoning patterns and encourages exploration. For example, Qwen2.5-14B-Instruct, which is trained with RL but utilizes short CoT, significantly underperforms T1, an RL-trained model initialized from a LongCoT SFT model. Recent studies, such as DeepSeek-R1-Zero, have shown that RL training from a base model can also yield strong reasoning capabilities and self-reflection patterns. However, training directly from a base model typically requires an extensive training period to develop these cognitive patterns. A warm-start approach positions the model effectively from the outset, allowing continuous optimization and refinement of its reasoning abilities.

| Model | MATH500 | AIME | Omni-MATH-500 |
| ----------------------------- | --------- | ------ | --------------- |
| (RL) Qwen2.5-14B-Instruct | 78.9 | 13.7 | 30.1 |
| (SFT) T1-SFT (Qwen2.5-14B) | 77.2 | 10.3 | 28.5 |
| (RL) T1 (Qwen2.5-14B) | 87.4 | 30.5 | 38.6 |
| (RL) Qwen2.5-32B-Instruct | 82.8 | 13.6 | 33.1 |
| (SFT) T1-SFT (Qwen2.5-32B) | 83.4 | 24.9 | 34.6 |
| (RL) T1 (Qwen2.5-32B) | 92.4 | 50.6 | 49.6 |
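For readers comparing the two algorithms in the table above: both RLOO and GRPO use the group of K sampled responses per prompt to form a baseline, differing mainly in how. A minimal sketch of the two advantage estimates, as we understand them (our own simplification; GRPO additionally divides by the group's reward standard deviation, which is omitted here):

```python
def rloo_advantages(rewards):
    """RLOO: each sample's baseline is the mean reward of the other K-1
    samples in the group (leave-one-out)."""
    k, total = len(rewards), sum(rewards)
    return [r - (total - r) / (k - 1) for r in rewards]

def group_mean_advantages(rewards):
    """GRPO-style centering: subtract the full group mean (the real GRPO
    also normalizes by the group's standard deviation)."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

# With binary correctness rewards for K=4 rollouts of one prompt:
adv = rloo_advantages([1, 0, 0, 1])
```

For binary rewards the two estimates differ only by a constant scale factor of K/(K-1), which is consistent with the similar results reported for RLOO and GRPO.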
Summary: The paper introduces T1, a novel approach for enhancing the reasoning abilities of large language models (LLMs) by scaling reinforcement learning (RL) and leveraging inference compute. The method begins by initializing the LLM with synthesized chain-of-thought data that incorporates trial-and-error and self-verification, enriching the model's reasoning patterns beyond simple correct-step replication. During subsequent RL training, T1 encourages extensive exploration by oversampling diverse responses using high-temperature sampling and incorporates a token-level entropy bonus along with on-policy KL normalization to stabilize training. These techniques boost the model's ability to generate longer, coherent "thinking" sequences and unlock an inference-scaling property, where increased generation length directly correlates with improved reasoning performance. Empirical evaluations are mainly conducted on math reasoning benchmarks (including AIME2024, MATH500, Omni-MATH-500, and GPQA), demonstrating that T1 outperforms both its supervised fine-tuning baseline and other state-of-the-art models, including o1-preview and QwQ-32B-preview. The paper also proposes a simple strategy to measure inference scaling by truncating the generated reasoning process and showing that longer reasoning leads to more accurate final answers.

## update after rebuttal

My review still leans toward acceptance, but it could also be rejected. The major concern about "All training and most evaluations focus on math word problems and quantitative reasoning" is partially addressed with a few additional experiments (on one additional logical reasoning task), and there are some interesting findings like "our experiments indicate that training on mathematical problems generalizes effectively to other domains." I hope to see more insights into why and how math reasoning can be generalized to other domains (with more thorough empirical and, if possible, theoretical results).
The claims in the next version should be re-centered on math reasoning to avoid any confusion or misinterpretation by the audience.

Claims And Evidence: The paper makes several claims that are largely supported by **empirical evidence**, though a few appear slightly overstated. The primary claim is that the proposed T1 model achieves superior performance on challenging math reasoning benchmarks and exhibits inference-scaling behavior. This is backed by Table 1 in the paper, which shows T1 (with RL training) outperforming baseline models on multiple math problem datasets. Another core claim is that T1 demonstrates inference scaling, meaning that allowing the model to "think" longer (an increased inference budget) directly yields higher accuracy without external verification. The authors support this by conducting experiments in which they systematically truncate the chain of thought at varying lengths and measure performance. The claim that T1's exploration-oriented RL training yields better reasoning is supported by ablations, such as "Sampling more responses encourages exploration," "High temperature in sampling benefits RL training," and "Effects of penalty."

A minor overstatement: the claim that "T1 achieves superior performance across all benchmarks" is inaccurate; on GPQA, a baseline model slightly outperformed T1.

Methods And Evaluation Criteria: Yes, the paper proposes an appropriate methodology and benchmarks for reasoning tasks.

- well-chosen benchmarks (AIME2024, MATH500, OmniMath, GPQA) objectively test complex math reasoning
- key design choices are justified, including synthetic chain-of-thought data, high-K sampling (K=64) for RL, and entropy-based exploration

One major limitation is that the paper focuses heavily on math, training on math and testing on math, thus limiting claims about general reasoning improvement.

Theoretical Claims: There are no theoretical analyses in this paper.
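The truncation-based measurement of inference scaling described in the summary can be sketched as follows. Here `extract_answer` is a toy stand-in for the summarization model that produces a final answer from a partial trace; all names and data are illustrative, not from the paper.

```python
def extract_answer(tokens):
    """Toy stand-in for the summarization step: return the last explicit
    answer marker visible within the (possibly truncated) trace."""
    ans = None
    for t in tokens:
        if t.startswith("ANSWER:"):
            ans = t.split(":", 1)[1]
    return ans

def accuracy_at_budgets(traces, answers, budgets):
    """Truncate every reasoning trace to each thinking budget and score
    the final answer recovered from the partial trace."""
    return {
        b: sum(extract_answer(t[:b]) == a for t, a in zip(traces, answers)) / len(traces)
        for b in budgets
    }

traces = [["think", "ANSWER:4"], ["think", "think", "ANSWER:7"]]
curve = accuracy_at_budgets(traces, ["4", "7"], budgets=[1, 2, 3])
# For a reasoning model, accuracy should rise with the budget, as in Figure-7.
```

Sweeping the budget and plotting the resulting curve is exactly the accuracy-versus-thinking-budget measurement the review discusses.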
Experimental Designs Or Analyses: Yes, the paper provides robust, well-executed experiments with detailed ablations, for example:

- comparisons with strong baselines (GPT-4, Claude, QwQ-32B) confirm T1's effectiveness (Table 1)
- ablation studies validate key components: scaling K improves exploration, while penalties prevent training collapse (Figure 3, Table 3)
- inference scaling is well tested: truncation experiments confirm that longer reasoning systematically improves accuracy

Supplementary Material: Yes, the supplementary materials of this paper mainly contain some implementation details and sampled examples from the proposed model.

Relation To Broader Scientific Literature: The paper studies the recent trend of using RL to improve LLM reasoning, i.e., test-time scaling. The paper's contributions tie into the literature by confirming hypotheses that were hinted at in prior work (that RL and longer reasoning could significantly improve performance) and by introducing techniques (trial-and-error data augmentation, high-volume sampling in RL) that advance the understanding of how RL works.

Essential References Not Discussed: This paper addresses one of the most urgent research problems in training thinking models right now. Given how quickly this area has risen, the important missing references are mostly the other concurrent works on improving LLM reasoning with RL, for example, DeepSeek's R1, s1, or any other claimed successful replications of o1 or R1. While it is unfair to ask this paper to compare comprehensively with others, we should expect a more thorough discussion should this paper go into ICML.

Other Strengths And Weaknesses: Strengths:

1. A major strength of the paper is how it creatively integrates several strategies – synthetic data generation, large-scale RL training, entropy regularization, etc. – into a cohesive framework.
Each component (trial-and-error CoT pre-training, oversampling, penalty) is not entirely new on its own, but the way they are combined achieves the empirical improvements.

2. The paper's clarity is generally strong. It is well-structured, first motivating the problem, then describing the T1 approach, and then providing extensive evaluation. The authors also clearly articulate the intuition behind each design choice (e.g., why high-temperature sampling is used and why they expect trial-and-error data to help). The writing is mostly easy to follow, and important points are supported by either references or experimental evidence.

3. The paper mentions that "The model weights and the data for SFT and RL training will be publicly available." If the authors follow through, this is a strong point of openness. It would allow the community to reproduce and build on T1. Considering the computational intensity of this approach, releasing the trained model and the large synthetic dataset would be highly valuable for further research.

Weaknesses:

1. A notable limitation is that all training and most evaluations are centered on math word problems and quantitative reasoning. While the paper argues this is a generic "reasoning" improvement, it is possible that some of the gains are specialized to math problem-solving. The trial-and-error CoT data is constructed for math questions; the reward function is based on checking numeric or symbolic answers. The authors do test one general puzzle dataset (GPQA) and see improvements, which is encouraging, but a broader evaluation would strengthen the claims of general reasoning enhancement. In short, generality is a bit under-tested: the method's success in math might be partly due to the structured nature of math problems (clear correctness criteria, availability of many similar problems for training).
Future work could explore applying T1 to other reasoning benchmarks (e.g., Big-Bench reasoning tasks, logical deduction puzzles) to ensure the approach is universally beneficial. 2. Some critical training details are missing, including training infra config, compute cost, and more details on the training data. The approach is computationally heavy. RL with K=64 sampled responses per prompt, especially on a 32B model, implies a massive amount of GPU time. The paper doesn’t detail the compute used, but one can infer it is significant. This could be a practical weakness – the barrier to entry for others to apply this method is high. The authors mitigate this by planning to release the model, but training a new model with T1’s approach would be expensive. It’s worth noting that they chose relatively smaller base models (9B, 14B, 32B), likely because scaling to 70B or beyond with such an intensive training loop would be extremely costly. So, the method’s scalability in terms of engineering is a concern. Other Comments Or Suggestions: N/A Questions For Authors: Some of the questions are similar to the weaknesses or problems I raised above; the authors can combine them to reduce the overall response length. 1. The paper's evaluation primarily focuses on math benchmarks (MATH500, AIME, Omni-MATH), with a small test on GPQA. Could the authors provide clear results with meaningful baselines on non-mathematical reasoning tasks, such as multi-hop QA, commonsense reasoning, or logical deduction tasks? 2. Given that RL training was done with K=64 sampled responses per prompt, and models were trained up to 32B parameters, can the authors provide an estimate of GPU hours or FLOPs used for training? 3. The authors mention that the initial supervised fine-tuning (SFT) was done on synthetic reasoning data with trial-and-error and verification steps. Which model was used to generate this data (e.g., GPT-4, an earlier version of Qwen, etc.)? 4. 
The paper evaluates T1 at 9B, 14B, and 32B scales. Did the authors test whether similar RL training benefits models at a smaller scale (e.g., 7B or 3B)? 5. Did the authors observe high variance in results across different training runs? Does T1 always converge to a strong policy, or did some runs fail to improve significantly? 6. The authors demonstrate that RL improves reasoning over supervised fine-tuning (SFT) on synthetic data. How would T1 compare to an SFT model trained on a much larger dataset of human-annotated reasoning chains (e.g., thousands of expert solutions rather than synthetic ones)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your kind review and valuable suggestions!

### Q1 & W1: All training and most evaluations focus on math word problems and quantitative reasoning.

Thank you for the question. We primarily conduct reinforcement learning (RL) training on math problems for the following two reasons:

1. Math problems can be reliably verified by rules, so the reward cannot be hacked during RL training.
2. Math data are easily accessible in open-source communities. Data in other domains tend to be challenging either to verify reliably or to collect comprehensively.

Nonetheless, our experiments indicate that training on mathematical problems generalizes effectively to other domains, and we actively evaluate our models on non-mathematical tasks as well, including GPQA and ZebraLogic. We found that training on math data can effectively lead to improvement on other reasoning tasks.

| | GPQA | ZebraLogic |
| -- | -- | -- |
| T1-SFT (Qwen-14B) | 42.3 | 15.3 |
| T1 (Qwen-14B) | 48.3 | 27.8 |
| T1-SFT (Qwen-32B) | 49.5 | 20.3 |
| T1 (Qwen-32B) | 56.1 | 27.9 |

### Q2 & W2: Critical training details, such as infrastructure configuration, compute costs, and detailed training data information, are missing.

We employ 64 H800 GPUs, with each iteration taking approximately 300 seconds for a 14B-parameter model (larger models require more GPUs and higher computational resources). A smaller K leads to faster training. While training larger models involves substantial costs, smaller models also significantly benefit from RL training. Furthermore, ongoing improvements in training and inference infrastructure within the open-source community are expected to continuously reduce the compute requirements for RL training and enhance overall training efficiency.

For the training data, we use open-source data, including MATH-train and NuminaMath. We split around 12k examples for the SFT stage and use the rest for RL training. We will open-source the data later.
You can find more training details in Appendix A.

[1] MATH (train): Measuring Mathematical Problem Solving With the MATH Dataset
[2] Numina-MATH: https://huggingface.co/datasets/AI-MO/NuminaMath-CoT/

### Q3: Could you provide an estimate of GPU hours or FLOPs used for training?

We utilize 64 H800 GPUs, with each iteration requiring around 300 seconds for a 14B model. Since RL training combines generation and training processes, we have not recorded exact FLOP measurements.

### Q4: Which models are used for synthetic data generation?

We leverage multiple models for synthetic data generation. Specifically, we use Qwen-72B-Instruct and Gemini-1.5-Pro to generate initial Chains-of-Thought (CoT), Gemini-1.5-Pro to generate critiques, and finally, the o1-mini model (without hidden CoT) to merge the CoT and critiques into the final answer. We experimented with several other models and discovered that different models excel in different aspects; thus, we utilize a combination of these models to produce high-quality final outputs.

### Q5: Have you conducted RL training on smaller models?

A5: Currently, our experiments mainly cover 9B to 32B models, and all of them show significant performance improvement. We have not conducted RL training on smaller models, but we expect RL to yield observable improvement across different model scales.

### Q6: Is there significant variance across different RL training runs?

In our experiments, we observed stable and consistent performance across multiple runs. Variance can be further reduced by increasing the number of sampled responses (larger $K$). Overall, the improvements from RL training are consistent and reproducible across different runs.

### Q7: How would model T1 compare with a Supervised Fine-Tuning (SFT) model trained on a larger dataset of human-annotated reasoning chains?

This is an insightful question and one that we are also exploring.
Currently, nearly all available reasoning datasets are synthetically generated by models. Human-annotated datasets, such as GSM8k-train and MATH-train, usually contain short CoTs, leading to inferior performance. Open-source, high-quality, human-annotated reasoning datasets remain scarce, and it is unclear whether human-generated LongCoT data would outperform synthetic or RL-trained models. Annotating large-scale human reasoning data is challenging and costly, but we aim to investigate this further in future work.
Summary: This paper introduces T1, a method for enhancing LLM reasoning through reinforcement learning with increased exploration and inference scaling. The authors initialize a policy with chain-of-thought data incorporating trial-and-error patterns, promote exploration during RL through response oversampling, and analyze inference scaling by truncating reasoning steps.

Claims And Evidence: The paper's empirical results show performance improvements on math reasoning benchmarks. However, the lack of comparison with other RL methods for LLMs (particularly GRPO) significantly undermines the credibility of the advancement claims. Without these comparisons, it is impossible to determine whether T1 represents genuine progress over existing approaches.

Methods And Evaluation Criteria: The exploration-encouraging RL approach has merit, but its novelty is questionable. The core motivation of encouraging exploration during RL training for better reasoning closely mirrors GRPO's.

Theoretical Claims: I don't see any theoretical claims to verify.

Experimental Designs Or Analyses: The experiments focus on benchmark performance but lack comparative analysis against state-of-the-art RL methods for LLMs. This omission is critical since the authors claim to advance LLM reasoning through RL. The ablation studies are informative but insufficient without positioning against relevant baselines like GRPO or PPO.

Supplementary Material: Yes, the example description part.

Relation To Broader Scientific Literature: The paper inadequately discusses its relationship to recent work on RL for LLMs. While it mentions techniques like RLOO, it fails to thoroughly compare with and differentiate from other RL methods that share remarkably similar goals and motivations.
Essential References Not Discussed: GRPO, PPO

Other Strengths And Weaknesses:

Strengths:
- Novel approach to scaling RL for reasoning
- Clear experimental demonstration of inference scaling
- Strong empirical results on challenging benchmarks
- Practical implementation on open-source models

Weaknesses:
- Limited comparison to alternative RL algorithms
- Heavy focus on math reasoning with less emphasis on other domains
- Limited analysis of the computational costs of the approach
- Some details about reward modeling are underspecified

Other Comments Or Suggestions: No, I don't have any other comments.

Questions For Authors: How does the computational cost of T1 compare to other RL approaches for LLMs? The paper mentions sampling K=64 responses, which seems computationally intensive.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your valuable feedback!

### W1: Lack of comparison with other RL methods for LLMs (particularly GRPO)

A1: We appreciate the reviewer's concern regarding the comparison with other RL methods, particularly GRPO and PPO. However, our work, T1, is designed to be independent of any specific RL framework, including GRPO, PPO, and RLOO. The core contribution of T1 lies in developing techniques to scale RL for LLMs, which are agnostic to the underlying RL method. Specifically, the techniques we propose can be integrated with any RL framework to enhance the reasoning abilities of LLMs. We chose RLOO as the primary RL framework due to its demonstrated stability and the strong performance observed in prior work. Notably, we find that RLOO and GRPO show similar results in our preliminary tests, reinforcing the idea that the techniques we introduce can be applied to both methods. Furthermore, the main contribution of our paper lies in how we leverage RL scaling to improve reasoning, a key area where existing methods, including GRPO and PPO, have not demonstrated substantial improvement in comparison to our findings.

We also conduct experiments using GRPO, running the experiments on Qwen2.5-14B with K=16 for efficiency. The results are as follows:

| | AIME | Omni-math-500 | MATH500 |
| --- | --- | --- | --- |
| T1 w/ RLOO (this work) | 23.8 | 36.8 | 83.6 |
| T1 w/ GRPO | 23.4 | 37.4 | 83.8 |

It can be observed that T1 with RLOO and T1 with GRPO show similar performance, which demonstrates that the proposed techniques bring consistent improvement across existing RL algorithms, including both RLOO and GRPO.

### W2: The contribution is similar to GRPO

A2: While both T1 and GRPO aim to improve RL for language models, our approach fundamentally differs from GRPO by directly targeting exploration scaling rather than refining the RL algorithm itself.
- GRPO focuses on decomposing and improving PPO (e.g., by removing the value network and using group-wise reward normalization), with little emphasis on boosting exploration.
- In contrast, T1 introduces strategies that help RL scaling for reasoning, such as trial-and-error learning with self-verification through chain-of-thought data, along with entropy bonuses and dynamic KL regularization, to systematically explore a broader reasoning space.

T1 offers contributions orthogonal to algorithms like GRPO or RLOO and provides new insights into how RL can directly enhance the reasoning capabilities of LLMs, including RL training scaling and inference scaling.

### W3: Computational cost

A3: We employ 64 H800 GPUs for all training, with each iteration taking approximately 300 seconds for a 14B-parameter model with K=64. Larger models require higher computational resources, and a smaller K leads to faster training. Ongoing open-source improvements in training and inference infrastructure are expected to continuously reduce the compute requirements for RL training and enhance overall training efficiency.

### W4: More details about reward modeling.

A4: We use a rule-based reward based on response correctness: 1 for correct and 0 for wrong. We ask the model to put the final answer within a box and then use an LLM to judge whether the model's answer equals the ground truth. You can find more details in Appendix A.
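A minimal sketch of the rule-based reward described in A4, using exact string match in place of the LLM equivalence judge; the regex and helper names are our own, and we assume the boxed-answer convention is LaTeX's `\boxed{...}`.

```python
import re

def boxed_answer(response):
    """Extract the final \\boxed{...} answer from a response, if any."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
    return matches[-1] if matches else None

def rule_based_reward(response, ground_truth):
    """1 if the final boxed answer matches the ground truth, else 0.
    The actual pipeline uses an LLM judge for answer equivalence;
    exact string comparison is a simplification."""
    ans = boxed_answer(response)
    return 1 if ans is not None and ans.strip() == ground_truth.strip() else 0
```

In the real setup the string comparison would be replaced by an LLM call judging mathematical equivalence (e.g., `1/2` vs `0.5`), which exact matching cannot capture.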
Machine Learning meets Algebraic Combinatorics: A Suite of Datasets Capturing Research-level Conjecturing Ability in Pure Mathematics
Accept (oral)
Summary: (i) The paper introduces the Algebraic Combinatorics Dataset Repository (ACD Repo), a collection of nine datasets designed to challenge machine learning models in conjecturing and problem-solving in modern algebraic combinatorics. (ii) The dataset contains foundational results and open problems in algebraic combinatorics, including topics such as computing characters of irreducible symmetric group representations, the mHeight function of a permutation, and Schubert polynomial structure constants. (iii) The paper evaluates various machine learning models, including logistic regression, MLPs, and transformers, highlighting challenges in interpretability and performance.

Claims And Evidence: The claims about the novelty and difficulty of the datasets are justified, but I think the following claims are problematic. (i) The paper claims that ML models can generate useful conjectures in algebraic combinatorics; however, no explicit example of a novel conjecture validated by human experts is provided. (ii) The effectiveness of different ML models is demonstrated, but unexpected performance gaps (e.g., transformers performing worse than MLPs on the lattice path dataset) are not sufficiently analyzed. (iii) The claim that dataset diversity aids in generalization is plausible, but its empirical justification is limited to performance metrics rather than deeper mathematical insights.

Methods And Evaluation Criteria: Overall, the methods and evaluation make sense. The datasets are well-structured, covering a range of combinatorial problems suitable for ML-based conjecturing. Evaluation metrics focus on accuracy, but additional measures such as model interpretability and robustness could strengthen the analysis. Some datasets are relatively small, raising concerns about generalizability to more complex mathematical settings.
Theoretical Claims: The paper does not introduce new theoretical results but leverages known results in algebraic combinatorics to structure the datasets.

Experimental Designs Or Analyses: As noted in the Methods and Evaluation Criteria section, while results are clearly presented, deeper analysis is missing in cases where models underperform, such as the transformer results in Table 1. Furthermore, more discussion of the effect of dataset difficulty and feature representations on model success rates would enhance the findings.

Supplementary Material: The supplementary material consists of a 27-page appendix detailing evaluation setups and additional results. The dataset repository and evaluation scripts are available via an anonymous Git repository, ensuring reproducibility.

Relation To Broader Scientific Literature: The work builds on existing ML approaches for mathematical reasoning and shifts the focus from theorem proving to conjecturing. The work is also closely related to (and potentially contributes to) formal proof verification (e.g., Lean, Coq) and Fajtlowicz's Graffiti conjecture-generating system.

Essential References Not Discussed: It seems that the paper includes all essential references in its related work.

Other Strengths And Weaknesses: Strengths: The paper contributes novel, cutting-edge datasets with real mathematical significance. The datasets are diverse and encourage broader engagement with ML in mathematics. Weaknesses: The lack of discussion of surprising experimental results and the absence of clear examples of ML-generated conjectures that have led to mathematical insights.

Other Comments Or Suggestions: Below are some suggestions for improvement: (i) Discuss the computational cost of training models on the datasets. (ii) Provide qualitative examples of conjectures generated by ML models that are non-trivial and potentially useful to mathematicians. (iii) Move the appendix PDF from the supplementary material into the paper.
Questions For Authors: (i) Why do transformers perform so poorly on the lattice path dataset, underperforming even MLPs and random guessing? (ii) Can the authors provide concrete examples of new conjectures discovered by ML models that were later validated? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful feedback and questions. We especially appreciated the questions about differences in model performance, which we also have. We provide responses to the points in the review below. - *Addition of examples of conjectures powered by machine learning that were later validated by proofs* - We agree that providing some precedent for this work would be helpful. In the paper we now cite the following. (i) A specialized graph neural network architecture was developed in [1] to explore a problem related to Kazhdan-Lusztig polynomials and insights from this work were used to prove some new theorems in [2]. (ii) In [3] the authors use machine learning to study the geometry of affine Deligne-Lusztig varieties and prove a new lower bound on the dimension of these geometric objects. (iii) We have added a description of some of our own ongoing work where an interpretability analysis of a model trained on a subclass of weaving patterns led to a new theorem. - *Differences in architecture performance, especially with respect to lattice paths* - Thanks for noting this. We have added some additional text to Section C which discusses this issue and speculates on some reasons that a given architecture may have underperformed on a dataset. 1. **Data representation:** Many of the datatypes (e.g., permutations, partitions, and lattice paths) are rarely used as input data within machine learning research. Consequently, we have a limited understanding of what representations are optimal for a given architecture (there are at least a dozen common representations of permutations in algebraic combinatorics and many look very different). Further, some small-scale experiments we have run suggest (unsurprisingly) that the best representation is partially determined by the solution to the task. 2. 
**Implicit bias or architectural priors:** We speculate that in certain cases architectural priors or implicit bias coming from the training routine may be misaligned with the problem solution. We note on line 1326 that the claimed sensitivity bias of transformers may be at odds with certain problems in combinatorics where the solution requires learning a map that is very sensitive to small changes to input (e.g., permutation parity). - These issues would probably be best explored in smaller, ‘toy’ settings where possible solutions are well-known, rather than the open problems of this collection. - We were also surprised by the inability to obtain good performance with transformers on the lattice paths dataset and had several team members train on these independently to confirm this issue. Currently, our best guess is that because we restrict to covering relations (a small subset of all possible order relations), our dataset ends up being sparsely sampled in terms of the problem space. We have seen transformers underperform in similar settings. - *Dataset diversity claim* - The reviewer is correct that this is mostly speculation. We will soften our language. - *The computational cost of training models* - We have added a table to the appendix to capture these statistics. - *Concatenating the appendix pdf with the main paper* - Unfortunately, ICML does not let us submit these as one document. - *Mention of Lean, Coq, and Graffiti* - Thank you for pointing this out, we now reference these. - *Some datasets are relatively small, raising concerns about generalizability to more complex mathematical settings.* - We would argue that most of these datasets (other than possibly mHeight) represent complex mathematical settings. The datasets corresponding to open problems must be complex in some sense since mathematicians have made careers out of trying to solve them and yet they remain open. 
Even the datasets associated with problems that are not open, such as characters of the irreducible representations of the symmetric group (where a combinatorial algorithm has been known for over 70 years), still have many questions associated with them (for instance, when characters are zero or not). In all cases we provide or point to code that allows the user to generate larger datasets for larger $n$. - Finally, while we hope that our datasets will be useful to evaluate the efficacy of ML for math techniques, we stress that a non-generalizing method that helps solve one of these open problems would be a huge accomplishment. [1] Davies, Alex, et al. "Advancing mathematics by guiding human intuition with AI." Nature 600.7887 (2021): 70-74. [2] Blundell, Charles, et al. "Towards combinatorial invariance for Kazhdan-Lusztig polynomials." Representation Theory of the American Mathematical Society 26.37 (2022): 1145-1191. [3] Dong, Bin, et al. "Machine Learning assisted exploration for affine Deligne–Lusztig varieties." Peking Mathematical Journal (2024): 1-50. --- Rebuttal Comment 1.1: Comment: Thanks for your clarification! I raised my rating from 3 to 4.
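The permutation-parity sensitivity point mentioned in the rebuttal above can be made concrete with a small sketch (our own illustration, not code from the paper): parity depends on the total inversion count, so a single transposition anywhere in the input flips the label, which is exactly the kind of highly input-sensitive map discussed.

```python
from itertools import combinations

def parity(perm):
    """Parity of a permutation in one-line notation:
    +1 if the number of inversions is even, -1 if odd."""
    inversions = sum(1 for i, j in combinations(range(len(perm)), 2)
                     if perm[i] > perm[j])
    return 1 if inversions % 2 == 0 else -1

# A single swap of two entries flips the parity label:
assert parity([0, 1, 2, 3]) == 1    # identity: zero inversions
assert parity([1, 0, 2, 3]) == -1   # one transposition: one inversion
```

A model solving this task must respond strongly to minimal edits of its input, which is the claimed tension with the sensitivity bias of transformers.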
Summary: The paper introduces the Algebraic Combinatorics Dataset Repository (ACD Repo), a collection of nine datasets for using AI to advance research-level algebraic combinatorics. A key contribution is the datasets' focus on open problems, each with a large collection of examples. The authors' clear aim is not just to create a new benchmark (models can easily exceed 90% on some problems) but to see whether AI can extract insights that ultimately lead to conjectures. The paper provides initial baselines on the different datasets using different AI models, with or without the language component. Claims And Evidence: No particular claim has been made. Methods And Evaluation Criteria: The authors are not proposing a new method, but a new dataset. Theoretical Claims: There is no theoretical claim in the paper. Experimental Designs Or Analyses: The work includes experimental results of standard architectures applied to the datasets. Supplementary Material: N/A Relation To Broader Scientific Literature: This is one of the first attempts to propose challenging and open problems for the AI community. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths 1. Their focus is really on the conjecturing process, which is a sizable chunk of research activity. 2. The paper includes examples of how interpretability analysis and LLMs can be used to extract mathematical insight and generate conjectures. Other Comments Or Suggestions: N/A Questions For Authors: 1. Is there any particular reason why some problems are not interesting outside a certain boundary? For example, looking at the Github it says that for a particular problem N can be only 6/7/8. 2. There are math areas where even small Ns lead to intractable problems. Can you clarify if this is the case for the area as well? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the interesting questions. We provide answers below. - *Is there any particular reason why some problems are not interesting outside a certain boundary? For example, looking at the Github it says that for a particular problem N can be only 6/7/8.* - This is a good question. In most cases we have provided the code for a user to generate any $n$ they are interested in. Of course, for sufficiently large $n$ it becomes too compute or memory intensive to generate or store the full dataset (storing all permutations of 20 elements is expensive) and at some point, even computing individual instances becomes expensive. In most cases, we decided to provide several datasets that sit at what we would consider the “sweet spot”, large and complex enough to be used to train model ML architectures but small enough that researchers with limited compute budgets could still work with them (this is generally in the 10K to 10M range). We will plan to add this reasoning into a section at the beginning of Section B. - *There are math areas where even small Ns lead to intractable problems. Can you clarify if this is the case for the area as well?* - One can easily find problems where data is very hard to generate beyond the trivial size or where specific families of examples are hard to generate. One example of the latter comes from Kazhdan-Lusztig polynomials with $\mu$-coefficient (Section 4.4) which is neither 0 nor 1. This is an area of interest to researchers. Unfortunately, the first such instance appears for $n=10$ at which point there are $(10!)^2$ polynomials (not necessarily distinct). We had initially aimed to include a problem around $\mu$ coefficients but abandoned it because the computational burden was too great. --- Rebuttal Comment 1.1: Comment: Thanks for these answers, I will keep the score!
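The scale claims in the answers above are easy to sanity-check with back-of-envelope arithmetic (our own illustration, not from the rebuttal): storing every permutation of 20 elements, or enumerating the $(10!)^2$ Kazhdan-Lusztig polynomial instances, is already far beyond practical budgets.

```python
import math

# All permutations of 20 elements, at (at least) one byte per entry:
n_perms = math.factorial(20)
assert n_perms == 2_432_902_008_176_640_000
print(f"~{n_perms * 20 / 1e18:.0f} exabytes just to store them naively")

# The (10!)^2 polynomial instances mentioned for the mu-coefficient problem:
n_kl = math.factorial(10) ** 2
assert n_kl == 13_168_189_440_000   # ~1.3e13 instances
```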
Summary: The authors introduce a collection of datasets called the Algebraic Combinatorics Dataset Repository, which contains 9 datasets, each including an open-ended research question and many examples that should be used to derive conjectures. The authors describe the mathematical background of each dataset as well as the results of training some neural models on them. The authors demonstrate how conjectures might be derived from machine learning models trained on this data, and leave the dataset as a good testbed for looking for conjectures. Claims And Evidence: The authors claim that the use of problems from algebraic combinatorics is a natural choice for their dataset, and I agree with them. The dataset is nicely curated to be useful to researchers with a lesser mathematical background, as the problems do not require much background to understand. Although I am not familiar with the particular problems included in the dataset, the authors do a good job motivating their importance/relevance for such a dataset. Methods And Evaluation Criteria: The authors target the field of using machine learning models on raw mathematical data to make conjectures. I believe that the methods they experiment with make sense, and are generally well motivated. However, the justification for machine learning methods assisting with making conjectures involves a GNN/XAI based approach, and a program synthesis approach using LLMs. However, the authors train MLPs and transformers on their data and report the resulting performance. This seems to be in contrast with the more explainable methods they cite and use, because they don't give any explanation of how one might use MLPs/transformers to extract conjectures. In this way the experiments they run on the datasets seem to be at odds with the broader goal of the dataset and the submission. Theoretical Claims: There are no theoretical claims in this submission. 
Experimental Designs Or Analyses: The concern I have about experimental design is the same as mentioned in the above section. While the authors train transformers and MLPs, they don't actually mention whether or how these would be useful models/training regimes for extracting conjectures. The GNN based approach and the program synthesis approach they actually experiment with seem to be much better candidates for demonstrating to researchers how one may go about producing conjectures. I think the paper should contain much more information in the main body about the program synthesis experiment. Supplementary Material: I did review the supplementary material and appreciate its comprehensive nature. Relation To Broader Scientific Literature: I think the submission has a nice spot in the broader scientific literature on AI-for-math. At present there have not been (to my knowledge) datasets of this kind that support mathematical conjecturing from raw data. This is probably in part due to AI-for-math research having a larger community presence of AI researchers, who are less familiar with mathematical results than researchers from the maths world. I think this dataset provides a good starting point for researchers interested in this area. Essential References Not Discussed: I think the related works section is quite slim in its present form. I think the authors can include at least the two following citations: the Ramanujan Machine (which generated conjectures relating to well known constants like \pi and e) and Graffiti, which generated conjectures in graph theory. I think the authors can also include further references on conjecturing; there are many in the literature. (Notably, what I suggested does not include machine learning, but I still think these are useful to include.) 
Other Strengths And Weaknesses: - Other Comments Or Suggestions: At the header of section 4.2 I think it may be better to leave the (Key Tool in the solution of a recently solved conjecture) in the body of the section, not in the header. My feeling is that this is too verbose. In section 5, the part about the challenges of making such datasets does not feel to be in the appropriate spot. This may be better included in the limitations of the work. I also believe that the program synthesis for Schubert polynomials, as an example, should appear first in the experiment section. This is because this is an experiment carried out by the authors. The GNN example involving (He et al. 2024) is not a contribution of this paper, and should be secondary to experiments carried out by the authors. On line 434 in the conclusion, I recommend changing `can't` to avoid using contractions. Also, I think you may mention FunSearch (Deepmind, '23) in your discussion of your program synthesis based approach - that methodology may be successfully applied to find conjectures fitting the datasets. Questions For Authors: 1. Can the authors clarify the purpose of evaluating MLPs and transformers on each of the problems? Would it not be better to use some other more interpretable methods? (I do understand you don't want to do the work for people who would use your dataset! But how might one actually use MLPs and transformers to create conjectures?) 2. By what process did you select these problems for your dataset? This information should also be included in the paper to enable others to find suitable problems for further exploration. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for providing feedback on the paper, especially for pointing out that we may need to provide more motivation for our choice of baselines and for suggestions on the paper’s structure. We provide responses to the points in the review below. - *Tension between baselining on MLPs and transformers while only providing example approaches that use more sophisticated methods.* - Thank you for bringing this up. We agree that we may not have effectively communicated the purpose of our baselines. Most approaches to extracting mathematical insight from an ML model first require one to have a model that is performant on some task related to the problem one cares about. Of course, this is just a necessary but not sufficient condition (the second step, extracting insights seems to be harder in our experience). Many potential users of these datasets (especially those coming more from the mathematics community) will have some experience and intuition for MLPs and transformers. Far fewer will have experience with GNNs or program synthesis paradigms. We hope that the baselines we provide will give the broadest possible audience a sense of how "hard" it is just to get a performant model (acknowledging that problem "hardness" ultimately depends on the method used). We also did not want to bias users toward only applying program synthesis approaches which, though powerful, also have limitations. We have added in a short section at the beginning of Section 4 clarifying the purpose of the baselines. - This all being said, we also recognize that program synthesis will be a topic of special interest at ICML this year so we will plan to move some additional program synthesis details back into the main body of the paper in our next revision. - Finally, we agree with the reviewer that adding a third example in Section 5 that uses either MLPs or transformers would be appropriate given their central place in the baselines. 
We have included a short description of some results we obtained on a subclass of weaving patterns. We obtained these by using clustering to identify prototypical patterns in the Shapley values associated with a performant model. Analysis of these prototypes guided us to a formal characterization of the matrices corresponding to this subclass. - *Program synthesis results first in Section 5* - This is a good point! We have now made this change. - *Mention of FunSearch* - Omitting this was an oversight. We have added it in, thanks! - *Essential references* - Thank you for pointing these out! We have added these. - *By what process did you select these problems for your dataset?* - In choosing questions we aimed to have decent coverage of the major areas of research in algebraic combinatorics. One way of doing this is to ensure that the major combinatorial gadgets (e.g., permutations, Young tableaux, etc.) are featured. Algebraic combinatorics draws from different subfields of algebra, for instance representation theory or algebraic geometry. We also tried to make sure we had some coverage from this angle as well. It is easy to make up problems which are open but which no one cares about. To avoid this, all of our open problems were suggested by two experts in the field. The machine learning researchers on the team then checked that these problems seemed reasonable for application of machine learning and then reformulated them in an ML-friendly format. We have now added several sentences at the beginning of Section 4 addressing this. - *Header on Section 4.2* - Thank you, we have changed this. - *Location of challenges section* - We agree with this comment and have made this change. - *Use of 'can't'* - We removed the contraction, thanks! --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply to my questions and notes! 
I agree that a third example using MLPs/Transformers in Section 5 (perhaps as the first, or second in order) would be very useful for this paper. It also was helpful to hear about how you chose these problems, thanks! I will keep my score.
Summary: The paper introduces nine datasets arising from problems in algebraic combinatorics. These datasets are meant to test capabilities of machine learning models on symbolic math tasks. The authors clearly motivate and describe each of the nine problems and the resulting benchmarks. Several baseline methods are tested on the benchmarks. Useful discussion is provided showing how development of ML models for these benchmarks can lead to progress in mathematics. **Update after rebuttal** The authors sufficiently addressed my objections, so I raised my score and I feel I can recommend the paper for acceptance. Claims And Evidence: The main contribution of the paper is the introduction of a novel benchmark, so the focus is on presenting the benchmark and not on stating and verifying claims. However, the authors want to show that the datasets are challenging for standard ML approaches, and to this end baseline experiments are conducted. I'm not completely sure how to interpret the results -- in fact, it seems that for the majority of the datasets the baseline methods (especially MLP) could achieve quite high performance. Methods And Evaluation Criteria: The authors in general choose appropriate methods and evaluation criteria for providing the baselines. Theoretical Claims: The authors introduce a bit of background theory in algebraic combinatorics to motivate the problems resulting in the datasets. I'm not familiar with this domain of mathematics, so I myself wasn't able to judge how appropriate the problems are -- but I believe the problems indeed are well-motivated and appropriate, as the authors seem to be well-versed in algebraic combinatorics. Experimental Designs Or Analyses: I didn't see any issues with the experimental design. Supplementary Material: The authors link to a repo with code for downloading and producing the data described in the paper. 
The code is in general documented and seems to be mostly / almost functional, however: - There are no installation instructions. - For some reason, one script is in Java; why? - The pre-generated data files need to be downloaded from various locations on Google Drive -- is there a reason for not including them in the repo? Or you could provide a link to download all the datasets at once, for convenience. - The Python notebooks have bugs or missing imports; for instance, when running `how_to_load_datasets.ipynb`, I see errors like ``` FileNotFoundError: [Errno 2] No such file or directory: 'grassmannian_cluster_algebras/3_4_12_valid_train.txt' ``` or ``` ... line 432, in load_kl_polynomial_data for k,t in enumerate(file_names[size]): ~~~~~~~~~~^^^^^^ KeyError: 8 ``` Another example: in `rsk_generation_with_sage.ipynb` there is an undefined `RSK` function. The code should be checked and all such errors should be eliminated. Relation To Broader Scientific Literature: This work is related to the literature on applying ML to math problems, and more specifically, to math benchmarks. There are several categories of existing math benchmarks: formal mathematics benchmarks, natural language math benchmarks of varying difficulty, and synthetic math problems. This work falls in the last category, and because there are not many such benchmarks, I find the contribution of the authors very useful. Essential References Not Discussed: I think the authors in general include relevant references, however, it would be perhaps good to discuss [1], where also synthetic math benchmarks for ML are created, and [2], where the studied ML task is to generate formulas describing integer sequences from OEIS. [1] Saxton et al.: Analysing Mathematical Reasoning Abilities of Neural Models. ICLR 2019\ [2] Gauthier, Urban: Learning Program Synthesis for Integer Sequences from Scratch. 
AAAI 2023 Other Strengths And Weaknesses: ### Strengths The paper introduces a novel and interesting benchmark which connects ML research with more advanced mathematics. In my opinion, such benchmarks are highly desirable, and may stimulate developments on both ML and mathematics side. The math problems and resulting data are well-described and motivated, making them accessible for non-experts. ### Weaknesses Some technical details related to training the baselines are missing, for instance: - What training hyperparameters were used (n. of epochs)? - What architectural hyperparameters were used (n. of layers)? - How logistic regression was optimized? Most of the problems can be solved with the baseline methods with quite high accuracy, which makes one wonder how difficult the benchmarks actually are. ### Conclusion In general, I find the work very useful, and once the authors resolve the issues raised (the problems with the code, missing technical details), I'm willing to raise the score. Other Comments Or Suggestions: It would be better to provide appendix in the same pdf as in the main text -- now some links (e.g. Table 3) do not work. In Table 2, in the caption you mention Claude 3.5 Sonnet but in the table itself it is not present. The captions of the tables should be placed above, per formatting instructions. "this example comes from (war)" -- this reference needs fixing Table Table --> Table these these --> these Machine Learning meets --> Machine Learning Meets Questions For Authors: 1. I see that MLP in general performs better than the transformer, but sometimes it's the other way around (e.g., on "cluster algebra quivers"). Do you maybe have a hypothesis why? 2. What are the numbers presented in Table 3? is it mean squared error? 3. I didn't understand how the RSK problem is cast as a regression task -- could you elaborate on that? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for all their thoughtful comments on the paper and for pointing out issues with the GitHub page. We provide responses to the points in the review below. - *“…it seems that for the majority of the datasets the baseline methods … could achieve quite high performance.”* - This is a good point and one that we tried to address in lines 086-091. Our goal is a little different from a standard benchmark where model performance alone is what we care about (this is one reason we ultimately decided to remove the word 'benchmark' from the paper). When looking for mathematical insight, being able to train a model to perform the task associated with an open problem is a necessary, but not sufficient condition. We must also be able to extract mathematical insight from the model. A model that has just learned a ‘bag of heuristics’ might be highly performant but would probably be hard to extract clean theorems from. The inclusion of the benchmarks was meant to give the reader a better sense of how challenging the first step is (getting a performant model). We have now clarified this in the text. - *Installation instructions* - Thank you for pointing this out. These have now been added to the [GitHub repository](https://github.com/icml2025-43403439/43403439/blob/main/README.md#environment-installation). - *Java program* - This was code written by one of our mathematician collaborators for their research prior to the inception of the ACD repo. The Java version has now been replaced with a Python version. - *Errors when loading data* - Thank you for pointing these out. We now believe we have fixed all the issues listed. - *One link to download all datasets* - This is a good idea! We now have a single script (`load_datasets.py`) which can be run to download all datasets, unzip them, and move them to the correct folder (other than KL polynomials, RSK, and Schubert, which are too large to download programmatically via the API). 
- *Additional references* - Good catch! We have added both. - *Training and architectural hyperparameters* - The range of hyperparameters (including number of layers, depth, etc.) that we explored can be found in Section C.1 of the Supp. Material. We will add a table with the specific hyperparameters we used for each dataset (an abbreviated copy of the transformers table can be found below). We have also added training epochs (60), batch size (varied between datasets), loss function (cross-entropy for classification and MSE for regression), and Adam hyperparameters (the Pytorch default other than learning rate) into Section C.1. | Dataset | LR | Depth | Dimension | Heads | | -| - | - | - | - | | Lattice, $n=10$ | 0.001 | 4 | 80 | 8 | | Lattice, $n=11$ | 0.0005 | 4 | 80 | 4 | | Lattice, $n=12$ | 0.0005 | 6 | 40 | 8 | | Weaving, $n = 6$ | 0.0001 | 4 | 80 | 8 | | Weaving, $n = 7$ | 0.0001 | 4 | 80 | 8 | | Quivers | 0.0005 | 6 | 80 | 4 | | Grassmannian | 0.0005 | 6 | 80 | 4 | | Schubert $n = 4$ | 0.0001 | 4 | 80 | 6 | | Schubert $n = 5$ | 0.0005 | 4 | 40 | 6 | | Schubert $n = 6$ | 0.0005 | 4 | 80 | 4 | | mHeight, $n = 8$ | 0.0001 | 4 | 80 | 4 | | mHeight, $n = 9$ | 0.001 | 6 | 20 | 8 | | mHeight, $n = 10$ | 0.001 | 4 | 80 | 8 | | $S_n$, $n = 18$ | 0.001 | 6 | 80 | 6 | | $S_n$, $n = 20$ | 0.001 | 4 | 80 | 8 | | $S_n$, $n = 22$ | 0.001 | 4 | 40 | 8 | | RSK, $n = 8$ | 0.001 | 4 | 20 | 6 | | RSK, $n = 9$ | 0.001 | 4 | 80 | 4 | - *Logistic regression optimization?* - A short description is provided in Section C.1. We used the default settings for logistic regression provided by sklearn (we have now replaced the word 'standard' by 'default'). - *MLPs vs. transformer performance* - This is a great question to which we mostly don’t have answers (especially for open problems where the solution is unknown). We see the choice of data representation as one critical unknown for many of these datasets. 
For example, what is the best way to represent a permutation when using a transformer vs MLP? We have also seen evidence that the transformer's sensitivity bias limits it in certain math settings (e.g. parity problems). - *Typos* - These have now all been addressed, outside of the supplementary material being split from the main body. Unfortunately, ICML does not let us submit these as a single document. - *The meaning of Table 3* - Correct. We have added a note to clarify this. - *RSK regression task* - Permutations are represented as their inversion vector, a $\{0,1\}$ vector whose first entry is 1 if the permutation inverts the ordering of 1 and 2, whose second entry is 1 if the permutation inverts the ordering of 1 and 3, etc. This vector is the target of regression. Doubtless better frameworks could be devised by one interested in this task, but that was not our goal in the baselines. Clarification has been added to Section B.6. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal and fixes in the code! I'm increasing my score, and have two additional remarks: (1) Thank you for the `load_datasets.py` script, it seems to work, but not for all datasets. For example, for RSK I see: ``` Downloading rsk from Google Drive (file_id=1CfuxD_XgTefbEduxJnXgXoUOt-GY-smq) to datasets/rsk.zip... Finished downloading rsk. Unzipping rsk.zip into data... [ERROR] rsk.zip is not a valid zip file - you will need to download the data manually. ``` Could you have a look what happens here? (2) > Permutations are represented as their inversion vector [...] This vector is the target of regression. Regression predicts a continuous variable, so is this binary vector cast as a continuous variable somehow? --- Reply to Comment 1.1.1: Comment: Thank you for your additional remarks! (1) *The `load_datasets.py` script does not pull all the datasets*. For large files (> 150MB), Google Drive prompts for manual confirmation, so `load_datasets.py` cannot easily pull them programmatically. 
To avoid having to manually download the RSK/Schubert/KL datasets, we have collected [all the data into a new link here](https://drive.google.com/file/d/1A5DlXHj81c5JlgpuCoS5FtMahbSHAyyj/view), and have updated the README.md to describe both approaches for pulling the datasets: > ### Downloading the datasets > You have two options for downloading the datasets. > > 1. Manually via a Google Drive link > > The datasets can be downloaded here: [https://drive.google.com/file/d/1eWcXsNPAsCJMsVqcoYscUUz9S0-wjKnY/view?usp=sharing](https://drive.google.com/file/d/1A5DlXHj81c5JlgpuCoS5FtMahbSHAyyj/view). > After downloading the file, unzip it into a folder called `data` (assuming you have put the downloaded zip file in the same directory as this README): > ```bash > unzip all_data.zip -d data > ``` > > 2. Programmatically via the Google Drive API (grabs 6/9 datasets) > > To download the data into a folder called `data` run: > ```bash > python load_datasets.py > ``` > Note however that the datasets for KL polynomials, RSK, and Schubert polynomials are too large to download programmatically via the Google Drive API and have to be downloaded manually. See their respective README files for more details. (2) *Representing the RSK task as a regression problem by casting the binary vector*. That is correct. What we are calling the inversion vector is a vector of 0’s and 1’s which we convert to floats and treat as a vector with continuous values. MSE loss is then computed across entries in the vectors.
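The inversion-vector regression target described in the exchange above can be sketched as follows (our reading of the authors' description, not their code; we assume value pairs are ordered (1,2), (1,3), ..., (2,3), ... and permutations are given in one-line notation):

```python
from itertools import combinations

def inversion_vector(perm):
    """{0,1} target vector over value pairs (a, b) with a < b,
    cast to floats so it can be regressed with MSE loss."""
    pos = {v: i for i, v in enumerate(perm)}   # value -> position
    return [1.0 if pos[b] < pos[a] else 0.0
            for a, b in combinations(sorted(perm), 2)]

# (2, 1, 3) inverts only the pair (1, 2); the vector has n(n-1)/2 entries:
assert inversion_vector([2, 1, 3]) == [1.0, 0.0, 0.0]
assert len(inversion_vector([2, 4, 1, 3])) == 6
```

Treating these float vectors as continuous regression targets under MSE loss matches the casting described in Reply 1.1.1.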
Behavioral Exploration: Learning to Explore via In-Context Adaptation
Accept (poster)
Summary: This paper presents a method to predict expert actions based on past observations and to predict how "exploratory" the expert's behaviors are relative to the context. This design enables the model to mimic the behavior of an expert and to select the most relevant experiences to explore. --- Update after rebuttal: Many important results, such as tabular results, have not been provided. The supplementary videos lack descriptions and comparisons. This paper still needs more polish. The reviewer's score is unchanged. Claims And Evidence: 1. The claim in the introduction is that existing works are "slow and expensive." This is not supported by any evidence or citations. 2. The introduction example claims that the robots should not explore "pick up a plate or pick up the same cup." The reviewer doesn't agree with this, because exploration (even via unsuccessful attempts) can be quite beneficial. Exploring wrong experiences can help avoid erroneous actions. For example, exploring the action of picking up a plate can help avoid making such a mistake. Methods And Evaluation Criteria: 1. Measuring coverage through the inverse feature matrix needs better motivation. A comparison to similar lines of work in this domain (on the coverage scheme) is highly recommended. 2. D4RL is a relatively new benchmark; evaluating other existing standard benchmarks (e.g., Atari or MuJoCo) is recommended. Theoretical Claims: N/A Experimental Designs Or Analyses: 1. Variances are not shown in Figures 2-4. 2. Tabular results should be provided as well, at least in the appendix. 3. Supplementary videos are recommended. 4. It seems that Figures 2-4 do not present consistent results. For example, HILP doesn't always outperform SUPE. Supplementary Material: Yes. More video demos are recommended. Relation To Broader Scientific Literature: The reviewer cannot agree with the motivations behind this work (see the above comments). 
Therefore, the significance of this work is questionable. Essential References Not Discussed: The literature review is reasonably extensive. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: See above sections. Questions For Authors: Please answer the questions mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We have added videos illustrating the performance of our approach and baselines on our real-world WidowX setting [here](https://drive.google.com/drive/folders/1-B7kgD9lXVR41WyhtR1XuLepQXI8QR7W?usp=share_link), and provide clarification on additional points below. ### The claim in the introduction is that existing works are "slow and expensive”… It is well-known that randomized exploration, on which the majority of practical RL approaches are based, is “expensive” in the sense that it requires many interactions before it discovers the correct behavior (see e.g. [1]). Furthermore, most RL algorithms update their behavior “slowly” by gradually fitting a value function to newly collected data, and updating their policy to maximize this value function. While this approach can fit the data after sufficiently many gradient steps, to avoid training instability one must typically limit the number of gradient steps per sample, causing the policy to lag behind the behavior that could in principle be learned from the data (see e.g. [2]). We believe these statements are backed up by our experiments (Figures 2 and 3, in particular), which illustrate that our approach of in-context adaptation leads to much faster exploration and task-solving ability than RL approaches, and furthermore these observations are common throughout the RL literature. ### The introduction example claims that the robots should not explore… While we concur that in certain settings—in particular, when very little prior knowledge is available—taking “wrong” actions is indeed useful, in settings where one already knows an action is “wrong” taking this action is of limited usefulness. For example, if the agent knows what a “cup” is, then it already knows that picking up a plate is incorrect, so attempting to pick up the plate does not help it learn anything new about which cup it should pick up. 
Furthermore, as there are many possible "wrong" behaviors, exploring them all is very sample inefficient. The focus of this paper is on scenarios where the prior knowledge (in our case, in the expert demonstrations) is rich enough to avoid many “wrong” behaviors, and where we want to quickly explore over potentially correct behaviors. We believe this is useful in many real-world settings. For example, ideally we would want a robot policy that follows instructions (when we tell it to “pick up the pot” it picks up a pot, and not some other object) but that can try different behaviors until it picks up the correct pot. Our experiments illustrate that OpenVLA, a state-of-the-art generalist robot policy, can do the former yet not the latter, and existing work on exploration can achieve the latter (after many samples attempting every possible behavior) but not the former—our work fills a gap in the literature by enabling a policy to achieve both simultaneously. ### D4RL is a relatively new benchmark… We would like to emphasize that D4RL is nearly 5 years old and has become the standard benchmark for nearly all work in offline RL since its release (see e.g. [3]-[5]). Furthermore, other RL benchmarks (such as Atari) do not have standard offline datasets—which is critical in our setting—and are therefore not amenable to this work. ### Variances are not shown… Figures 2-4 do include error bars—see, for example, the BC curve of Figure 2. As stated in the main text, these error bars denote 1 standard error. ### Tabular results… We will include tabular results in the final version of the paper, as requested. ### Consistent results in Figures 2-4 Figures 2 and 3 consider different metrics—Figure 2 plots the number of goals reached, while Figure 3 the total number of regions reached—and Figure 4 a different environment entirely, and so we would not necessarily expect the ordering to be the same. 
### Measuring coverage… As noted in footnote 1 in our paper, measuring coverage through the inverse feature matrix is a canonical technique in statistics and, furthermore, is common in the RL theory literature (e.g. [6]-[8]). [1] Dann, Chris, et al. "Guarantees for epsilon-greedy reinforcement learning with function approximation." ICML, 2022. [2] Li, Qiyang, et al. "Efficient deep reinforcement learning requires regulating overfitting." ICLR, 2023. [3] Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." NeurIPS, 2021. [4] Kostrikov, Ilya, et al. "Offline reinforcement learning with implicit Q-learning." arXiv, 2021. [5] Kumar, Aviral, et al. "Conservative Q-learning for offline reinforcement learning." NeurIPS, 2020. [6] Jin, Chi, et al. "Provably efficient reinforcement learning with linear function approximation." COLT, 2020. [7] Wagenmaker, Andrew J., et al. "Reward-free RL is no harder than reward-aware RL in linear Markov decision processes." ICML, 2022. [8] Zhou, Dongruo, et al. "Nearly minimax optimal reinforcement learning for linear mixture Markov decision processes." COLT, 2021. --- Rebuttal Comment 1.1: Comment: Good moves. These points will definitely improve the draft. However, these are major revisions, and although the authors promise to make many changes, the reviewer cannot evaluate the modified draft due to the challenging reviewing policies and cannot decide whether to change his mind before seeing the revision. A resubmission is therefore recommended. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for their follow-up response. We respectfully disagree, however, with the assessment that the proposed changes entail a major revision. The changes we have outlined in response to the reviewer’s initial feedback are primarily clarifications, and we will plan to highlight the same clarifying points in the final version of the paper as we have highlighted in our rebuttal. 
As these are primarily small expositional changes, we believe they are well within the scope of the standard revisions that occur in the review process at ICML. If the reviewer feels that our rebuttal has addressed their concerns, we would greatly appreciate if they would be willing to reconsider their assessment of our work. Best Regards, Authors
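As a supplement to the exchange above on measuring coverage through the inverse feature matrix: the idea referenced in the rebuttal is the standard elliptical bonus from the linear-RL literature. A minimal sketch follows; the feature map, dimensions, and data here are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def elliptical_coverage(phi_history, phi_query, reg=1.0):
    """Elliptical coverage bonus phi^T (Lambda + reg*I)^{-1} phi, where
    Lambda is the empirical second-moment matrix of past feature vectors.
    Small values mean the queried direction is already well covered."""
    d = phi_query.shape[0]
    Lam = reg * np.eye(d) + phi_history.T @ phi_history
    return float(phi_query @ np.linalg.solve(Lam, phi_query))

# Toy history concentrated along the first feature direction:
hist = np.zeros((100, 4))
hist[:, 0] = 1.0
covered = np.array([1.0, 0.0, 0.0, 0.0])
novel = np.array([0.0, 0.0, 0.0, 1.0])
# The well-covered direction gets a much smaller bonus than the novel one.
print(elliptical_coverage(hist, covered))  # ~0.0099 (= 1/101)
print(elliptical_coverage(hist, novel))    # 1.0
```

Directions the history has visited often receive a small bonus, while unvisited directions receive a large one, which is why this quantity serves as a coverage (or exploration) signal.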
Summary: This paper proposes a method—Behavioral Exploration (BE)—that is capable of online in-context task adaptation while learning offline from a set of reward-free expert demonstrations. Such an ability is achieved by conditioning the policy on the coverage measure, which reweights the probabilities of less frequent trajectories while remaining under the distribution of expert policies. This approach is compared with existing exploration algorithms, zero-shot RL methods, and a modern VLA in maze-like and robotic environments, and also on a real-world robotic task. The claimed contributions of the paper are the method’s ability to generalize to new goals faster than baseline approaches and the scalability to both simulated and real-world robotic tasks. Claims And Evidence: There are three main claims in the paper: - [C1] The proposed method is able to learn from possibly suboptimal RL data to quickly adapt to the task. This claim is supported by experiments on two D4RL benchmarks: AntMaze and Franka Kitchen. The proposed method seems to be more sample efficient while exploring the AntMaze environment, but acts on par with zero-shot HILP and falls short of SUPE. To my mind, the evidence the authors provide is not enough to draw conclusions of faster adaptation, given the similar performance of BE and the existing methods. [Concern 1] At the same time, the authors argue that their method is easier to train compared to unsupervised zero-shot methods. Although I have an intuition this might be true, I find this explanation unrelated to the original claim of fast adaptation. Thus, I find the evidence not convincing enough to support the claim of faster adaptation. - [C2] The method can learn to explore from diverse robotic demonstrations. To show the method is capable of in-context exploration, the authors choose the Libero 90 dataset, which consists of human demonstrations of different tasks in the Libero robotic simulator. 
The authors employ two approaches: first, they hide the final goal and measure the number of attempts their method and the baselines need to achieve the goal. In the second, the task is provided. For a baseline, the behavior cloning (BC) algorithm is used. In both scenarios, the BE approach finds the goal in fewer attempts compared to BC. [Concern 2] Although the experimental results are positive, the selection of baselines appears insufficient, as only BC is included. Based on the previous experiment, it would be reasonable to include unsupervised zero-shot RL methods, provided they do not require rewards during training. Without additional experiments, the claim is only weakly supported. [Concern 3] Besides that, I have a question for the authors. In Appendix B.2 they mention that when the goal is hidden, BC and BE are still provided with “a scene conditioning vector”. What does this vector consist of? - [C3] The method scales to real-world environments. To support this claim, the authors train their method on the BridgeData V2 dataset and set up a WidowX 250 robotic arm to solve three tasks, where it needs to reach the right object on a table. The trial is counted as a success if the right object was picked at least once in five attempts. Two methods are chosen for comparison: OpenVLA, a recent Vision-Language-Action model that receives an image and a language instruction to produce an action, and behavior cloning as the second baseline. It is shown that BE has more successful trials compared to the baselines, which indicates it is able to demonstrate exploratory behavior. [Concern 4] While the comparison with OpenVLA provides some insights, its relevance may be questioned. According to Appendix B.3, the language command for the VLA was fixed for all trials, whereas BE, being an explorative algorithm, adapts by conditioning on a coverage value to generate new trajectories. 
In contrast, the VLA is designed to follow language commands without feedback from the environment. Thus, after touching one of the objects once, the VLA does not receive any meaningful signal about whether the episode succeeded, which eliminates the incentive to try another object. For instance, it would be interesting to see how the results change if, after an initial unsuccessful trial, the model were prompted to select “another” object. Given these differences, the motivation for the current comparison could be further clarified. Overall, the claim appears plausible; however, it could benefit from a more comprehensive comparison and further investigation. Methods And Evaluation Criteria: I have raised some concerns regarding the baselines in [C2; C3]. Apart from them, the evaluation metrics and data seem reasonable. Theoretical Claims: I read the informal proposition in the main text. I did not check the correctness of the proof in Appendix A. Experimental Designs Or Analyses: I have raised my concerns in [C3] regarding the experiment design. Supplementary Material: I reviewed all parts of the supplementary material. Relation To Broader Scientific Literature: The paper introduces a novel method that learns to explore from offline data and can adapt to new tasks by using the exploration strategy. To my mind, this is the first method that does not use rewards or learn their approximation during inference, working entirely with states and actions. However, novelty is not the only criterion for a paper to be of high quality; this paper could be improved by clarifying its claims, adding missing baselines, and justifying some of its evaluation choices. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: Strengths: 1. The novel method, which proposes in-context exploration entirely without rewards 2. Well-written with accessible explanations 3. Evaluation in a real-world setting Weaknesses: 1. 
No discussion or conclusion sections are found. Other weaknesses are listed in the section “Claims And Evidence”. Other Comments Or Suggestions: None Questions For Authors: I have numbered my concerns as [Concern N] in the 'Claims And Evidence' section. Discussing these concerns could potentially change my opinion and affect the score. [Edit] I have updated my score to 4. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We have added an additional unsupervised RL baseline to the [Libero experiment](https://drive.google.com/file/d/19kFpxrczDTiXL1S0rL1FfX2zXwAT1qVZ/view?usp=share_link), which we found performed worse than both BE and BC, and have also run additional experiments with OpenVLA, testing out the reviewer’s suggestion for enabling OpenVLA to explore. See below for discussion on these points and additional clarifications. ## [Concern 2] New RL Baseline on Libero We have added SUPE, the most effective unsupervised RL baseline, on Libero. Please see Figure 1 [here](https://drive.google.com/file/d/19kFpxrczDTiXL1S0rL1FfX2zXwAT1qVZ/view?usp=share_link). As illustrated, SUPE performs significantly worse than both BC and our approach. We note that the number of episodes we consider for Libero (15) is significantly less than what is typically required by RL algorithms to effectively learn, so we do not find this result surprising—it further illustrates the necessity for fast in-context adaptation as compared to standard RL gradient-based updates. ## [Concern 4] New OpenVLA Experiments We make several comments on our inclusion of OpenVLA as a baseline. First, our goal in including it is to demonstrate that existing state-of-the-art methods for robotic control do not effectively explore over the space of possible tasks when the goal specification is ambiguous. In other words, OpenVLA does not already solve the problem we are attempting to solve—if the specified goal is ambiguous, OpenVLA does not attempt different behaviors until it finds the correct one, it simply chooses the behavior it deems most likely. Second, as the reviewer has highlighted, OpenVLA receives no feedback on whether it has succeeded. 
This is fundamental to OpenVLA, however, as well as most other current state-of-the-art BC methods for robotic control: OpenVLA is not trained to condition on a history of past interactions or observations, and therefore fundamentally cannot adapt online. Our approach aims to fill precisely this gap, and we hope it will be incorporated in future policy training to enable effective adaptation to history. We note as well that, as discussed in our reply to Reviewer kepy and illustrated by our new BC with history conditioning baseline on Libero, naively conditioning on history can actually hurt performance—our approach may be a way to instead improve performance when conditioning on history. Given this, instructing OpenVLA to pick up “another” object is an ambiguous command—since OpenVLA has no history dependence, it has no grounding for what “another” object is. Nonetheless, we ran an additional trial with OpenVLA on the command “pick up the other silver pot”. We evaluate on the task illustrated in Figure 18, and give the results below, indicating which fraction of episodes the policy goes to the pot on the left, the right, or fails to grasp at either, and compare to our original command.

| Command | Left | Right | Neither |
|-|-|-|-|
| “pick up the silver pot” | 1/8 | 7/8 | 0/8 |
| “pick up the other silver pot” | 2/8 | 4/8 | 2/8 |

As this illustrates, while this command does cause OpenVLA to move to the left pot somewhat more frequently, it also hurts its ability to move to either pot at all (likely because this command is somewhat outside the training distribution for OpenVLA). ## [Concern 1] Ease of Training and Evidence for Faster Adaptation To clarify our claim that the method is easier to train, this is indeed somewhat tangential to the claim of faster exploration and adaptation online, the primary claim of our paper and the main focus of our experiments. 
However, our approach does lead to lower computational overhead and simplicity at deployment compared to the baseline RL methods, all of which require online gradient updates and hyperparameter tuning. While this is an advantage of our approach, we see this as complementary to our main objective. We would also like to emphasize that BE is able to explore significantly faster on Antmaze and Libero than all RL baselines, supporting our main claim of faster adaptation and exploration. For example, to reach the final performance of SUPE, the best RL baseline, it requires only 15% of the samples. While the performance between BE and the baseline RL methods is similar on Franka Kitchen, we believe the strong performance on Antmaze and Libero highlights that in many settings, our approach leads to a substantial gain. ## [Concern 3] Scene Conditioning As noted in the paper, Libero 90 consists of 90 tasks distributed across 21 scenes. The scene conditioning vector is then simply a one-hot vector of dimension 21 indicating which scene the agent is operating in. While this information can be inferred entirely from visual information (the scenes appear visually distinct) we found that adding this conditioning improved the performance of both our approach and BC slightly. This is not critical to performance, however.
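The scene conditioning described in the rebuttal above (a one-hot vector of dimension 21 indicating the current Libero scene) can be sketched as follows. The visual embedding size and the concatenation scheme are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def scene_one_hot(scene_id: int, num_scenes: int = 21) -> np.ndarray:
    """One-hot indicator of the Libero scene, appended to the policy input."""
    v = np.zeros(num_scenes, dtype=np.float32)
    v[scene_id] = 1.0
    return v

# Hypothetical policy conditioning: a visual embedding plus the scene indicator.
visual_embedding = np.zeros(128, dtype=np.float32)  # placeholder features
cond = np.concatenate([visual_embedding, scene_one_hot(4)])
print(cond.shape)  # (149,)
```

As the rebuttal notes, this indicator is redundant with the visual input in principle (the scenes look distinct) and is reported to give only a slight improvement for both BE and BC.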
Summary: This paper addresses the problem of in-context RL -- in particular, learning how to *explore* through in-context adaptation. The proposed method (Behavioral Exploration, BE) performs reweighted behavior-cloning over large expert datasets, with a long-context policy that also conditions on a variable representing how exploratory the expert’s behavior is. The pretrained BE policy is validated in downstream RL / IL tasks, and demonstrates improved performance over standard BC and RL approaches. Claims And Evidence: A key claim is that BE learns to explore over the space of expert behaviors from an expert demonstration dataset, which should speed learning on downstream tasks. - The experiments, however, do not evaluate BE as a pretraining method for any downstream tasks. Another key claim is that BE performs in-context adaptation. - The provided explanations and experiments do not totally convince me that BE is adaptively exploring in-context. - The results on D4RL show that BE learns a better policy and explores more than baselines, but do not demonstrate any *adaptive* exploration ability -- these results could also occur simply because BE is better at modeling the behaviors shown in the offline dataset. - I would be more convinced if the authors demonstrated that BE is able to learn faster on a completely unseen task than baselines. Methods And Evaluation Criteria: The authors consider naive BC as a baseline, conditioning on state only. How well would BC with the diffusion transformer policy backbone + conditioning on history, but without the reweighting objective, perform? Theoretical Claims: I did not check the theory. Experimental Designs Or Analyses: Yes. See claims. Supplementary Material: Yes, all except theory. Relation To Broader Scientific Literature: This paper examines how to equip a policy with exploration abilities based on contextual information. This relates to the emerging area of in-context RL and meta-RL. 
Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: Strengths - Overall, the paper seems well motivated - The proposed approach is based on BC, making it applicable to robotics settings. - Strong empirical performance in both simulated and real robot experiments Weaknesses - There are some places where a lot of jargon is used, making it difficult to grasp what the authors mean: - In particular, the section on selecting the history distribution (pg 5) is quite confusing. - L257: what is the induced distribution of online states? Induced by what? - Why is estimating the distribution of online states a fixed point problem? How is the fixed point problem specified? - In-context learning is not clearly defined. What criteria should be used to determine if a policy has successfully adapted its behavior online? Other Comments Or Suggestions: Typo in Line 880: affective -> effective Questions For Authors: 1. The notation in the paper suggests that the demonstration data is generated by a single expert, but the text argues that the demonstration dataset should contain multiple behaviors. Is there any reason why the demo dataset can only contain behavior from one expert? 2. How is the coverage objective, which attempts to cover behaviors shown in $\pi_\beta$, different from imitation learning approaches such as feature matching? 3. How can we decouple the performance improvement due to learning to explore, versus the performance improvement due to access to demonstration data? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We have run an additional requested baseline on Libero—BC with history conditioning, which we found performed worse than BC—and additional experiments illustrating that BE adapts to its history online; please see below for further discussion and the results [here](https://drive.google.com/file/d/19kFpxrczDTiXL1S0rL1FfX2zXwAT1qVZ/view?usp=share_link). ## New Experiment Showcasing BE’s Ability to Adapt Online Please see Figures 2 and 3 [here](https://drive.google.com/file/d/19kFpxrczDTiXL1S0rL1FfX2zXwAT1qVZ/view?usp=share_link) for an illustration of BE’s ability to adapt to history. To illustrate adaptivity, we run a BE policy trained on Antmaze Large, conditioning on either (a) history sampled from states visited in past online episodes (the strategy used in our experiments), or (b) histories containing only a single fixed trajectory. As our results illustrate, the history can have a significant impact on the agent’s performance: when conditioning on a single trajectory, the agent largely takes routes that avoid the space traversed by this trajectory, adapting its behavior to visit novel states. Given this, we see that the agent achieves significantly higher coverage conditioning on the past states it has observed, rather than a single fixed trajectory. ## New Experiment Showing History Conditioning Hurts Standard BC We have trained a BC policy with history conditioning on Libero—please see Figure 1 [here](https://drive.google.com/file/d/19kFpxrczDTiXL1S0rL1FfX2zXwAT1qVZ/view?usp=share_link). As can be seen, conditioning the BC policy on history actually hurts performance. We believe this is an example of “causal confusion” [1], a known phenomenon where conditioning on additional information can decrease the performance of BC by introducing spurious correlations. ## Learning on Downstream Tasks The reviewer states that: “A key claim is that BE learns... The experiments... downstream tasks”. 
We want to highlight that the main focus of the paper is not training on downstream tasks, but rather pretraining a policy that can explore effectively online, focusing its exploration on behaviors exhibited by the demonstration data. While existing work (e.g. [2]) has shown that a richer supply of data does improve policy learning on downstream tasks—and given this we believe that data collected with BE may indeed be useful for learning downstream tasks—this is not the main focus of the paper. We are instead primarily interested in how to pretrain a policy for exploration, and our experiments were chosen to reflect this. We will update the exposition in the paper to make this clear. ## Additional Clarifications *History Distribution*: At deployment, we condition the BE policy on the history of states it has already collected online. As any policy $\pi$ induces some distribution over states visited, the distribution of history encountered at test time depends on the learned BE policy. Ideally, we would like the distribution the BE policy was trained on to match this online distribution. As described in Section 4.3, this is challenging as we do not know what the state visitation distribution of the learned policy will be. Addressing this is a fixed-point problem in that we must first optimize a BE policy on some history distribution, deploy it to find what history distribution it induces, then re-optimize a policy on this new history distribution, and repeat until convergence. To avoid this complication, we suggest choosing the history distribution in training to be a uniform distribution over trajectories in the offline dataset, which we found worked well in practice. *Number of Demonstrators*: The offline data may be collected by different demonstrators; in this case $\pi_{\beta}$ is a mixture policy over all demonstrator policies. 
*How can we decouple the performance improvement…?*: While our approach can learn to explore effectively with any sufficiently diverse set of offline data, it focuses its exploration on the behaviors exhibited in the offline data. Therefore, while diverse low-quality data may be sufficient for training a policy that collects diverse data, it is not necessarily sufficient for successfully achieving a task; the demonstration data must contain examples of successful task completion for BE to learn these behaviors. Thus, both the learning to explore and the type of demonstration data are important for our approach. *How is the coverage objective…*: Would the reviewer be able to provide additional clarification for what they mean by “feature matching” here? One potential difference is that BE aims to induce *different* behaviors, while typical BC usually aims to induce the *same* behaviors. [1] De Haan, Pim, et al. "Causal confusion in imitation learning." NeurIPS, 2019. [2] Yarats, Denis, et al. "Don't change the algorithm, change the data: Exploratory data for offline reinforcement learning." arXiv preprint arXiv:2201.13425 (2022).
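The workaround described in this rebuttal for the history-distribution fixed-point problem (training on histories drawn from a uniform distribution over offline trajectories, rather than iterating train-deploy-retrain to convergence) can be sketched as follows. The trajectory format and function name are hypothetical, not the authors' code.

```python
import random

def sample_training_history(offline_trajectories, max_len):
    """Sample a conditioning history for training: pick a trajectory
    uniformly from the offline dataset, then take a random prefix of it.
    This sidesteps the fixed-point problem of matching the (unknown)
    online state distribution of the learned policy."""
    traj = random.choice(offline_trajectories)        # uniform over trajectories
    cut = random.randint(1, min(max_len, len(traj)))  # random prefix length
    return traj[:cut]

# Toy dataset: trajectories are just lists of states.
trajs = [[(0, 0), (0, 1), (0, 2)], [(1, 0), (1, 1)]]
history = sample_training_history(trajs, max_len=2)
```

At deployment the policy is instead conditioned on the states it has actually collected online, so the training distribution is only an approximation of the test-time one, which the rebuttal reports worked well in practice.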
Summary: The manuscript presents "Behavioral Exploration" (BE), a novel formalization for learning from expert demonstrations, while giving the ability to the policy/user to explore in a controlled manner (via some parameters). The main innovation is the Behavior Policy Coverage metric, and how to use it to condition on previous historical data to enhance exploration (or exploitation). The authors provide a practical algorithm and the results showcase that BE outperforms several baselines in RL and imitation learning scenarios. Claims And Evidence: Overall the authors' claims are matched and supported by the evidence provided. The authors provide proof for the main theoretical result, and many experiments and explanations for the empirical results. Methods And Evaluation Criteria: The D4RL and LIBERO datasets/scenarios and the real world experiments are suited to evaluate the task and provide meaningful comparisons overall. The RL and real-world experiments require some clarifications (see "Experimental Designs Or Analyses"), but otherwise the experiments are well-suited for validating the algorithm. Theoretical Claims: I did not check the correctness thoroughly, but the claim and proof make sense. Experimental Designs Or Analyses: I have a few comments here (tbh I was confused whether those comments fit this section or the section on "Methods And Evaluation Criteria"): 1) The RL setting is not described at all (the explanation in the supplementary is not enough). I understand that there is a lack of space, but we need to be more specific here. It is implied that the authors do something similar to two other papers, but we never see the exact pipeline. This is important as the authors might not even do what the other papers describe. Reading the paper alone (and not the other papers), it is not clear how the RL scenarios are set up. In other words, the policy learned by BE is used only for exploration? Is there policy adaptation based on newly acquired rewards/data? 
How is this incorporated with the BE policy? Overall, this part needs more explanation. 2) I am not sure I understand the real-world experiment. The authors say that this is "a reaching task", but then they say "we count a trial as a success if the policy interacts with both objects". Again, the setting here is poorly described. 3) The task success definitions and metrics (e.g. "we count a trial as a success if the policy interacts with both objects") are quite vague. Thus, this makes us skeptical about the results. If they are actually not so vague (but due to lack of space, the authors had to come up with a sentence), then they need to be defined more precisely in the text (or even the supplementary). Otherwise, the authors should perform new experiments with more rigorous metrics. 4) Aggregated performance curves can lead to misleading results. The authors should at least describe how they aggregated the results from different tasks. Supplementary Material: All of it (passed quickly over the proof). Overall, the supplementary is a good addition to the main text. Relation To Broader Scientific Literature: The ability to effectively explore vast spaces can be beneficial in many fields. Moreover, making it possible to "ground" exploration in real-world experience (via demonstrations and the conditioning) can be even more useful and beneficial, since it can shrink the exploration time while not losing interesting parts of the state space. Overall, the problem tackled by the paper is very interesting, and the paper provides one solution to it. Essential References Not Discussed: I did not find any. Other Strengths And Weaknesses: Strengths ------------- - Well written paper - The objective is clearly conveyed - The BE policy learning pipeline is nice and imho novel Weaknesses ----------------- - The authors could "give up" a few experiments (e.g. 
D4RL kitchen) and provide more meaningful discussions of the results - The aggregated performance curves Other Comments Or Suggestions: Very well written paper. Questions For Authors: - How would BE work in tasks where we require more fine control? E.g. when we require torque control? Or low-level joint space control? (The ant maze experiment does not require fine control imho) - What would be some more quantitative metrics for evaluating performance? "we count a trial as a success if the policy interacts with both objects" (and other similar metrics in the paper) are quite vague and cannot really be trusted Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. In the following we provide additional clarification on the success metrics and RL experiments, as well as several other comments the reviewer had. We will update the final version of the paper to include all these details. ## Clarification of Success Metrics For the real-world WidowX experiments, we define an “interaction” with an object as the end-effector of the WidowX making contact with the object (as stated in the supplemental: “We consider a reach to be successful if the end-effector makes contact with the object”). We note that in the settings we considered this is straightforward and unambiguous to evaluate. As we are evaluating the ability of each approach to explore the possible tasks in the scene, we run the policy 5 times, and count the overall sequence as a success if across these 5 trials both objects in the scene were interacted with—the end-effector made contact with each object at least once across the 5 episodes. For Libero and D4RL Kitchen, we utilize the built-in success detector in the simulator to identify success. For D4RL Antmaze, we utilize the same goal locations as in (Wilcoxson et al., 2024), and count a goal as reached if the agent reaches within distance 0.5 of a square of side length 1.5 centered at the goal point. For each setting, we evaluate on each task separately (e.g. each goal for Antmaze, or task for Libero), and the plots given in the paper are performance averaged across each task (see below for further clarification on aggregation of results). Figures 2, 4, 6, 7 therefore show what fraction of trials have resulted in a success up to a particular time (averaged across tasks and random seeds). We will add all these details to the final version of the paper. If the reviewer has any additional questions on the success metrics we are happy to provide further clarification. 
## Clarification on RL Problem Setting and Experimental Details For all RL baselines, we run them as described in their original works. In particular, for each setting the algorithm is given an offline dataset of transitions from the environment, but with no reward labels. With the exception of “Online RND”, each method we consider pretrains on this data (Online RND only trains online). After pretraining, each method is deployed online, and is evaluated on the number of steps it takes to reach the specified goal at least once. The BE policy is therefore used only for exploration, and is not fine-tuned at all during online deployment. However, as described in Section 4, the history of previous (online) interactions is fed into the context of the BE policy online, so that its behavior can adapt to the past experience—this adaptation is only “in-context”, however; no gradient steps are taken on the BE policy. For every other approach considered, with the exception of BC (which is not updated at all online), the policy is updated online with standard gradient-based RL updates (which, in particular, seek to maximize RND bonuses, inducing exploration). ## Aggregated Performance Curves We observe similar trends in the individual results as in the aggregated results, but for the final version will include in the supplemental individualized results on Antmaze (for each maze type and goal location) and Libero (for each of the 90 tasks). For the WidowX results, the supplemental already provides individualized results (see Figures 20-22). In all cases where results were aggregated (Antmaze, Libero, and WidowX settings), the aggregated results are the mean of the individual results. ## Fine-Grained Control In addition to Antmaze, which, as noted by the reviewer, does require joint-space control, the Franka Kitchen environment considered in Figure 4 also requires joint-space control.
We believe that these results indicate that our method does scale effectively to settings with joint-space control. More generally, we would expect our method to scale effectively to any setting on which BC performs well. As our method is essentially training on a BC objective augmented with a particular conditioning structure, we would expect it to, in general, perform at least as well as BC, which we found to be the case in all our experiments. The degree to which it learns effective exploratory behavior is a function of several additional factors, in particular the distribution of behaviors present in the offline data, so there may be cases where it is not able to learn more effective exploratory behavior than BC—we do not necessarily expect the type of control to affect this, however.
Ad Hoc Teamwork via Offline Goal-Based Decision Transformers
Accept (poster)
Summary: The paper introduces TAGET, a hierarchical framework for offline ad hoc teamwork (AHT) that leverages teammate-aware goal-conditioned Decision Transformers. The core idea is to dynamically predict teammate-aware return-to-go (TA-RTG) and sub-goals (TA-Goals) to enable real-time adaptation to unknown teammates without online interactions. Key innovations include a trajectory mirroring strategy for data efficiency, dual state encoders with regularization for partial observability, and a hierarchical structure combining high-level goal prediction with low-level action generation. Experiments in three environments demonstrate TAGET’s superiority over baselines, with an average performance improvement of 48.08%. Ablation studies validate the contributions of individual components, such as TA-Goal decoding and data preprocessing. Claims And Evidence: The authors raise issues such as poor generalization of the learned policy and challenges in accurately modeling teammates. However, the subsequent experimental section does not clearly demonstrate whether the proposed method effectively addresses these problems. For instance, how accurate is the modeling of teammates, and how well does the policy generalize? Methods And Evaluation Criteria: The method and validation don’t quite make sense. In my past experience with offline reinforcement learning, data augmentation doesn’t provide significant improvements in offline scenarios. Moreover, the accuracy of Teammate-aware goal prediction is hard to guarantee, and no experimental validation is provided. Theoretical Claims: No, there are no proofs in this paper. Experimental Designs Or Analyses: The evaluation of teammate modeling accuracy, policy generalization, and the ablation of return in the return-to-go function is missing. Additionally, experiments involving humans as teammates are not included. 
The experimental scenarios are relatively simple, and further testing in more complex environments, such as SMAC [1], is needed. [1] Wang, Caroline, et al. "N-Agent Ad Hoc Teamwork." arXiv preprint arXiv:2404.10740 (2024). Supplementary Material: Yes, I read the all parts. Relation To Broader Scientific Literature: This paper clearly situates TAGET within offline RL (e.g., Decision Transformer) and AHT (e.g., ODITS, LIAM), and highlights gaps in prior AHT methods (online dependency) and offline RL (lack of teammate adaptability). Essential References Not Discussed: [1] **multi-agent decision transformer**: Meng, Linghui, et al. "Offline pre-trained multi-agent decision transformer: One big sequence model tackles all smac tasks." arXiv preprint arXiv:2112.02845 (2021). [2] **offline teammate modeling**: Zhu, Zhengbang, et al. "Madiff: Offline multi-agent learning with diffusion models." arXiv preprint arXiv:2305.17330 (2023). Other Strengths And Weaknesses: **Strengths**: 1. The offline AHT problem addressed is novel. **Weaknesses**: 1. The method lacks innovation. Techniques such as dataset augmentation, local observation-based global state prediction, and decision transformers are not new. The authors need to further clarify how these techniques are organically combined. 2. The experimental section is insufficient, lacking evaluations of teammate modeling accuracy, policy generalization, and ablation studies on the return in the return-to-go function. 3. The method still relies on algorithms that collect data driven by diversity. I’m curious about the method’s robustness when applied to low-diversity datasets. Other Comments Or Suggestions: No. Questions For Authors: 1. What is the difference between teammate-aware return-to-go and conventional return-to-go? (Has it shifted from observation to global state?) 2. How does the method address the issue of “poor generalization of the learned policy” mentioned in the paper? 
If this isn’t addressed, it remains a point of concern. 3. Can the trajectory mirroring strategy be understood as a form of data augmentation? How is it different from traditional data augmentation? How does this strategy work when the dataset’s diversity is low? 4. What is the impact of dataset quality on performance? Does a dataset with low diversity greatly affect the performance of the method? 5. Can you further clarify how the various components of the method are integrated? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1. Questions in Claims And Evidence.** A1. We apologize for any confusion. Firstly, we perform implicit modeling of teammates to capture their changes and update our TA-goal, so there is no explicit metric to measure the accuracy of teammate modeling. As for the generalization of the policy, we demonstrate this in our experiments. Specifically, we test the policy with teammates that have never been seen during training. The fact that our method achieves higher scores when cooperating with these unknown teammates, compared to other baseline methods, serves as evidence of the generalization ability of our proposed approach. **Q2. Questions in Methods And Evaluation Criteria.** A2. Regarding your concerns about data augmentation, more details can be found in our response to Reviewer TMEP, Response A1. Regarding the accuracy of TA-Goal, during training, we use the real state after $k$ steps as the ground truth, which is a common practice in many works. During testing, since there are no trajectories for reference, we cannot directly measure the accuracy of TA-goal's prediction. Our goal during testing is not to achieve the absolute accuracy of the TA-Goal but to assess its ability to generalize to unknown teammates in AHT. **Q3. Questions in Experimental Designs Or Analyses.** A3. The evaluation of teammate modeling accuracy and policy generalization is answered in A1. Since TA-RTG is a predicted value rather than a fixed input, it cannot be ablated in the traditional sense. Besides, we agree that further testing in more complex environments, such as SMAC, is necessary, and we plan to conduct such tests. **Q4. Questions in Essential References Not Discussed.** A4. Thank you for pointing out these relevant works. We acknowledge the importance of the two and will include a discussion of them in the revised manuscript to better situate our contributions within the existing literature. **Q5. Concern about innovation.** A5. 
Thank you for the comment. Our method is not designed to simply combine existing techniques, but is driven by a novel problem setting - offline AHT. This new setting raises unique challenges, which motivate our methodological choices. Rather than applying existing tools directly, we analyze why previous approaches like DT are insufficient and propose tailored solutions. Our trajectory mirroring strategy is a data preprocessing method specifically designed for the AHT setting. It improves data efficiency and teammate diversity without introducing extra computational cost. After preprocessing, we learn policies from offline data. While we adopt the Decision Transformer (DT) architecture, we find that standard DT is not well-suited for our problem, as RTG cannot capture dynamic teammate behavior. To address this, we propose TA-RTG, a novel formulation that incorporates teammate-awareness. Furthermore, given the dynamic nature of the environment, we propose the novel concept of TA-Goal, a sub-goal representation predicted from TA-RTG, to guide the transformer’s decision-making in a hierarchical manner. **Q6. Concern about the method’s robustness when applied to low-diversity datasets.** A6. Thank you for raising this important point. While our method benefits from diverse offline data, we also aim to improve robustness through model design. Specifically, the proposed TA-RTG and TA-Goal mechanisms help the model infer and adapt to teammate behavior even under limited diversity. In our ablation studies, the variant without the trajectory mirroring strategy can be seen as a setting with reduced data diversity. We observe that the method still maintains reasonable performance under this condition, demonstrating a degree of robustness to limited diversity. Nonetheless, we acknowledge this as a valuable direction and plan to further explore robustness under low-diversity settings in future work. **Q7. Difference between TA-RTG and RTG.** A7. 
The main difference between the two is that TA-RTG considers teammate behavior in a team-shared reward setting. TA-RTG incorporates how teammates' actions influence the collective reward, adapting the agent's strategy based on the team context. TA-RTG shifts from relying solely on the agent’s observation to considering the team context, allowing it to adapt to changes in teammate behavior (TA-RTG and RTG ablation: https://anonymous.4open.science/api/repo/materials-E410/file/ablation_new.pdf). **Q8. Concern about poor generalization.** A8. Thank you for highlighting this important point. Our method addresses the generalization issue through both data-level and model-level designs. On the data side, the trajectory mirroring strategy increases teammate diversity. On the model side, the proposed TA-RTG and TA-Goal explicitly encode teammate behavior and adapt decision-making accordingly. These components together improve the model’s ability to generalize to unseen teammates. We will clarify this contribution further in the revised manuscript.
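To make the contrast drawn in A7 concrete, the conventional return-to-go that vanilla DT conditions on is simply a suffix sum of the episode's rewards. The sketch below is a minimal illustration of that standard quantity (not the paper's TA-RTG, which is predicted from the team context rather than computed from a fixed reward sequence):

```python
def returns_to_go(rewards):
    """Conventional return-to-go: rtg[t] = r_t + r_{t+1} + ... + r_T.
    This is a fixed function of the logged rewards, so it cannot reflect
    changes in teammate behavior at deployment time."""
    rtg, running = [], 0.0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return rtg[::-1]
```

For example, `returns_to_go([1.0, 0.0, 2.0])` yields `[3.0, 2.0, 2.0]`.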
Summary: The paper presents TAGET, a novel hierarchical framework designed to address the challenges of ad hoc teamwork in settings where only offline multi-agent interaction data is available. Traditional ad hoc teamwork approaches rely on real-time interactions, but TAGET circumvents the need for costly online simulations by learning robust policies from pre-collected datasets. At its core, TAGET integrates a high-level module that dynamically predicts a teammate-aware sub-goal, leveraging a new teammate-aware return-to-go signal and dual state encoders to capture the evolving team context, with a low-level, goal-conditioned Decision Transformer that generates adaptive actions. The framework also incorporates a trajectory mirroring strategy that repurposes each multi-agent trajectory by cyclically treating each agent as the ego agent, thereby enhancing data efficiency and capturing diverse decision-making perspectives. Extensive experiments conducted in environments such as Predator-Prey, Level-Based Foraging, and Overcooked demonstrate that TAGET significantly outperforms existing baselines in terms of overall return, stability, and generalization to unseen teammate behaviors, marking a substantial advancement in offline multi-agent reinforcement learning. ## Update After Rebuttal The authors have addressed part of my concerns. However, I still have some concerns about the method's potential impact: I hold a different view on whether a transformer-based method is the route toward the final goal of reconciling realistic deployment with offline training. In my experience, decentralized execution and parallel decision making are crucial features. For this reason, I raised my score from 1 to 2. Claims And Evidence: ### Supported Claims: 1. The claims of better performance than other offline methods are supported by strong experimental evidence. ### Problematic Claims: 1.
The authors claim that "TA-Goal captures the dynamics of teammate behaviors, providing more robust and adaptive guidance than RTG in complex multi-agent environments." However, I did not see evidence supporting this claim, and it is non-trivial to justify by common sense. 2. The authors claim that "To model this, we represent the team context in a stochastic embedding space $\mathcal{H}$, where it is encoded as a latent probabilistic variable $h_{i}$ drawn from a multivariate Gaussian distribution." However, I did not see any evidence to support this viewpoint. Why is the team context modeled as a multivariate random variable? Furthermore, why a Gaussian distribution? 3. The authors claim that "Similarly, $z_{i}$, as an approximation of $h_{i}$, is encoded into a stochastic embedding space, where it is represented as a latent probabilistic variable drawn from a multivariate Gaussian distribution." Can the authors provide any evidence to support this modeling strategy? 4. It seems like in Eq. (6), the first KL-divergence is a reverse KL-divergence. Can the authors provide more insight into this design? 5. In Eq. (8), the groundtruth goal is constructed by the concatenation of team observations. As a result, it is not a binary variable. Can the authors give an explanation about why the loss in Eq. (8) is formulated as binary cross entropy? Methods And Evaluation Criteria: ### Evaluation Criteria: The evaluation criteria used in this paper are acceptable for the AHT problem. ### Methods: The authors claim that the offline data is easy to collect and propose to use a Decision Transformer as an offline RL method to address AHT. However, I have some concerns about the difficulty of collecting offline data, which may affect the reasonableness of the proposed method. First, if we consider collecting offline samples from simulators, why not train the ego agent through interaction with the environment?
On the other hand, if we consider collecting data from real-world scenarios, how can long-horizon rewards be captured? It is not easy to track a team of agents constantly over a long episode, especially with accurate data records. For example, traffic data captured by CCTV cameras is recorded at a lower frequency than an agent's control rate. Say the distance between two cameras is 1 km; multiple decision-making steps may occur between them. On the other hand, if each agent collects data via a camera installed on its body, the alignment between timestamps could be extremely difficult. In conclusion, offline data collection is not as easy as the authors claim. Theoretical Claims: This paper has no theoretical claims. Experimental Designs Or Analyses: In general, the experimental design follows the conventional experimental setup of AHT. However, the description of how the teammate sets are generated may be missing some information. The authors mention that they have trained four distinct populations of MARL policies. What MARL algorithms are used? How are the four distinct populations formed? Supplementary Material: I have reviewed all contents shown in the supplementary material. Relation To Broader Scientific Literature: In prior work, the ego agent (learner) was not trained on an offline dataset. As I mentioned above, this could be due to the validity and difficulty of collecting offline data. In turn, this raises the question of whether the offline RL paradigm fits the aim of the AHT problem. The major issue of previous methods is how to construct diverse teammate sets (models). Given the same techniques for establishing diverse teammate sets, I question whether the offline method proposed in this paper is necessary. The direct evidence is that, in the experiments, the proposed offline method still appears to require diverse teammate sets, just as AHT with online interaction does.
If the problem of offline data collection is dismissed, the major contribution of this paper is proposing a training paradigm that does not need online interactions. The remaining question is whether offline data collected without interactions can actually match the quality of data collected with interactions. Essential References Not Discussed: Another recently published paper in ICML 2024 [1], which studies online interactions for AHT, could be cited in related works to enhance the understanding of the main contribution of this paper. [1] Wang J, Li Y, Zhang Y, Pan W, Kaski S. Open Ad Hoc Teamwork with Cooperative Game Theory. In International Conference on Machine Learning 2024 Jul 8 (pp. 50902-50930). PMLR. Other Strengths And Weaknesses: 1. The methods are well described; however, the reasoning behind parts of the procedure is not explained. 2. The proposed algorithm is novel in its idea of using offline collected data, though its applicability to real-world problems remains a concern. Other Comments Or Suggestions: In Eq. (8), $\hat{G}$ and $G$ interchangeably represent the ground truth in multiple places. I believe this could be a typo. Questions For Authors: The authors are requested to clarify the problematic claims listed in **Claims And Evidence**. In addition, the authors are requested to answer the concerns about the applicability of methods in **Methods And Evaluation Criteria** and the dataset generation in **Experimental Designs Or Analyses**. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your suggestions. **Q1. Concerns on the difficulty of collecting offline data and the applicability of the methods** A1. We apologize for the misunderstanding. Our claim that offline data is "relatively easy to collect" refers to its cost and safety advantages over online training, not to dismiss practical challenges. We use simulator-collected data to avoid exploration risks (e.g., collisions in autonomous driving) and improve data efficiency with trajectory mirroring. For real-world data, we address challenges like long-horizon rewards using multi-sensor fusion [1] and hierarchical reward decomposition [2]. While data collection is challenging, offline RL remains valuable for providing a safe, low-cost baseline and enabling future online refinement. **Q2. Concerns on empirical validation and intuitive explanation of TA-Goal's advantages over RTG** A2. Towards empirical validation, our experiments (Fig. 3) show that TAGET (TA-Goal-based) outperforms DT (RTG-based). Intuitively, RTG is a long-term scalar target that doesn't adapt to changing teammate behaviors (e.g., obstacle avoidance). TA-Goal, on the other hand, dynamically adjusts based on teammate intent, enabling adaptive actions (e.g., flanking maneuvers) as shown in Fig. 4. https://anonymous.4open.science/api/repo/materials-E410/file/ablation_new.pdf **Q3. Why the team context is modeled as a multivariate random variable with a Gaussian distribution** A3. The team context is modeled as a multivariate random variable to capture uncertainties in teammates' policies, enhancing decision robustness. The Gaussian distribution is chosen due to: (1) mathematical and computational advantages; (2) uncertainty modeling capability; (3) research alignment—Gaussians are standard in deep probabilistic models (e.g., VAEs [3][4]). Many existing works [5] adopt this approach, and we will state this more clearly in our paper. **Q4. The evidence to support this modeling strategy.** A4.
In partially observable settings, the ego agent infers the global team context $h_i$ from local observations $o_i$. KL-divergence regularization (Eq. 6) aligns $z_i$ with $h_i$, enabling latent team dynamics capture without global info. This method has been validated in previous works [5][6] for improved generalization. If additional evidence is needed, we will include it in the revised manuscript. **Q5. Eq.(6). The first KL-divergence is a reverse KL-divergence.** A5. The first term in Eq. (6) is a reverse KL-divergence, following the beta-VAE approach [4], which encourages the latent representation to match a standard normal prior, preventing overfitting and improving generalization. The second term uses forward KL to align the proxy and team context representations. **Q6. Eq.(8) explanation about why the loss is formulated as binary cross entropy** A6. Thanks for the useful feedback. The groundtruth goal $\hat{G}$ is constructed by concatenating discretized observations (e.g., grid-world positions, task states) and one-hot encoding the global state $s_{t+k}$, where each dimension represents a binary indicator. We use BCE loss per dimension to handle binary classification, which aligns with the discrete nature of our environments. Using MSE for continuous regression would be suboptimal, as the goal representation is categorical and requires probabilistic modeling of binary outcomes. We will also fix the inconsistency between $\hat{G}$ and $G$. **Q7. Concerns about the applicability of methods in Methods And Evaluation Criteria.** A7. While collecting offline datasets is challenging, it is manageable in AHT. The core challenge is enabling the ego agent to handle unknown teammates, which previous works have addressed in two ways: achieving generalization through algorithms and constructing diverse sets of teammates. Even with algorithmic generalization, a diverse offline dataset is still needed for better performance, which doesn’t mean it’s not worth exploring. 
The offline method is not meant to replace online interactions but is motivated by safety and cost considerations, and can be further optimized when combined with online methods. **Q8. The dataset generation in Experimental Designs Or Analyses.** A8. Thank you for your feedback. For more details, please refer to our response to Reviewer cnPV, A1. * [1] Li, et al. "Dynamic Semantic SLAM Based on Panoramic Camera and LiDAR Fusion for Autonomous Driving." IEEE TAITS 2025. * [2] Matsuda, et al. "Hierarchical Reward Model of Deep Reinforcement Learning for Enhancing Cooperative Behavior in Automated Driving." JACII 2024. * [3] Kingma, et al. "Auto-encoding variational bayes." 20 Dec. 2013. * [4] Higgins, et al. "beta-vae: Learning basic visual concepts with a constrained variational framework." ICLR 2017. * [5] Gu, et al. "Online ad hoc teamwork under partial observability." ICLR 2021. * [6] Xie, et al. "Future-conditioned unsupervised pretraining for decision transformer." ICML 2023. --- Rebuttal Comment 1.1: Comment: 1. The answer for Q1 is acceptable to me, but I hope the authors can complement the introduction with these faithful statements. 2. I suggest the authors complement the evidence supporting the modeling strategy, as well as Q5 and Q6, in the revised manuscript, to deliver high-quality and responsible work. This will not only benefit future work, but also prevent the risk of misleading others. 3. Regarding the applicability of the transformer, I am not questioning its usefulness for tackling offline training. However, I believe that the benefits of offline training should be built upon decision-making paradigms that are adaptable to existing, recognized scenarios. Otherwise, it will very likely lead to techniques being developed independently of what is actually needed in the real world. --- Reply to Comment 1.1.1: Comment: Thanks for your insightful feedback. **Q1.
Complement the introduction with these faithful statements.** A1. Thank you for your constructive feedback. We have revised the introduction (Lines 19–25) with the faithful statements. The updated text now reads: "Environmental simulators are often unavailable or expensive, especially for real-world scenarios such as autonomous driving or disaster response, where online RL approaches face prohibitive safety risks (e.g., collisions or mission failures during exploration). However, offline data collection **provides a safer and more cost-effective alternative**—multi-agent interaction datasets can be acquired through pre-recorded logs (e.g., traffic data from cameras and road sensors) or synthetic simulations. While challenges such as long-horizon reward assignment and partial observability persist, recent advances in offline RL (e.g., multi-sensor fusion [1] and hierarchical reward decomposition [2]) enable practical solutions. In this work, we further improve data efficiency via a trajectory mirroring strategy, amplifying the utility of limited datasets by treating each agent as the ego agent. These advantages make offline AHT a viable foundation for real-world deployment, offering a safe baseline policy that can later be refined through online adaptation." **Q2. Complement the evidence to support the modeling strategy as well as Q5 and Q6 in the revised manuscript.** A2. We have revised the descriptions of Eq.(6) (Lines 256–265) and Eq.(8) (Lines 254-255). "The first KL-divergence term $D_{KL}\big(f_{\varphi}(\cdot \mid o_t^i, o_t^{-i}) \,\|\, \mathcal{N}(0, I)\big)$ is designed as a reverse KL to enforce a compact latent space. Following the beta-VAE framework [3], reverse KL encourages the learned latent distribution to concentrate around the modes of the prior (standard normal distribution), effectively preventing overfitting to noisy observations and improving generalization to unseen teammates.
In contrast, the second term employs forward KL, which aligns the proxy encoder’s output with the team context encoder by minimizing the divergence in expectation. This hybrid regularization ensures the latent space regularity and the consistency between local and global representations under partial observability. " "The groundtruth goal $\hat{G}$ is constructed by concatenating discretized observations (e.g., grid-world positions, task states) and one-hot encoding the global state, where each dimension represents a binary indicator. The loss function is defined as: " **Q3. Transformer-based offline training techniques must align closely with established decision-making paradigms in real-world scenarios.** AHT captures a core challenge in real-world multi-agent systems: enabling an ego agent to collaborate with unknown teammates without prior coordination. Unlike centralized MARL frameworks that require synchronized training of all agents—a scenario rarely feasible in practice (e.g., autonomous cars cannot jointly train with all possible human drivers)—AHT directly models the ego-centric adaptation paradigm. **This aligns with real-world applications** where agents must operate in open environments with changing teammates. **The success of offline MARL** [4] demonstrates that effective policies can be learned without online interactions, making its extension to AHT (a specialized case of MARL) a natural and viable direction. Beyond theory, offline AHT offers **practical value**: in autonomous driving, vehicles learn adaptive strategies from pre-collected data, avoiding collision risks and real-world testing costs. Besides, in healthcare robotics, surgical assistants trained offline on historical operation logs can adapt to diverse surgeon workflows without risking patient safety during live procedures. These applications ensure efficiency and safety across critical domains. 
DT is uniquely suited for offline AHT due to its ability to model long-term dependencies in sequential decision-making. However, vanilla DT struggles in AHT due to its reliance on scalar return-to-go (RTG), which fails to capture dynamic team objectives. We propose the novel concept of TA-Goal, a sub-goal representation predicted from TA-RTG, to guide the transformer’s decision-making in a hierarchical manner. To our knowledge, this is the first work addressing offline AHT, which we hope will inspire future research and practical applications. We would be grateful if you could reevaluate our work. * [1] Li, et al. "Dynamic Semantic SLAM Based on Panoramic Camera and LiDAR Fusion for Autonomous Driving." IEEE TAITS 2025. * [2] Matsuda, et al. "Hierarchical Reward Model of Deep Reinforcement Learning for Enhancing Cooperative Behavior in Automated Driving." JACII 2024. * [3] Higgins, et al. "beta-vae: Learning basic visual concepts with a constrained variational framework." ICLR 2017. * [4] Yang, et al. "Believe what you see: Implicit constraint approach for offline multi-agent reinforcement learning." NeurIPS 2021.
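The hybrid regularization described in A2 of this reply (a beta-VAE style term pulling the team-context posterior toward a standard normal prior, plus a term aligning the proxy encoder with the team-context encoder) can be sketched in closed form for diagonal Gaussians. This is a minimal illustration under our own naming, weighting, and reading of the KL directions, not the paper's implementation:

```python
import math

def kl_diag_gauss(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL(q || p) between diagonal Gaussians, summed over dims."""
    return sum(
        math.log(sp / sq) + (sq**2 + (mq - mp)**2) / (2 * sp**2) - 0.5
        for mq, sq, mp, sp in zip(mu_q, sigma_q, mu_p, sigma_p)
    )

def hybrid_kl_loss(team_mu, team_sigma, proxy_mu, proxy_sigma, beta=1.0):
    """Sketch of the hybrid regularization: (1) a beta-VAE style term toward a
    standard normal prior, and (2) an alignment term between the proxy encoder
    (local observations) and the team-context encoder (global information).
    Names, `beta`, and the exact KL directions are our assumptions."""
    prior = [0.0] * len(team_mu), [1.0] * len(team_mu)
    to_prior = kl_diag_gauss(team_mu, team_sigma, *prior)
    align = kl_diag_gauss(team_mu, team_sigma, proxy_mu, proxy_sigma)
    return beta * to_prior + align
```

When both encoders output the standard normal, both terms vanish, so the loss is zero at the prior; any mismatch between the proxy and team-context distributions adds a positive penalty.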
Summary: This paper introduces **TAGET (Teammate-Aware Goal driven hiErarchical Decision Transformers)**, a novel framework for offline ad hoc teamwork (AHT). The AHT problem requires an agent (ego agent) to collaborate with unknown teammates without prior coordination. Unlike existing approaches that rely on online reinforcement learning (RL) and direct environmental interactions, TAGET enables offline learning from pre-collected multi-agent datasets, addressing critical challenges such as limited data availability, partial observability, and dynamic teammate adaptation. Claims And Evidence: 1. The trajectory mirroring strategy is claimed to enhance data efficiency, and ablation results (Figure 5) support this. However, no alternative data augmentation methods are tested, making it unclear if mirroring is the best approach. Testing other augmentation techniques (e.g., data mixing or adversarial augmentation) would provide stronger evidence. 2. The paper assumes TAGET is robust to diverse teammate behaviors, but no systematic study on the impact of teammate diversity is provided. Evaluating performance across different levels of teammate heterogeneity would make the generalization claim more convincing. Methods And Evaluation Criteria: 1. The hierarchical TAGET framework aligns well with the offline ad hoc teamwork (AHT) problem, effectively addressing data scarcity, partial observability, and dynamic teammate adaptation. The trajectory mirroring strategy is a reasonable approach to improve dataset efficiency. 2. The evaluation on Predator-Prey, Level-Based Foraging, and Overcooked is appropriate for testing multi-agent coordination and adaptability. 3. Baseline comparisons include DT, Prompt-DT, ODITS-off, and LIAM-off, which are relevant, but classic offline RL methods like CQL [1] or QT [2] should be included for a more comprehensive benchmark suite. 4. Evaluation metrics focus on return scores, which effectively measure task performance. 
However, no explicit measure of adaptability to new teammates over time is provided. A metric tracking adaptation efficiency would strengthen the evaluation. 5. The authors did not explicitly state how standard errors were computed across repeated experiments, which makes it hard to tell whether the improvements are meaningful. The computational time of each algorithm should also be listed for fair comparison. Overall, the methods are well-designed, but the evaluation could be strengthened with real-world benchmarks, additional baselines, and statistical validation. - [1] Kumar, A., Zhou, A., Tucker, G., & Levine, S. (2020). Conservative q-learning for offline reinforcement learning. Advances in neural information processing systems, 33, 1179-1191. - [2] Kalashnikov, D., Irpan, A., Pastor, P., Ibarz, J., Herzog, A., Jang, E., ... & Levine, S. (2018, October). Scalable deep reinforcement learning for vision-based robotic manipulation. In Conference on robot learning (pp. 651-673). PMLR. Theoretical Claims: There is no novel theoretical claim in this paper. All mathematical formulations of the problem are well-defined. Experimental Designs Or Analyses: Detailed experimental design choices, such as parameter selection, model architecture, and encoder/decoder selection, are not clarified. The authors should include the main algorithm in the main paper, while putting the substantial implementation tricks into the appendix. Supplementary Material: Yes, I reviewed the whole part. The supplementary material contains only three pages, with the first page introducing the experimental environments and the second page presenting extended results across environments for various baseline algorithms and goal steps. Additional experimental details should be addressed here, such as detailed hyperparameter settings, neural network architectures, and the method for computing standard deviations.
Relation To Broader Scientific Literature: This paper introduces TAGET, addressing key limitations of prior ad hoc teamwork (AHT) methods in multi-agent reinforcement learning (MARL), particularly in offline settings. Previous online AHT methods rely heavily on environment interactions, limiting generalization and adaptability to unknown teammates. TAGET overcomes these issues by learning teammate-aware goal-driven policies from offline datasets. Traditional online AHT approaches [1][2] require continuous adaptation to teammates through reinforcement learning (RL), which is infeasible without environmental access. TAGET builds on offline RL advancements (e.g., Decision Transformer [3], Conservative Q-Learning [4]), reformulating AHT as a sequence modeling problem while introducing TA-Goal and TA-RTG for better teammate modeling. Unlike prior offline MARL works [5][6] that learn joint optimal policies, TAGET focuses on real-time adaptability to changing teammates. The trajectory mirroring strategy further differentiates TAGET by enhancing dataset efficiency, addressing data limitations in offline settings. Additionally, prior teammate/opponent modeling methods [7][8] rely on explicit policy inference, which struggles in offline settings due to missing exploration opportunities. TAGET circumvents this by predicting future global states (TA-Goal) rather than teammates' policies, making it more robust to unseen agents. By integrating ideas from offline RL, sequence modeling, and multi-agent adaptation, TAGET advances offline AHT research and sets a foundation for applying decision transformers in multi-agent collaboration without online interactions. - [1] Barrett, S., & Stone, P. (2015, February). Cooperating with unknown teammates in complex domains: A robot soccer case study of ad hoc teamwork. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 29, No. 1). - [2] Rahman, M. A., Hopner, N., Christianos, F., & Albrecht, S. V. (2021, July). 
Towards open ad hoc teamwork using graph-based policy learning. In International conference on machine learning (pp. 8776-8786). PMLR. - [3] Chen, L., Lu, K., Rajeswaran, A., Lee, K., Grover, A., Laskin, M., ... & Mordatch, I. (2021). Decision transformer: Reinforcement learning via sequence modeling. Advances in neural information processing systems, 34, 15084-15097. - [4] Kumar, A., Zhou, A., Tucker, G., & Levine, S. (2020). Conservative q-learning for offline reinforcement learning. Advances in neural information processing systems, 33, 1179-1191. - [5] Tseng, W. C., Wang, T. H. J., Lin, Y. C., & Isola, P. (2022). Offline multi-agent reinforcement learning with knowledge distillation. Advances in Neural Information Processing Systems, 35, 226-237. - [6] Jiang, J., & Lu, Z. (2023). Offline Decentralized Multi-Agent Reinforcement Learning. In ECAI (pp. 1148-1155). - [7] Gu, P., Zhao, M., Hao, J., & An, B. (2021). Online ad hoc teamwork under partial observability. In International conference on learning representations. - [8] Zintgraf, L., Devlin, S., Ciosek, K., Whiteson, S., & Hofmann, K. (2021). Deep Interactive Bayesian Reinforcement Learning via Meta-Learning. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1712-1714). Essential References Not Discussed: The references in this paper are sufficient. Other Strengths And Weaknesses: ## Strengths: 1. Novel Offline Ad Hoc Teamwork Framework – The paper introduces TAGET, a hierarchical decision-transformer-based approach to offline AHT, addressing critical challenges such as limited data, partial observability, and teammate adaptation. 2. Effective Use of Goal-Based Adaptation – The introduction of Teammate-Aware Goal (TA-Goal) and Teammate-Aware Return-to-Go (TA-RTG) is a significant innovation that helps the ego agent dynamically adapt to unseen teammates without relying on explicit online interactions. 3. 
Data Efficiency via Trajectory Mirroring – The trajectory mirroring strategy improves the use of offline datasets by allowing each agent to be treated as an ego agent in different contexts, effectively increasing the amount of usable training data. 4. Addresses a Practical Limitation of Online AHT – Most prior ad hoc teamwork approaches require online interactions, which are impractical in real-world applications. TAGET’s offline approach makes AHT more feasible for real-world deployment. ## Weaknesses: 1. Limited Novelty in Offline RL Techniques – While TAGET innovates in offline AHT, its use of hierarchical decision transformers and return-to-go estimation builds on existing offline RL techniques (e.g., Decision Transformer, CQL) rather than introducing fundamentally new RL methods. A clearer differentiation from prior offline RL works would strengthen its contribution. 2. No Direct Ablation for TA-RTG Effectiveness – The paper claims that Teammate-Aware Return-to-Go (TA-RTG) is superior to standard RTG, but no explicit ablation study compares TAGET with standard return-to-go instead of TA-RTG. This omission makes it unclear whether TA-RTG is necessary for TAGET’s success. 3. Lack of Analysis on Teammate Diversity Impact – The generalization of TAGET to diverse teammates is assumed, but the paper does not systematically analyze how teammate heterogeneity affects performance. Testing against teammates with drastically different strategies or levels of expertise would provide a stronger understanding of its adaptability. 4. Confusing Figures and Lack of Detailed Descriptions – Some figures lack clear explanations (e.g., Figure 3 performance curves do not explicitly mention how the 95% confidence intervals are computed, and no verbal explanations of those curves are provided). The captions should give more precise descriptions of experimental setups and key observations. Other Comments Or Suggestions: Please refer to the “Other Strengths And Weaknesses” section. 
Questions For Authors: Please refer to the “Other Strengths And Weaknesses” section. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for your constructive feedback. **Q1. Concerns about the trajectory mirroring strategy.** A1. Our trajectory mirroring strategy is a data preprocessing method specifically designed for AHT, not a data augmentation method. It works by sequentially designating different agents as the ego agent in a given trajectory. As a result, the combinations of the remaining teammates change, increasing the diversity of teammate sets. This approach not only enhances data efficiency but also provides a wider variety of teammate combinations for training. Through ablation experiments, we have demonstrated that this preprocessing method improves algorithm performance without incurring additional computational overhead. If you believe that other augmentation methods could provide meaningful comparisons, we are open to considering further experiments, but our current focus is to showcase the effectiveness of this method in AHT. **Q2. Concerns about the teammate's diversity and teammate heterogeneity.** A2. Thank you for your insightful feedback. To clarify, the robustness of TAGET is focused on its generalization to unseen teammates, especially in terms of adapting to diverse and unknown behaviors. In our experiments (Fig. 3), we evaluate performance using diverse sets of previously unseen teammates to test this generalization. We have added cross-play experiments between different populations; the cross-play matrix can be seen at https://anonymous.4open.science/r/materials-E410/cross-play-matrix.md. **Q3. Concerns about baselines** A3. Thanks for this insightful feedback! We agree that including classic offline RL methods like CQL [1] and QT [2] would provide a more comprehensive benchmark suite. We’ve now added CQL as a baseline. **TAGET outperforms CQL** across all environments, which can be found in https://anonymous.4open.science/api/repo/materials-E410/file/comparison.pdf. **Q4. Concerns about evaluation metrics.** A4.
There is currently no specific metric for measuring teammate adaptability. However, achieving a higher return generally indicates better cooperation with teammates in a game. To address this concern, we could consider comparing our agent’s adaptability with the Oracle Upper Bounds suggested by Reviewer cnPV, which would provide a meaningful comparison of our approach in terms of teammate adaptation. **Q5. Concerns about the novelty.** A5. Thank you for your comment. While we build on existing research such as Decision Transformer and return-to-go estimation, previous methods are not directly applicable to the novel problem of offline AHT. To address this, we introduce TA-RTG and TA-Goal to handle the unique challenges of offline AHT, where agents must infer and adapt to the behaviors of unseen teammates. This teammate-aware approach enables better generalization in AHT settings, differentiating our method from prior works. We believe this extension of offline RL into the AHT domain is a key innovation. **Q6. Concerns about the direct ablation for TA-RTG.** A6. Thank you for your suggestion. We have added direct ablation experiments for TA-RTG, and the results can be found in https://anonymous.4open.science/api/repo/materials-E410/file/ablation_new.pdf. The experimental results show that **TA-RTG outperforms traditional RTG**, as it is able to capture changes in teammates' strategies and adapt to unknown teammates. **Q7. Confusing figures and lack of detailed descriptions** A7. Thank you for your valuable feedback. We acknowledge that some figures, such as Figure 3, could benefit from clearer explanations. We will make sure to correct these issues in future revisions. **Q8. Concerns about the supplementary material.** A8. Thanks for your useful suggestion. 
We commit to including these additional experiment details, such as detailed hyperparameter settings, neural network architecture, the method for computing standard deviation and the computational time of each algorithm, in the Supplementary Material of the revised version. Furthermore, we will also open-source our code. * [1] Kumar, A., Zhou, A., Tucker, G., & Levine, S. (2020). Conservative q-learning for offline reinforcement learning. Advances in neural information processing systems, 33, 1179-1191. * [2] Kalashnikov, D., Irpan, A., Pastor, P., Ibarz, J., Herzog, A., Jang, E., ... & Levine, S. (2018, October). Scalable deep reinforcement learning for vision-based robotic manipulation. In Conference on robot learning (pp. 651-673). PMLR. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Although the additional experiments presented by the authors are somewhat helpful in supporting the conclusions, I remain concerned that the work lacks sufficient theoretical backing to qualify as a strong empirical contribution. Consequently, I have decided to maintain my current evaluation.
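The trajectory mirroring preprocessing described in A1 of the rebuttal above can be sketched as follows; the trajectory format and field names are illustrative assumptions, not the authors' code:

```python
# Hedged sketch of trajectory mirroring: each agent in a joint trajectory is,
# in turn, relabeled as the ego agent, with the rest treated as teammates,
# multiplying usable training views without extra environment interaction.

def mirror_trajectory(joint_traj):
    """joint_traj: list of per-step dicts {agent_id: (obs, action, reward)}.
    Returns one ego-centric view of the trajectory per agent."""
    agent_ids = sorted(joint_traj[0].keys())
    mirrored = []
    for ego in agent_ids:
        mirrored.append({
            "ego": ego,
            "ego_steps": [step[ego] for step in joint_traj],
            "teammate_steps": [
                {a: step[a] for a in agent_ids if a != ego}
                for step in joint_traj
            ],
        })
    return mirrored

traj = [{0: ("o0", "a0", 1.0), 1: ("o1", "a1", 0.5)}]
views = mirror_trajectory(traj)
print(len(views))  # one ego-centric view per agent -> 2
```

An n-agent trajectory thus yields n ego-centric training samples, each with a different teammate combination, which matches the rebuttal's claim of improved data efficiency and teammate diversity.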
Summary: This paper addresses ad hoc teamwork in the offline setting. It proposes a method called TAGET, which is based on the decision-transformer architecture and learns from a dataset of offline cooperative multi-agent interactions. It has a couple of main ideas. First, trajectory mirroring, where one agent is sampled to be the ego agent and the remainder are teammates to be modeled. Second, agent modeling, where the teammates' returns-to-go and a latent variable corresponding to teammate sub-goals are modeled. TAGET is evaluated on Predator Prey, Level-Based-Foraging, and Overcooked, and compared against the Decision-Transformer, Prompt-DT, offline ODITS, and offline LIAM. Overall, it improves against baselines on all tasks, in terms of generalization to unseen teammates. Claims And Evidence: The train and test set of teammates were generated using the same MARL algorithm (Soft Value Diversity). The authors claim, “Each checkpoint represents a distinct joint strategy, capturing a diverse range of behaviors and interaction patterns.” (L282) However, no empirical evidence is presented to back up this claim about a crucial aspect of the experimental design. I would like to see cross-play matrices demonstrating how well different checkpoints can coordinate with each other, focusing especially on whether training teammates can coordinate with testing teammates. Typically, AHT papers will also generate test-sets using hand-coded heuristic policies, which should be easy to code up / get from other papers for LBF, Overcooked, and Predator-Prey. The analysis would also be stronger if the authors tested against heuristic policies that cover a wide range of teammate behavior. Methods And Evaluation Criteria: Evaluation method follows standard protocol in AHT. Theoretical Claims: N/A Experimental Designs Or Analyses: I have concerns about the strength of the baselines. While TAGET outperforms baselines in all tasks (Fig.
3), many of the proposed baselines are simply not designed for offline ad hoc teamwork, and look like they are entirely failing to learn in most tasks. - Decision Transformer - offline single-agent RL - Prompt DT - offline single-agent RL - ODITS - online AHT - LIAM - online AHT I do not think it’s surprising that LIAM and ODITS fail to learn in the offline setting, and that the DT and Prompt DT fail to generalize to the multi-agent setting/unseen teammates (a characteristic of the AHT problem). While I understand that the authors have proposed the first (to my knowledge) offline AHT method, and therefore there is no other offline AHT baseline, I wonder if they can compare to a method that is closer to the intended setting? For example, why not compare against MADT (Meng et al. 2021), which is an offline multi-agent decision transformer? I would also be better able to understand how well TAGET is performing if the authors could add some oracle upper bounds to the results plots. For example, perhaps the average self-play score of the test teammates? Supplementary Material: Yes Relation To Broader Scientific Literature: The paper is mostly well-situated in the literature, and cites relevant papers that I am aware of. However, I did notice that the authors cited Durugkar et al. 2020 as an AHT paper, when it is not. Durugkar et al. 2020 is closer to a decentralized MARL paper. Essential References Not Discussed: No Other Strengths And Weaknesses: - Strengths - This paper fills a hole in the AHT literature - The paper has been well-contextualized w.r.t. related work. - The ablation study shows that each component of TAGET contributes positively to the final performance - Weaknesses - Hard to understand how strong the method is due to lack of good baselines. The baselines fail because they were not designed for the problem setting, and no oracle upper bounds are presented to contextualize performance.
- Lack of information about the diversity of the training and test teammates - Overall, the experimental analysis is a little sparse. Some additional empirical questions could be considered, which I have put in the “Questions” section. Other Comments Or Suggestions: Missing word at Line 33, right column Questions For Authors: - How many transitions are included in the offline interaction dataset for each task? - How does TAGET’s performance change as the size of the offline dataset changes? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your kind suggestions and helpful feedback! **Q1: Concerns about teammate diversity.** A1. We apologize for the confusion about the teammate generation. First and foremost, Soft Value Diversity (SVD) is not a MARL algorithm but a framework specifically designed to generate diverse policies. SVD maximizes the differences in value estimates across observation-action pairs between different teams [1]. This design ensures diversity between training and testing policy sets. As a result, we did not conduct separate experiments to demonstrate the diversity of teammate sets, since this diversity is inherent to the SVD framework. To assuage your concerns, we've added the **cross-play experiments** between different populations, demonstrating significantly lower cooperation efficiency between populations compared to within-population cooperation, empirically validating that the policies represent truly distinct strategies. In cross-population cooperation experiments, each population sequentially designates one agent as the ego agent to collaborate with members from other populations. The matrix notation (Row1, Col3) specifically denotes an experimental configuration where Population 3's designated ego agent interacts with teammates from Population 1, establishing a systematic evaluation framework for inter-population coordination capabilities. As shown in the cross-play matrix (https://anonymous.4open.science/r/materials-E410/cross-play-matrix.md), the diagonal positions in each row perform best, which means that **strategies between populations do not coordinate well**, validating the **diversity** of our test teammate sets. **Q2. Concerns about the baseline and oracle upper bounds.** A2. Thanks for raising the issue of baseline selection. As there are no existing methods specifically designed for offline AHT, our paper addresses this gap by proposing the first dedicated solution.
To provide a reasonable comparison, we adapted methods from offline RL and online AHT. We’ve now added MADT [2] as a baseline, as you suggested. Our results show that **TAGET outperforms MADT** (https://anonymous.4open.science/api/repo/materials-E410/file/comparison.pdf). Due to time constraints, we only tested on the teammate test set of size 4, and we will complete the remaining experiments subsequently. It's important to note that standard MARL methods, including MADT, don’t fully align with the offline AHT setting, as they typically assume teammates of the same type. AHT, however, involves an ego agent adapting to diverse teammates. Our new experiments with MADT confirm this mismatch, highlighting the key strength of TAGET in handling teammate uncertainty. **Oracle Upper Bounds**: We have included oracle upper bounds represented by the average self-play scores of test teammates, which are shown in the cross-play matrix. **TAGET achieves 86.0%, 77.2%, and 95.1% of the oracle upper bounds** in the PP, LBF, and Overcooked environments respectively, which demonstrates strong performance considering the challenging nature of AHT with unknown teammates. **Q3. Questions about experimental analysis.** A3. Our offline interaction dataset includes 2,300,000 transitions in PP, 1,430,000 transitions in LBF, and 5,500,000 transitions in Overcooked. While we have not yet conducted a systematic ablation on dataset size variation, our experiments with data augmentation ablations have revealed that our method still maintains reasonable performance under reduced data diversity. We will add related ablation experiments in the future. It's worth mentioning that our trajectory mirroring strategy mitigates the negative effect when the diversity of offline datasets is low (shown in Fig. 5). * [1] Ding, Hao, et al. "Coordination Scheme Probing for Generalizable Multi-Agent Reinforcement Learning." (2023). * [2] Meng, Linghui, et al. "Offline pre-trained multi-agent decision transformer."
Machine Intelligence Research 20.2 (2023): 233-248.
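The cross-play evaluation described in this rebuttal, where entry (i, j) pairs one population's ego agent with another population's teammates, could be sketched as below; `evaluate` is a hypothetical stand-in for environment rollouts, not the authors' code:

```python
# Hedged sketch of a cross-play matrix: entry (i, j) is the mean return when
# population j's ego agent plays with teammates from population i, so the
# row index is the teammate population and the column index the ego population.

def cross_play_matrix(populations, evaluate, episodes=10):
    n = len(populations)
    matrix = [[0.0] * n for _ in range(n)]
    for i in range(n):           # teammate population (row)
        for j in range(n):       # ego population (column)
            scores = [evaluate(populations[j], populations[i])
                      for _ in range(episodes)]
            matrix[i][j] = sum(scores) / len(scores)
    return matrix

# Toy stand-in: cooperation works best within a population (diagonal dominance),
# which is the pattern the rebuttal cites as evidence of population diversity.
fake_eval = lambda ego, mates: 1.0 if ego == mates else 0.2
m = cross_play_matrix(["p1", "p2"], fake_eval, episodes=1)
print(m)  # [[1.0, 0.2], [0.2, 1.0]]
```

Diagonal entries dominating each row is exactly the signature the rebuttal reports: populations coordinate well internally but poorly with each other.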
Noise-Guided Predicate Representation Extraction and Diffusion-Enhanced Discretization for Scene Graph Generation
Accept (poster)
Summary: This study proposes Noise-Guided Predicate Representation Extraction and Diffusion-Enhanced Discretization (NoDIS) to address the long-tail problem inherent in existing datasets for training Scene Graph Generation (SGG) models. The main contribution is extending the existing prototype-learning approach with a diffusion process that expands the representation space of predicate features. Additionally, a discretization mapper is introduced to aggregate the expanded features and prevent the inference-time degradation that can occur near decision boundaries as the representation space grows. The method is validated on standard SGG benchmarks and proves effective in alleviating the long-tail problem. Claims And Evidence: - The paper claims that the proposed NoDIS improves predicate feature prediction. This is consistent with my intuition, but I question whether it holds in general. The diversity of feature representations may indeed increase through the noising and denoising steps of the diffusion process, but clear evidence is lacking (the effect of NoDIS is supported by the ablation in Tab. 2, which I consider too fragmentary). Attaching t-SNE visualizations of the feature representations, as done in the PE-Net study used as a baseline in this paper, would make the claim more convincing. - The Learnable Feature Discretization Mapping Module proposed in Section 3.4 is shown to avoid inter-class confusion that may arise during the diffusion process. If so, the change in inter-class feature distance should also be reported, analogous to the intra-class variance change reported in Figure 1 (b). If the module worked effectively, the inter-class feature distance would increase further.
Methods And Evaluation Criteria: - The proposed method is evaluated on existing two-stage SGG models, which makes the performance comparison with existing two-stage models fair. - The benchmark datasets used in the experiments, Visual Genome and GQA, are the major datasets for SGG performance evaluation and are valid choices for performance comparison. - Performance on all three key SGG subtasks (PredCls, SGCls, SGDet) is reported, and R@100 values per head, body, and tail class are well reported as evidence of long-tail problem mitigation. - The implementation details needed to reproduce the experimental performance are somewhat inadequate. Values such as the learning rate scheduler and feature dimension used in the experiments are missing. - The experimental results on the GQA dataset appear to be missing. The paper states in Sec. 4.1 that GQA is used for performance evaluation, but I cannot find the results in either the main paper or the supplement. Theoretical Claims: - The paper is not accompanied by any specific mathematical proof. - Although the paper argues that the diffusion process enhances feature representation, the theoretical basis for this is lacking. - All claims are supported only inductively (empirically), and I believe additional strong experimental evidence is needed. Experimental Designs Or Analyses: - The Visual Genome and GQA datasets used in the experiments are the major datasets for SGG performance evaluation, and their selection seems unproblematic. - The models chosen for comparison are two-stage models and are suitable comparators. - GQA is mentioned as a dataset used, but it does not appear in any experimental table.
Without experiments on additional benchmark datasets, the claim of general performance improvement is hard to support. - It is reasonable to report per-class recall values for head, body, and tail classes to demonstrate long-tail mitigation, but the comparison groups are not appropriate. To isolate the effect of NoDIS, the ablation should compare baseline, baseline + prototype, baseline + NoDIS, and baseline + all. - The same issue appears in the ablation study of Tab. 2, which shows better performance when using only the prototype than when using only NoDIS. This suggests that the performance increase may come mainly from the prototype learning method rather than the proposed NoDIS. - Tab. 6 and 7 in the Appendix repeat the experiments of Tab. 2 and 3 in the main text. If they are unnecessary, they should be removed. Supplementary Material: - The code has been submitted as supplementary material, which is positive for reproducibility. - Demo and visualization code is included for performance verification. Relation To Broader Scientific Literature: - Although the related work on scene graphs is well organized, the diffusion literature discussed is general and far from feature-representation enhancement. To reinforce the paper's claim, work on enhancing feature representation through diffusion should be added to the related work. - The paper focuses on two-stage SGG models, but recent research trends have mainly moved to one-stage models, which can train SGG end-to-end. In addition, with the development of Large Language Models (LLMs), this study feels somewhat dated compared to work addressing the long-tail problem through open-vocabulary approaches with LLMs. Essential References Not Discussed: - One of the latest papers dealing with the long-tail issue in the same two-stage setting as this study is missing. Please refer to it.
Kim, Hyeongjin, et al. "Scene graph generation strategy with co-occurrence knowledge and learnable term frequency." Proceedings of the 41st International Conference on Machine Learning. 2024. Other Strengths And Weaknesses: Strengths - The introduction of a diffusion process to address the long-tail problem, a key challenge of existing scene graph datasets, is creative. - The paper explains the proposed method in detail, and the formulas and algorithm table are well organized. Weaknesses - Experiments demonstrating the effectiveness of the proposed NoDIS are somewhat lacking. - The related work is lacking: to claim the improvement of feature representation through the diffusion process, related research must be cited. - The work feels somewhat dated compared to open-vocabulary studies using the latest one-stage models or LLMs. Other Comments Or Suggestions: - Many figures are missing x-axis and y-axis labels, which makes them hard to interpret. - The formulas are well written but hard to follow because many symbols are reused. For example, in Eq. 5 it is difficult to distinguish between the pre- and post-operation T. - The reused notation in Eq. 10 is ambiguous. My understanding is that only the concatenated Q_p vectors are used when constructing T, but a clearer expression is needed; mathematically, it seems only the first element is used. - Text and arrows overlap at the top of Figure 2 (c), which needs to be corrected. - It would be better to replace Tab. 2 with Tab. 6 from the Appendix: both report the same experiment, and Tab. 6 additionally reports the baseline performance without NPR and PPA, making experimental evaluation easier. - Figure references in the text should be unified. Some are denoted as Figure xb, while others as Figure x(b).
Questions For Authors: - To verify the effectiveness of the proposed NoDIS, experiments on t-SNE visualization and on the change in inter-class feature distance, as in PE-Net, are needed. - What is the basis for claiming improved feature representation when using the diffusion process? If such a study exists, it should be added to the related work; this would enhance the reader's understanding of the paper. - Is the experiment on GQA missing? As far as I reviewed, it appears nowhere in the text or Appendix. This weakens the generalization argument for the proposed method. - The latest SGG research trend is dominated by one-stage methods, and I wonder why you adopted the two-stage approach. Is there a particular reason? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## We sincerely appreciate your time and effort in carefully reviewing our paper and providing constructive feedback. We are also grateful for your recognition of our work. - *For Weakness 1*: Thank you for bringing this up. First, we compared and analyzed our method against state-of-the-art approaches using the same evaluation framework on the VG dataset (in Section 4.2 and Table 1). Additionally, **we conducted comparisons on the Open Images V4, V6 (results in Reviewer giRZ), and GQA (results in Reviewer FDij) datasets**. Finally, we performed an ablation study by breaking down our method and analyzing its effectiveness from multiple perspectives, **including inter-module effectiveness (in Table 2), the impact of the diffusion method (in Table 4), loss function design (in Table 3), diffusion step size (in Reviewer FDij), and denoising step size (in Reviewer FDij)**. - *For Weakness 2*: Thank you for pointing this out. Current diffusion-based methods are primarily used for generation tasks. Some studies also apply diffusion for data augmentation by generating additional pseudo-samples to enhance tail information and improve model performance. We have included this discussion in the related work section of our paper. - *For Weakness 3*: Thank you for pointing this out. **Although existing LLMs demonstrate strong image understanding capabilities, they widely suffer from hallucination issues, largely due to data bias**. For example, when asked about the relationship between "person" and "short" in an image, LLaVA incorrectly responds: "The person is standing next to a short." However, when guided by NoDIS, LLaVA correctly outputs: "The person is wearing shorts." We will include more details and additional results in the appendix. Therefore, mitigating biased outputs caused by data bias remains a critical challenge. 
Moreover, while LVLMs have extremely large parameter sizes, NoDIS contains only around 300M parameters (in reviewer giRZ), requiring significantly less GPU memory for training. Optimizing and experimenting with smaller models can better support large models and help alleviate biased predictions. - *For Question 1*: Thank you for pointing that out. We previously used t-SNE for visualizing the representation space. However, we considered that t-SNE itself may have some randomness (as it requires dimensionality reduction). Additionally, after feature discretization mapping, our method clusters multiple samples of similar predicates into a unified representation space. As a result, visually, it appears as a distinct small region, whereas PE-Net appears as a more scattered large region, which does not look aesthetically pleasing. Therefore, instead of using t-SNE for visualization, we calculated inter-feature relationships and presented the results numerically. We chose not to include the t-SNE results in the paper, but we can certainly add them in the appendix of the final paper. - *For Question 2*: Thank you for pointing this out. In SGG tasks, no existing work has utilized Diffusion models to enhance features for expanding the tail representation space. Instead, most approaches rely on pretrained Diffusion or GAN models for data augmentation, which incurs higher computational costs. Some methods expand feature representations by training a decoder and applying a distillation-like approach, as briefly mentioned in the Introduction. We will include a detailed discussion of these methods in the Related Work section. - *For Question 3*: Apologies for the omission due to space constraints. **We not only evaluated our method on GQA but also conducted experiments on Open Images V4 and V6**. **The GQA results can be found in our response to Reviewer FDij**, while **the Open Images V4 and V6 results are provided in our response to Reviewer giRZ**. 
These results demonstrate that our method achieves outstanding performance across multiple datasets, highlighting its strong generalization capability.

- *For Question 4*: Thank you for pointing this out. Two-stage methods primarily rely on Faster R-CNN's entity detection capabilities to extract entity representations and construct relationship information by independently training a relationship detection module. In contrast, single-stage frameworks based on the Transformer architecture perform object detection and relationship classification simultaneously. While single-stage methods are simpler, they require higher training costs. Moreover, existing single-stage methods are relatively scarce and generally underperform compared to two-stage methods, as shown in the table below.

|Method|Stage|mR@50|mR@100|
|:-------:|:-------:|:-------:|:-------:|
|RelTR|One-stage|10.8|-|
|SSR-CNN|One-stage|8.4|10.0|
|Iterative SGG|One-stage|8.0|8.8|
|Relationformer|One-stage|9.3|10.7|
|SGTR|One-stage|12.0|15.2|
|**Transformer-NoDIS(ours)**|Two-stage|**16.91**|**19.25**|

### If you have any further questions or concerns, please let us know, and we will provide additional clarification.

---

Rebuttal Comment 1.1: Comment: Overall, the authors have responded to all reviewer comments with logical consistency and detailed justifications supported by quantitative evidence. In particular, they have clearly expressed their intention to address the identified experimental and theoretical limitations, and have also proposed to include additional references and experimental results. Therefore, the response is considered to be both sincere and sufficiently thorough. Based on this, I will consider raising my current score.

---

Reply to Comment 1.1.1: Comment: Thank you very much for your recognition of our work, and thank you very much for improving the score!
Summary: The paper addresses bias in scene graph generation, especially the difficulty of learning fine-grained predicate labels under long-tailed distributions. The authors propose NoDIS, which has two core aspects: First, Noise-Guided Predicate Extraction expands predicate representations by applying a single-step noise addition and iterative denoising, then further augments this diversity through a conditional diffusion mechanism. This helps generate richer, more diverse representations for both frequent (head) and infrequent (tail) predicates. Second, Feature Discretization Mapping learns to aggregate these expanded predicate features into a discretized representation. This step reduces ambiguity for the classification head by unifying predicate samples of the same category into a more consistent cluster. Empirical evaluations on the Visual Genome and GQA benchmarks demonstrate that adding NoDIS boosts the performance on rare classes. Ablation studies confirm that both the diffusion-based augmentation and the discretization module are integral to the performance gains. Claims And Evidence: 1. The paper states that existing SGG approaches overlook intraclass diversity and interclass consistency in predicate representations, leading to biased predictions favoring head classes. They show that prior efforts primarily focus on loss reweighting or sampling strategies, and only partially address the wide range of predicate semantics. Quantitative results (particularly in mR@K metrics) show that standard baselines often fail to predict tail predicates accurately. 2. The proposed NoDIS method, by applying noise-guided extraction and a conditional diffusion model, effectively expands the representation space of predicates, improving the recall on long-tailed predicates. Comparisons on the VG dataset indicate that the average performance on tail categories (mR@K) is consistently higher compared to baselines. 
Their ablation study further isolates the impact of the diffusion approach on increasing within-class variance of predicate features. 3. The discretization mapper helps unify predicate features that belong to the same category, improving classification accuracy and reducing confusion among semantically similar predicates. The authors show a measurable drop in intraclass feature variance once discretization is applied and an alignment between the learned representation distributions and the final decision layer. This is reinforced by the improvement in F@K and mR@K metrics. While the paper does not provide an in-depth theoretical proof that this particular combination of modules is uniquely optimal, their experimental findings and ablations make a convincing empirical case that NoDIS alleviates biases and achieves strong performance on standard SGG benchmarks. Methods And Evaluation Criteria: The authors propose a two-phase design: (1) object detection, (2) predicate classification enhanced by noise-guided extraction and conditional diffusion. Noise injection and denoising are used to create more distinct predicate embeddings (intra-class diversity). A discretization mapping collapses these diverse embeddings for each category, reducing confusion at the classifier stage. The chosen evaluations are well aligned: the authors address data skew and demonstrate improvements on established unbiased metrics (mR@K, F@K). Theoretical Claims: The paper’s theoretical contributions are mostly method-centric rather than purely formal proofs. The authors do mention a KL-divergence loss to align representation distributions and reference how the diffusion model preserves or shifts distributions across timesteps. While they do not include a formal proof of correctness or convergence for the diffusion approach, the claims rely on standard diffusion modeling assumptions.
I did not see any outright flaws in their theoretical arguments, though the proofs and formal rigor focus on standard diffusion derivations and established VAE/GAN-based feature augmentation concepts rather than novel mathematics. Experimental Designs Or Analyses: The authors compare performance on three tasks (PredCls, SGCls, SGDet) using widely adopted metrics (R@K, mR@K, F@K). All in all, the experimental protocols appear sound and consistent with common SGG benchmarks. One area for improvement might be clarifying whether their random noise schedules significantly impact results or whether different hyperparameter choices matter (e.g., number of diffusion steps). Supplementary Material: Code is provided in the supplementary material, but I have not looked into detail. Relation To Broader Scientific Literature: - Long-tailed recognition: Instead of conventional class-rebalancing or cost-sensitive methods, they utilize a generative model (diffusion) to expand underrepresented classes. - SGG bias reduction: Past works have used sampling, reweighting, or knowledge distillation. NoDIS is somewhat unique in employing a diffusion-based pipeline to construct richer embeddings. - Feature discretization: The idea of quantizing or discretizing features with a VQ-VAE–style approach to unify sample clusters seems also a novel adaptation in SGG. Essential References Not Discussed: The references to mainstream SGG and diffusion-based generation are mostly in place. Other Strengths And Weaknesses: Strengths: 1. The conceptual combination of diffusion-based expansion plus discretized representation is novel, improving both the expressiveness (helpful for tail classes) and consistency (for simpler classification). 2. Improved tail performance is clearly demonstrated by the metric gains on mR@K. Weaknesses: 1. 
Figure 2(b) is indeed somewhat hard to parse, due to the overlapping gray/dotted arrows and the large number of parallel lines (the textual explanation helps, but the graphic is busy). 2. Hyperparameter sensitivities (e.g., noise schedule, number of diffusion steps, discretization granularity) are not deeply explored. In practice, these can matter greatly for diffusion-based methods. 3. The inclusion of more recent baselines would strengthen the comparisons and provide a more comprehensive evaluation. Other Comments Or Suggestions: - Experimental table (Table 1) might benefit from consistent significant figures of results. - If possible, consider making the figure for NoDIS more straightforward by reducing the complexity of parallel branches or clarifying the ordering of steps in the visual layout. Questions For Authors: 1. How sensitive are results to different noise schedules or varying the number of denoising steps? 2. At inference, once you generate expanded predicate features, are they discretized solely by a nearest-neighbor approach to a learned embedding codebook? Would an alternative approach (e.g., approximate nearest neighbors or distinct heads for each class) help or hinder the method? 3. Have you considered deeper comparisons or breakdowns of tail performance specifically on GQA? Because GQA has many predicate categories, this might better showcase the benefit of diffusion-based augmentation. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## We sincerely appreciate your time and effort in carefully reviewing our paper and providing constructive feedback. We are also grateful for your recognition of our work.

- *For Weakness 1*: Thank you for bringing this up. Figure 2(b) illustrates both the training and inference processes, making it relatively complex. We have now optimized the diagram by simplifying redundant module structures, resulting in a clearer and more concise representation.
- *For Weakness 2*: Thank you for pointing this out. We follow the existing noise design of SD and DiT, adopting a linear noise schedule. To further investigate the impact of diffusion steps on performance, we conduct an ablation study with 50, 100, and 150 steps, as shown in Table 1. Additionally, to examine the effect of step size during the denoising process, we perform another ablation study with step sizes of 10, 15, 20, and 25, as presented in Table 2. A more detailed ablation analysis will be provided in the final paper.

|Diffusion steps|random steps| R@50/mR@50 | R@100/mR@100 | F@50/F@100 |
|:------:|:------:|:------:|:------:|:------:|
| 50 | 10 | 49.96/**37.25** | **53.21**/**39.97** | **42.68**/**45.65** |
| 100 | 10 | **50.28**/36.86 | 52.74/39.54 | 42.54/45.20 |
| 150 | 10 | 49.73/36.90 | 52.18/39.76 | 42.36/45.13 |

|random steps|R@50/mR@50|R@100/mR@100|F@50/F@100|
|:------:|:------:|:------:|:------:|
|10|**49.96**/**37.25**|**53.21**/39.97|**42.68**/**45.65**|
|15| 47.51/36.81| 50.14/39.88 | 41.48/44.42 |
|20| 47.63/36.92| 50.26/**39.99** | 41.60/44.54 |
|25| 47.54/36.85|50.15/39.92|41.52/44.46|

- *For Weakness 3*: Thank you for your suggestion. On the VG dataset for the PredCls task, using Transformer as the baseline, our method outperforms the approach published in TMM 2024 by 0.35 and the one in TIP 2024 by 0.95 in F@100. In terms of overall performance, with PE-Net as the baseline, our method surpasses the CVPR 2024 approach by 0.73 in mR@100.
Additionally, on the Open Images dataset, based on the score_wtd metric (as referenced in Reviewer giRZ's response), our method improves upon the best existing approach by 0.29 on OI V4 and by 1.05 on OI V6.

- *For Suggestions*: Thank you for your suggestion. We have optimized the content accordingly and will present it in the final paper. We sincerely appreciate your thorough review of our paper!
- *For Question 1*: Thank you for pointing this out. In Section 4.3 and Table 4 of the paper, we briefly analyze the necessity of incorporating a random denoising strategy for supervised optimization. During the forward diffusion process, **our noise scheduling strategy follows that of SD and DiT**. To better enforce consistency in the generated representations of the enhanced model, we introduce an additional denoising strategy by randomly sampling n steps for denoising as an extra constraint. During inference, we adopt the DDPM denoising strategy to iteratively remove noise step by step. **Experimental results for different diffusion and denoising step selections are presented under *For Weakness 2* above**. We will include this analysis in the final version of the paper with a more detailed discussion.
- *For Question 2*: Thank you for pointing this out. Our initial approach considered using the K-nearest neighbors (KNN) method. However, since the diffusion process generates diverse features in the early stages, predicate representations within the same category exhibit significant variation, making it ineffective to constrain them using KNN. Instead, we maintain a component similar to a codebook to learn a central representation from the representation space of multiple samples within the same category. This central representation is designed to have strong generalization capabilities. In contrast, KNN and clustering methods are too restrictive and struggle to model diverse representations effectively.
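For readers unfamiliar with this kind of component, the codebook-style central-representation lookup described in the response above can be illustrated with a minimal VQ-style sketch in NumPy. The function names and the EMA-style centroid update below are illustrative assumptions, not the actual NoDIS module:

```python
import numpy as np

def discretize(features, codebook):
    """Map each feature to its nearest codebook entry.

    features: (N, D) predicate representations
    codebook: (K, D) learned central representations (one per cluster/class)
    Returns the assigned indices (N,) and the quantized features (N, D).
    """
    # Squared Euclidean distance between every feature and every code.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]

def ema_update(codebook, features, idx, decay=0.99):
    """Move each code toward the mean of its assigned features (EMA-style)."""
    new_cb = codebook.copy()
    for k in range(codebook.shape[0]):
        assigned = features[idx == k]
        if len(assigned):
            new_cb[k] = decay * codebook[k] + (1 - decay) * assigned.mean(0)
    return new_cb
```

In this sketch, diverse per-sample representations are collapsed onto a shared central code per category, which is the "feature discretization mapping" idea; the actual paper learns these centers jointly with the classifier.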
- *For Question 3*: Thank you for pointing this out, and we apologize for not including it in the paper due to space limitations. The table below compares our method with existing approaches on the PredCls task of the GQA dataset. Our method achieves further performance improvements over existing methods. More comprehensive evaluation results across multiple tasks will be included in the final version of the paper. Additionally, we have supplemented performance improvements on the Open Images V4 and V6 datasets (results in our response to Reviewer giRZ), which will also be included in the final version.

| Method | R@50 | mR@50 | F@50 | R@100 | mR@100 | F@100 |
|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| Motifs | **65.3** | 16.4 | 26.2 | **66.8** | 17.1 | 27.2 |
| Motifs-DC |61.3|21.4|31.7|62.7|22.5|33.1|
| Transformer |65.2|19.1|29.5|66.7|20.2|31.1|
| Transformer-DHL|-|18.2|-|-|20.1|-|
| **Transformer-NoDIS(Ours)**|46.54|**30.47**|**36.83**|48.45|**31.90**|**38.47**|

### If you have any further questions or concerns, please let us know, and we will provide additional clarification.

---

Rebuttal Comment 1.1: Comment: Thank you for preparing a detailed rebuttal. I do find the work overall interesting and promising. I will be keeping my score as is.

---

Reply to Comment 1.1.1: Comment: Thank you very much for your recognition of our work. We have also revised the paper according to your comments. Finally, thank you for taking the time to read our response!
Summary: The paper proposes NoDIS, a method designed to address bias in SGG arising from long-tail predicate distributions. The contributions are: 1) it introduces a noise-guided predicate representation extraction technique that employs conditional diffusion models to increase the diversity of predicate representations within the same category. 2) it develops a diffusion-enhanced discretization module to unify similar predicate representations, simplifying decision-layer complexity and improving prediction accuracy. Evaluations on VG and GQA datasets demonstrate that NoDIS outperforms existing unbiased methods. Claims And Evidence: 1. The authors claim improved generalization by diversifying predicate representations and reducing prediction ambiguity via discretization mapping. These claims are convincingly supported by experiments on VG and GQA datasets. 2. The quantitative analysis demonstrates improvements over baselines, especially in terms of mR@K, indicating effective bias mitigation. 3. Ablation studies effectively validate the impact of each proposed module. Methods And Evaluation Criteria: 1. The use of standard benchmark datasets (VG and GQA) aligns well with community standards. 2. The evaluation metrics (R@K, mR@K, F@K) are conventional and sensible for measuring performance in SGG, particularly mR@K for capturing both head and tail predicate predictions. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental designs and analyses appear sound: 1. Ablation studies systematically demonstrate the contributions of each module (Noise-Guided Predicate Refinement, Diffusion-Enhanced Feature Enhancement, and Discretization Mapping). 2. Comparisons with multiple state-of-the-art methods across established benchmarks validate the effectiveness of the proposed method comprehensively. Supplementary Material: The authors provided the code, but I did not have sufficient time to reproduce their results. The provided code appears to be well-organized.
Relation To Broader Scientific Literature: The paper situates itself within recent SGG literature, clearly discussing related prior methods like Motifs and RelTR, and identifying their limitations, especially regarding biases and handling tail classes. The paper differentiates itself by introducing novel diffusion-based methods and discretization techniques, improving over previous augmentation and feature enhancement methods. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. Clearly motivated approach to tackling long-tail distributions and bias. 2. Innovative integration of diffusion models for feature diversification. 3. Strong empirical validation and ablation studies. Weaknesses: 1. Complexity in the methodology might limit reproducibility without extensive supplementary details. 2. Limited exploration of generalizability beyond standard benchmarks; the authors could also explore results on the PSG and Open Images datasets. Other Comments Or Suggestions: Minor suggestions and typos: 1. Typos/Minor corrections: - Section 3.2: clarify more explicitly the role of the Query Token initialization. - Table formatting (e.g., Table 4 in the ablation studies section) could be improved slightly for readability. Suggestion: 1. Provide additional qualitative visualization examples or failure cases for more transparency. Questions For Authors: 1. Diffusion Complexity: given the computational cost associated with diffusion models, can you clarify how scalable your approach is to larger-scale datasets or real-world scenarios? (The answer will help assess the generalization and robustness of the method beyond benchmarks.) 2. Comparison with Other Generative Models: Why was diffusion specifically chosen for feature augmentation over alternative generative models such as GANs or VAEs? If GANs or VAEs are used, what is the performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## We sincerely appreciate your time and effort in carefully reviewing our paper and providing constructive feedback. We are also grateful for your recognition of our work.

- *For Weakness 1*: Thank you very much for pointing that out. This work is the first to apply diffusion models for feature enhancement in the SGG task. To facilitate reproduction by other researchers, we have provided detailed implementation details in the appendix (Section A: Implementation Details), including hyperparameters such as learning rate, number of iterations, and batch size. Additionally, we have submitted the original code as a supplementary file. Upon acceptance of the paper, we will upload the code to GitHub and include the repository link in the paper for broader accessibility.
- *For Weakness 2*: Thank you for your constructive feedback. We followed the same training procedure and evaluation criteria to validate the reliability of our method on Open Images V4 and V6, as shown in the table below. Compared to recent approaches, our method demonstrates a significant advantage on the Open Images dataset, achieving the best performance in wmAP_rel, wmAP_phr, and score_wtd. This highlights our method's strong capability in both relationship detection and phrase-level consistency assessment, effectively and accurately capturing relationship details between objects. A detailed comparison and analysis will be included in the final paper.
|Dataset|Method|mR@50|R@50|F@50|wmAP_rel|wmAP_phr|score_wtd|
|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
|OI V4|RelDN|70.4|**75.7**|73.0|36.1|39.9|45.2|
|OI V4|GPS-Net|69.5|74.66|72.0|35.0|39.4|44.7|
|OI V4|BGNN|**72.1**|75.5|**73.8**|37.8|41.7|46.9|
|OI V4|**Transformer-NoDIS(ours)**|70.34|74.84|72.52|**38.21**|**42.35**|**47.19**|
|OI V6|DBiased|42.1|74.6|53.8|34.3|34.4|42.3|
|OI V6|PENet|39.3|**76.5**|51.9|36.6|37.4|44.9|
|OI V6|SGTR|42.6|59.9|49.8|38.7|37.0|42.3|
|OI V6|CSL|41.7|75.4|53.7|34.3|35.4|42.9|
|OI V6|BCTR|48.8|68.6|57.0|36.0|**39.0**|42.5|
|OI V6|SQUAT|-|75.8|-|34.9|35.9|43.5|
|OI V6|**Transformer-NoDIS(ours)**|**48.93**|74.11|**58.94**|**38.87**|**38.95**|**45.95**|

- *For Minor Suggestion*: Thank you for your suggestions. In Section 3.2, we randomly initialize learnable Query Tokens based on a Gaussian distribution to extract independent predicate information from the entity representation space. By leveraging a coarse-grained denoising process, entities are treated as noise, allowing the Query Token to extract predicate representations independently of entities, thereby preventing entity influence on predicate classification. For the ablation study in Table 4, we will restructure it to analyze different aspects, including the denoising method, conditional introduction, denoising steps, and feature discretization mapping. Due to page limitations, we have simplified the overall experimental results. Additionally, we will include our failed experiments in the final paper appendix. We sincerely appreciate your thorough review of our paper!
- *For Suggestion 1*: Thank you very much for your constructive suggestions. Due to space limitations, we have not included more failure cases and analysis. We will add additional visualizations of both successful and failed cases, along with further analysis, in the appendix. We sincerely appreciate your thorough review of our paper!
- *For Question 1*: Thank you for your question.
Our method differs from diffusion-based approaches like SD and DiT. Instead of a full diffusion model, **we adopt the diffusion concept and use only three Cross-Attention layers and three Linear layers in the noise prediction module**. The table below presents a comparison of overall model parameters. Additionally, we conducted experiments not only on the VG dataset but also on large-scale datasets such as Open Images and GQA (results in our response to Reviewer FDij). Our method outperforms existing approaches, demonstrating an effective trade-off between resource efficiency and performance while also proving its robustness.

|Method|Params(M)|
|:------:|:------:|
|EBM|322.2|
|VCTree-TDE|361.3|
|VCTree-EBM|372.5|
|BGNN|341.9|
|**VCTree-NoDIS(ours)**|**306.06**|
|**Motifs-NoDIS(ours)**|**314.8**|
|**Transformer-NoDIS(ours)**|**295.42**|

- *For Question 2*: We appreciate your question. We previously attempted online feature enhancement based on VAE and GAN, but these methods often encountered gradient explosion or vanishing issues during training. Our analysis indicates that VAE and GAN heavily rely on pre-trained weights. Moreover, **both methods (VAEs and GANs) model the overall pixel information of an image, whereas the SGG task determines relationships based on localized ROI features**. As a result, these methods fail to train effectively in this task and cannot enhance feature representation.

### If you have any further questions or concerns, please let us know, and we will provide additional clarification.
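For context on the machinery referenced throughout this thread (a linear beta schedule as in SD/DiT, closed-form forward noising, and DDPM reverse steps), here is a minimal NumPy sketch of the generic textbook formulation. It is not the NoDIS noise-prediction module; the constants and function names are standard DDPM conventions, not values from the paper:

```python
import numpy as np

T = 100
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative product, decreases with t

def q_sample(x0, t, eps):
    """Closed-form forward noising: x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def ddpm_step(xt, t, eps_hat, rng):
    """One DDPM reverse step given a predicted noise eps_hat."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    mean = (xt - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean
```

Iterating `ddpm_step` from `t = T-1` down to `0` with a learned noise predictor is the step-by-step denoising mentioned in the rebuttal; the "single-step noise addition" is one call to `q_sample`.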
Summary: The paper introduces a diffusion-based feature enhancement approach to broaden the visual space of predicate representation and improve feature diversity. It first extracts entity representations using a baseline model and refines them with a Transformer for contextualization. Gaussian noise is then applied to the contextualized predicate features, and a diffusion-based model estimates and removes the noise. To address the discrete distribution challenge inherent in diffusion-based models, a mapper module is introduced for predicate classification. The method is evaluated with three different baselines and significantly improves F@K. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for scene graph generation. Theoretical Claims: The paper does not have any theoretical claims. It essentially adapts variants of diffusion-based modelling to scene graph generation, especially to model the predicate representation. Experimental Designs Or Analyses: The experimental designs follow the standard protocol of SGG. They have demonstrated their performance with sufficient ablation studies. Supplementary Material: I did not review the supplementary materials. Relation To Broader Scientific Literature: The paper finds an interesting application of diffusion-based denoising in enhancing the visual space of predicates. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper successfully applies diffusion-based denoising to classifying the predicates of scene graphs. The motivation to employ diffusion-based denoising is well written, and the proposed method is discussed coherently. Proper ablation studies and evaluation with SOTA baselines demonstrate the superiority of the proposed method. However, the paper lacks a discussion of the classification of subject and object.
Since they are contextualizing baseline entity features, the entity classification results will differ from those of the baseline models. Also, an overview of the method should be included in the caption of Figure 2. Other Comments Or Suggestions: N/A Questions For Authors: See weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### We sincerely appreciate your time and effort in carefully reviewing our paper and providing constructive feedback. We are also grateful for your recognition of our work.

- #### *For Weaknesses*: Thank you very much for pointing this out. We classify subject and object entities using the same method as the baseline, which relies on ROI features extracted by Faster R-CNN and determines the entity category using a simple linear layer. We will update Section 3.1 of the paper to include an explanation of this part.

### If you have any further questions or concerns, please let us know, and we will provide additional clarification.
The Disparate Benefits of Deep Ensembles
Accept (poster)
Summary: This paper presents an empirical study of the fairness properties of deep ensembles. On several image datasets, the authors explore the fairness/accuracy tradeoff of deep ensembles at varying numbers of members, and explore a hypothesis which relates decreasing fairness metrics with varying amounts of predictive diversity between groups. They demonstrate that common post-processing methods can be helpful for improving the fairness of the ensemble. ########### UPDATE AFTER REBUTTAL: Thanks for the rebuttal. I'm not totally sure what to do with this paper as I still have some reservations about the central argument and I see I'm the main holdout here. I'm also not totally sure what to do with the extra table in the last comment - I'm looking at the left column which corresponds to the distribution over "Y=0,A=0 / Y=0,A=1 / Y=1,A=0 / Y=1,A=1" - however it seems like the base rates are equal between groups at all values in the left hand column: e.g. "0.22 / 0.22 / 0.28 / 0.28" yields a 0.28/0.22 ratio in both group A=0 and 1. I tried flipping around the A and Y values to make sense of it, but I don't quite see where the base-rate variation comes in. I'm going to keep my score as is but acknowledge that I may just be missing something here given the scores of the other reviewers - apologies if I didn't understand something obvious about the paper. Claims And Evidence: The central thesis of this paper is that deep ensembles "unevenly favor different groups", which the authors call the disparate benefits effect. The key evidence relies on demonstrating that as models are added to an ensemble, we see accuracy rise and fairness get worse. I am not sure I am convinced of this thesis by the arguments in this paper. In particular, I don't think the authors dispel the possibility that this is simply a property of improving model (read: ensemble) accuracy.
We frequently observe such a fairness/accuracy tradeoff across many types of models, in particular if group base rates differ (in this case, we must observe such a tradeoff for a metric like SPD). Various pieces of evidence are proposed, which I do not think necessarily demonstrate the argued conclusion:

- argument around average predictive diversity: the argument here is that if diversity differs between groups then the benefits of ensembling would mostly accrue to that group. However, this seems to lead more naturally to an argument about subgroup accuracy than equalized-odds-type metrics, and certainly wouldn't apply to the SPD pattern seen here
- Fig 4: this synthetic example does not necessarily demonstrate the desired pattern - it just constructs a case where there is differing DIV and disparate benefits. To show the pattern, one would need to also construct the counterfactual cases where there is equal DIV and equal benefits. The current data is consistent with the hypothesis that there is nothing special about ensembles, e.g., a difference in base rates is driving the disparity
- Fig 3 does demonstrate the desired pattern, however it is just observed for a single model and I would want to see this pattern shown more systematically to support the claim
- The fact that Hardt post-processing is successful is also concordant with the view that ensembles are just like any other classifier, and can have their fairness properties improved with smart post-processing. The point about thresholding in 5b is interesting but I'm not sure it distinguishes ensemble fairness that clearly; in fact, Fig 6 itself shows that post-processing applied to individual models helps just as much as applied to the ensemble!
Methods And Evaluation Criteria: I would be interested to know what the underlying base rates are on each dataset. I think the OOD-style experiment using UTKFace is an interesting idea; I enjoyed seeing that included. Theoretical Claims: n/a Experimental Designs Or Analyses: see first section Supplementary Material: n/a Relation To Broader Scientific Literature: connection to fairness + ensemble literature is fine Essential References Not Discussed: n/a Other Strengths And Weaknesses: As stated previously, I find the overall argument of the paper to not be particularly convincing. However, I do think the direction here is an interesting place to explore, and I found the overall clarity of the writing to be pretty good. Other Comments Or Suggestions: Fig 6: it's not totally clear to me why there are 2 red dots downstream of the starting point on each plot Questions For Authors: I would like to see evidence which tells me specifically how the fairness properties of ensembles behave differently from the fairness properties of individual models - e.g. in the DIV hypothesis, a more fully fleshed-out series of observations along the lines of what's in Fig 3, or a clearer case where you construct a counterfactual example synthetically with/without high DIV that results in different levels of benefit disparity Code Of Conduct: Affirmed. Overall Recommendation: 1
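For readers unfamiliar with the group-fairness quantities debated in this review, a minimal sketch of SPD (statistical parity difference) and an equalized-odds-style gap follows. This is a generic illustration with assumed variable names, not code from the paper under review:

```python
import numpy as np

def spd(y_hat, a):
    """Statistical parity difference: P(Yhat=1 | A=0) - P(Yhat=1 | A=1)."""
    y_hat, a = np.asarray(y_hat), np.asarray(a)
    return y_hat[a == 0].mean() - y_hat[a == 1].mean()

def eo_gap(y_hat, y, a):
    """Equalized-odds gap: max over y in {0,1} of the group gap in P(Yhat=1 | Y=y).

    Assumes both groups contain examples of both labels.
    """
    y_hat, y, a = map(np.asarray, (y_hat, y, a))
    gaps = []
    for yv in (0, 1):
        p0 = y_hat[(y == yv) & (a == 0)].mean()
        p1 = y_hat[(y == yv) & (a == 1)].mean()
        gaps.append(abs(p0 - p1))
    return max(gaps)
```

Note that SPD depends only on predictions and groups, while the equalized-odds gap conditions on the true label, which is why the reviewer treats the two metrics differently with respect to base rates.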
Rebuttal 1: Rebuttal: We thank the reviewer for their critical assessment of our work and the insightful questions. We first address the main question, followed by responses to the specific concerns outlined in the review.

## Main Question

We appreciate the request for clearer evidence linking predictive diversity to the disparate benefits effect. As you correctly noted, with real-world datasets we cannot directly control predictive diversity. This limits us in Fig 3 to comparing tasks where the disparate benefits effect is observed (top row) versus not observed (bottom row) as per the results in Tab 1. However, we believe the experiment on synthetic data reported in Apx F.1 aligns with your suggestion. There, we systematically varied the level of predictive diversity using a parameter $\alpha$, while keeping all other factors constant. As shown in Fig 16 and Tab 19, the disparate benefits effect increases with higher diversity and vanishes entirely when $\alpha = 0$. Importantly, this setup uses equal base rates ($p(Y=y, A=a) = 0.25$ for all combinations), ruling out base rate differences as the driver of the observed disparities. Based on your feedback, we will move this experiment into the main paper in the camera-ready version to strengthen our core argument.

## Comments

**Base rates** We provide the base rates in the legends of Fig 3 (see Fig 13-15 for all tasks), as well as Fig 4c for the synthetic experiment. Notably, there are some tasks (e.g. FF Y=age, A=gender) with very balanced base rates exhibiting significant levels of disparate benefits (see Tab 1). Also, we designed the synthetic task with equal base rates to eliminate this possible confounding factor.

**Fairness/accuracy tradeoff** We appreciate the concern about a potential fairness/accuracy tradeoff. However, we do not believe the disparate benefits effect merely reflects such a tradeoff. In some tasks (Fig 7 and 8), both accuracy and fairness improve through ensembling.
Moreover, for tasks where fairness declines, we find that applying Hardt Post-processing (HPP) to the Deep Ensemble (DE) restores fairness to the level of individual models, without sacrificing the ensemble’s accuracy gains (Fig 6, red dot directly above the dotted line). This suggests the effect is not intrinsically an accuracy/fairness tradeoff. **Single model** Thank you very much for your observation. We inspected the average predictive diversity per target and group for all 5 model variants and obtained similar results to those reported in Fig 3. Based on your feedback, we will add a new set of figures in the revised version of the paper, summarizing these results. **Hardt post-processing** We agree with the point that DEs are no different from any other probabilistic classifier regarding HPP. In essence, HPP takes any predictive distribution and group attribute as input, independent of how this distribution was obtained. However, we believe that the improved calibration of the ensemble, and thus its heightened sensitivity to the threshold, is an interesting and novel finding. **Fig 6** We thank the reviewer for pointing out the lack of clarity regarding the two red dots in Fig 6, which correspond to different levels of fairness tolerance to be achieved by the post-processing of the DE. * The first level of fairness tolerance is indicated by the dotted line and corresponds to the average fairness of individual ensemble members (gray dot next to the dotted line). The red dot closest to the dotted line corresponds to applying HPP to the DE to achieve the same level of fairness as the individual members (dotted line). Note how this is achieved while increasing accuracy. Thus, it illustrates that the DE remains Pareto-dominant by increasing both fairness and accuracy. * The second level of fairness tolerance is marked by the dashed line and corresponds to 0.05, which is a commonly used value in the literature.
In this case, the DE is also Pareto-dominant when compared to the individual models after post-processing both to the same tolerance level. We acknowledge that Fig 6 contains a lot of information and will revise its caption and description for clarity. We believe these revisions and clarifications directly address the reviewer’s central concerns—particularly regarding predictive diversity, base rate confounding, and the distinct fairness behavior of ensembles. We hope this supports a more favorable reassessment of our work. --- Rebuttal Comment 1.1: Comment: Thanks for this rebuttal - a couple responses below: On the main question: - I think that some of the pieces of evidence you point to here (Appendix F1, Fig 7/8 in Appendix E) are helpful to make the argument that what we're seeing goes beyond standard fairness/accuracy tradeoffs, but I think that this argument should be a much more fundamental piece of the paper and that relevant contrasts should be made much more centrally rather than in the Appendix. For example, while noting that equal base rates in Appendix F.1 do not remove this effect is helpful, I think that the overall argument requires a more systematic examination of how the effect changes with respect to base rates. Thanks for the other clarifications. I do find the calibration threshold finding to be interesting. I think that my asks are fairly substantial and so this still won't be an accept from me - however, I'll consider changing my score in discussion with other reviewers. --- Reply to Comment 1.1.1: Comment: Thank you very much for your response and engaging in discussion! We fully agree that the experiment in Appendix F.1 is central to the paper. It was the last section we moved to the appendix to fit within the 8-page limit for the submission. Upon acceptance, we will use the extra page to bring this experiment back into the main paper, as we believe it strengthens the core argument.
Regarding your point about base rates, we’re happy to share an additional piece of evidence. We repeated the experiment from Appendix F.1 while systematically varying the degree of imbalance in the base rates. The table below reports results for several configurations, with base rates shown for the combinations Y=0,A=0 / Y=0,A=1 / Y=1,A=0 / Y=1,A=1. Despite these variations, the effect of predictive diversity ($\alpha$) on disparate benefits remains consistent, while the changes in base rates appear to have little to no effect on the fairness metrics. This suggests that the disparate benefits effect is not simply a function of group base rate imbalance. We will include this analysis in the final version of the paper to further reinforce our argument.

| Base Rates | Setting | Δ Accuracy (↑) | Δ SPD (↓) | Δ EOD (↓) | Δ AOD (↓) |
|--------------------------|---------------|---------------------|--------------------|--------------------|--------------------|
| 0.25 / 0.25 / 0.25 / 0.25 | α = 0.0 | 0.005±0.001 | 0.004±0.005 | -0.004±0.007 | -0.001±0.003 |
| 0.22 / 0.22 / 0.28 / 0.28 | α = 0.0 | 0.005±0.002 | 0.004±0.002 | 0.006±0.003 | 0.001±0.003 |
| 0.19 / 0.19 / 0.31 / 0.31 | α = 0.0 | 0.005±0.001 | 0.000±0.006 | 0.003±0.003 | -0.001±0.004 |
| 0.14 / 0.14 / 0.36 / 0.36 | α = 0.0 | 0.005±0.002 | 0.004±0.002 | 0.006±0.003 | 0.001±0.003 |
| 0.25 / 0.25 / 0.25 / 0.25 | α = 0.2 | 0.017±0.003 | 0.010±0.006 | 0.014±0.011 | 0.005±0.011 |
| 0.22 / 0.22 / 0.28 / 0.28 | α = 0.2 | 0.014±0.002 | 0.003±0.004 | 0.005±0.007 | 0.003±0.008 |
| 0.19 / 0.19 / 0.31 / 0.31 | α = 0.2 | 0.013±0.003 | -0.003±0.002 | 0.007±0.004 | 0.010±0.003 |
| 0.14 / 0.14 / 0.36 / 0.36 | α = 0.2 | 0.014±0.002 | 0.003±0.004 | 0.005±0.007 | 0.003±0.008 |
| 0.25 / 0.25 / 0.25 / 0.25 | α = 0.4 | 0.025±0.002 | 0.015±0.009 | 0.037±0.011 | 0.024±0.006 |
| 0.22 / 0.22 / 0.28 / 0.28 | α = 0.4 | 0.021±0.002 | 0.012±0.003 | 0.040±0.008 | 0.029±0.005 |
| 0.19 / 0.19 / 0.31 / 0.31 | α = 0.4 | 0.023±0.002 | 0.007±0.007 | 0.037±0.009 | 0.031±0.004 |
| 0.14 / 0.14 / 0.36 / 0.36 | α = 0.4 | 0.021±0.002 | 0.012±0.003 | 0.040±0.008 | 0.029±0.005 |
| 0.25 / 0.25 / 0.25 / 0.25 | α = 1.0 | 0.030±0.002 | 0.017±0.009 | 0.055±0.012 | 0.040±0.004 |
| 0.22 / 0.22 / 0.28 / 0.28 | α = 1.0 | 0.030±0.001 | 0.016±0.004 | 0.061±0.005 | 0.046±0.006 |
| 0.19 / 0.19 / 0.31 / 0.31 | α = 1.0 | 0.029±0.002 | 0.010±0.007 | 0.053±0.012 | 0.043±0.005 |
| 0.14 / 0.14 / 0.36 / 0.36 | α = 1.0 | 0.030±0.001 | 0.016±0.004 | 0.061±0.005 | 0.046±0.006 |

We believe this additional analysis directly addresses the reviewer’s concern by demonstrating that changes in base rates do not account for the observed disparities. The results consistently point to predictive diversity as the primary factor driving the disparate benefits effect.
Summary: This paper studies the fairness of deep ensembles on three image datasets. They find that ensembling tends to reinforce unfair behavior of ensemble members. The authors propose that predictive diversity causes this effect, which they call the disparate benefits effect, and give empirical evidence. Finally, the authors suggest some strategies to mitigate the effect. ## Update after rebuttal I found the arguments raised by reviewer mL9u very reasonable and compelling. I hadn't seen it from that viewpoint. I still think that the paper makes interesting points, but will adjust my scoring below the acceptance threshold. Claims And Evidence: The paper claims that ensembling neural networks tends to have a negative impact on fairness. A well-designed and described set of experiments on three datasets sufficiently supports the claim. They further claim that this effect is caused by predictive diversity, for which they do NOT provide evidence, since they only show that the two phenomena have a significant correlation in the studied data (but both could have a common cause and be independent given that cause). Methods And Evaluation Criteria: Several evaluations are conducted throughout the study, all of them properly designed and executed except maybe the sample size of 5 and the statistical significance test using a t-test. The evaluation criteria are reasonable to me. Theoretical Claims: No theoretical claims are being made. Experimental Designs Or Analyses: I already commented on this above. Supplementary Material: n/a Relation To Broader Scientific Literature: Excellent Essential References Not Discussed: None I am aware of. Other Strengths And Weaknesses: None Other Comments Or Suggestions: - I think that while Table 1 is interesting, it would be important to have a scatter plot with the absolute violations of a single member (x-axis) and the ensemble (y-axis).
- The paper coins the term "disparate benefits effect" but fails to properly introduce and maybe formalize it. I searched with Ctrl + F and found that curiously the abstract (not the introduction) gives the best intuition. - Also, while the results show a consistent tendency to reduce fairness by ensembling, the effect is not very pronounced. Statistical significance doesn't mean that it is particularly relevant. I mean even the highest difference is only 0.022, not particularly high. Questions For Authors: - As far as I can see, you have only five ensembles (seeds) for each setup; I doubt that the t-test is a proper way of measuring statistical significance here (why would you believe that the assumptions are satisfied?). - In Fig. 3, is the disparate benefits effect visible in the Figure? Or do we only know this from the selection based on Table 1? I am inclined to the second option but would like to confirm this. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their very positive assessment of our work. We are pleased that they consider our experiments to be well designed, described, and sufficient to support our claim that ensembling can have a negative impact on fairness. ## Questions **Q1** We acknowledge that the sample size (5 ensembles per task) is relatively small, primarily due to the computational cost of training large, fully independent ensembles (e.g., 5 seeds × 10 models = 50 models, totaling ~3000 GPU hours for our experiments). This way we can ensure independence of individual samples. To give more insight into the normality condition, we performed the Shapiro-Wilk test. While we acknowledge that normality is difficult to verify with small samples, the results generally support the use of the t-test. Notably, there is one case rejecting normality at p < 0.05: SPD on CX (age). We will make the respective entry in Tab 1 non-bold and provide the following additional table in the final version of the paper.

**Shapiro-Wilk test p-value**

| | Accuracy | SPD | EOD | AOD |
|-------|-------|-------|-------|-------|
| FF (a/g) | 0.44 | 0.62 | 0.82 | 0.20 |
| FF (a/r) | 0.44 | 0.28 | 0.97 | 0.35 |
| FF (g/a) | 0.76 | 0.96 | 0.85 | 0.55 |
| FF (g/r) | 0.76 | 0.31 | 0.48 | 0.63 |
| FF (r/a) | 0.86 | 0.95 | 0.82 | 0.28 |
| FF (r/g) | 0.86 | 0.28 | 0.30 | 0.73 |
| UTK (a/g) | 0.30 | 0.21 | 0.07 | 0.15 |
| UTK (a/r) | 0.30 | 0.88 | 0.20 | 0.90 |
| UTK (g/a) | 0.64 | 0.40 | 0.31 | 0.57 |
| UTK (g/r) | 0.64 | 0.17 | 0.76 | 0.21 |
| UTK (r/a) | 0.22 | 0.32 | 0.06 | 0.82 |
| UTK (r/g) | 0.22 | 0.40 | 0.61 | 0.58 |
| CX (a) | 0.47 | **0.01** | 0.18 | 0.22 |
| CX (g) | 0.47 | 0.29 | 0.33 | 0.14 |
| CX (r) | 0.47 | 0.10 | 0.68 | 0.59 |

We also considered increasing the sample size by combining members across seeds, but this would introduce dependencies between ensemble members, which we believe would compromise the statistical validity more than the limited sample size does.
That said, we are open to alternative suggestions for robust significance testing if the reviewer has a preferred method in mind. **Q2** Yes, we selected according to Tab 1 as the disparate benefits effect is not directly visible in Fig 3. The purpose of Fig 3 is to illustrate how the disparate benefits effect emerges when there are differences in the predictive diversity between groups (top row vs bottom row in Fig 3). These differences are highlighted by the black arrows in the figure, showing greater disparities in predictive diversity precisely in the tasks where we have found disparate benefits as per Tab 1. For clarity, as explained in the caption of Fig 3, the top and bottom rows show examples of datasets where disparate benefits are and are not present, respectively. In the top row, we observe significant differences in predictive diversity, while in the bottom row, there are no or residual differences. This contrast demonstrates how predictive diversity differences correlate with the presence or absence of the disparate benefits effect. We will revise the caption and the accompanying text to clearly explain the distinction between the effect itself (shown in Tab 1) and its explanation (shown in Fig 3). ## Remarks **Causation between predictive diversity and disparate benefits** We fully agree that causality is very hard to establish. The real world experiments are observational in nature. Therefore, we conducted the interventional synthetic experiment. Note that we provide an extension of the experiment in Apx F.1., where we change the feature diversity per group (the $\alpha$ parameter). There we see strong correlations between the differences in predictive diversity and the disparate benefits effect (see Tab 19 and Fig 16). **Definition of "disparate benefits effect"** Thank you very much for this feedback. 
We agree with your assessment that the definition of the disparate benefits effect in the abstract is the best one and will adapt the description in the introduction accordingly, providing a more formal definition. **Effect size** The disparate benefits effect depends on the unfairness of the base models. For example, the absolute increase in SPD of 2.2% shown in Table 1 for FF (age/gender) corresponds to a 10% relative increase in unfairness. Furthermore, the change in unfairness is closely linked to the increase in accuracy due to ensembling, which is in the range of 1-3% on our considered tasks. Therefore, the absolute effect size of the disparate benefits cannot be expected to be substantially larger than the effect of ensembling itself. **Scatter plot** Thank you for this suggestion, we will provide such plots in the final version of the paper. We thank the reviewer again for this thoughtful and constructive feedback. We hope to have properly addressed the questions and concerns.
Summary: The paper discusses two relations between deep ensembles and fairness notions. The first relation is the effect of the number of members on the overall fairness performance of the system. The second is what the authors call 'disparate benefit', which is the difference in fairness violation between the ensemble and the average fairness violation of the members. This is achieved by conducting several large-scale experiments on three datasets. The results of the initial experiments are further analyzed by conducting new experiments investigating the relation between the disparate benefit phenomenon and the predictive diversity and expected calibration error. Claims And Evidence: The claims made in the paper are supported by experiments. The main body of the paper only provides results for one architecture, however more results are provided in the appendix. Methods And Evaluation Criteria: As concluded by the authors, the scope of the current experiments is rather limited, as the only use case is on image datasets and only one type of bias mitigation method is applied. The datasets used for the experiments are appropriate for this type of fairness research and the Hardt method is one of the strongest. The paper obscures quite some information with regard to fairness by only considering binary sensitive attributes (numerical and categorical attributes are mapped to be binary). This limits the scope of the analysis significantly and this choice could be communicated at an earlier stage of the paper. Theoretical Claims: No theoretical claims were made in the paper. Experimental Designs Or Analyses: The overall experimental design is sound with many variables and several random seeds to account for variance in the results. The analyses are elaborate enough when also accounting for the results provided in the appendix. The reporting of the figures would benefit from equalizing the ranges on the axes.
For example, in Figure 3 the reporting would improve if the y-axes were the same across plots, since comparing the lengths of the arrows between the top and bottom rows is required for this figure. The controlled experiment is a nice touch to validate the authors' suspicion with regard to the role that predictive diversity plays in affecting fairness violation behaviour. Supplementary Material: I went through most of the supplementary material superficially. I validated the claims that the model architecture does not greatly change the results discussed in the main body of the paper. Relation To Broader Scientific Literature: The field of fairness research has in recent years started to investigate ensembles as an ML framework which might have some significant influences on the fairness of a system. The paper itself does not discuss many other fairness papers that focus on ensembles specifically. The only comparison seems to be made to the work of Ko et al. (2023). The bulk of the fairness research cited by the paper consists of more general papers that are relevant for most fairness research. Examples of fairness research on ensembles include the works of Gohar et al. (2023), Kenfack et al. (2021), Mishra and Kumar (2023), and Tayebi and Garibay (2023). Usman Gohar, Sumon Biswas, Hridesh Rajan. Towards Understanding Fairness and its Composition in Ensemble Machine Learning (2023). ICSE '23 Patrik Joslin Kenfack, Adil Mehmood Khan, S.M. Ahsan Kazmi, Rasheed Hussain, Alma Oracevic, Asad Masood Khattak. Impact of Model Ensemble On the Fairness of Classifiers in Machine Learning. ICAPAI '21. Gargi Mishra, Rajeev Kumar. An individual fairness based outlier detection ensemble (2023). Pattern Recognition Letters Aida Tayebi, Ozlem Ozmen Garibay. Improving Fairness via Deep Ensemble Framework Using Preprocessing Interventions (2023). HCII '23 Essential References Not Discussed: The reviewer is not aware of any contradicting research results.
Other Strengths And Weaknesses: This is an overall well-written research paper that focuses on the behaviour of fairness notions in the context of fairness violations. The results are abundantly communicated through the figures and also discussed in the main body of the paper. The scope of datasets and bias mitigation methods applied limits the validity of the paper, which was also acknowledged by the authors. Other Comments Or Suggestions: / Questions For Authors: / Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for engaging deeply with our work and for providing their thoughtful and positive assessment. We’re pleased that you found the experimental design sound, the analyses sufficiently elaborate, and the results well-supported. Your detailed comments helped us reflect on both the limitations and opportunities for improvement in the framing and positioning of our work. We would like to respond in more detail to two specific comments: > The paper obscures quite some information with regards to fairness by only considering binary sensitive attributes (numerical and categorical attributes are mapped to be binary). This limits the scope of the analysis significantly and this choice could be communicated at an earlier stage of the paper. We fully agree that explicitly stating the scope of the fairness setting—particularly the use of binary sensitive attributes and labels—is crucial for readers to assess our findings. We chose to focus on the canonical setting of binary labels and attributes to make our findings easily accessible and interpretable. However, exploring non-binary labels and group attributes (also possibly intersectional) is indeed an important direction for future work. While we currently state our setting in the opening paragraphs of Sections 3 (formal setup) and 4 (dataset description), we acknowledge that this may come too late for some readers. To make this clearer upfront, we propose updating the final paragraph on the first page to state: *“... each with multiple* **binary** *target variables and protected group attributes.”* to ensure that readers are immediately aware of our setup before engaging with the technical content. > Relation To Broader Scientific Literature Thank you for highlighting several important and relevant works on fairness in ensemble learning. While our Related Work section focused primarily on studies most closely related to the disparate benefit phenomenon—such as Ko et al. 
(2023)—we fully agree that placing our contribution within the broader literature on ensembles and fairness will strengthen the paper. In the revised version, we will expand the Related Work section to include the studies you suggested (Gohar et al., 2023; Kenfack et al., 2021; Mishra & Kumar, 2023; Tayebi & Garibay, 2023) as well as additional relevant works (e.g., Grgić-Hlača et al., Bhaskaruni et al.). For example: * Tayebi & Garibay apply pre-processing interventions to ensemble members of a Deep Ensemble (on tabular data), offering a complementary perspective to our work. * Gohar et al. investigate multiple research questions around the fairness impact of different ensembling techniques for classical models (e.g. SVMs, logistic regression, naive bayes), extending the discussion beyond deep learning. We believe these additions will further clarify our paper’s relationship to the broader fairness literature and underscore its specific contributions. Once again, we sincerely thank the reviewer for their valuable feedback. We appreciate your support of our work and are committed to addressing these points to improve the clarity and contextualization of our contribution in the final version. Please don’t hesitate to suggest any additional revisions that might further strengthen the paper. --- Grgić-Hlača, N., Zafar, M. B., Gummadi, K. P., & Weller, A. (2017). On fairness, diversity and randomness in algorithmic decision making. arXiv:1706.10208. Bhaskaruni, D., Hu, H., & Lan, C. (2019, November). Improving prediction fairness via model ensemble. In 2019 IEEE 31st International conference on tools with artificial intelligence (ICTAI) (pp. 1810-1814).
Summary: This paper explores the impact of Deep Ensembles on algorithmic fairness. The authors find that Deep Ensembles, while improving overall performance, can unevenly benefit different groups, a phenomenon they call the "disparate benefits effect." The paper suggests that differences in predictive diversity among ensemble members across different groups could potentially explain this effect. The paper also shows that the classical post-processing method (Hardt et al., NIPS 2016) can mitigate the disparate benefits effect by utilizing the better-calibrated predictive distributions of Deep Ensembles. Claims And Evidence: The paper's central finding is the potential for unfairness in deep ensembles. Empirical results highlight this issue, though the reliance on vision datasets may limit the generalizability of these findings. Methods And Evaluation Criteria: The paper shows that an existing post-processing method (Hardt et al., NIPS 2016) can mitigate unfairness in deep ensembles, but it does not introduce a novel algorithm for this purpose. Comparing more diverse fairness algorithms would enhance the paper by providing a deeper understanding of different fairness approaches within ensemble learning. Theoretical Claims: The paper does not contain theoretical results. Experimental Designs Or Analyses: While the paper's diverse experimental results offer various insights, it would benefit from the inclusion of different dataset types (e.g., non-image) and a more comprehensive set of fairness algorithms, as discussed above. Supplementary Material: I reviewed several parts in Appendix, especially for the experimental results. Relation To Broader Scientific Literature: This paper focuses on potential unfairness issues in deep ensemble learning, which may offer some insights for applications of ensemble methods in scientific literature. Essential References Not Discussed: Mostly well-discussed. 
Other Strengths And Weaknesses: Strengths: - The paper is well-written, and the overall results provide valuable insights. - The paper offers an interesting intuition into unfairness in deep ensemble learning. Weaknesses: - Although the paper provides diverse experimental results, its reliance on vision datasets may limit the generalizability of the findings. - Additionally, the paper does not introduce its own fairness algorithm. While Hardt post-processing is a good existing algorithm that can mitigate this issue, it is not specifically designed for ensembling. This also weakens the connection between the intuition in Section 6 and the mitigation strategy. - Furthermore, a broader comparison with various types of fairness algorithms would be beneficial. Other Comments Or Suggestions: NA Questions For Authors: Questions are included in the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough assessment of our work. We are pleased that it was found to be well written and to offer valuable insights. In response to the questions raised: > Limited generalizability of findings due to focus on vision datasets We agree with your observation and acknowledge this limitation in the final paragraph of the paper: *“The main limitations of our study are that we focus on vision tasks, and hence on ensembles of convolutional DNNs. [...]”*. That said, we aim to partially mitigate this limitation by including a diverse range of vision tasks spanning facial analysis (including under distribution shift) and medical imaging. These are application domains where Deep Neural Networks and Deep Ensembles are commonly deployed, making them highly relevant for studying their fairness implications. Moreover, we note that Deep Ensembles are rarely used in the tabular settings often featured in fairness research, where tree-based models remain dominant (Shwartz-Ziv, Ravid, and Amitai Armon. "Tabular data: Deep learning is not all you need." Information Fusion 81 (2022): 84-90). This context motivated our focus on vision data, where Deep Ensembles are widely considered. > The paper does not introduce its own fairness algorithm We agree that our paper does not propose a new fairness algorithm—this was not our objective. The main contribution of our paper is to show the existence of the disparate benefits effect of Deep Ensembles (Section 5) and to investigate its underlying causes (Section 6). While we do not claim to introduce a novel fairness method, we believe that presenting this new challenge without discussing potential remedies would leave the analysis incomplete. Therefore, in Section 7, we explore ways to mitigate the negative consequences of disparate benefits. We focus on post-processing methods that can be applied on the already trained individual ensemble members for practicality. 
Specifically: (1) we show that weighting members non-uniformly does not improve the performance-fairness trade-off whereas (2) Hardt Post-processing (HPP) is very effective in improving fairness while maintaining the utility of Deep Ensembles; and (3) we provide insights into why this is the case (improved calibration / threshold sensitivity). > [Hardt Post-processing] is not specifically designed for ensembling. This also weakens the connection between the intuition in Section 6 and the mitigation strategy. Indeed, HPP is not specifically designed to operate on ensembles. However, its model-agnostic nature allows it to operate on any predictive distribution, making it broadly applicable. Since Deep Ensembles produce predictions by averaging the output probabilities of individual models (Eq. 1), HPP can be applied in exactly the same way as with individual ensemble members. Regarding “the intuition in Section 6”, we note that predictive diversity is the reason that ensembles predict differently than individual models - if there were zero predictive diversity, all models would agree and the ensemble would predict exactly the same as every ensemble member. The averaging of diverse, and sometimes conflicting, predictive distributions leads to improved calibration (as shown in Figure 5). We show that HPP can effectively leverage this property: Deep Ensembles are more sensitive to thresholding decisions than individual models, and HPP, which relies on optimizing group-specific thresholds under fairness constraints, is particularly well-suited to take advantage of this threshold sensitivity. Thus, the improved calibration and diversity in ensembles directly support the effectiveness of HPP as a mitigation strategy. > Comparison with various types of fairness algorithms We agree that this is an important direction for future work. 
We explicitly acknowledge this in the final sentence of the paper: *“Furthermore, we intend to investigate the disparate benefits effect for Deep Ensembles where pre- or in-processing fairness methods have been applied to individual ensemble members.”* Given the wide variety of fairness interventions available, we believe this direction warrants a dedicated study. Expanding the scope further within the 8-page limit would have come at the expense of depth in our current analysis. We thank the reviewer again for their valuable feedback and thoughtful remarks. We hope that our rebuttal provides a more complete perspective on the contributions and scope of our study, and kindly ask the reviewer to consider a reassessment of their evaluation in light of these points.
RUN: Reversible Unfolding Network for Concealed Object Segmentation
Accept (poster)
Summary: This paper tackles the challenging task of Concealed Object Segmentation, which aims to segment objects that are visually blended with their surroundings. It introduces the Reversible Unfolding Network (RUN), an iterative method that refines segmentation results and minimizes uncertainties. RUN includes two main modules: the Segmentation-Oriented Foreground Separation (SOFS) module, which captures non-local context and applies the reversible strategy at the mask level, and the Reconstruction-Oriented Background Extraction (ROBE) module, which addresses conflicting foreground and background regions. The method was evaluated on multiple datasets to validate its effectiveness. Claims And Evidence: The paper claims that previous works use reversible strategies but are generally limited to the mask domain. In contrast, this work introduces RUN, which extends reversible strategies to both the mask and RGB domains. In P3L142, the paper states that integrating model-based and learning-based approaches remains underexplored due to the lack of intrinsic models. The paper should provide discussion to clarify the specific challenges involved and explain why addressing this gap is important. Methods And Evaluation Criteria: The method is evaluated on multiple benchmarks using the standard metrics. According to Fig. 3, the architecture appears somewhat complex. The paper should provide a comparison of FPS and parameter count to better demonstrate the efficiency of this method. Theoretical Claims: N/A Experimental Designs Or Analyses: The method is evaluated on multiple benchmarks using the standard metrics. Although previous works have not fully explored the RGB domain for mask refinement, this method appears to underperform compared to BiRefNet [1] and ZoomNeXt [2]. For instance, the S-measure on CAMO (higher is better) is 0.806 for this work, whereas BiRefNet achieves 0.904. Additionally, the paper does not provide the comparison results with these methods.
The comparisons are insufficient. Please include a comparison with previous segmentation refinement methods, such as [3]. [1] Bilateral Reference for High-Resolution Dichotomous Image Segmentation, CAAI AIR, 2024. [2] ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection, TPAMI, 2024. [3] SegRefiner: Towards Model-Agnostic Segmentation Refinement with Discrete Diffusion Process, NeurIPS, 2023. Supplementary Material: This work does not include any supplementary material. Relation To Broader Scientific Literature: This work proposes an iterative method to refine segmentation results, which may benefit related segmentation tasks. Essential References Not Discussed: [1] Bilateral Reference for High-Resolution Dichotomous Image Segmentation, CAAI AIR, 2024. [2] ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection, TPAMI, 2024. [3] SegRefiner: Towards Model-Agnostic Segmentation Refinement with Discrete Diffusion Process, NeurIPS, 2023. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Please refer to the comments in the above reviews. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the valuable comments. **W1. Challenges in introducing DUNs and importance of modeling high-level vision tasks** In P3L142, we state that deep unfolding networks (DUNs) are underexplored in high-level vision tasks due to the lack of intrinsic models. **Challenges:** DUNs rely on model-based optimization, which is effective in low-level vision tasks due to well-defined physical models (e.g., Retinex model for low-light enhancement, Atmospheric Scattering model for dehazing). **However, high-level vision tasks lack an explicit physical model, limiting DUN applications.** **Importance:** - Explicit constraints: Modeling high-level vision tasks enables task-specific constraints to guide optimization explicitly. For example, Eq. (4) introduces a residual sparsity constraint to refine segmentation and reduce uncertainties (P3L130, right part). This is more direct than implicit backpropagation. Table 5 shows a performance drop when removing this constraint (w/o prior $\hat{\mathbf{M}}_k$). - Introducing DUNs into high-level vision: With an explicit model, DUNs integrate optimization with deep networks, balancing interpretability and generalizability (P2L58, right part). RUN benefits in effectiveness (Table 1), efficiency (see **W2**), and degradation resistance (Fig. 5) in COS. **W2. Efficiency analysis** The designs of $\hat{\mathcal{M}}(\cdot)$ and $\hat{\mathcal{B}}(\cdot)$ follow mathematical principles and may appear complex, but we only make fixed hyperparameters learnable and enforce weight sharing for SOFS and ROBE across stages. In the table, our method is more efficient than SOTAs. |||Param. (M)/FLOPS (G)/FPS| |-|-|-| |ResNet50|FocusDiff|166.18/7618.49/0.23| ||RUN|30.41/43.36/22.75| |Res2Net50|FEDER|45.92/50.03/14.02| ||RUN|30.57/45.73/20.26| |PVT V2|CamoFocus|68.85/91.35/9.63| ||RUN|65.17/61.83/15.82| **W3. 
Comparison with more methods** BiRefNet [1] targets high-resolution segmentation (HRS); ZoomNeXt [2] focuses on multiple input scales (MIS), **differing from our setting**. Higher resolution and multiscale inputs enhance subtle discriminative cues but add computational overhead. We also report our results in these settings. For HRS, we retrain our method following BiRefNet [1], replacing our encoder with SwinL and training at 1024 x 1024 resolution. We evaluate our results on the three datasets reported in BiRefNet and find that RUN outperforms BiRefNet in most metrics. This is primarily due to our reversible modeling strategy. |HRS ($M$/$F_\beta$/$E_\phi$/$S_\alpha$)|CAMO|COD10K|NC4K| |-|-|-|-| |BiRefNet|0.030/0.895/0.954/0.904|0.014/0.881/0.960/0.913|0.023/0.925/0.953/0.914| |RUN|0.032/0.898/0.951/0.910|0.012/0.878/0.967/0.920|0.022/0.946/0.961/0.927| For MIS, we compare with ZoomNeXt (352 × 352). In the table, RUN outperforms ZoomNeXt in 8 out of 12 metrics. Unlike ZoomNeXt’s specialized extraction-fusion strategy, RUN simply concatenates multiscale features. Adding ZoomNeXt’s strategy to RUN yields RUN++, improving results. |MIS ($M$/$F_\beta$/$E_\phi$/$S_\alpha$)|CHAMELEON|CAMO|COD10K|NC4K| |-|-|-|-|-| |ZoomNeXt|0.020/0.872/0.963/0.912|0.069/0.782/0.883/0.822|0.026/0.765/0.918/0.855|0.038/0.827/0.919/0.869| |RUN|0.022/0.878/0.967/0.911|0.064/0.807/0.902/0.832|0.027/0.772/0.920/0.843|0.040/0.836/0.922/0.868| |RUN++|0.019/0.885/0.971/0.916|0.063/0.815/0.906/0.833|0.025/0.781/0.930/0.852|0.038/0.847/0.932/0.875| **W4. Performance in segmentation refinement** RUN, an end-to-end network for COS, can also serve as a segmentation refiner by initializing $\mathbf{M}_0$ as the coarse mask. We compare with several segmentation refiners: traditional methods (dense CRF (DCRF) [3] and bilateral solver (BS) [4]) and learning-based methods (SegRefiner [5] and SAMRefiner [6]). 
To ensure fairness, we retrain the learning-based ones for COS, SegRefiner+ and SAMRefiner+, following their training rules. Traditional methods fail to refine FEDER’s segmentation in COD10K due to the challenges of COS (objects blending with surroundings). SegRefiner and SAMRefiner also perform suboptimally with their provided models, as coarse COS masks contain segmentation errors and uncertain regions, complicating refinement. Retraining improves SegRefiner+ and SAMRefiner+, with SAMRefiner+ getting results comparable to RUN. However, RUN (FPS: 22.75) outperforms SAMRefiner (FPS: 0.92) and SegRefiner (FPS: 0.89) in efficiency. Refinement results for FGANet’s masks are discussed in **W3** (reviewer 45Vr), with similar conclusions. ||FEDER|+DCRF|+BS|+SegRefiner|+SegRefiner+|+SAMRefiner|+SAMRefiner+|FEDER-R (Ours)| |-|-|-|-|-|-|-|-|-| |$M$|0.032|0.039|0.043|0.037|0.033|0.038|0.033|0.031| |$F_\beta$|0.715|0.683|0.666|0.691|0.718|0.686|0.723|0.721| |$E_\phi$|0.892|0.853|0.862|0.863|0.889|0.857|0.906|0.897| |$S_\alpha$|0.810|0.797|0.770|0.781|0.803|0.783|0.805|0.812| [1] BiRefNet, CAAI AIR24 [2] ZoomNext, TPAMI24 [3] Dense CRF, NIPS21 [4] Bilateral Solver, ECCV16 [5] SegRefiner, NIPS23 [6] SAMRefiner, ICLR25
Summary: This paper proposes the first deep unfolding network, RUN, for the COS task, aiming to address an intrinsic shortcoming of prior COS work: the neglect of reversible strategies in the RGB domain. To achieve this goal, two reversible modules, SOFS and ROBE, are further proposed. By treating the foreground-background separation problem as a noise elimination problem, ROBE serves as a valuable assistant for SOFS in promoting accurate segmentation. Abundant experiments verify the potential of this method. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The theoretical analysis in this paper is accurate. Experimental Designs Or Analyses: The abundant experiments are both convincing and well-discussed. Supplementary Material: N/A Relation To Broader Scientific Literature: This is the first application in high-level vision tasks of the deep unfolding network, a widely used strategy in low-level vision tasks favored for balancing interpretability and generalizability. This brings positive effects to the whole high-level vision field. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The proposed RUN framework is not only a novel DUN-based application for high-level vision tasks but also allows for the combination of high-level and low-level vision, ensuring robust performance in high-level vision tasks even in degraded scenarios. Weaknesses: 1. The related work is very short. Please consider providing a more detailed description in the revised version. 2. Why use a very simple reconstruction network in ROBE, given that its reconstruction capacity is highly limited in this condition? 3. Also, the reviewer wants to know whether changing the simple network in ROBE to a more complex framework brings an evident performance gain. 4. The reviewer is curious whether the single SOFS module achieves performance comparable with existing methods. 
5. More experiments are preferred to analyze the performance under complex degradation. The reviewer thinks this is a major challenge in COS. Please fully prove whether RUN can address this challenge and compare it with existing strategies, such as bi-level optimization. Other Comments Or Suggestions: I think this paper is very interesting. Please consider addressing the weaknesses. I will consider modifying the score based on the response. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thanks for the valuable comments. **W1. More related works** In related works, the COS component focuses on the development of the reversible modeling strategy, while our coverage of DUN is limited due to the underexploration of this technique in high-level vision tasks. To provide a more comprehensive overview, we will add a section that systematically reviews the advancements in the COS field. **W2&W3. Why use a simple network in ROBE? How about replacing it with a more complex one?** (1) The reconstruction task in ROBE is relatively easy, compared to most low-level vision tasks, as the input images are of high quality. Thus, we select a lightweight network to balance performance and computational efficiency. Experimental results validate the effectiveness of this design choice. |||Parameters (M)|FLOPS (G)|FPS| |-|-|-|-|-| |ResNet50|FocusDiff|166.18|7618.49|0.23| ||FSEL|29.15|51.03|2.19| ||RUN|30.41|43.36|22.75| |Res2Net50|FEDER|45.92|50.03|14.02| ||RUN|30.57|45.73|20.26| |PVT V2|CamoFocus|68.85|91.35|9.63| ||RUN|65.17|61.83|15.82| (2) As shown in Table 5, replacing the simple network with more complex networks brings limited performance gains. However, the results in Fig. 5 indicate that a more complex network can better resist degradation conditions. (3) Furthermore, as shown in the table, increasing the stage number can also improve the network’s degradation resistance capacity. These findings can help us further exploit the potential of this DUN-based framework. |$F_\beta$|0|0.1|0.2|0.3|0.4|0.5| |-|-|-|-|-|-|-| |RUN-4 stages|0.747|0.740|0.730|0.718|0.698|0.676| |RUN-6 stages|0.749|0.742|0.733|0.723|0.708|0.694| |RUN-8 stages|0.751|0.743|0.735|0.726|0.714|0.701| |$E_\phi$|0|0.1|0.2|0.3|0.4|0.5| |-|-|-|-|-|-|-| |RUN-4 stages|0.903|0.898|0.892|0.885|0.876|0.865| |RUN-6 stages|0.905|0.898|0.893|0.886|0.878|0.869| |RUN-8 stages|0.905|0.900|0.896|0.891|0.885|0.877| **W4. 
Performance of SOFS** As shown in the table, removing ROBE leads to a noticeable performance decline. The resulting model underperforms compared to the state-of-the-art method, FocusDiff, further highlighting the effect of our approach. This can also be verified by Table 11, where replacing SOFS with the core structures of existing methods results in clear performance improvements. ||w/o ROBE|RUN (Ours)| |-|-|-| |$M$|0.032|0.030| |$F_\beta$|0.713|0.747| |$E_\phi$|0.892|0.903| |$S_\alpha$|0.816|0.827| **W5. Compare with bi-level optimization on more degradation scenes** (1) As shown in Fig. 5, replacing the original simple network with a more complex dehazing network leads to RUN+, which improves robustness against haze degradation. To further evaluate our approach, we test its performance within a bi-level optimization (BLO) framework following the setup of HQG-Net [1], where ROBE and SOFS are treated as low-level and high-level vision blocks, as illustrated in Fig. S2. Evaluating performance on COD10K using $F_\beta$ and $E_\phi$, we find that RUN+ consistently outperforms the BLO framework across different haze concentrations. |$F_\beta$|0.1|0.2|0.3|0.4|0.5| |-|-|-|-|-|-| |BLO|0.741|0.732|0.720|0.705|0.697| |RUN+|0.743|0.735|0.726|0.718|0.708| |$E_\phi$|0.1|0.2|0.3|0.4|0.5| |-|-|-|-|-|-| |BLO|0.897|0.890|0.883|0.872|0.860| |RUN+|0.899|0.894|0.889|0.883|0.876| (2) Additionally, we further analyze the performance in two degradation scenarios that are extremely challenging for the COS task, i.e., the low-light and low-resolution scenarios. Low-light scenarios deepen the concealment of objects by lowering color contrast, while low-resolution scenes directly reduce the discriminative cues by reducing the number of valid pixels. As with the above content, we also simulate degradation on concealed images in COD10K. 
For the low-light scenarios, we applied image darkening [2] and degraded the dataset into three levels: slight low-light, medium low-light, and severe low-light. We employ Reti-Diff [3] as the reconstruction network in the ROBE module. Experiments verify that our RUN framework achieves better performance than the BLO version. |$F_\beta$ / $E_\phi$|slight|medium|severe| |-|-|-|-| |BLO|0.719/0.870|0.691/0.853|0.662/0.813| |RUN+|0.735/0.889|0.716/0.878|0.683/0.855| For the low-resolution scenes, we use bicubic downsampling and select x2, x4, x8 for reconstruction and segmentation. DiffIR [4] is selected as the reconstruction network. As shown in the table, RUN still outperforms BLO in the low-resolution challenges. |$F_\beta$/$E_\phi$|x2|x4|x8| |-|-|-|-| |BLO|0.706/0.865|0.662/0.847|0.580/0.786| |RUN+|0.729/0.881|0.707/0.866|0.625/0.820| To sum up, our RUN framework is not only a novel unfolding-based application for high-level vision tasks but also **effectively integrates high-level and low-level vision, ensuring robust performance even in degraded scenarios**. [1] HQGNet, TNNLS23 [2] StableLLVE, CVPR21 [3] Reti-Diff, ICLR25 [4] DiffIR, ICCV23 --- Rebuttal Comment 1.1: Comment: The authors have resolved all my concerns, especially that the proposed RUN effectively bridges low-level and high-level tasks, ensuring robustness in degradation scenarios. It is indeed a strong piece of work. Given the inspired contribution and the comments from the other three reviewers, I am inclined to raise my score and recommend strong acceptance. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer’s recognition of the significance of our contribution. The RUN framework is not only a novel unfolding-based approach but also effectively integrates high- and low-level vision, ensuring robustness even in degraded scenarios. Your acknowledgment is highly valuable to us and reinforces our commitment to advancing research in this field!
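The stage-wise refinement discussed in this thread — alternating a reconstruction-oriented background update (the ROBE role) with a segmentation-oriented mask update (the SOFS role), with module weights shared across stages — can be sketched as a generic unfolding loop. This is a minimal illustration of the control flow only; the `sofs` and `robe` callables are placeholders, not the paper's actual modules.

```python
def unfolding_refine(image, mask, sofs, robe, num_stages=4):
    """Generic deep-unfolding loop for segmentation refinement.

    Each stage first estimates the background from the RGB image and the
    current mask (the role ROBE plays), then uses that estimate to refine
    the mask (the role SOFS plays). Because the same `sofs`/`robe`
    callables are reused every stage, weights are effectively shared
    across stages, as in RUN.
    """
    for _ in range(num_stages):
        background = robe(image, mask)        # reconstruction-oriented update
        mask = sofs(image, background, mask)  # segmentation-oriented update
    return mask
```

The stage-number ablations quoted in this thread (1 to 8 stages) correspond to varying `num_stages` in such a loop, trading FLOPS/FPS against accuracy.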
Summary: The authors propose Reversible Unfolding Network (RUN) for Concealed Object Segmentation. RUN integrates optimization-based solutions with deep learning, enabling reversible modeling across both mask and RGB domains. It also introduces the Segmentation-Oriented Foreground Separation (SOFS) module and the Reconstruction-Oriented Background Extraction (ROBE) module. Extensive experiments validate its superiority. Claims And Evidence: N/A Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths - RUN develops the first deep unfolding network for COS, revealing the potential of deep unfolding network in COS. - RUN introduces SOFS and ROBE modules to direct attention to uncertain regions through iterative optimization - RUN achieves excellent performances across five different tasks against other task-specific methods. - RUN can integrate with existing methods to further boost performance, showcasing its flexibility. Weaknesses - This paper does not have any discussion of computational efficiency. It can be found that the performance is worse than most SOTA methods when the stage number K is small by comparing the results in Table 7 and Table 1. However, a larger K requires longer inference time. The authors should compare the computational cost between RUN and other methods. - Ablation studies are unclear. What is the baseline for this method? How does the model perform when the proposed SOFS and ROBE are removed? - The proposed method can be regarded as a post-processing method. Therefore, it should be compared with common post-processing methods in segmentation, such as dense crf and bilateral solver. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the valuable comments. **W1. Efficiency analysis** We compare the parameters, FLOPS, and FPS between our RUN against cutting-edge methods on three backbones. Our stage number is set to 4. As shown in the table, our method is more efficient across all three backbones with the input size of 352×352. |||Parameters (M)|FLOPS (G)|FPS| |-|-|-|-|-| |ResNet50|FocusDiff|166.18|7618.49|0.23| ||FSEL|29.15|51.03|2.19| ||RUN|30.41|43.36|22.75| |Res2Net50|FEDER|45.92|50.03|14.02| ||RUN|30.57|45.73|20.26| |PVT V2|CamoFocus|68.85|91.35|9.63| ||RUN|65.17|61.83|15.82| Besides, we analyze the computational cost of our RUN framework with varying stage numbers, along with their performance using ResNet50 as the backbone on COD10K. Since the SOFS and ROBE modules at different stages share the same weights, all versions of RUN maintain a consistent parameter count. As shown in the table, our method outperforms existing state-of-the-art methods once the stage number reaches 3. For an optimal balance between efficiency and performance, we set the stage number to 4. ||FLOPS (G)|FPS|$M$|$F_\beta$|$E_\phi$|$S_\alpha$| |-|-|-|-|-|-|-| |RUN-1 stage|29.95|28.61|0.033|0.715|0.885|0.803| |RUN-2 stages|34.12|25.27|0.031|0.727|0.893|0.812| |RUN-3 stages| 38.73|23.63|0.030|0.735|0.898|0.822| |RUN-4 stages| 43.36|22.75|0.030|0.747|0.903|0.827| |RUN-6 stages| 52.65|19.86|0.030|0.749|0.905|0.826| |RUN-8 stages| 62.02|16.53|0.030|0.751|0.905|0.830| **W2. Ablation study: baseline and more results** As shown in Table 5, we conduct ablation studies on the ResNet50-based RUN framework. To ensure the integrity of our reversible modeling strategy, we retain both SOFS and ROBE rather than performing breakdown ablations. Instead, in Table 5, we evaluate the contribution of each content within SOFS and ROBE by either replacing them with alternative designs or directly removing them, both of which lead to performance decreases. Here we report the results of removing SOFS and ROBE. 
To remove the segmentation module SOFS, we add an extra segmentation head, i.e., the convolution blocks, after ROBE, with the segmentation loss unchanged. For the case w/o ROBE, we simply remove ROBE and keep SOFS. Besides, we include a baseline where both SOFS and ROBE are removed, leaving only the segmentation head in each stage. As shown in the table, adding SOFS and ROBE brings a clear performance gain. ||Baseline|w/o SOFS|w/o ROBE|RUN (Ours)| |-|-|-|-|-| |$M$|0.053|0.038|0.032|0.030| |$F_\beta$|0.552|0.683|0.713|0.747| |$E_\phi$|0.718|0.855|0.892|0.903| |$S_\alpha$|0.682|0.797|0.816|0.827| **W3. Comparison with segmentation refinement methods** Our RUN is an end-to-end network designed for COS. Owing to its structural characteristics, it can also be applied to refine coarse masks by initializing $\mathbf{M}_0$ as the coarse mask. We compare our method with segmentation refinement methods, including traditional methods (dense CRF (DCRF) [1] and bilateral solver (BS) [2]) and learning-based methods (SegRefiner [3] and SAMRefiner [4]). We use segmentation results from FEDER and FGANet as coarse masks. For a fair comparison, we also provide retrained versions of the learning-based refiners for the COS task, SegRefiner+ and SAMRefiner+, following their original training rules. As shown in the table, traditional methods fail to improve the segmentation results of FEDER and FGANet. SegRefiner and SAMRefiner also achieve suboptimal performance with their provided models. This is because concealed object segmentation is an inherently difficult task, where objects are visually blended with their surroundings. Hence, unlike common segmentation tasks, this task provides far fewer discriminative cues for feature extraction. Besides, the coarse segmentation masks often suffer from significant quality issues, such as mis-segmentation, blurred edges, and high uncertainty in certain pixel regions, making mask refinement particularly challenging. 
After retraining, the learning-based refiners show noticeable performance gains, with SAMRefiner+ achieving results comparable to our RUN. However, our RUN framework achieves an FPS of 22.75, outperforming SAMRefiner with only 0.92 FPS. ||FEDER|+DCRF|+BS|+SegRefiner|+SegRefiner+|+SAMRefiner|+SAMRefiner+|FEDER-R (Ours)| |-|-|-|-|-|-|-|-|-| |$M$|0.032|0.039|0.043|0.037|0.033|0.038|0.033|0.031| |$F_\beta$|0.715|0.683|0.666|0.691|0.718|0.686|0.723|0.721| |$E_\phi$|0.892|0.853|0.862|0.863|0.889|0.857|0.906|0.897| |$S_\alpha$|0.810|0.797|0.770|0.781|0.803|0.783|0.805|0.812| ||FGANet|+DCRF|+BS|+SegRefiner|+SegRefiner+|+SAMRefiner|+SAMRefiner+|FGANet-R (Ours)| |-|-|-|-|-|-|-|-|-| |$M$|0.032|0.041|0.042|0.043|0.034|0.037|0.034|0.032| |$F_\beta$|0.708|0.665|0.649|0.662|0.698|0.682|0.725|0.716| |$E_\phi$|0.894|0.846|0.828|0.847|0.887|0.866|0.900|0.897| |$S_\alpha$|0.803|0.772|0.764|0.765|0.787|0.774|0.803|0.805| [1] Dense CRF, NeurIPS21 [2] The fast bilateral solver, ECCV16 [3] SegRefiner, NeurIPS23 [4] SAMRefiner, ICLR25 --- Rebuttal Comment 1.1: Comment: The rebuttal responded to my questions accordingly, and I decide to raise the final rating to weak accept. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer’s recognition of the significance of our contribution. The RUN framework is not only a novel unfolding-based approach but also effectively integrates high- and low-level vision, ensuring robustness even in degraded scenarios. Your acknowledgment is highly valuable to us and reinforces our commitment to advancing research in this field!
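The tables in this thread report $M$, $F_\beta$, $E_\phi$, and $S_\alpha$. The two simplest of these can be sketched as follows — a simplified illustration assuming a single fixed binarization threshold and $\beta^2 = 0.3$ (the common convention in this literature), not the exact evaluation code behind the reported numbers.

```python
import numpy as np

def mae(pred, gt):
    # M: mean absolute error between a predicted mask map and the binary
    # ground truth, both in [0, 1]; lower is better.
    return float(np.mean(np.abs(pred - gt)))

def f_beta(pred, gt, beta2=0.3, thresh=0.5):
    # Single-threshold F-measure; beta^2 = 0.3 emphasizes precision over
    # recall, the usual convention in COS/SOD evaluation.
    p = pred >= thresh
    g = gt > 0.5
    tp = float(np.sum(p & g))
    precision = tp / max(float(np.sum(p)), 1e-8)
    recall = tp / max(float(np.sum(g)), 1e-8)
    return (1 + beta2) * precision * recall / max(beta2 * precision + recall, 1e-8)
```

$E_\phi$ (enhanced-alignment measure) and $S_\alpha$ (structure measure) involve more elaborate region- and structure-level comparisons and are omitted from this sketch.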
Summary: The paper introduces a Reversible Unfolding Network (RUN), a novel deep unfolding network that integrates object segmentation and distortion restoration tasks. RUN combines a Segmentation-Oriented Foreground Separation (SOFS) module and a Reconstruction-Oriented Background Extraction (ROBE) module to achieve more accurate segmentation by reducing uncertainties. The framework allows reversible modeling of foreground and background in both the mask and RGB domains, focusing on uncertain regions to improve accuracy. Experimental results across various COS tasks demonstrate that RUN outperforms existing methods and provides flexibility for integration with other models. Claims And Evidence: This paper claims to present the first deep unfolding network for the Concealed Object Segmentation (COS) problem. Based on the literature review of past research on COS, this claim is clear and convincing. Beyond this claim, the paper is mainly positioned as an application paper, which optimizes the objective function with new parameters and new architecture designs to tackle COS tasks. Therefore, the claim is specific to solving the particular COS problems posed by the given benchmark datasets. Based on the experiments in the main paper and the supplementary materials, the proposed optimization techniques and networks are well supported. Methods And Evaluation Criteria: The theoretical part of the proposed method is inspired by the observation that segmentation mask gradients should be sparse. It introduces an extra term in the objective function to enforce this certainty. The objective function is optimized by implementing a corresponding network and training it with the objective function as the loss. The method is evaluated on 4 widely used benchmark datasets: CHAMELEON, CAMO, COD10K, and NC4K. The baseline methods are selected from recent state-of-the-art methods for this task. 
The proposed method consistently outperforms the baseline methods. Theoretical Claims: I checked and verified the correctness of the equations from Eq. (4) to Eq. (17) for the theoretical derivation of the proposed objective function. Experimental Designs Or Analyses: Experiments have been conducted to compare the proposed method with a series of state-of-the-art methods on the aforementioned 4 datasets. The evaluation metrics are selected properly. For the ablation study, I can see that only the COD10K dataset is selected; is there any particular reason for not running it over all the datasets? Supplementary Material: I have reviewed the entire supplementary material and the provided anonymous code. Relation To Broader Scientific Literature: The proposed method contributes to the restricted area of image segmentation. The proposed optimization techniques can be applied to the broader scientific literature wherever the objective function is similar in nature. The sparsity of gradients can be found in many fields, including optical flow, denoising, etc. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: Strengths 1. This paper is well written with thorough theoretical derivation, implementation details, and experimental verification. 2. The supplementary material provides extra complementary information. The code is available and valid for generating the results claimed in the paper. Other Comments Or Suggestions: The authors are suggested to address the weaknesses mentioned above in the rebuttal period. Questions For Authors: 1. What is the processing speed of the proposed method compared to other state-of-the-art methods? What is the trade-off, and what is the best application scenario? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the valuable comments. **W1. Why only conducting ablation studies on COD10K** Given that COD10K is a representative and high-quality dataset, we follow existing methods [1,2,3] to conduct ablation studies on COD10K. We have now included the results of ablation studies on three extra datasets. As shown in the table, the outcomes remain consistent with our initial conclusions. We will add this content to the revised manuscript. |||$\mathbf{C}\rightarrow E(\mathbf{C})$|w/o RSS|w/o VSS|w/o prior $\hat{\mathbf{M}}_k$|w/o prior $\mathbf{B}_{k-1}$|w/o$\mathbf{E}_k$|$\mathcal{B}_1(\cdot)\rightarrow\mathcal{B}(\cdot)$|$\mathcal{B}_2(\cdot)\rightarrow\mathcal{B}(\cdot)$|w/o $\hat{\mathbf{C}}_k$|Fixed$\rightarrow$Learnable|RUN (Ours)| |-|-|-|-|-|-|-|-|-|-|-|-|-| |CHAMELEON|$M$|0.053|0.030|0.028|0.029|0.027|0.028|0.027|0.026|0.029|0.029|0.027| ||$F_\beta$|0.764|0.837|0.842|0.832|0.845|0.849|0.859|0.861|0.845|0.837|0.855| ||$E_\phi$|0.885|0.938|0.943|0.936|0.945|0.948|0.953|0.956|0.944|0.940|0.952| ||$S_\alpha$|0.802|0.883|0.889|0.886|0.889|0.891|0.893|0.900|0.893|0.889|0.895| |CAMO|$M$|0.097|0.075|0.071|0.072|0.070|0.071|0.069|0.068|0.072|0.072|0.070| ||$F_\beta$|0.693|0.752|0.773|0.766|0.775|0.776|0.783|0.780|0.770|0.764|0.781| ||$E_\phi$|0.788|0.836|0.858|0.847|0.863|0.861|0.870|0.866|0.852|0.845|0.868| ||$S_\alpha$|0.723|0.785|0.799|0.790|0.796|0.801|0.805|0.808|0.802|0.798|0.806| |NC4K|$M$|0.072|0.049|0.043|0.046|0.043|0.043|0.043|0.042|0.045|0.046|0.042| ||$F_\beta$|0.735|0.801|0.819|0.800|0.820|0.817|0.826|0.825|0.816|0.807|0.824| ||$E_\phi$|0.823|0.889|0.903|0.898|0.900|0.902|0.904|0.907|0.902|0.897|0.908| ||$S_\alpha$|0.792|0.835|0.845|0.838|0.849|0.846|0.853|0.852|0.845|0.842|0.851| **W2. Efficiency, trade-off, and the best scenario of RUN** (1) Efficiency: We compare parameters, FLOPS, and FPS in three backbones and find that our RUN surpasses existing cutting-edge methods across all three backbones. 
|||Parameters (M)|FLOPS (G)|FPS| |-|-|-|-|-| |ResNet50|FocusDiff|166.18|7618.49|0.23| ||RUN|30.41|43.36|22.75| |Res2Net50|FEDER|45.92|50.03|14.02| ||RUN|30.57|45.73|20.26| |PVT V2|CamoFocus|68.85|91.35|9.63| ||RUN|65.17|61.83|15.82| (2) Structure-level trade-off: Although different stages in RUN share the same weights, the stage number still affects the overall computational cost. To analyze the trade-off, we investigate the impact of the stage number on our ResNet50-based framework. In the table, our method surpasses existing methods when the stage number reaches 3. For an optimal balance, we set the stage number to 4. ||FLOPS (G)|FPS|$M$|$F_\beta$|$E_\phi$|$S_\alpha$| |-|-|-|-|-|-|-| |RUN-2 stages|34.12|25.27|0.031|0.727|0.893|0.812| |RUN-3 stages| 38.73|23.63|0.030|0.735|0.898|0.822| |RUN-4 stages| 43.36|22.75|0.030|0.747|0.903|0.827| |RUN-6 stages| 52.65|19.86|0.030|0.749|0.905|0.826| (3) Framework-level trade-off: RUN achieves cutting-edge performance owing to the theoretical coupling of the segmentation module (SOFS) and the reconstruction module (ROBE). As shown in the table, ROBE brings an evident gain with limited costs. This can also be verified by Table 11, where replacing SOFS with the core structures of existing methods brings clear gains. Ablations in Table 5 indicate that replacing the simple reconstruction network with more complex ones yields marginal gains. Hence, we opt for a simple network here. ||w/o ROBE|RUN (Ours)| |-|-|-| |$M$|0.032|0.030| |$F_\beta$|0.713|0.747| |$E_\phi$|0.892|0.903| |$S_\alpha$|0.816|0.827| |Parameters (M)|26.97|30.41| |FLOPS (G)|37.64|43.36| (4) The best application scenarios: RUN is not only a novel unfolding-based application for COS in segmenting clean concealed objects, but also has the potential to **ensure robust performance in degraded scenes**. As shown in Fig. 5, when replacing the reconstruction network with a more complex one, we get RUN+ and observe a better haze-resistance capacity. 
To further verify this, we compare with bi-level optimization (BLO), a degradation-resistant framework. The differences between the two frameworks can be seen in Fig. S2, where our framework is superior in the theoretical coupling of the two models. We retrain our network (RUN+) in the BLO framework and report the results on COD10K with $F_\beta$ and $E_\phi$. We find that our RUN+ surpasses BLO across different haze concentrations. |$F_\beta$/$E_\phi$|0.1|0.2|0.3|0.4|0.5| |-|-|-|-|-|-| |BLO|0.741/0.897|0.732/0.890|0.720/0.883|0.705/0.872|0.697/0.860| |RUN+|0.743/0.899|0.735/0.894|0.726/0.889|0.718/0.883|0.708/0.876| Besides, we analyze performance in two other degradation scenes, i.e., the low-light and low-resolution scenes, and observe that our RUN framework consistently surpasses BLO. Due to space limitations, we put the results in the response to **W5** of reviewer MZkc. The reviewer can refer to the tables for more details. [1] PFNet, CVPR21 [2] ZoomNet, CVPR22 [3] FEDER, CVPR23
Noisy SIGNSGD Is More Differentially Private Than You (Might) Think
Accept (poster)
Summary: The authors study the privacy benefits of the map $sign(x)$ when combined with additive Gaussian noise in the Noisy SIGNSGD algorithm. They show that since $sign(x)$ drops the magnitude information, it indeed “amplifies” the privacy. The results show that, once this privacy amplification effect is considered, the use of logistic noise may not be superior to using $sign(x)$ with additive Gaussian noise, contrary to what prior works report. The claim is supported by both theoretical analysis and experiments on image classification tasks. ## update after rebuttal I thank the authors for their explanation. I have no further questions, and I will keep my rating. I do think adding the comparison and discussion in the related works will further improve the manuscript. Claims And Evidence: Yes. I find the main claim is well supported. Methods And Evaluation Criteria: Yes. The main claim on the privacy benefits of $sign()$ is well supported not only by theoretical analysis but also by numerical results that illustrate the resulting privacy bound compared to the one leveraging logistic noise. Theoretical Claims: I did not check the proof in the appendix, but the proof sketch provided in the main text makes sense to me. Experimental Designs Or Analyses: The experimental results show that using Noisy SignSGD can give comparable utility while saving 32x communication overhead, since only one bit is transmitted. Supplementary Material: No. Relation To Broader Scientific Literature: The finding is quite generic. Given that 1-bit quantization (i.e., signSGD) is indeed important in some settings, I believe the result of the paper can benefit the community. Essential References Not Discussed: I wonder how Noisy SignSGD (i.e., 1-bit compression) compares to sketching-based approaches, such as [1] and the relevant literature? I feel it is interesting to also see the trade-off along the communication complexity aspect in greater detail. 
Nevertheless, I think it is also fine to just focus on the Noisy SignSGD-based method. This question is just out of curiosity. ### References [1] Optimal compression of locally differentially private mechanisms, Shah et al. AISTATS 2022. Other Strengths And Weaknesses: The paper is well-written, and the problem is very well-motivated. While the analysis is not super technical, I like the fact that the authors summarized their ideas in a way that the general privacy audience can understand. It is a joyful read for me. I do not find any major weaknesses in the work. Other Comments Or Suggestions: No. Questions For Authors: Maybe the question is out of the scope, but I wonder how Noisy SignSGD, especially when adopting the improved privacy analysis from the authors, compared to the sketching-based approaches? What is the corresponding privacy-utility-communication trade-off? It is totally fine if the authors cannot answer this question for now, but I feel this is a very interesting question to think about for distributed learning with DP constraints in general. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer nyXX We appreciate your time and effort in reviewing our paper and providing a positive evaluation. Please find our response below. **Comparison with sketching-based approaches**: Thank you for pointing out this important aspect. We note that the compression of differentially private mechanisms aims to compress and simulate the distribution of the DP randomizer, usually in the presence of some shared randomness. The resulting compressed mechanisms have a smaller communication cost compared to the original mechanism while retaining the (or weakened) privacy guarantee. On one hand, our work endeavors to quantify the privacy amplification effect of the sign-based compressor instead of achieving minimum communication cost for a given differential privacy mechanism without ruining its privacy guarantee. On the other hand, we believe that our method can be further combined with these compression schemes (i.e., simulating a Bernoulli distribution). Considering the privacy-utility-communication trade-off in this case would be an interesting future direction. We will add a literature review and the corresponding discussions in related works in the revised version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their explanation. I have no further questions, and I will keep my rating. I do think adding the comparison and discussion in the related works will further improve the manuscript.
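As a concrete illustration of the mechanism discussed in this thread (adding noise to a clipped gradient and then releasing only its sign), here is a minimal NumPy sketch; the function name and the L2 clipping rule are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def noisy_sign(grad, clip=1.0, sigma=1.0, rng=None):
    """One worker's Noisy SignSGD update: clip, add Gaussian noise, keep signs.

    Illustrative sketch only; `clip` bounds the L2 sensitivity and `sigma`
    is the DP noise std. Only the signs (one bit per dimension) are sent.
    """
    rng = np.random.default_rng() if rng is None else rng
    # clip to bound each worker's sensitivity before adding noise
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    # Gaussian noise provides the DP guarantee; sign() then amplifies it
    # while reducing communication to one bit per dimension
    return np.sign(g + rng.normal(0.0, sigma, size=g.shape))
```

By post-processing, the released signs inherit the DP guarantee of the noisy gradient; the paper's point is that they in fact enjoy a strictly stronger one.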
Summary: This paper investigates how sign-based gradient compression (specifically, Noisy SignSGD) can inherently amplify differential privacy. In a distributed learning scenario, the authors introduce a theoretical analysis of the privacy guarantees under the f-DP framework and compare two variants: G-NoisySign and L-NoisySign. Claims And Evidence: Privacy amplification via compression: The paper rigorously shows that the act of discarding the gradient magnitudes via the sign operator amplifies differential privacy. In the theoretical analysis, the authors derive tight privacy bounds that quantify this effect. Methods And Evaluation Criteria: The methods and the experimental results are appropriate to the problem at hand. However, the number of repeats (5) is small for generalization. Theoretical Claims: All the theoretical analyses are well written; however, the reviewer raises an issue in the error analysis (lines 233-), namely the comparison of the **estimation error**. For more details, please refer to the additional comment section. Experimental Designs Or Analyses: In the experiments, the authors show that the proposed approach achieves a privacy-utility trade-off comparable to the classical DP-SGD. The experimental design is sound. Supplementary Material: I have reviewed part A of the appendix. Relation To Broader Scientific Literature: There have been many papers on DP-SGD and signSGD. The reviewer thinks that this paper provides a very tight bound for DP + signSGD. Essential References Not Discussed: The key contribution of this work is to propose f-DP-based federated learning. Since the idea of f-DP with federated learning is also discussed in the paper below, the reviewer recommends citing one additional paper.
```
@inproceedings{zheng2021federated,
  title={Federated f-differential privacy},
  author={Zheng, Qinqing and Chen, Shuxiao and Long, Qi and Su, Weijie},
  booktitle={International Conference on Artificial Intelligence and Statistics},
  pages={2251--2259},
  year={2021},
  organization={PMLR}
}
```
Other Strengths And Weaknesses: **Strengths**: This paper provides a tight bound based on the Neyman-Pearson lemma. Thanks to the tight bound, the proposed method achieves (or outperforms) DP-SGD in the experimental results, whereas previous studies report slightly lower accuracy than DP-SGD. **Weakness**: Despite the theoretical contributions, the performance gap between the proposed method and the baselines is marginal. Other Comments Or Suggestions: - I think the analysis with the linear mean estimation (Eqs. (14)-(16)) is not an appropriate method because it is not optimal. For example, if we have $E[f(x)] = x + 0.1 x^2$, $x\in[0,1]$, we may not treat the $x^2$ term as an error. If we use a proper linear estimator, the relative magnitude of the second term in the mean is not an error. - In Figure 3, as with dithering, a proper amount of additive noise is helpful for successful aggregation. In that situation, the aggregation is always wrong if $\sigma_{DP}$ is near zero. Conversely, it seems like G-NoisySign incurs a larger noise std compared to L-NoisySign if $\sigma_{DP}$ is small. Can you please add a discussion of this result? - Also, regarding Figure 3, the reviewer suggests adding figures for more general cases, e.g., 10 workers selected in each round, with $x_m = -1.0 + \mathcal{N}(0,1)$ for $m<7$ and $x_m = +1.0 + \mathcal{N}(0,1)$ otherwise. - The reviewer suggests the authors add **more repeats** for the experimental results. - The authors mentioned that G-NoisySign is more suitable for distributed learning with heterogeneous data. The reviewer suggests adding experimental results for **the homogeneous dataset (i.i.d.)
setting** for verification, like Tables 1 and 2. - The reviewer suggests adding more results with a smaller number of participating workers. For example, 5 workers participating in each communication round, like Tables 1 and 2. - (Optional) Consider deeper neural networks with more complex datasets (e.g., CIFAR-100). Questions For Authors: - In the numerical results (e.g., Table 1), the authors state that they run all the algorithms for 5 repeats and **present the best results**. However, it seems like **mean accuracy** is reported; which is correct? The reviewer guesses they run 5 repeats and present the mean results with the error bar (std). Is it really the best result over 5 repeats? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer t9hH We appreciate your time and effort in reviewing our paper and providing constructive comments. Please find our point-by-point response below. **Question about error analysis:** The rationale behind the analysis in Eqs (14)-(16) is that unbiased estimate of gradients is generally preferred in distributed/federated learning, considering that the majority of convergence analyses are established on the assumption of unbiased gradient estimator. Particularly, assuming a smooth loss function, the key is to bound $\langle\nabla F(\boldsymbol{w}^{(t)}), \boldsymbol{w}^{(t+1)}-\boldsymbol{w}^{(t)}\rangle$ (see Eq. (71)), where $\boldsymbol{w}^{(t+1)}-\boldsymbol{w}^{(t)} = -\eta\frac{1}{M}\sum_{i\in \mathcal{H}}\boldsymbol{g}\_{i}\^{(t)}$ for SGD and $-\eta\frac{1}{M}\sum\_{i\in \mathcal{H}}sign(\boldsymbol{g}\_{i}\^{(t)} + \boldsymbol{n}\_{i}\^{(t)})$ for Noisy SignSGD. In this case, it is desired that $\mathbb{E}[sign(\boldsymbol{g}\_{i}\^{(t)} + \boldsymbol{n}\_{i}\^{(t)})]$ is as close to (a scaled version of) $\boldsymbol{g}_{i}^{(t)}$ as possible such that the performance of Noisy SignSGD approaches that of SGD. **Results in Figure 3:** Indeed, for small $\sigma_{DP}$, Gaussian noise $\mathcal{N}(0,\sigma_{DP})$ tends to have a larger standard deviation than Logistic noise $Logistic(0,s)$. More specifically, to ensure the same privacy guarantees, $s = c/\ln(\Phi(c/\sigma_{DP})/\Phi(-c/\sigma_{DP}))$ (see discussion below Eq.(13)), and $\sigma_{DP} > s\pi/\sqrt{3}$ (the standard deviation of logistic noise) when $\sigma_{DP}$ is small. In this sense, G-NoisySign may not always be better than L-NoisySign, especially when $\sigma_{DP}$ is small and data heterogeneity is not a concern. However, we note that heterogeneous data and a large $\sigma_{DP}$ (i.e., more stringent privacy requirement) are of particular interest, especially in differentially private federated learning. 
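The noise-scale matching quoted above, $s = c/\ln(\Phi(c/\sigma_{DP})/\Phi(-c/\sigma_{DP}))$, and the resulting standard-deviation comparison can be checked numerically; a minimal sketch with illustrative function names:

```python
import math

def phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def logistic_scale(c, sigma_dp):
    # scale s of Logistic(0, s) matching the privacy of N(0, sigma_dp^2)
    # for sensitivity c: s = c / ln(Phi(c/sigma) / Phi(-c/sigma))
    return c / math.log(phi(c / sigma_dp) / phi(-c / sigma_dp))

def logistic_std(c, sigma_dp):
    # the std of Logistic(0, s) is s * pi / sqrt(3); compare it to sigma_dp
    return logistic_scale(c, sigma_dp) * math.pi / math.sqrt(3.0)
```

For $c = 1$, the logistic std falls below $\sigma_{DP}$ at $\sigma_{DP} = 0.5$ but exceeds it at $\sigma_{DP} = 5$, matching the crossover described above.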
We will add the discussion and numerical results for more general cases in the revised version.

**Additional experiments**: We perform additional experiments on FMNIST with $\alpha = 100$ to simulate homogeneous data, and the results (10 repeats) are given below.

|$\mu$|0.04|0.08|0.4|0.8|1.6|
|--------|-------|-------|-------|-------|-------|
|Gaussian Noise|$45.47\pm3.25\\%$|$59.21\pm2.25\\%$|$74.29\pm0.42\\%$|$77.57\pm0.23\\%$|$80.31\pm0.27\\%$|
|G-NoisySign|$45.46\pm3.08\\%$|$59.84\pm2.40\\%$|$74.64\pm0.34\\%$|$77.61\pm0.44\\%$|$80.24\pm0.22\\%$|
|G-NoisySign-Vote|$42.42\pm3.40\\%$|$57.32\pm3.42\\%$|$73.60\pm0.29\\%$|$77.18\pm0.30\\%$|$79.81\pm0.28\\%$|
|L-NoisySign|$45.26\pm3.19\\%$|$59.43\pm2.51\\%$|$74.91\pm0.55\\%$|$77.69\pm0.55\\%$|$80.28\pm0.23\\%$|
|L-NoisySign-Vote|$42.80\pm2.77\\%$|$55.42\pm2.56\\%$|$73.32\pm0.72\\%$|$77.00\pm0.32\\%$|$79.80\pm0.22\\%$|

We also perform more experiments on CIFAR-10 (5 repeats due to limited time) with $\alpha = 100$.

|$\mu$|0.8|1.6|4|8|16|
|--------|-------|-------|-------|-------|-------|
|Gaussian Noise|$41.31\pm1.11\\%$|$48.67\pm0.41\\%$|$56.65\pm0.92\\%$|$63.95\pm0.86\\%$|$67.07\pm1.30\\%$|
|G-NoisySign|$40.47\pm0.78\\%$|$48.13\pm0.78\\%$|$57.48\pm1.03\\%$|$63.41\pm0.45\\%$|$67.21\pm0.50\\%$|
|G-NoisySign-Vote|$38.91\pm0.44\\%$|$46.75\pm0.59\\%$|$55.87\pm0.67\\%$|$61.67\pm1.09\\%$|$66.38\pm0.69\\%$|
|L-NoisySign|$40.85\pm0.33\\%$|$48.55\pm0.50\\%$|$57.32\pm0.91\\%$|$62.63\pm1.15\\%$|$67.79\pm0.75\\%$|
|L-NoisySign-Vote|$39.08\pm0.68\\%$|$46.45\pm1.07\\%$|$55.10\pm0.90\\%$|$62.06\pm0.53\\%$|$65.70\pm1.00\\%$|

**Fewer workers**: We perform more experiments on FMNIST (10 repeats) with 5 workers selected in each round with $\alpha = 0.1$.
|$\mu$|0.04|0.08|0.4|0.8|1.6|
|--------|-------|-------|-------|-------|-------|
|Gaussian Noise|$27.90\pm3.69\\%$|$43.00\pm3.63\\%$|$69.06\pm0.98\\%$|$73.15\pm0.60\\%$|$76.06\pm0.97\\%$|
|G-NoisySign|$28.53\pm4.42\\%$|$42.67\pm3.23\\%$|$68.93\pm1.33\\%$|$73.26\pm0.87\\%$|$76.37\pm0.59\\%$|
|G-NoisySign-Vote|$24.39\pm3.57\\%$|$37.81\pm2.08\\%$|$67.20\pm1.02\\%$|$72.33\pm0.78\\%$|$75.60\pm0.85\\%$|
|L-NoisySign|$25.86\pm4.71\\%$|$42.44\pm2.95\\%$|$68.78\pm1.13\\%$|$73.18\pm0.96\\%$|$76.31\pm0.70\\%$|
|L-NoisySign-Vote|$24.17\pm4.75\\%$|$37.46\pm4.09\\%$|$66.92\pm1.25\\%$|$72.34\pm1.01\\%$|$75.69\pm0.55\\%$|

It can be observed that G-NoisySign and L-NoisySign attain performance comparable to the Gaussian mechanism in all the experiments above. **Performance gap with baselines & best results:** Note that the goal of our paper is not to outperform DP-SGD in accuracy but to show the privacy amplification of sign-based compression. Please refer to our response to Reviewer AkDc under **Marginal improvement over DP-SGD** and **Concern about presenting the best results** for more details. **Number of repeats & essential references:** We will increase the number of repeats and add the reference as suggested in our revised version. We hope that your comments have been addressed adequately. Please let us know if there are any questions. --- Rebuttal Comment 1.1: Comment: ## Round 1 Sincerely sorry for my late response. Most of the concerns have been cleared. - **Question about error analysis:** Thank you for the authors' response. - **Results in Figure 3:** I want to see the results of the more general cases I mentioned before. Can you please describe how Figure 3 changes in general cases? I guess the results will be flipped if the setting goes to homogeneous cases. I think this might help to strengthen a bit more what this paper is claiming and in what situations it might be beneficial.
- Because the example provided in Figure 3 is an extreme case, it cannot represent the general case. Note that the minimum wrong-aggregation probability there is higher than 0.4. - **Additional experiments:** Thank you for the authors' efforts. Everything is clear now. I believe the slight degradation of G-NoisySign compared to L-NoisySign is due to the homogeneous setting ($\alpha=100.0$). --- ## Round 2 Dear authors, I sincerely appreciate your kind response to my questions. I confirm that all the comments I raised are clearly addressed. - **Suggestion (optional and does not affect the score):** I have one additional comment about the bounded-gradient assumption. For FL scenarios, many studies use the gradient heterogeneity assumption instead of the bounded norm to account for heterogeneity of the data distribution. I think it would better represent how the non-i.i.d. data distribution affects the convergence. Anyway, since my initial questions are clearly resolved, I have changed the recommendation from (3: weak accept) to (4: accept). If you have any further questions regarding my suggestion, please let me know. Thank you for your efforts to respond to my questions. --- Reply to Comment 1.1.1: Comment: Dear Reviewer t9hH, We appreciate your further comments and questions, which greatly help improve our paper. -------- **Results in Figure 3**: Indeed, for less heterogeneous cases, L-NoisySign could outperform G-NoisySign. To alleviate your concern, we further describe how Figure 3 changes here. We perform some additional experiments in which a set of 10 workers is considered, with $x_{m} = \mathcal{N}(-1,1)$ for $m < 7$ and $x_{m} = \mathcal{N}(x_{heterog},1)$ otherwise. We examine $x_{heterog} \in \\{1, 3, 5, 10\\}$ for different levels of data heterogeneity.
It is observed that when the data distribution is less heterogeneous (i.e., $x_{heterog} = 1$), L-NoisySign outperforms G-NoisySign, and the probability of wrong aggregation is always smaller than 0.5. In this case, the vanilla SIGNSGD converges well, and Gaussian noise tends to result in a larger variance than Logistic noise with the same privacy guarantees. When the data distribution becomes more heterogeneous (i.e., $x_{heterog} = 3$), G-NoisySign first outperforms L-NoisySign for an appropriately small $\sigma_{DP}$ and then underperforms when $\sigma_{DP}$ keeps increasing (i.e., there exists a crossover point). Intuitively, G-NoisySign yields a smaller probability of wrong aggregation when adding some appropriate noise helps (like dithering), and the performance might degrade when the noise is too large. As data heterogeneity becomes more severe, the crossover happens at a larger $\sigma_{DP}$ (and the difference in probability of wrong aggregation becomes negligible). Moreover, we observe that G-NoisySign attains a smaller minimum probability of wrong aggregation than L-NoisySign for $x_{heterog} \in \\{3, 5, 10\\}$. This validates that G-NoisySign may be more suitable for heterogeneous cases. It is worth mentioning that since $x_{m}$'s are Gaussian in this case, the level of data heterogeneity is somewhat reduced. Therefore, we also examine the scenarios where $x_{m}$'s are fixed, and the results agree with the discussion above. We provide the corresponding numerical results in the anonymous link https://anonymous.4open.science/r/AnonymousICML25Rebuttal-05F8/, and will revise the manuscript accordingly. -------- Thank you again for reading the rebuttal. We hope that your comments and concerns have been adequately addressed, and if this is the case, it would be great if you are willing to kindly increase your score. Please let us know if there are any further questions. 
-------- ## Round 2 Dear Reviewer t9hH, We appreciate your further comments and kind suggestions. **Regarding the bounded gradient assumption**, we agree that using the gradient dissimilarity assumption will provide a more accurate characterization of the impact of data heterogeneity. In fact, the bounded gradient assumption implies the bounded gradient dissimilarity assumption (and, therefore, is stronger). The bounded gradient assumption in this work is not introduced to alleviate the difficulty in convergence analyses caused by data heterogeneity but the bias introduced by gradient clipping. Several existing works [R1, R2] have considered the highly non-trivial impact of clipping on DP-SGD. The joint impact of sign-based compression and gradient clipping is more complicated and challenging to analyze without the bounded gradient assumption, which is an interesting direction for future work. Thanks again for your constructive comments and suggestions. ------- [R1] X. Zhang, et al. Understanding clipping for federated learning: Convergence and client-level differential privacy, ICML 2022. [R2] X. Chen, et al. Understanding gradient clipping in private SGD: A geometric perspective, NeurIPS, 2020
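The toy experiment described in this thread (10 workers with $x_{m} = \mathcal{N}(-1,1)$ for $m<7$ and $x_{m} = \mathcal{N}(x_{heterog},1)$ otherwise) can be approximated with a short Monte-Carlo sketch; only the Gaussian-noise case is shown, and all names and parameters are illustrative:

```python
import numpy as np

def wrong_aggregation_prob(x_heterog, sigma_dp, trials=20000, seed=0):
    """Estimate the probability that the majority vote over noisy signs
    disagrees with the sign of the true mean, in the 10-worker toy setup."""
    rng = np.random.default_rng(seed)
    means = np.array([-1.0] * 7 + [float(x_heterog)] * 3)
    true_sign = np.sign(means.mean())
    # each worker observes x_m, adds DP noise, and sends only the sign
    x = rng.normal(means, 1.0, size=(trials, 10))
    noisy_signs = np.sign(x + rng.normal(0.0, sigma_dp, size=x.shape))
    # server aggregates by majority vote (sign of the sum of signs)
    vote = np.sign(noisy_signs.sum(axis=1))
    return float(np.mean(vote != true_sign))
```

With $x_{heterog}=3$ the true mean is positive but seven workers lean negative, so near-zero noise makes the vote almost always wrong, while moderate noise acts like dithering and lowers the error, consistent with the Figure 3 discussion.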
Summary: SignSGD is a technique to compress a gradient in order to reduce its communication cost. It is typically applied in decentralized or federated SGD, where the transmission of gradients is a regular operation. The key idea behind it is to transmit the sign of each component, reducing the communication cost to one bit per dimension. The paper studies the differential privacy (DP) guarantees of SignSGD. Current techniques to make SignSGD differentially private rely on the addition of noise before compression. They focus on making the uncompressed gradient differentially private, which, by post-processing, implies the differential privacy guarantees of the compressed gradients. However, these techniques ignore the privacy amplification obtained by the sign compression. The current contribution quantifies this amplification, showing that less noise is required to obtain DP, which results in more accurate models. Under the framework of $f$-DP, the paper shows that Noisy SignSGD using Gaussian or Logistic noise achieves performance similar to the uncompressed version despite the 32x compression factor. ## update after rebuttal The authors have clarified my concerns. Therefore, I keep my score supporting acceptance. Claims And Evidence: Appropriate theory and experiments back the claims of the paper. Methods And Evaluation Criteria: The evaluation setting is reasonable for the current problem. Theoretical Claims: I have checked the proofs of Theorems 1 and 2, which seem correct. Experimental Designs Or Analyses: The experiment design is reasonable. However, it would have been more informative to show the convergence across iterations to give an impression of the convergence speed of the evaluated techniques instead of just the final accuracy in Table 1. Supplementary Material: I have reviewed appendices A.1, A.2 and C. Relation To Broader Scientific Literature: The paper seems to reasonably address its relation to sign-based compressors.
However, it does not position itself with respect to other compression techniques under privacy constraints such as [R1-R5] listed below. References: [R1] Triastcyn, Aleksei, Matthias Reisser, and Christos Louizos. "Dp-rec: Private & communication-efficient federated learning." arXiv preprint arXiv:2111.05454 (2021). [R2] Bassily, Raef, and Adam Smith. "Local, private, efficient protocols for succinct histograms." Proceedings of the forty-seventh annual ACM symposium on Theory of computing. 2015. [R3] Feldman, Vitaly, and Kunal Talwar. "Lossless compression of efficient private local randomizers." International Conference on Machine Learning. PMLR, 2021. [R4] Shah, Abhin, et al. "Optimal compression of locally differentially private mechanisms." International Conference on Artificial Intelligence and Statistics. PMLR, 2022. [R5] Liu, Yanxiao, et al. "Universal exact compression of differentially private mechanisms." Advances in Neural Information Processing Systems 37 (2024): 91492-91531 Essential References Not Discussed: To the best of my knowledge, there are no essential references missed in the paper. Other Strengths And Weaknesses: I have enjoyed reading the paper. It is in general well written and provides clear explanations of each key concept. Other Comments Or Suggestions: I have only minor comments: - The third sentence of the abstract is a bit long and complicated. It could be reformulated for clarity - The font size of Figure 2 is too small and should be increased Questions For Authors: Please address the broader positioning to other privacy preserving compressors mentioned in the review. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer Jupt, We appreciate your time and effort in reviewing our paper and providing a positive evaluation. Please find our point-by-point response below. **Relation to other compression techniques under privacy constraints:** Thank you for pointing out this important aspect. We note that the compression of differentially private mechanisms aims to compress and simulate the distribution of the DP randomizer, usually in the presence of some shared randomness. The resulting compressed mechanisms have a smaller communication cost compared to the original mechanism while retaining the (or weakened) privacy guarantee. On one hand, our work endeavors to quantify the privacy amplification effect of the sign-based compressor instead of achieving minimum communication cost for a given differential privacy mechanism. On the other hand, we believe that our method can be further combined with these compression schemes. The problem concerning privacy-utility-communication trade-off is important and interesting, which will be considered in our future work. We will add a literature review and the corresponding discussions in related works in the revised version. **Convergence across iterations:** We will add the corresponding figures showing the convergence with respect to the communication round or communication overhead in the revised version. **Third sentence of the abstract:** We will revise and break it down into two sentences for clarity. **The font size of Figure 2:** We will revise accordingly. We hope that your comments have been addressed adequately. Please let us know if there are any questions. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for your reply. I have no further comments or concerns. I will confirm my score after the discussion with other reviewers.
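The 32x communication saving discussed in this thread comes from transmitting one bit per coordinate instead of a 32-bit float; a sketch of the bit packing, assuming NumPy on both ends (the variable names are illustrative):

```python
import numpy as np

d = 1000                                   # gradient dimension (illustrative)
rng = np.random.default_rng(0)
signs = np.sign(rng.normal(size=d))
signs[signs == 0] = 1                      # break (measure-zero) ties

packed = np.packbits(signs > 0)            # 1 bit per coordinate on the wire
float_bytes = d * 4                        # baseline: one 32-bit float each
ratio = float_bytes / packed.nbytes        # = 32 when d is a multiple of 8
```

The receiver recovers the signs with `np.unpackbits(packed)[:d]`, mapping 1 to +1 and 0 to -1.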
Summary: This paper considers the privacy guarantees of Noisy SignSGD, an algorithm that adds noise to a value, then releases its sign. It is shown that releasing the sign of the value, rather than the noisy value itself, amplifies the privacy guarantees. A method of majority vote for aggregating the sign gradient is also proposed. Numerical experiments confirm that the proposed method, despite its biased nature, is on-par with the classical DP-SGD algorithm. Claims And Evidence: Theoretical claims are provided with full proofs, and empirical claims are in line with the empirical results. Methods And Evaluation Criteria: Benchmark datasets and the used baselines make sense. Theoretical Claims: Not in all details, but derivations presented in the main text seem correct. Experimental Designs Or Analyses: Experimental design is rather sound, with reasonable choice of datasets and problems and a wide range of hyperparameter choice. However, clipping threshold is set arbitrarily to $C=1$ or $C=2$, which may have an important impact on the behaviour of the algorithm, especially since it relies only on the sign. The statement "We run all the algorithms for 5 repeats and present the best results" is slightly concerning, as it is a bit unusual in my opinion. Generally, one would rather report the average and standard deviation rather than maximum and standard deviation. Supplementary Material: I only skimmed through it. Relation To Broader Scientific Literature: Related work with respect to sign SGD and to differential privacy are discussed. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: **Strengths** 1. The paper shows that Sign SGD can enhance privacy guarantees, which is in line with the intuition that releasing only the sign can improve privacy guarantees as less information is released. 2. Numerical experiments show that the proposed methods is on par with DP-SGD for a given privacy guarantee. **Weaknesses.** 1. 
While it is shown that privacy guarantees are amplified, the amplification is rather small. This should rather be referred to as a "tighter evaluation of the privacy of sign SGD" than proper amplification. This is in line with Figure 1, which shows that the analysis is tighter rather than showing amplification (e.g., with the noise to add divided by the magnitude of the gradient). 2. Majority voting as an aggregation procedure is interesting, but it almost systematically decreases accuracy in experiments, which is a bit disappointing. 3. Theoretical guarantees for noisy sign SGD are rather weak and do not account for clipping, although one could expect that clipping has a somewhat important impact on the result. 4. Experiments only highlight marginal improvement from sign SGD in comparison with DP-SGD with Gaussian noise. Moreover, the step size is heavily tuned: in sign methods, this may amount to "guessing the scale of the gradient by hyperparameter tuning". Tuning this hyperparameter may have less impact on DP-SGD. Other Comments Or Suggestions: It seems that sign SGD could allow gradients not to be clipped before applying the sign function. For instance, one may add noise to the gradient (possibly using the gradient's scale). This does not provide privacy in itself, but privacy could be enforced later using mechanisms like randomized response to perturb the sign of the gradient. The resulting procedure could give DP guarantees without requiring knowledge of the scale of the gradient beforehand. As such, no clipping is required when computing the gradient, while privacy is preserved by randomized response. I understand this is a complicated question, and mention this as a potential direction for future research, but would it be the case that randomizing the sign could further improve the privacy guarantees and yield additional amplification? Questions For Authors: 1.
It seems that "amplification" is rather small: are there settings where one can obtain arbitrarily large gains, or is amplification bounded by a constant factor? (In which case I would rather talk about "tighter analysis of DP guarantees of noisy SignSGD".) 2. How important is the hyperparameter tuning step for noisy Sign SGD, in comparison with DP-SGD? In particular, is one of the two algorithms more sensitive to hyperparameter tuning than the other? Code Of Conduct: Affirmed. Overall Recommendation: 2
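A minimal sketch of the majority-vote aggregation mentioned in this review (the server takes the sign of the summed one-bit updates); the function name is illustrative:

```python
import numpy as np

def majority_vote(sign_updates):
    # server-side aggregation of 1-bit updates: sign of the coordinate-wise
    # sum, so the broadcast back to clients is also one bit per dimension
    return np.sign(np.asarray(sign_updates).sum(axis=0))
```

With an even split on a coordinate the sum is zero and the vote returns 0 there; how ties are broken is a design choice not fixed by this sketch.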
Rebuttal 1: Rebuttal: Dear Reviewer AkDc, We appreciate your time and effort in reviewing our paper and providing constructive comments. Please find the response to the comments below. **Concern about presenting the best results:** This is due to unclear wording on our part. We run the algorithms for 5 repeats, compute the mean and standard deviation, and present the results with the highest mean accuracy over all the tuned learning rates. We will revise the statement accordingly to avoid confusion. **Privacy amplification or tighter evaluation:** We note that in Fig. 1, the black curve shows the privacy of the Gaussian mechanism, and the blue curve that of the Gaussian mechanism combined with $sign$. The improvement is due to the privacy amplification effect of $sign$. **Privacy amplification is small:** As we discussed in Remark 2, compared to the Gaussian mechanism without $sign$, there is an improvement by a factor of $\sqrt{\frac{\pi}{2}} \approx 1.25$. In such a case, if a Gaussian mechanism with noise standard deviation $\sigma$ gives $\mu_{G} = \frac{2C}{\sigma}$-GDP, further incorporating $sign$ provides $\mu_{GNS} = \frac{2C}{\sigma\sqrt{\frac{\pi}{2}}} \approx 0.8\mu_{G}$, i.e., a decrease in $\mu$ of around 20\%. **Accuracy decrease for majority vote:** Note that majority vote improves the server-to-client communication efficiency and may provide another improvement in privacy if central DP is concerned (consider the privacy leakage of releasing the aggregated results), since taking the majority vote is equivalent to adopting $sign$ at the server side. Unfortunately, the improvement in communication efficiency and central DP comes with a loss in accuracy. The interplay among communication, privacy in terms of central DP, and accuracy would be an interesting future direction. **Impact of clipping:** We agree that studying the impact of clipping is important, which has been considered in some existing works, e.g., [R1-R2].
However, we want to emphasize that the convergence analyses are not our major contribution. Instead, we focus on the privacy amplification effect of sign-based compressors. Therefore, we follow the existing literature and adopt the bounded gradient assumption. [R1] X. Zhang, et al. Understanding clipping for federated learning: Convergence and client-level differential privacy, ICML 2022. [R2] X. Chen, et al. Understanding gradient clipping in private SGD: A geometric perspective, NeurIPS, 2020. **Marginal improvement over DP-SGD:** We would like to clarify that our goal is not to show that Noisy SignSGD outperforms DP-SGD in accuracy, but that compression by $sign$ does not result in much loss in the privacy-accuracy tradeoff. In particular, discarding the magnitude leads to a significant improvement in communication efficiency (32$\times$, assuming each float is represented by 32 bits), which may in turn hinder accuracy. We show that the compression, while potentially costing accuracy, leads to privacy amplification. A privacy-accuracy tradeoff comparable to DP-SGD means that the improvement in communication efficiency is obtained almost for free. **Randomizing the sign:** Randomizing the sign could further improve the privacy guarantees, and Theorem 1 in the paper can capture the amplification as long as the probability of flipping is known. However, without clipping, we may not utilize the privacy amplification effect of taking the signs since it is not DP on its own. **Step size tuning**: In our experiments, we find that DP-SGD is more sensitive to learning rates. The best learning rates of DP-SGD vary for different $\mu$'s, while those of G-NoisySign and L-NoisySign (and the majority vote variants) only change slightly. This may be attributed to the fact that adding noise with a large variance to the gradients makes the training process less stable (e.g., a large noise may lead to a large deviation in model updates), so a smaller learning rate is preferred.
The clipping effect of the $sign$ compressor may help in this case. We will add more discussion in the revised version. **The choice of clipping threshold:** We note that many existing works adopt a similar clipping threshold for experiments at a similar scale. For example, [R3] suggested a consistent performance with $C \in \\{1,2,3,4,5\\}$ for MNIST. [R4] adopted $C=1$ for both MNIST and CIFAR-10. [R5] set $C \in \\{1, 1.5\\}$ for MNIST. In our experiments, we tested different $C \in \\{1,2\\}$ for FMNIST, and the trends are consistent with those presented in the paper. We will add more detailed discussions in the revised version. [R3] M. Abadi, et al. Deep learning with differential privacy. ACM SIGSAC conference on computer and communications security, 2016. [R4] Q. Zheng, et al. Federated f-differential privacy. AISTATS, 2021 [R5] J. Dong, et al. Gaussian differential privacy. Journal of the Royal Statistical Society: Series B, 2022 We hope that your comments have been addressed adequately. Please let us know if there are any questions. --- Rebuttal Comment 1.1: Comment: Thank you for your answer. I understand that this can be seen as some kind of "amplification", although this is a very small amplification (about 1.25x), the phenomenon is interesting to study, and the result is nice. Nonetheless, I still think that the title "Noisy SIGNSGD Is More Differentially Private Than You (Might) Think" is a bit of an overstatement, since we are talking about a 1.25x amplification. > "We would like to clarify that our goal is not to show that noisy SignSGD outperforms DP-SGD in accuracy, but that compression by $sign$ does not result in much loss in privacy-accuracy tradeoff." I understand, and I agree this is an interesting phenomenon, especially in settings where communication bandwidth is limited. 
Nonetheless, **this phenomenon mostly seems due to the fact that variance dominates the bias in such problems, rather than to the amplification of privacy by a factor of at most 1.25**. This cannot be seen in the current experiments, which only include the "amplified noisy-signSGD" and the classical DP-SGD with Gaussian noise. > "However, without clipping, we may not utilize the privacy amplification effect of taking the signs since it is not DP on its own." It could be DP on its own if the sign were randomized, and this could have more impact on "amplification". --- Reply to Comment 1.1.1: Comment: Dear Reviewer AkDc, We sincerely appreciate your further comments and questions. ------------- The composition property in Lemma 2 suggests that, for the same overall privacy guarantee, **utilizing the privacy amplification effect allows for $\frac{\pi}{2} \approx 1.5\times$ training steps with the same Gaussian noise variance per step**. We believe that 50% more training steps is meaningful in improving the overall performance, as also suggested in [R6]. To alleviate your concern, given the limited time, we performed some additional experiments on Fashion-MNIST ($\alpha = 0.1$, 10 repeats). Particularly, we add two more baselines: (1) G-NoisySign without utilizing the privacy amplification analyses (i.e., adding Gaussian noise with the same variance as DP-SGD); (2) G-NoisySign that runs 320 rounds (slightly more than $500 \times 2/\pi$ rounds) instead of 500 rounds. The results are given below. **It can be observed that G-NoisySign and G-NoisySign-Vote outperform the two additional baselines under all the examined privacy budgets, which validates that the privacy amplification indeed leads to improvement in test accuracy.** Note that the results for the 5 algorithms in the manuscript are slightly different since we run more repeats here. 
|$\mu$|0.04|0.08|0.4|0.8|1.6| |--------|-------|-------|-------|-------|-------| |Gaussian Noise|$43.78\pm1.98\\%$|$58.69\pm1.48\\%$|$73.48\pm 0.49\\%$|$77.17\pm0.51\\%$|$79.90\pm 0.25\\%$| |G-NoisySign|$45.24\pm2.51\\%$|$58.46\pm2.48\\%$|$73.78\pm0.57\\%$|$77.20\pm0.29\\%$|$79.57\pm0.57\\%$ |G-NoisySign (320 Round)|$40.94\pm4.49\\%$|$54.49\pm2.68\\%$|$72.71\pm0.68\\%$|$76.15\pm0.46\\%$|$78.70\pm0.65\\%$ |G-NoisySign w.o. privacy amplification|$37.36\pm4.07\\%$|$52.06\pm2.54\\%$|$72.83\pm0.62\\%$|$76.49\pm0.43\\%$|$79.11\pm0.45\\%$ |G-NoisySign-Vote|$42.18\pm4.03\\%$|$56.59\pm1.91\\%$|$73.23\pm0.66\\%$|$76.47\pm0.61\\%$|$79.27\pm 0.37\\%$ |G-NoisySign-Vote (320 Round)|$35.79\pm5.61\\%$|$49.85\pm2.65\\%$|$71.69\pm0.64\\%$|$75.17\pm0.39\\%$|$78.01\pm0.60\\%$ |G-NoisySign-Vote w.o. privacy amplification|$38.00\pm5.26\\%$|$49.85\pm3.22\\%$|$71.64\pm0.83\\%$|$75.84\pm0.55\\%$|$78.52\pm0.39\\%$ |L-NoisySign|$42.52\pm3.11\\%$|$58.39\pm1.53\\%$|$73.89\pm0.78\\%$|$77.18\pm0.31\\%$|$79.66\pm0.45\\%$ |L-NoisySign-Vote|$40.46\pm3.82\\%$|$53.89\pm1.82\\%$|$73.23\pm0.47\\%$|$76.49\pm0.31\\%$|$79.24\pm0.31\\%$ ------- **Randomizing the sign**: Indeed, further randomizing the signs on top of the current mechanism will add another level of privacy amplification, and the gain could be arbitrarily large (flipping the sign with a probability of 0.5 yields perfect privacy). ------- We hope that your comments have been addressed adequately. Please let us know if there are any questions. [R6] J. Jang, et al., Rethinking dp-sgd in discrete domain: Exploring logistic distribution in the realm of signSGD. ICML, 2024.
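For readers less familiar with the mechanism under discussion: one client clips its gradient, adds Gaussian noise, and transmits only the coordinate signs. The sketch below is an illustrative reading of G-NoisySign, not the paper's reference implementation; the function name and stdlib-only style are ours, and the exact calibration of $\sigma$ to the privacy budget is not reproduced.

```python
import math
import random

def g_noisy_sign(grad, clip_c, sigma, rng):
    """One client step of (an illustrative reading of) G-NoisySign: clip
    the gradient to L2 norm `clip_c`, add N(0, sigma^2) noise per
    coordinate, and send only the signs (1 bit per coordinate instead
    of 32, giving the 32x communication saving discussed above)."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_c / norm) if norm > 0 else 1.0
    noisy = [g * scale + rng.gauss(0.0, sigma) for g in grad]
    return [1 if g >= 0 else -1 for g in noisy]

rng = random.Random(0)
print(g_noisy_sign([0.5, -2.0, 0.1], clip_c=1.0, sigma=0.0, rng=rng))  # -> [1, -1, 1]
```

In the "-Vote" variants discussed above, the server would then aggregate by a coordinate-wise majority vote over the received signs.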
Large Language Model-driven Large Neighborhood Search for Large-Scale MILP Problems
Accept (spotlight poster)
Summary: This paper proposes LLM-LNS, a framework leveraging Large Language Models (LLMs) to drive Large Neighborhood Search (LNS) for solving large-scale Mixed Integer Linear Programming (MILP) problems including Online Bin Packing, Traveling Salesman Problems, SC, MVC, MIS, and MIKS. The main novelty lies in a dual-layer self-evolutionary LLM agent that generates and refines heuristic strategies for neighborhood selection. Experiments validate superior performance over state-of-the-art methods like FunSearch, EOH, and Gurobi. ## Update after rebuttal I maintain my review after the rebuttal. Claims And Evidence: ### Key Points 1. The authors claim that the proposed LLM-LNS framework surpasses traditional LNS methods, advanced solvers like Gurobi and SCIP, and modern ML-based frameworks. However, the comparison uses only four academic datasets, so a more realistic benchmark such as MIPLIB 2017 should be used to validate its effectiveness. Methods And Evaluation Criteria: The authors introduce a dual-layer LLM agent, which evolves heuristic strategies and evolutionary prompts. The framework adapts neighborhood size dynamically based on search progress, allowing for efficient exploration of large solution spaces. Theoretical Claims: None Experimental Designs Or Analyses: I would like to know the settings of Gurobi and SCIP because most of the solving time is spent on proving optimality. Supplementary Material: Yes, I would suggest moving Algorithm 1 into the main part. 
Other Strengths And Weaknesses: **Strengths:** - The proposed framework is highly innovative and demonstrates clear improvements over existing methods. **Weaknesses:** - More realistic large-scale datasets like MIPLIB 2017 should be tested. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 4X7L for the thoughtful and constructive feedback. We are glad that you find the proposed LLM-LNS framework innovative and recognize its performance improvements over existing methods. We appreciate your suggestions regarding dataset realism, solver settings, and related work, and we address each of these points in detail below. **Claims And Evidence and Weakness:** **A1:** Thank you very much for the helpful suggestion. We fully agree that evaluating on realistic datasets is important. While MIPLIB 2017 is a valuable benchmark, many of its instances are either heterogeneous or not large enough to meaningfully evaluate LLM-driven heuristics. Therefore, we carefully selected two problem classes from MIPLIB — dws and ivu — which both contain multiple large-scale instances of the same type, allowing for a meaningful train/test split and consistent evaluation. Specifically, we used the largest instances from each class as the test set, and the remaining instances for training. All methods were evaluated under a 3000s time limit. The results below demonstrate that LLM-LNS generalizes well to realistic, large-scale industrial problems. We will include these experiments in the appendix and plan to evaluate broader MIPLIB categories in future work. | | Random-LNS |ACP| CL-LNS | Gurobi | GNN&GBDT | Light-MILPOpt | LLM-LNS(Ours) | | :-----------: | :--------: | :------: | :----: | :------: | :------: | :-----------: | :-----------: | | dws(Minimize) | 189028.6 | 186625.3 | - | 146411.0 | * | 147417.7 | **143630.5** | | ivu(Minimize) | 14261.3 | 9998.6 | - | 27488 | * | - | **3575.9** | **Experimental Designs Or Analyses:** **A1:** Thank you for the question. We used the default settings for both Gurobi and SCIP. To ensure a fair comparison, all baselines were run under the same computational constraint: 16 cores and 32 threads within a fixed time limit. 
We will include a description of these settings in the Experimental Setup section of the revised manuscript. **Supplementary Material:** **A1:** Thank you very much for the suggestion! We agree that moving Algorithm 1 into the main text would help clarify the overall workflow of our method. We will include it in the main paper if space permits during the revision. **Essential References Not Discussed:** **A1:** Thank you very much for pointing this out. BTBS-LNS is a recently accepted paper at ICLR 2025, which proposes a reinforcement learning-based LNS framework with a binarized-tightening and branching mechanism for solving MIP problems. While the approach is related in spirit, the code is not publicly available at this time. Moreover, the experiments in BTBS-LNS are mainly conducted on relatively small-scale problems (typically with tens of thousands of decision variables), and our previous experience suggests that reinforcement learning struggles to scale effectively in much larger problem settings, which are the main focus of our work. We will add a discussion of BTBS-LNS in the Related Work section, and if the code is released in time, we will consider including a comparative experiment in the final version. We sincerely appreciate Reviewer 4X7L’s insightful comments and constructive suggestions. They have helped us significantly improve the clarity and completeness of our paper. We have carefully addressed each point in our revision and will incorporate the proposed changes into the final version. Thank you again for your thoughtful review and valuable feedback.
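As a concrete sketch of the setup described in A1 (default solver settings plus a thread cap and a fixed time limit), assuming the gurobipy API; SCIP exposes analogous limits through its own parameter interface. This is a configuration fragment only, with the model construction elided.

```python
# Sketch only: assumes gurobipy is installed and licensed.
# Parameter names (Threads, TimeLimit) follow Gurobi's documented API.
import gurobipy as gp

m = gp.Model()
# ... build the MILP here ...
m.Params.Threads = 32        # match the 32-thread budget used for all baselines
m.Params.TimeLimit = 3000.0  # fixed wall-clock limit in seconds
# m.optimize()
```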
Summary: This paper describes an evolutionary LLM-based framework to produce heuristics represented as code. It extends previously-proposed evolutionary methods for heuristic search with two additional techniques: prompt evolution and directional evolution. Prompt evolution focuses on finding good LLM prompts to produce heuristic strategies, and directional evolution uses meta-prompts to provide strategy fitness values to LLMs as feedback to improve strategies. This is applied for both typical combinatorial heuristics, evaluated on online bin packing and the traveling salesman problem, and for variable selection heuristics in Large Neighborhood Search in MILP, evaluated in four typical MILP problem classes. Computational results are encouraging when compared with previous non-ML, ML, and LLM-based approaches. ## Update after rebuttal I maintain my review after the rebuttal. Claims And Evidence: There are two main methodological contributions: 1, the strategy-search addition to the stack of evolutionary LLM layers for searching for heuristics to solve a combinatorial optimization problem, which enable the LLM to diversify and control its own search strategies, and 2, the use of LLM-based heuristic search in the context of Large Neighborhood Search. Their effectiveness is well-supported by a comprehensive set of experiments in various forms: only the LLM framework without the LNS for online bin packing and TSP in Sec. 4.1, ablation studies on each LLM strategy in Appendix G, and the full framework with LNS in Sec. 4.2. The appendix also contains other supporting experiments, such as variations in population size, variations in LLMs, and stability. Experiments on online bin packing and TSP show that this stack of evolutionary methods can improve upon previous similar approaches by a reasonable margin, and the ablation studies confirm that both prompt evolution and directional evolution make a significant difference. 
These improvements may not always be consistent in the sense that turning off one or the other could be better, but they appear to be consistently better than not using either approach. The computational results on LNS are also very positive and encouraging. In general, the computational evidence is comprehensive and covers a good number of different scenarios. Methods And Evaluation Criteria: The LLM strategy search makes sense: the heuristic search itself might require some guidance of which directions to explore. The example in Figure 3 is useful to understand what it is doing, and it is very interesting to see how the prompt strategy can guide the heuristic strategy towards certain known algorithmic directions. While it is always tricky to truly evaluate LLM behavior with high-level methods like these, the ablation tests suggest meaningful improvements. I suspect that this could also lead to more interpretable heuristics since the heuristic search is directed by broad algorithmic strategies, but this study is not present in the paper (it would be very interesting to see, but I do not expect it for this rebuttal). The LNS research direction is also promising. Based on my own experience as an OR practitioner, I have had the prior belief that if there is any entry point to combine LLM-based function search with OR solvers, LNS is probably one of the most suitable ones. Variable selection strategies in LNS are flexible enough so that improvements in functions can produce significant impact, simple in scope enough to let LLMs work without a lot of complexity, and leverage the full generality of OR solvers. It is encouraging to see this in action and working well. As far as I know, both of these approaches are novel, though I am more familiar with the LNS literature than the LLM one. The computational evaluation appears to be careful and generally comprehensive. The experiments cover a good variety of both problems and baselines. 
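To ground what kind of object is being evolved in the Sec. 4.1 experiments: for online bin packing, the search is over scoring functions of roughly the shape below (a hand-written best-fit stand-in in the FunSearch/EOH style, not one of the paper's evolved heuristics; all names are illustrative).

```python
def priority(item, bins):
    """Best-fit style scoring: highest score for the feasible bin with the
    least leftover capacity; -inf marks bins that cannot fit the item.
    A hand-written stand-in for the heuristics an LLM would evolve."""
    return [-(cap - item) if cap >= item else float("-inf") for cap in bins]

def online_pack(items, bin_size):
    """Fixed evaluation loop: the evolutionary search would mutate
    `priority` while this packing loop stays the same."""
    bins = []  # remaining capacities of open bins
    for item in items:
        scores = priority(item, bins)
        if not bins or max(scores) == float("-inf"):
            bins.append(bin_size - item)  # open a new bin
        else:
            bins[scores.index(max(scores))] -= item
    return bins

print(len(online_pack([4, 6, 3, 7], bin_size=10)))  # -> 2
```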
For the MILP instances, I tried to get a sense of how realistic these instances are, but unfortunately I could not find how these instances were generated. Based on the number of variables and constraints, they seem rather sparse (e.g. 3x constraints on independent set or vertex cover means that the average degree of the graph is 6). That said, while ideally it would be better to have more realistic applications, it is not a significant concern. I do have some concerns related to the LNS baselines: 1. I am surprised that your CL-LNS baseline is performing so poorly, which is not what you would see in (Huang et al., 2023b). Do you know why? I do notice that you are solving much larger instances than in that paper, but it is still a little surprising and I wonder what exactly is the issue. 2. If the behavior of CL-LNS is indeed correct, then I would argue that this paper is missing something closer to state-of-the-art LNS. If random LNS is your best LNS baseline, then I do not think you have SOTA LNS in your experiments. What I suggest is adding CP-SAT as a baseline to the paper. While CP-SAT does a lot more than LNS, it contains a competitive LNS implementation and it does not require you to have to implement yet another LNS algorithm. You have two options here, use it as a full blown solver (which would belong in the Gurobi/SCIP category), or if you want only its LNS capabilities for purposes of comparison, you can turn on the use_lns_only parameter. Make sure to increase the number of workers from the default as the various types of LNS are only active with a certain number of workers (I suggest 16, or 32 if you want; this may depend on how many threads other baselines are using for a fair comparison). It is of course not an ML-based approach, so it may not perform better than other ML-based approaches, but I expect it to be a better baseline than random LNS, since it implements random LNS and other LNS techniques. 
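A minimal configuration sketch for this CP-SAT suggestion (OR-Tools' Python API; `use_lns_only` and the worker count are documented SatParameters fields, but field names occasionally change across versions, so they should be double-checked against the installed release):

```python
# Configuration sketch for the suggested CP-SAT baseline; model building elided.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
# ... build the (pure-integer) model here ...

solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 3000.0
solver.parameters.num_search_workers = 16  # several LNS workers only activate with enough threads
solver.parameters.use_lns_only = True      # restrict CP-SAT to its LNS portfolio
# status = solver.Solve(model)
```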
Theoretical Claims: The paper has no theoretical claims. Experimental Designs Or Analyses: The experimental setup appears sound. As mentioned above, I consider these experimental results to be very positive, but I have a few comments and suggestions for improvement. 1. Looking at the results from Table 4, I first thought that you might be slightly overstating the contributions: my read of Table 4 is that LLM-LNS is a clear advantage over Gurobi and SCIP, but it is only a slight advantage to Light-MILPOPT. However, I believe your results are actually stronger than it may first appear from reading the main text. * This is more visible in Fig. 8: even if the results are fairly close at the time limit, LLM-LNS can obtain very good solutions early on, which is valuable in practice. I would like to make three recommendations to emphasize the strength of your own results: 1, show Figure 8 in the main text if you can find some space, 2, report the primal integral as well, and 3, include a sentence on this in the main text. The primal integral is a common way in the OR/MILP literature to quantify heuristic performance over time (I left another comment on this below). * Furthermore, I would actually consider simply being competitive with the ML-based approaches a success: in certain use cases, the result of LLM-based approaches can be preferable over typical neural network approaches because they are significantly easier to deploy and maintain. From a software engineering / systems perspective, it is easier to copy-and-paste and maintain a short snippet of a code rather than needing to store and maintain neural network data for an algorithm. Of course, they have different obstacles in the "training" step, so this opinion might not be prevalent or suitable for every use case, but this is definitely a relevant advantage for those interested in practical deployment. Perhaps this is something you might want to note in the paper. 
Next, I discuss some areas that could have been better covered: 2. I was looking for results on the amount of time and compute needed to produce these heuristics, and I could not find them. I am not sure if I missed them, but I believe these are important to include (even if you only have estimates). Please include them if they are not in the paper, or let me know where they are if they already are. 3. If I understand correctly, I do not see a per-subproblem time limit in LNS. This is a little surprising for a number of reasons: 1, practical LNS implementations typically have some time limit in subproblem solves so it does not blow up, especially when they accidentally set the neighborhood size too high; 2, although the fitness function would filter these out, you may be wasting time running slow code proposed by LLMs; 3, as noted in Section F.1, this is risky if you are trying to generalize to larger instances. I believe a subproblem time limit can help make the approach more robust. That said, it is encouraging to hear that this method works well even without a per-subproblem time limit. I understand that this is not something that you can do at rebuttal, but it feels omitted and a brief discussion about this somewhere in the paper would be useful to at least highlight that this is something that can be done. 4. Another topic that I felt could be better covered in the paper is the fitness function. The paper proposes simple unnormalized averaging. This works well for the random instances studied in this paper because all objective values likely have about the same order of magnitude, but this is not always true for more heterogeneous sets of problems. In those cases, instances with larger orders of magnitude in objective value may be much more emphasized than those with smaller ones, and thus the heuristic may overfit to them. 
While simple averaging is fine for this work, I feel it is appropriate to include a short discussion about this in the fitness function section in the Appendix. 5. There is little discussion on the input parameters to the variable scoring heuristic, which I feel is an important topic. Looking at the examples, I see names like "site", "value", "constraint", but I cannot figure out what exactly they are. Even those that are self-explanatory, it would be nice to know how they are represented. Could you please describe what they are in the paper? In particular, why did you choose these parameters and not others? Could you have done better if you have chosen others, or even omitted some (as this affects how the LLM behaves)? Please add a discussion of this topic to the paper. Supplementary Material: Given the length of the appendix, I skimmed through it and read in detail some of the more important sections, but not in its entirety. Relation To Broader Scientific Literature: I believe the paper does a decent job at relating to previous work. Essential References Not Discussed: I do not have further references to add. Other Strengths And Weaknesses: The paper is generally comprehensive and well-written. Overall, I believe this paper is a solid step towards effectively leveraging LLMs in practical OR applications. Other Comments Or Suggestions: Here are other minor comments, mostly on presentation: 1. In Figs 7-10, Light-MILPOPT seems to sometimes become worse. I am guessing you are taking the current solution, but this is not the right way to present the result, since you can always keep the best solution. Please correct these plots to report the best solution value found so far in each time step, rather than the current one. 2. Table 4 is difficult to read since improvements on each column depend on whether it is a minimization or maximization problem. An option here is to use primal gap instead, which is traditional in MILP. 
See, for example, Table 2 in the CL-LNS paper (Huang et al., 2023b in your paper). If you do not want to use it, at least add some indicator on whether you are minimizing or maximizing in each column. 3. As previously discussed, primal integral would be great to see as well (see Berthold, "Measuring the impact of primal heuristics"). You seem to have the data to compute that based on Figs. 7-10. Please note comment 1: use the data for the best solution found so far to compute the primal integrals, not the current value like Light-MILPOPT is showing in the submitted version. 4. The ablation study is important to understand the impact of the prompt evolution and the directional evolution. Space permitting, consider adding a sentence in the main text summarizing the results, and point to the appendix. 5. Could you see if you could fit into the start of Sec. 4.1 a couple of sentences explaining the heuristic framework for online bin packing and TSP, i.e. a very brief summary of what you have in Appendix C.4? In my readthrough, I got confused as I thought you were using LNS for these problems, but you are not using LNS here. I believe this misunderstanding is easy to make, so I would even encourage to explicitly add in Sec. 4 or 4.1 that this is not using LNS for these problems. 6. Could you mark in Table 17 which TSPLib instances were used for training? This should be easy enough to add and it would be helpful to get some more information on generalization. 7. In Table 4, please add to the caption the difference between set 1 and 2. 8. Sec. 4.1.2: I presume that you are using the routing solver in Google OR-Tools and not CP-SAT. Please include for clarity, since there are multiple solvers in OR-Tools. 9. Section C.2: Could you please add here that $T$ is for the entire LNS solve (i.e. not for each subproblem), and that you solve each subproblem to optimality? I believe this is the case. 10. Section C.5 seems to be missing information on the instance generation process. 
11. Would you be willing to provide in the final version the optimal heuristics for all six problems? This can be included with the code. It would be interesting to see if there is anything interpretable in those heuristics. Furthermore and optionally, it might be insightful to highlight in the paper the algorithmic directions that appeared in prompts of the best heuristics for each problem class. Questions For Authors: All my questions are throughout the above sections. While I am recommending acceptance, there are a few gaps in the paper and I am assuming the authors can provide a reasonable rebuttal. Please go through them carefully and address them, as I believe it can strengthen the paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
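For reference, the primal-integral computation suggested in the comments above (best-so-far incumbents first, then the integral of the primal gap over time, here for minimization) amounts to the following. `best_known` is whatever reference value one normalizes against, and the relative-gap formula used is one common convention, not the only one.

```python
def best_so_far(objs):
    """Running minimum of the incumbent objective values (minimization)."""
    out, cur = [], float("inf")
    for v in objs:
        cur = min(cur, v)
        out.append(cur)
    return out

def primal_integral(times, objs, best_known, t_end):
    """Integrate the primal gap over [0, t_end], treating the incumbent as
    a step function; times[i] is when objective objs[i] was found. Uses
    the relative-gap convention gap = |obj - ref| / max(|obj|, |ref|)."""
    vals = best_so_far(objs)
    pts = list(zip(times, vals)) + [(t_end, vals[-1])]
    total = 0.0
    for (t0, v), (t1, _) in zip(pts, pts[1:]):
        gap = abs(v - best_known) / max(abs(v), abs(best_known), 1e-12)
        total += gap * (t1 - t0)
    return total

# One improving solution at t=5s, evaluated up to t=10s:
print(primal_integral([0.0, 5.0], [200.0, 100.0], best_known=100.0, t_end=10.0))  # -> 2.5
```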
Rebuttal 1: Rebuttal: We sincerely thank Reviewer r212 for the detailed, thoughtful, and constructive feedback. We greatly appreciate your recognition of the contributions and your suggestions for improving the clarity, robustness, and completeness of the work. **Methods And Evaluation Criteria:** **A1:** We agree that fully evaluating high-level LLM behavior is challenging. As suggested, we will include the full evolution traces of both strategies and prompts in the appendix to improve interpretability. Regarding instance realism, as noted in our response to Reviewer 4X7L, we have added experiments on two large-scale MIPLIB problem classes to further validate the robustness of our method. **A2:** CL-LNS was designed for small-scale MILPs and struggles to scale to our large instances with hundreds of thousands of variables. Training takes over 8 hours per instance, and inference is 30× slower than our method, making it impractical. Following your suggestion, we added CP-SAT (16 threads, both original and use_lns_only) as a new baseline. As shown in Table 1 at the supplementary link, CP-SAT supports only integer programs and performs well on easier problems like MIS and MVC. However, on more complex large-scale tasks such as SC${}_{2}$, its performance lags behind several other baselines. **Our method still outperforms all baselines across tasks.** Due to time constraints, further analysis of CP-SAT—including using it as a subsolver to enhance LLM-LNS—will be included in future work. **Experimental Designs Or Analyses:** **A1:** We agree your suggestions help better highlight our method’s strengths. We will revise the paper to include Figure 8 in the main text if space permits. Following your advice, we also report the primal integral in Supplementary Table 2, which further confirms the effectiveness of LLM-LNS. We appreciate your point on the deployment advantage of LLM-based methods and will add a note on this practical benefit in the revision. 
**A2:** Our method evolves prompt strategies during training, not inference, so it does not affect solving efficiency. Although we add a prompt evolution layer, its total time remains comparable to EOH, with minor overhead from fitness evaluation via testing. Evolution times (in minutes) for BinPacking and TSP over 20 generations are reported in Rebuttal to Reviewer G8Pp, and will be added to the appendix. **A3:** You're absolutely right—setting a time limit per subproblem is important for robustness. In our implementation, we do impose a maximum time per LNS iteration: 100 seconds for problems with ~100K variables, and 200 seconds for ~1M variables. We will clarify this in the experimental setup section of the paper. **A4:** We agree that unnormalized averaging may be biased on heterogeneous datasets. In ongoing work, we explore multi-metric fitness (e.g., gap estimation integral, k-step improvement rate). We will briefly discuss this in the Appendix as a direction for future work. **A5:** The terms site, value, and constraint describe the constraint matrix: involved variables, coefficients, and RHS. We'll clarify this in the paper. In ongoing work, we observe that giving full constraint details may lead to strategies with high time complexity. We're exploring feature extraction to simplify inputs, and will discuss this in the Appendix. **Other Comments Or Suggestions:** **A1&A3:** We have updated the plots to report the best-so-far solution at each time step. The corrected results are shown in Appendix Figures 1–4. The primal integral results (based on the best-so-far solutions) have been added in Appendix Table 2. **A2:** We have added the minimization or maximization indicator for each problem in Appendix Table 1. If helpful, we can further include a primal gap comparison in the Appendix to make the results clearer. **A4&5&8&9:** We agree this will make the paper clearer, and will revise the corresponding sections. 
**A6:** We confirm that, following the same setup as EOH and FunSearch, we use five TSPLib instances (d198, eil76, rat99, rl1889, and u1060) as the training set for evolving our policies. None of the other TSPLib instances used in evaluation are seen during training. We will update Table 17 to clearly mark this. **A7:** We have explained the difference between set 1 and set 2 in Appendix Table 5 (Sec. C.5). We'll also add a sentence in the main text to clarify this and point to the appendix. **A10:** We will revise Appendix C.5 to include a clear description of the instance generation process. **A11:** We will include the final heuristics in the code and highlight key algorithmic directions in the appendix to aid interpretability and better understand LLM behavior. Due to the rebuttal length limit, supplementary figures and tables are provided at https://anonymous.4open.science/r/Supplementary-Figures-and-Tables-8DBB/. If any part of our response is unclear or insufficient, we would be very happy to further clarify and continue the discussion. --- Rebuttal Comment 1.1: Comment: Thank you for the excellent rebuttal. I am happy that the authors are able to address all my concerns. I believe these changes will strengthen the quality of the paper, especially the addition of the CP-SAT and MIPLIB baselines. I maintain my recommendation of acceptance.
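The per-subproblem time budget from A3 and the adaptive neighborhood size discussed in the reviews can be illustrated with a generic destroy-and-repair skeleton. This is a toy stand-in (all names invented, `reoptimize` abstracts the MILP sub-solve), not the paper's method.

```python
import random
import time

def lns_minimize(x0, objective, reoptimize, n_iters=50,
                 k0=2, k_max=8, iter_time_limit=1.0, seed=0):
    """Generic destroy-and-repair LNS skeleton (toy illustration).
    `reoptimize(x, free_idx)` returns x with the freed coordinates
    re-optimized; `iter_time_limit` bounds one sub-solve's wall time,
    mirroring the per-iteration limit mentioned in the rebuttal."""
    rng = random.Random(seed)
    best, best_val, k = list(x0), objective(x0), k0
    for _ in range(n_iters):
        start = time.monotonic()
        free_idx = rng.sample(range(len(best)), k)  # destroy: free k variables
        cand = reoptimize(list(best), free_idx)     # repair: re-solve the subproblem
        if time.monotonic() - start > iter_time_limit:
            continue  # discard sub-solves that blow the budget
        val = objective(cand)
        if val < best_val:
            best, best_val, k = cand, val, max(k0, k - 1)  # intensify
        else:
            k = min(k_max, k + 1)                          # diversify
    return best, best_val

# Toy instance: minimize squared distance to a hidden target vector.
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
obj = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))
fix = lambda x, idx: [target[i] if i in set(idx) else x[i] for i in range(len(x))]
best, val = lns_minimize([0] * 10, obj, fix)
print(val)
```

Here the repair step is exact on the freed variables, so the objective is non-increasing; in the real setting `reoptimize` would call a MILP solver on the sub-MILP induced by the freed variables.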
Summary: The authors propose a Large Language Model (LLM)-driven LNS framework for large-scale MILP problems. Their approach introduces a dual-layer self-evolutionary LLM agent to automate neighborhood selection, discovering effective strategies with scant small-scale training data that generalize well to large-scale MILPs. The inner layer evolves heuristic strategies to ensure convergence, while the outer layer evolves evolutionary prompt strategies to maintain diversity. Experimental results demonstrate that the proposed dual-layer agent outperforms state-of-the-art agents such as FunSearch and EOH. It also achieves superior performance compared to advanced ML-based MILP optimization frameworks like GNN & GBDT and Light-MILPopt. ## Update after rebuttal: I'd like to keep the score unchanged. Claims And Evidence: Claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: The evaluation criteria make sense. Theoretical Claims: No theoretical results. Experimental Designs Or Analyses: Experiments are sound. Supplementary Material: I have briefly checked the supp. material. Relation To Broader Scientific Literature: The method is interesting and extends LLM reasoning to MILP. Essential References Not Discussed: References are complete. Other Strengths And Weaknesses: The paper was well written with extensive experiments and supporting material. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer fpWD for the positive and encouraging feedback. We are glad that you found our method interesting and the experimental design sound. Your recognition of the contributions and the clarity of our work is truly appreciated and motivates us to further refine and extend our research.
Summary: This paper proposes LLM-LNS, a Large Language Model-driven Large Neighborhood Search framework for solving large-scale MILP problems. The method introduces a dual-layer self-evolutionary LLM agent: an inner layer evolves heuristic strategies to ensure convergence, while an outer layer optimizes evolutionary prompts to maintain diversity and avoid local optima. Experiments demonstrate that LLM-LNS outperforms state-of-the-art solvers like Gurobi, ML-based LNS methods, and heuristic evolution frameworks such as FunSearch and EOH. The approach shows strong generalization from small-scale training instances to large-scale MILP problems, achieving superior performance in combinatorial optimization tasks like bin packing and the Traveling Salesman Problem. Claims And Evidence: see the section Strengths And Weaknesses Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Reasonable experimental designs and analyses Supplementary Material: I have reviewed all the supplementary material. Relation To Broader Scientific Literature: This paper situates itself within the broader scientific literature by addressing key challenges in solving large-scale Mixed Integer Linear Programming (MILP) problems, a critical area in optimization research. Essential References Not Discussed: Essential References are well-discussed Other Strengths And Weaknesses: ### **Strengths** 1. The paper is well-organized and easy to follow, with a clear structure that enhances readability. 2. Unlike many current works that primarily follow the pipeline of EoH, this paper introduces significant contributions to the LLM-based heuristic evolution pipeline. It proposes an outer layer to evolve prompting strategies, which is novel, and introduces directional evolution based on differential memory. Ablation studies confirm the effectiveness of these innovations. 3. 
The paper includes comprehensive experiments for the ablation study, thoroughly evaluating each component of the pipeline. ### **Weaknesses** I believe this is a generally good paper. I really like the idea of prompt evolution. I have the following concerns about this paper, which if well-addressed will lead to an even better paper. 1. Although various problem classes are considered, the experiments are conducted on selected instances rather than using a standard instance distribution, as is typical in the learning-for-optimization community. 2. A key baseline, ReEvo [1], is omitted, which is essential given the state-of-the-art context of this work. 3. The reflection procedure appears similar to the directional evolution based on differential memory. Clarification of their differences is needed. [1] ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution Other Comments Or Suggestions: See the questions part. Questions For Authors: Besides the questions in the Strengths And Weaknesses section, I also have the following questions: 1. How are the prompting strategies evolved? Are fixed strategies employed in the evolution process? 2. What is the “evolution time” for this pipeline, and how does it compare with previous works? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer G8Pp for the thorough review and positive evaluation of our work. Your encouraging comments and constructive suggestions are highly appreciated and have helped us further improve the clarity and completeness of the paper. In the following, we address your concerns and questions point by point. **Strengths And Weaknesses:** **A1:** Thank you for this valuable comment. We would like to clarify that the instances used in our experiments are not arbitrarily selected, but are generated based on standard formulations of canonical MILP problems, as described in Appendix C.5. For each problem class and size range, instances are randomly sampled from the same distribution. Training and testing sets are formed via random partitioning, which ensures fair evaluation and avoids selection bias. We will make this clearer in the main text and explicitly refer readers to Appendix C.5, where we have also added pseudocode for the instance generation process. **A2:** Thank you very much for your suggestion! We agree that ReEvo [1] is an important baseline. As detailed in Appendix H.7, we conducted a comparative experiment on the Bin Packing problem under the same settings. When using lightweight models such as GPT-4o-mini, our method exhibited significantly better stability and achieved superior performance across all test instances. We plan to add a summary of this comparison in the main text and include a pointer to Appendix H.7 for clarity. **A3:** Thank you for your insightful comment. While our differential memory mechanism shares conceptual similarities with the reflection-based evolution in ReEvo, there are key differences in both design and learning strategy. ReEvo employs a pairwise comparison of parent strategies based on short-term memory. In contrast, our approach learns from differences across multiple parents, enabling the discovery of more directional and effective evolutionary strategies. 
By leveraging the objective values of these strategies, the large language model performs contrastive learning to internalize beneficial search directions. Furthermore, differential memory is integrated into a dual-layer agent architecture, allowing deeper interaction between prompt optimization and strategy generation. We will clarify these differences in Appendix H.7. **Questions For Authors:** **A1:** Thank you for the question. Our method adopts a dual-layer self-evolutionary agent, where the outer layer evolves prompt strategies and the inner layer evolves heuristic strategies. In the outer layer, prompt strategies guide the LLM to generate new heuristics through fixed crossover and mutation templates. While these operators are fixed, the prompt strategies themselves evolve dynamically: they are evaluated based on the performance of the heuristics they produce, and low-performing prompts are pruned over time. This design enables adaptive exploration while maintaining diversity and preventing premature convergence. **A2:** Thank you for the question. Our method follows a learning-based optimization paradigm, where evolution takes place during the training stage, not inference. Therefore, it does not affect solving efficiency. Although we introduce an additional layer for prompt strategy evolution, the overall evolution time remains comparable to EOH. The slight overhead mainly comes from evaluating new prompt strategies through testing, which is necessary to compute their fitness. To provide a concrete comparison, the total evolution time at the 20th generation is: - BinPacking: Ours – 144.7 minutes, EOH – 138.7 minutes - TSP: Ours – 67.3 minutes, EOH – 59.6 minutes We believe this small difference is acceptable given the improved performance, and again, it occurs only during training, not during actual problem solving. We will include these results in the appendix to provide a more detailed comparison. 
Thank you again to Reviewer G8Pp for the valuable suggestions — they will undoubtedly help us improve the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses. I believe my concerns have been addressed, and I have decided to keep my score.
SEFE: Superficial and Essential Forgetting Eliminator for Multimodal Continual Instruction Tuning
Accept (poster)
Summary: This paper focuses on the continual learning task for multimodal large models, termed MCIT. The authors innovatively categorize catastrophic forgetting issues in this field into two distinct types: superficial and essential forgetting. They propose the SEFE model, which incorporates two key modules, ASD and RegLoRA, to respectively address these two categories of forgetting problems. Experimental results demonstrate the effectiveness of the proposed model. Furthermore, the authors introduce a novel dataset, CoIN-ASD, specifically designed to evaluate the performance of MCIT tasks. The article exhibits a well-organized structure and maintains clear semantic coherence throughout its presentation. ## Update After Rebuttal I thank the authors for their rebuttal and keep my rating to accept this paper. Claims And Evidence: The claims are generally well-supported by experimental results, demonstrating the effectiveness of SEFE, ASD, and RegLoRA. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-structured and align with the challenges of incremental learning in MLLMs. The distinction between superficial and essential forgetting is insightful, and ASD and RegLoRA offer effective solutions. While the approach is promising, providing clearer explanations in certain areas would further strengthen the presentation and impact of the work. Theoretical Claims: For catastrophic forgetting, the authors address the problem in two parts, distinguishing between superficial and essential forgetting. The proposed methods, ASD and RegLoRA, effectively mitigate these issues, providing a well-reasoned and empirically supported solution. While the paper does not focus on formal theoretical proofs, the conceptual framework is sound and aligns well with the task objectives. Experimental Designs Or Analyses: The experimental design is well-structured and effectively evaluates the proposed methods for addressing catastrophic forgetting. 
The distinction between superficial and essential forgetting is clearly tested, and the results demonstrate the effectiveness of ASD and RegLoRA. Supplementary Material: Yes, I reviewed the supplementary material, including all sections in the appendix and supplementary files. Relation To Broader Scientific Literature: The authors are the first to break down the issue of catastrophic forgetting into two categories: superficial and essential forgetting problems. They propose corresponding methods to address each, providing new insights and solutions for the development of this field. This distinction offers a fresh perspective on understanding and mitigating catastrophic forgetting, contributing significantly to advancing current research. Essential References Not Discussed: The authors have provided the relevant references to support their work. Other Strengths And Weaknesses: Strengths: 1. The author divides the catastrophic forgetting problem in MCIT into superficial and essential forgetting problems, and proposes new methods, ASD and RegLoRA, to address them. 2. The author proves the effectiveness of the proposed method through experiments. 3. The author puts forward a new dataset for the MCIT task. 4. The overall structure of the article is clear and easy to understand. Weaknesses: 1. The abstract is too long and needs to be shortened. 2. The motivation and implementation of the module design are not clearly explained. Other Comments Or Suggestions: 1. Is there a difference between RegLoRA and the regularization-based methods in continual learning? Both seem to constrain the update of some parameters during training to adapt to new tasks. 2. Why does ASD convert tasks into these five ways? Is there any prior knowledge? Or are these five ways sufficient to answer all questions in the real world? 3. In CoIN-ASD, why are only some of the questions converted into other forms? Would it be better if all questions were converted according to the five forms? 
4. Does it need to train a RegLoRA for each new task? How are the RegLoRAs of different tasks combined? What are the overall input, output, training, and testing processes of the model? Can a schematic diagram or pseudo-code be provided? Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for recognizing the strengths of our paper and for your effort in reviewing our submission. Below is a detailed response to your concerns: ### **The Abstract Should Be Shortened** Thank you for the suggestion. We will shorten the abstract in the revised version to improve conciseness. ### **Difference Between RegLoRA and Existing Regularization-Based Methods** RegLoRA can be viewed as a regularization-based method. However, it differs from most existing approaches in several key aspects: - As LoRA fine-tuning becomes increasingly prevalent for adapting large-scale models, there is a growing need to mitigate catastrophic forgetting in LoRA-based settings. In this context, RegLoRA—which applies regularization to LoRA adapters—is naturally well-suited, whereas existing regularization methods that operate on the full model may be less appropriate. - Unlike some existing regularization-based approaches, RegLoRA does not require auxiliary models or a significant number of additional parameters on GPUs. This substantially reduces GPU memory usage. - RegLoRA focuses on regularizing a small number of key elements rather than entire weight matrices or feature maps, which also distinguishes it from most existing methods in terms of strategy. In summary, while RegLoRA belongs to the family of regularization-based methods, its design choices make it more efficient and practical for our task setting. ### **Rationale for Converting Tasks in Five Ways** Our Answer Style Diversification (ASD) paradigm reformulates tasks into five question types based on prior knowledge. Specifically, we examined 15 widely-used MLLM benchmarks and analyzed the applications of MLLM in real-world scenarios. Our analysis reveals that the identified categories—yes/no questions, multiple-choice questions, short answer questions, brief explanation/description questions, and detailed explanation/description questions—cover the majority of use cases. 
The list of benchmarks referenced is provided in lines L97–L101, all of which belong to these five types. In practical settings, if additional question types arise in specific domains, our approach can also be extended to accommodate them. ### **Reason for Partially Converting Questions** If all questions are converted, the model cannot learn the answer format of the original question style. Since the test set retains the original format, a model that has never seen samples in this style is unlikely to perform well during evaluation. As shown in Table 3, when the conversion ratio is high (*e.g.*, 80%), performance drops compared to lower ratios (*e.g.*, 20%), indicating that preserving a substantial portion of original-format questions is essential. ### **Possibility to Combining RegLoRAs and Pseudo-Code Explanation** We agree that combining RegLoRAs—or more generally, LoRAs—from different tasks is an intriguing and promising direction. If successful, this approach could enable the construction of a unified multi-faceted MLLM by integrating task-specific LoRAs, which holds significant practical value. However, this idea lies beyond the current scope of our work. While we are not yet able to evaluate its feasibility or propose a concrete method, we appreciate your insightful suggestion and will continue exploring this direction in future research. 
Regarding the overall training procedure of RegLoRA, we provide the pseudo-code below for clarity: --- **Require:** base MLLM $M_0$, training sets of all tasks {$D_1, D_2,...,D_T$} **for** task $j$ in $1$ to $T$ **do** &nbsp;&nbsp;&nbsp;&nbsp;Insert a new LoRA adapter $LoRA_j$ into model $M_{j-1}$ &nbsp;&nbsp;&nbsp;&nbsp;**for** each batch in $D_j$ **do** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Compute language modeling loss $L_{lm}$ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**if** j > 1 **then** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Compute regularization loss $L_{reg}$ using all previous masks {$R_1, R_2,...,R_{j-1}$} according to Eq. 2. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Set total loss $L_{total} = L_{lm} + L_{reg}$ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**else** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Set total loss $L_{total} = L_{lm}$ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update model parameters to minimize $L_{total}$ &nbsp;&nbsp;&nbsp;&nbsp;Compute the regularization mask $R_j$ for the current task &nbsp;&nbsp;&nbsp;&nbsp;Merge $LoRA_j$ into $M_{j-1}$ to form $M_j$ --- Here, the language modeling loss $L_{lm}$ refers to the next-token prediction loss used by the base MLLM. After learning each task $j$, we evaluate performance on all learned tasks using the updated model $M_j$.
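The loop above can be sketched concretely. The following is a hypothetical NumPy toy, not the actual implementation: the top-fraction mask rule, the `REG_WEIGHT` scalar, and all variable names are our own simplifying stand-ins for the mask computation and the regularization loss of Eq. 2.

```python
# Hypothetical toy sketch of the RegLoRA per-task loop (not the paper's code).
import numpy as np

REG_WEIGHT = 1.0   # assumed scalar weight on L_reg; Eq. 2 defines the exact form
TOP_FRAC = 0.05    # assumed fraction of largest-magnitude elements kept in R_j

def key_mask(delta_w, top_frac=TOP_FRAC):
    """Mark the largest-magnitude elements of a LoRA weight update (B @ A)."""
    k = max(1, int(delta_w.size * top_frac))
    thresh = np.sort(np.abs(delta_w), axis=None)[-k]
    return (np.abs(delta_w) >= thresh).astype(delta_w.dtype)

def reg_loss(delta_w, masks):
    """L_reg stand-in: penalize updates at positions protected by R_1..R_{j-1}."""
    if not masks:
        return 0.0
    protected = np.clip(np.sum(masks, axis=0), 0.0, 1.0)  # union of earlier masks
    return REG_WEIGHT * float(np.sum((protected * delta_w) ** 2))

# Toy two-task loop mirroring the pseudo-code: "train" (stubbed), record R_j, merge.
rng = np.random.default_rng(0)
masks, merged = [], np.zeros((4, 4))
for task in range(2):
    delta_w = rng.normal(size=(4, 4))   # stands in for the learned B_j @ A_j
    l_reg = reg_loss(delta_w, masks)    # 0.0 on the first task, positive afterwards
    masks.append(key_mask(delta_w))     # compute the regularization mask R_j
    merged += delta_w                   # merge LoRA_j into the model weights
```

The key design point mirrored here is that only a small set of protected elements is regularized, while the rest of the update remains free to adapt to the new task.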
Summary: This manuscript delves into the area of Multimodal Continual Instruction Tuning (MCIT). It contributes by differentiating between two forms of catastrophic forgetting: *superficial forgetting* and *essential forgetting*. Superficial forgetting is defined as the forgetting of the response style, while essential forgetting denotes the loss of knowledge. To tackle these problems, the manuscript introduces two corresponding solutions. Firstly, it introduces the Answer Style Diversification (ASD) paradigm. This paradigm unifies questions within each task into five predefined styles. By doing so, it alleviates the variability in question domains, the main cause of superficial forgetting. Secondly, the manuscript presents RegLoRA. This method restricts modifications to the elements in LoRA’s weight update matrices that are highly relevant to previously learned knowledge to prevent essential forgetting. The experimental results show that the proposed overall approach, SEFE, attains state-of-the-art performance. The effectiveness of both the ASD and RegLoRA techniques is also validated. Claims And Evidence: This manuscript's main claims are well-supported by experimental evidence, rendering them convincing. However, I take issue with a statement in the Introduction (line 30-31), which reads: "Training these models typically involves two main phases: pre-training and instruction tuning". Based on my knowledge, the training of Multimodal Large Language Models (MLLMs)/Large Language Models (LLMs) usually encompasses more than two stages. It may also incorporate a Preference Optimization/Reinforcement Learning phase. Although this statement does not substantially undermine the manuscript's contributions, I still suggest that the authors revise it to ensure accuracy. Methods And Evaluation Criteria: The proposed SEFE method holds value in the continuous improvement of MLLMs to enable their adaptation to new demands. 
Additionally, the introduced CoIN-ASD benchmark can be useful for evaluating essential forgetting within the MCIT domain. Theoretical Claims: This manuscript primarily focuses on method design and evaluates its effectiveness through experimental observation and analysis. It does not include significant theoretical discussion or mathematical derivation. Therefore, this section is not applicable. Experimental Designs Or Analyses: - **Comparison with existing methods**: The authors utilize the public CoIN benchmark and the CoIN-ASD benchmark proposed in this work. The evaluation includes two components: Truth Alignment (TA) and Knowledge Capability (KC). Each component encompasses the accuracy of all tasks, along with four aggregate metrics: Mean Fine-tune Accuracy (MFT), Mean Final Accuracy (MFN), Mean Average Accuracy (MAA), and Backward Transfer (BWT). These metrics are quite comprehensive. - **Ablation studies**: The ablation experiments cover the validation of ASD, RegLoRA, and the selection of hyperparameters. The metrics employed are the aggregate metrics of TA and KC. The authors present an analysis for each experiment, which is reasonable overall. Supplementary Material: The supplementary materials include the code for the proposed SEFE, sample data from CoIN-ASD, and the prompts for data conversion. I mainly reviewed the prompts. Relation To Broader Scientific Literature: The RegLoRA proposed in this manuscript is a regularization-based continual learning scheme. It has certain relevance to prior methods like EWC [1] and PODNet [2]. These strategies reduce the magnitude of model updates following specific rules to alleviate forgetting. [1] Kirkpatrick, James, et al. "Overcoming catastrophic forgetting in neural networks." Proceedings of the national academy of sciences (2017). [2] Douillard, Arthur, et al. "Podnet: Pooled outputs distillation for small-tasks incremental learning." ECCV, 2020. 
Essential References Not Discussed: Since the RegLoRA proposed in this manuscript is a variant of LoRA, I suggest adding an introduction to other LoRA variants in Appendix B (Related Works) for differentiation. For example, the following papers: [1] Zhang, Qingru, et al. "AdaLoRA: Adaptive budget allocation for parameter-efficient fine-tuning." arXiv preprint arXiv:2303.10512 (2023). [2] Hayou, Soufiane, Nikhil Ghosh, and Bin Yu. "Lora+: Efficient low rank adaptation of large models." arXiv preprint arXiv:2402.12354 (2024). [3] Liu, Shih-Yang, et al. "Dora: Weight-decomposed low-rank adaptation." arXiv preprint arXiv:2402.09353 (2024). Other Strengths And Weaknesses: The primary strength of this work lies in its definition of superficial forgetting and essential forgetting. The experimental results and analysis support the need to investigate these phenomena separately. Additionally, the proposed method is logically sound and demonstrates strong performance. However, several weaknesses remain, in addition to the previously mentioned issues (lack of rigor in one statement and the absence of a review of other LoRA variants): - **Limited dataset diversity**: Although this study includes both CoIN and CoIN-ASD benchmarks, these datasets are essentially derived from the same source, which may weaken the generalizability of the experimental results. That said, given that CoIN is the only publicly available MCIT benchmark, this limitation is somewhat understandable. - **Incomplete definition of the learning objective**: In line 272-274, the authors state that “L_reg is added to the original loss of the base MLLM to form the complete learning objective.” However, the manuscript does not clearly define the original loss or the overall learning objective. This omission reduces the study’s reproducibility. Other Comments Or Suggestions: The Related Works section is currently located in the appendix. 
This makes it hard to quickly grasp the distinctiveness of this manuscript without referring to the appendix. I suggest adding a concise Related Works section within the main text. The introduction to LoRA in the RegLoRA section could be moved there. A more in-depth review of related works can still be retained in the appendix. Questions For Authors: I am curious about one aspect: In the superficial forgetting scenario where the model responds to any question with a bounding box (e.g., Fig. 1(d)), consider a situation where both the question and the answer contain objects, yet these objects differ. Will the bounding box indicate the object in the question or the one in the answer? For instance, if the question is “What is to the left of the sheep?” and the correct answer is “A cow”, when superficial forgetting occurs, does the returned bounding box point to the sheep or the cow? Or perhaps it points to neither? I think this aspect is valuable for exploring the essential forgetting. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough review and for recognizing the value of our contributions. Below, we address your comments in detail: ### **Training Stages of MLLMs** Thank you for pointing this out. Indeed, the training process of MLLMs/LLMs often involves more than two stages. We will revise the sentence to: "Training these models typically involves multiple phases, with pretraining and instruction tuning being two crucial ones." ### **Introduction of LoRA Variants** Thank you for your constructive feedback. We will include an overview of LoRA variants, such as those you mentioned: AdaLoRA, which adaptively allocates the parameter budget across weight matrices based on their importance scores; LoRA+, which assigns different learning rates to matrices A and B to enhance performance in settings with large embedding dimensions; and DoRA, which decomposes pretrained weights into magnitude and direction components, applying LoRA solely to the direction component to reduce the number of trainable parameters. While these methods are not primarily designed to mitigate catastrophic forgetting and may not be directly applicable to MCIT, some of their underlying principles may offer valuable insights. We have therefore included a discussion of these approaches in a separate subsection within the Related Work section for readers’ reference. ### **Incomplete Definition of the Learning Objective** The original loss function is the standard language modeling loss, *i.e.*, next-token prediction loss, consistent with the base model (LLaVA-1.5). It is defined as: $$L_{lm} = - \sum_{i=1}^{L} \log P(x_i | X_v, X_\text{instruct,<i}, X_{a,<i}),$$ where $L$ is the length of the answer, $X_v$ denotes the visual input tokens, $X_\text{instruct,<i}$ and $X_{a,<i}$ represent the instruction and answer tokens prior to the current prediction token $x_i$, respectively. During training on the first task, the objective is solely this language modeling loss. 
For subsequent tasks, the total loss becomes a combination of the language modeling loss and the regularization loss $L_{reg}$ as defined in Eq. 2 of the paper: $$L_{total} = L_{lm} + L_{reg}.$$ We will incorporate this complete formulation and explanation into the revised version. ### **Adding a Related Works Section in the Main Text** Thank you for the helpful suggestion. As recommended, we will add a concise Related Works section to the main text, include the introduction to LoRA and a brief review of MCIT works within it, and retain the more detailed review of related literature in the appendix. ### **Bounding Box Responses under *Superficial Forgetting*** Your idea is compelling, but it is challenging to identify samples in our existing dataset that match the scenario you described—where both the question and the answer contain an object present in the image. To investigate this, we manually constructed a few examples. Our observations indicate that the model’s predicted bounding boxes tend to align with the object mentioned in the question, rather than the one in the answer. This behavior reflects an inductive bias introduced by the grounding task, in which training samples typically require the model to localize the object referenced in the question. These findings suggest that *superficial forgetting* may lead the model to over-rely on task-specific biases, hindering its ability to adapt to the actual demands of the current query. Rather than interpreting the intent of the question, the model appears to default to habitual responses based on prior training. If the model had even partially understood the question, we would expect it to at least identify the object mentioned in the answer. This further illustrates how *superficial forgetting* can obscure the model’s ability to adjust its behavior based on contextual demands. Thank you for your insightful question.
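As a toy numeric illustration of the objective defined earlier in this response ($L_{total} = L_{lm} + L_{reg}$), the following sketch assumes the per-token probabilities of the ground-truth answer tokens are already given as scalars; in the real model $L_{lm}$ is computed from the MLLM's vocabulary logits. The function names are our own.

```python
# Toy sketch of the combined objective (assumed names, not the paper's code).
import math

def lm_loss(answer_token_probs):
    # L_lm: negative log-likelihood of the ground-truth answer tokens,
    # matching the next-token prediction formula above.
    return -sum(math.log(p) for p in answer_token_probs)

def total_loss(l_lm, l_reg, first_task=False):
    # First task: L_total = L_lm; subsequent tasks: L_total = L_lm + L_reg.
    return l_lm if first_task else l_lm + l_reg
```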
Summary: This paper introduces SEFE for multimodal continual instruction tuning. Within SEFE, an ASD paradigm is proposed to eliminate superficial forgetting by converting all questions into five unified question types. The authors also create a CoIN-ASD benchmark by applying ASD to the public CoIN benchmark, which can be used to assess essential forgetting. Moreover, RegLoRA is proposed to eliminate essential forgetting by adding regularization to key elements of the LoRA weights. Claims And Evidence: In general, all major claims are proven by experiments. I didn’t find any major claims that are obviously not convincing. Methods And Evaluation Criteria: Yes. SEFE makes sense for continually adapting MLLMs to new tasks, which can be helpful in dynamic scenarios. The evaluation criteria are also solid, covering many aspects. Theoretical Claims: This paper doesn’t have theoretical claims. Experimental Designs Or Analyses: Yes. The experimental designs are basically sound and fair. They involve a reasonable setting that is kept the same for all compared and ablation methods (incrementally training CoIN tasks one-by-one). Supplementary Material: Yes. I quickly browsed the supplementary material, without a particular focus on any individual part. Due to time constraints, I didn't try to reproduce the results using the provided code. Relation To Broader Scientific Literature: The definition of superficial and essential forgetting in this paper seems to be expandable to continual learning of LLMs and other large-scale models, and the proposed solutions may also be applicable to these and related areas. Essential References Not Discussed: None. This paper reviewed sufficient literature. Other Strengths And Weaknesses: # Strengths: 1. The performance is satisfactory. 2. Sufficient experimental results, including comparisons and ablation studies, are reported. # Weaknesses: I have some concerns regarding the ASD paradigm: 1. 
When you generate Yes/No questions and MCQs, you use InternVL2 to generate distractors. However, in some cases the answer domain of a task is limited (for example, in ImageNet, the answer domain contains only 1000 category names), and this generation step is not necessary, because distractors can simply be selected from the answer domain (excluding the correct answer). This issue is not sufficiently discussed and defined in the paper. 2. InternVL2 is used to generate explanations in ASD. However, my concern is that, if InternVL2 can generate accurate explanations, doesn’t that mean InternVL2 can fully understand and solve the task? Why do we need to continually train another MLLM? Other Comments Or Suggestions: Figure 3 is not very clear. It is recommended to use a plus sign to indicate the combination of R1 and R2. Questions For Authors: Please refer to the weaknesses and comments in the previous two sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and for your positive feedback on the performance and experimental validation of our proposed method. Below, we address your concerns in detail: ### **Unnecessary Use of InternVL2 in Certain Distractor Generation Scenarios** As you correctly noted, using InternVL2 for distractor generation is unnecessary in some cases. In the CoIN dataset, this applies to the ImageNet and Grounding tasks. Accordingly, we did not employ InternVL2 for these tasks. For ImageNet, distractors are randomly sampled from the class list excluding the ground-truth one. For Grounding, we generate random bounding boxes with Intersection-over-Union (IoU) less than 0.5 with respect to the ground-truth box. While this special handling is briefly mentioned in Appendix C.2, a formal definition was not provided. Following your suggestion, we will incorporate a more rigorous definition based on the characteristics of the answer domain, enabling this issue to be addressed not only in CoIN but also in other benchmarks and real-world applications. ### **Capability of InternVL2 to Understand and Solve the Task** The ability of InternVL2 to generate explanations does not imply a full understanding of the task. This is because the model is provided with the ground-truth answer during explanation generation, eliminating the need for answer reasoning. Consequently, the model’s task is only generating an explanation conditioned on the correct answer, which is significantly easier. To validate this point, we conducted experiments evaluating InternVL2-26B on the eight tasks in CoIN. InternVL2-26B achieves an average accuracy of 53.76%, whereas our SEFE model attains 58.57% after continual learning across all tasks. Notably, SEFE is based on a 7B model, considerably smaller than InternVL2-26B. These results demonstrate that continual training of an MLLM is meaningful because it can yield superior performance with fewer inference resources. 
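The Grounding-task distractor rule described in the first response above (random boxes kept only when their IoU with the ground-truth box is below 0.5) can be sketched as follows. This is a hypothetical illustration: the `(x1, y1, x2, y2)` box format and the sampling ranges are our assumptions, not from the paper.

```python
# Hypothetical sketch of IoU-filtered distractor box generation.
import random

def iou(a, b):
    """Intersection-over-Union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def sample_distractor(gt_box, img_w, img_h, rng=random):
    """Rejection-sample a box whose IoU with the ground truth is below 0.5."""
    while True:
        x1, y1 = rng.uniform(0, img_w * 0.8), rng.uniform(0, img_h * 0.8)
        w, h = rng.uniform(5, img_w - x1), rng.uniform(5, img_h - y1)
        box = (x1, y1, x1 + w, y1 + h)
        if iou(box, gt_box) < 0.5:
            return box
```

Rejection sampling terminates quickly in practice, since a random box rarely overlaps the ground truth with IoU ≥ 0.5.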
### **Modification of Fig. 3** Thank you for the suggestion. We will revise Fig. 3 to include a plus sign, clearly indicating the combination method of $R_1$ and $R_2$. --- Rebuttal Comment 1.1: Comment: Thank you for the authors’ responses, which have addressed my previous concerns. I have no further comments. Having seen that other reviewers are also supportive, I will maintain my support as well. --- Reply to Comment 1.1.1: Comment: Thank you very much for your support and the positive feedback on our paper.
Summary: This paper introduces SEFE (Superficial and Essential Forgetting Eliminator), a novel framework for Multimodal Continual Instruction Tuning (MCIT), which aims to prevent catastrophic forgetting in multimodal models. The authors identify two distinct types of forgetting in MCIT: superficial forgetting and essential forgetting. To mitigate these issues, the paper proposes two key techniques: Answer Style Diversification (ASD) and RegLoRA for these two types of forgetting respectively. The proposed SEFE method achieves state-of-the-art performance in MCIT benchmarks, including CoIN and the CoIN-ASD introduced by the paper, demonstrating its effectiveness in reducing both superficial and essential forgetting. ## Update After Rebuttal My original assessment was supportive, so I will maintain my current score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes Supplementary Material: Yes, but I only reviewed the parts that help my understanding of the main content. Relation To Broader Scientific Literature: Catastrophic forgetting is a key challenge in continual learning. This paper offers a new perspective by categorizing it into two types: superficial forgetting and essential forgetting. Essential References Not Discussed: The paper lacks a discussion of and comparison with related works on understanding catastrophic forgetting such as [SP’2024]. [SP’2024] Zheng, J., Cai, X., Qiu, S., & Ma, Q. (2025). Spurious Forgetting in Continual Learning of Language Models. arXiv preprint arXiv:2501.13453. Other Strengths And Weaknesses: Strengths: 1. This paper proposes and provides evidence to support a new perspective for understanding catastrophic forgetting. Weaknesses: 1. Lack of comparison and discussion of a line of works for understanding catastrophic forgetting such as [SP’2024]. Other Comments Or Suggestions: No additional comments. Questions For Authors: 1. 
Can the authors clarify the difference between their work and a similar work, [SP’2024]? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your time and effort in reviewing our paper, as well as your recognition of our contributions. Below, we respond to your comments in detail: ### **Comparison with the [SP’2024] Paper** #### **Differences between the Two Papers** Thank you for pointing out this related work. This study indeed shares our goal of understanding catastrophic forgetting, and its concept of *spurious forgetting* is in some respects similar to our definition of *superficial forgetting*. However, there are several key differences between the two studies. - *Superficial forgetting* refers to cases where the model fails to generate responses in the expected format, indicating a loss of response style. In such situations, it remains unclear whether the underlying knowledge has actually been forgotten. In contrast, *spurious forgetting* describes scenarios in which the model loses task alignment without any genuine loss of knowledge. Since *spurious forgetting* can be easily recovered, it is referred to as "spurious", whereas *superficial forgetting* does not emphasize recoverability. Thus, these two concepts are distinct. - The [SP’2024] paper focuses primarily on the analysis of forgetting, examining changes in gradients and feature principal components during forgetting. Although it proposes a mitigation strategy—*Freeze*, which freezes a few bottom layers—this approach is simple, is not the core contribution of the paper, and shows limited effectiveness (*e.g.*, ~60% forgetting rate). In contrast, our work focuses on practical solutions. We propose Answer Style Diversification (ASD) and RegLoRA to address both *superficial forgetting* and *essential forgetting* via data reconstruction and weight update constraints. On the CoIN benchmark, our method reduces the average forgetting rate to 10.45%, substantially lower than the 60% reported in the [SP’2024] paper. 
While differences in datasets limit direct comparability, the trend highlights the greater effectiveness and practicality of our approach. By contrast, *Freeze* serves more as an analytical tool of *superficial forgetting* than a standalone solution. - In addition to *superficial forgetting*, our study also examines *essential forgetting*, where the underlying knowledge is genuinely lost. In contrast, the [SP’2024] paper focuses exclusively on *spurious forgetting* that involves no knowledge degradation. This highlights another fundamental difference between the two studies. - Finally, the [SP’2024] paper focuses on LLMs, whereas our work addresses forgetting in MLLMs, marking a clear difference in research scope. #### **Performance Comparison** We also evaluated *Freeze* on our benchmark, with the following results: | Method | MFT | MFN | MAA | BWT | |---------|-------|-------|-------|--------| | FFT | 65.87 | 35.45 | 36.73 | -30.42 | | Freeze | 66.04 | 39.53 | 37.60 | -26.51 | | Ours | **69.02** | **58.57** | **63.04** | **-10.45** | Here, FFT refers to full-parameter fine-tuning and serves as the baseline. While *Freeze* outperforms the baseline, it consistently underperforms compared to our method across all metrics. This is likely because *Freeze* is not well-suited for Multimodal Continual Instruction Tuning (MCIT), as it was not originally designed for this setting. Additionally, as discussed earlier, *Freeze* functions more as a supplementary analytical tool, so its performance may not be very satisfactory as a dedicated continual learning method. ### **Comparison with Other Papers** In addition to the [SP’2024] paper, we identified several other studies that investigate catastrophic forgetting. For example, [1] shows that catastrophic forgetting during LLM fine-tuning becomes more pronounced as the loss landscape sharpens, suggesting a strong positive correlation between sharpness and forgetting. 
[2] argues that in MLLMs, catastrophic forgetting arises as fine-tuning shifts the model’s focus from general visual-text alignment to dataset-specific overfitting, resulting in performance degradation even when the vision encoder is frozen. In contrast to these approaches, our work proposes decomposing catastrophic forgetting in MCIT into two components—*superficial forgetting* and *essential forgetting*—and addresses them separately, offering a new perspective on the problem. As suggested, we will include a comparison between our work and other related works (such as [SP’2024], [1], [2]) for understanding catastrophic forgetting in the revised version. [1] [EMNLP 2024] Revisiting Catastrophic Forgetting in Large Language Model Tuning [2] [CPAL 2024] Investigating the Catastrophic Forgetting in Multimodal Large Language Models --- Rebuttal Comment 1.1: Comment: Thank you to the author for addressing my concerns. I will maintain my assessment as it has been supportive. --- Reply to Comment 1.1.1: Comment: Thank you so much. Your support is truly appreciated.
RuleAdapter: Dynamic Rules for training Safety Reward Models in RLHF
Accept (poster)
Summary: The paper introduces a dynamic method for selecting safety rules in RLHF. Instead of using a fixed set, it adaptively chooses 5 out of 100 rules for each prompt–response pair based on the score difference (discrepancy) and rule relevance. This approach is both theoretically justified and empirically validated, leading to improved safety performance. Claims And Evidence: The claims are clear and supported by both theoretical and empirical evidence. Methods And Evaluation Criteria: - The motivation is clear, and the method is interesting. However, I have concerns regarding the generalization of the classification model. Did you hold out a validation set to evaluate its multi-label classification performance on unseen scenarios? What does accuracy look like? - I also have some concerns regarding the motivation. While generating 100 rules can naturally lead to overlap and duplicates, this may not be the best practice. It might be more effective to carefully tune the prompts to generate only the 5 most critical and representative rules, which could be sufficient to achieve decent performance. - Evaluating the reward model on safety benchmarks like RewardBench and SafetyBench provides a comprehensive view of the model's performance. These benchmarks are designed to assess various aspects of safety (e.g., refusals, helpfulness, and correctness), making them appropriate for this application. Theoretical Claims: The theoretical claims are reasonable. Experimental Designs Or Analyses: The main experimental results are strong, and the ablation studies are comprehensive. In Table 1, the authors compare the scores of RAMO with various baselines on RewardBench, and in Table 2, it is notable that RAMO outperforms the AllRules. Supplementary Material: I checked the Supplementary Material. 
Relation To Broader Scientific Literature: Prior work (e.g., Anthropic's Constitutional AI and OpenAI's 21 safety rules) typically applies a fixed set or randomly selected rules to label preferences. In contrast, this paper introduces a dynamic rule selection mechanism—selecting the top 5 rules based on maximum discrepancy and relevance for each prompt–response pair—which offers a more adaptive and informative approach. This idea refines earlier efforts in multi-attribute reward modeling by tailoring rule selection to the context at hand. Essential References Not Discussed: The related works are properly cited. Other Strengths And Weaknesses: The paper creatively combines dynamic rule selection with multi-attribute reward modeling, extending prior work that relied on fixed or random rule selection. This dynamic approach, backed by theoretical analysis using information-theoretic concepts, provides a novel way to optimize preference labeling in RLHF. Other Comments Or Suggestions: I don't have additional comments. My comments and questions are in the above chunks. Questions For Authors: My questions and concerns are listed in Methods And Evaluation. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We really appreciate the comments and compliments by the Reviewer. Below we provide our responses to the concerns: # 1. Validation Performance of RuleAdapter during Training: We use a validation set of size 500 during the training of RuleAdapter. During training, the best checkpoint achieves: accuracy 0.967, precision 0.9645, and recall 0.9658 on the validation set. # 2. Duplication of 100 Rules: As mentioned in the paper, we initially generated 400 raw safety rules. During generation, we considered the 19 safety categories from PKU and 133 from CollectiveCAI (see Section 4.1 for details and citations). Then we perform a **deduplication step** and reduce the pool to 100 rules. We have **manually checked** that our rule set almost covers all the safety aspects in PKU, CollectiveCAI, and other papers cited in Section 2, with no obvious duplication (some rules might talk about similar topics but they evaluate safety from different aspects). This confirms that our final set of 100 rules achieves a suitable balance of diversity and independence. # 3. Generate Only 5 Critical Rules: If the reviewer means prompting strong LLMs such as GPT to dynamically generate 5 critical rules for each sample, in the Ablation Study (Section 5.1), we included this baseline and the averaged Safety performance is 92.8, not comparable to the SOTA performance (score 95.1) of our RAMO. For a fixed set of 5 rules, we considered baselines including random 5 rules, and fixed 5 rules selected randomly at the beginning. If the reviewer suggests selecting the 5 most critical and representative rules upfront and applying them across all data, we have conducted this additional experiment. These 5 critical rules were selected by GPT and verified by the authors. 
The results are as follows: | Model |DoNotAnswer|RefusalsDangerous|RefusalsOffensive|XstestShouldRefuse|XstestShouldRespond|Safety| |-------------|-----------|-----------------|-----------------|------------------|-------------------|------| |Critical5Rules|77.2 |94.0 |99.0 |96.1 |96.0 |92.7 | We see the **performance is not comparable to RAMO, and even slightly worse than using dynamic 5 rules by GPT**. Hence this further confirms the effectiveness of our dynamic-rule scheme based on the max-discrepancy strategy. We really thank the reviewer for all the comments and sincerely hope our clarification and additional experiments would address the concerns. Those explanations and additional experiments will be added to the paper, and we sincerely hope the reviewer could consider raising the score.
Summary: This paper introduces a dynamic approach to selecting safety rules for training reward models in RLHF. Rather than applying a fixed set of rules or randomly sampling from a large rule pool, the authors propose a method that adaptively selects the most critical rules for each pair of responses. Contributions: - A framework that selects rules based on maximum discrepancy between paired responses - Theoretical justification showing this strategy optimizes mutual information between rule-based labeling and ground-truth preferences - An 8B parameter reward model that achieves top performance on the RewardBench safety benchmark Claims And Evidence: Claims about the superiority of the method are partially supported by RAMO's performance on RewardBench. However, the theoretical claim that maximizing discrepancy optimizes mutual information, while mathematically derived, lacks empirical validation with real human preferences. The method is only evaluated on synthetic data or existing preference datasets, not with new human evaluations. Methods And Evaluation Criteria: The method of dynamically selecting rules is reasonable for the problem of safety alignment. Using a maximum discrepancy approach to identify where responses differ most substantially is conceptually sound. However, the evaluation is limited to RewardBench, a single close-ended benchmark. There is no open-ended evaluation of the resulting models, which would better demonstrate real-world safety improvements. The lack of human evaluation is particularly problematic for safety claims, as automated benchmarks may not capture all nuances of human values. Theoretical Claims: The proofs seem correct in broad strokes. I did not check the correctness of the proofs in detail. 
Experimental Designs Or Analyses: The experimental design has several limitations: - The preference datasets used are synthetic, with labels generated by LLMs rather than humans - The effectiveness of the final aligned models is only evaluated on closed benchmarks, not real-world usage - The 100-rule pool, while diverse, is still a constructed set and may not comprehensively cover all safety concerns The ablation studies are reasonably thorough but don't fully explore whether the performance gains come from the dynamic rule selection or simply from using a more diverse set of rules during training. Supplementary Material: I skimmed the proofs. Relation To Broader Scientific Literature: The paper builds on several research directions: - multi-attribute reward modeling - fine-grained annotation approaches in RLHF - constitutional AI The work extends these approaches by introducing dynamic rule selection. However, the core idea of using multiple facets or principles for evaluation is not fundamentally novel, as similar approaches appear in Constitutional AI and multi-attribute reward modeling. The main contribution is the adaptive selection mechanism rather than a new conceptual framework. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths - The framework for rule selection is well-developed, with theoretical grounding. - The performance on RewardBench is impressive, especially for a model of its size. - The integration into a full RLHF pipeline shows practical application potential Weaknesses: - The idea of using multiple rules is not a particularly novel invention, as similar approaches exist in multi-attribute reward models - The evaluation is only on RewardBench, a single close-ended benchmark, with no open-ended evaluation. - There is no human evaluation of the quality of the dynamically selected rules or the resulting model outputs. 
- The approach still relies on large language models to evaluate against rules, which may propagate their biases. - The paper doesn't fully address how to ensure the rule pool itself is comprehensive and unbiased. Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the comments and provide our responses below (the new evaluation results will be added to the paper). # 1. Human Evaluation on Selected Rules: In Appendix F, we have provided **case studies with real data trios and analysis on the selected rules by RuleAdapter**. To further address the reviewer’s concern, we **conducted an evaluation** involving 8 volunteers (all are PhD or master students in STEM) to assess 100 randomly selected trios from HH-RLHF: volunteers are asked to provide binary labels indicating whether each rule selected by RuleAdapter is related and critical to the given trio. After averaging and scaling by the max score of 5, we obtain a **quality score of 89% (std 7%), meaning on average, 4.45 of the 5 rules are considered relevant and important, showing that RuleAdapter successfully selects out important rules dynamically**. # 2. Open-ended Evaluation of the Resulting Models: First we would like to emphasize that **RewardBench is a combined benchmark dataset containing multiple tasks**. For safety, it includes 5 datasets: DoNotAnswer, RefusalsDangerous, RefusalsOffensive, XTestShouldRefuse, XTestShouldRespond. These components make RewardBench a well-recognized, comprehensive, and robust benchmark for evaluating reward models. Our RAMO demonstrates SOTA performance on almost all safety tasks. Besides our evaluation of the reward model using RewardBench, we also evaluated the aligned model using SafetyBench, to ensure comprehensiveness. Second, to address the reviewer’s concern about open-ended human evaluation, we **conducted a survey involving 200 randomly selected prompts from HH-RLHF**. We collect responses from the original Llama3.2-1B-Instruct and the further aligned model guided by RAMO. For each pair, 5 human annotators compared and chose the response that is less harmful, less offensive, more ethical, and aligns better with safety guidelines. 
The **average win rate is 91% (std 7%), clearly indicating that RAMO effectively provides accurate rewards to enhance the model’s safety**. # 3. Labels Generated by LLMs instead of Human: One of our contributions is that using the fine-grained ratings based on **dynamic, critical rules simplifies and enhances the accuracy of the final rating compared to overall ratings, thus lowering accuracy requirements for the rater**. This advantage is validated by our SOTA performance in Table 1. Also, we highlight that human ratings are substantially more costly than our rater–the open-source Llama3 model. We **manually evaluated rule-based ratings on over 50 samples and found that the Llama3 ratings according to specific rules are quite reasonable, especially compared to giving an overall rating** for each sample (even humans can have huge disagreement in this case). We will provide more details about this part and include a few case studies in the revised paper. # 4. 100 Rules May Not Comprehensively Cover Safety Concerns: As mentioned in paper, we initially generated 400 raw safety rules. During generation, we considered the 19 safety categories from PKU and 75 public constitutions + 58 standard constitutions from CollectiveCAI (see Section 4.1 for details and citations). The **133 constitutions from CollectiveCAI are collected from humans and established documents**. Then we perform a deduplication and reduce the pool to 100 rules. We have manually checked that our rule set almost covers all the safety aspects in PKU, CollectiveCAI, and other papers cited in Section 2. This confirms our rule set’s comprehensiveness and effectiveness in representing critical safety aspects people care about. # 5. Is the Gain Simply from a More Diverse Rule Set? If this is the case, then possible baselines are: A. **Use all 100 rules**. B. Under the same setting of a 5-rule budget, **use 5 random rules**. C. **Human-selected 5 rules**. D. 
Similar to our dynamic scheme, but **query GPT to select 5 rules dynamically**. In the Ablation Study (Table 2) and the additional experiment in the response to Reviewer Af7f, we have **compared against all these baselines**, and **none of them demonstrate comparable performance** to our method. Moreover, querying GPT entails a higher cost compared to our RuleAdapter (a Llama3.2-3B model). Hence the ablation study confirms that the performance gain is not simply from a more diverse set of rules. # 6. Novelty of Multi-principle Evaluation: We **did not claim the novelty of using multi-principle evaluation in our contribution list**. Our main novelty is the dynamic rule scheme for the fine-grained rule-based rating, and we use comprehensive experiments, theoretical justification, and thorough ablation studies to show the effectiveness of our framework. We deeply appreciate the reviewer’s insightful comments and valuable suggestions. We sincerely hope our clarification and additional evaluations above would address the concerns, and respectfully hope the reviewer could raise the score.
Summary: The paper introduces a dynamic approach to RLHF that adaptively selects the most critical rules for evaluating response pairs, moving beyond traditional binary preference selection. The authors provide mathematical justification showing their method optimizes mutual information between rule-based labeling and ground-truth preferences. Their 8B reward model reportedly achieved the highest safety performance on RewardBench, outperforming larger models. The work addresses key challenges in fine-grained annotation approaches where human opinions vary and comprehensive response comparison is difficult. Claims And Evidence: The dynamic selection of rule is important. Exist approach (e.g., randomly select rule, applying a large number of rules, using a small fixed set of rules) is sub-optimal. Methods And Evaluation Criteria: The propose dynamic rule selection is motivated by the fact: during reward model training, it relies on the trio and the preference label to learn the preference of chosen and rejected responses. The paper develops a rule selection strategy based on the max discrepancy measure and train the Rule Adapter to achieve the dynamic selection of the most critical rules, enhancing the quality and interpretability of preference labeling. Theoretical Claims: The paper theoretically prove that our max-discrepancy method effectively maximizes the mutual information between the preference labels by the selected rules and the hidden ground-truth preference labels. Experimental Designs Or Analyses: The use GPT-4 to generate 400 raw safety rules and finally select 100 rules. To train rule adapter, they selected 5K prompts from ShareGPT (a dataset featuring real user conversations). The corresponding responses are generated using 6 models (at 7B scale). The rule adapter is trained as a multi-label classification task. 
The critical rule is identified by the max-discrepancy strategy proposed in Section 3.3. Supplementary Material: No Relation To Broader Scientific Literature: The rule adapter is important for fully automatic RLAIF without human preference labeling. Essential References Not Discussed: NA Other Strengths And Weaknesses: The accuracy of the max-discrepancy strategy itself is very important, which needs more detailed explanation and experimental support. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
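The max-discrepancy selection idea summarized in the review above can be illustrated with a minimal sketch: given per-rule safety scores for a chosen and a rejected response, keep the k rules on which the two responses disagree most. This is an editor's illustration only; the rule names, scores, and function name are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of max-discrepancy rule selection. Rule names and
# per-rule scores below are made up for illustration.
def select_top_k_rules(scores_chosen, scores_rejected, k=5):
    """Return the k rules with the largest absolute score gap between responses."""
    gaps = {r: abs(scores_chosen[r] - scores_rejected[r]) for r in scores_chosen}
    return sorted(gaps, key=gaps.get, reverse=True)[:k]

chosen = {"no_violence": 0.9, "privacy": 0.7, "no_insults": 0.5, "legal_advice": 0.6}
rejected = {"no_violence": 0.2, "privacy": 0.65, "no_insults": 0.1, "legal_advice": 0.55}
critical = select_top_k_rules(chosen, rejected, k=2)
# critical == ["no_violence", "no_insults"]: these two rules best separate the pair.
```

Under this sketch, rules with near-identical scores for both responses carry little preference signal, which is the intuition behind selecting the maximum-discrepancy rules for labeling.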
Rebuttal 1: Rebuttal: We really appreciate the comments. For the request for more details on the accuracy of the max-discrepancy strategy, since “accuracy of the max-discrepancy strategy” can be interpreted in several ways, below we provide our responses to all the interpretations: # 1. Validation Performance of RuleAdapter during Training: We use a validation set of size 500 during the training of RuleAdapter. During training, the best checkpoint achieves: accuracy 0.967, precision 0.9645, and recall 0.9658 on the validation set. # 2. Quality of Rules Selected by RuleAdapter: In Appendix F, we have provided **case studies with real data trios and analysis on the selected rules by RuleAdapter**. To further evaluate the quality of selected rules, we **conducted an evaluation survey** involving 8 volunteers (all are PhD or master students in STEM) to assess 100 randomly selected trios from HH-RLHF: volunteers are asked to provide binary labels indicating whether each rule selected by RuleAdapter is related and critical to the given trio. After averaging and scaling by the max score of 5, we obtain a **quality score of 89% (std 7%), meaning on average, 4.45 of the 5 rules are considered relevant and important, showing that RuleAdapter successfully selects out important rules dynamically**. # 3. Accuracy of the Aligned Models: First, RewardBench itself evaluates the performance of reward models by their accuracy on the binary preferences on 5 preference datasets: DoNotAnswer, RefusalsDangerous, RefusalsOffensive, XTestShouldRefuse, XTestShouldRespond. These components make RewardBench a well-recognized, comprehensive, and robust benchmark for evaluating reward models. Our RAMO demonstrates SOTA performance on almost all safety tasks. For example, on RefusalsOffensive, RAMO achieves a score 99.0, meaning that for 99% of the test data, RAMO matches human’s annotated preferences. 
Besides our evaluation of the reward model using RewardBench, we also evaluated the aligned model using SafetyBench, to ensure comprehensiveness. Second, to further provide open-ended human evaluation, we **conducted a survey involving 200 randomly selected prompts from HH-RLHF**. We collect responses from the original Llama3.2-1B-Instruct and the further aligned model guided by RAMO. For each pair, 5 human annotators compared and chose the response that is less harmful, less offensive, more ethical, and aligns better with safety guidelines. The **average win rate is 91% (std 7%), clearly indicating that RAMO effectively provides accurate rewards to enhance the model’s safety**. We thank the reviewer for all the comments and sincerely hope our clarification and additional evaluations above would address the concerns. These details and explanations will be added to the paper, and we respectfully hope the reviewer could consider raising the score.
Adaptive Localization of Knowledge Negation for Continual LLM Unlearning
Accept (poster)
Summary: This paper focuses on the scenario of LLM continual unlearning, which is a more practical and challenging setting than one-time unlearning. Most existing unlearning methods achieve forgetting by fine-tuning the pretrained model, which unavoidably impairs the general performance of the LLMs. In continual unlearning scenarios, a more severe decline in general utility is observed, primarily due to two factors: accumulative decline and cascading degradation. The former refers to a continuous decline in model utility as unlearning is repeatedly performed, while the latter refers to the exacerbated decline in model utility caused by the inter-task dependencies. To mitigate these issues in LLM continual unlearning, the authors propose an integrated framework comprising entropic optimization objective, dynamic gradient sparsity, and learning rate modulation. The entropic optimization loss adaptively adjusts the unlearning objective based on the LLM's current level of memorization of the target content. The dynamic gradient sparsity method identifies and fine-tunes only the most critical parameters for each unlearning task, thereby preserving the model's general utility. These modules of the proposed framework collaboratively mitigate the utility decline issue in LLM continual unlearning. The proposed method outperforms various baseline methods across multiple benchmarks and a newly constructed dataset, TRAVIS. Claims And Evidence: ## Pros - In this paper, the authors argue that continual unlearning poses greater challenges than one-time unlearning, primarily due to two issues: sustained utility decline and detrimental inter-task interference. To validate the latter phenomenon, the authors present theoretical proofs using a toy example and empirical evidence on the TOFU dataset. This in-depth analysis of the underlying causes of utility decline in continual unlearning strengthens the motivation of the proposed method. 
- The authors claim that the task vector method reduces the risk of excessive unlearning compared to GA. To substantiate this claim, the authors conduct a theoretical comparison between the baseline methods task vector and GA, resulting in a convincing and inspiring argument. ## Cons - The paper lacks a thorough discussion regarding the issue of accumulative decline in model utility. Although it is intuitive that the utility of the LLM decreases during each unlearning task and that such declines accumulate over time, a more detailed discussion is necessary. For example, the rate of decline may diminish as the model is exposed to more tasks, potentially making the accumulative decline issue less severe. - The task vector method is claimed to be able to reduce the risk of excessive unlearning compared to GA. However, GA is typically used alongside regularization terms such as gradient descent on the retain dataset or a KL divergence term, whereas the training of task vectors does not incorporate retain data. This may render the task vector method less effective in practice, which needs further discussion. 
Experimental Designs Or Analyses: I have checked the validity of the experiments in the main text. The experimental designs are robust, and the comparisons across various methods are comprehensive. However, an ablation study examining the impact of hyperparameters is not provided. Supplementary Material: I reviewed Appendix A to Appendix D. Relation To Broader Scientific Literature: The study falls within the scope of LLM unlearning, a field where research on continual unlearning remains scarce. And the investigation of the cascading degradation issue, along with the proposed method, represents a novel contribution to the literature. Essential References Not Discussed: The essential related works are cited in the paper as far as I know. Other Strengths And Weaknesses: ## Strengths - The authors investigate the intrinsic causes of utility decline in LLM continual unlearning, combining theoretical and empirical analysis. Their exploration of the cascading degradation issue is both insightful and thought-provoking. - The proposed framework is comprehensive and novel. The derivation process of the dynamic gradient sparsity module is particularly ingenious, offering valuable insights not only for general LLM unlearning but also for other related fields. - To validate the effectiveness of the proposed method, the authors conduct extensive experiments across three benchmarks. Additionally, they construct an evaluation dataset to enable a more comprehensive assessment of the LLM utility. The evaluation of the proposed and baseline methods is sufficient and comprehensive. ## Weaknesses - The authors argue that cascading degradation of model utility occurs when different unlearning tasks are related. However, it remains unclear whether task relevance is common in real-world scenarios. Specifically, are unlearning requests arriving at different times inherently related? If not, cascading degradation may be less prevalent in practice than suggested. 
- The authors introduce a new evaluation corpus, TRAVIS. However, the difference and advantages of TRAVIS compared to existing benchmarks are not clearly explained. A more detailed explanation of the motivation behind TRAVIS's construction should be included in the main text. - The paper lacks a detailed discussion on the computational complexity of the proposed method. Specifically, the dynamic gradient sparsity module of the proposed method involves calculating and storing gradient masks, which may incur extra computational overhead and memory consumption. Other Comments Or Suggestions: In line 280, "mu" should be displayed with the symbol. Questions For Authors: - In this paper, the authors argue that there are two critical issues in continual unlearning: accumulative decline and cascading degradation. While the proposed modules appear to focus primarily on mitigating cascading degradation, it remains unclear how the method addresses the issue of accumulative decline. - Gradient sparsity is dynamically computed during the fine-tuning process. Why not calculate the gradient sparsity only once prior to the fine-tuning process? This may save some computational overhead. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive and positive comments.

## Con 1: Lack of discussion of accumulative decline

1. As for the accumulative decline in model utility, the rate of decline should not diminish because **each unlearning process could cause the same degree of model parameter changes, and thereby the same degree of general utility degradation**.
2. Given the widespread presence of cascading degradation, **the damage to utility from each unlearning task generally increases progressively**. Therefore, the rate of decline usually does not diminish but increases. **Please refer to the experiments in Figure 2 for evidence**.

Thanks for pointing out the issue. We will extend the above discussion in our revision of the paper.

## Con 2: Task vector does not incorporate retain data

1. Although GA training can incorporate retain data, **the limited retain data struggles to represent the broad knowledge possessed by the LLM**. As a result, methods like GA+RT or GA+KL still lead to a significant decline in model performance.
2. Although vanilla task vector training cannot incorporate retain data, our proposed method utilizes retain data in updating the gradient mask. By incorporating it during mask updates rather than directly involving retain data in the optimization objective, this approach mitigates the bias introduced by retain data to some extent.
3. **The experimental results on several datasets demonstrate that the task vector method exhibits utility preservation comparable to GA+RT and GA+KL**, even though it does not incorporate retain data.

## Experiment: Ablation study of hyperparameters

Thanks for pointing out the issue. We have supplemented ablation studies for hyperparameters $s$ and $\lambda$ on the TOFU dataset. The results of the final task are displayed.
|$s$|F-Rouge|FQ|MU| |-|-|-|-| |1|0.2934|1.5e-2|0.4355| |5|0.3257|1.4e-2|0.4487| |10|0.3314|1.1e-2|0.4534| |20|0.3536|9.5e-3|0.4596| |$\lambda$|F-Rouge|FQ|MU| |-|-|-|-| |0.5|0.3448|1.0e-2|0.4526| |0.8|0.3314|1.1e-2|0.4534| |1.0|0.3305|1.1e-2|0.4528| |1.2|0.3195|1.4e-2|0.4203| Please refer to the responses of Q4 to Reviewer ex1z for more ablation studies. ## Weakness 1: Whether cascading degradation occurs commonly in real-world scenarios. 1. **In real-world applications, continuously arriving unlearning tasks are typically correlated in terms of data format and content.** For instance, as shown in the example in Figure 1, data from different users on the same website tends to be quite similar. Therefore, such inter-task dependency commonly results in cascading degradation in real-world application. 2. **Even if different tasks are unrelated to each other, the decline in general utility caused by preceding unlearning tasks can lead to partial forgetting of subsequent tasks, resulting in cascading degradation.** We conducted an experiment to verify this: after completing the WHP unlearning task, the performance of the model (F-Rouge) on the TOFU and MUSE News forget set also exhibited some decline, as shown in the table below. |State|TOFU|MUSE News| |-|-|-| |Before unlearning WHP|0.9824|0.5862| |After unlearning WHP|0.7853|0.5134| ## Weakness 2: Advantages of TRAVIS Please refer to the responses of Q1 to Reviewer ex1z. ## Weakness 3: Computational complexity and memory consumption We propose a computationally efficient algorithm and a memory-efficient algorithm for the mentioned issue. The running time and memory consumption of our method are comparable to baselines. Please refer to Appendix and the responses of Q4 to Reviewer ex1z. ## Q1: How the method addresses the issue of accumulative decline 1. 
Our method alleviates the decline in utility during conducting each unlearning task, thereby minimizing the overall utility degradation in the continual unlearning process. 2. **The components of our method also incorporate mechanisms to address accumulative decline**. For instance, the *Dynamic Gradient Sparsity module* facilitates selective fine-tuning of different parameter sets for distinct tasks, preventing the cumulative drift of model parameters that leads to accumulative decline. Additionally, the *Adaptive Parameter Modulation* module applies smaller learning rates to model parameters fine-tuned by preceding tasks, further avoiding cumulative parameter drift. 3. **The mechanisms in our method to mitigate cascading degradation also serve to alleviate accumulative decline**, as cascading degradation causes the utility decline of each unlearning task to be increasingly severe, exacerbating the issue of accumulative decline. ## Q2: Why not calculate the gradient sparsity only once Thanks for the insightful question. We apply dynamic gradient sparsity to obtain a trade-off between fine-tuning all parameters and drastic changes to a small subset. We also propose a computationally efficient algorithm for gradient sparsity. Please refer to Appendix E.4 and E.2 for more details. --- Rebuttal Comment 1.1: Comment: The authors have addressed my questions and concerns regarding the method and experiments. Although the GA baseline method can incorporate retain data, the retain data is biased and cannot fully reflect the overall utility of LLMs. The rebuttal explains the commonality of the cascading degradation problem and the severity of accumulative decline, highlighting that their combined effect can lead to a catastrophic decline in LLM utility. The rebuttal also explains the mechanism by which the proposed method addresses the accumulative decline problem and supplements ablation studies on hyperparameters. 
Overall, the method proposed in this paper effectively tackles the two major challenges encountered in continual unlearning, and the additional efficient algorithms reduce time and space complexity. I have raised my score to 4, and I request that the authors incorporate the hyperparameter-related experiments and the discussion on accumulative decline into the paper. --- Reply to Comment 1.1.1: Comment: **Thank you so much for raising the score and recognizing the value of our work!** Your insightful questions and comments are crucial for enhancing the validity of the paper. The comparison of the GA baseline method and task vector indeed requires additional discussions for clarity. The issues of cascading degradation and accumulative decline also need more explanation regarding their commonality and harms. We will supplement these discussions and additional ablation studies in our revision to the paper. We are truly grateful for your valuable feedback and encouragement!
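The dynamic gradient masking discussed in this thread, and the question of computing the sparsity once versus dynamically, can be sketched with a toy magnitude-based rule. This is a hypothetical illustration only: the function name, schedule, and top-k criterion below are stand-ins, not the paper's actual algorithm (which is in its Appendix E).

```python
import random

def masked_update(params, grads, step, total_steps, lr=0.1, s_min=0.5, s_max=0.95):
    """Update only the largest-magnitude gradient coordinates; the sparsity
    level rises over training, so fewer parameters are touched in later steps."""
    sparsity = s_min + (s_max - s_min) * step / max(total_steps - 1, 1)
    k = max(1, round(len(grads) * (1 - sparsity)))  # number of coordinates kept
    threshold = sorted((abs(g) for g in grads), reverse=True)[k - 1]
    mask = [1.0 if abs(g) >= threshold else 0.0 for g in grads]
    new_params = [p - lr * m * g for p, m, g in zip(params, mask, grads)]
    return new_params, mask

rng = random.Random(0)
params = [rng.gauss(0, 1) for _ in range(100)]
grads = [rng.gauss(0, 1) for _ in range(100)]
_, early_mask = masked_update(params, grads, step=0, total_steps=10)
_, late_mask = masked_update(params, grads, step=9, total_steps=10)
assert sum(late_mask) < sum(early_mask)  # mask tightens as training progresses
```

Computing the mask once before fine-tuning (the reviewer's suggestion) would correspond to freezing `sparsity` at its initial value; the dynamic schedule instead starts permissive and tightens, which is the trade-off between broad fine-tuning and concentrated parameter changes that the authors describe.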
Summary: LLM unlearning seeks to eliminate sensitive knowledge from LLMs. Existing methods for such scenarios often lead to significant degradation of the model's general ability, with utility losses accumulating over time. Additionally, interactions between previous and current unlearning tasks can cause partial forgetting, leading to over-unlearning. To address these issues, the authors propose ALKN, which is built on the task vector paradigm. Three modules are employed in the fine-tuning phase of the task vector to adaptively regulate the gradients of LLM parameters, enabling the model to preserve the general utility of LLMs while sufficiently unlearning target content. For rigorously evaluating unlearning methods, the authors introduce TRAVIS, an evaluation corpus composed of synthetically generated pre-training data spanning diverse topics. Experimental results demonstrate the effectiveness of ALKN across multiple benchmarks. Claims And Evidence: Overall, the claims made in the paper are clear. However, certain aspects warrant evidence. **Question 1** While TRAVIS is positioned as a superior evaluation corpus, its efficacy relative to existing benchmarks like TOFU remains unproven. The authors omit empirical comparisons to validate its precision in assessing model utility post-unlearning. Methods And Evaluation Criteria: The proposed method makes sense and is well designed. But certain concerns remain. **Question 2** The authors argue that the target contents of different unlearning tasks often exhibit homogeneity, leading to significant utility degradation during unlearning, termed cascading degradation. The proposed ALKN is designed to address this issue. However, it remains unclear how the method performs when the unlearning tasks are not correlated. Could the authors provide insights or experimental results to clarify the effectiveness of ALKN in such scenarios? 
**Question 3** The dynamic gradient sparsity module selectively updates vital parameters to preserve general performance. However, limiting adjustments to a subset of parameters risks residual retention of sensitive information in unaltered parameters, potentially leading to incomplete unlearning. Theoretical Claims: Yes, no issues found Experimental Designs Or Analyses: I have checked the soundness of the experimental designs, but there are some experiments missing that can further validate the proposed method. **Question 4** Appendix E.2 introduces efficiency-focused algorithms for gradient mask computation. Additionally, an algorithm to reduce memory usage is also introduced. However, the impact of these algorithms on performance remains unverified, as no supporting experiments have been conducted. Supplementary Material: Yes. A. Related Work, E. Implementation Details, F. Experimental Setup, and H. More Experimental Results. Relation To Broader Scientific Literature: The proposed dynamic gradient sparsity module is related to model sparsity methods across multiple domains [1, 2]. However, this paper introduces a distinctive mechanism to progressively sparsify gradients throughout the unlearning process, effectively balancing the objectives of forgetting and retaining. [1] Jia, Jinghan, et al. "WAGLE: Strategic weight attribution for effective and modular unlearning in large language models." arXiv preprint arXiv:2410.17509 (2024). [2] Von Oswald, Johannes, et al. "Learning where to learn: Gradient sparsity in meta and continual learning." Advances in Neural Information Processing Systems 34 (2021): 5250-5263. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths** This paper studies a practical scenario, with a clear motivation supported by both theoretical and empirical results. The proposed method is novel and inspiring, particularly the dynamic gradient sparsity module.
The utilization of the task vector method and the proposed modules are well suited to addressing the issues of LLM continual unlearning. The experimental designs are comprehensive. **Weakness** The proposed dataset, TRAVIS, demands more empirical validation. Some ablation experiments that could further validate the effectiveness of the method are missing. Moreover, some aspects of the proposed method, as previously discussed, require clarification and empirical evidence. Other Comments Or Suggestions: No Questions For Authors: Please refer to the questions in other parts of the review. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the positive and insightful comments. ## Q1: Empirical validation regarding TRAVIS dataset Thanks for the constructive advice. We would like to explain it as follows: 1. The TRAVIS dataset consists of a wide variety of topics, enabling a comprehensive evaluation of the model’s overall utility. 2. TRAVIS is generated by the LLM prior to unlearning, thus allowing an accurate assessment of the LLM’s original utility. 3. To evaluate the sensitivity of the TRAVIS dataset to model performance, we conducted a synthetic experiment by adding a small amount of random Gaussian noise to the model parameters. We compared the Rouge scores of the TRAVIS dataset, the TOFU dataset, and the WHP dataset. As shown in the table below, the TRAVIS dataset is the most sensitive to changes in model utility, with TRAVIS Rouge showing a decline when the noise standard deviation reaches 0.5%. |Noise Std (%)|TRAVIS Rouge|TOFU Rouge|WHP Rouge| |-|-|-|-| |0%|0.7643|0.8965|0.6940| |0.1%|0.7643|0.8965|0.6940| |0.5% |0.7576|0.8965|0.6940| |1%|0.7425|0.8965|0.6940| |2%|0.7320|0.8958|0.6940| ## Q2: When the unlearning tasks are not correlated Thanks for the valuable question. We would like to explain it as follows: 1. Since our method is adaptive, it can effectively handle cases where there is no correlation between tasks. When the unlearning of previous tasks does not affect the performance of subsequent tasks, the soft labels in the Entropic Soft Labels module reduce to one-hot labels; the Dynamic Gradient Sparsity module naturally selects different model parameters for different tasks, and consequently, the Adaptive Parameter Modulation module applies normal learning rates to these non-overlapping parameters. Overall, our method is directly applicable to uncorrelated tasks. 2. In Figure 5, we conducted an experiment on continual unlearning with unrelated tasks, where tasks from different datasets were interleaved. 
The experimental results demonstrate that the performance of our method surpasses baseline methods in this scenario.

## Q3: The dynamic gradient sparsity may lead to incomplete unlearning

Thanks for the insightful question. We would like to explain it as follows:
1. Vital parameters are dynamically selected during the training process. At the beginning of training, the threshold is set low, allowing a larger number of model parameters to be chosen for fine-tuning. As training progresses, the threshold is gradually increased, resulting in fewer model parameters being selected. This approach enables us to achieve a better trade-off between the two objectives of effective unlearning and utility preservation, thus avoiding incomplete unlearning. Please refer to Appendix E.2 and E.3 for more details.
2. To validate the effectiveness of forgetting, we attacked the unlearned model using a relearning method. Specifically, we fine-tuned the model with gradient descent using a small portion of the forget set data and assessed the extent to which the model’s performance on the full forget set (F-Rouge) recovered, thereby determining whether the unlearned knowledge could be easily 'reawakened.' This allowed us to evaluate whether the unlearning algorithm thoroughly forgets the target data. As shown in the table below, compared to other baseline methods, the knowledge unlearned by our method is less likely to be 'reawakened' by this attack approach.

|Method|Unlearned|5% Data|10% Data|20% Data|
|-|-|-|-|-|
|GA+KL|0.3108|0.3543|0.4208|0.5125|
|DPO+RT|0.3771|0.4531|0.5807|0.7923|
|WAGLE|0.3731|0.4837|0.6242|0.8627|
|Ours|0.3314|0.3583|0.4072|0.5037|

## Q4: Performance impact of the efficiency-focused algorithms

Thanks for pointing out the issue. We supplement the following ablation study on the TOFU dataset and will extend it in our revision of the paper.
Specifically, we validate whether using the efficient threshold calculating algorithm in Appendix E.2 (eff-threshold) and whether using the memory-efficient algorithm in Appendix E.4 (eff-memory) result in performance degradation. As shown in the table, using these two efficient algorithms sacrifices a slight amount of precision but does not significantly impact model performance. |Method|F-Rouge|FQ|MU| |-|-|-|-| |w/o eff-threshold|0.3354|1.2e-2|0.4566| |w/o eff-memory|0.3302|1.1e-2|0.4527| |Ours|0.3314|1.1e-2|0.4534| ## Weakness: Empirical validation and ablation studies Thanks for your valuable suggestions. Please refer to the responses above and ablation studies in the paper. Please also refer to ablation studies in the responses to Reviewer HgFq. We will supplement and extend the above experiments in our revision of the paper. If the reviewer has any further concerns, we are more than happy to address them. --- Rebuttal Comment 1.1: Comment: The rebuttal has addressed most of my concerns on empirical validation. --- Reply to Comment 1.1.1: Comment: We are glad to know that the rebuttal has addressed most of the concerns of the reviewer. If the reviewer has any additional questions, we are more than happy to address them. We express our gratitude to the reviewer for their insightful and positive comments, which have been greatly enlightening and are crucial for enhancing the quality of our paper. In revising the paper, we will incorporate these discussions and empirical results.
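The noise-sensitivity probe used in this thread to compare evaluation corpora (perturb the model parameters with small Gaussian noise, then see which corpus's score moves first) can be sketched as follows. The `utility` function here is a hypothetical stand-in; the actual experiment evaluates Rouge on TRAVIS, TOFU, and WHP.

```python
import random

def perturb(params, std, rng):
    """Return a copy of the parameters with zero-mean Gaussian noise added."""
    return [p + rng.gauss(0.0, std) for p in params]

def utility(params, clean):
    """Hypothetical utility proxy: negative mean squared drift from the clean
    parameters (a real run would report Rouge on an evaluation corpus)."""
    return -sum((p - c) ** 2 for p, c in zip(params, clean)) / len(params)

rng = random.Random(0)
clean = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
stds = [0.0, 0.001, 0.005, 0.01, 0.02]
scores = [utility(perturb(clean, s, rng), clean) for s in stds]
assert scores[0] == 0.0                                  # no noise, no drift
assert all(a >= b for a, b in zip(scores, scores[1:]))   # utility degrades with noise
```

A corpus whose score already moves at the smallest noise level is the most sensitive probe of model utility, which is the property the authors report for TRAVIS at a 0.5% noise standard deviation.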
Summary: The paper proposes a combination of techniques for continual unlearning while maintaining model utility. Those are (1) fine-tuning with soft labels, (2) dynamically sparsifying the gradients and (3) adaptively setting learning rates depending on how significantly they have been adjusted for past tasks. In the experiments, the method trades off forgetting the relevant data against maintaining model utility better than baselines from the literature on TOFU, MUSE News and Who's Harry Potter. Claims And Evidence: Primarily, the paper develops a method and compares it favorably to baselines from the literature on relevant benchmarks. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: The setup of the experiments makes sense and the paper considers appropriate baselines, even augmenting some existing methods with auxiliary objectives to prevent forgetting in the continual setting. Supplementary Material: Briefly checked Appendix D and E.1+2 Relation To Broader Scientific Literature: Generally, the paper cites the most important references. The discussion in section 2.3 looks from my perspective like an instance of catastrophic forgetting as in continual supervised learning; perhaps this could be touched on (whether or not the authors agree with the analogy). Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: * the method seems to work well * it is described clearly * the experiments are extensive * appropriate ablations are included Weaknesses: * the method is overall rather ad-hoc and it is not clear to me that it will lead to much follow-up work * the theoretical discussion in 2.3 seems rather disconnected from how the method is constructed Other Comments Or Suggestions: Table 1 is incredibly hard to read (one needs to keep track of trade-offs between metrics, track them over tasks and compare between methods) and should be moved to the appendix and replaced with figures in the main text.
Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the positive and very valuable comments. Below are our responses to the comments. ## Q1: The connection to catastrophic forgetting in continual learning We agree with the reviewer that the connection can be further clarified. The cascading degradation issue we studied in Section 2.3 is similar to, yet distinct from, the catastrophic forgetting problem in continual learning. The intrinsic causes and outcomes of the two are different. - Catastrophic forgetting happens in continual learning primarily due to model parameters overwriting. And it results in utility declines of previous tasks. - Cascading degradation happens in continual unlearning primarily due to inter-task relationships. And it results in drastic declines in normal utility. We will expand this discussion in the related work section in our revision. ## Q2: The method is ad-hoc Thanks for the insightful comment. We would like to explain it from two aspects. **The proposed method is an integrated system, not merely an aggregation of separate components:** 1. **The core idea of our method is the dynamic modulation of model parameter changes during each unlearning process**. Based on this idea, we propose three components regarding the training objective, model parameters, and optimization, with the core idea running throughout. 2. **The different components of our method are interconnected as shown in Figure 3**. For example, the *Adaptive Parameter Modulation* module leverages the learnable mask $M$, computed by the *Dynamic Gradient Sparsity* module, to represent the relationship between model parameters and the features of each task. This enables adaptive learning rates to prevent the re-unlearning of already forgotten features. **We believe that our work introduces multiple new contributions that inspire further exploration.** Here, we would like to list some factors that merit follow-up studies: 1. 
**We are the first to study the cascading degradation phenomenon in continual unlearning**, which can easily lead to complete model failure when handling continuous unlearning requests. Although we propose a highly promising method to address this issue, performance of the algorithm can still be further improved. 2. Our proposed method designs the algorithm through three perspectives, **paving several potential paths for effective LLM unlearning**. 3. The task vector method has received limited attention in LLM unlearning. **We validate the effectiveness of the task vector method in LLM unlearning from both theoretical and experimental perspectives, establishing it as a highly promising direction**. Our research also aims to inspire interest and motivate further in-depth exploration of this important direction. 4. **The method we propose may also provide insights for other domains**. The approach of adaptively identifying crucial model parameters based on data and dynamically adjusting the parameter mask could provide insights for the fields of model editing and model sparsity. We sincerely appreciate your comment and we will add the related discussion in the introduction as well as the conclusion sections in our revision. ## Q3: Theoretical discussion in 2.3 seems disconnected from the method Thanks for the very constructive comments. We would like to explain it as follows: 1. Proposition 2.1 in Section 2.3 is presented to formally verify the existence of the cascading degradation issue. Specifically, we study how the first unlearning task influences the second task. 2. Our method is built on the task vector method. **Theorem B.1 and Theorem B.2** mentioned in Section 2.2 theoretically compare the GA and task vector methods, demonstrating that task vector is less prone to over-unlearning and may cause less utility degradation in LLMs. 3. Our method includes formulas obtained through theoretical derivation. 
For instance, the update rule for underlying vector $m^t$ is derived based on a utility-retaining objective in Equation 8. The derivation details are in Appendix D. 4. **We also introduce the following corollary that validates the effectiveness of our proposed method**, which will be added in our revision: **Corollary 2.1**. Consider the optimization scenario where the model successively unlearns on $D^s$ and $D^f$ with entropic soft labels, yielding intermediate and final parameters: $\theta_s^{E}$ and $\theta_{CUL}^{E}$. The parameter changes in such a scenario during unlearning on $D^s$ are less than vanilla continual unlearning: $$||\Delta\theta_{CUL}^E||_{X^TX}=C^E||\Delta\theta_{CUL}||_{X^TX},$$ where $0<C^E<1$ is a constant depending on the datasets. The corollary is based on Proposition 2.1 and it demonstrates that using the proposed entropic soft labels method yields fewer parameter changes in continual unlearning and may result in milder utility declines of LLMs. ## Q4: Hard to read Table 1 Thanks for the valuable advice. We will replace it with figures in our revision of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the additional discussion on the concerns I had raised. I trust these will be incorporated into the final version of the paper and will increase my score to facilitate a consensus. --- Reply to Comment 1.1.1: Comment: **Thank you so much for your positive response and raising the score.** Your questions and comments are insightful and greatly helpful for improving the quality of the paper. We will incorporate discussions regarding the connection between our study and catastrophic forgetting in continual learning, the integrity of the proposed method, and the potential follow-up research our method may inspire into the revised paper. We will also supplement the corollary along with its details and proof to validate the effectiveness of our proposed method in our revision. 
Thank you again for your valuable comments and recognition of our paper!
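Corollary 2.1 in this thread claims that softened labels yield smaller parameter changes during unlearning. That claim has a simple one-dimensional intuition, sketched below with a plain logistic loss. This is a toy check under assumed names and an assumed loss; it is not the paper's exact objective or its entropic-label construction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_logit(z, target):
    """Gradient of the logistic loss with respect to the logit z."""
    return sigmoid(z) - target

z = -1.5                 # current logit: the model leans toward the other class
hard, soft = 1.0, 0.8    # one-hot target vs. a softened target
g_hard = grad_logit(z, hard)
g_soft = grad_logit(z, soft)
assert abs(g_soft) < abs(g_hard)  # softened target -> smaller per-step update
```

Since each gradient step is smaller in magnitude, the accumulated parameter change over an unlearning run shrinks as well, which is the direction of the $C^E < 1$ factor in the corollary.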
Summary: This paper provides a new method for continual unlearning, where unlearning happens in multiple stages across multiple tasks that may be related to each other, and applying existing unlearning methods in such a scenario can result in severe degradation of model utility. The proposed method includes three main components and improves over existing baselines. Claims And Evidence: Yes, the paper is well-written and the experiments are well-designed. Methods And Evaluation Criteria: The method and the task setup to evaluate continual unlearning are well designed. Theoretical Claims: Yes, they look good. Experimental Designs Or Analyses: Experiments and analysis look compelling. Supplementary Material: Yes, the appendix looks comprehensive. Relation To Broader Scientific Literature: The problem of continual unlearning is indeed very important, similar to the problem of sequential model editing that has been studied in the literature. Essential References Not Discussed: Literature is covered pretty well. Other Strengths And Weaknesses: This paper introduces continual unlearning, a novel problem within the field, and demonstrates the limitations of existing methods for this task. It proposes a thoughtfully designed approach that effectively addresses these limitations, achieving improved results over current techniques. Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s positive and encouraging feedback, as well as their recognition of the value of our work. We are delighted that the reviewer finds our approach and theory compelling. In our future work, we plan to build upon this foundation by further refining our methods and exploring additional applications of our findings, such as sequential model editing. If the reviewer has any additional questions, we are more than happy to address them.
The Power of Random Features and the Limits of Distribution-Free Gradient Descent
Accept (poster)
Summary: This paper studies the learning capacity of stochastic gradient algorithms. The main results focus on the case of binary classification with either square loss or $01$-loss trained with mini-batch SGD with $c$-approximate clipped gradient. Moreover, it is assumed that the data is generated using a function in the considered function class $\mathcal{F}$, ie, we are in a realisable setting. In this setting, it is shown that if the algorithm is able to learn a good predictor regardless of the source data distribution (which is generated by a feature distribution and a function in $\mathcal{F}$), then most of the functions of the function class can be approximated by a linear combination of random features, in a certain sense. This result has two main interpretations: first, it means that if we want a gradient algorithm to work on every data distribution, then it is not necessary to use models that are more complex than random features. On the other hand, it shows that the function classes that can be learnt from SGD in a distribution-free way mostly contain simple functions. The authors prove their result by extending existing statistical tools, in particular by introducing a notion of average probabilistic dimension complexity. Claims And Evidence: The main claims are well explained, first in an informal way and then more formally with proofs. Methods And Evaluation Criteria: The approach seems to be well-aligned with the existing literature and makes sense for proving the claims. Theoretical Claims: I did not check the correctness of the theoretical claims, as the involved tools are a bit far from my expertise. Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: The authors extend the setting of (Abbe et al., 2021) and the links with the aforementioned paper are made clear. Moreover, an existing notion of probabilistic dimension complexity from (Kamath et al., 2020) is extended.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Here are a few potential weaknesses: - The analysis is limited to binary classification and in particular does not encompass regression problems. The main theorems are restricted to the square loss, which might be restrictive in the case of binary classification. - The setup is restricted to the realisable case, ie, when the function class contains the optimal predictor that has been used to generate the data distribution. This might not be realistic in a modern machine learning setting. Other Comments Or Suggestions: N/A Questions For Authors: - Is it possible to extend the results to cross-entropy loss and/or regression problems? - Can you explain why SGD with $c$-approximate clipped gradient is considered? Do your results hold with vanilla SGD? This part is not clear to me. - What is the notation $\sup_{h\leftarrow A}$? The way the error is defined in section 3.2 is not clear to me. - You mention the $01$-loss, but how can this be used in your results, as you consider gradient-based optimisation? - Do your results extend to the case where there is label noise, ie, the data is not generated using a deterministic predictor? Code Of Conduct: Affirmed. Overall Recommendation: 3
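The "linear combination of random features" at the heart of the result can be made concrete with a toy fit: draw random ReLU features and fit a linear combination to a target function. This is a generic illustration of random-feature approximation under assumed distributions, not the paper's construction (whose feature distribution is implicit and depends on the function class); the greedy one-pass fit is a stand-in for ordinary least squares to avoid a linear-algebra dependency.

```python
import math
import random

rng = random.Random(0)

# Draw 200 random ReLU features phi_i(x) = max(0, w_i * x + b_i).
W = [(rng.uniform(-2, 2), rng.uniform(-1, 1)) for _ in range(200)]

def phi(x, i):
    w, b = W[i]
    return max(0.0, w * x + b)

# Target function to approximate, sampled on a grid.
xs = [k / 50.0 - 1.0 for k in range(101)]
ys = [math.sin(3 * x) for x in xs]

# Greedy one-pass residual fit of a linear combination of the features
# (matching-pursuit style; each step projects the residual onto one column).
coef = [0.0] * len(W)
residual = list(ys)
for i in range(len(W)):
    col = [phi(x, i) for x in xs]
    norm = sum(c * c for c in col)
    if norm == 0.0:
        continue  # feature is identically zero on the grid
    coef[i] = sum(c * r for c, r in zip(col, residual)) / norm
    residual = [r - coef[i] * c for r, c in zip(residual, col)]

mse_before = sum(y * y for y in ys) / len(ys)
mse_after = sum(r * r for r in residual) / len(residual)
assert mse_after < mse_before  # the random-feature fit reduces the error
```

The paper's contribution is the converse direction: distribution-free learnability by clipped mini-batch SGD implies that such a feature distribution exists for most functions in the class, with dimension polylogarithmic in the relevant parameters.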
Rebuttal 1: Rebuttal: Dear reviewer AcC1, Thank you for your thoughtful consideration. We will respond directly to the potential weaknesses that you outlined, as well as your questions. > The analysis is limited to binary classification and in particular does not encompass regression problems. The mains theorems are restricted to the square loss, which might be restrictive in the case of binary classification. Yes, we consider classification, but our results can extend to other loss functions like bSGD with logistic loss, instead of sq loss. This is true because of the flexibility of the transformation from bSGD to SQ of [1]. Additionally, we do not rule out obtaining an analogous result with sq loss or logistic loss instead of 0-1 loss for the random features. To accomplish that, we could use a different boosting theorem (we use an adaboost theorem). If there is a boosting theorem that we can use to boost weak learners on 0-1 loss to a strong learner under a different loss function, then this should allow one to expand the conclusion to other loss functions. > The setup is restricted to the realisable case, ie, when the function class contains the optimal predictor that has been used to generate the data distribution. This might not be realistic in a modern machine learning setting. Our setup is separate from realizable / non-realizable learning. Note, we do not make *any* restrictions on the class $\cal{F}$ of functions (it can be the set of all functions). Instead, complexity is implicitly handled because our theorem’s conclusion (i.e., the size of the random feature representation) is expressed in terms of the number of parameters and gradient steps needed to learn the class $\cal{F}$. For example, if the number of parameters or gradient steps needed to learn $\cal{F}$ is massive due to huge complexity of $\cal{F}$, then the random feature representation would also be massive. 
A typical example to keep in mind for this case is learning the set of parity functions of the uniform hypercube with bSGD. **Questions** > Is it possible to extend the results to cross-entropy loss and / or regression problem? Yes to cross-entropy, please see the answer to your first weakness above. For regression, it is likely that new techniques would be needed, but we don't think it is inherently impossible. > Can you explain why SGD with c-approximate clipped gradient is considered? Do your results hold with vanilla SGD? This part is not clear to me. In short, our theorem is not true for non-clipped gradients. GD with non-clipped gradients is as powerful as general PAC-learning (see [1]), so it’s not possible to convert such algorithms to random features without violating SQ lower bounds. > What is the notation $\sup_{h\leftarrow A}$? The way the error is defined in section 3.2 is not clear to me. Supremum is used because we consider worst-case behavior for gradient clipping/rounding. That is, error is with respect to a valid gradient clipping chosen by an adversary at each gradient step. Please see line 195 for explanation of how the gradient clipping works. > You mention the 01-loss, but how can this be used in your results, as you consider gradient-based optimisation? The 01 loss is only used to evaluate the random feature representation, which is constructed from the bSGD algorithm. For the bSGD algorithm, indeed the loss must be differentiable, such as sq loss or logistic loss. > Do your results extend to the case where there is label noise, ie, the data is not generated using a deterministic predictor? We don’t rule out this possibility, but extending it to this setting requires verifying that nothing breaks down across the variety of transformations we have borrowed from other works. For example, the transformation from bSGD to SQ from [1], and SQ lower bounds from [2]. [1] Abbe, E., Kamath, P., Malach, E., Sandon, C., and Srebro, N.
On the power of differentiable learning versus PAC and SQ learning. Advances in Neural Information Processing Systems, 34:24340–24351, 2021.

[2] Blum, A., Furst, M., Jackson, J., Kearns, M., Mansour, Y., and Rudich, S. Weakly learning DNF and characterizing statistical query learning using Fourier analysis. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, pp. 253–262, 1994.
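To make the loss functions discussed in this exchange concrete, they can be written as follows (our notation, not from the paper; predictions $\hat{y} \in \mathbb{R}$, labels $y \in \{\pm 1\}$):

```latex
\ell_{\mathrm{sq}}(\hat{y}, y) = (\hat{y} - y)^2, \qquad
\ell_{\log}(\hat{y}, y) = \log\!\left(1 + e^{-y\hat{y}}\right), \qquad
\ell_{01}(\hat{y}, y) = \mathbb{1}\!\left[\operatorname{sign}(\hat{y}) \neq y\right].
```

The first two are differentiable and can therefore drive the bSGD updates; the 0-1 loss is only used to evaluate the resulting random feature representation, as the rebuttal states.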
Summary: This paper proposes a new link between learning a parametric model with mini-batch stochastic gradient descent (SGD) and the approximation of a function class by random features: if a family of functions can be learned, then there exists a distribution of random features (of dimension polylogarithmic in all relevant parameters) so that all functions in the family can be well approximated by a linear combination of features. This is done with a new notion of dimension complexity ("adc"), and through a reduction to statistical query learning. Claims And Evidence: The authors do their best to show how the theorem is proved, but given that I was not familiar with the papers that the current paper builds on (e.g., the paper by Abbe et al.), it is hard for me to understand the proof and its validity (in a traditional reviewing form, I would put low confidence, but this does not seem to be possible here). As in most theory papers, the authors also rightfully try to interpret their result. They claim surprise in the proposed equivalence, with the underlying understanding that linear random features are a form of weak learners. I mostly agree with them that it is surprising that learnability by mini-batch SGD implies the existence of a random feature distribution. However, the authors should insist more on the fact that the distribution of random features is unknown and depends on the function class in ways unknown to a practitioner. Most of the related work that the paper describes corresponds to explicit random features. Methods And Evaluation Criteria: N/A (theory paper) Theoretical Claims: Not in detail. Experimental Designs Or Analyses: N/A Supplementary Material: No Relation To Broader Scientific Literature: The paper makes the effort of relating to previous work, but since I don't know that line of work, I can't tell if this is done correctly or not.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: In the related work section, the description of the work of Chizat et al., 2019 (on lazy training) is inaccurate: all layers do not move, not only the bottom layer, showing that the NTK regime is achieved. Questions For Authors: (1) Is there any issue with the fact that stochastic gradient descent may have difficulties reaching a global minimum? (I suspect not) (2) Could you clarify which losses are considered and where, as at the moment it is a bit unclear? (e.g., from line 165, it could be both square and 0/1, but this is probably not the case any more for GD as 0/1 is not differentiable, so the one in line 192 is probably the square loss). (3) The paper focuses on target function classes with values in {-1,1} and the square loss, which is not standard in ML. Would this apply to logistic loss as well? ADDED AFTER REBUTTAL Thanks for your responses. If the paper ends up being accepted and you have one more page, I would strongly suggest making it more self-contained. It currently requires knowing several other papers in depth without much explanation. This would greatly increase its impact. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer 7FE4,

Thank you for your review; we hope to answer your questions below.

> (1) Is there any issue with the fact that stochastic gradient descent may have difficulties reaching a global minimum? (I suspect not)

No, since the statement considers probably approximately correct learnability in terms of the global optimum.

> (2) Could you clarify which losses are considered and where, as at the moment it is a bit unclear? (e.g., from line 165, it could be both square and 0/1, but this is probably not the case any more for GD as 0/1 is not differentiable, so the one in line 192 is probably the square loss).

Right now we are using square loss for the bSGD learning, since, as you say, it needs to be differentiable. For the random features, we are using 0-1 loss. However, in both cases, we could potentially consider other losses. To generalize the square loss in bSGD learning, we can use the flexibility of the conversion from bSGD to SQ of [1]. For flexibility on the 0-1 loss, we could use a different boosting theorem (we use an AdaBoost theorem). If there is a boosting theorem that we can use to boost weak learners on 0-1 loss to a strong learner under a different loss function, then this should allow one to expand the conclusion to other loss functions.

> (3) The paper focuses on target function classes with values in {-1,1} and the square loss, which is not standard in ML. Would this apply to logistic loss as well?

Our results start by invoking the transformation of [1], to convert a bSGD algorithm into a related SQ algorithm. In [1], the proof of this transformation considers square loss on labels {0,1}, but the choice of loss function can indeed be modified up to a change of technical details in the proof. What is important is the differentiability. So, yes, our results could begin with bSGD on logistic loss instead.

[1] Abbe, E., Kamath, P., Malach, E., Sandon, C., and Srebro, N.
On the power of differentiable learning versus PAC and SQ learning. Advances in Neural Information Processing Systems, 34:24340–24351, 2021.
Summary: This paper revisits the power of learning with gradient-based methods in a distribution-free setting. The main result is to show that if a hypothesis class $F$ can be learnt using gradient descent on some parameterized model with a distribution-free guarantee, then for any prior distribution $\mu$ over $F$, there exists a probability distribution over random features such that, with high probability over $h \sim \mu$, $h$ can be well approximated by a linear function over those random features; this is formalized in the notion of _average probabilistic dimension complexity_ ("adc") introduced in the paper. This is interpreted in the paper by saying that any distribution-free guarantee on learning with gradient descent must be very limited because it implies that _most_ functions in the class can be represented efficiently using a random feature representation. The proof technique is as follows: 1. A result of Abbe et al. (2021) is invoked to say that learning with gradient descent implies a statistical query learning algorithm, which, by standard arguments, implies upper bounds on the _statistical query (SQ) dimension_. 2. The main technical contribution of the paper then is to show an upper bound on average probabilistic dimension complexity in terms of the SQ dimension of the class. As a corollary, Part 2 resolves a weaker version of a conjecture from Kamath et al. (2021) that asks for an infinite separation between _probabilistic dimension complexity_ and (deterministic) dimension complexity; namely, here an infinite separation is shown between _average probabilistic dimension complexity_ and _dimension complexity_. The proof for Part 2 involves techniques from communication complexity, a "random feature lemma" and a boosting procedure. ### Post-rebuttal update Based on the rebuttal discussion, I am increasing my score to 3, since the authors have acknowledged that the concerns raised in this review would be discussed in a revision.
Claims And Evidence: Claims are supported by proofs in the main body or in the Appendix. Methods And Evaluation Criteria: The paper is entirely theoretical, so I believe this question is not relevant. Theoretical Claims: Proofs have been provided for all theoretical claims in the paper. One thing that I feel is problematic is: In Theorem 3.3, why is the condition $bc^2 \ge \Omega(\log(Tp/\delta))$ not required? The proof seems to simply invoke Theorem 3.4 as the first step, which does seem to have this condition. If this condition is indeed required, then I feel the narrative of the paper changes quite a bit. The story is not really about _any_ gradient descent based method, but only one with "low precision" (as defined in Abbe et al. (2021)). In that sense, the paper is really about statistical query learning methods, and the "gradient descent" part is not providing that much insight. Experimental Designs Or Analyses: The paper is entirely theoretical and there are no experiments. Supplementary Material: There is no additional supplementary material beyond the appendix. I have looked through most of the appendix, but I did not verify everything line by line. Relation To Broader Scientific Literature: The paper introduces a novel notion of average probabilistic dimension complexity, and shows that it can be upper bounded in terms of the SQ dimension. This is the main result in the paper. Essential References Not Discussed: I think all essential references have been discussed. Other Strengths And Weaknesses: ### Strengths I think the paper shows a nice result that average probabilistic dimension complexity can be upper bounded in terms of statistical query dimension. While the techniques borrow from powerful prior work, I feel there is some novelty in this technical contribution.
### Weaknesses

I already raised a concern about Theorem 3.3 (main theorem) regarding the condition $bc^2 \ge \Omega(\log(Tp/\delta))$ (see "Theoretical Claims" section). I feel this breaks the narrative about "gradient descent" in the paper. But beyond that, I feel the notion of average probabilistic dimension complexity is not _that_ well motivated. In particular, it does not really help with learning a worst-case hypothesis in the class, which is the usual setting of learning. All this notion says is that for any prior $\mu$ over the hypothesis class, there exists a random features representation such that most functions in the class are well approximated. But since this random features representation depends on $\mu$, this does not lead to a learning algorithm. So, from that point of view, I don't fully buy the narrative that having small average probabilistic dimension complexity suggests any limitation of the hypothesis class, even though I agree that parities are hard even in this sense.

Other Comments Or Suggestions: I think the notation of average probabilistic dimension complexity would be simpler with a single parameter $\epsilon$, defined as: $\mathrm{adc}_\epsilon(\mu)$ equals the smallest $d$ such that there exists a distribution $\mathcal{E}$ over $d$-dimensional representations such that $\mathbb{E}_{h \sim \mu} \sup_{\rho \in \Delta(X)} \mathbb{E}_{\phi \sim \mathcal{E}} \inf_{w} \mathcal{L}^{\mathcal{D}_{h, \rho}}_{01}(\langle w, \phi \rangle) \le \epsilon$. It is possible to convert to the two-parameter version by using Markov's inequality, since only the case of small constant $\delta$ is considered. Incidentally, there is [another paper](https://arxiv.org/abs/2411.10784) that also tries to address the conjecture in Kamath et al. (2021), but it does so in a very different way, by considering partial functions. I believe the techniques are also very different.
But it might still be interesting to discuss this paper in the context of the conjecture. I was looking at the paper of Kamath et al. (2021), and they have a notion of probabilistic distributional dimension complexity $\mathrm{dc}_{\epsilon}^{\mathcal{D}}$ and they show an "infinite gap" between this and deterministic dimension complexity. Is there any relation between $\sup_{\mathcal{D}} \mathrm{dc}_{\epsilon}^{\mathcal{D}}(F)$ and $\sup_{\mu} \mathrm{adc}_{\epsilon}(F)$ (as in the one-parameter version of $\mathrm{adc}$ defined above)? Maybe there isn't, but I was just wondering.

### Minor comments

* Line 215 (left): There seems to be some typo in the definition of $\phi_t : X\{\pm 1\} \to [-1, 1]$

Questions For Authors: I would like to hear from the authors regarding the weaknesses pointed out above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer 9ucH,

We appreciate your thoughtful review. We will respond to the specific weaknesses that you addressed in your review.

> One thing that I feel is problematic is: In Theorem 3.3, why is the condition $bc^2 \ge \Omega(\log(Tp/\delta))$ not required?

Yes, it is required. This restriction is introduced in the text preceding the theorem (top of paragraph 2 in section 3.2, line 238). We will include this inside the theorem for extra clarity.

> If this condition is indeed required, then I feel the narrative of the paper changes quite a bit. The story is not really about any gradient descent based method, but only one with "low precision" (as defined in [4]). In that sense, the paper is really about statistical query learning methods, and the "gradient descent" part is not providing that much insight.

We believe that your interpretation is technically correct in some sense. However, there is a large body of work that studies limitations of gradient descent through SQ lower bounds. For a quintessential example, see [5]. Additionally, we want to point out that expecting this type of result to apply to any gradient descent based method is not well-founded. This is because our theorem *does not hold* when there is no restriction on the precision of the gradients. To see this, we can borrow from [4], who show in Theorem 1a of their paper that (distribution-free) PAC-learning algorithms can be simulated by (distribution-free) bSGD algorithms, when allowed fine enough gradient precision $c$ (depending on the batch size $b$; specifically, $c < 1/(8b)$; note, in their paper they use $\rho$ to denote gradient precision). Using this theorem, it follows that for $(b, c)$ such that $c < 1/(8b)$, bSGD can learn parities in the distribution-free case. From here, we can conclude that our transformation from bSGD to random features cannot hold for $(b, c)$ such that $c < 1/(8b)$, since the random feature representation would violate SQ dimension lower bounds for parities.
We do think this discussion is insightful and plan to discuss this in the final version of this paper.

> I feel the notion of average probabilistic dimension complexity is not that well motivated... All this notion says is that for any prior μ over the hypothesis class, there exists a random features representation such that most functions in the class are well approximated. But since this random features representation depends on μ, this does not lead to a learning algorithm. So, from that point of view, I don't fully buy the narrative that having small average probabilistic dimension complexity suggests any limitation of the hypothesis class

It is true that the order of quantifiers makes it so that "the learner must know the prior μ," so to speak. This is why we interpret the theorem, as explained in the introduction, as support for the statement: *If a function class is learnable in a distribution-free manner by gradient descent (with some restricted precision), then most functions in the class must have a relatively simple random feature representation.* This statement is fully supported by our theorem if you fix μ to be the uniform distribution, as this means most functions in the class (i.e., a large fraction of the class, or any large subset of the class) have a small random feature representation. Moreover, **we must emphasize** that the fact that any large subset of the class has a random feature representation is a major limitation. Many works give relevant examples of classes that **cannot** be represented in such a way at all: [1], [2], [3]. [1] does this for parity functions on k bits, [2] does (single) ReLUs, and [3] does "ResNets." **Hence, the fact that we show such a scheme *exists* at all is a strong statement.** On the other hand, as you pointed out, when μ is "unknown," there is no explicit feature distribution. Thus, we do not interpret the theorem as saying that gradient descent can be replaced by random feature learning algorithms.
This is touched on in the first page of the introduction, but we are happy to expand this discussion in the final version of the paper for maximum clarity.

References

[1] Daniely, A. and Malach, E. Learning parities with neural networks. arXiv preprint arXiv:2002.07400, 2020.

[2] Yehudai, G. and Shamir, O. On the power and limitations of random features for understanding neural networks. In Advances in Neural Information Processing Systems, pages 6598–6608, 2019.

[3] Allen-Zhu, Z. and Li, Y. What can ResNet learn efficiently, going beyond kernels? In Advances in Neural Information Processing Systems, pages 9017–9028, 2019.

[4] Abbe, E., Kamath, P., Malach, E., Sandon, C., and Srebro, N. On the power of differentiable learning versus PAC and SQ learning. Advances in Neural Information Processing Systems, 34:24340–24351, 2021.

[5] Goel, S., Gollakota, A., Jin, Z., Karmalkar, S., and Klivans, A. Superpolynomial lower bounds for learning one-layer neural networks using gradient descent. arXiv:2006.12011, 2020.
Summary: This paper investigates how powerful gradient-based algorithms on differentiable models are in **distribution-free** learning settings. The main theoretical result (see Theorem 3.2) is: essentially, whenever the function class is SQ learnable or differentiably learnable in the sense of [Abbe et al. 2021], it is also learnable with some random feature kernel up to polynomial blow-ups. So the result essentially establishes that without making any assumptions on the input distribution, it is not possible to show exponential separations between gradient algorithms and random feature kernels. The only small caveat, and perhaps one inherently necessary to show this result, is that the guarantee holds with high probability over the function class for an arbitrary prior, but does not hold for every function in the class. To show such guarantees, the paper introduces another average complexity measure, average probabilistic dimension complexity (adc), which is shown to be polynomially related to statistical query dimension, to achieve the desired claim. Claims And Evidence: Yes. Methods And Evaluation Criteria: NA Theoretical Claims: I haven't checked in detail, but the claims seem to be correct and are sufficiently discussed in the main text. Experimental Designs Or Analyses: NA Supplementary Material: Skimmed for some important references. Relation To Broader Scientific Literature: When NNs trained with gradient-based algorithms are more powerful than kernel methods is an important question in learning theory. At least hundreds of works, if not more, have looked at this question from several different points of view. The results of this paper essentially say that assumptions on the input distribution are necessary to understand this separation. Otherwise, if the goal is to succeed over all distributions, then this separation collapses, and the two are the same up to polynomial factors. I think the findings are new.
Essential References Not Discussed: NA Other Strengths And Weaknesses: See the summary for strengths. The paper has a good theoretical contribution.

Weaknesses: Overall, the paper's exposition can be improved.

Line 21: "though the distribution free learning is the desired goal." I strongly disagree with this statement. I don't think this is the ultimate goal. The goal is to come up with the right assumptions that can inform us why learning works. And it is well known that distribution-free settings are too pessimistic due to worst-case hardness results. This is never taken as the ultimate goal.

The "theoretical value of this result" paragraph can be improved.

1. Lines 64-67 "a common finding...learning outcomes": The statement does not read well. It is not that assumptions enable better learning outcomes. Perhaps a better way to put this point forward, which I did not find very clearly in the entire paper either, is as follows: "Input data comes from a specific distribution. However, the distribution-independent settings are easier to discuss and have been extensively studied. Indeed, the lower bounds in this setting should not be seen as a fundamental barrier, as these bounds are too pessimistic, i.e., they hold over some worst-case distribution. The result in this paper makes this formal in the context of kernel vs. gradient-based methods on NNs." This can also be said in the "This work in a nutshell" paragraph before the last line.

2. The entire 2nd paragraph (lines 76-86) of general references on parities can be either completely removed (or delayed, if the authors mention these works anyway), as they don't add anything to the point. It hurts the flow of reading into the most important paragraph that follows. Also, in lines 100-101, in what distribution-specific setting are parities not hard to learn? Generally, the uniform distribution is also seen as a distribution-specific setting, so it is better to specify the distribution as well.
Other Comments Or Suggestions: Please see the weaknesses. Also, though I don't have line-by-line suggestions, see if the introduction can be made more effective overall. Can you please make the dependence on $\ell$ explicit throughout in the notation of adc itself, like Kamath et al.? Though it has been specified several times clearly, I suggest you create a notation where this is explicit. Also, the square loss notation could be $\ell_{\mathrm{sqr}}$ rather than capital SQ, which is relevant for statistical queries. Please try to make this distinct if possible.

IMPORTANT: doesn't the centered statement in lines 52-53, right side at the end of page 1, summarizing the result, create a wrong impression of the order of quantifiers? This is also used in the TL;DR. Perhaps it should have been: "...., then there exists a random feature distribution (depending on a prior) that can express most functions under the prior." Or, to make it even more clear: "...., then for any prior over the function class, there is a random feature model that can express most functions under that prior."

Questions For Authors: 1. Is the 0-1 loss in line 254 defined incorrectly? Should it have $\neq$ instead of equal-to? This mistake is also present in other places. 2. Can you please walk me through the argument as to why this prior $\mu$ over functions is necessary for this type of result to hold? What is the main intuition on why this could be necessary? 3. Why do you use $\ell_{\mathrm{sq}}$ for bSGD, while the result for the kernel is shown over 0-1 loss? I understand that for bSGD you need the loss to be (almost) differentiable, so can you show this for general surrogate losses? More importantly, can you show the result for the random feature approximation for, say, square loss? Can you explain this discrepancy a bit and to what extent it could be avoided? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer NhLC,

Thank you for your thoughtful review. We will directly respond to the weaknesses and questions that you outlined.

> Overall, the paper's exposition can be improved. Line 21: "though the distribution free learning is the desired goal." I strongly disagree with this statement. I don't think this is the ultimate goal. The goal is to come up with the right assumptions that can inform us why learning works. And it is well-known that distribution-free settings are too pessimistic due to the worst-case hardness results. This is never taken as the ultimate goal.

Thank you for this feedback. Line 21 says "While distribution-free learning is a desirable goal, it is often computationally challenging, as it requires the algorithm to handle worst-case scenarios." We feel that this line is meant to say something different from the way it was interpreted, so we will modify it to prevent this. We do not intend to weigh in on what the "ultimate goal" is (or should be); we just meant to say that distribution-free algorithms can be desirable (in a literal sense), and to add the context that achieving them is often computationally challenging.

> Line 64-67 "a common finding...learning outcomes": The statement does not read well...

Thank you for pointing this out; we intend to improve it. We like your suggested point as a template and will include something similar in the "this work, in a nutshell" paragraph, e.g.: Input data comes from a specific distribution. However, the distribution-independent settings are easier to discuss and have been extensively studied. Yet, hardness results in this setting should not be seen as a fundamental barrier. Taken at face value, they only apply to some worst-case distribution. In this paper, we make this formal in the context of kernel vs. gradient-based methods on NNs, by showing that ...

> The entire 2nd (line 76-86) on general references on parity can be either completely removed...
We will move this information elsewhere. Indeed, it may break the flow, but for some readers less familiar with parities and the study of parities, it may be helpful context.

> in what distribution-specific setting are parities not hard to learn? Generally, uniform distribution is also seen as the distribution-specific setting, so it is better to specify the distribution as well.

[1] considers a family of distributions where parities are not hard. As a special case, they consider a biased product distribution over the hypercube. We will specify this as an example.

> IMPORTANT: doesn't the centered statement in lines 52-53, right side at the end of page 1 summarizing the result, create a wrong impression of the order of quantifiers?

The phrasing "then **most** functions in the class must have a relatively simple random feature representation" alludes to the case where the prior μ is fixed to uniform. In this case, there is one random feature distribution constructed by our proof. So while our statement is correct, it may not be clear, and we definitely do not want to create the wrong impression. We will clarify in the final version.

Questions:

1. Yes (a LaTeX rendering error).
2. The main intuition is that in the proof of the random feature lemma (page 10), we use the ability to sample new concepts from the distribution. This seems unavoidable to apply our communication complexity technique, which operates in the "distributional" communication model. While our techniques basically operate only in this setting, it is not clear if it is necessary.
3. Yes, it can hold for more general loss functions than square loss, using the flexibility of the theorem of [2], which converts bSGD to SQ. For the random feature representation, our communication complexity technique produces random features that are essentially weak learners for the target distribution w.r.t. 0-1 loss.
However, if there is a boosting theorem that we can use to boost such weak learners to a strong learner under a different loss function, then this should allow one to expand the conclusion on the random feature representation to other loss functions.

[1] Malach, E. and Shalev-Shwartz, S. Learning Boolean circuits with neural networks. arXiv preprint arXiv:1910.11923, 2019.

[2] Abbe, E., Kamath, P., Malach, E., Sandon, C., and Srebro, N. On the power of differentiable learning versus PAC and SQ learning. Advances in Neural Information Processing Systems, 34:24340–24351, 2021.

---

Rebuttal Comment 1.1: Comment: So do the authors believe this limitation of the prior $\mu$ is necessary for showing such a distribution-free equivalence between SQ methods and random feature methods? Or is it just a limitation of the current techniques? I could not understand this clearly from the response. So far, I was under the impression that this seems to be fundamentally necessary due to lines 142-146. How should I interpret these lines?

---

Reply to Comment 1.1.1: Comment: Lines 142-146 are:

> Our relaxed notion of average probabilistic dimension complexity is sufficient for an affirmative resolution, but we also show that there may exist complexity theoretic barriers to demonstrating that our relaxed notion is necessary for the separation.

"Sufficient for an affirmative resolution" refers to Corollary C.3, which is the result showing that there are $\mathcal{H}, \mu$ such that $\mathrm{adc}(\mu)$ is independent of $n$, while $\mathrm{dc}(\mathcal{H})$ is exponential in $n$:

Corollary C.3

> There exists a hypothesis class $\mathcal{H}$, with domain $\{\pm 1\}^n$ and range $\{\pm 1\}$, which satisfies for 0/1 loss and **any** prior distribution $\mu$ over $\mathcal{H}$, and arbitrarily small constant $\delta > 0$:
> - $\mathrm{dc}(\mathcal{H}) \in 2^{\Omega(n^{1/4})}$
> - $\mathrm{adc}(\mathcal{H}) \in O(1/\epsilon)$
Perhaps it is useful to now restate lines 142-146 for clarity:

> Our relaxed notion of average probabilistic dimension complexity is sufficient to prove Corollary C.3. It would be nice to show that it is also *necessary* to prove C.3. But, in Theorem C.7, we show that under certain restrictions on the weights of the random features$^1$, proving that the relaxed notion is *necessary* for this separation implies an explicit super-polynomial depth-2 threshold circuit lower bound, which resolves a major open problem in circuit complexity theory.

Here, "*necessary* to prove C.3" means showing there exist $\mathcal{H}, \mu$ such that $\mathrm{adc}(\mu)^{\omega(1)} < \mathrm{dc}_\epsilon(\mathcal{H})$. In Theorem C.7, we show that proving this inequality resolves the circuit lower bound question.

**Conclusion/TL;DR:** All in all, we don't know if the prior $\mu$ is necessary. We do show that, when using the current techniques (broadly construed), *proving* that the prior $\mu$ is necessary is very hard, since it resolves a major open conjecture in complexity theory. For what it's worth, *proving* that the prior $\mu$ is necessary would resolve the complexity conjecture in the "expected" way, so our Theorem C.7 **does not** indicate that the prior $\mu$ is not necessary; only that *proving* that it is necessary is difficult.

We view lines 142-146, and the accompanying Theorem C.7, as fairly supplementary to the main results and body of work in this paper.

Footnotes

1. Our construction satisfies these restrictions.
Skrr: Skip and Re-use Text Encoder Layers for Memory Efficient Text-to-Image Generation
Accept (poster)
Summary: This paper proposes reducing the parameter count of Text-to-Image (T2I) generative models by pruning the transformer layers in the text encoder used for conditioning. The authors highlight that most of the parameters in T2I models reside in the text encoder. Thus, they propose Skip and re-use layers (Skrr), which prunes layers in the text encoder with beam search and re-uses non-pruned layers in place of the pruned ones. As the similarity metric for pruning, the authors use MSE, as they observe that cosine similarity leads to divergence of the feature norm of the unconditional embedding. Quantitative and qualitative experiments show improved generation quality compared to other LLM pruning methods, and ablation studies show the significance of re-use and beam search.

## update after rebuttal

I think the authors alleviate my concerns to some extent. I increased my rating to 3, as this paper identifies the phenomenon of "massive weights", which contributes to the mechanical interpretability of LLMs. But I think the proposed MacDrop and top-k zero attacking are less practical. I assume these two approaches could be regarded as manners to explain the phenomenon. This prevents me from rating higher.

Claims And Evidence:
- They claim that most of the parameters in the latest text-to-image generative models are in the text encoder (e.g., the T5-XXL encoder in SD3), so the text encoder must be the focus of parameter reduction to achieve a memory-efficient T2I generative model. I agree with this claim, as the size of the text encoder significantly affects memory usage and the size of the checkpoint in storage.

Methods And Evaluation Criteria:
- Pruning in the proposed method computes the similarity in the embedding space of the T2I generative model. This makes sense, as this space would carry the most relevant information the T2I model uses.
- The authors justify the usage of MSE as the similarity metric by showing the problem with cosine similarity.
- The authors show the similarity between features in adjacent layers, which strengthens their argument about the re-use of non-pruned layers.
- Evaluating T2I generation on MS-COCO is common practice as far as I know, and they report various evaluation metrics of T2I performance.

Theoretical Claims:
- Their claim seems to make sense, but I didn't check the theoretical proof thoroughly.
Experimental Designs Or Analyses: - Most of the experimental designs seem clear and valid. - In the discussion, the authors relate the FID improvement of the pruned model to recent guidance methods using perturbed or degraded models. Does this imply that the unconditional score of the pruned model provides good guidance, like the degraded counterpart in autoguidance? Then, what if we compute the conditional score from the pruned model and guide it with the original model? Would that worsen the FID? - As I understand it, a lower metric score is better for both Metric_1 and Metric_2, since it implies the pruned model behaves like the original model. In that case, (c) is better than (b) in Figure 4 on every metric. However, the text in Figure 4 claims that Metric_2 is better than Metric_1, which is confusing to me. Supplementary Material: - I only skimmed the supplementary material and did not review it thoroughly. Relation To Broader Scientific Literature: - Compressing large-scale text-to-image generative models is a significant problem for real-world usage. This paper argues for the importance of considering the text encoder for memory-efficient T2I generative models. Essential References Not Discussed: N/A Other Strengths And Weaknesses: One weakness of this paper is that it only reduces memory and parameters, not actual computing cost such as FLOPs. Other Comments Or Suggestions: N/A Questions For Authors: - The maximum sparsity the paper mentions is 41.9, and I wonder if the authors have tested a more highly compressed scenario. Code Of Conduct: Affirmed. Overall Recommendation: 3
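The reviewer's point about MSE versus cosine similarity can be illustrated concretely: cosine similarity is scale-invariant, so it cannot detect the feature-norm divergence mentioned above, while MSE can. A minimal sketch (function names and shapes are illustrative, not the paper's exact metric):

```python
import numpy as np

def mse_discrepancy(dense_emb, pruned_emb):
    # MSE is sensitive to both the direction and the norm of the
    # embedding, unlike cosine similarity, which is scale-invariant.
    return float(np.mean((dense_emb - pruned_emb) ** 2))

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
dense = rng.normal(size=(77, 64))   # (tokens, dim) toy text embedding
drifted = 10.0 * dense              # same direction, diverged norm
```

Here `drifted` has exactly the same direction as `dense` but ten times the norm: cosine similarity still reports a perfect match, while the MSE discrepancy is large.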
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's insightful comments and encouraging feedback regarding our motivation for pruning text encoders, our embedding-space-based similarity approach, the justification for using MSE over cosine similarity, and the clarity and validity of our experimental design and evaluations. --- > **Question 1. Clarification on pruned model's unconditional score effectiveness and impact on FID when guiding original conditional scores.** **Response:** We believe it is worth reporting and discussing (at least in the Discussion section) this interesting phenomenon of decreasing FID, since it has also been observed in many different contexts such as fine-tuning for concept erasing [1–3], distillation [4–5], quantization [6], or compression [7–8], despite an intuitive expectation of potential performance degradation. Furthermore, as you suggested, our experimental results, summarized in Table 1, indicate that guiding conditional embeddings from the Skrr-compressed model using unconditional embeddings from the dense model leads to degraded FID performance compared to employing both conditional and unconditional embeddings entirely from the Skrr-compressed model. A deeper exploration of this intriguing phenomenon and its underlying mechanisms would constitute compelling future work.

**Table 1.** FID and CLIP score with combinations of conditional / unconditional scores from dense / compressed models in PixArt-$\Sigma$.

|Conditional|Unconditional|FID↓|CLIP↑|
|:-:|:-:|:-:|:-:|
|Dense|Dense|22.89|0.314|
|Compress w/ Skrr|Compress w/ Skrr|19.93|0.312|
|Compress w/ Skrr|Dense|24.31|0.307|

[1] Lu, et al. Mace: Mass concept erasure in diffusion models. CVPR (2024). [2] Zhang, et al. Forget-me-not: Learning to forget in text-to-image diffusion models. CVPR (2024). [3] Lee, et al. Concept pinpoint eraser for text-to-image diffusion models via residual attention gate. ICLR (2025). [4] Zhao, et al. 
MobileDiffusion: Instant Text-to-Image Generation on Mobile Devices. ECCV (2024). [5] Feng, et al. Relational diffusion distillation for efficient image generation. ACM MM (2025). [6] Li, et al. SVDQuant: Absorbing Outliers by Low-Rank Component for 4-Bit Diffusion Models. ICLR (2025). [7] Yuan, et al. Ditfastattn: Attention compression for diffusion transformer models. NeurIPS (2024). [8] Chen, et al. $\Delta$-DiT: A Training-Free Acceleration Method Tailored for Diffusion Transformers. arXiv (2024). --- > **Question 2. Typo error.** **Response:** Your understanding is correct, and the point you mentioned is indeed a typo. Thank you for pointing this out. The values of $\text{Metric}_1$ for (b) and (c) in Figure 4 should be swapped. --- > **Question 3. Experiments on extreme sparsity.** **Response:** Thank you for proposing this insightful experiment. We conducted evaluations under high sparsity levels beyond 50%, and these results are presented in Table 2. While other pruning methods produce images with severely compromised fidelity under such extreme sparsity, Skrr, despite exhibiting some degree of performance degradation, continues to generate images with relatively high fidelity that remain reasonably aligned with the textual descriptions. We will include these detailed experimental results in the supplementary materials of our revised manuscript.

**Table 2.** Quantitative results on sparsity over 50% with PixArt-$\Sigma$.

|Method| Sparsity | FID↓ | CLIP↑ | DreamSim↑ |Single↑|Two↑|Count.↑|Colors↑|Pos.↑|Color attr.↑|Overall↑|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Dense|0.0|22.89|0.314|1.0|0.988|0.616|0.475|0.795|0.108|0.255|0.539|
|LaCo|48.6|280.3|0.194|0.170|0.003|0.0|0.0|0.0|0.0|0.0|0.001|
|FinerCut|51.3|154.7|0.191|0.176|0.078|0.008|0.0|0.027|0.0|0.0|0.019|
|**Ours (Skrr)**|51.4|20.04|0.307|0.699|0.888|0.333|0.366|0.686|0.038|0.055|0.394|

--- > **Question 4. 
FLOPs reduction in Skrr.** **Response:** Skrr not only reduces memory and parameters but also lowers FLOPs compared to the dense model, as shown in Table 2 of the main manuscript. While the Re-use mechanism slightly increases FLOPs relative to methods using pruning alone, the text encoder constitutes a minor fraction of the total T2I pipeline. As a result, Skrr reduces overall FLOPs by 0.04% compared to the dense model, with at most a 0.17% increase over other pruning approaches.
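For context on Question 1 above: the dense/compressed combinations in Table 1 of this rebuttal amount to classifier-free guidance with the conditional and unconditional scores drawn from different encoders. A minimal sketch, with variable names as illustrative stand-ins for denoiser outputs under each encoder's text embeddings:

```python
import numpy as np

def cfg(cond, uncond, scale=4.5):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the conditional one.
    return uncond + scale * (cond - uncond)

rng = np.random.default_rng(0)
cond_skrr = rng.normal(size=8)     # conditional score, compressed encoder
uncond_skrr = rng.normal(size=8)   # unconditional score, compressed encoder
uncond_dense = rng.normal(size=8)  # unconditional score, dense encoder

guided_compressed = cfg(cond_skrr, uncond_skrr)  # row 2 of the rebuttal's Table 1
guided_mixed = cfg(cond_skrr, uncond_dense)      # row 3 of the rebuttal's Table 1
```

The rebuttal's finding is that the mixed combination (row 3) degrades FID relative to drawing both scores from the compressed model (row 2).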
Summary: This paper proposes the Skrr method, which effectively reduces the memory consumption of the text encoder in Text-to-Image (T2I) models while maintaining the quality of image generation. Claims And Evidence: The motivation is reasonable. Methods And Evaluation Criteria: The paper proposes a two-stage pruning method (Skip and Re-use), which is clear. However, the beam search may introduce additional computational overhead, and the authors did not quantitatively analyze the efficiency and cost of the pruning process itself. The authors use multiple widely recognized evaluation metrics and multiple existing state-of-the-art diffusion models as evaluation benchmarks. Theoretical Claims: It seems correct. Experimental Designs Or Analyses: When pruning at high sparsity, the FID score (an image quality index) actually improves (i.e., the image quality improves). The authors briefly discuss this in the Discussion section (from line 433 onwards), suggesting that it may be due to the disturbance of the null-condition vector. However, without sufficient and clear quantitative experiments or theoretical support for this phenomenon, the FID improvement may only be an accidental phenomenon or evaluation bias. More detailed experimental analysis should be conducted on this phenomenon (such as evaluating the stability of FID scores under different random seeds or prompt distributions) to confirm whether the FID improvement is truly statistically significant. Supplementary Material: The supplementary material provides detailed experimental setups and additional experiments, such as more visual comparisons. Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: see above Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's insightful comments and encouraging feedback regarding the clarity of our proposed two-stage pruning method (Skip and Re-use), as well as our comprehensive evaluation using multiple widely recognized metrics and state-of-the-art diffusion models. --- > **Question 1. Computational cost of beam search-based algorithm.** **Response:** Thank you for the valuable suggestion. As you correctly pointed out, beam search introduces additional computational overhead, and we have clearly articulated its time complexity with respect to the number of transformer blocks $L$ and beam size $k$ in Table 1. However, since $k$ is set to a relatively small value (specifically, $k=3$ in Skrr) compared to $L$ (with $L=48$ for T5-XXL), the additional computational overhead remains minimal. Furthermore, it is crucial to emphasize that this overhead occurs only once during the pruning stage when blocks are selected and does not recur during the actual image synthesis phase. Thus, the computational cost from beam search has negligible practical impact on the applicability and efficiency of Skrr. We will include a detailed analysis addressing this aspect explicitly in our revised manuscript. **Table 1.** Time complexity of each block-wise pruning method for calibration. For Skrr, $k=3$. |ShortGPT|LaCo|FinerCut|Ours (Skrr)| |:-:|:-:|:-:|:-:| |$O(L)$|$O(L)$|$O(L^2)$|$O(kL^2)$| --- > **Question 2. Clarification on statistical significance and stability of FID improvements observed under text encoder pruning.** **Response:** We emphasize that our evaluation of FID scores was conducted using the MS-COCO dataset with 30,000 prompts, a sufficiently large and widely used benchmark to ensure robustness and statistical reliability. The use of such an extensive dataset substantially reduces the likelihood that the observed improvements in FID scores at high sparsity are due to randomness or evaluation bias. 
Additionally, it should be noted that the calibration subset derived from CC12M and the MS-COCO dataset employed for FID evaluations are entirely disjoint, further diminishing concerns regarding dataset-induced bias. We believe it is worth reporting and discussing (at least in the Discussion section) this interesting phenomenon of decreasing FID, since it has also been observed in many different contexts such as fine-tuning for concept erasing [1–3], distillation [4–5], quantization [6], or compression [7–8], despite an intuitive expectation of potential performance degradation. A deeper exploration of this intriguing phenomenon and its underlying mechanisms would constitute compelling future work. [1] Lu, et al. Mace: Mass concept erasure in diffusion models. CVPR (2024). [2] Zhang, et al. Forget-me-not: Learning to forget in text-to-image diffusion models. CVPR (2024). [3] Lee, et al. Concept pinpoint eraser for text-to-image diffusion models via residual attention gate. ICLR (2025). [4] Zhao, et al. MobileDiffusion: Instant Text-to-Image Generation on Mobile Devices. ECCV (2024). [5] Feng, et al. Relational diffusion distillation for efficient image generation. ACM MM (2025). [6] Li, et al. SVDQuant: Absorbing Outliers by Low-Rank Component for 4-Bit Diffusion Models. ICLR (2025). [7] Yuan, et al. Ditfastattn: Attention compression for diffusion transformer models. NeurIPS (2024). [8] Chen, et al. $\Delta$-DiT: A Training-Free Acceleration Method Tailored for Diffusion Transformers. arXiv (2024). --- Rebuttal Comment 1.1: Comment: I have read the author response to my review and it addressed my concerns. --- Reply to Comment 1.1.1: Comment: We are pleased that our responses addressed the concerns raised and sincerely appreciate the reviewer’s feedback.
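The $O(kL^2)$ calibration complexity discussed in Question 1 of the rebuttal above follows from a beam search that, at each of up to $L$ steps, extends each of $k$ retained beams by every remaining layer and re-scores the candidates. A minimal sketch under an assumed interface, where `score_fn` stands in for the calibration-set discrepancy (this illustrates the complexity argument, not the paper's exact algorithm):

```python
def beam_search_skip(num_layers, num_skip, score_fn, beam_size=3):
    # Keep the beam_size best sets of skipped layers, growing each set
    # by one layer per step; score_fn(skipped_set) is minimized.
    beams = [(score_fn(frozenset()), frozenset())]
    for _ in range(num_skip):
        candidates = {}
        for _, skipped in beams:
            for layer in range(num_layers):
                if layer not in skipped:
                    cand = skipped | {layer}
                    if cand not in candidates:
                        candidates[cand] = score_fn(cand)
        beams = sorted(((s, c) for c, s in candidates.items()),
                       key=lambda t: t[0])[:beam_size]
    return min(beams, key=lambda t: t[0])[1]

# Toy calibration score: each layer has a fixed "importance" and the
# discrepancy is the total importance of the skipped layers.
importance = [5.0, 0.1, 3.0, 0.2, 4.0, 0.3]
best = beam_search_skip(6, 3, lambda s: sum(importance[i] for i in s))
```

With beam size 3 and $L=48$ (T5-XXL), each step scores at most $3 \times 48$ candidate sets, for $O(kL^2)$ scorings overall, matching Table 1 of the rebuttal.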
Summary: This paper introduces Skrr, a pruning strategy for text encoders in text-to-image (T2I) diffusion models. Skrr reduces memory usage by selectively skipping and reusing transformer layers, leveraging the redundancy in T2I-specific text encoders. Experimental results on FID, CLIP, DreamSim, and GenEval are reported. ## update after rebuttal Thanks for the new evaluations. The Flux-based results may be presented in the main text as well. The field is changing rapidly; I'm sure the authors know about the native image generation in GPT-4o, which seems to use special autoregressive tokens as conditioning for diffusion. In future versions, some discussion of how this work fits into the 4o-style structure would make the paper more interesting. Claims And Evidence: On the Claim of Novelty: The paper states, “To the best of our knowledge, this is the first work to tackle the challenge of constructing a lightweight text encoder for T2I tasks.” While technically correct, this phrasing may be misleading. Many text encoder pruning methods can be considered as contributing to lightweight text encoders. The unique aspect of this work lies in how it prunes the text encoder specifically for T2I tasks. However, the specific challenges in optimizing text encoders for T2I models are not clearly articulated, making it difficult to assess the true novelty of the approach. Methods And Evaluation Criteria: The paper uses GenEval as a benchmark for evaluating text encoder performance. However, GenEval primarily consists of simple prompts that do not fully test the capabilities of powerful text encoders. More challenging benchmarks, such as the DSG score (Davidsonian Scene Graph: Improving Reliability in Fine-Grained Evaluation for Text-to-Image Generation, ICLR 2024), would provide a better assessment of how well the pruned text encoder captures fine-grained text-image alignment. 
Theoretical Claims: The paper presents Lemma 3.1 (Error Bound of Two Transformers) and Theorem 3.2 (Tighter Error Bound of Re-use) to support its pruning strategy. However, these theorems rely on Lipschitz continuity, which the paper does not explicitly evaluate on real models. Without verifying the Lipschitz properties in practical settings, the theoretical claims may appear somewhat loose, and their applicability to actual T2I models remains uncertain. Experimental Designs Or Analyses: The primary baseline for evaluation is PixArt (in Table 1), which is known to be a relatively weak T2I model. Furthermore, the paper tests its method on only this single model. As the proposed pruning strategy is intended to be a general solution for text encoder optimization, it should be validated on stronger T2I models, especially those that demonstrate superior text generation capabilities, such as Flux. Additionally, because PixArt’s performance on GenEval is highly variable, the reported results may lack reliability. Testing on multiple high-performing models would strengthen the claims of generalizability. Supplementary Material: n/a Relation To Broader Scientific Literature: not related Essential References Not Discussed: n/a Other Strengths And Weaknesses: no other comments Other Comments Or Suggestions: no other comments Questions For Authors: 1. Flux is tested, but why are numerical evaluations not reported? 2. Fig 3 seems like a general text encoder study. Is it related to T2I? This is misleading. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's insightful comments and encouraging feedback. --- > **Question 1. Novelty claim on the task.** **Response:** We acknowledge the reviewer's comment regarding the overstatement of novelty. We appreciate this important point and will carefully revise our manuscript to more accurately reflect the nature of our contribution. Specifically, we will clarify that our primary novelty lies in effectively addressing the particular challenges involved in pruning text encoders within text-to-image (T2I) diffusion models, rather than implying broader methodological novelty. --- > **Question 2. Evaluation on additional benchmarks.** **Response:** GenEval was selected in this work since it is a widely adopted benchmark among prior T2I works and thus provides direct comparisons with our proposed method. To address your concern about additional evaluation, we now evaluate Skrr on two more benchmarks (Tables 1 and 2 below), T2I-CompBench [1] (generating 4 images per prompt, thus 3,600 images total) and DSG-1K [2] (generating 4 images per prompt, thus 4,240 images total), using the BLIP-VQA score and DSG score, respectively. These new results are consistent with our prior evaluation, demonstrating Skrr’s robustness across diverse and rigorous benchmarks.

**Table 1.** Quantitative results on BLIP-VQA score in T2I-CompBench dataset with PixArt-$\Sigma$.

|Method|Sparsity|Complex↑|Shape↑|Texture↑|
|:-:|:-:|:-:|:-:|:-:|
|Dense|0.0|0.5137|0.4595|0.5715|
|LaCo|40.5|0.3127|0.3155|0.3370|
|FinerCut|41.7|0.4162|0.3146|0.4155|
|**Ours (Skrr)**|41.9|0.4438|0.3756|0.4721|

**Table 2.** Quantitative results on DSG score in DSG-1k dataset with PixArt-$\Sigma$. 
|Method| Sparsity | TIFA↑| Paragraph↑| Relation↑| Count↑| Real user↑| Pose↑| Defying↑| Text↑| Overall↑| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |Dense|0.0|0.879|0.886|0.836|0.740|0.600|0.626|0.808|0.683|0.761| |LaCo|40.5|0.763|0.812|0.630|0.602|0.518|0.594|0.595|0.582|0.649| |FinerCut|41.7|0.715|0.661|0.716|0.569|0.470|0.585|0.549|0.502|0.597| |**Ours (Skrr)**|41.9|0.797|0.791|0.757|0.636|0.537|0.553|0.691|0.605|0.677| [1] Huang, et al. T2I-compbench: A comprehensive benchmark for open-world compositional text-to-image generation. NeurIPS (2023). [2] Cho, et al. Davidsonian scene graph: Improving reliability in fine-grained evaluation for text-to-image generation. ICLR (2024). --- > **Question 3. About Lipschitz continuity assumption.** **Response:** The assumption of Lipschitz continuity is widely used in deriving convergence rates and error bounds. Prior work has empirically shown that reasonably trained large transformer models satisfy this condition [3], and theoretical analyses have confirmed its validity in diffusion models [4]. Thus, our use of this assumption is well-justified, supporting the relevance of our theoretical framework to real-world T2I models. [3] Khromov & Singh. Some Fundamental Aspects about Lipschitz Continuity of Neural Networks. ICLR (2024). [4] Liang, et al. Unraveling the smoothness properties of diffusion models: A gaussian mixture perspective. arXiv (2024). --- > **Question 4. Evaluation on FLUX.** **Response:** We provide extensive additional qualitative comparisons for stronger T2I models, such as SD3 and FLUX, in the appendix. Nevertheless, to directly address your concern regarding generalizability, we have also conducted quantitative evaluations on the FLUX.1-schnell model, and these results are presented in Table 3 below. 
The outcomes clearly demonstrate that our method best preserves the performance of the dense model on FLUX compared to other baselines, reinforcing our claim regarding the broad applicability and generalizability of our pruning strategy across multiple strong T2I diffusion models.

**Table 3.** Quantitative results on FLUX.1-schnell.

|Method|Sparsity|FID↓|CLIP↑|DreamSim↑|Single↑|Two↑|Count.↑|Colors↑|Pos.↑|Color attr.↑|Overall↑|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Dense|0.0|20.45|0.312|1.0|0.994|0.879|0.603|0.738|0.280|0.500|0.666|
|LaCo|40.5|28.67|0.292|0.631|0.756|0.275|0.275|0.372|0.025|0.053|0.293|
|FinerCut|41.7|38.87|0.268|0.536|0.522|0.169|0.072|0.298|0.008|0.008|0.179|
|**Ours (Skrr)**|41.9|24.28|0.300|0.698|0.925|0.439|0.300|0.617|0.058|0.053|0.399|

--- > **Question 5. Relationship between Figure 3 and T2I models.** **Response:** Figure 3 presents experimental results specifically obtained from the T5-XXL encoder, which is widely adopted in T2I models. Note that T5-XXL is trained independently from T2I models, so it is reasonable to investigate this component separately while it still has strong relevance to the T2I context. To avoid potential confusion, we will revise the text to explicitly highlight the connection to T2I.
Summary: This paper introduces Skip and Re-use Layers (Skrr), a method for compressing text encoders in text-to-image (T2I) diffusion models to improve memory efficiency. The large-scale text encoders in T2I diffusion models consume a significant amount of memory despite contributing little to FLOPs. Skrr addresses this by selectively skipping and reusing layers in the text encoder. It uses a T2I diffusion-tailored discrepancy metric and beam search in the Skip phase to identify layers for pruning, and in the Re-use phase, it re-applies layers adjacent to the skipped ones to mitigate performance degradation. Experiments show that Skrr outperforms existing blockwise pruning techniques, maintaining image quality and text alignment even at high sparsity levels. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. Skip Algorithm and Re-use Algorithm. Experimental Designs Or Analyses: Yes Supplementary Material: Yes, B.2, B.3, and B.4. Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: - This paper proposes Skrr, which significantly reduces the memory usage of text encoders in T2I diffusion models. - It outperforms existing blockwise pruning techniques such as ShortGPT, LaCo, and FinerCut. - The paper provides theoretical support for the Re-use phase. Theorem 3.2 shows that incorporating the Re-use phase can lead to a tighter error bound compared to just skipping layers. - Extensive experiments across various metrics show the effectiveness of the proposed method. Weaknesses: - The GenEval score appears to be insufficient as a standalone metric. Are there alternative evaluation benchmarks, such as T2I-CompBench or DPG-Bench, that could provide a more comprehensive assessment of the proposed method? - Even when considering the GenEval score, a significant performance drop is observed compared to the dense model (where sparsity is 0), as shown in Table 1. 
This substantial decline severely limits the practical applicability of Skrr in real-world scenarios. - Whether memory is a limiting factor in practice remains an open question. During training, it is feasible to extract features offline using the text encoder since backpropagation (BP) is not required. In such cases, optimizing memory usage may not be a critical concern. However, if the proposed algorithm sacrifices significant performance to achieve memory efficiency, its practical utility could be severely constrained, which still needs to be further discussed. - The authors state that the calibration dataset consists of 1k text prompts sourced from CC12M. However, it is unclear whether this dataset is representative of the text prompts used in text-to-image generation tasks. There is insufficient validation to ensure that the results derived from this calibration dataset generalize well to text-to-image tasks. For instance, the authors might have overfitted to the calibration dataset when selecting re-used or skipped layers, which could lead to suboptimal layer choices for text-to-image generation. This issue requires thorough exploration and justification. Other Comments Or Suggestions: See weaknesses. Questions For Authors: Based on the above concerns, I suggest i) the authors should conduct more extensive validation of their method, including comparisons on additional datasets; ii) a rigorous analysis should be performed to verify the consistency between the calibration dataset and text-to-image datasets. Most importantly, the current method appears to sacrifice too much performance, which significantly limits its practical applicability. Addressing this performance degradation is critical for real-world deployment. Given these issues, I currently vote for weak reject. Code Of Conduct: Affirmed. Overall Recommendation: 3
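The Skip and Re-use mechanism summarized in this review can be sketched as a toy forward pass; the mapping from skipped positions to re-used neighbours below is illustrative, not the paper's exact selection rule:

```python
def forward_with_reuse(blocks, x, skipped, reuse):
    # blocks: list of callables (transformer blocks); skipped: indices
    # that are pruned; reuse: maps a skipped index to the index of an
    # adjacent retained block that is applied again in its place.
    for i, block in enumerate(blocks):
        if i in reuse:
            x = blocks[reuse[i]](x)   # re-use a neighbouring kept block
        elif i in skipped:
            continue                  # pure skip
        else:
            x = block(x)
    return x

# Toy blocks: the dense forward pass is 0 -> +1 -> *2 -> +3 = 5.
blocks = [lambda x: x + 1, lambda x: x * 2, lambda x: x + 3]
skip_only = forward_with_reuse(blocks, 0, skipped={1}, reuse={})
with_reuse = forward_with_reuse(blocks, 0, skipped={1}, reuse={1: 0})
```

In this toy example, re-using the adjacent retained block in place of the skipped one brings the output closer to the dense model's than skipping alone, which is the intuition behind the tighter error bound of Theorem 3.2.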
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's insightful comments and encouraging feedback regarding our method's superior performance over existing blockwise pruning techniques, theoretical support, and extensive experiments across diverse metrics. --- > **Question 1. Evaluation on additional benchmarks.** **Response:** GenEval was selected in this work since it is a widely adopted benchmark among prior T2I works and thus provides direct comparisons with our proposed method. To address your concern about additional evaluation, we now evaluate Skrr on two more benchmarks (Tables 1 and 2 below), T2I-CompBench [1] (generating 4 images per prompt, thus 3,600 images total) and DSG-1K [2] (generating 4 images per prompt, thus 4,240 images total), using the BLIP-VQA score and DSG score, respectively. These new results are consistent with our prior evaluation, demonstrating Skrr’s robustness across diverse and rigorous benchmarks.

**Table 1.** Quantitative results on BLIP-VQA score in T2I-CompBench dataset with PixArt-$\Sigma$.

|Method|Sparsity|Complex↑|Shape↑|Texture↑|
|:-:|:-:|:-:|:-:|:-:|
|Dense|0.0|0.5137|0.4595|0.5715|
|LaCo|40.5|0.3127|0.3155|0.3370|
|FinerCut|41.7|0.4162|0.3146|0.4155|
|**Ours (Skrr)**|41.9|0.4438|0.3756|0.4721|

**Table 2.** Quantitative results on DSG score in DSG-1k dataset with PixArt-$\Sigma$.

|Method| Sparsity | TIFA↑| Paragraph↑| Relation↑| Count↑| Real user↑| Pose↑| Defying↑| Text↑| Overall↑|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Dense|0.0|0.879|0.886|0.836|0.740|0.600|0.626|0.808|0.683|0.761|
|LaCo|40.5|0.763|0.812|0.630|0.602|0.518|0.594|0.595|0.582|0.649|
|FinerCut|41.7|0.715|0.661|0.716|0.569|0.470|0.585|0.549|0.502|0.597|
|**Ours (Skrr)**|41.9|0.797|0.791|0.757|0.636|0.537|0.553|0.691|0.605|0.677|

[1] Huang, et al. T2I-compbench: A comprehensive benchmark for open-world compositional text-to-image generation. NeurIPS (2023). [2] Cho, et al. 
Davidsonian scene graph: Improving reliability in fine-grained evaluation for text-to-image generation. ICLR (2024). --- > **Question 2. Clarification on Skrr's practical applicability considering performance relative to dense models.** **Response:** As you may know, true practicality requires not only excellent performance but also efficient memory usage, and we believe our proposed Skrr provides a way to control this trade-off while minimizing performance degradation. As shown in all quantitative results, including Table 1 and Table 2 in Question 1, Skrr consistently achieves better performance than all baseline pruning techniques, even at high sparsity. Furthermore, qualitative examples in both the main text (Fig. 5) and appendix (Fig. A10-A16) clearly show that Skrr maintains visual fidelity and semantic alignment with the dense model. These results underscore Skrr’s strong practical applicability despite compression. Additionally, Skrr is entirely training-free, which enhances its practicality and distinguishes it as a meaningful contribution to the field. --- > **Question 3. Practical utility considering memory efficiency versus performance trade-off in text encoder pruning.** **Response:** We would like to clarify that our method is training-free; thus, there is no claim regarding training. Instead, our approach is explicitly designed for inference scenarios. In practical situations where it is challenging to anticipate incoming text prompts, offline feature extraction becomes impractical. Consequently, reducing the size of the text encoder is essential for effective model deployment. Reviewer v9A7 also acknowledged this motivation, explicitly stating, "I agree with this claim as the size of the text encoder significantly affects the memory and size of the checkpoint in storage." Concerns regarding performance drops associated with memory-efficient designs have already been addressed above. --- > **Question 4. 
Clarification on the representativeness and generalizability of the CC12M calibration dataset for text-to-image tasks.** **Response:** We would like to clarify that the CC12M dataset is widely recognized and extensively utilized for training and evaluating various T2I models, including Stable Diffusion 3. To address your concern about potential overfitting to our calibration dataset, we emphasize two key points: First, all evaluation metrics employed in our experiments—FID, CLIP, DreamSim, GenEval, T2I-CompBench, and DSG score—were computed on datasets entirely distinct and mutually exclusive from the calibration set. Consistent performance across these diverse benchmarks strongly suggests minimal risk of overfitting. Second, since all baseline methods evaluated in our experiments utilized the same calibration set for their respective pruning strategies, the fairness and comparability of our results remain intact.
M3-JEPA: Multimodal Alignment via Multi-gate MoE based on the Joint-Embedding Predictive Architecture
Accept (poster)
Summary: - The work proposes a method that aligns multimodal information in the latent space with an MoE setup, treating single-modality encoders as experts, optimised by a single-modality alignment loss with the input/output modality alternating each iteration. To avoid representation collapse under this alternating training, the authors include a mutual information loss as a regularisation term. - The conceptual idea is good: an MoE encoder is trained such that any modality latent x can be projected to latent y, and if this assumption holds, it can enable flexible handling of any unknown modality. - Results demonstrate that the proposed method outperforms baselines across a variety of multimodal tasks and modalities. ## update after rebuttal I would like to keep my scores, as the authors addressed most of my concerns. Claims And Evidence: - The claim that alternately optimising each single-modality encoder (i.e. expert) is better than simultaneously optimising single/unified encoders, as existing work does, is supported by Table 5. - The claim that the proposed method generalises well across domains and modalities is supported by the audio results in Table 2. - The claim of training- and inference-efficiency is partially supported by comparing trainable parameters in Table 1. - The motivation is not strong, given the limited evidence supporting the claimed limitation of prior works, i.e. that optimising alignment with single or unified encoders may lead to information bias. Methods And Evaluation Criteria: - Using a mixture-of-experts is appropriate given that different modality pairs and tasks may require different mappings. The MoE allows specialization and presumably helps maximize mutual information more effectively. - The authors argue that a different framework is needed to resolve ‘representation collapse’, then propose to solve it with the mutual information loss, but the ablation experiment on this seems to be missing. 
- The evaluation criteria encompass a range of tasks to reflect general multimodal alignment quality. However, given that the authors used Llama3-8B as a text encoder, more recent work using Llama3-8B should be included as baselines (e.g. Cambrian, MiniCPM, LLaVA, LLaVA-Next, etc). Theoretical Claims: - The method description includes multiple misused and un-introduced symbols. The notation across figures (1-3) and formulas is inconsistent and hard to follow. - The terms ‘input’ and ‘output’ corresponding to $Z_i$, $Z_t$ are very confusing. I suppose i and t represent the image and text, but if you want generalised terms, what about m1 and m2 for modality 1 and modality 2? - The task losses in section 2 and the alignment losses in section 3 use identical notation ($L_{i \to t}$ and $L_{t \to i}$), but to my understanding they are different, so they should be expressed differently. - The method is still not flexible to any modality, as it is limited by what the modality-specific encoders (experts) have been trained on; this leads to only a marginal advantage over existing work that trains single or unified modality encoders without MoE. Experimental Designs Or Analyses: - There is a lack of experiments verifying that the proposed losses (mutual information and conditional entropies) would also work in the usual setting, i.e. separate encoders without MoE. - Table 1 is quite saturated, and many weak baselines do not use embeddings as strong as this work's (which uses Llama3-8B, Dinov2-Large and LanguageBind (Zhu et al., 2023) to encode the text, image and audio modalities). It would be helpful to at least clarify the embeddings of each baseline. - Table 3 aims to verify RQ4 (can the method handle less-studied information?), but classification labels are very well studied, and many prior studies have already verified that text-to-image or image-to-text models learn representations that adapt well to classification tasks. 
Supplementary Material: - The dataset, implementation details and experiment setups provide sufficient details to support the main arguments. - The limitation section discussed areas for improvement, such as scaling the alignment to more than two modalities, and noted that more study on VQA validation could be performed. Relation To Broader Scientific Literature: - It references JEPA (Joint Embedding Predictive Architecture) by LeCun (Dawid & LeCun 2023, Assran et al. 2023), acknowledging the conceptual foundation. - The authors cited recent works on multimodal alignment, such as BLIP-2 (Li et al., 2023b), and included it in Table 1 as an important baseline. Essential References Not Discussed: Given that the authors used Llama3-8B as a text encoder, more recent work using Llama3-8B should be included as baselines (e.g. Cambrian, MiniCPM, LLaVA, LLaVA-Next, etc.). Other Strengths And Weaknesses: - The abstract and Figure 1 are hard to follow (overcomplicated). Is the MoE essentially learning to ‘map an unknown modality latent to the target modality latent space’? - The overall writing is not well structured and is confusing from time to time. The idea is simple and could be introduced in simple terms. Other Comments Or Suggestions: - l.075-076: ‘optimizing by alternating’ → ‘optimized by alternating’. - l.97: ‘input-output $(I,T)$’, ‘$e_i (·)$ and $e_t (·)$’, and formula (1): these are confusing, as the author keeps alternating between set symbols (capitalised $I, T$) and individual symbols $(i,t)$, and the notation $i, t$ is not introduced. - l.135-136: ‘modality encodings’ → ‘modality encoders’, and ‘task encodings’ … 
- Scaling and Number of Experts: How does the number of experts influence performance? Did you experiment with different counts of experts n in the MoE? - Failure Cases / Limitations: Beyond the note that MoE struggled with complex relationships, were there any particular failure cases you observed? For instance, are there specific modality pairs or tasks where M3-Jepa underperforms a simpler baseline? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the feedback. > limited evidence to support ... the limitation of prior works In the CR version, we will add: - why self-supervised learning and energy-based models have less information bias than supervised learning and probabilistic models (refer to Section 2.4, Dawid & LeCun (2023)) - why aligning in the latent space is less biased than aligning in the token space (Section 4, Dawid & LeCun (2023)) > more recent work using Llama3-8B should be included as the baseline M3-JEPA primarily works on multimodal alignment and is not generation-based, while Llama3-based MM studies (e.g. Cambrian, MiniCPM, LLaVA, etc) mainly report generation metrics. So it is difficult to include them as baselines right now. We will consider testing them on typical cases. > The terms ‘input’ and ‘output’ corresponding to Zi, Zt are very confusing. ...what about m1 and m2 for modality 1 and modality 2? We admit that the terms `(I, T)` were denoted for image-text tasks but are intended to generalize to any modality pair `(m1, m2)`. We will: - Replace `(I, T)` with `(m1, m2)` in all equations and figures. - Add a notation table to unify symbols (e.g., `z_m1`, `z_m2` for latent representations, `L_{m1→m2}` for task-specific losses). - Revise Figures 1–3 to reflect this generalization. > The task losses in section 2 and alignment losses in section 3 use identical notation (Li→t and Lt→i) - We will explicitly distinguish them in Section 2: - **Task losses**: `L_pred` (Eq. 5) and `L_CL` (Eq. 6) are *component-level* losses for individual tasks. - **Alignment loss**: `L_align` (Eq. 10) is the *meta-objective* combining mutual information and conditional entropy. - Add a subsection (2.5) to clarify their hierarchical relationship. 
> it is limited by what the modality-specific encoders (expert) been trained with, this leads to marginal advantage over existing work M3-Jepa’s flexibility stems from: - **Scalable Architecture**: MoE is agnostic to encoder choices. Any modality with a pretrained encoder (e.g., LiDAR) can be integrated, as demonstrated by the audio experiments (Section 4.3). - **Empirical Evidence**: Table 2 shows strong zero-shot performance on unseen audio-text tasks. - **Further Revision**: for camera-ready, we will discuss extending to novel modalities (e.g., tactile sensors). > lack of experiment verifying that the proposed losses The first row of Table 5 addresses this concern. Replacing MoE with an MLP, while keeping the loss unchanged, degrades performance on COCO compared to the full M3-JEPA. We are sorry that Table 5 is not clear enough. > Table 1 is quite saturated...at least clarify the embeddings of each baseline. We will explicitly clarify this in Table 1: - **Dual-encoder models** (CLIP, ALIGN, FILIP) use ViT-B/32 + text transformers. - **Fusion-encoder models** (UNITER, OSCAR) typically use ResNet-50 + BERT. - **BLIP-2** and **BEiT-3** leverage ViT-L + OPT-2.7B. > the classification label is very well-studied. We thank the reviewer for highlighting prior work on this. We will cite those prior works and add them to Table 3. However, unlike prior studies (e.g., CLIP’s linear probe), M3-Jepa treats labels as a *modality* and aligns them *in the latent space* without supervised fine-tuning. > Is the MoE essentially learning to ‘map an unknown modility latent to the target modilty latent space’? MoE in M3-Jepa serves as a **multi-directional alignment module** that learns to harmonize *shared semantic information* across modalities while preserving *modality-specific details*. This is achieved through **Multi-Directional Tasks** and **Information Decomposition**. > Ablation of MoE and Loss Components: These studies are conducted in Section 4.6. - **MoE vs. MLP**: In the first row of Table 5, replacing MoE with a standard MLP caused a significant performance drop. - **Loss Combination**: Figure 4 (sensitivity analysis of loss weight α) shows that the best performance occurs at α=0.5 (equal weighting of predictive and contrastive losses). Using only contrastive loss (α=0) or only predictive loss (α=1) led to a 15–20% relative degradation in R@1. > Scaling and Number of Experts The default setting is number of experts **$n$=12**. We experimented with different numbers of experts (n = 2, 8, 12) in the VQA experiments. Below are their accuracies on the VQAv2 validation set:

| $n$ | VQAv2 (val) |
| ---- | ----------- |
| 2 | 55.15 |
| 8 | 59.84 |
| 12 | 68.03 |

The number of experts contributes positively to the VQA performance. > Failure Cases / Limitations We found typical failure cases on the VQA tasks, i.e. when there is more than one modality in the input or output. M3-JEPA is worse than BEiT-3 and is the second-best. The reason is that we only use simple concatenation between different modality encodings; further solutions (e.g. a unified positional embedding across V and Q) may strengthen M3-JEPA's performance.
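As context for the α trade-off discussed in the rebuttal, here is a minimal NumPy sketch of an α-weighted combination of a predictive L2 loss and an InfoNCE-style contrastive loss; the function names, shapes, and temperature value are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def predictive_loss(z_pred, z_tgt):
    # L2 prediction error between predicted and target latents
    return float(np.mean(np.sum((z_pred - z_tgt) ** 2, axis=-1)))

def contrastive_loss(z_pred, z_tgt, tau=0.07):
    # InfoNCE over a batch: matched rows are positives, the rest negatives
    z_pred = z_pred / np.linalg.norm(z_pred, axis=-1, keepdims=True)
    z_tgt = z_tgt / np.linalg.norm(z_tgt, axis=-1, keepdims=True)
    logits = z_pred @ z_tgt.T / tau
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

def alignment_loss(z_pred, z_tgt, alpha=0.5):
    # alpha balances the predictive and contrastive terms;
    # the rebuttal reports alpha=0.5 as the best setting (Figure 4)
    return alpha * predictive_loss(z_pred, z_tgt) + (1 - alpha) * contrastive_loss(z_pred, z_tgt)
```

With α=0 only the contrastive term remains and with α=1 only the predictive term, matching the two degraded extremes reported above.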
Summary: This paper proposes a novel JEPA-based framework, M3-JEPA, for multimodal data (image, text, audio, etc.). Specifically, M3-JEPA follows the encoder-predictor architecture of JEPA, but replaces the target embedding with that of another modality. M3-JEPA designs a novel predictor based on a multi-directional MoE, which consists of projection layers and learnable task encodings to guide the cross-embedding prediction. The proposed method is trained with a contrastive loss and a predictive L2 loss. For experiments, the authors conduct several experiments that effectively answer 8 key questions, with good results on image-text retrieval, audio-text retrieval, image classification, and VQA tasks. Claims And Evidence: The authors address 8 key questions as mentioned at the beginning of Section 4. Most of the claims are validated effectively through experiments. However, I have a concern about the first and last statements about training and inference efficiency. - RQ1: even though the number of trainable parameters in M3-JEPA is small, it needs heavy image/text encoders, while other baselines, such as BLIP-2 ViT-g, do not include extra parameters. Therefore, reporting only the number of trainable parameters in Table 1 might not be a fair and convincing comparison with other baselines. - RQ8: M3-JEPA uses heavy encoders for both images and text (DINOv2 and Llama3-8B). During training, these encoders need to be partially fine-tuned by LoRA, which could incur high computational cost. In addition, the authors compared the retrieval efficiency with CLIP, but with the M3-JEPA features pre-computed. This could lead to (1) an unfair comparison, and (2) the user-provided image/prompt might not exactly match the pre-computed embeddings, which means the DINOv2 and Llama3-8B encoders are still needed during inference. Therefore, I would kindly cast doubt on the claim on training and inference efficiency. Methods And Evaluation Criteria: The proposed method and evaluation make sense. 
Theoretical Claims: The theoretical claims in the paper look sound to me. Experimental Designs Or Analyses: The paper contains a detailed explanation of the experiment setup. The experiment designs look sound. Supplementary Material: Yes, I checked all sections in the appendix. Relation To Broader Scientific Literature: The paper is related to creating a unified embedding space for multimodal data. Prior works including ImageBind, LanguageBind, etc., have shown that creating a shared embedding space across different modalities achieves competitive results on modality-specific tasks. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: - For the ablation experiment on connector architecture, is the MLP using a comparable number of trainable parameters? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s insightful comments. Below we clarify our methodology and provide additional context to address the concerns. > RQ1: even though the trainable parameters in M3-JEPA is small, it needs heavy image/text encoders. While other baselines, such as BLIP-2 VIT-g, does not include extra parameters. Therefore, reporting only the number of trainable parameters in Table 1 might not be a fair and convincing comparison with other baselines. > RQ8: M3-JEPA uses heavy encoders for both images and text (DINOv2 and LLama-8B). During training, these encoders need to be partially fine-tuned by LoRA, which could incur high computational cost. In addition, the author compared the retrieval efficiency with CLIP, but with the M3-JEPA features pre-computed. This could lead to (1) unfair comparison, (2) the user-provided image/prompt might not exactly match with the pre-computed embeddings, which means the DINOv2 and LLama8B encoders are still needed during inference. Therefore, I would kindly cast doubt on the claim on training and inference efficiency. We apologize that our current statement is somewhat confusing and does not differentiate the training and inference costs clearly. Although M3-JEPA has relatively heavy modality encoders (which impacts inference), the training cost is relatively low, since only the lightweight MoE predictor and N=3 encoder layers (LoRA rank=64) are needed, reducing memory overhead. We have elaborated the comparison of \# Trainable Params in Table 1. We conduct further **Inference Efficiency** analysis to address the reviewer's concerns: - Pre-Computation vs. Real-Time: - For **static datasets** (e.g., COCO), pre-computed embeddings enable **0.02s retrieval** (vs. CLIP’s 0.16s), as shown in Section 4.6. - For **dynamic inputs** (user-provided images/text), the latency is dominated by encoder inference (~0.1s/image for DINOv2, ~0.3s/text for LLaMA-3-8B). We will clarify this distinction. 
- Mitigation Strategy: - For latency-sensitive applications, we recommend **caching frequent queries** (e.g., common prompts in retrieval systems). - The MoE connector’s **0.02s latency** still outperforms cross-attention baselines (e.g., BLIP-2’s Q-Former: ~0.05s). Furthermore, we need to clarify that M3-JEPA primarily works on multimodal alignment tasks and is not a generation-based method. There is no sample-level prompt needed, therefore the pre-computation of textual input can be done before a specific task is conducted online. Nevertheless, we also agree with the reviewer that this comparison needs to be clearer and more comprehensive. To remedy this, we will make the following modifications in the camera-ready version: - In Table 1, also exhibit a column with the encoder parameters. - Add a **total parameter/FLOPS table** comparing M3-Jepa and baselines (including encoder costs). - Clarify **use-case assumptions** (pre-computation for static datasets vs. real-time constraints). - Add a paragraph to discuss the **trade-offs** between connector efficiency and encoder overhead in Section 6. > For the ablation experiment on connector architecture, is the MLP using a comparable number of trainable parameters? Yes, the MLP in the ablation study uses a comparable number of trainable parameters to ensure a fair comparison. We are sorry that we did not make this point clear. We will add the detailed number of MLP parameters in Table 5 of the CR version.
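The pre-computation argument in this rebuttal can be illustrated with a minimal sketch: the static gallery is embedded once offline, and a query then costs one matrix product over the cached index. The function names and shapes are hypothetical, and the heavy modality encoders are abstracted away:

```python
import numpy as np

def build_index(gallery_embeddings):
    # Embed the static gallery (e.g., COCO captions) once, offline,
    # and L2-normalise so retrieval reduces to a dot product.
    e = np.asarray(gallery_embeddings, dtype=np.float64)
    return e / np.linalg.norm(e, axis=1, keepdims=True)

def retrieve(index, query_embedding, k=5):
    # Online cost is one matrix-vector product over the cached index;
    # encoders are only needed to embed the query itself.
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = index @ q
    return np.argsort(-scores)[:k]
```

This separation is why retrieval latency over a static dataset can be much lower than the encoder latency for dynamic, user-provided inputs.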
Summary: The paper presents M3-Jepa, a multimodal alignment framework leveraging the Joint-Embedding Predictive Architecture (JEPA) and Mixture-of-Experts (MoE) for aligning diverse modalities in a shared latent space. It introduces a multi-directional MoE predictor that alternates between unidirectional alignment tasks using contrastive and predictive learning. The primary goal is to enhance multimodal alignment efficiency, reduce information bias, and improve generalization to unseen modalities and tasks. The study demonstrates state-of-the-art (SoTA) performance in multimodal tasks, including vision-language retrieval, audio-text retrieval, image classification, and visual question answering (VQA). Claims And Evidence: The paper's claims are largely supported by experimental results. It argues that M3-Jepa provides better multimodal alignment and efficiency than existing methods, with empirical evidence showing superior retrieval accuracy and classification performance. The MoE predictor's effectiveness is demonstrated through ablation studies, highlighting its role in improving cross-modal representation learning. However, the paper does not deeply analyze the failure cases, making it difficult to understand where the model struggles. Additionally, while the study claims computational efficiency, it does not provide direct comparisons of training costs with other SoTA models, leaving some uncertainty about real-world deployment feasibility. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally well-founded. M3-Jepa employs pretrained unimodal encoders and a multimodal MoE-based connector, which are evaluated through benchmark datasets such as COCO, Flickr30K, Clotho, Audiocaps, and ImageNet-1K. The use of contrastive and predictive losses in training aligns with current best practices in multimodal learning. 
However, one limitation is that the alternative optimization approach for training MoE is not compared with other optimization techniques, making it unclear if this is the most efficient strategy. Theoretical Claims: Theoretical claims in the paper include an information-theoretic justification for M3-Jepa, showing that it maximizes mutual information while minimizing conditional entropy for improved multimodal alignment. These claims are sound and mathematically derived, but the paper does not explicitly compare the information-theoretic advantages of JEPA vs. traditional generative models. Further theoretical exploration could strengthen its justification. Experimental Designs Or Analyses: The experimental design is robust, covering vision-language and audio-language retrieval, classification, and VQA tasks. M3-Jepa consistently outperforms prior methods in retrieval accuracy and classification metrics, demonstrating its generalization capabilities. The ablation studies confirm the importance of the MoE structure, alternative optimization, and contrastive/predictive loss balancing. However, the study does not analyze failure cases, and the impact of dataset biases on model performance is not addressed. Supplementary Material: The supplementary material provides details on datasets, training settings, and architectural configurations, which enhance the study’s transparency. However, additional insights into failure cases, model robustness across different domains, and real-world latency analysis would further strengthen the study. Relation To Broader Scientific Literature: M3-Jepa contributes to the broader multimodal learning literature by enhancing alignment efficiency through JEPA and MoE-based predictive modeling. It builds upon previous self-supervised multimodal methods while introducing a novel multi-directional MoE predictor. 
Compared to Flamingo, BLIP-2, and MoE-LLaVA, it offers a lighter-weight alignment method with improved retrieval and classification accuracy. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: A potential area for improvement is a more detailed failure analysis, particularly regarding cases where M3-Jepa underperforms in complex reasoning tasks or domain shifts. Additionally, a comparative efficiency analysis against computationally intensive SoTA models would help validate its claims of efficiency. Questions For Authors: How does M3-Jepa perform in real-world deployment scenarios, particularly in terms of inference latency on edge devices? Does the alternative optimization approach provide significant advantages over standard MoE training? What are the common failure cases, and how does the model handle ambiguous multimodal inputs? Would incorporating temporal information (e.g., video-based tasks) further improve multimodal alignment? How does M3-Jepa handle modality imbalances (e.g., missing or weakly correlated modalities in training)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful comments. > additional insights into failure cases, model robustness across different domains > How does M3-Jepa handle modality imbalances (e.g., missing or weakly correlated modalities in training)? We found primary failure cases on the VQA tasks, i.e. when there is more than one modality in the input or output. Currently M3-JEPA is worse than BEiT-3 and is the second-best. The reason is that we currently only use simple concatenation between different modality encodings. A detailed example: - The image is a horse in front of a 5-step staircase to a building. - M3-JEPA correctly answers the number and color of the horse, but fails to answer the number of steps. - The reason might be the simple concatenation of V and Q embeddings before feeding them to the MoE. The MoE might only be able to focus on the main part of the image (horse) while struggling to extract minor details (stairs). - A smart and unified positional embedding between V and Q, or cross-modality attention inside the MoE, may alleviate the problem. > does not provide direct comparisons of training costs with other SoTA models > additional insights into ... real-world latency analysis would further strengthen the study. We apologize that our current statement is somewhat confusing, and here we further clarify it. Although M3-JEPA has relatively heavy modality encoders (which impacts inference), the training cost is relatively low, since only the lightweight MoE predictor and N=3 encoder layers (LoRA rank=64) are needed, reducing memory overhead. We have elaborated the comparison of \# Trainable Params in Table 1. To further compare the training cost in Table 1, we will also explicitly list direct comparisons of FLOPs, GPU hours, and memory usage. For the inference cost, we conduct further **Inference Efficiency** analysis to address the reviewer's concerns: - Pre-Computation vs. 
Real-Time: - For **static datasets** (e.g., COCO), pre-computed embeddings enable **0.02s retrieval** (vs. CLIP’s 0.16s), as shown in Section 4.6. - For **dynamic inputs** (user-provided images/text), the latency is dominated by encoder inference (~0.1s/image for DINOv2, ~0.3s/text for LLaMA-3-8B). We will clarify this distinction. We will also include the inference efficiency analysis in the Discussion. > one limitation is that the alternative optimization approach for training MoE is not compared with other optimization techniques > does not explicitly compare the information-theoretic advantages of JEPA vs. traditional generative models We acknowledge the importance of exploring other optimization strategies, and we are sorry about the possible confusion due to the different notations "AGD" and "ALT" used in the ablation (Table 5). - Our ablation experiments (**Section 4.6, Table 5**) explicitly demonstrate the necessity of AGD. Removing AGD ("ALT" in Table 5) leads to a significant performance drop: - **Image→Text Retrieval (COCO)**: R@1 declines from **88.1% → 68.2%** - **Text→Image Retrieval (COCO)**: R@1 drops from **90.1% → 74.2%** This empirically validates that AGD is critical for bidirectional alignment, as standard optimization (randomly mixing task samples) fails to capture mutual dependencies between modalities. - **Theoretical Grounding**: The information-theoretic analysis (**Section 3**) demonstrates that AGD inherently: - Maximizes mutual information $ I(I; T) $ - Minimizes conditional entropy $ H(T|I) $ and $ H(I|T) $ This aligns with the theoretical framework of JEPA [1], where alternating optimization ensures the latent space retains **shared cross-modal information** while filtering modality-specific noise. Competing strategies like joint optimization risk collapsing the latent space or overfitting to a single modality direction. 
- Joint optimization would require simultaneous updates to conflicting parameter subsets (e.g., image→text and text→image predictors), leading to **gradient interference**. AGD decouples these updates, mirroring the success of alternating training in multi-task learning [2,3]. - **Future Directions**: While AGD is currently effective, we agree that exploring alternatives could be informative, and we will address it in future work. > the impact of dataset biases on model performance is not addressed. We will add a discussion paragraph about the zero-shot text-audio retrieval experiment. In short, we train the model on WavText5K and FreeSound, while testing on Clotho and AudioCaps. We find that the final result is extremely sensitive to the fraction of FreeSound relative to WavText5K, due to the sparsity of audio data and their different sampling intervals and lengths. --- **References:** [1] Dawid & LeCun (2023). *Introduction to Latent Variable Energy-Based Models*. [2] Akbari et al. (2023). *Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception*. [3] Likhosherstov et al. (2021). *Alternating Mirror Descent for Constrained Min-Max Games*.
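To make the decoupling argument concrete, here is a toy alternating-gradient-descent loop over two linear predictors, one per direction, so that the two directions never receive interfering gradients within the same step. Everything here, including the function name and the linear parameterisation, is a simplified sketch under assumed names, not M3-JEPA's actual AGD procedure:

```python
import numpy as np

def agd_train(z_m1, z_m2, steps=100, lr=0.05):
    """Alternate updates: even steps optimise the m1->m2 predictor,
    odd steps the m2->m1 predictor, so each update touches only one
    direction's parameter subset."""
    d = z_m1.shape[1]
    W_12 = np.zeros((d, d))  # m1 -> m2 linear predictor
    W_21 = np.zeros((d, d))  # m2 -> m1 linear predictor
    for step in range(steps):
        if step % 2 == 0:  # direction m1 -> m2
            err = z_m1 @ W_12 - z_m2
            W_12 -= lr * z_m1.T @ err / len(z_m1)  # gradient step on the MSE
        else:              # direction m2 -> m1
            err = z_m2 @ W_21 - z_m1
            W_21 -= lr * z_m2.T @ err / len(z_m2)
    return W_12, W_21
```

A joint-optimization variant would instead sum both direction losses and update both matrices every step; the alternating schedule above is the AGD analogue of drawing one unidirectional task per iteration.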
Summary: The paper "M3-Jepa: Multimodal Alignment via Multi-directional MoE based on the JEPA framework" introduces a novel modality-agnostic multimodal alignment paradigm called M3-Jepa, which utilizes the Joint Embedding Predictive Architecture (JEPA) to align representations of different modalities into the same latent space and achieves multimodal alignment via a multi-directional mixture-of-experts (MoE) structure. The authors propose this approach to address the limitations of current multimodal alignment strategies, which typically rely on a single or unified modality encoder and may lead to information bias. M3-Jepa aims to achieve alignment of different modalities by alternating between optimizations of different unidirectional tasks to maximize mutual information and minimize conditional entropy. The main contributions of the paper include: (1) a scalable and modality-agnostic architecture for multimodal alignment; (2) optimization of different unidirectional alignment tasks using alternating gradient descent via a multi-directional MoE as a cross-modal connector; (3) an information-theoretic analysis to verify the optimality of the framework; (4) experiments verifying that M3-Jepa achieves the best performance across different modalities and tasks (including image-text retrieval, audio-text retrieval, image classification, and visual question answering) while maintaining computational efficiency. The experimental results highlight the model's ability to generalize to unknown datasets and domains. Claims And Evidence: Overall, the paper presents convincing evidence for most experimental results. However, the supporting evidence for the following claims is somewhat insufficient: (1) Cross-modal alignment performance: The paper doesn't compare with other lightweight cross-modal methods, which may affect the comprehensiveness of the parameter-efficiency claim. More visualization results are needed to strengthen the persuasiveness of the cross-modal alignment effect. 
(2) Multi-directional MoE module design: While ablation studies show the superiority of multi-directional MoE over a simple MLP or non-alternating optimization strategies, there's no formal explanation of how MoE disentangles shared and modality-specific information. This part lacks systematic theoretical analysis and multi-dimensional experimental comparisons, making the connection between theory and practice less robust. (3) Rigor of theoretical derivation: The paper derives the alternating optimization strategy from an information-theoretic perspective (see formulas 9 to 11), showing that modality alignment can be achieved by maximizing mutual information and minimizing conditional entropy. However, the derivation is somewhat brief, with key assumptions and proof steps not fully elaborated. This makes it hard to be entirely convinced of the method's theoretical optimality. Supplementing with detailed proofs and clear assumptions would better validate this theoretical claim. Methods And Evaluation Criteria: The experiments selected well-known benchmark datasets such as COCO, Flickr30K, ImageNet-1K, Clotho, Audiocaps, VQA, and NLVR-2, covering a wide range of tasks including image-text retrieval, audio-text retrieval, image classification, and visual question answering. This comprehensive approach allows for a thorough evaluation of the model's performance across different scenarios. The evaluation metrics used, such as Recall, Accuracy, and F1 score, are standard in the field and facilitate direct comparison with existing methods. Overall, the experimental setup is well-reasoned and highly relevant to current multimodal alignment issues and their associated applications. Theoretical Claims: I've reviewed the theoretical proof section, particularly the part in Section 3 about proving the alternating optimization strategy's effectiveness using mutual information maximization and conditional entropy minimization (e.g., formulas 9-11). 
While the overall reasoning is sound, I found these issues: (1) Insufficient mathematical derivation details are provided for the transformation from formula 9 to 10, and inadequate experimental evidence is given for the weight parameter λ. (2) No clear explanation is offered on how alternating optimization of uni-directional tasks (like i→t and t→i) ensures convergence to a global optimal solution. There's no citation or proof of AGD's convergence theorem in multi-objective optimization, nor is there discussion of potential conflicts in the optimization process (such as inconsistent gradient directions across tasks). (3) The sensitivity analysis for parameter α lacks a theoretical explanation for why α=0.5 is the optimal choice to balance mutual information maximization and conditional entropy minimization. Relying solely on experimental observation isn't rigorous enough. Experimental Designs Or Analyses: While the overall design is reasonable, further verification or analysis is needed in these areas: (1) In the zero-shot generalization evaluation of the audio-text retrieval task, the hyperparameter alignment process for reproducing LanguageBind's results isn't detailed, raising concerns about the consistency of the reproduction method. (2) The sensitivity analysis of loss weight α indicates that α=0.5 is optimal, yet there's no exploration of whether different tasks require differentiated weights (for instance, whether audio tasks need a higher contrastive loss weight). Also, there's a lack of ablation experiments on the number of MoE experts and routing strategies (such as dynamic top-k) to verify the impact of sparsity. Supplementary Material: Yes, I've carefully reviewed the supplementary materials, focusing on: (1) Appendix A: Detailed explanations of datasets and experimental settings, covering statistics, preprocessing, and training configurations. 
(2) Experiment Details: Additional ablation studies and image-text similarity matrix visualizations to clarify module impacts. (3) Multi-directional MoE Design: In-depth descriptions of the architecture and parameter configurations to highlight key implementation details. Relation To Broader Scientific Literature: The paper's key contributions and their relation to existing literature are as follows: (1) Extension of the JEPA Framework: The paper builds on the Joint Embedding Predictive Architecture (JEPA) by Dawid & LeCun (2023) and Assran et al. (2023), expanding it to multimodal scenarios. Unlike the original JEPA, which focuses on single-modality latent space prediction, M3-Jepa uses a multi-directional MoE connector for cross-modal latent space alignment. This approach aligns with recent trends in multimodal representation learning, such as ImageBind and Flamingo. Compared to methods like BLIP-2 (Li et al., 2023b) that rely on heavy cross-modal fusion modules, M3-Jepa employs a lightweight MoE and alternating optimization strategy. (2) Application of Mixture of Experts (MoE): The paper leverages the sparsification advantages of MoE in cross-modal tasks, similar to MoE-LLaVA (Lin et al., 2024) and Pathways (2023). However, M3-Jepa further explores multi-directional routing mechanisms, such as dynamically selecting experts based on input modalities, contrasting with traditional single-directional MoE approaches like the Switch Transformer. (3) Alternating Optimization Strategy (AGD): The idea of alternating training for multi-task learning in M3-Jepa is similar to IMP (Akbari et al., 2023) and PolyViT (Likhosherstov et al., 2021). Yet, M3-Jepa applies this strategy to cross-modal bidirectional alignment rather than multi-resolution or multi-task learning. 
(4) Information-Theoretic Alignment Objective: The objective of maximizing mutual information I(I; T) in M3-Jepa is consistent with the contrastive learning framework of CLIP (Radford et al., 2021). But M3-Jepa more explicitly disentangles shared and modality-specific information by introducing the minimization of conditional entropy H(T|I) and H(I|T). Essential References Not Discussed: (1) For multi-directional MoE and sparse expert models, though the paper cites some relevant works, recent findings on Switch Transformer or MoE-LLaVA offer new insights into multimodal alignment and cross-modal fusion. Discussing these (e.g., progress in applying MoE strategies in large-scale multimodal models) would enhance the comprehensiveness of this paper's positioning. (2) From the information-theoretic perspective, the paper justifies alternating optimization using mutual information and conditional entropy, but similar ideas have been discussed in works like Dawid & LeCun (2023). Supplementing the literature comparison would help clarify the theoretical innovation and limitations of this paper's method. Other Strengths And Weaknesses: Strengths: Innovation: The paper combines the JEPA framework with multi-directional MoE, presenting a novel approach for cross-modal alignment in latent space. This combination helps solve information bias issues in traditional unified encoders. Comprehensive Experimental Validation: The authors conduct extensive experiments across multiple tasks, including image-text retrieval, audio-text retrieval, image classification, and visual question answering. Ablation studies and sensitivity analyses demonstrate the effectiveness of each component, highlighting the method's practicality. Weaknesses: Insufficient Theoretical Proof: The explanation of the alternating optimization strategy from an information-theoretic perspective lacks detailed mathematical derivations and assumptions, reducing the persuasiveness of the theoretical argument. 
Inadequate Detail Description: Key experimental details (e.g., data preprocessing, training strategies, hyperparameter configurations) are mainly described in the appendix. Brief descriptions in the main text may hinder readers' understanding of the experimental design and reproducibility. Further Literature Discussion Needed: The paper could benefit from a more in-depth discussion of recent advances in multi-directional MoE and sparse activation strategies in multimodal learning. Additional comparisons with relevant literature would help better position the paper's contributions. Other Comments Or Suggestions: The paper contains some minor errors that need polishing. For instance, in formula 4, the learnable matrix W2 is referred to as W3 and W4 in the explanation, which is unclear. In section 5.1, the description "d Firstly" in the sentence "However, as model scale continues to increase, several potential challenges emerge:d Firstly," is ambiguous. Additionally, there is no reference to Figure 5. It is recommended to carefully correct these grammatical errors in the paper. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > doesn't compare with other lightweight cross-modal methods

We include such baselines on R@1, where M3-Jepa still outperforms:

| Model | # Param | Flickr30K | | COCO | |
| -------------- | ------------ | --------- | ---- | ----- | ---- |
| | | T → I | I → T | T → I | I → T |
| MobileCLIP [1] | <30.7M | 67.7 | 85.9 | 40.4 | 58.7 |
| TinyCLIP [2] | 63M+31M | 66.0 | 84.9 | 38.5 | 56.9 |
| M3-Jepa | 140M | 97.8 | 97.8 | 87.9 | 89.7 |

> how MoE disentangles shared and modality-specific information

MoE implements information disentanglement through **differentiable routing based on latent factors** (Section 2.2). Formally, for modality embeddings $z_m^c$ with modality encoding $E_m$, the expert selection probability is: $$p(E_k|z_m) = \text{Softmax}(W_r[z_m \oplus E_m])$$ where $W_r$ and $E_m$ are learnable matrices. This allows:
- **Modality-specific paths**: Experts specialize in processing unique modality patterns $E_m$
- **Shared representation**: Projection matrix $W_r$ creates a common subspace for cross-modal alignment

> rigor of theoretical derivation (formula 9 to 10):

We apologize that our original proof lacks some details; here we provide a more complete version. For paired modalities $(I,T)$, the total loss can be derived from the information bottleneck (Eq.
10) $$\mathcal{L} = -\mathcal{I}(I;T) + \lambda \big(\mathcal{H}(T|I) + \mathcal{H}(I|T)\big)$$ a trade-off requires balancing:
- **Compression**: Maximize $\mathcal{I}(I;T)$ (redundancy reduction)
- **Predictiveness**: Minimize $\mathcal{H}(T|I) + \mathcal{H}(I|T)$ (uncertainty reduction)

COMPLETER [3] proves this for the two implemented types of losses:
- $\mathcal{L}_{CL}$ maximizes $\mathcal{I}$ by separating negatives
- $\mathcal{L}_{Pred}$ minimizes $\mathcal{H}$ by regression

> inadequate experimental evidence for the weight parameter lambda
> theory of sensitivity for alpha

Sorry for the redundant notations: actually, **$\lambda$ in Eq. 10 (theoretically derived) is equivalent to $\alpha$ in Eq. 3, Eq. 4, and Fig. 4 (empirically optimally obtained).** Re-derivation from Eq. 9 to Eq. 10 further validates the empirical optimum ($\alpha=0.5$):
- The loss in Eq. 10 mirrors the **free energy minimization** principle $$F = U - TS$$ in which $\mathcal{I}(I;T)$ corresponds to the internal energy $U$, and $\mathcal{H}(T|I) + \mathcal{H}(I|T)$ corresponds to the entropy term $TS$. The critical temperature $T_c$ where the energy/entropy balance occurs corresponds to $\alpha=0.5$.
- For two consecutive steps with the step index large enough, a reasonable assumption is that the two directional losses converge to each other; then $$\mathcal{L} \rightarrow \frac{1}{2} \big(\mathcal{L}(i \to t) + \mathcal{L}(t \to i)\big) \rightarrow \frac{1}{2} \big( -\mathcal{I}(I;T) + \mathcal{H}(T|I) - \mathcal{I}(T;I) + \mathcal{H}(I|T) \big) = -\mathcal{I}(I;T) + \frac{1}{2} \big(\mathcal{H}(T|I) + \mathcal{H}(I|T)\big)$$ which indicates the optimal $\lambda = \alpha = 0.5$.

> convergence of AGD

Sorry, we clarify here:
- Key Assumptions: **Modality Independence during Alternation**. At each step $t$, we update only one alignment direction (e.g., $\theta_{I \to T}$ or $\theta_{T \to I}$), with the other fixed. This is standard in alternating optimization (e.g., EM).
- Proof Sketch for Optimality: Eq.
9's alternation between $\mathcal{L}(I \to T)$ and $\mathcal{L}(T \to I)$ is equivalent to a **block coordinate descent** on Eq. 10, and the **convergence theorem of alternating optimization** [4][5] guarantees convergence to a local optimum if each subproblem (Eq. 9) is optimally solved.

> hyperparameter for reproducing LanguageBind

We use the model downloaded from their website and strictly followed their inference parameters.

> whether different tasks require differentiated alpha

$\alpha=0.5$ is an empirically optimal choice for most tasks except for VQA, where one image correlates with multiple questions. The occurrence of the CL loss differs from that of the prediction loss; therefore $\alpha$ changes by replacing the batch average with the batch sum.

> ablation of expert number and routing strategy

We have ablations for both $n$ and $k$. Please see the response to Reviewer JJf4 for $n$. R@1 results for $k$ (default 4) are:

| $k$ | Flickr30K | | COCO | |
| ---- | --------- | ---- | ----- | ---- |
| | T → I | I → T | T → I | I → T |
| 2 | 95.5 | 96.0 | 82.0 | 85.0 |
| 6 | 97.0 | 97.5 | 86.5 | 88.0 |
| 4 | 97.8 | 97.8 | 87.9 | 89.7 |

> Further Literature Discussion; key details; typos

We appreciate the insightful suggestions of related works and will include them in the camera-ready version. We will also move more key implementation details from the appendix to the main part, and fix the typos. ------ Reference: [1] Vasu & Anasosalu, CVPR 2024 [2] Wu & Peng, ICCV 2023 [3] Lin, Gou, et al., CVPR 2021 [4] Jain & Kar. Non-convex optimization for machine learning. 2017 [5] Akbari, Hassan, et al. Alternating gradient descent and mixture-of-experts for integrated multimodal perception. NeurIPS 2023
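As a concrete illustration of the routing rule discussed in this rebuttal (a softmax over a learnable projection of the concatenated modality embedding and modality encoding, followed by top-$k$ expert selection), here is a minimal pure-Python sketch. The shapes, seed, and renormalization of the kept gates are illustrative assumptions, not the authors' actual implementation.

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(z_m, e_m, w_r, k=2):
    """Score each expert via a learnable projection of [z_m ⊕ e_m],
    then keep the top-k experts (hypothetical names and shapes)."""
    x = z_m + e_m                      # concatenation of the two vectors
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in w_r]
    probs = softmax(logits)            # p(E_j | z_m) over all experts
    topk = sorted(range(len(probs)), key=lambda j: -probs[j])[:k]
    z = sum(probs[j] for j in topk)    # renormalize kept gates to sum to 1
    return {j: probs[j] / z for j in topk}

random.seed(0)
d, n_experts = 6, 4
z = [random.gauss(0, 1) for _ in range(3)]   # modality embedding (toy)
e = [random.gauss(0, 1) for _ in range(3)]   # modality encoding (toy)
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_experts)]
gates = route(z, e, W, k=2)
print(gates)  # two expert indices with gate weights summing to 1
```

The expert outputs would then be combined with these gate weights; the ablation over $k$ above varies how many experts are kept per token.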
Token Cleaning: Fine-Grained Data Selection for LLM Supervised Fine-Tuning
Accept (poster)
Summary: This paper introduces two token cleaning methods for LLM fine-tuning, Fixed-model cleaning and Self-evolving cleaning. The token cleaning method ignores the loss from "unimportant" tokens, thereby improving task performance. The authors also provide theoretical analyses and extensive experiments. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. I checked all the proofs and details in the appendix. Relation To Broader Scientific Literature: The work is very closely related to previous token-selective fine-tuning methods. It differs from previous works by providing a semi-supervised-style algorithm, self-evolving cleaning. Essential References Not Discussed: N/A Other Strengths And Weaknesses:

## Strengths
- The paper is well written and easy to follow
- Theoretical analysis is interesting and adequately supports the findings
- Experiments are done thoroughly

## Weaknesses
- My main concern is the effectiveness of this approach. In Table 1, I am not sure if the authors' approach is significantly better than previous work (e.g. RHO). I would like to see the average result from multiple runs and corresponding t-tests to see if the authors' approaches are consistently better.
- Another question is on the computational cost. It seems that multiple computations are needed in the token cleaning pipeline. How does the computation cost, e.g. in terms of latency, compare with vanilla SFT and other baselines?
- Although minor, I have some concerns regarding the technical contributions. The idea of removing unneeded tokens for fine-tuning is not new, and may somewhat limit the novelty of this work.

If all or part of my concerns are resolved, I am willing to re-evaluate my score. Other Comments Or Suggestions: See Weaknesses Questions For Authors: See Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We want to thank Reviewer fypD for their positive feedback and comments. We will address individual comments below.

> **Weakness 1: the effectiveness of this approach**

**Response**: Thank you for your insightful suggestions! We present the average results from three independent trials (using three different random seeds) across two base model configurations. Clearly, our proposed methods still outperform all baselines. Additionally, the corresponding t-test results confirm that the performance improvements are significant.

| Method (LLaMA-3.2-3B) | truthfulqa | tydiqa | logiqa | mmlu | hellaswag | arc_c | boolq | Average |
|--|--|-|-|-|--|--|--|--|
| ds2 | 41.51 ± 1.08 | 41.83 ± 2.13 | 24.6 ± 0.39 | 56.97 ± 0.14 | 55.7 ± 0.09 | 44.39 ± 0.87 | 76.99 ± 0.62 | 48.87 ± 0.12 |
| full_token | 43.92 ± 0.52 | 43.83 ± 2.32 | 24.81 ± 0.47 | 56.95 ± 0.11 | 55.59 ± 0.04 | 44.99 ± 0.49 | 74.24 ± 0.38 | 49.17 ± 0.21 |
| random | 43.88 ± 0.09 | 42.77 ± 3.08 | 24.24 ± 0.74 | 57.09 ± 0.13 | 55.48 ± 0.1 | 44.96 ± 0.45 | 74.44 ± 0.53 | 48.97 ± 0.31 |
| rho | 47.97 ± 0.77 | 50.93 ± 0.85 | 26.15 ± 0.09 | 57.16 ± 0.01 | 56.48 ± 0.04 | 46.03 ± 0.62 | 77.04 ± 0.1 | 51.67 ± 0.15 |
| fixed_model_cleaning | 48.24 ± 0.61 | 51.8 ± 0.47 | 26.05 ± 0.16 | 57.03 ± 0.11 | 56.46 ± 0.04 | 45.97 ± 0.51 | 77.27 ± 0.27 | 51.83 ± 0.06 |
| self_evolving_cleaning | 52.01 ± 0.93 | 54.38 ± 0.33 | 28.43 ± 0.36 | 56.31 ± 0.12 | 55.75 ± 0.05 | 46.68 ± 0.6 | 77.25 ± 0.07 | **53.0** ± 0.2 |

| Method (LLaMA-3.1-8B) | truthfulqa | tydiqa | logiqa | mmlu | hellaswag | arc_c | boolq | Average |
|--|-|--|-|-|-|-|--|-|
| ds2 | 48.12 ± 1.3 | 53.02 ± 2.05 | 27.34 ± 0.32 | 65.81 ± 0.03 | 60.47 ± 0.1 | 53.37 ± 0.18 | 83.38 ± 0.0 | 55.93 ± 0.12 |
| full_token | 49.08 ± 1.36 | 52.5 ± 2.83 | 28.06 ± 0.47 | 65.77 ± 0.02 | 60.33 ± 0.08 | 54.12 ± 0.2 | 82.92 ± 0.38 | 56.1 ± 0.2 |
| random | 49.84 ± 1.01 | 54.09 ± 1.39 | 27.7 ± 0.59 | 65.83 ± 0.06 | 60.32 ± 0.08 | 54.41 ± 0.3 | 83.24 ± 0.14 | 56.47 ± 0.12 |
| rho | 55.76 ± 0.99 | 58.49 ± 3.46 | 28.11 ± 0.93 | 65.75 ± 0.01 | 62.0 ± 0.12 | 54.95 ± 0.23 | 82.16 ± 0.45 | 58.2 ± 0.61 |
| fixed_model_cleaning | 56.04 ± 0.11 | 61.85 ± 0.72 | 28.06 ± 0.27 | 65.62 ± 0.08 | 61.96 ± 0.04 | 55.04 ± 0.14 | 82.74 ± 0.08 | 58.77 ± 0.15 |
| self_evolving_cleaning | 59.75 ± 0.16 | 64.1 ± 0.85 | 26.56 ± 0.54 | 65.17 ± 0.1 | 62.65 ± 0.03 | 54.75 ± 0.2 | 82.54 ± 0.09 | **59.33** ± 0.23 |

The t-test results indicate that Self-Evolving Cleaning significantly outperforms RHO, showing statistically significant improvements with a p-value less than 0.05.

| Method Comparison (t-statistic, p-value) | LLaMA-3.2-3B | LLaMA-3.1-8B |
|---|--|-|
| Fixed-model cleaning vs. Rho | (1.77, 0.152) | (1.56, 0.193) |
| Self-Evolving Cleaning vs. Rho | (9.18, 0.001) | (3.02, 0.039) |

> **Weakness 2: Computational cost**

**Response**: Thank you for highlighting this important issue. The computational costs associated with the token cleaning pipeline primarily consist of the training costs, akin to those of standard supervised fine-tuning (SFT), along with two types of additional inference costs. These additional costs stem from one base model and another reference model, both of which are used to calculate token-level influence scores. Compared to the Rho-1 baseline, our two proposed methods do not incur any additional inference costs since the total data size for inference remains unchanged. For the Naive Fixed-Model Cleaning, we perform a one-shot inference on all samples simultaneously, mirroring the process used in Rho-1. For the Self-Evolving Cleaning pipeline, we simply segment the data pool into several partitions for independent inference using different reference models, i.e., (50k samples -> 10k (reference model 1), 10k (reference model 2), …, 10k samples). Consequently, the inference cost for the Self-Evolving Cleaning pipeline is also equivalent to that of Rho-1.
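For reference, a two-sample t-statistic like the ones reported above can be computed from per-seed averages with a stdlib-only Welch's t-test. The three-seed samples below are illustrative: they are constructed to match the reported LLaMA-3.2-3B means and standard deviations (53.0 ± 0.2 vs. 51.67 ± 0.15), not the paper's raw per-seed scores.

```python
import math

def welch_t(a, b):
    """Welch's two-sample t-statistic and degrees of freedom
    for unequal-variance samples (e.g., per-seed benchmark averages)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    dof = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, dof

# Illustrative three-seed averages (constructed, not the paper's data):
self_evolving = [52.8, 53.0, 53.2]
rho = [51.5, 51.7, 51.8]
t, dof = welch_t(self_evolving, rho)
print(round(t, 2), round(dof, 1))  # → 9.18 3.7
```

The p-value would then come from the t-distribution's survival function at this t and dof (e.g., via `scipy.stats`); with means this far apart relative to the seed variance, it falls well below 0.05.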
> **Weakness 3: technical contribution clarification**

**Response**: Thank you for raising this concern. We would like to clarify that the concept of masking unimportant tokens, initially introduced by Rho-1, is not new but remains very timely. Building on this, our work makes several novel contributions:
- Self-Evolving Cleaning: Our iterative approach that progressively refines token selection is entirely new and shows superior performance.
- Theoretical Framework: We provide a rigorous analysis of when and why token cleaning outperforms full-token training, with error upper bounds that explain the observed Matthew effect.
- Noisy-Label Perspective: We reframe token selection as a noisy-label problem, providing a new theoretical lens that connects token cleaning to a rich body of work on learning with noisy labels.

These contributions extend beyond prior work (Rho-1) on token selection, offering both theoretical insights and practical improvements. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttals. Most of my concerns have been resolved. I adjusted the score accordingly. --- Reply to Comment 1.1.1: Comment: Dear Reviewer fypD, We are pleased that most of the concerns have been addressed. Thank you for taking the time to carefully consider our rebuttal and for your thoughtful feedback! We are grateful for your positive impression and support for our work.
Summary: The paper introduces a token cleaning method for supervised fine-tuning (SFT) of large language models that operates at a fine-grained, token level rather than discarding whole samples. It employs an influence-guided scoring function to assess each token's contribution by comparing loss differences between a base model and a reference model. Tokens deemed uninformative or redundant are filtered out using a threshold, thereby preserving key task-relevant information. Two cleaning strategies are proposed: a fixed-model approach that performs one-shot cleaning and a self-evolving approach that iteratively updates the reference model. The method is supported by theoretical analysis and extensive experiments across various downstream tasks. Claims And Evidence: The authors support their claims with theoretical analysis, proving that training with cleaned tokens is lower bounded by training with full tokens under the assumption that the influence function can be used to evaluate token quality. The proof seems sound under the assumption of model quality. However, I do have some concerns about this assumption, which are detailed in "Other Strengths And Weaknesses." Empirical results demonstrate that the self-evolving method performs better than the fixed-model one on LLaMA-3.2 and LLaMA-3.1 models. In general, I think the claims made in this paper are well justified. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense. The idea of token cleaning flows naturally from pretraining (Rho-1) to fine-tuning, and is an intuitive extension from sample-level data cleaning to the token level. Theoretical Claims: I did not check the proof step-by-step, but it looks promising. Experimental Designs Or Analyses: In general, the proposed experiments are sound in terms of design and baseline choices. Supplementary Material: N/A Relation To Broader Scientific Literature: This work is a natural extension of Rho-1.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. This paper has strong theoretical analysis supporting token selection during SFT, which is a novel contribution. 2. The paper is well written and easy to follow. Weaknesses: 1. One concern I have is how accurate the influence function is at assessing the quality of a token. Previous work [1] indicates that influence functions might not work well on LMs, and "can lead to strong biases in this estimator." If the influence function itself is biased, the soundness of the theoretical findings in this paper might be affected. While I understand that there are differences between theoretical assumptions and practical settings, I hope the authors can provide more insights and intuitive explanations justifying the use of influence functions to score tokens. 2. The sizes of the models considered in the experiment section are 3B and 7B. I'm wondering how effective this method is when applied to a larger and more powerful model. Will a larger model be more robust to SFT noise, or will it still benefit from token cleaning? [1] Causal estimation of memorisation profiles. Other Comments Or Suggestions: N/A Questions For Authors: Refer to the weakness section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We want to thank Reviewer wKJy for their positive feedback and comments. We will address individual comments below.

> **Weakness 1: how accurate is using the influence function to assess the quality of a token**

**Response**: Thank you for raising this important question. We'd like to clarify with the following points:
- The mentioned work made that claim because of violations of the required theoretical assumptions (e.g., a positive-definite Hessian matrix) for language models. These assumptions are needed for the approximation. However, in this work, the method involves directly working with the difference in loss values rather than making approximations or Taylor expansions. Thus, the accuracy of the influence function is not affected. Here is a more detailed comparison between our implementation of the influence function and the standard one, which requires approximations. The standard influence form is defined as loss($\theta$) - loss($\theta'$), where $\theta$ is the current model and $\theta'$ is the model after removing a token, which requires some approximations to avoid model re-training. However, in our scenario, we do not have to counterfactually remove a token to calculate the influence, since the training procedure naturally adds tokens; this also has the form loss($\theta$) - loss($\theta'$), but $\theta'$ can be seen as the current model (for example) and $\theta$ can be seen as the model after adding one token. Specifically, in the self-evolving cleaning pipeline, we iteratively use the influence function to identify informative tokens for the next-round training. Therefore, we can obtain those two models naturally, with the original base model as $\theta$ and the current model as $\theta'$. Therefore, the only difference is that we are adding tokens actively [1-2] while the standard approach deletes tokens (i.e., re-training) [3].
- Besides, our theoretical analysis is based on a unified and analytical framework, which mainly explains when and why SFT with cleaned tokens outperforms SFT with full tokens, regardless of the specific token quality evaluation metric used. As such, our theoretical insights are independent of the precise accuracy of the influence function, focusing instead on the overall reduction in token noise rates.

Finally, to address the reviewer's concern, we offer more intuition here. Intuitively, we hope that the base model, denoted as $\theta$, will perform similarly to the reference model, $\theta'$, on downstream tasks after fine-tuning. To approximate this criterion, we use the token loss calculated by the reference model, expressed as $\ell(x_{i,j} | \boldsymbol{x}_{i,:j}, \theta')$. If the token loss calculated by the base model, $\ell(x_{i,j} | \boldsymbol{x}_{i,:j}, \theta)$, is significantly higher than that calculated by the reference model, it indicates that this particular token requires fine-tuning to better align the base model with the reference model. Relying solely on $\ell(x_{i,j} | \boldsymbol{x}_{i,:j}, \theta)$ as a metric for token quality, by ranking and selecting tokens based on this loss, would only consider token-level information and neglect the task-specific insights provided by the reference model.

[1] Estimating Training Data Influence by Tracing Gradient Descent, NeurIPS'20. [2] Fairness without harm: an influence-guided active sampling approach, NeurIPS'24. [3] Understanding black-box predictions via influence functions, ICML'17.

> **Weakness 2: larger-scale model performance evaluation**

**Response**: Thank you for your valuable feedback. Given our constraints on GPU resources and time, we utilized LLaMA-2-13B-hf as the base model to assess the efficacy of our proposed methods. All experimental settings are consistent with the original ones. The results demonstrate the superiority of our methods.
| Method | truthfulqa | tydiqa | logiqa | mmlu | hellaswag | arc_c | boolq | Average |
|---------------------------|----------------|--------|--------|------|-----------|---------------|-------|---------|
| base (llama-2-13b-hf) | 36.73 | 33.79 | 26.05 | 55.14 | 60.11 | 47.80 | 80.64 | 48.6 |
| ds2 | 37.48 | 35.47 | 27.91 | 55.19 | 60.37 | 48.49 | 81.16 | 49.4 |
| full_token | 41.73 | 38.29 | 27.60 | 55.33 | 60.33 | 49.70 | 82.40 | 50.8 |
| random | 41.67 | 37.44 | 27.60 | 55.30 | 60.35 | 49.61 | 82.33 | 50.6 |
| rho | 44.41 | 42.12 | 28.06 | 55.35 | 61.33 | 50.99 | 81.87 | 52.0 |
| fixed_model_cleaning | 44.03 | 40.07 | 27.91 | 55.63 | 61.32 | 50.90 | 81.62 | 51.6 |
| self_evolving_cleaning | 49.13 | 44.65 | 26.67 | 55.42 | 62.23 | 51.16 | 82.49 | **53.1** |
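The token-level score described in these responses (base-model token loss minus reference-model token loss, with the lowest-scoring fraction masked out of the SFT loss) can be sketched in a few lines. The toy losses, function name, and 40% ratio below are illustrative stand-ins, not the paper's actual pipeline.

```python
def clean_tokens(base_losses, ref_losses, mask_ratio=0.4):
    """Keep the (1 - mask_ratio) fraction of tokens whose influence
    score loss_base - loss_ref is highest; return a 0/1 mask to apply
    to the per-token SFT loss (ties at the cutoff keep extra tokens)."""
    scores = [b - r for b, r in zip(base_losses, ref_losses)]
    n_keep = len(scores) - int(mask_ratio * len(scores))
    cutoff = sorted(scores, reverse=True)[n_keep - 1]
    return [1 if s >= cutoff else 0 for s in scores]

# Toy per-token losses for one response (hypothetical values):
base = [2.1, 0.3, 1.8, 0.2, 2.5]   # loss under the base model
ref  = [0.4, 0.3, 0.5, 0.4, 0.6]   # loss under the reference model
mask = clean_tokens(base, ref, mask_ratio=0.4)
print(mask)  # → [1, 0, 1, 0, 1]
```

Tokens where both models already agree on a low loss get a near-zero score and are masked; tokens whose loss drops sharply under the reference model are kept for the next training round.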
Summary: This paper enhances the SFT process by scoring and filtering out uninformative tokens, treating them as noisy labels. They introduce a novel influence-guided token cleaning pipeline to address this issue. Furthermore, they offer rigorous analyses to demonstrate when and why SFT with cleaned tokens outperforms the use of the complete set of tokens. ## update after rebuttal I still don't believe the performance improvement is due to masking out uninformative tokens. Even in the example provided by the authors, they use the phrase "negatively impactful tokens" to explain how it works. I don't think such negatively impactful tokens can be considered merely uninformative. I believe the authors should distinguish between negatively impactful tokens and uninformative tokens, rather than treating them merely as generic error labels. Claims And Evidence: I question the claim that the selection of tokens can be regarded as a noisy-label problem. Methods And Evaluation Criteria: The evaluation is relatively easy. As far as I know, the datasets mentioned are all classification tasks instead of generation tasks. I hope the authors can correct me if my understanding is wrong. Theoretical Claims: yes Experimental Designs Or Analyses: The evaluation seems overly simplistic. Based on my understanding, the datasets referenced pertain solely to classification tasks rather than generation tasks. If my interpretation is incorrect, I would appreciate clarification from the authors. I suggest comparing with more baselines, such as sample-level data selection, to show why the token-level approach is necessary. For example, you can compare with the method in "Speculative Coreset Selection for Task-Specific Fine-tuning."
Supplementary Material: Appendix A-E Relation To Broader Scientific Literature: token-level understanding in SFT Essential References Not Discussed: NA Other Strengths And Weaknesses: The main issue is that it's not immediately clear why uninformative tokens should be considered noisy. I recommend providing a demonstration to clarify why these tokens can be categorized as such. From my perspective, the loss associated with uninformative tokens tends to be minimal during training. In the Next-Token Prediction paradigm, the loss of uninformative tokens is naturally down-weighted due to its small value. Therefore, I don't understand the necessity of manually removing them. Other Comments Or Suggestions: I want to summarize my main concerns here: 1. I'm unclear on the necessity of manually selecting uninformative tokens, as this process can be naturally handled by the loss function. The loss associated with uninformative tokens can decrease naturally, serving as a form of automatic reweighting. One possible explanation is that if the loss for these tokens becomes too low, it could negatively impact generalization. Therefore, it might be necessary to prevent further reduction in their loss by adjusting their weight. 2. I find the experiments somewhat lacking, especially when considering the benchmarks and baselines used. Questions For Authors: What is the cost of this method? I believe the cost is quite significant due to the additional inference required. Additionally, I find the definition of the influence function somewhat unusual, though still acceptable. The primary goal should be to assess the influence of a token. I think a token's influence should be measured by the change in loss across the entire dataset before and after the token is removed. This approach differs significantly from your current definition. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to thank Reviewer dD3A for their time and effort. We will address individual comments below.

> **W1: evaluation benchmarks**

**Response**: The benchmarks we used are standard in SFT studies (see Related Work) and assess a broad spectrum of capabilities, including question answering, reasoning, and multilingual comprehension, not just classification. While some benchmarks use multiple-choice formats, they test the model's generative abilities and knowledge without converting it into a classification model. To address your concern, we use two popular LLM-judge benchmarks, MT-Bench and Vicuna-Bench, to evaluate our model generations (higher numbers indicate better results). These results highlight the superiority of our methods.

| Method (llama-3b) | MT-Bench | Vicuna-Bench |
|-|-|-|
| DS2 | 5.62 | 4.46 |
| Full_token | 5.40 | 4.46 |
| Random | 5.32 | 4.80 |
| Rho | 6.33 | 5.65 |
| Fixed-model cleaning | 6.27 | 5.74 |
| Self-evolving cleaning | **6.63** | **5.76** |

> **W2: Additional baseline STAFF**

**Response**: We replicated the STAFF baseline with pruning rates of 40%/60%. Due to the absence of a comparable smaller model for Mistral-7B, we assessed STAFF using two LLaMA base models, using LLaMA-3.2-1B to compute speculative scores. While STAFF remains a strong baseline, Self-Evolving Cleaning shows competitive results. For instance, with LLaMA-3.1-8B, our method surpasses STAFF in average performance (59.20 vs. 57.0/56.4). Although STAFF performs better on BoolQ, Self-Evolving Cleaning consistently improves across various tasks. In our revised manuscript, we will cite and discuss this reference.
| Model | TruthfulQA | TydiQA | LogiQA | MMLU | HellaSwag | ARC-C | BoolQ | Average |
|-|-|-|-|-|-|-|-|-|
| STAFF (llama-3b, 20k) | 44.1 | 45.92 | 23.72 | 56.86 | 55.6 | 43.5 | 76.13 | 49.4 |
| STAFF (llama-3b, 30k) | 43.15 | 48.31 | 24.03 | 56.78 | 55.43 | 44.19 | 73.78 | 49.4 |
| Self-Evolving Cleaning (llama-3b) | 51.07 | 56.38 | 28.22 | 56.18 | 55.81 | 45.99 | 77.33 | **53.0** |
| STAFF (llama-8b, 20k) | 46.3 | 61.34 | 26.82 | 65.07 | 60.35 | 54.61 | 84.57 | 57.0 |
| STAFF (llama-8b, 30k) | 46.06 | 61.98 | 24.03 | 63.57 | 60.29 | 55.12 | 83.57 | 56.4 |
| Self-Evolving Cleaning (llama-8b) | 59.58 | 63.58 | 26.05 | 65.07 | 62.67 | 54.87 | 82.49 | **59.2** |

> **W3: the necessity of manually selecting uninformative tokens**

**Response**: We clarify the distinction between uninformative and noisy tokens as follows. In the SFT phase, every response token is labeled as 1, making it difficult to identify which tokens should be included for task-specific knowledge and which excluded for higher knowledge density. As a result, the traditional labels are considered noisy, consisting of informative tokens (correctly labeled as 1) and uninformative tokens (incorrectly labeled as 1 when they should be 0), with the latter corresponding to label errors. We acknowledge that while uninformative tokens have low loss, their impact on model convergence cannot be ignored. These tokens may lead the model to make trivial, non-task-specific predictions. Our findings, supported by the curriculum learning literature [1], show that while "easy" patterns are learned first, "difficult" patterns require more focus. Therefore, adjusting the weights of these tokens is crucial for optimizing downstream task performance. [1] A Survey on Curriculum Learning, TPAMI'21.

> **Q1: Computational cost**

**Response**: There are two types of additional inference costs in our method: one associated with the base model and the other with the reference model. Besides, there is also the training cost for the reference model on a small subset (10k).
Compared to the baseline Rho-1, our approach does not incur any extra inference costs. This is because the Self-Evolving Cleaning pipeline merely divides the data pool into several partitions for independent inference, i.e., (50k samples -> 10k, 10k, …, 10k samples).

> **Q2: Influence function**

**Response**: The standard influence form is defined as loss($\theta$) - loss($\theta'$), where $\theta$ is the current model and $\theta'$ is the model after removing a token. However, in our scenario, we do not have to counterfactually remove a token to calculate the influence, since the training procedure naturally adds tokens; this also has the form loss($\theta$) - loss($\theta'$), but $\theta'$ can be seen as the current model (for example) and $\theta$ can be seen as the model after adding one token. Specifically, Self-Evolving Cleaning iteratively uses the influence function to identify informative tokens for the next-round training. Therefore, we can obtain those two models naturally, with the original base model as $\theta$ and the current model as $\theta'$. Therefore, the only difference is that we are adding tokens iteratively [1] while the standard approach deletes tokens (re-training) [2]. [1] Estimating Training Data Influence by Tracing Gradient Descent, NeurIPS'20. [2] Understanding black-box predictions via influence functions, ICML'17. --- Rebuttal Comment 1.1: Comment: Thanks for clarifying the "multiple-choice formats" in the evaluation benchmarks, the comparison with additional baselines, and the computational cost. My comments are as follows: (1) The influence function in Equation (2) is not intended to evaluate the influence on the trained model caused by the token. Instead, it assesses the influence of the whole training data on the prediction of a single token $x$. This value is then used to evaluate the quality of the token $x$.
I still think this is reasonable, but I believe the original manuscript's statement is unclear and potentially misleading. I do hope further clarification can be provided. However, I won't give a negative rating because of this. (2) Such a definition of "influence" gave me the misleading impression that if the "influence" value of $x$ is not prominent, the token $x$ must be uninformative and potentially noisy. However, the definition of "influence" does not actually measure the influence caused by the token $x$. Thus, I don't agree that tokens with low so-called "influence", as calculated in Equation 2, are necessarily uninformative. For example, another possibility is that these tokens may contain specialized knowledge that the model attempts to learn but inherently finds difficult; essentially, they are hard samples or knowledge that, due to their low occurrence in the training data, are neglected during the training process. While this paper doesn't need to determine how many of these tokens are truly uninformative versus informative but difficult to learn, I hope the authors can provide some assumptions regarding the training data. For instance, the proposed method might only be effective if the proportion of informative but difficult-to-learn tokens is low. Additionally, I would appreciate a more careful discussion on the intuition behind treating such tokens as noisy. Again, this is not my main concern, and I won't give a negative rating because of it. (3) My main concern remains the necessity of manually selecting uninformative tokens. While I understand that this approach is somewhat supported by existing data-centric methodologies like curriculum learning, I am trying to grasp the fundamental reason why this method is effective, especially given that only 40% of the tokens are filtered out, leaving 60%.
For instance, I can understand focusing solely on the top 10% of tokens—although their losses are larger on a per-token basis, the overall loss is still significantly influenced by the remaining 90% of tokens, whose losses are smaller individually but collectively impactful when accumulated in the loss function. However, in this scenario, this paper is filtering out only 40% of the tokens, whose losses are naturally small. I would hypothesize that these filtered tokens originally contribute minimally to the training loss. Why, then, do they exert such a significant influence? In summary, what I am trying to convey is that while I do believe the methodology could be effective under some circumstances, I don’t think it should be framed as filtering out noisy tokens. Instead, I hope to explore deeper insights into why this method works. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your thoughtful review of our responses and your valuable suggestions! We will address individual comments below. > **Concern 1**: The influence function in Equation (2) is not intended to evaluate the influence on the trained model caused by the token. **Response**: Thanks for showing the confusing point of our paper. We would like to clarify that Equation (2) is intended to evaluate the influence on **"future training data"** caused by learning with “some tokens”. Please note this definition follows the most recent novel application of influence functions [1–2]. As illustrated in Figure 1 and Algorithm 1, this generic form can serve as the score function to examine the token scores of different methods. For example, in Fixed-Model-Cleaning, the "future training data" is dataset $\widetilde D$, and “some tokens” correspond to the warmup data, meaning that learning with “some tokens” can impact the model’s predictions on different tokens differently. This difference is measured on “future training data”. 
If tokens in “future training data” are negatively impacted by training and we assume the model performs better after training (as explained before Eqn. (3)), we prefer to mask out these tokens in the next-step training. The same principle applies to Self-Evolving Cleaning. We’ll revise the paper and make it clearer. [1] Estimating Training Data Influence by Tracing Gradient Descent, NeurIPS’20. [2] Fairness without harm: an influence-guided active sampling approach, NeurIPS’24. > **Concern 2: additional assumption on train data** **Response**: Thank you for your thoughtful suggestion. Instead of putting assumptions on the training data, we place an assumption on model quality, as in Line 188 (LHS). Specifically, we assume the model $\theta’$ performs consistently better than $\theta$ on the concerned task; then all tokens that improve the model performance should have a lower loss after training, i.e., a positive token score. Under this assumption, difficult tokens will also have a lower loss after training. We acknowledge that this assumption is strong, since a consistently better model is hard to guarantee. But as long as the model is getting better in the majority of cases, masking out tokens with our score functions remains beneficial. See more theoretical analyses in Corollary 5.2 and Sections 5.2, 5.3. In the revised version, we will more clearly highlight and discuss this assumption. > **Concern 3: the necessity of manually selecting uninformative tokens** **Response**: Thank you for your further discussion about this question. Please note that our score uses the difference between token losses ($\text{loss}_1 - \text{loss}_2$) to distinguish between informative and uninformative tokens. Here, $\text{loss}_1$ represents the token loss on the base model, and $\text{loss}_2$ is the token loss on the reference model. We would like to clarify this in the following ways: 1. **score $\\to 0$ does not imply loss $\\to 0$**.
For example, we will have the same token score of 0.1 when $\text{loss}_1 - \text{loss}_2 = 0.2 - 0.1$ and $\text{loss}_1 - \text{loss}_2 = 4.2 - 4.1$, respectively. So our selection mechanism does not mask out all the difficult tokens. 2. **The sign of our scores matters.** For example, a difficult token may have fluctuating loss $\text{loss}_1 = 2$ and $\text{loss}_2 = 4$, or $\text{loss}_1 = 4$ and $\text{loss}_2 = 2$. The former case can be easily captured by our method when the threshold is 40% of low-score tokens, since the score is negative. Therefore, our method is not simply masking out the tokens whose absolute scores $\\to 0$. We tend to mask out tokens that have a negative impact on model training. In other words, uninformative tokens are task-irrelevant (as mentioned in Section 4.1). They may cause a negative impact (and should be masked out) or minimal impact (and are preferably masked out, following the logic of curriculum learning). 3. **Masking out 90% vs. 40%**. As illustrated in Figure 2, 40% works well empirically. It is also strategically reasonable. Let's consider the following motivating example. If we have a data sample with 10 tokens, where half are positively impactful and half are negatively impactful, our method would filter out the 4 most negatively impactful tokens. While 40% may not represent a 'perfect' threshold for identifying detrimental tokens, it is a more fault-tolerant number, since even if the scores contain some mistakes, keeping some tokens whose scores are close to zero can be a better solution than aggressively removing 90% of tokens.
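The scoring-and-masking rule above can be sketched in code (a minimal illustration; the function names, the 40% default, and the toy per-token losses are hypothetical, not from our implementation):

```python
import numpy as np

def token_scores(base_losses, ref_losses):
    """Score = token loss on the base model minus loss on the reference model.
    Positive: the better model lowered the loss (likely informative token).
    Negative: training raised the loss (potentially harmful token)."""
    return np.asarray(base_losses) - np.asarray(ref_losses)

def cleaning_mask(scores, mask_ratio=0.4):
    """Mask out (set False) the lowest-scoring `mask_ratio` fraction of tokens."""
    k = int(len(scores) * mask_ratio)
    mask = np.ones(len(scores), dtype=bool)
    mask[np.argsort(scores)[:k]] = False
    return mask

# Toy example: 5 tokens, losses before/after training (illustrative numbers).
scores = token_scores([0.5, 3.0, 2.0, 1.0, 4.0], [0.4, 3.2, 4.0, 0.5, 2.0])
# scores = [0.1, -0.2, -2.0, 0.5, 2.0]; with mask_ratio=0.4 the two most
# negative tokens (indices 2 and 1) are masked out of the next-step loss.
mask = cleaning_mask(scores)
```

Note that the mask only zeroes the loss contribution of the selected tokens; they still appear in the context for next-token prediction, matching the description above.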
Summary: Recent large language models are trained in multiple stages with vast amounts of data to achieve stellar performance. However, such a training regime also brings several challenges, including repeatedly observing similar high-frequency phrases. This work aims to improve the supervised fine-tuning of large language models by performing token cleaning, masking out uninformative tokens while preserving the important ones. Through theoretical and experimental analyses, the work further aims to highlight the strengths of the proposed token cleaning pipeline. Claims And Evidence: The paper is built on the intuition that during SFT, the model observes a reiteration of the previously learned concepts in addition to the new information that the SFT aims to distill. The authors formulate this challenge analogously to a noisy-label scenario and take inspiration from that literature. Specifically, they utilize influence functions and a hard thresholding mechanism on top for _cleaning the tokens_ that are not very informative. The authors then claim that this approach should _improve the optimization process by preventing misleading gradients_. With respect to this claim, the authors present theoretical analyses in addition to experimental results on several well-established large language models. While the theoretical analyses are not fully precise, e.g., in Sections 5.1 & 5.2 they make a few trivial observations such as the challenges caused by the noise in tokens, I believe that the overall discussion and the examples used (especially the discussion in Section 5.3) are valuable for highlighting the motivations behind the work and support the claims. In addition, as shown in Table 1, the proposed approaches mostly improve the performance of different well-established models on several benchmarks. Thus, the overall claims of the paper are well-supported.
Methods And Evaluation Criteria: - Benchmarks are well-known datasets in the field and are sufficient to support the claims and highlight the contributions of the work. - Utilized baseline models (namely two exemplars from the LLaMA-3 Herd and Mistral) are also reasonable and sufficient Theoretical Claims: I checked the theoretical claims and could not find any obvious issues. However, I would like to note that the theoretical claims do not explicitly support the proposed approach: They mainly provide a more intuitive explanation as to why the proposed approaches could ultimately work under several strong assumptions. I do not believe that this casts a shadow with respect to the support of the claims, but rather that the theoretical part is a more minor factor in doing so. Experimental Designs Or Analyses: The general experimental design follows the established standards and the analyses are sound based on the quantitative evidence. Supplementary Material: The supplementary material contains a brief discussion on limitations, proof for the theorem presented under Section 5, training details and additional experimental results. The supplementary material serves its purpose in providing more details to the presented content of the main paper. Relation To Broader Scientific Literature: Supervised fine-tuning is a widely adopted approach in large language model training. The provided issue in the work, namely the reiteration of the same knowledge, is also a very real problem, especially considering the scale of the pretraining data these models go through. Based on this, the work proposes an interesting and well-founded approach to addressing this issue and providing a more efficient training. I believe that the work could be interesting to the broader community. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Overall, the presented figures, tables and the algorithm are neatly explained. 
The narrative of the work is also fluent and was not very challenging to read through. I believe that this work overall is a strong and meticulous submission with interesting and well-supported ideas presented in it. Other Comments Or Suggestions: N/A Questions For Authors: - How do you think token cleaning affects the supervised fine-tuning efficiency, both with respect to the data requirements and number of training iterations? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We want to thank reviewer W8kk for the positive feedback! We will address individual comments below.
> While the theoretical analyses are not fully precise, e.g., in Sections 5.1 & 5.2 they make a few trivial observations such as the challenges caused by the noise in tokens...
**Response**: We agree that LLM learning dynamics and convergence are much more complicated than our analytical framework, but our analyses in Section 5 form a complete theoretical framework based on error upper bounds that provides valuable support for our claims. Our main claim is that the two proposed token cleaning methods can effectively improve LLM SFT performance. Our theoretical analyses support this claim in two important ways:
**Explaining why the proposed methods work**:
- Theorem 5.1 and Corollary 5.2 establish an analytical framework that precisely describes when token cleaning outperforms using all tokens through the trade-off between data quality (η) and data quantity (M);
- Section 5.2 analyzes the characteristics of Fixed-Model Cleaning, explaining why this method provides stable but bounded improvements;
- Section 5.3 analyzes the dynamic behavior of Self-Evolving Cleaning through the Matthew Effect, explaining why this method can produce more significant improvements in certain scenarios.
**Providing practical guidance for method selection and application**:
- Clearly identifying when Self-Evolving methods might lead to performance degradation (G2 group data);
- Providing a theoretical basis for selecting appropriate cleaning methods in different scenarios;
- These alerts and guidance are validated in our experiments, as seen in Table 2, where performance changes across different tasks align well with theoretical predictions.
We agree that while some analyses provide foundational observations, our theoretical framework offers deep insights into method behaviors, supporting our claims and guiding practical applications.
In the revised paper, we will clarify the links between theory and experiments to explicitly highlight the importance of the theoretical aspects.
> However, I would like to note that the theoretical claims do not explicitly support the proposed approach...
**Response**: We agree that our theoretical analyses primarily serve as intuitive explanations rather than explicit proofs for our proposed approaches, and they are indeed based on certain simplifying assumptions. We acknowledge that the experimental results constitute the primary evidence supporting our claims about the effectiveness of our token cleaning methods. The theoretical analyses were intended to complement these empirical findings by:
- Providing a conceptual framework to understand when and why token cleaning can be beneficial;
- Offering explanations for the observed performance patterns across different methods;
- Helping practitioners make informed decisions about method selection in various scenarios.
We appreciate the reviewer's perspective that this does not diminish the overall support for our claims. In the revised manuscript, we will clarify the role of the theoretical analyses as providing intuitive explanations and insights, rather than positioning them as rigorous proofs of our methods' effectiveness.
> How do you think token cleaning affects the supervised fine-tuning efficiency, both with respect to the data requirements and number of training iterations?
**Response**: Token cleaning affects supervised fine-tuning efficiency in several important ways:
- Regarding data efficiency, our approach selects only the most informative 60% of tokens rather than utilizing all tokens. This effectively reduces the data requirement by focusing the model's learning on tokens that contribute most significantly to performance.
For our self-evolving cleaning method, we divide the total data samples into several chunks, each representing one iteration phase, which allows for progressive refinement of token selection. - Despite this data partitioning for self-evolving cleaning, the number of training iterations remains consistent with standard SFT. Therefore, token cleaning does not impact the efficiency of SFT in terms of training iterations. However, it's important to note that compared to standard SFT, token cleaning does not achieve computational speedup because the next-token prediction $x_{ij}$ still relies on the context formed by previous tokens $x_{i,:j}$. The current implementation simply masks out uninformative tokens to ignore their token loss, which is simple and compatible with all training paradigms. Existing token-level approaches, including RHO and our methods, mainly focus on performance efficiency rather than data (token) training efficiency. In practice, the GPU memory usage of these methods remains the same as when using full tokens. Investigating and enhancing token training efficiency represents a promising direction for future research, such as exploring ways to skip uninformative token memory occupation altogether. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed clarifications provided by the authors both to my review and to other reviewers. While I also agree with Reviewer dD3A on exploring why this works in more detail could be valuable, I do not think it undermines the existing contributions of the work. Thus, I would like to maintain my positive rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer W8kk, Thank you so much for taking the great effort to review our paper as well as other reviewers' comments. We sincerely appreciate your positive impression and support for our work. Your encouraging comments mean a lot to us. Wishing you all the best in your professional and personal endeavors! Authors
Understanding and Mitigating Miscalibration in Prompt Tuning for Vision-Language Models
Accept (poster)
Summary: This paper investigates miscalibration issues in fine-tuned vision-language models (VLMs) like CLIP, revealing a trade-off between base and new classes where standard prompt tuning (e.g., CoOp) leads to overconfidence in new classes, while regularization-based methods (e.g., KgCoOp) cause underconfidence in base classes. To address this, the authors propose Dynamic Outlier Regularization (DOR), which samples textual outliers from WordNet and minimizes their feature deviation before and after fine-tuning, preventing excessive divergence in new class representations while maintaining base class accuracy. Extensive experiments across multiple datasets demonstrate that DOR significantly improves calibration without compromising model performance, and the technique can also be extended to visual fine-tuning methods. Claims And Evidence: The claims made in the paper are **generally well-supported** by evidence: **1. Identification of Calibration Trade-Off in Fine-Tuned CLIP** Claim: Fine-tuning CLIP with prompt tuning (e.g., CoOp) leads to overconfidence in new classes, whereas regularization-based tuning (e.g., KgCoOp) results in underconfidence in base classes. Evidence: Empirical Analysis (Section 3.1 & 3.2): The authors analyze textual feature divergence (FD score) and show that CoOp increases FD, causing overconfidence in new classes, while KgCoOp reduces FD, leading to underconfidence in base classes. **2. Effectiveness of Dynamic Outlier Regularization (DOR)** Claim: DOR mitigates the calibration trade-off by sampling textual outliers from WordNet and minimizing their feature deviation, improving both base and new class calibration. Evidence: Table 1 & Table 2: Show that DOR consistently reduces ECE across 11 datasets when applied to various tuning methods (CoOp, CoCoOp, MaPLe, etc.). **3.
Generalization of DOR to Visual Fine-Tuning (DOR-V)** Claim: DOR can be extended to visual fine-tuning by applying a similar regularization to image representations (DOR-V). Evidence: Table 5: Shows that DOR-V improves calibration in visual fine-tuning methods (VPT and CLIP-adapter) across multiple datasets. Methods And Evaluation Criteria: Basically, the authors sample new outliers to increase the diversity of text prompts (without making them exactly the same as the original prompts). This introduces an appropriate constraint during model optimization, enabling the fine-tuned model to **reduce overconfidence on new classes** when encountering them, thereby enhancing its generalization ability. Overall, the approach **makes sense**. Theoretical Claims: I have checked **Appendix C. Theoretical Justification**, and it looks correct. Experimental Designs Or Analyses: The authors conducted evaluations using the **two standard evaluation protocols**, **Base-to-New Generalization** and **Domain Generalization**, while comparing against **comprehensive and recent baselines**, such as **MaPLe and DEPT**. The **calibration metrics** used are also comprehensive, overall, the **experimental design and analysis are effective**. Supplementary Material: The supporting materials are comprehensive. **Table 8** provides specific implementation details of the outliers. Relation To Broader Scientific Literature: Miscalibration is an overlooked problem in the field of **prompt tuning**, and this work proposes an effective method to address it. Essential References Not Discussed: Not Discovered. Other Strengths And Weaknesses: Clear motivation, reasonable methodology, effective experiments, a well-structured paper. Other Comments Or Suggestions: None Questions For Authors: In your experiments, you mentioned that **randomly selecting outliers** also provides a **strong baseline**. Does this suggest a **valuable insight** for real-world **OOD applications**? 
How do you evaluate the improvement in **generalization performance** between **Near-Outliers and Random-Outliers** in practical applications? Code Of Conduct: Affirmed. Overall Recommendation: 3
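For concreteness, my reading of the DOR regularizer in sketch form (the function name, the per-step sampling size, and the cosine-distance penalty are my own assumptions based on the paper's description, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def dor_regularizer(frozen_feats, tuned_feats, n_sample=8):
    """Each training step, dynamically sample a subset of textual outliers
    (e.g. WordNet nouns) and penalize how far their tuned text features drift
    from the frozen pre-fine-tuning CLIP features, via cosine distance."""
    idx = rng.choice(len(frozen_feats), size=n_sample, replace=False)
    f = frozen_feats[idx] / np.linalg.norm(frozen_feats[idx], axis=1, keepdims=True)
    t = tuned_feats[idx] / np.linalg.norm(tuned_feats[idx], axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(f * t, axis=1)))

# Illustrative random "text features" for 100 outlier words.
feats = rng.normal(size=(100, 16))
penalty = dor_regularizer(feats, feats)  # no drift -> penalty is (near) zero
```

Since the penalty only constrains sampled outlier words rather than the base classes, it should leave base-class features free to adapt, which matches the claimed base/new trade-off behavior.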
Rebuttal 1: Rebuttal: Thank you for your insightful and positive suggestions. Here’s our response below:
### 1. The choice of Near-Outliers or Random-Outliers
We agree that random outliers offer a practical and efficient approach for real-world out-of-distribution (OOD) applications. To investigate their feasibility, we conduct a detailed analysis of when random outliers are sufficiently effective. We hypothesize that the performance gap between near OOD and random OOD may depend on the task’s scope. Accordingly, we compare them across datasets of varying breadth: ImageNet, spanning diverse object classes, versus the fine-grained DTD (textures) and Flowers (flowers). In the table below, our results reveal a smaller gap between random and near OOD on ImageNet compared to the two fine-grained datasets, suggesting that random outliers may be particularly viable for tasks with broad and diverse categories in our method.

| Method | DTD | Flowers | ImageNet |
|---------|-------|---------|----------|
| Near | 8.58 | 6.12 | 1.64 |
| Random | 9.54 | 6.61 | 1.66 |
| Δ(Gap) | 0.96 | 0.49 | 0.02 |

--- Rebuttal Comment 1.1: Comment: Thanks for the response. I do not have any further questions and support the acceptance of this paper. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Miv5, Thank you for supporting the acceptance of this paper. We're glad that our responses addressed all the concerns. We truly appreciate your valuable time spent reviewing. Best regards, Authors of submission 5881
Summary: This paper identifies a calibration trade-off in existing prompt tuning methods: standard tuning (e.g., CoOp) leads to overconfidence on new classes due to increased textual label divergence, while regularization-based tuning (e.g., KgCoOp) results in underconfidence on base classes despite improved accuracy. To address this, the authors propose Dynamic Outlier Regularization (DOR), a method that regulates textual divergence of novel classes using dynamically sampled textual outliers from a large vocabulary (e.g., WordNet), without restricting base class features. Claims And Evidence: The primary claim—that existing prompt tuning methods compromise calibration on either base or new classes—is backed by empirical analysis on datasets like StanfordCars and UCF101, showing CoOp’s overconfidence (ECE 14.58% on new classes) and KgCoOp’s underconfidence (ECE 5.82% on base classes). The explanation via textual feature divergence is substantiated with Feature Divergence (FD) scores and logit gap visualizations (e.g., Figures 2 and 3). The claim that DOR mitigates this trade-off is supported by comprehensive results. Methods And Evaluation Criteria: The proposed DOR method—minimizing textual feature discrepancy of dynamically sampled outliers—is sensible for addressing calibration in VLMs, as it targets the identified cause (textual divergence) without altering core fine-tuning objectives. Theoretical Claims: The paper includes a theoretical justification in Appendix C, linking textual divergence to confidence via logit variance in a binary classification setting (Proposition C.1). I checked the proof, which assumes logits follow a normal distribution and shows that higher variance (σ) increases expected maximum softmax probability. Experimental Designs Or Analyses: I reviewed the experimental designs and analyses, focusing on Sections 5.1-5.2 and Appendices F-J. The results are convincing. 
Supplementary Material: I reviewed the appendices referenced in the main text (B, C, D, F, G, H, I, J). Relation To Broader Scientific Literature: The paper builds on prior work in VLM fine-tuning and calibration. It extends CoOp, KgCoOp and more by addressing their calibration limitations, aligning with findings in existing works on novel class miscalibration. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength - The paper is original in combining dynamic outlier regularization with prompt tuning to break the calibration trade-off, a novel synthesis of ideas. - Its significance lies in improving VLM reliability for safety-critical applications (e.g., medical diagnosis), and the clarity of writing, figures (e.g., Figure 1), and tables enhance accessibility. T - he extensive evaluation across 11 datasets and 4 ImageNet variants is a major strength. Weakness - The theoretical analysis is limited to binary classification, reducing its generalizability to multi-class settings. - Method shows minor improvements on some of the dataset - Results do not include standard deviation over multiple runs, making it difficult to understand the significance of the improvement Other Comments Or Suggestions: N/A Questions For Authors: - The in Appendix C assumes binary classification. How do the authors expect textual divergence to influence confidence in multi-class settings with more complex logit distributions? - Table 10 shows Euclidean distance slightly outperforms cosine similarity. Why do the authors prefer cosine as the default, beyond convention? An explanation (e.g., stability, alignment with CLIP’s training) could strengthen the method’s justification. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your helpful and positive feedback. Our response is below: ### 1. Theoretical analysis on multi-class settings Thank you for the insightful suggestion. We presented the theoretical results in binary classification to help readers quickly understand the relationship between feature divergence and output confidence. Here, we draft a theoretical analysis in the multi-class setting. In particular, a set of logits $\\{z_j\\}_ {j=1}^K$ sampled from a Gaussian distribution $\mathcal{N}(\mu, \sigma^2)$ in $K$-class classification and maximum softmax probability is denoted by $p_{\max} = \max_j \frac{e^{z_j}}{\sum_k e^{z_k}}$. Now, we prove that the expected maximum softmax probability $\mathbb{E}[p_{\max}]$ increases strictly with $\sigma$. Similar to our former analysis, we assume $\mu = 0$ without loss of generality. We then standardized the logits as $z_j = \sigma u_j$, with $u_j \sim \mathcal{N}(0, 1)$ i.i.d., so $p_{\max} = \max_{1 \leq j \leq K} \frac{e^{\sigma u_j}}{\sum_{k=1}^K e^{\sigma u_k}}$. To analyze the monotonicity of $\mathbb{E}[p_{\max}]$ with respect to $\sigma$, we compute the derivative $\frac{d}{d\sigma} \mathbb{E}[p_{\max}]$. By using the Dominated Convergence Theorem, we interchange differentiation and expectation. The derivative simplifies to $\frac{d}{d\sigma} \mathbb{E}[p_{\max}] = \mathbb{E} \left[ p_m \left( u_m - \sum_{k=1}^K p_k u_k \right) \right]$, where $m = \arg\max_j p_j$. We ensure this step is valid by bounding the derivative $\left| \frac{\partial p_m}{\partial \sigma} \right| \leq 2 \max_j |u_j|$, which has a finite expectation for fixed $K$. Now, since $u_m = \max_j u_j$ and $e^{\sigma u_j}$ is increasing in $u_j$, we have $u_m - \sum_{k=1}^K p_k u_k = \sum_{k \neq m} p_k (u_m - u_k)$. Since $u_m > u_k$ for $k \neq m$ almost surely, and $\sum_{k \neq m} p_k = 1 - p_m > 0$, this expression is strictly positive. 
Given $p_m > 0$, the expectation $\mathbb{E} \left[ p_m \left( u_m - \sum_{k=1}^K p_k u_k \right) \right]$ is positive, implying $\frac{d}{d\sigma} \mathbb{E}[p_{\max}] > 0$. Hence, for $\sigma_2 > \sigma_1$, we conclude $\mathbb{E}[p_{\sigma_2}] > \mathbb{E}[p_{\sigma_1}]$ in the multi-class setting. We will incorporate this detailed analysis into the final manuscript. ### 2. Minor improvements on some datasets Thank you for raising the concern. We also identify that the severity of the calibration problem differs on various datasets: current prompt-tuning methods are miscalibrated on some datasets (e.g., DTD, EuroSAT, and FGVCAircraft) and well-calibrated on some other datasets (e.g., Caltech101 and ImageNet). We conjecture that the calibration performance might be relevant to the distribution distances between the pretraining dataset of CLIP and downstream datasets (Larger distances may cause worse calibration). Notably, our method can consistently improve the calibration performance on various datasets, especially significant on those challenging cases. ### 3. Add standard deviation to results Thanks for your valuable suggestion. We present the main results with standard deviation in Table 1 of [[link](https://anonymous.4open.science/r/icml_rebuttal-5881/rebuttal.pdf)]. The results confirm the significance of the improvement from our method. ### 4. The choice of cosine similarity We use the cosine similarity since it is widely adopted in the literature of CLIP. In addition, the features of CLIP are typically normalized so that using either Euclidean distance or cosine distance achieves similar impacts. In particular, the squared Euclidean distance is proportional to the cosine distance: $||\mathbf{x} - \mathbf{y}||_2^2 = 2 - 2 \cos \langle \mathbf{x}, \mathbf{y} \rangle$. Therefore, we default to using the cosine similarity due to its popularity in vision-language models. 
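Both claims above are easy to verify numerically. A small sketch (variable names are illustrative) that Monte Carlo-estimates $\mathbb{E}[p_{\max}]$ for two values of $\sigma$ and checks the Euclidean/cosine identity for unit-norm features:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_max_softmax(sigma, K=10, n_trials=20000):
    """Monte Carlo estimate of E[max_j softmax(z)_j] with z_j ~ N(0, sigma^2) i.i.d."""
    z = rng.normal(0.0, sigma, size=(n_trials, K))
    z -= z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1).mean()

# Larger logit variance -> larger expected maximum softmax probability.
low, high = mean_max_softmax(0.5), mean_max_softmax(2.0)

# Sanity check of ||x - y||_2^2 = 2 - 2 cos<x, y> for unit-norm features.
x = rng.normal(size=8); x /= np.linalg.norm(x)
y = rng.normal(size=8); y /= np.linalg.norm(y)
lhs, rhs = np.sum((x - y) ** 2), 2.0 - 2.0 * float(x @ y)
```

With 20,000 trials the estimated mean confidence at $\sigma = 2$ clearly exceeds that at $\sigma = 0.5$, consistent with the monotonicity result.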
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for taking the time to respond to the questions. After reviewing the rebuttal and considering comments from other reviewers, I maintain my decision in favour of acceptance. --- Reply to Comment 1.1.1: Comment: Dear Reviewer MfZm, Thank you for supporting the acceptance of this work. We truly appreciate your valuable reviews and suggestions. Best regards, Authors of submission 5881
Summary: 1. This paper analyzes the trade-off between base and novel classes from the perspective of textual distribution divergence. 2. This paper proposes a simple DOR regularization method, which is compatible with existing prompt learning methods. 3. Experimental results show promising performance compared with related methods. ## update after rebuttal Thanks for the authors' responses, and I will maintain my original score. Claims And Evidence: The claims about calibration are basically supported by experimental results and theoretical analysis. Methods And Evaluation Criteria: The used benchmark datasets are reasonable in the prompt learning field. Theoretical Claims: This paper provides a theoretical analysis of the feature divergence and miscalibration, and gives the theoretical justification in the supplementary material. Experimental Designs Or Analyses: The experimental design is basically reasonable. Supplementary Material: I have reviewed all supplementary materials. Relation To Broader Scientific Literature: This paper gives insight into the miscalibration in existing prompt learning methods, which has not been discussed or dug into before. Meanwhile, the calibration idea can be further explored in this field, so I think it contributes to the broader scientific literature to some degree. Essential References Not Discussed: More recent prompt learning methods should be analyzed and discussed, such as "Gallop", "PromptKD", and more. Other Strengths And Weaknesses: Strengths: 1. The idea of calibration is reasonable, which has not been explored enough in the field of prompt learning before. 2. This paper gives the theoretical analysis from feature divergence to miscalibration along with the trade-off between base and novel classes. 3. The designed method is simple, and the results are promising. Weaknesses: Please refer to "Questions For Authors". Other Comments Or Suggestions: None. Questions For Authors: 1.
In my opinion, this paper uses extra textual information to mimic the new classes, through similarity connections and existing word datasets. So it may not be entirely fair to compare directly with other methods, considering the potential "information leakage". This paper should consider this part carefully. 2. More recent prompt learning methods, such as Gallop and PromptKD, should be equipped with DOR to show the performance. In fact, PromptKD also involves extra information, so the performance of PromptKD with DOR would provide useful insights and discussion. 3. The performance change should be discussed. As shown in Table 3, DOR decreases the base classes' performance for some methods; what are the reasons? 4. Some visualization results about outliers or methods with/without DOR would be helpful for understanding the effect of the proposed method. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### 1. Clarification on "information leakage" Thanks for your insightful review. We believe this concern is closely related to the response #5 for Reviewer YQTf, which is about the overlap between outliers and new classes. We'd like to clarify that our method does not leak the information of new classes, as the outliers are selected with base classes. Moreover, the ablation study in Table 6 demonstrates that using even far-OOD and random-OOD can significantly improve the overall calibration performance. In the response #5 for Reviewer YQTf, we also provide an additional ablation study by removing the new classes from the outlier pool (e.g., WordNet). The results demonstrate that the effectiveness of our method does not rely on the overlap with new classes. Instead of causing "information leakage", our work opens up the possibility of utilizing cheap public texts for improving calibration performance. In summary, DOR leverages general semantic information from the language space rather than memorizing target classes, which ensures its fairness and generalizability. We will add this discussion to the final version. ### 2. Results on PromptKD Thank you for the great suggestion. We provide new results by incorporating DOR into a recent method - PromptKD. In particular, we apply DOR in the first stage to train a large teacher model (ViT-L-14), and keep the second stage unchanged. The results below show that our method can achieve meaningful improvements over PromptKD despite its strong performance. | Method | Base | New | HM | |----------|------------|------------|-------------| | PromptKD | 4.73±0.36 | 4.38±0.53 | 4.56±0.45 | | +DOR | 4.81±0.23 | **3.66±0.43** | **4.24±0.33** | ### 3. Performance decrease on base classes Thank you for the careful review. We'd like to clarify that our method achieves comparable accuracy to baselines on base classes. To illustrate this, we provide the accuracy with standard deviation in the table below. 
In particular, the performance gaps are negligible, with most of them less than 0.2%. We conjecture that adding regularization may slightly affect the training dynamics during prompt tuning, leading to trivial changes in accuracy (also increasing slightly on new classes). Overall, our method can maintain the generalization performance of baselines on Base classes. | | CoOp | CoCoOp | KgCoOp | MaPLe | DEPT | TCP | CoPrompt | PromptSRC | PromptKD | |---------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------| | Vanilla | 82.97±0.56| 80.57±0.47| 82.29±0.25| 82.11±0.54| 83.70±0.32| 83.95±0.40| 82.32±0.51| 84.77±0.29| 85.74±0.35| | +DOR | 83.20±0.47| 79.89±0.57| 82.13±0.35| 82.08±0.50| 83.81±0.47| 83.89±0.29| 82.39±0.57| 84.79±0.34| 85.52±0.36| ### 4. Visualization on DOR Thank you for the great suggestion. We provide new visualizations in Figures 1-2 [[link](https://anonymous.4open.science/r/icml_rebuttal-5881/rebuttal.pdf)]. The results show that KgCoOp significantly decreases the FD scores on both Base and New classes, whereas our method primarily impacts New classes. Consequently, our approach provides confidence scores that align with the improved accuracy on Base classes. We believe the visualization suggested by the reviewer can enhance the clarity of this paper.
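For readers following this thread, the outlier selection and regularization under discussion can be sketched roughly as below. This is a hypothetical NumPy reconstruction from the rebuttal's description (selecting outlier texts near the base classes, then penalizing drift of tuned text features on them); `select_outliers` and `dor_regularizer` are illustrative names, not the paper's code.

```python
import numpy as np

def select_outliers(base_feats, pool_feats, k):
    """Pick the k pool texts whose (normalized) features are closest to any base class."""
    sim = pool_feats @ base_feats.T          # (pool, base) cosine similarities
    return np.argsort(-sim.max(axis=1))[:k]  # indices of the k nearest pool texts

def dor_regularizer(tuned_feats, frozen_feats, idx):
    """Mean squared drift of tuned text features from the frozen zero-shot ones,
    measured only on the selected outlier texts."""
    diff = tuned_feats[idx] - frozen_feats[idx]
    return float((diff ** 2).sum(axis=1).mean())
```

In training, a scalar like this would be added to the task loss so that prompt tuning cannot push text features arbitrarily far from the zero-shot space on texts outside the base classes.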
Summary: The authors propose a method called Dynamic Outlier Regularization (DOR) to improve confidence calibration for both base and new classes after fine-tuning. Extensive experiments are conducted on multiple benchmark datasets to evaluate its effectiveness. Claims And Evidence: One of the major claims is that DOR outperforms KgCoOp due to its ability to prevent the increase of textual divergence on new classes. To strengthen this claim, I suggest visualizing DOR in the same manner as Figure 2 and Figure 6 in the supplementary material. This would better illustrate how DOR improves on textual divergence and solidify the contribution. Methods And Evaluation Criteria: I am concerned about the fair comparison in the experimental demonstration: 1. I noticed inconsistencies between the baseline results reported in the manuscript and those in [1]. For example, on new classes, in [1] CoOp reports ECE - 13.84, ACE - 13.76, MCE - 3.80, PIECE - 14.71; while in this paper they are ECE - 14.58, ACE - 14.50, MCE - 3.73, PIECE - 15.27. Can the authors clarify why these numbers differ? If not justified, this could affect the validity of the comparison. 2. The authors claim that DOR improves over KgCoOp by preventing textual divergence from increasing on new classes. To this end, to ensure a fair comparison, I believe the manuscript should compare various baselines+DOR vs. various baselines+KgCoOp, as done in Table 2 & 3 of [1]. [1] Wang et al., Open-Vocabulary Calibration for Fine-tuned CLIP, ICML 2024 Theoretical Claims: I think the theoretical claims are sound to me. Experimental Designs Or Analyses: Please see Methods And Evaluation Criteria. Supplementary Material: Yes, I reviewed the supplementary materials, and everything looks sound to me. Relation To Broader Scientific Literature: The authors provide good coverage of the related work. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I find the overall presentation flow to be clear and easy to follow. 
The method is also well-described. The core idea is interesting, but it lacks solid support in certain areas, as I mentioned in Claims and Evidence & Methods and Evaluation Criteria. Strengthening these aspects would definitely enhance the technical contribution. Other Comments Or Suggestions: N/A Questions For Authors: One of the core components of the proposed method is the choice of textual outliers in Equation (7). I believe more discussion and experimentation on this part is necessary. Ablation study on outlier set size: (1) Do the authors have an ablation study on how the size of the outlier set affects performance? Is the method sensitive to the outlier set size? (2) How similar is the sampled outlier set to the actual new classes? Because I feel like one possible reason this method is effective might be that the sampled outliers overlap with or are similar to new classes. I am open to raising my score if the authors can address these concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
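For readers comparing the calibration numbers in this review: ECE, the main metric in the discussion, bins predictions by confidence and averages the per-bin gap between accuracy and mean confidence. A minimal sketch with standard equal-width binning (not necessarily the exact binning used by the authors or by [1]):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE with equal-width confidence bins: sum_b (|B_b|/N) * |acc_b - conf_b|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```

Small differences in binning (number of bins, equal-width vs. equal-mass) can shift reported ECE values, which is one mundane source of the baseline discrepancies raised above.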
Rebuttal 1: Rebuttal: Thank you for the constructive feedback. Please find our response below: ### 1. Visualization of DOR Thanks for the great suggestion. We'd like to clarify that our method outperforms KgCoOp as our method does not fix the confidence level on the base classes while preventing the increase of textual divergence on new classes. In contrast, KgCoOp anchors the confidence level, leading to underconfidence on Base classes. To illustrate this, we provide new visualizations in Figures 1-2 [[link](https://anonymous.4open.science/r/icml_rebuttal-5881/rebuttal.pdf)]. The results show that KgCoOp significantly decreases the FD scores on both Base and New classes, whereas our method primarily impacts New classes. Consequently, our approach provides confidence scores that align with the improved accuracy on Base classes. We believe the visualization suggested by the reviewer can enhance the clarity of this paper. ### 2. Clarification on result inconsistency Thank you for your thorough review. We have carefully revisited the CoOp performance reported in our manuscript and DAC. We found that the slight differences occur specifically on two datasets—FGVCAircraft and EuroSAT—while the results for the remaining nine datasets align closely with those in DAC. We attribute these minor variations to differences in experimental settings, such as the random seed, hardware specifications, or software environment, which can subtly affect model performance in certain cases. Notably, these minor variations do not impact the contribution of DOR, which largely improves the calibration performance. For reproduction, we will release the full code of baselines on GitHub after publication. ### 3. Comparison between DOR and KgCoOp Thank you for the great suggestion. We conduct new experiments by comparing DOR and KgCoOp on 3 baselines (CoOp, MaPLe, and CoPrompt), and present the results in the table below. 
The results show that integrating with our method can **consistently outperform** those with KgCoOp on overall performance. In particular, our method achieves much better performance than KgCoOp on Base classes, while KgCoOp performs well on New classes. This is consistent with the analysis presented in Subsection 3.2: KgCoOp anchors the confidence level, leading to underconfidence on Base classes. In short, the results demonstrate the superiority of DOR in breaking the calibration trade-off between base and novel classes. | | Method | Base | New | HM | |----------|---------|------|------|-------| | **CoOp** | Vanilla | 3.07 | 14.49 | 8.78 | | | +KG | 5.82 | 4.48 | 5.15 | | | +DOR | 2.47 | 6.48 | **4.47** | | **MaPLe** | Vanilla | 2.75 | 5.46 | 4.11 | | | +KG | 4.01 | 4.29 | 4.15 | | | +DOR | 3.06 | 4.26 | **3.66** | | **CoPrompt** | Vanilla | 2.60 | 5.96 | 4.28 | | | +KG | 4.01 | 4.99 | 4.50 | | | +DOR | 2.98 | 5.14 | **4.06** | ### 4. Ablation on the size of the outlier set Yes, we present the ablation study in `Appendix H2`. The results show that our method is not sensitive to the outlier set size, especially when the size is larger than 500. In particular, the impact of set size is negligible on Base classes, while a larger set size promotes better performance on New classes. In an extreme case with $K = 10$, our method can still significantly improve the performance on new classes. Due to the low cost of outlier texts, we recommend $K = 5000$ as the default to achieve optimal calibration performance. ### 5. Overlap between outlier texts and new classes Thank you for raising the concern. In Table 8, we present all selected outlier texts for 6 datasets, showing almost no class overlap, especially on StanfordCars and FGVCAircraft. Moreover, the ablation study in Table 6 demonstrates that using even far-OOD and random-OOD can significantly improve the overall calibration performance. 
To further analyze the impact of overlap, we conduct a new ablation study by excluding all new classes from the outlier pool (denoted as DOR$^\dagger$). The average results on 11 datasets below show that DOR$^\dagger$ without overlap achieves comparable performance to DOR, significantly improving the performance on all three baselines. **Therefore, the effectiveness of our method does not rely on the overlap with new classes**. | Method | Variant | Base | New | HM | |--------------|---------------|-------|-------|-------| | **CoOp** | Vanilla | 3.07 | 14.58 | 8.83 | | | +DOR | 2.67 | 6.49 | 4.58 | | | +DOR$^\dagger$| 2.82 | 6.77 | 4.80 | | **MaPLe** | Vanilla | 2.75 | 5.46 | 4.11 | | | +DOR | 2.83 | 4.44 | 3.64 | | | +DOR$^\dagger$| 2.89 | 4.51 | 3.70 | | **CoPrompt** | Vanilla | 2.56 | 5.96 | 4.26 | | | +DOR | 2.96 | 4.69 | 3.83 | | | +DOR$^\dagger$| 2.71 | 4.93 | 3.82 | --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. Based on the responses, I am raising my score and now lean more towards acceptance. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Knmp, Thank you for raising the score and supporting our work. We're glad to hear that all concerns have been addressed during rebuttal. Your valuable suggestions have significantly enhanced the quality of this paper. Best regards, Authors of submission 5881
Distributionally Robust Active Learning for Gaussian Process Regression
Accept (poster)
Summary: The paper explores a distributionally robust active learning framework designed to minimize worst-case posterior variance over an ambiguity set of data distributions. It builds on bounds for posterior variance under uncertainty sampling and links these results to regret minimization. The analysis focuses on probabilistic bounds and sampling strategies, with an emphasis on uncertainty-based selection. Claims And Evidence: 1. Equation (1) is described as the posterior variance of f(x), but it actually corresponds to the posterior variance of the (noisy) observations, y(x). Since the noise variance is explicitly included in the model, the posterior variance of f should be f_var - sigma^2 (given f and epsilon are independent). Additionally, the paper introduces an inconsistent notation for the noise term epsilon_i, implying that noise depends on the data points x_i, but later it was treated as a global, data-invariant parameter. 2. Lemma 3.5 is a core part of the paper's proposed method, yet it has a fundamental issue: it assumes without justification that minimizing an upper bound of E_T is equivalent to minimizing E_T itself. This assumption is incorrect unless the bound is proved to be somewhat tight, which is not discussed in the paper. Without that, we cannot ensure that minimizing the bound effectively minimizes the original function. At the very least, the authors should quantify the bound's gap, prove its asymptotic tightness, or demonstrate that the minimizers of both functions align. 3. A critical issue in the paper is the complete absence of any discussion regarding the structure, form, or assumptions on the ambiguity set P. In standard distributionally robust optimization (DRO), defining P is crucial, as it determines the tractability of the optimization, the validity of robustness guarantees, and the tightness of theoretical bounds. 
Typical DRO formulations either constrain P via distance metrics (e.g., Wasserstein distance or KL divergence), define it through known priors, or impose structural assumptions over some distribution family. However, this paper does not provide any explicit definition or constraints on P, making its distributional robustness claim unclear. I am particularly confused about how one can discuss a DRO method without making any assumptions about P, as the entire foundation of DRO relies on that. This omission is particularly problematic in Section 4, where the results are stated for all distributions in P, yet there is no indication of what P actually represents. Does the paper implicitly assume that the bound in Proposition 2.3 applies to all distributions in P? In my understanding, Proposition 2.3 only bounds the posterior variance under uncertainty sampling, which does not necessarily extend to arbitrary distributions in P. Methods And Evaluation Criteria: The paper evaluates distributional robustness by defining the ambiguity set P as the set of distributions within an L_inf neighborhood of a specific reference Gaussian distribution, with the size of the neighborhood controlled by eta. However, this specification raises concerns about whether the evaluation fairly assesses distributional robustness. DRO evaluations typically test robustness across a broad range of potential distribution shifts (including real-world data distribution shifts), rather than restricting P to a small perturbation around a fixed Gaussian. Since the method is only evaluated under this particular form, it remains unclear whether the proposed approach truly minimizes worst-case errors across diverse distributions or if its performance gains are specific to the chosen experimental setup. Theoretical Claims: 1. Proposition 2.3 lacks a complete proof, as it only cites previous results without detailing the crucial derivation steps. 
The inequality relies on Lemma 5.4, which only establishes that total regret is bounded by the information gain but does not directly bound the max posterior variance. Moreover, Lemma 5.4 assumes a GP-UCB sampling strategy, whereas Proposition 2.3 concerns uncertainty sampling, making it unclear whether the bound remains valid in this setting. Even if the inequality in Proposition 2.3 holds, the proof does not establish whether the max posterior variance actually converges to a constant or vanishes as T increases. A more rigorous derivation is needed to justify the claim. 2. The argument that Proposition 2.3 theoretically guarantees the effectiveness of uncertainty sampling and random sampling is stated too vaguely to me. The authors do not explicitly outline how Proposition 2.3 supports their claim, and when I tried to infer their reasoning as follows, a potential issue emerges: First, Lemma 3.5 provides an upper bound on E_T in terms of the max over p of the expectation under p of sigma_T. Then, they use the inequality max over p of E_p[sigma_T] <= max of sigma_T, suggesting that minimizing the latter should also minimize the former. Finally, they use Proposition 2.3, which bounds max of sigma_T under US, concluding that US and RS are theoretically justified. Now, even if we momentarily accept this reasoning and assume that both bounds involved are tight—meaning minimizing max of sigma_T effectively minimizes E_T—a fundamental issue remains: the variance sigma_T appearing in Proposition 2.3 is specific to a Gaussian process trained on data selected by US, whereas the quantity that should be minimized in Lemma 3.5 seems to be a general variance term that depends on the sampling strategy to be optimized (not necessarily US). The correct approach to minimizing max of sigma_T should be to optimize the data selection process so that the Gaussian process variance is reduced globally. 
However, Proposition 2.3 only states that if US is used, then the corresponding Gaussian process variance decreases at a certain rate. This does not imply that US is the optimal or even an effective choice for minimizing max of sigma_T in a general sense. Experimental Designs Or Analyses: See Evaluation Criteria. Supplementary Material: Yes. I read some of the proofs. Relation To Broader Scientific Literature: I am not sure. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
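Written out, the chain of bounds this review reconstructs is, schematically (our reading of the argument; the exact constants and conditions of Proposition 2.3 are in the paper):

```latex
E_T \;\le\; \max_{p \in \mathcal{P}} \mathbb{E}_{p(x^*)}\!\bigl[\sigma_T^2(x^*)\bigr]
    \;\le\; \max_{x^*} \sigma_T^2(x^*)
    \;\lesssim\; \frac{\gamma_T}{T} \quad \text{(Proposition 2.3, under US only)},
```

and the review's objection is that the final bound is stated for the posterior trained on US-selected data, so it does not by itself certify US as a good minimizer of the middle term over general sampling strategies.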
Rebuttal 1: Rebuttal: We appreciate the careful reading and constructive comments. However, we believe that several of the reviewer's comments stem from misunderstandings, which we would like to clarify below. First of all, our proposed methods are stated in Section 4 and are thus not US and RS. Therefore, we do not claim the optimality of US. Rather, we claimed that US could deteriorate in our problem setup, though its theoretical convergence (which may not be tight) can be shown by Proposition 2.3. Our main Theorems 4.1 and 4.2 (obtained by Lemmas 3.4 and 3.5) show that if our proposed methods run, the error $E_T$ must converge to zero if $\gamma_T$ is sublinear. Second, our theoretical claims do not depend on the definition of ambiguity sets, though we assume the existence of $\max_{p \in \mathcal{P}}$. Therefore, the theoretical claims hold for general ambiguity sets defined by, for example, Wasserstein distance and KL divergence. On the other hand, the practical performance and feasibility of the algorithm may be problematic. Dealing with these practical problems is an important future work, as described in Section 7. >the posterior variance of f should be f_var-sigma^2(given f and epsilon are independent). As shown in, for example, Eq. (2.24) of [1], the posterior variance of $f(x)$ given $\mathcal{D}_t$ is $\sigma^2_t(x)$. Note that the noise variance $\sigma^2$ affects $\sigma^2_t(x)$ through the computation of $(K + \sigma^2 I)^{-1}$ in Eq. (1). [1] C. E. Rasmussen & C. K. I. Williams, Gaussian Processes for Machine Learning, the MIT Press, 2006. >it assumes without justification that minimizing an upper bound of E_T is equivalent to minimizing E_T itself. We believe that when an objective function is computationally intractable, minimizing the tractable upper bound (or maximizing the lower bound) is a very common approach in the machine learning community. 
This is because, for example, converging the upper bound to zero immediately implies that the objective function converges to zero. For instance, the evidence lower bound maximization for the variational Bayes learning is generally accepted. >A critical issue in the paper is the complete absence of any discussion regarding the structure, form, or assumptions on the ambiguity set P. As stated above, our theoretical analyses only assume the existence of $\max_{p \in \mathcal{P}}$ (and practical feasibility). Therefore, the ambiguity sets can be set almost arbitrarily depending on the specific problems. > The inequality relies on Lemma 5.4, which only establishes that total regret is bounded by the information gain but does not directly bound the max posterior variance. Moreover, Lemma 5.4 assumes a GP-UCB sampling strategy, whereas Proposition 2.3 concerns uncertainty sampling, making it unclear whether the bound remains valid in this setting. In the derivation of Lemma 5.4 in [2], the fact $\sum_{t=1}^T \sigma^2_t(x_t) \leq 2 \gamma_T / \log(1 + \sigma^{-2})$ is shown for any sampling strategy. Note that the GP-UCB sampling strategy is required to show $r_t \leq 2 \beta_t^{1/2} \sigma_t(x_t)$ in [2]. [2] Srinivas, N., Krause, A., Kakade, S., and Seeger, M. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proceedings of the 27th International Conference on Machine Learning, pp. 1015–1022. Omnipress, 2010. >Even if the inequality in Proposition 2.3 holds, the proof does not establish whether the max posterior variance actually converges to a constant or vanishes as T increases As described immediately after Proposition 2.3, if $\gamma_t$ is sublinear, Proposition 2.3 implies that the maximum posterior variance converges to 0. Similarly, Theorems 4.1 and 4.2 show that the error $E_T$ converges to 0 if $\gamma_t$ is sublinear when our proposed algorithms are run. 
In addition, as described after Definition 2.1, $\gamma_t$ is sublinear for standard kernels such as SE and Matern kernels. >First, Lemma 3.5 provides an upper bound on E_T in terms of max of expectation under p of sigma_T. Then, they use the inequality max of expectation (under p) of sigma_T is less <= max of sigma_T, suggesting that minimizing the latter should also minimize the former. Our proposed methods aim to directly decrease $\max_{p \in \mathcal{P}} \mathbb{E}_{p(x^*)}[\sigma^2_t (x^*)]$. Please see Section 4.
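The posterior-variance formula cited in this rebuttal (Eq. (2.24) of Rasmussen & Williams) can be sketched in a few lines of NumPy. This is a generic illustration of how the noise variance enters through $(K + \sigma^2 I)^{-1}$, not the authors' code:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between row-stacked inputs a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def posterior_var(X, x_star, noise=0.1):
    """GP posterior variance of f at x_star given observed inputs X:
    sigma_t^2(x) = k(x, x) - k_t(x)^T (K + noise * I)^{-1} k_t(x)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    k = rbf(X, x_star)
    return rbf(x_star, x_star) - k.T @ np.linalg.solve(K, k)
```

Adding rows to X never increases the posterior variance, which is the monotonicity the convergence arguments in this thread rely on.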
Summary: The paper introduces a framework for Distributionally Robust Active Learning (DRAL) in Gaussian Process Regression (GPR), which focuses on minimizing the worst-case expected error over potential target distributions. The authors derive upper bounds on the worst-case expected squared error and show that the worst-case expected error can be reduced to arbitrarily small values with a finite number of labeled data points. Experimental results on both synthetic and real-world datasets demonstrate the effectiveness of the proposed DRAL methods compared to existing AL strategies. Claims And Evidence: * The authors provide theoretical derivations, including upper bounds on the worst-case expected squared error and posterior variance convergence. * Probabilistic upper bounds (Lemma 3.4, Theorem 4.1) are established, showing that posterior variance can be reduced arbitrarily small with a finite number of labeled data points. * The authors conduct experiments on synthetic data and real-world datasets (King County house sales, red wine quality, auto MPG). The results show lower expected squared errors for the proposed DRAL methods compared to uncertainty sampling (US), random sampling (RS), and variance reduction. * While the paper claims that the methods are efficient, it lacks a detailed computational complexity analysis or runtime comparisons. It is not clear whether scalable GP learning approaches can be combined with the proposed methods. * The evaluation focuses only on GPR and does not compare against deep learning-based AL techniques, such as transformer-based AL methods. Methods And Evaluation Criteria: * The evaluation focuses on expected squared error as the primary performance metric, which is appropriate for regression tasks. * The authors validated DRAL on both synthetic and real-world datasets; however, the datasets used are relatively small-scale and contain structured numerical data. 
* The paper does not discuss how well DRAL scales with increasing dataset size or feature dimensionality. Theoretical Claims: * The authors establish that posterior variance decreases monotonically with increasing labeled data. The proof assumes that the GP model is well-specified; when this assumption fails, the convergence guarantee may not hold. The authors do not provide experimental validation showing how quickly posterior variance converges in practical settings. Experimental Designs Or Analyses: Yes, the numerical results look solid. Supplementary Material: No Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Typo: Line 010 right: gaion -> gain Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the careful reading and constructive comments. >While the paper claims that the methods are efficient, it lacks a detailed computational complexity analysis or runtime comparisons. >The paper does not discuss how well DRAL scales with increasing dataset size or feature dimensionality. >It is not clear whether the scalable GP learning approaches can be combined with the proposed methods Except for the computation of $\max\_{p \in \mathcal{P}} \mathbb{E}\_{p(x^*)}[ \sigma^2\_t(x^*) ]$, the dominant computational complexity is $O(T^3)$, which comes from the computation of kernel matrix inversion. Furthermore, although the computational complexity of our algorithms does not depend on the feature dimension directly, the computation of the maximization and the expectation in $\max_{p \in \mathcal{P}} \mathbb{E}_{p(x^*)}[\sigma^2_t(x^*)]$ can become difficult in proportion to the feature dimension. As the reviewer suggested, the computational complexity $O(T^3)$ may be avoided by scalable GP learning approaches. In such cases, more careful proofs incorporating the approximation in GP learning, like [1], are required for rigorous analysis. Regarding the computation of $\max_{p \in \mathcal{P}}$, we assume that this computation is feasible in this paper. On the other hand, several studies in the DR learning literature have considered the efficient computation of $\max_{p \in \mathcal{P}}$ for more complex ambiguity sets $\mathcal{P}$ defined by, for example, Wasserstein distance and KL divergence [2, 3]. The extension for those ambiguity sets is an important future direction, as described in Section 7. Finally, regarding the computation of the expectation $\mathbb{E}_{p(x^*)}[\sigma^2_t(x^*)]$, our proposed method DR random does not require this computation, although DR variance reduction requires this expectation computation. 
We consider that developing an algorithm that does not require the expectation computation and achieves performance as good as DR variance reduction is interesting as future work, as described in Section~7. [1] Sattar Vakili, Jonathan Scarlett, Da-Shan Shiu, Alberto Bernacchia, Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning, Proceedings of the 39th International Conference on Machine Learning, PMLR 162:21960-21983, 2022. [2] Hu, Z. and Hong, L. J. Kullback-Leibler divergence constrained distributionally robust optimization. Available at Optimization Online, 1(2):9, 2013. [3] Frogner, C., Claici, S., Chien, E., and Solomon, J. Incorporating unlabeled data into distributionally robust learning. Journal of Machine Learning Research, 22(56): 1–46, 2021. >The evaluation focuses only on GPR and does not compare against deep learning-based AL techniques, such as transformer-based AL methods. Although comparisons against deep learning-based AL methods are important for practical validation, we focused on the performance of the AL algorithm rather than the base regression models. We believe that, for the extension to the deep learning models, our experimental validation implies that our AL algorithm can be a better basis than other AL algorithms for the GPR. >The authors do not provide experimental validation showing how quickly posterior variance converges in practical settings. We show the convergence of $\max\_{p \in \mathcal{P}} \mathbb{E}\_{p(x^*)}[ \sigma^2\_t(x^*) ]$ for the synthetic problems in Figure 3 in Appendix C. If the hyperparameter estimation for the kernel is stable, the convergence of $\max\_{p \in \mathcal{P}} \mathbb{E}\_{p(x^*)}[ \sigma^2\_t(x^*) ]$ shows similar results also in the other datasets. --- Rebuttal Comment 1.1: Comment: Thank the authors for the informative response, and I maintain my original recommendation.
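For intuition on the quantity $\max_{p \in \mathcal{P}} \mathbb{E}_{p(x^*)}[\sigma^2_t(x^*)]$ discussed in this thread: over a finite ambiguity set of p.m.f.s on a candidate grid, the worst-case expected variance and a greedy selection against it reduce to a few lines. This is a toy sketch of our reading of the DR variance-reduction idea, not the paper's Section 4 algorithms:

```python
import numpy as np

def worst_case_expected_var(var, dists):
    """max over p in a finite ambiguity set of E_p[sigma^2]; var[i] is the
    posterior variance at candidate x_i, each row of dists a p.m.f. over the x_i."""
    var = np.asarray(var, dtype=float)
    return max(float(np.asarray(p, dtype=float) @ var) for p in dists)

def dr_select(var_if_added, dists):
    """Greedily pick the candidate whose hypothetical labeling minimizes the
    worst-case expected posterior variance; var_if_added[j] is the variance
    vector that would result from adding candidate j to the training set."""
    scores = [worst_case_expected_var(v, dists) for v in var_if_added]
    return int(np.argmin(scores))
```

For richer ambiguity sets (Wasserstein or KL balls), the inner maximization becomes an optimization problem rather than an enumeration, which is the tractability concern the rebuttal defers to future work.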
Summary: The authors investigate the problem of distributionally robust active learning for Gaussian process regression, addressing limitations of existing approaches that primarily rely on heuristic and information gain-based methods. They provide a rigorous theoretical analysis with guarantees on the posterior mean and implement their framework using greedy and random search acquisition functions. The proposed method is evaluated on both experimental and synthetic datasets, with the objective of minimizing worst-case prediction errors. Performance is benchmarked against several baseline models, including uncertainty sampling (US), random sampling (RS), variance reduction, expected predictive information gain (EPIG), and distributionally robust approaches, both constrained and unconstrained. Claims And Evidence: Are the claims made in the submission supported by clear and convincing evidence? - Yes Methods And Evaluation Criteria: Do proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand? - Yes Theoretical Claims: Did you check the correctness of any proofs for theoretical claims? - Yes, but not thoroughly. Experimental Designs Or Analyses: Did you check the soundness/validity of any experimental designs or analyses? - Yes Supplementary Material: Yes - section C Relation To Broader Scientific Literature: I am not very familiar with related works in target-distribution-aware AL and AL for worst-case error, so I can't comment on this. Essential References Not Discussed: I am not very familiar with related works in target-distribution-aware AL and AL for worst-case error, so I can't comment on this. Other Strengths And Weaknesses: Strengths: - Evaluates the proposed method on a diverse set of datasets, including both synthetic and experimental data, while providing comprehensive baseline model comparisons. 
- Clearly presents related work, effectively distinguishing prior contributions from the novel aspects introduced in this study. Weakness: - The scope of discussion is restricted to only Gaussian process regression (GPR) surrogates in active learning. However, active learning can incorporate alternative surrogate models, such as deep kernel learning and Bayesian neural networks (BNNs). Other Comments Or Suggestions: Please see Questions For Authors. Questions For Authors: How can the analysis be extended to surrogates for active learning other than GPR? - Bayesian Neural Nets - Deep kernels (Stochastic Variational Deep Kernel Learning - Wilson et al 2016) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the careful reading and constructive comments. >How can the analysis be extended to surrogates for active learning other than GPR? For the extension to other surrogate models, such as Bayesian neural networks (BNN) and deep kernel learning (DKL), there is a problem that the posterior variance does not necessarily decrease. Therefore, the extension is not straightforward, just as the theoretical analysis of bandits with BNN and DKL models is still often difficult. We believe that the analyses for those models are crucial tasks in the community. On the other hand, our analysis is expected to be extendable to GPR with the neural tangent kernel by combining the analyses of neural bandits, for example, in [1, 2]. [1] Parnian Kassraie, Andreas Krause, Neural Contextual Bandits without Regret, Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:240-278, 2022. [2] Taehyun Hwang, Kyuwook Chai, Min-Hwan Oh, Combinatorial Neural Bandits, Proceedings of the 40th International Conference on Machine Learning, PMLR 202:14203-14236, 2023. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for responding to my questions. I will keep my score!
Summary: This paper examines active learning in the context of Gaussian Process Regression (GPR). In particular, it seeks to minimize the worst-case error with respect to the (unknown) test-time marginal distribution, by considering the worst-case distribution in an ambiguity set. First, the paper upper bounds this worst-case error as a function of the worst-case predictive variance. This motivates two selection algorithms (DR and CDR) which aim to minimize this worst-case variance, hence minimizing an upper bound on worst-case error. Similar theory is derived to show that this strategy also upper bounds worst-case absolute error and worst-case entropy. The empirical results evaluate the achieved value of worst-case error as a function of the number of sampling iterations on synthetic and real-world data. Claims And Evidence: The theoretical claims in this paper are well supported. The upper bound relating worst-case error to worst-case variance is a nice connection, and is well-supported by theory. They also show a sample complexity analysis of their sampling approaches over the sampling horizon. To me, the potentially weak aspect of this paper is the empirical results. There are several issues that I see: - first, it is hard to interpret these results visually. 20 random trials per method seems low, especially for these small datasets, and more trials would help reduce the variance of the results. It is difficult to tell if there are any significant differences between the proposed methods and baselines. In one or two cases perhaps the uncertainty sampling (US) baseline does worse, but in other cases the DR methods don't do as well as the baselines. 
At a high level, the proposed methods do qualitatively seem to perform better in most cases, but these results could be strengthened by additional analysis such as significance testing between area under the performance curve values, and showing the number of times each method "wins" over other methods according to a significance test - along these lines, there is something I am very confused by in the experiments: when $\eta = 0$, this is equivalent to the simple transductive case where the test distribution is known exactly. I am confused why in this setting, the proposed DR methods seem to have the biggest gain over baselines. On the other hand, as $\eta$ is increased (which in theory should be where DR active learning shines), there is hardly any improvement over US. So it's not clear to me what the empirical advantage of this proposed approach is. Methods And Evaluation Criteria: Yes, and in particular the worst-case error is measured as a performance benchmark. The authors should clarify on their y-axes that "expected squared error" means worst-case error E_T. I assume this does in fact mean squared error with respect to the worst-case p.m.f. over the actual (discrete) points for each dataset test-split. Is that correct? Can the authors clarify exactly what is being shown on the y-axis, for both the synthetic and real-world experiments? Theoretical Claims: I skimmed the proofs and they seem reasonable. The theoretical claims seem well-supported by tools in prior literature. Experimental Designs Or Analyses: The experimental design seems reasonable, and I have included comments above. Supplementary Material: Appendix Relation To Broader Scientific Literature: The authors present a thorough related work section. However, I think more discussion is needed about Frogner et al. 2021 who also propose distributionally robust active learning. 
This previous work is briefly mentioned, and the authors say that no theory was provided there, but on the empirical side there is no experimental comparison against that method (that I can see). Can the authors elaborate more about this prior work? What was the setting there - GPR regression or something else? Was there an empirical selection algorithm that could also be tested here? If so, why wasn't it evaluated as a baseline? What theory is shown in this prior work? From what I can tell that seems to be the most relevant prior work in the literature so it's important to clearly distinguish what the contribution is here, in comparison. Essential References Not Discussed: The paper Liu, A., Reyzin, L., & Ziebart, B. (2015, February). Shift-pessimistic active learning using robust bias-aware prediction. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 29, No. 1). is highly relevant here and should be discussed, since it also essentially designs an active learning algorithm for the worst case. It is for classification, so couldn't be compared here, but should be mentioned. Other Strengths And Weaknesses: I thought all aspects of this paper were excellent, until I got to Section 6 (Experiments). I think the authors need to address what to me seems like a discrepancy above: when the ambiguity set is small, the proposed methods do better, and when it is large, the baselines (in particular, US) does just as well. To me the first observation doesn't make sense, and the second observation makes it unclear what the utility of the proposed algorithms are, if they don't outperform baselines in the setting they are designed for. In particular, I am surprised how well US does in the large ambiguity set case. If the authors could better justify the empirical aspects of this work, I would increase my rating. Otherwise, the theoretical contributions, related work, introduction, background etc were all very interesting and very well written. 
Other Comments Or Suggestions: - in section 4, what is the "greedy algorithm" (line 260)? This should be defined. I assume it is greedy expected error reduction - the discussion right before equation (4) could be improved. Can you better motivate your choice of constraint set here? - this paper would be strengthened with lower bounds on worst-case expected error. the authors do acknowledge this in the limitations by mentioning how it would be good to know how optimal this approach is - substitute "rows" for "columns" in the figure captions - in line 384 there is a comment about the derivative of log(a), in the context of EPIG. This is intriguing but I didn't totally follow the logic here. Why do we care about this derivative when $a$ is small? Can the authors elaborate on the point they were going for here? - figures would be easier to interpret on a log-y scale. in fact this might be a necessity - it is very hard to read the figures when all of the methods perform so similarly - the "Impacts statements" section is actually an opportunity to comment more about the worst-case aspect. Why exactly do we care about the worst-case? Perhaps there could be society impacts of this (performance on minority groups, fairness aspects, etc). Now that I think about it, I think the paper could better motivate overall why we care about worst-case performance (either in the introduction or conclusion). - the plots in the appendix would also benefit from a log y-scale - can the authors include a simple figure showing which points are actually selected by their method, in comparison to say US, on a standard GPR toy problem? 
I'd be curious to see qualitatively which points are selected for worst-case error mitigation Questions For Authors: Please see my comments above, especially about the experimental section, and please address what I perceive as contradictions in the experimental results regarding the size of the ambiguity set and how surprisingly well US performs when the ambiguity set is large, and how surprisingly well DR/CDR performs when the ambiguity set is small. I would have expected the *opposite* result, where US performs well when $\eta = 0$ and DR/CDR performs better when $\eta$ is larger. In fact, this observation is even more concerning in the appendix plots: US seems to do just as good of a job as DR/CDR in reducing the actual max variance. What benefit does DR/CDR really have then, since they are designed to minimize this worst-case variance, if US does just as good of a job? ## update after rebuttal Thank you for your response. Your clarifications have addressed my concerns. Please make any needed edits in the next iteration of the paper. I have increased my score from Weak Accept to an Accept Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for the reviewer's careful reading and insightful and detailed comments, based on which we will revise the paper as much as possible. We will concentrate on and answer several seemingly important questions below. >in line 384 there is a comment about the derivative of log(a), in the context of EPIG. This is intriguing but I didn't totally follow the logic here. Why do we care about this derivative when $a$ is small? EPIG aims to decrease the entropy $\mathbb{E}\_{p(x^*)} [ \log (\sigma^2_t (x^*))]$ instead of the posterior variance $\mathbb{E}\_{p(x^*)} [ \sigma^2_t (x^*)]$. Then, if we can decrease $\sigma^2_t (x) = 1$ to $\sigma^2_{t+1} (x) = 0.9$, the difference in the posterior variance is $0.1$ and the difference in the entropy is $|\log 1 - \log (0.9)| \approx 0.1$. Therefore, the amounts of decrease in the posterior variance and the entropy are almost the same. On the other hand, if we can decrease $\sigma^2_t (x) = 0.1$ to $\sigma^2_{t+1} (x) = 0.01$, the difference in the posterior variance is $0.09$ and the difference in the entropy is $|\log (0.1) - \log (0.01)| \approx 2.3$. Therefore, the amount of decrease in the entropy is larger than that of the posterior variance in this case. This is because the derivative of the logarithmic function, $1 / a$, is large near $a = 0$. Hence, compared with our proposed algorithms, EPIG focuses on additionally decreasing the posterior variance in areas where the posterior variance is already small. This is the reason why EPIG deteriorates when the performance measure ($E_T$) is based on the posterior variance. >when the ambiguity set is small, the proposed methods do better, and when it is large, the baselines (in particular, US) does just as well. To me the first observation doesn't make sense, and the second observation makes it unclear what the utility of the proposed algorithms are, if they don't outperform baselines in the setting they are designed for. 
In particular, I am surprised how well US does in the large ambiguity set case. In our experiments, the same ambiguity set $\mathcal{P}$ is used for the computation of $E_T$ and for the actual algorithm. Therefore, our proposed methods are expected to perform well for all $\eta$. On the other hand, if $\eta$ is large, $\mathcal{P}$ becomes a large set, and our proposed methods approach the US. For example, for $\eta = 1$, the ambiguity set $\mathcal{P}$ defined by the $L_{\rm inf}$ ball contains all the distributions over $\mathcal{X}$. Therefore, $p_t(x) = {\rm argmax}\_{p \in \mathcal{P}} \mathbb{E}\_{p(x^*)} [\sigma^2_t (x^*)] $ becomes the distribution that has probability 1 at ${\rm argmax}\_{x \in \mathcal{X}} \sigma^2_t(x)$ and 0 otherwise. Consequently, DR random and DR variance reduction reduce to the US. Conversely, if $\eta$ is small, the proposed methods can utilize the information of $\mathcal{P}$ compared with the US. Furthermore, since EPIG does not show superior performance for $E_T$ (see the above answer), our proposed methods perform better than the baseline methods. Consequently, we believe that the benefit of the proposed algorithms is their versatility with respect to the size of $\mathcal{P}$. The proposed algorithms approach the classical AL (the US) and the test distribution-aware AL for large $\mathcal{P}$ and small $\mathcal{P}$, respectively. Therefore, the proposed algorithms perform well for arbitrary sizes of $\mathcal{P}$. >However, I think more discussion is needed about Frogner et al. 2021 who also propose distributionally robust active learning. Frogner et al., 2021 focus on DRAL for an ambiguity set defined by the Wasserstein distance over continuous distributions. Therefore, their proposed algorithm does not match our experimental problem setup. In addition, the heuristic AL algorithm they use is expected model change, whose AF is the norm of the gradient of the loss function with respect to the model parameters. 
However, since the GPR (kernel ridge regression) is a nonparametric model, even the definition of the AF is not straightforward. For the above two reasons, we did not employ the method of Frogner et al., 2021 as a baseline. >Liu, A., Reyzin, L., & Ziebart, B. (2015, February). Shift-pessimistic active learning using robust bias-aware prediction. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 29, No. 1). We thank you for pointing out this important related work, for which we will add a discussion. >the discussion right before equation (4) could be improved. Can you better motivate your choice of constraint set here? Since at least the US can satisfy the theoretical convergence guarantee, a constraint that enforces the choice of inputs with large variance may be beneficial. Inspired by this thought, our constraint is derived from the proof. On the other hand, a more intuitive motivation may be obtained by removing or alleviating the constraints. This is also an interesting future direction.
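The reduction to US for large $\eta$ can be checked numerically. Below is a small illustrative sketch (not the authors' implementation; the candidate variances and the uniform nominal distribution are made-up values) of the inner maximization ${\rm max}\_{p \in \mathcal{P}} \mathbb{E}\_{p(x^*)} [\sigma^2_t (x^*)]$ over an $L_{\rm inf}$ ambiguity ball on a discrete candidate set:

```python
import numpy as np

def worst_case_expected_variance(v, p0, eta):
    """Maximize E_p[v] over {p : |p_i - p0_i| <= eta, p a pmf} (a linear
    program), solved greedily: start every coordinate at its lower bound,
    then pour the remaining probability mass into the highest-variance
    points first."""
    lo = np.maximum(p0 - eta, 0.0)
    hi = np.minimum(p0 + eta, 1.0)
    p = lo.copy()
    budget = 1.0 - lo.sum()
    for i in np.argsort(-v):  # highest variance first
        add = min(hi[i] - lo[i], budget)
        p[i] += add
        budget -= add
        if budget <= 0:
            break
    return p, float(p @ v)

# Five candidate inputs, uniform nominal test distribution (made-up values).
v = np.array([0.05, 0.10, 0.40, 0.90, 0.30])  # posterior variances sigma_t^2(x)
p0 = np.full(5, 0.2)

_, e_known = worst_case_expected_variance(v, p0, eta=0.0)  # test dist. known
_, e_all = worst_case_expected_variance(v, p0, eta=1.0)    # all distributions

print(round(e_known, 6))  # 0.35: E_{p0}[sigma^2], the transductive objective
print(round(e_all, 6))    # 0.9: max_x sigma^2(x), the US criterion
```

For $\eta = 0$ the objective is the transductive one under the nominal test distribution, while for $\eta = 1$ the worst case puts all mass on the maximum-variance point, i.e., exactly the US criterion, consistent with the interpolation described above.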
AlphaVerus: Bootstrapping Formally Verified Code Generation through Self-Improving Translation and Treefinement
Accept (poster)
Summary: The paper proposes a holistic approach for the LLM-based generation of verified code. First, the paper proposes an approach to address the scarcity of training/sample data in many real-world programming languages that could serve as code generation targets. To this end, the paper proposes an LLM-based technique that allows the translation of samples from "data-rich" languages to "data-poor" languages (e.g. from Dafny for which larger benchmark datasets exist to Rust). The paper evaluates the proposed translation pipeline (incl. ablation studies) and demonstrates the applicability of the resulting data set to verified code generation. ## update after rebuttal I appreciate the authors' response which lifted all open questions on my side. I keep my score and am in favour of accepting the paper. Claims And Evidence: The claims of the paper are well supported by empirical evidence. Methods And Evaluation Criteria: The paper uses appropriate methods for its objectives. While the "Critique" step of the translation pipeline does not provide any formal guarantees, the paper specifically discusses this limitation and the approach is nonetheless a significant step forward in correct translation. Theoretical Claims: N/A Experimental Designs Or Analyses: I checked the experimental results presented in the paper and appendix. In my view, the paper does a good job at grounding its claims with experimental results. Unfortunately, there is no baseline to which a direct comparison would be possible. Nonetheless, the authors put significant effort into comparing to related approaches where possible which is laudable. Supplementary Material: No Relation To Broader Scientific Literature: The paper indeed addresses an important problem: For many programming languages that would be natural targets for LLM-based verified code generation or specification generation, there exists relatively little data that can be used for training, fine-tuning or prompting. 
LLM-based verification has also been attempted for other real world programming languages such as C with ACSL specifications [arxiv23,FASE24,CAV24] or Java with JML specifications [AI4MATH24,ISOLA24,GPCE24]. In many cases, the availability of benchmarks is an issue. Consequently, AlphaVerus is a very welcome contribution. Sidenote: I do *not* expect you to cite all these publications, this is just meant to illustrate the wider landscape and emphasize that scarcity of data is a real problem. [arxiv23] https://arxiv.org/pdf/2311.07948 [FASE24] https://link.springer.com/chapter/10.1007/978-3-031-57259-3_13 [CAV24] https://link.springer.com/chapter/10.1007/978-3-031-65630-9_16 [AI4MATH24] https://openreview.net/forum?id=ZRTcPkNl7v [ISOLA24] https://link.springer.com/chapter/10.1007/978-3-031-75387-9_15 [GPCE24] https://dl.acm.org/doi/10.1145/3689484.3690738 Essential References Not Discussed: no Other Strengths And Weaknesses: I appreciate the effort the authors put into the filtering approach. The experimental results clearly show that this drastically increases the quality of translations, which is great! However, I have to note that I am not convinced that the rule-based approach can filter out all cases where a specification becomes trivially verifiable. For example `assume(false)` could be rephrased into something like `assume(0!=1)` (or even worse something depending on variables in the program but always evaluating to false). Other Comments Or Suggestions: Concerning Algorithm 1: - It is not clear from the algorithm that $D_{\text{exploit}}$ is used in the if statement. If possible, it would be better to also adopt the $\dots \sim G_{\text{exploit}}(\dots)$ notation here - $D_{\text{exploit}}^{(i+1)}$ seems to miss an initialization with $D_{\text{exploit}}^{(i)}$ - The algorithm looks like $S$ from step (I) is always used -- independently of whether a correct sample is found. 
This is confusing because on page 4 (at least superficially) it sounds like $S$ is only used if no correct sample was found. ("If no candidates verify for source $x$, candidates that are syntactically correct proceed to refinement") Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your very thoughtful review and your positive feedback on our work. We appreciate your recognition of AlphaVerus's potential and thorough understanding of our contributions. We address your points below: --- > "However, I have to note that I am not convinced that the rule-based approach can filter out all cases where a specification becomes trivially verifiable. For example assume(false) could be rephrased into something like assume(0!=1)" The critique module's components work in tandem to collectively filter out bad specifications, so we wouldn't expect the rule-based filter alone to filter out all of them. Regarding the case you mentioned (assume(false)), we should clarify that we exclude all assume statements from final proofs, as they are only used for debugging. Therefore, variants like `assume(0 != 1)` will be filtered out. Further, our manual analysis (see response to Reviewer 1) found only 5% critique failures. That said, we intentionally kept our rules here fairly simple, so expanding this part of the pipeline is interesting future work. --- > "Other Comments Or Suggestions" We thank the reviewer for the suggestions. We will update the algorithm accordingly. Further, to clarify, S from step 1 is only used if no correct solution is found. We will update the algorithm to make it clear. --- We also thank the reviewer for sharing additional references that demonstrate the widespread issue of data scarcity in verified code generation. We strongly agree with your perspective and greatly appreciate your acknowledgment that AlphaVerus addresses an important gap. We will appropriately incorporate the valuable citations you provided in our revised manuscript to further strengthen our motivation for AlphaVerus. --- We appreciate your valuable feedback and suggestions. We look forward to addressing any additional questions or points you may have.
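As a side note on the rule-based component discussed in this exchange, pattern-based filtering of assume statements could look like the following minimal sketch. This is purely illustrative, not AlphaVerus's actual filter; the patterns and the function name are our own assumptions:

```python
import re

# Hypothetical patterns in the spirit of the rule-based critique: reject
# candidates whose proofs rely on assume statements (however the condition
# is spelled, e.g. assume(0 != 1)) or on verifier-bypass annotations.
SUSPECT_PATTERNS = [
    re.compile(r"\bassume\s*\("),          # any assume(...), not just assume(false)
    re.compile(r"#\[verifier::external"),  # bypasses verification
    re.compile(r"\badmit\s*\(\s*\)"),      # discharges a proof obligation unproved
]

def passes_rule_filter(program: str) -> bool:
    """Return True only if no suspect pattern occurs in the program text."""
    return not any(pat.search(program) for pat in SUSPECT_PATTERNS)

print(passes_rule_filter("fn sq(x: i32) -> i32 { x * x }"))   # True
print(passes_rule_filter("proof { assume(0 != 1); }"))        # False
print(passes_rule_filter("#[verifier::external]\nfn f() {}")) # False
```

Matching on the `assume(` keyword rather than a specific condition is what lets such a rule catch rephrasings like `assume(0 != 1)`, which is the point made in the rebuttal; conditions that are semantically false but syntactically hidden elsewhere would still need the other critique components.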
Summary: The paper introduces AlphaVerus, a framework for generating formally verified code using LLMs, with a focus on the challenges of programming languages with limited training data. AlphaVerus works by translating verified code from programming languages with lots of examples into the target programming language. First it generates candidate translations, then refines them using a tree search algorithm with code-verifier feedback, and finally filters misaligned specifications and programs. AlphaVerus can generate formally verified solutions for HumanEval-Verified and MBPP-verified. Claims And Evidence: Claims supported by evidence: * The paper provides evidence that AlphaVerus improves the translation of programs from Dafny to Verus through more iterations. With experiments, it shows a steady increase in translation success rate.  * The authors show that Treefinement (their refinement approach) increases performance better than additional parallel sampling. * The critique phase is used to prevent reward hacking. One question I have is whether the comparisons with SAFE and AutoVerus are valid, given they both differ in the size and characteristics of their datasets and the paper does not try SAFE or AutoVerus on the same datasets as AlphaVerus. Methods And Evaluation Criteria: Overall the approach seems reasonable. I wonder whether using Dafny as the source domain limits the applicability, since Dafny is not a mainstream language. Theoretical Claims: - Experimental Designs Or Analyses: Appendix D states that "critique [...] may not work in all cases, especially for more complex problems". It would be beneficial to expand on this point, if only to provide further characterization of the kinds of complex problems that lead to critique failure. 
Supplementary Material: Appendices A-D Relation To Broader Scientific Literature: - Essential References Not Discussed: - Other Strengths And Weaknesses: - Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review of our paper. We're glad you found our approach reasonable and appreciated the evidence supporting AlphaVerus’s effectiveness including treefinement and critique phase to prevent reward hacking. We address your questions and suggestions below: --- > One question I have is whether the comparison with SAFE and AutoVerus are valid, given they both differ in size and characteristics of datasets and the paper does not try SAFE or AutoVerus on the same datasets as AlphaVerus. We do think the comparisons are valid, so let us make a few clarifications. First, the comparison in Table 4 uses the exact same benchmark dataset: MBPP-verified. SAFE++ and AutoVerus were specifically designed for the proof annotation task, so we used AlphaVerus on this task to give the baselines the strongest possible chance. Regarding SAFE++, the authors have not released their code, so we were limited to reporting their published numbers. For the verified code generation tasks, we attempted to adapt the publicly available AutoVerus framework, but despite significant effort, could not achieve non-trivial performance. As we note in the paper, this is almost certainly due to their task-specific, hand-engineered prompts. Finally, we note that this research area is still emerging, and even SAFE++ and AutoVerus are fairly concurrent works, so we made our best effort to construct a fair comparison given their results and models. We would be happy to further clarify any other aspects of this evaluation. --- > I wonder whether using Dafny as the source domain limits the applicability, since Dafny is not a mainstream language. Our goal in AlphaVerus is to perform formally verified code generation. Among automated program verification languages, Dafny is one of the most prominent in terms of code content and industrial use, including large-scale deployments [1]. 
Further, our primary goal was demonstrating the *feasibility of the bootstrapping pipeline* (translation -> refinement -> critique -> self-improvement) in a realistic data-scarce setting for verified code. Dafny was chosen strategically because it *has* accumulated sufficient verified examples over time to serve as a viable source, unlike many target verification languages (like Verus). Crucially, our pipeline makes *minimal assumptions about the source language* (e.g., no source verifier needed). The core methodology – leveraging a higher-resource language, iterative refinement with verifier feedback, and critique – is applicable to other language pairs where a similar resource disparity exists. --- > Appendix D states that "critique [...] may not work in all cases, especially for more complex problems". It would be beneficial to expand on this point... This statement in the limitations (Appendix D) was included in the spirit of acknowledging *potential future challenges* as problem complexity scales significantly beyond current benchmarks. However, within our experiments on DafnyBench translation and HumanEval/MBPP-verified generation, we did not observe failure patterns where the critique allowed misaligned or trivially verified programs through. That said, identifying specific limitations of the critique component and improving it further would be an interesting direction for future work to explore. --- We hope these clarifications address your concerns. We value your feedback and believe these points strengthen our contribution. We look forward to addressing any additional questions or points you may have. ## References: [1] Chakarov, Aleks, et al. Formally Verified Cloud-Scale Authorization. 2025, https://www.amazon.science/publications/formally-verified-cloud-scale-authorization.
Summary: The paper introduces a novel (ensemble of) technique(s) for the translation and formal verification of programs using Verus, a library based on the Rust language. The authors implement a 3-step process that utilizes an LLM to generate samples of a program translation and proof of formal verification, a tree-based search to refine candidates that present errors in the translation, and a final LLM critique step to test whether the specifications of the original program have been respected in the translation process and the formal proof has been conducted correctly. By iteratively applying this process, the successful translations are collected in a dataset to be used as in-context examples for future iterations, leading to an incremental improvement in translation performance. The novel contributions are in the combined usage of the tree-based search (Treefinement) and the critique actor; the latter includes very language-specific checks (string matching) but also more generalizable methods, and looks promising for use in other contexts as well. ## update after rebuttal Nothing really changed with the rebuttal. I believe the paper should be accepted. Claims And Evidence: The claims seem to be well supported by the ablation studies showcased in the results and appendix sections. Treefinement is a win as it allows (in combination with exploration) the results to improve to best in class. The critique actor is also needed, as showcased in figure 7. The idea is that without it the translations tend to degenerate into hacked solutions rather than valid ones. Methods And Evaluation Criteria: Yes, the evaluation criterion of a successful translation and verification is sensible. The scope is limited to datasets with human-labeled and verified translations to Verus, which are limited in size. Theoretical Claims: N/A Experimental Designs Or Analyses: Experimental designs are sound and present a fair comparison of this methodology with baselines. 
As this is the first paper addressing translation and formal verification in Verus, benchmarks rely on few-shot techniques using accessible and well-performing LLMs, but not on other specialized prompting and verification strategies. It seems we just trust the critique in the final evaluation of translation. While the reward hacking analysis is nice, it's not clear yet whether we can fully trust the generated specs. Supplementary Material: I skimmed through appendix A-C, reading some parts in full. Relation To Broader Scientific Literature: This paper is relevant to work at the intersection of formal verification and LLMs. Bootstrapping off one formal language to another is a common difficulty with data scarcity in the space. Beyond Rust, this provides a promising method for using a lot of code. I'd be curious if this works in math with Isabelle proofs -> Lean proofs or similar. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper presents an interesting methodology to kick-start and augment datasets for translation and formal verification of programs. The feedback from the compiler during Treefinement looks particularly useful for generating sound proofs that pass the compiler test and thus are free of potential hallucinations, increasing the probability of the creation of valid code, especially in the early phases when an LLM could have trouble generating valid proofs in the absence of data. The critique is also particularly useful to avoid obvious reward hacking. However, it is unclear how scalable this last step is to other languages: it will require different human-coded rules, and the peculiarities of other languages might also make reward hacking harder to identify. The self-improvement mechanism using in-context examples is interesting and is a good way of getting self-improvement without being able to fine-tune a model. 
That being said, it's not clear whether this approach is limited compared to methods that also improve the model weights. It was interesting seeing the exemplars transfer to other models. Other Comments Or Suggestions: I think the section on the critique could use a bit more exposition in the main paper, though the detail of the appendix is appreciated. Questions For Authors: How can you evaluate the critique model in a general and scalable way? It seems like the reward hacking analysis focuses on one particular hack, but I imagine there is no shortage of other hacks. Can you subsample proofs and manually verify them to get numbers that can be trusted? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and positive assessment of our work. We appreciate your recognition of AlphaVerus's novel, useful, and interesting methodology, particularly the Treefinement and Critique components, and your view that it is a promising method for other contexts. We are happy to address your questions and suggestions below. --- > Yes, the evaluation criteria of a successful translation and verification is sensible. The scope is limited to datasets with human labeled and verified translations to Verus, which are limited in size. Thank you for the comments about sensible evaluation criteria. Regarding scope, we agree that current evaluation datasets (HumanEval-Verus, MBPP-verified, Sec 4) are limited. This is primarily because the research direction of formally verified code generation is still emerging, and verifying code is extremely difficult even for human experts. We would be excited if future work expands the scope of benchmarks. Aside from evaluation, we note that our pipeline makes limited assumptions about the underlying data, so we also see adapting AlphaVerus to different source domains as an exciting future direction. > It seems we just trust the critique in the final evaluation of translation. While the reward hacking analysis is nice, it's not clear yet whether we can fully trust the generated specs. Thank you for raising this interesting point about the generated specifications. Indeed, ensuring that the generated specification aligns with the source intent is challenging, as specs lack formal guarantees relative to intent. In our case, we evaluated AlphaVerus on downstream tasks in which the specifications are human-written. This means that the generated specifications from the AlphaVerus pipeline were at least useful for improving verified code generation performance. 
In addition, motivated by your feedback, we conducted a manual review on a random subset of 20 examples and found 95% to be correct and complete translations, suggesting that the AlphaVerus pipeline frequently produces reasonable specifications. However, fully trusting AI-generated specs is an open and challenging area for future work to explore. > The critique is also particularly useful to avoid the obvious reward hacking. However, it is unclear how scalable this last step is to other languages: it will require different human coded rules, and the peculiarity of other languages might also make reward hacking harder to identify. Although rules can vary by language, our critique stage includes components (comparison and exploit models) that make minimal assumptions about the target language. The rule-based component is also intentionally simple, involving basic pattern matching (e.g., checking for assume statements, trivial preconditions, and verifier bypass annotations). Most of these would carry over to other verification-oriented languages. Nonetheless, fully adapting AlphaVerus to other languages remains an interesting future direction. > I think the section on the critique could use a bit more exposition in the main paper, though the detail of the appendix is appreciated. Thank you for highlighting this. We will incorporate a clearer and more detailed summary of the critique step directly into the main paper. --- We hope these responses address your points. We appreciate the encouraging feedback and believe these clarifications will strengthen the final paper. We are happy to discuss any further concerns.
Predictive Data Selection: The Data That Predicts Is the Data That Teaches
Accept (poster)
Summary: The authors propose a new data selection method based on the rankings of perplexity which a range of Llama models assign to documents. The method is scaled to large datasets by means of a fastText filter. The authors claim that this captures which training examples are predictive of broad downstream abilities. The data selected by this method achieves better downstream results than a number of baselines when evaluated in a comparable setting. The authors perform some high-level inspection of domains upweighted and downweighted by the different methods. Claims And Evidence: While the experimental results look impressive, I have major concerns about the justification and motivation for the method. **Confounding factors** The central thesis is that "data on which model losses are predictive of downstream abilities also contribute effectively to learning". However, their method only vaguely involves “prediction of downstream abilities” as they employ a coarse-grained rank-correlational method, where the average benchmark ranking of a small set of models is compared to the models’ perplexity ranking on a specific document. Their “downstream abilities” model ranking is fixed across all experiments: Llama-1-65B > Llama-1-30B > Llama-2-13B > Llama-1-13B > Llama-2-7B > Llama-1-7B, which suggests the equally valid hypotheses that either “*documents on which additional model parameters benefit loss are useful for pre-training*” or “*documents on which additional pre-training compute benefits loss are useful for pre-training*”. The first hypothesis has previously been explored by ScalingFilter [1], which I believe is an important point of comparison for the proposed method. Their method only contrasts losses between two model scales, and you could show that intermediate levels actually help.
To show that the method actually predicts “downstream abilities” beyond only pre-training scale, the authors should choose a setting where different downstream tasks elicit different model rankings. Note that this fine-grained downstream analysis is performed in Perplexity Correlations [2], which the authors discuss as the most similar prior work (although I think that ScalingFilter [1] is actually the most directly related paper). **Importance of group-level selection** The authors claim that sample-level selection is an important factor in the success of their method compared to Perplexity Correlations [2], which only performs group-level selection. However, I believe these methods are very different in nature. Therefore, the empirical justification for this claim should be to apply their method and compute perplexity rankings on a group level to select data, rather than comparing to the results of Perplexity Correlations. [1] Li et al., EMNLP 2024. ScalingFilter: Assessing Data Quality through Inverse Utilization of Scaling Laws [2] Thrush et al., ICLR 2025. Improving Pretraining Data Using Perplexity Correlations Methods And Evaluation Criteria: The method makes sense and the evaluation/experimental setting is reasonable for evaluating training data selection. Scaling the document selection to large corpora with fastText is also understandable. **Reliance on large pre-trained models** One potential issue with the experiments is that they rely on much larger models (from 7B-65B) for selecting training data of a small model (1B). This may limit the utility of this method when wanting to scale to even larger models. Note, for example, that recent work in perplexity filtering [3] (another very relevant paper) highlights the use of a smaller model for evaluating perplexity than used for training. I would suggest repeating the method with smaller models, for example the Pythia models of sizes 14M, 31M, 70M, 160M and 410M. [3] Ankner et al., ICLR 2025.
Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models Theoretical Claims: n/a Experimental Designs Or Analyses: The experimental design follows standard practices from prior works. The choice of downstream tasks is also good. However, besides the missing comparisons discussed above, I have the following concerns. **Add few-shot performance** It is not clear whether the main results in Table 1 are from zero-shot or few-shot prompting (whereas Table 2 is explicitly zero-shot). I would suggest adding few-shot results in the appendix at least. **Low performance** I'm confused why the general results in Table 1 are so low. For example, a 1B Random selection baseline trained on 25B tokens of C4 in Table 2 outperforms the 30B-token DCLM baseline in Table 1 on ARC-easy, ARC-challenge and HellaSwag, despite the known strong performance of DCLM-baseline over C4. The low performance of the 1B model at 300B tokens is also a bit strange, including the decrease in MMLU score compared to 30B tokens. **Additional ablations** The proposed data selection objective is only instantiated with a single fairly arbitrary series of existing models. I would guess that the proposed rank coefficient is quite sensitive to the number and kind of models used. I think it is essential to introduce ablations to understand some of the implicit assumptions on reference models and demonstrate the robustness of the method, i.e. how would the results change when including or excluding large models and when using a different reference model family (e.g. Pythia). I realize that every type of ablation would be computationally too expensive, but even showing analyses like overlap and differences in selected data would be valuable to the reader.
Relation To Broader Scientific Literature: In my view, the key contribution is that the paper provides further empirical evidence that the loss from pre-trained models can be a useful signal for selecting pre-training data, following from work on using loss from one, two or many models to select data ([1], [2], and [3, 4] respectively). The authors achieve strong results with their method, and beat heuristic quality-based data selection baselines, highlighting the promise of automatic loss-based methods requiring less human intervention. However, there is no discussion or comparison to [1, 3, 4] in the paper. [1] Li et al., EMNLP 2024. ScalingFilter: Assessing Data Quality through Inverse Utilization of Scaling Laws [2] Thrush et al., ICLR 2025. Improving Pretraining Data Using Perplexity Correlations [3] Ankner et al., ICLR 2025. Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models [4] Marion et al., 2023. When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale Essential References Not Discussed: As mentioned above, the paper misses a comparison and discussion of ScalingFilter [1] and Perplexity Pruning [3, 4]. Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: * Equation (1) would be much clearer if you indicated that the score S and the ranks C's are per document! * Btw Figures 6 and 7 have the same data, despite meaning to show the most negative and positive domains in terms of selected data Questions For Authors: * I'm curious why you don't address the fact that the method seems to have a strong bias towards selecting “adult content” (in Figure 3) Code Of Conduct: Affirmed. Overall Recommendation: 2
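For concreteness, the per-document score the reviews refer to (Equation (1) in the paper) can be sketched as a rank-agreement computation. This is a hedged reconstruction, not the authors' implementation: the exact loss normalization, tie handling, and sign convention are assumptions here; the fixed capability ordering of the six Llama models is taken from the review above.

```python
# Hypothetical sketch (not the authors' code): per-document predictive
# strength as rank agreement between a fixed model-capability ranking and
# the models' normalized losses on that document.

def ranks(xs):
    # Rank values from 1 (smallest) upward; ties are ignored in this sketch.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(a, b):
    # Spearman rank correlation via the classic 1 - 6*sum(d^2)/(n(n^2-1)) formula.
    n = len(a)
    d2 = sum((ra - rb) ** 2 for ra, rb in zip(ranks(a), ranks(b)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Fixed capability ordering from the review (strongest model gets the highest
# score): Llama-1-65B > 30B > 13B(v2) > 13B(v1) > 7B(v2) > 7B(v1).
capability = [6, 5, 4, 3, 2, 1]

def predictive_strength(doc_losses):
    # High when stronger models achieve lower loss on the document.
    return -spearman(capability, doc_losses)

# Loss decreases monotonically with capability: maximally predictive document.
print(predictive_strength([1.2, 1.3, 1.5, 1.6, 1.8, 2.0]))  # -> 1.0
```

A document whose losses increase as models get stronger would score -1.0 under this sketch, matching the intuition that only documents where extra capability pays off in compression are selected.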
Rebuttal 1: Rebuttal: Dear Reviewer xzzW, Thank you for your valuable suggestions and insightful comments! We address your comments one by one as follows: --- Q1. [ScalingFilter [1], which I believe is an important point of comparison for the proposed method] Thanks for pointing out the related work ScalingFilter [1], which uses the perplexity difference between a large model and a small model as a metric to select data. To show that the intermediate levels of models actually help, we compare with ScalingFilter under our setting, using Llama-65B and Llama-7B to select data under ScalingFilter while removing the mid-size models. Results of training a 1B model on 30B tokens with our data pool (Table 1 setting) are below. |**Method**|**Average Performance**| |:-:|:-:| |Random |37.2| |ScalingFilter| 37.6| |PreSelect|**40.3**| We see that after removing the 4 mid-size models, the results drop by 2.7 points on average. We will add these results to the next revision of the paper. --- Q2. [choose a setting where different downstream tasks elicit different model rankings] Since our setting is pre-training, we didn’t intend to choose a specific task as the target, which was analyzed by Perplexity Correlation [2]. However, in our initial experiments, we did explore such differences. For example, we chose HellaSwag to represent intelligence, which led to a different model ranking from using the average. This resulted in a performance improvement on knowledge-intensive tasks, indicated by 5% lower losses on wiki-related raw texts, while having much higher (worse) losses on other domains, such as 8% on math and 16% on code, at a 400M-model, 8B-token training scale. We think this is evidence that our method actually predicts “downstream abilities” beyond only pre-training scale. --- Q3. [Importance of group-level selection] Thank you for your suggestions; we further compare with computing perplexity rankings on a group level to select data.
The results and analysis can be seen in Q6,7 of the response to reviewer 95of. --- Q4. [Add few-shot performance] Thank you for your suggestions! We apologize for this confusion. For experiments in Table 1, some of these are zero-shot while some are few-shot. We follow the common practice to use zero-shot for classification tasks (perplexity-based metric), and few-shot for generation tasks. We will indicate this more clearly in the revised version. --- Q5. [Low performance in Table 1] We believe this performance difference stems from differences in the experimental configurations between Tables 1 and 2, including variations in pre-training corpus (DCLM pool from CC vs. C4, quality differs in each subset), training framework, evaluation framework (prompts), slightly different model sizes, etc. Thus the two tables are not directly comparable. Regarding the “strange” MMLU results from 1B models in Table 1, those numbers are all around the random-guess level (25%) and do not reflect much, because MMLU is too difficult for small models to learn quickly. For example, in DCLM’s paper, even a 7B model trained at the Chinchilla-optimal scale still shows nearly random accuracy on MMLU. --- Q6. [Additional ablations] Thank you for this suggestion. We did try using many additional models (especially mixing different model families) to compute the ranking coefficient, but that brought significant noise due to variance of evaluation results from different models. Thus the trained models underperformed the current version significantly. Following your advice, we also computed the ranking coefficient based on the Pythia series (14M, 31M, 70M, 160M, 410M, 1B), where we find that the top domains of the Llama-produced data also appear among the top domains of the Pythia-produced data. The negative domains likewise share a large portion of overlap. However, due to the limited time period, we didn’t train on them.
We will show analyses of detailed overlap, differences and more insights about using different families of models in the revised version following your suggestions. --- Q7. [Missing comparison to [1][3][4]] We have added a comparison to ScalingFilter [1] in the rebuttal (Q1). As for [3, 4], we think they share the same insight as one of our baselines, perplexity filtering [5]. We will add them to the related work for discussion. --- Q8. [Questions about adult content] In our observation, literature- and knowledge-related contents are more reflective of downstream tasks (generally have a better rank alignment), which includes essays and some adult content. We believe filtering adult content is an orthogonal direction that could be done separately, which is not the focus of this work. For example, we think a rule-based pre-processing step to filter adult content out could greatly mitigate this issue. --- [5] Wenzek et al., LREC 2020. CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data --- Rebuttal Comment 1.1: Comment: Thank you for your response. I believe these additional results and discussions are valuable and make the paper much stronger, and I hope you can emphasize them in the updated draft - I would even encourage you strongly to highlight this in a rewritten introduction. I am impressed you were able to obtain a ScalingFilter baseline so rapidly. I wonder if you could share the Spearman correlation between the ScalingFilter perplexity ratios and your proposed PreSelect scores? (I mean the actual values and not the values predicted by the fasttext models) --- Reply to Comment 1.1.1: Comment: Thank you so much for your acknowledgement of our additional results and discussions. We appreciate your suggestions, which strengthen the quality of our paper. We will definitely emphasize these in our updated draft.
It did take us several days to set up the ScalingFilter baseline. However, it is based on our largest model (Llama-65B) and smallest model (Llama-7B), for which we had already stored the normalized losses, and training a 1B model on 30B tokens on an 8xH100 node takes around one day. These factors ensured a timely delivery of our additional experiments and analysis. Following your suggestions, we calculate the Spearman correlation between the ScalingFilter perplexity ratios and our PreSelect predictive strength, which is 0.0533. They also have a Pearson correlation of -0.079, which indicates a low correlation between these two metrics. These are measured based on the actual predictive strength scores, which we calculate on the sampled subset.
Summary: This paper explores the problem of data selection for pretraining language models. The authors propose a lightweight method that leverages predictive strength as an indicator to determine whether a document should be included in the pretraining data. To evaluate their approach, they train a group of language models of varying sizes on datasets of different scales, selected using various methods. Their findings suggest that the proposed method outperforms other data selection techniques, leading to more diverse data domains and a more balanced distribution of data length. Claims And Evidence: One key question regarding the experimental setup is why the baseline does not include a method that uses the full dataset without applying any selection. Including such a baseline could serve as a valuable reference to measure how much performance improves or declines due to data selection. This would help in assessing the effectiveness of the proposed method more comprehensively. Methods And Evaluation Criteria: Both methods and evaluation criteria make sense. Theoretical Claims: No. Experimental Designs Or Analyses: The experimental designs are valid. Supplementary Material: Yes. Relation To Broader Scientific Literature: Several prior works have explored different data selection methods for pretraining. Additionally, some studies suggest that the loss on specific data can reflect model performance on downstream benchmarks. This paper builds on that idea by assuming that data which better reflects model capability is more beneficial for pretraining. Based on this assumption, the authors introduce the predictive strength score as a criterion for selecting pretraining data. Essential References Not Discussed: no Other Strengths And Weaknesses: ### Strengths: 1. The paper is well-structured and logically organized. 2. It introduces a new method for data selection in language model pretraining. 3. 
The authors conduct a comprehensive analysis comparing their method to baseline approaches. 4. In addition to evaluating performance, the paper provides a detailed analysis of the characteristics of selected data and how they differ from those chosen by previous methods. ### Weaknesses 1. The experimental design does not fully verify the core assumption of the paper. The assumption is that data on which loss can better predict performance is more beneficial for pretraining. To validate this, a baseline that trains on the entire dataset without selection could be added. This would help determine the actual contribution of the unselected data and provide a clearer picture of the impact of data selection. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer k3Rj, Thank you for your valuable suggestions and insightful comments! We are grateful that you found our work well-structured and logically organized, and we deeply appreciate your recognition of the novelty of our method and the comprehensive analysis in our experiments. We address your comments one by one as follows: --- Q1. [Experimental design does not fully verify the core assumption] We think the most direct way to verify our core assumption is to compare the results between our selected data and randomly selected data, with the same amount of data trained. We show this in Table 1 (400M/1B/3B rows, Random & PreSelect) and Table 2 (410M/1B rows, Random & PreSelect), where training using data selected by PreSelect consistently and significantly outperforms random selection across model sizes and tasks. --- Q2. [Missing baseline that trains on the entire dataset without selection could be added] We thank you for highlighting this important point. In fact, on top of the random selection (i.e. without selection strategy), we did include a baseline that trains on the entire dataset (i.e. without selection). In Table 1 Row 4, we refer to it as “random with 300B”; the model is trained on the entire 300B dataset without selection, and this 300B dataset was our data pool for 1B experiments. The following rows are different selection strategies performed on this entire dataset/pool. We can see that by selecting the high-quality examples, PreSelect not only achieves a 10x reduction in compute requirements but also yields additional performance gains. --- Thank you again for your suggestions and we will describe this baseline more clearly in the revision.
Summary: The paper Predictive Data Selection (PRESELECT) introduces a method for selecting pretraining data based on its predictive strength, defined as the correlation between normalized loss values and downstream task performance rankings. It proposes using a fastText classifier trained on documents with high predictive strength scores to efficiently select high-quality pretraining data. The paper claims that PRESELECT outperforms other data selection methods, including PPL correlation, by offering finer granularity and improved efficiency in language model pretraining. Claims And Evidence: Normalized loss (compression efficiency) is predictive of downstream task performance. The authors cite prior work (Huang et al., 2024) showing that losses on certain text domains correlate with task performance. They extend this idea to a finer document level. PRESELECT is more effective than existing data selection methods, including PPL correlation. Experiments demonstrate performance improvements on 17 benchmarks, outperforming PPL correlation and other baselines. PRESELECT offers a more scalable and efficient approach to data selection. The method eliminates human heuristics, operates at the document level (not just domain level), and requires only a lightweight fastText classifier. Methods And Evaluation Criteria: The paper evaluates PRESELECT against several baselines, including: PPL correlation, DCLM and FineWeb-Edu, and random selection and low-perplexity filtering. Performance is assessed on 17 NLP benchmarks spanning general understanding, knowledge-intensive tasks, mathematics, and coding. The evaluation criteria include accuracy on NLP tasks, bits-per-character (bpc) for math and code tasks, and compute efficiency. Theoretical Claims: There are no theoretical claims in the paper, but there are empirical investigations of the hypothesis that ranking-based correlation is more robust than Pearson correlation for identifying predictive pretraining data.
It suggests that documents where normalized losses strongly correlate with model rankings are more useful for training. This claim is similar to that of the PPL correlation paper but is framed as an original insight, despite the PPL paper pre-registering experiments on a similar premise. Experimental Designs Or Analyses: The experiments compare PRESELECT with multiple baselines using standardized datasets. The study measures predictive strength using rank-based correlation rather than Pearson correlation, arguing for better robustness. The claim that PRESELECT outperforms PPL correlation is supported by experimental results but does not fully acknowledge that the core methodology (perplexity-based ranking) was introduced in the PPL paper. Supplementary Material: The authors state they will open-source their trained data selection scorer and datasets. However, they have not done it yet. Details on dataset filtering, hyperparameters, and additional evaluation results are provided in the appendix. Relation To Broader Scientific Literature: The paper references prior work on compression as intelligence (Huang et al., 2024) and perplexity-based filtering (Thrush et al., 2024). It places PRESELECT within the context of data selection research, contrasting it with heuristic-based and supervised selection methods. However, it does not adequately credit perplexity correlation methods in prior literature as a direct predecessor. Essential References Not Discussed: The authors misrepresent PPL correlation by suggesting that it only uses Pearson correlation, despite the PPL paper pre-registering rank correlation-based experiments. They do not cite the updated version of the PPL correlation paper that includes additional benchmarks and domain-level experiments. Other Strengths And Weaknesses: Strengths: efficient, scalable data selection method, evaluated on many benchmarks, demonstrates compute savings.
Weaknesses: The novelty claim is questionable, given that PPL correlation pre-registered similar ideas months prior. The comparison with PPL correlation is misleading: The authors claim PPL correlation only uses Pearson correlation, which is false. They do not properly acknowledge the prior work’s pre-registration of page-level results. Overlap in experimental setup suggests that PRESELECT may be an incremental extension rather than a fundamentally new contribution. Other Comments Or Suggestions: The paper should explicitly acknowledge the PPL correlation work as a foundation and clarify what aspects are genuinely novel. Instead of dismissing domain-level PPL correlation, the paper should compare document-level vs. domain-level ranking on the same benchmarks to fairly assess their differences. The authors should engage with the PPL paper authors to resolve these concerns before publication. Questions For Authors: Why does the paper claim to be the first to propose rank-based correlation when the PPL correlation paper pre-registered similar results months earlier? How does PRESELECT fundamentally differ from PPL correlation, apart from working at a document level instead of a domain level? Why does the paper suggest that PPL correlation only uses Pearson correlation when the PPL paper explicitly discusses rank-based methods? Given that the PPL paper now evaluates on 22 benchmarks (per the OpenReview update), does PRESELECT still maintain a clear experimental advantage? How do the selected datasets from PRESELECT and PPL correlation compare qualitatively? Is there concrete evidence that document-level selection is superior to domain-level selection? Would the authors be open to revising their claims to reflect PPL correlation’s prior contributions? Ethical Review Flag: Flag this paper for an ethics review. 
Ethics Expertise Needed: ['Research Integrity Issues (e.g., plagiarism)'] Ethical Review Concerns: The paper titled: "Predictive Data Selection: The Data That Predicts Is the Data That Teaches" shares an alarming amount of overlap with the paper "Improving pretraining data using perplexity correlations" published at ICLR 2025 [Thrush et al., 2024]. I have been in conversations with the authors who agree and have reached out for comments. It seems there's a fundamental disagreement. I would argue that this paper is not fit for publication until this has been resolved. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer 95of, Thank you for your review. Our submission was flagged with “Research Integrity Issues (e.g., plagiarism)” due to concerns related to the PPL correlation paper. We believe this is an unreasonable accusation that is misleading to others. As we respond to the specific points below, we hope you can properly revise the statement. We are open to resolving concerns in our paper by discussing the relations between the two works more clearly (as we will mention below), but we also refute unreasonable accusations. First of all, we would like to sort out the timeline of different versions of the PPL correlation paper. The first version on arxiv was released on Sep 9, 2024 (PPLv1). The second version on arxiv was updated on March 10, 2025 (PPLv2), after the ICML submission. In our submission, we mainly referred to PPLv1. **While we will also discuss PPLv2 below, it should not be considered when judging this paper.** Second, as PPLv1 was released 5 months prior to this submission, we did not really mention our works are concurrent following the common definition of “concurrency” – **but in reality, these two works are indeed concurrent and we independently developed this idea inspired by Huang et al. 2024. PPLv1 was released early partially because it only conducted 160M-parameter experiments on small scales, and we aimed for 1B and 3B scales, more serious experimental settings (pre-filtered data pool), and more benchmarks, so it took longer.** This may be the main reason why we write this paper in the current style and the reviewer feels PPLv1 was not acknowledged enough. Even so, we note that our submission and PPL correlation still admit significant differences, and lead to very different empirical results in the end.
We continuously acknowledged the prior contribution of PPL correlation throughout our paper – where we included a dedicated paragraph in the Introduction section to discuss it, and made a comparison with it empirically in the experiments. We address the concerns one by one below: --- Q1. [This claim is similar to that of the PPL correlation paper but is framed as an original insight, despite the PPL paper pre-registering experiments on a similar premise.] In the introduction section, we explicitly pointed out that our submission shares similar intuition with PPLv1 (Line 95) – we were quite open in acknowledging this. However, as we mentioned above, our idea was independently inspired by Huang et al. 2024 and this project started prior to the release of PPLv1; thus we wrote the paper following our actual logic flow of performing this project. To mitigate the concerns on this point, we will revise our description of PPL correlation as “the first to explore domain-level correlation-based methods in data selection”. --- Q2. [The authors state they will open-source their trained data selection scorer and datasets. However, they have not done it yet] We were planning to release them after the review period. --- Q3. [Misrepresent PPL correlation by suggesting that it only uses Pearson correlation] We didn’t suggest PPL correlation only uses Pearson correlation. The reviewer may have gotten this impression from Line 116. We never intended to imply that; Line 116 only meant that grouping documents together as in PPLv1 could enable more stable computation of Pearson correlation. We will revise that sentence to avoid misunderstanding, and in the meanwhile, we will acknowledge PPLv1’s discussion of ranking-based methods in Section 2.2. --- Q4. [They do not cite the updated version of the PPL correlation paper.] If the reviewer refers to PPLv2, we could not cite it in the submission because it was released afterwards. --- Q5.
[The novelty claim is questionable, given that PPL correlation pre-registered similar ideas months prior] We do not think the community has reached agreement on how to understand or treat “preregistration experiments”, and it is not common practice to write them. Conference guidelines do not address them. While we fully understand the motivation of doing this from PPLv1, we do not consider a long list of "we will do xxxx in the future" statements to constitute prior contributions. We respectfully disagree with the notion that it is good practice for the community to simply post preliminary results and preregister experiments to claim priority. --- Because of the imposed character limitation, we continue addressing the remaining concerns in the response to reviewer VRK4. Sorry for any inconvenience and thank you for your understanding. --- Rebuttal Comment 1.1: Comment: Five months can no longer be considered concurrent, especially given the code has been open source. You have to resolve the concerns and adequately address prior work in your paper for my score to change. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 95of, To clarify, in our paper, we never mention PPLv1 was concurrent work; instead we acknowledged it in many places and empirically compared against it. In the rebuttal we just wanted to explain that our idea was independently developed. Sorry for any confusion caused by the wording in our previous rebuttal. &nbsp; In our rebuttal, we believe we clearly addressed the differences between our work and PPLv1 and responded to all the reviewer’s questions. We are willing to revise some parts of the paper to address the reviewer’s concern as explained in the previous rebuttal, but we are unsure how to address the concerns further. To summarize the main points again (for reference to other reviewers and the ACs as well): 1.
**We explicitly acknowledged the contributions of PPLv1 (the first version of the Perplexity Correlation paper) in multiple parts of our paper**, including a dedicated paragraph in the Introduction section to discuss it – where we acknowledged the high-level intuitions are similar, explicit statements that PPL correlation is the most relevant work (Line 169), as well as empirical comparison to it. &nbsp; 2. Compared to PPLv1, the main differences of our work are: a. **PPLv1 adopted domain-level estimation of correlation while we use document-level**, which proves important empirically as we show in experiments. b. **PPLv1 suggests using a diverse set of open-source LLMs to approximate correlations. In contrast, we used six Llama models, which avoids the sensitivity of different base models to evaluation configurations (e.g., prompts)**. In our preliminary experiments, using diverse open LLMs did not outperform random data selection – this aligns with PPLv2’s findings, where they failed to significantly outperform random selection with pre-filtered, high-quality data pools. c. **PPLv1 only conducted experiments for 160M models on limited benchmarks**, while we showed effectiveness of our approach up to 3B models trained on 100B tokens and assessed on 17 benchmarks. &nbsp; 3. The reviewer seems to be curious about the comparison between our paper and PPLv2 (the latest version of perplexity correlation) as well. First, PPLv2 was released in March after the submission, thus **it should not be considered when reviewing our paper**. Second, even compared to PPLv2, the key difference noted in 2.b above remains, which leads to distinct experimental results: **With a pre-filtered data pool, PPLv2 indicates quite negative results and it does not outperform the random baseline or DCLM significantly, while we achieve pretty strong results beating all the baselines under the DCLM pre-filtered data pool.** &nbsp; 4. 
PPLv1 “preregistered” many experiments as future tasks, some of which overlap with our settings. However: a. These “preregistered experiments” were not conducted at the time of our submission, even five months after they were initially proposed. b. We do not believe that preregistered experiments diminish our contributions. The research community has not reached a consensus on whether merely preregistering experiments constitutes a valid claim of prior contributions, and we leave it to others to form their own judgments on this matter. c. Our key difference (point 2.b above) remains and was not preregistered by PPLv1. d. As mentioned in point 3 above, even after incorporating these preregistered experiments, PPLv2 does not perform well on pre-filtered data pools, further underscoring the uniqueness and effectiveness of our approach. &nbsp; 5. We are very willing to revise some parts of our paper to mitigate the reviewer’s concerns, which we believe can be quickly modified, including: a. Acknowledging that PPLv1 discusses a ranking-based correlation metric; b. Acknowledging the preregistered experiments of PPLv1; c. Adding the additional baselines from Q6 in the first rebuttal.
Summary: This paper leverages the intuition that data on which the losses of N models correlate with the benchmark performance (ranking) of those N models is the most useful for training. The paper proposes a simple numerical score for measuring this consistency in ranking (different from, and more numerically stable than, Pearson correlation). To scale this effectively, after applying the method to a small subset of data, a fastText classifier is trained to separate text with high and low scores under this methodology. Claims And Evidence: Yes, the main claim that the data that "predicts" is the most useful for training is empirically validated through experiments. After the main empirical results showing that this method achieves better accuracy than baselines on the DCLM benchmark, the paper provides a qualitative exploration of what data is actually selected. We see a preference for selecting more from reliable sources such as Wikipedia. Also, it seems that this method removes the bias that other data selection methods have towards shorter texts (for example, perplexity-based selection). Methods And Evaluation Criteria: The DCLM benchmark is the most appropriate benchmark for evaluating data selection for pre-training. Theoretical Claims: No theoretical claims are made. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: This is a very useful contribution to the literature on pre-training data selection. The idea of selection based on a validation set has been around for some time now, but the approach of rank correlation using a variety of open source models & distilling this ability into a fastText classifier is highly intuitive and effective, as demonstrated by the results. Essential References Not Discussed: Essential references are all discussed. Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
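The ranking-consistency idea summarized above — a document is useful when the models' losses on it rank the models the same way the benchmark does — can be sketched as follows. This is a minimal illustration with made-up numbers; the pairwise-agreement formulation and all names are assumptions, not the paper's exact score:

```python
from itertools import combinations

def rank_agreement(doc_losses, bench_scores):
    """Fraction of model pairs where the model with the higher
    benchmark score also has the lower loss on this document."""
    pairs = list(combinations(range(len(doc_losses)), 2))
    agree = sum(
        1 for i, j in pairs
        if (bench_scores[i] - bench_scores[j]) * (doc_losses[j] - doc_losses[i]) > 0
    )
    return agree / len(pairs)

# Six models of increasing benchmark quality (fabricated numbers).
bench = [0.30, 0.35, 0.40, 0.50, 0.55, 0.60]
# A "predictive" document: losses fall as benchmark score rises.
good_doc = [3.2, 3.0, 2.8, 2.5, 2.3, 2.1]
# A noisy document: losses uncorrelated with the benchmark ranking.
bad_doc = [2.9, 3.1, 2.6, 3.3, 2.4, 3.0]

print(rank_agreement(good_doc, bench))  # 1.0
print(rank_agreement(bad_doc, bench))   # 8/15 ≈ 0.53
```

Documents scoring high on such a metric would then serve as positive examples for the fastText classifier that scales the selection to the full corpus.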
Rebuttal 1: Rebuttal: Dear Reviewer VRK4, Thank you for your insightful comments! We are grateful that you found our work very useful and effective. --- --- As Reviewer 95of raises serious concerns about our paper and our response is probably relevant to the other reviewers as well, due to the imposed character limitation we have to list some responses to Reviewer 95of here; we are sorry for any inconvenience and thank you so much for your understanding. --- Q6. [The paper should compare document-level vs. domain-level ranking on the same benchmarks to fairly assess their differences] Our experiments in Table 1 already compared document-level vs. domain-level ranking on the same benchmarks in a fair setting. Following your suggestion, we additionally train a domain-level fastText classifier that takes high-correlation domains as positive examples and the remaining low-correlation domains as negative examples, to fairly assess the difference.

|**Method**|**Average Performance**|
|:-:|:-:|
|Random|37.2|
|Domain-level filtering|37.1|
|Domain-level fastText|37.5|
|PreSelect|**40.3**|

The domain-level fastText performs slightly better than the random baseline and domain-level filtering, with 37.5% accuracy across 15 accuracy-based benchmarks, which is lower than document-level PreSelect. As we also discuss in Q8 below, incorporating many domains in fastText training (especially when there is noise inside each domain) cannot lead to robust and dominant features. --- Q7. [Is there concrete evidence that document-level selection is superior to domain-level selection?] Among the top selected domains (fanfiction.net 0.36%, opentable.com 0.32%, ncbi.nlm.nih.gov 0.30%, alldiscountbooks.net 0.29%, hindawi.com 0.26%), no clearly high-quality domains are upsampled. Because of the rebuttal word limit, we will add more comparisons (e.g., learned fastText features, selected data distribution, ...) to support the advantages of document-level over domain-level selection. --- Q8.
[How does PRESELECT fundamentally differ from PPL correlation, apart from working at a document level instead of a domain level?] As we already acknowledged in the submission, PRESELECT and PPL correlation share similar ideas, so the high-level idea is not that “fundamentally” different, which we never claimed it to be. Even so, there are significant differences: 1. PPLv1 classifies documents at the domain level while we work at a finer granularity; this causes a large empirical difference, as we show in our original experiments and in the added results in the responses to Q6 and Q7. 2. PPLv1 suggested using a large number of open-source LLMs to approximate the correlation, while we only used 6 Llama models. Although we did not mention it in the submission, we tried using many open LLMs in the beginning and it did not significantly outperform the random baseline, so we finally opted for models from the same family to reduce potential noise from evaluating models of different families. 3. Empirically, compared to PPLv1, which only reported 160M-scale results, our experiments cover larger scales and comprehensive benchmarks with strong results, representing a significant empirical contribution as well. --- Q9. [Why does the paper suggest that PPL correlation only uses Pearson correlation when the PPL paper explicitly discusses rank-based methods?] Please refer to Q3. --- Q10. [Given that the PPL paper now evaluates on 22 benchmarks (per the OpenReview update), does PRESELECT still maintain a clear experimental advantage?] First, the new PPL paper was updated after the ICML submission deadline, so it should not be considered when reviewing our paper. Second, even compared to PPLv2, our experimental advantage is very clear: with a pre-filtered data pool, PPLv2 reports quite negative results and does not significantly outperform the random baseline or DCLM, while we achieve strong results beating all the baselines under the DCLM pre-filtered data pool.
We think that working with a pre-filtered data pool represents a more realistic setting; whether a method empirically outperforms the baselines on such a pool makes a huge practical difference and directly determines whether it will really be used in practice. --- Rebuttal Comment 1.1: Comment: I continue to strongly recommend this paper for acceptance.
Double-Filter: Efficient Fine-tuning of Pre-trained Vision-Language Models via Patch&Layer Filtering
Accept (poster)
Summary: This paper presents Double-Filter, a method aimed at optimizing the fine-tuning process of vision-language pre-trained (VLP) models. The method reduces redundancy through two strategies: first, a novel patch selection approach that enhances feature representation via background-foreground separation; second, a genetic algorithm to eliminate redundant architectural layers, improving model efficiency. Experiments on benchmark tasks, including VQA and NLVR2, with METER and ViLT models, show that Double-Filter achieves substantial reductions in computational complexity (e.g., up to 21.18 GFLOPs in VQA with METER) while maintaining competitive accuracy, with minimal performance degradation (0.27% on METER and 0.65% on ViLT). Visualizations further confirm that IPF effectively preserves global contextual semantics, and ALF ensures task-specific layer optimization. ## update after rebuttal The authors' rebuttal has resolved my concerns. I think my previous score is high enough, and I will keep my rating. Claims And Evidence: The claims made in the submission are clear and the paper provides enough evidence. The proposed method achieves significant reductions in FLOPs while maintaining competitive performance. Methods And Evaluation Criteria: The proposed method combines Image Patch Filter (IPF) for data input optimization and Architecture Layer Filter (ALF) for network architecture optimization, providing a comprehensive solution for efficient fine-tuning. The motivation for fine-grained model redundancy removal is reasonable. Experiment results show that the proposed method can remove a large portion of input tokens while maintaining performance. Theoretical Claims: This paper provides enough detail to ensure the correctness of its theoretical proofs. The author not only provides a clear description of each component of the model design but also presents well-defined assumptions, verifying the key efficiency design derivations and proof steps.
Experimental Designs Or Analyses: 1. It achieved significant reductions in FLOPs while maintaining competitive performance. 2. The ablation experiments are well designed and sufficient, clearly demonstrating the effectiveness of each key component proposed. 3. Clear visualization results are given to prove the effectiveness of the method. Supplementary Material: 1. The supplementary material provides more detailed experimental parameter settings, laying the foundation for reproducibility. The motivation for fine-grained model redundancy removal is reasonable. It maintains a coherent and complete structure suitable for submission to ICML2025. 2. More visualization results are provided on different datasets. Relation To Broader Scientific Literature: The motivation for fine-grained model redundancy removal is reasonable. It maintains a coherent and complete structure suitable for submission to ICML2025. Essential References Not Discussed: The references in this paper are sufficient. Other Strengths And Weaknesses: Strength: 1. This paper appears to maintain a coherent and complete structure suitable for submission to ICML2025. 2. Achieves significant reductions in FLOPs while maintaining competitive performance. The topic of model architecture pruning is interesting and will be important for model deployment (e.g. on edge devices). 3. Unlike traditional methods that focus on coarse-grained block-level pruning, the ALF component employs a fine-grained filtering strategy. By targeting specific sub-layers (e.g., Multi-Head Self-Attention, Feed-Forward Network) within Transformer blocks, it achieves a more precise balance between model compression and performance retention. Weakness: 1. The Image Patch Filter (IPF) relies on YOLO and ViT to determine patch importance. However, in complex scenarios or cases with occlusions, YOLO's object detection may be inaccurate, leading to incorrect foreground-background segmentation and potentially affecting the patch selection process. 2.
The paper lacks a detailed analysis of YOLO's parameter count and FLOPs, which is important for assessing its computational cost in the proposed method. 3. Why does the paper not consider addressing redundancy in the text modality? 4. A minor but important point: I noticed that the paper inconsistently uses "FLOPS" and "FLOPs" to refer to floating-point operations. It would be helpful to standardize the terminology throughout the paper for clarity and accuracy. Other Comments Or Suggestions: A minor but important point: I noticed that the paper inconsistently uses "FLOPS" and "FLOPs" to refer to floating-point operations. It would be helpful to standardize the terminology throughout the paper for clarity and accuracy. Questions For Authors: 1. The Image Patch Filter (IPF) relies on YOLO and ViT to determine patch importance. However, in complex scenarios or cases with occlusions, YOLO's object detection may be inaccurate, leading to incorrect foreground-background segmentation and potentially affecting the patch selection process. 2. The paper lacks a detailed analysis of YOLO's parameter count and FLOPs, which is important for assessing its computational cost in the proposed method. 3. Why does the paper not consider addressing redundancy in the text modality? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their very detailed and instructive feedback. We are very happy that you recognize our motivation and work and give positive support. Regarding your suggestions and concerns, we respond one by one below, hoping to answer your questions. >_Q1. “YOLO's object detection may be inaccurate, leading to incorrect foreground-background segmentation and potentially affecting the patch selection process.”_ Thanks for your comment. Indeed, the situation you mention is possible. But this is one of the motivations behind our decision to remove redundancy in the foreground and background separately and to retain a certain proportion of patches in each. On the one hand, our attention mechanism is based on the global attention distribution, so even if the foreground detection misses or mislabels objects, our IPF method can still retain these important entity areas through the background branch (where we assume they are important). In this case, it maintains the same global salient-object attention ability as EViT (Liang et al., 2022). On the other hand, since we distinguish foreground from background, we can pay more attention to the background than EViT, which only focuses on salient areas; even if the background has no clear semantics, it is very important for the logical integrity of the image's contextual expression. >_Q2. “The paper lacks a detailed analysis of YOLO's parameter count and FLOPs, which is important for assessing its computational cost in the proposed method.”_ Thank you for the helpful suggestion. Specifically, the YOLOv8 used in our IPF module requires approximately 4.5G FLOPs per forward pass on one RTX 3090Ti. Taking the METER model as an example, it requires about 100G FLOPs in total per forward pass, of which each transformer block in the interaction network requires approximately 5G FLOPs. Therefore, YOLO accounts for less than 5% of the total, which is approximately equal to the cost of one block.
We will revise Section 3.3 and Table 1 in the new version to explicitly include this small cost and provide a complete picture of all forward computations. >_Q3. “Why does the paper not consider addressing redundancy in the text modality?”_ Thank you for your insightful question; we appreciate the opportunity to clarify this aspect of our work. Our study focuses primarily on image redundancy because images account for the majority of the computational overhead in Vision-Language Pretraining (VLP) models. **As discussed in Section 3.3 (Lines 259–264) of our paper**, textual redundancy is considerably lower than that of image patches, making image redundancy the more critical bottleneck to address. Tasks commonly used in VLP, such as image captioning, VQA, and cross-modal retrieval, typically involve short text sequences. For example, in the METER and ViLT models we adopt, the maximum text length is only 40–50 tokens, whereas ViLT-B/32 processes over 240 image patches, meaning text accounts for less than 20% of the total input length. Similar observations are also reported in [1, 2]. Moreover, since the computational cost (FLOPs) of transformer-based models grows with sequence length, the relatively short text implies that the FLOPs consumed by the text modality are inherently much lower. Therefore, we concentrate on optimizing the visual modality, where redundancy is more significant and computational savings are more impactful. References: [1] Yang, Senqiao, et al. "VisionZip: Longer is better but not necessary in vision language models." arXiv preprint arXiv:2412.04467 (2024). [2] Kim, Wonjae, Bokyung Son, and Ildoo Kim. "ViLT: Vision-and-language transformer without convolution or region supervision." International Conference on Machine Learning, PMLR, 2021. >_Q4. A minor but important point: I noticed that the paper inconsistently uses "FLOPS" and "FLOPs" to refer to floating-point operations._ Thank you for pointing this out.
We will standardize the terminology throughout the paper, consistently using "FLOPs" to ensure clarity and accuracy, and we will carefully proofread the full text to ensure there are no typos. **We hope that the above responses address the reviewer's concerns and questions.** --- Rebuttal Comment 1.1: Comment: Thanks for the authors' reply, which resolved my confusion. In particular, the authors further explained the detailed computational cost of YOLO and the impact of the foreground and background on model fine-tuning. I also browsed the comments of the other reviewers and the authors' replies, and found that the authors supplemented generalization results on a more complex retrieval task. As I commented before, I think the experimental setup of this paper, especially the ablation studies, is relatively complete. Overall, I support the full exploration of traditional machine learning methods to advance contributions to existing deep learning models, so I am willing to accept this paper. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer. Your recognition and support are the greatest encouragement for us to continue improving our work. The reviewer clearly saw the core of our paper: our motivation is indeed to further explore the innovation of traditional machine learning in the efficient fine-tuning of pre-trained VLP models. We believe this is in line with the theme of the ICML conference and makes broad explorations of innovative applications of machine learning technology.
Summary: The paper introduces Double-Filter, a novel approach for efficient fine-tuning of vision-language pre-trained (VLP) models. The method addresses redundancy at both data and model levels. At the data level, an Image Patch Filter (IPF) is proposed that leverages a YOLO detector and ViT attention scores to retain only the most informative image patches, while at the model level, a fine-grained Architecture Layer Filter (ALF) is designed that uses an adjustable genetic algorithm (AGA) to selectively replace redundant sub-layers within transformer blocks. Experiments on VQA and NLVR tasks using the METER and ViLT models demonstrate that Double-Filter can significantly reduce FLOPs with only marginal performance degradation, thereby offering a compelling efficiency-performance trade-off. ## update after rebuttal Thank you to the authors for their rebuttal, which has addressed most of my concerns. Hence, I maintain my recommendation of weak accept. Claims And Evidence: The paper makes several key claims: (1) The proposed dual filtering strategy effectively reduces both data-level and architecture-level redundancies, which in turn cuts down computational costs during fine-tuning. This is supported by theoretical FLOPs analysis and extensive ablation studies. (2) The Image Patch Filter preserves global semantic integrity by separately considering foreground and background regions, rather than focusing solely on salient objects. (3) The fine-grained Architecture Layer Filter, implemented via an adjustable genetic algorithm, enables selective pruning of sub-layers while retaining model performance. Experimental results comparing Double-Filter with other parameter-efficient fine-tuning (PEFT) methods (e.g., LoRA, Adapter, DAS) provide quantitative evidence for these claims, and the visualization of IPF outputs supports claim (2). Methods And Evaluation Criteria: The methodology is built on two complementary components.
The IPF first segments an image using a YOLO detector into foreground and background, then ranks patches based on [CLS] attention scores from a pre-trained ViT model. The ALF formulates layer pruning as an optimization problem solved by a genetic algorithm that iteratively adjusts a binary chromosome representation of the transformer's sub-layers. Evaluation is conducted on two downstream tasks (VQA and NLVR) using common metrics such as accuracy and FLOPs reduction, alongside detailed ablation studies that examine varying patch filtering ratios and layer removal extents. Theoretical Claims: The paper provides theoretical analysis regarding the reduction of FLOPs, demonstrating how the Double-Filter framework scales down the computational complexity of transformer blocks from the vanilla model. It also argues that selectively replacing sub-components (rather than entire blocks) offers finer control over efficiency. However, the current experimental setup does not include a dedicated ablation study that directly compares selective sub-component replacement with whole-block removal. A focused comparison, using image patch filtering (IPF) while contrasting ALF with block-level filtering methods, would provide more definitive evidence for the benefits of fine-grained architectural control. Experimental Designs Or Analyses: Experiments are carried out on two widely adopted VLP models—METER and ViLT—across the VQA and NLVR benchmarks. The results indicate that Double-Filter achieves a substantial reduction in FLOPs (e.g., over 21G reduction for VQA on METER) while incurring only a minimal drop in accuracy. Detailed ablation studies further explore the impact of varying the patch filtering ratio and the number of layers pruned, validating the effectiveness of both the IPF and ALF components. The inclusion of inference speed comparisons also demonstrates the effectiveness of the proposed method in accelerating VLP fine-tuning.
Supplementary Material: Supplementary materials include additional visualizations that illustrate the patch filtering results as well as model hyperparameter settings. Relation To Broader Scientific Literature: The work builds on recent advances in efficient fine-tuning methods for VLP models, extending beyond traditional adapter-based approaches and block-skipping strategies (e.g., DAS (Wu et al., 2024b)). The work also builds on a patch filtering technique that assigns an importance score to each patch by leveraging the classification token ([CLS]) of the pre-trained ViT, following (Liang et al., 2022). Essential References Not Discussed: The paper comprehensively discussed the related works. Other Strengths And Weaknesses: Strengths of the paper include its dual-level approach to reducing redundancy, a sound theoretical grounding through FLOPs analysis, and extensive experimental validation across multiple benchmarks. The modular design of the proposed filters offers flexibility and could be adapted to various VLP models. On the other hand, potential weaknesses include increased system complexity due to the integration of a YOLO detector and genetic algorithm, and limited exploration of the method's generalizability to tasks beyond VQA and NLVR. Other Comments Or Suggestions: It would be beneficial for the authors to expand the evaluation to a broader set of tasks or datasets, which could further validate the robustness of the approach. A discussion on hyperparameter sensitivity for the genetic algorithm would also help in understanding the stability of the ALF component. Questions For Authors: Could you elaborate on your choice of a genetic algorithm for the fine-grained Architecture Layer Filter? Given that GAs are heuristic, do not always guarantee an optimal result, and require hyperparameter tuning, did you consider alternative optimization methods that might offer stronger guarantees on the quality or stability of the optimization outcome?
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer very much for the kind words, for the interest in our research activities, and for the very insightful comments. >_W1: “Potential weaknesses include an increased system complexity due to the integration of a YOLO detector and genetic algorithm, and limited exploration of the method’s generalizability beyond VQA and NLVR.”_ Thanks for the comment. Regarding the system complexity of integrating a YOLO detector and a genetic algorithm: as shown in Table 1 and Table 2, after introducing the YOLO-based IPF and AGA-based ALF, our Double-Filter significantly reduces FLOPs and greatly improves inference time. Although the method introduces additional algorithms, the inference efficiency of the VLP model is significantly improved. Regarding the method's generalizability to more tasks, we further experimented on the more complex image-text retrieval task by renting a larger GPU. Due to time limits, the table below reports our Double-Filter with the same setting as Table 1 in the paper, compared to mainstream methods.

|Method|METER IR/TR R@1|METER Add. FLOPs|ViLT IR/TR R@1|ViLT Add. FLOPs|
|:-|:-:|:-:|:-:|:-:|
|Classifier Only|78.80/89.00|0.00|57.42/78.00|0.00|
|Shallow Prompt|74.20/88.60|+28.71G|55.92/74.80|+19.53G|
|Deep Prompt|78.84/89.40|+6.53G|58.64/79.50|+5.14G|
|LoRA|79.86/92.60|0.00|57.44/77.70|0.00|
|Adapter|80.38/91.90|+1.64G|62.68/81.40|+0.86G|
|Scaled PA|80.40/93.20|+1.12G|61.88/79.00|+0.44G|
|DAS|80.12/91.80|-11.16G|60.66/80.80|-1.03G|
|**Double-Filter**|80.05/91.22|-21.18G|61.18/79.39|-4.72G|

On the cross-modal retrieval task, our proposed Double-Filter also shows the lowest computational complexity among existing PEFT methods while maintaining competitive performance.
This further demonstrates the task generalization ability of our proposed Double-Filter method. >_Q1: “Could you elaborate on your choice of a genetic algorithm for the fine-grained Architecture Layer Filter?”_ Thanks. **As described in Section 3.2, we propose an Adjustable Genetic Algorithm (AGA) for fine-grained architecture filtering, approximating an optimal configuration.** It extends the traditional genetic algorithm with adaptive adjustment of a custom number of filtered layers, enabling adaptive training and deployment on different devices. Each model configuration is encoded as a chromosome, with genetic operations (fitness evaluation, crossover, and mutation) guiding optimization across generations. The fitness function balances loss on sampled datasets, ensuring optimal layer reduction. Crossover swaps genes between parents, while mutation adjusts gene values to maintain a fixed number of replaced layers. Additionally, as mentioned in **Appendix A.1.2 (Lines 551-557), our loss calculations in the fitness function were based on 100 batches of training data (equivalent to 2 epochs), and losses were recorded over 10 validation batches**. The search cost of the GA is thus equivalent to adding 2 epochs to the training phase, whereas DAS requires 3 epochs plus validation on the full validation set, so the GA search is still more efficient than the comparison method. Moreover, **please refer to our response to Q2, which further discusses optimization quality and stability.** >_Q2: “Given that GAs are heuristic and do not always guarantee an optimal result and requires hyperparameter tuning, did you consider alternative optimization methods that might offer stronger guarantees on the quality or stability of the optimization outcome?”_ Thanks for the insightful comment. **As we said in Footnote 2 on Page 5, identifying an absolutely optimal structure is NP-hard, rendering exhaustive search impractical.
So, as mentioned in Section 3.2 (line 252), "The AGA aims to identify the approximate optimal reduction with L filtered layers."** Compared with other search algorithms, a genetic algorithm uses population-based search to avoid getting stuck in local optima and has a greater probability of finding the global optimum. Another reason we chose a GA-based model is that genetic algorithms parallelize well, improving computational efficiency, because individuals are evaluated independently. To offer stronger guarantees on the quality and stability of the optimization outcome, the "Elite Strategy" is adopted in step 11 of Algorithm 2 to ensure that the best solution of the current generation is not lost during each generation update, thereby safeguarding search quality. We design the mutation operation to satisfy the constraint of the genetic algorithm while effectively diversifying each generation. **We hope that the above responses address the reviewer's concerns and questions.**
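The constraint-preserving search described above can be sketched as follows — a toy, mutation-only variant with elitism (crossover omitted for brevity); the fitness function and all names are illustrative assumptions, not the paper's AGA implementation:

```python
import random

def make_chrom(n_layers, n_filtered, rng):
    """Binary chromosome: 1 = sub-layer kept, 0 = filtered (replaced)."""
    chrom = [1] * n_layers
    for i in rng.sample(range(n_layers), n_filtered):
        chrom[i] = 0
    return chrom

def mutate(chrom, rng):
    """Swap one kept and one filtered position so the number of
    filtered sub-layers stays fixed (the adjustable constraint)."""
    kept = [i for i, g in enumerate(chrom) if g == 1]
    filt = [i for i, g in enumerate(chrom) if g == 0]
    child = chrom[:]
    child[rng.choice(kept)] = 0
    child[rng.choice(filt)] = 1
    return child

def search(fitness, n_layers=24, n_filtered=12, pop=20, gens=50, seed=0):
    rng = random.Random(seed)
    population = [make_chrom(n_layers, n_filtered, rng) for _ in range(pop)]
    best = max(population, key=fitness)
    for _ in range(gens):
        population = [mutate(rng.choice(population), rng) for _ in range(pop - 1)]
        population.append(best)  # elite strategy: the best chromosome always survives
        best = max(population, key=fitness)
    return best

# Toy fitness: pretend earlier sub-layers matter more, so keeping them scores higher.
toy_fitness = lambda c: sum(g * (len(c) - i) for i, g in enumerate(c))
best = search(toy_fitness)
print(sum(best))  # 12 kept sub-layers -- the filtered-layer count always holds
```

In the real method the fitness would be a loss measured on sampled training batches rather than this toy proxy, but the mechanics of the constraint and the elite strategy are the same.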
Summary: The paper introduces Double-Filter for refining the fine-tuning process of VLP models. It employs two key strategies to reduce redundancy: a new patch selection method that uses background-foreground separation to improve image feature selection, and a genetic algorithm designed to remove redundant architectural layers, thereby enhancing both the efficiency and effectiveness of the model. Together, these strategies aim to streamline the fine-tuning process while maintaining the model's performance. Experimental results demonstrate that the proposed approach achieves competitive performance, with only marginal degradation compared to existing parameter-efficient fine-tuning methods, while significantly reducing computational complexity through the filtering of over 60% of image patches and 12 model layers. Claims And Evidence: Experimental results demonstrate that the proposed approach achieves competitive performance, with only marginal degradation compared to existing parameter-efficient fine-tuning methods, while significantly reducing computational complexity through the filtering of over 60% of image patches and 12 model layers. The claims are clear with enough evidence. Methods And Evaluation Criteria: The proposed efficient training and inference Double-Filter method makes it possible to deploy multimodal LLMs on low-resource devices by removing redundancy from both data and models, and also provides an effective reference for efficient training. Theoretical Claims: The theoretical method proposed in this paper is correct, and detailed algorithms are given. In addition, this paper verifies the key efficiency design derivation.
Experimental Designs Or Analyses: Experimental results demonstrate that the proposed approach achieves competitive performance, with only marginal degradation compared to existing parameter-efficient fine-tuning methods, while significantly reducing computational complexity through the filtering of over 60% of image patches and 12 model layers. The ablation experiments and visualization results are sufficient to demonstrate the effectiveness of each key component proposed. Supplementary Material: The supplementary material provides more visualizations and detailed experimental parameter settings. It can help readers further understand the model performance and reproducibility details. Relation To Broader Scientific Literature: Research on VLP efficiency is very extensive. This paper introduces new thinking to this field through double filtering of data and model based on the idea of the genetic algorithm, which is beneficial. Essential References Not Discussed: Enough. Other Strengths And Weaknesses: Strength: 1. The paper is well-written and easy to follow, and the motivation is reasonable. 2. I appreciate that the authors provide a detailed proof process for the claimed efficiency while describing the model in detail (giving two detailed algorithms). This paper provides enough detail to ensure the correctness of its theoretical proofs. 3. Double-Filter achieved significant reductions in FLOPs while maintaining competitive performance. It demonstrates versatility across multiple VLP models (METER and ViLT) and tasks (VQA and NLVR2). Weakness: 1. The computational cost of YOLOv8 is included in Section 3.3 (FLOPs analysis). The authors claim that its cost is much lower than other modules, but no clear evidence is provided. Please estimate the forward pass cost of this module, as it is part of the forward process of the proposed method. The same problem exists in Table 1; please consider all forward computations. 2.
In the Image Patch Filter (IPF) section, the paper mentions sparsity ρ ∈ (0, 1). Is this sparsity a hyperparameter, and if so, how is it set or optimized during the training process? Could the authors provide more details on how this parameter is determined? 3. In the Fitness Function, why must $\beta$ be greater than half of the maximum summed losses? How does this choice impact the optimization process? 4. Why choose METER and ViLT instead of other VLP models as the base models for experiments? The authors need to give a clearer explanation and rationale. 5. On page 6, in the left column, there are two formulas that appear to be excessively long. Other Comments Or Suggestions: See Weaknesses. Questions For Authors: 1. In the Image Patch Filter (IPF) section, the paper mentions sparsity ρ ∈ (0, 1). Is this sparsity a hyperparameter, and if so, how is it set or optimized during the training process? Could the authors provide more details on how this parameter is determined? 2. Please estimate the forward pass cost of this module, as it is part of the forward process of the proposed method. The same problem exists in Table 1; please consider all forward computations. 3. Why choose METER and ViLT instead of other VLP models as the base models for experiments? The authors need to give a clearer explanation and rationale. 4. In the Fitness Function, why must $\beta$ be greater than half of the maximum summed losses? How does this choice impact the optimization process? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thanks for your valuable and constructive comments. We appreciate your recognition of our work's innovative designs, evaluation criteria, and insightful analysis. Below, we address your concerns and questions individually: >_Q1: “In IPF section, the paper mentions sparsity ρ ∈ (0, 1). Is this sparsity a hyperparameter, and if so, how is it set or optimized during the training process? Could the authors provide more details on how this parameter is determined?”_ Thanks for your detailed suggestion. The sparsity parameter (ρ) in the image patch filter is indeed a hyperparameter that controls the filtering ratio of the image patches: the larger the sparsity parameter, the greater the degree of filtering. In fact, in Table 3 of the paper, we have conducted ablation experiments with different settings of the sparsity parameter (ρ) to achieve a balance between efficiency and performance. We will further clarify this in the final version to avoid confusion. >_Q2: “The authors claim that YOLOv8’s cost is much lower than other modules, but no clear evidence is provided. Please estimate the forward pass cost of this module,...”_ Thank you for the helpful suggestion. In our original analysis, we omitted the YOLOv8 cost because it is negligible compared to the overall model computation. However, for completeness, we now calculate it explicitly. Specifically, the YOLOv8 used in our IPF module requires approximately 4.5G FLOPs per forward pass (measured on one RTX 3090Ti). For the ViT-based encoder, taking the METER model as an example, a forward pass requires a total of about 100G FLOPs, of which each transformer block in the interaction network requires approximately 5G FLOPs. Therefore, YOLOv8 accounts for less than 5% of the total, which is approximately equal to the cost of one block. We will revise Section 3.3 and Table 1 to explicitly include this small cost to provide a complete picture of all forward computations.
In addition, as shown in Table 1 and Table 3, even considering the cost of YOLO, our Double-Filter method significantly reduces the computational cost when fine-tuning the VLP model, especially for models with complex interactions such as METER. >_Q3. “Why choose METER and ViLT instead of other VLP models as the base models for experiments? The author needs to give a clearer explanation and reason.”_ Thanks for the insightful suggestion. Our choice of METER and ViLT as baselines was based on the following considerations: (1) METER and ViLT are the primary baselines for our comparison methods. To ensure a fair and meaningful evaluation, we followed the standard practice of using the same VLP models as our comparison baselines. (2) METER and ViLT represent two distinct VLP architectures. METER employs complex cross-modal fusion mechanisms, while ViLT maintains independent unimodal encoding. Our experimental results demonstrate that Double-Filter achieves more significant efficiency improvements under the complex METER framework. This suggests that Double-Filter has the potential to be even more effective when applied to more complex VLP models. Given these considerations, we chose METER and ViLT. Nonetheless, we acknowledge the importance of further validation on additional VLP models and will explore this in future work. >_Q4. “In the Fitness Function, why must $\beta$ be greater than half of the maximum summed losses? How does this choice impact the optimization process?”_ Sorry for the confusion. The role of $\beta$ is precisely to shift fitness values into the positive domain. In genetic algorithms, roulette-wheel selection requires that the fitness scores produced by the fitness function be positive, because its core mechanism depends on transforming fitness values into a non-negative, normalized probability distribution.
If fitness values are negative, this not only yields negative selection probabilities, which contradicts the fundamental principles of probability, but may also trigger division-by-zero errors or render the probability calculation invalid when the sum of fitness values is zero or negative. Consequently, by shifting all fitness values into the positive domain, the relative ranking of individuals is maintained, thereby ensuring the algorithm's effective and reliable operation. We will clarify this discussion in the new revision. >_Q5. “On page 6, in the left column, there are two formulas that appear to be excessively long.”_ Thank you for the detailed suggestion! We'll simplify or break these formulas into clearer segments to enhance readability. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' clarification, especially the further explanation of the model choices and detailed settings. Our team is also interested in efficient VLP fine-tuning methods and has explored related redundancy removal approaches. The proposed Double-Filter is consistent with the experimental phenomena we observed before, and the introduction of genetic algorithms to this task is eye-opening and inspiring to me. In addition, the FLOPs analysis in Section 3.3 is useful to me, and I plan to continue to follow the authors' work, especially now that they have clarified the details again in the rebuttal; I think the paper is more complete. Although the authors had previously given detailed performance and efficiency comparisons, more comprehensive results were provided during the rebuttal, so I decided to raise the score further. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback and for sharing insights from your own research on VLP efficient fine-tuning and redundancy removal.
We're glad to hear that our Double-Filter approach aligns with your experimental observations and that the integration of genetic algorithms was inspiring to you. We also appreciate your recognition of the FLOPs analysis in Section 3.3 and your continued interest in our work. Your constructive discussion during the rebuttal process helped us refine our explanations and make the paper more comprehensive. Thanks again for your support and for considering an updated score—it means a lot to us! We look forward to discussions in the future.
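The fitness shift discussed in Q4 above can be illustrated with a minimal sketch of roulette-wheel selection (our own illustration with hypothetical values and a generic `fitness = beta - loss` form; the paper's exact fitness function may differ):

```python
import random

def roulette_select(population, summed_losses, beta):
    """Roulette-wheel selection over candidates scored by shifted fitness.

    Fitness is defined here as beta minus the summed loss. If beta exceeds
    every summed loss, all fitness values are strictly positive, so the
    normalized weights form a valid probability distribution.
    """
    fitness = [beta - loss for loss in summed_losses]
    assert all(f > 0 for f in fitness), "beta too small: non-positive fitness"
    total = sum(fitness)
    weights = [f / total for f in fitness]
    # Sample one individual with probability proportional to its fitness.
    return random.choices(population, weights=weights, k=1)[0]
```

Without the shift, a negative or zero fitness sum would make the sampling weights invalid, which is the failure mode the authors describe.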
Summary: The paper introduces Double-Filter, an approach for efficient fine-tuning of Vision-Language Pre-trained (VLP) models by addressing redundancies at both the data and architectural levels. The authors propose two main components: (1) an Image Patch Filter (IPF) that selectively filters image patches by distinguishing between foreground and background regions, and (2) a Fine-grained Architecture Layer Filter (ALF) that employs an adjustable genetic algorithm (AGA) to identify and remove redundant sub-layers within transformer blocks. The authors claim that Double-Filter significantly reduces computational costs (e.g., 21.18G FLOPs reduction on METER) while maintaining competitive performance. The paper provides experimental results on METER and ViLT models, demonstrating the effectiveness of the proposed method in balancing efficiency and performance. Claims And Evidence: The claims made in the paper are generally supported by experimental evidence, but there are some areas where the evidence could be stronger or more comprehensive. The authors provide extensive ablation studies and comparisons with state-of-the-art PEFT methods, demonstrating that Double-Filter achieves significant reductions in computational costs (FLOPs) while maintaining competitive accuracy on VQA and NLVR tasks. For example, the paper reports that Double-Filter reduces FLOPs by 21.18G on the METER model for the VQA task, with only a minimal drop in accuracy (0.27% on average). However, the paper lacks a thorough discussion of the limitations of the proposed method. For instance, while the results are impressive on METER and ViLT, it is unclear how Double-Filter would perform on other VLP models, such as CLIP or ALIGN, which have different architectures and training objectives. Additionally, the paper does not explore the impact of patch filtering or layer removal on more complex multimodal tasks beyond VQA and NLVR, which limits the generalizability of the claims. 
Methods And Evaluation Criteria: The proposed methods are generally suited to the problem of efficient fine-tuning of VLP models, though there are some areas where the approach could be improved. The Image Patch Filter (IPF) is a logical approach to reducing redundancy in image patches, which are known to be a major bottleneck in VLP models. By using a YOLO detector to separate foreground and background regions and then applying ViT attention scores to rank patch importance, the authors ensure that the most semantically relevant patches are retained. This approach is particularly effective for tasks like VQA and NLVR, where both foreground objects and background context are important. However, the reliance on YOLO for foreground-background separation could be a limitation. YOLO is a computationally expensive object detection model, and its use adds overhead to the overall pipeline. The authors do not explore alternative object detection methods, which could potentially offer a better trade-off between accuracy and computational cost. Additionally, the patch filtering process is based on ViT attention scores, which may not always capture the full semantic importance of patches, especially in complex scenes with multiple objects or ambiguous backgrounds. The Fine-grained Architecture Layer Filter (ALF) is a novel contribution, but the use of a genetic algorithm (GA) for layer removal introduces some challenges. While the GA is effective in exploring the space of possible layer configurations, it is computationally expensive and requires multiple generations of evaluation to converge. The authors do not discuss the computational cost of running the GA, which could be significant, especially for larger models or more complex tasks. Furthermore, the GA-based approach may not scale well to models with a large number of layers, as the search space grows exponentially with the number of layers.
Theoretical Claims: The paper does not present any theoretical proofs, so there are no theoretical claims to evaluate. The focus is primarily on empirical results and algorithmic contributions. However, the authors do provide a detailed analysis of the FLOPs reduction achieved by Double-Filter. The paper could benefit from a more rigorous theoretical analysis of the proposed methods. For example, the authors could provide a theoretical justification for why certain layers are more redundant than others, or why the proposed patch filtering strategy is optimal for preserving semantic information. Experimental Designs Or Analyses: The experimental design is generally sound, but there are some areas where the analysis could be improved. The authors conduct extensive experiments on two widely used VLP models (METER and ViLT) and compare their method against several state-of-the-art PEFT methods, including Shallow Prompt, Deep Prompt, LoRA, Adapter, and DAS. The ablation studies on patch filtering and layer removal are particularly insightful, demonstrating the impact of different filtering ratios and layer removal strategies on model performance and efficiency. However, the experiments are limited to two downstream tasks (VQA and NLVR), which may not fully capture the generalizability of the proposed method. The authors do not explore the performance of Double-Filter on other tasks, such as image captioning or text-to-image generation, which are also common applications of VLP models. Additionally, the experiments are conducted on relatively small datasets (e.g., VQA 2.0), and it is unclear how the method would perform on larger or more diverse datasets. Another limitation is the lack of analysis on the impact of Double-Filter on inference time and memory usage. While the paper focuses on FLOPs reduction, these metrics are critical for real-world applications, especially in resource-constrained environments.
Supplementary Material: N/A Relation To Broader Scientific Literature: The paper is situated within the broader literature on efficient fine-tuning of VLP models. The authors discuss related work on adapter-based methods, block-skipping strategies, and other PEFT approaches, highlighting how their method differs by addressing both data-level and architecture-level redundancies. For example, the authors compare their approach to DAS, which also aims to reduce architectural redundancy but does so at a coarser level by replacing entire transformer blocks. Essential References Not Discussed: The paper covers most of the essential references related to efficient fine-tuning of VLP models. The paper could reference more recent advancements in genetic algorithms for neural architecture search, which might provide further context for the proposed AGA-based layer filtering approach. Other Strengths And Weaknesses: Strengths: - The paper presents an effective approach to efficient fine-tuning of VLP models by addressing both data-level and architecture-level redundancies. - The ablation studies and visualizations provide valuable insights into the impact of patch filtering and layer removal on model performance and efficiency. Weaknesses: - The paper could benefit from a more detailed discussion of the limitations of the proposed method, particularly in scenarios where patch filtering or layer removal might negatively impact performance. - The authors could explore additional methods for determining patch importance, beyond the ViT-based approach, to further improve the robustness of the IPF. - The reliance on YOLO for foreground-background separation adds computational overhead, and alternative object detection methods should be explored. Other Comments Or Suggestions: N/A Questions For Authors: 1. The paper mentions that the proposed method is effective under METER and ViLT models. Have the authors tested Double-Filter on other VLP models? 2.
The paper focuses on reducing FLOPs and maintaining accuracy. Have the authors considered the impact of Double-Filter on inference time and memory usage? 3. The paper uses a YOLO detector for foreground-background separation. Have the authors explored other object detection methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review, acknowledging our contributions, and providing instructive suggestions. We respond to each weakness (_W*_) or question (_Q*_) below. >_W1: “... explore the impact on more complex multimodal tasks beyond VQA and NLVR, ...”_ Thanks for the instructive suggestion. We conducted the VQA and NLVR experiments on one NVIDIA RTX 3090Ti, but the retrieval task requires many negative samples, making it difficult to maintain the same batch size as existing methods. To address the concern, we rented a larger GPU and conducted generalization experiments on Flickr30K for image-text retrieval. The table below reports our Double-Filter under the same setting as Table 1 in the paper, compared to mainstream methods.

| Method | METER IR/TR R@1 | METER Add. FLOPs | ViLT IR/TR R@1 | ViLT Add. FLOPs |
|---|---|---|---|---|
| ClassifierOnly | 78.80/89.00 | 0.00 | 57.42/78.00 | 0.00 |
| Shallow Prompt | 74.20/88.60 | +28.71G | 55.92/74.80 | +19.53G |
| Deep Prompt | 78.84/89.40 | +6.53G | 58.64/79.50 | +5.14G |
| LoRA | 79.86/92.60 | 0.00 | 57.44/77.70 | 0.00 |
| Adapter | 80.38/91.90 | +1.64G | 62.68/81.40 | +0.86G |
| Scaled PA | 80.40/93.20 | +1.12G | 61.88/79.00 | +0.44G |
| DAS | 80.12/91.80 | -11.16G | 60.66/80.80 | -1.03G |
| **Double-Filter** | 80.05/91.22 | -21.18G | 61.18/79.39 | -4.72G |

Our Double-Filter achieves the lowest computational complexity among PEFT methods and maintains competitive performance in image-text retrieval. >_W2: “The authors do not discuss the computational cost of running the GA, which ...”_ Thanks for the detailed suggestion. As mentioned in **Appendix A.1.2 (Lines 551-557), our loss calculations in the fitness function were based on 100 batches of training data (equivalent to 2 epochs), and losses were recorded over 10 validation batches**.
The search cost of the GA is equivalent to adding 2 epochs to the training phase, whereas DAS takes 3 epochs and requires validation on the full validation set, so the GA cost is still lower than that of the comparison method. >_W3: “A more detailed discussion of the limitations of the proposed method.”_ Thanks for the detailed suggestion. We will further discuss the limitations in detail. On the one hand, our method relies on a pre-trained object detector, which adds some inference time; on the other hand, we use a genetic algorithm to search for the best subnet, which incurs a long search time in the training stage and is prone to falling into local optima. We will add this discussion in the final version. >_Q1: “Have the authors tested Double-Filter on other VLP models.”_ Thanks for the instructive comment. We chose METER and ViLT as baselines because: (1) METER and ViLT are the primary baselines for our comparison methods. To ensure a fair evaluation, we followed the standard practice of using the same VLP models as our baselines. (2) METER and ViLT represent two mainstream VLP architectures, and others generally follow these designs. METER employs complex cross-modal fusion, while ViLT maintains independent unimodal encoding. Our results show that Double-Filter achieves notable efficiency improvements, especially under METER, suggesting its potential for more complex VLP models. >_Q2: “Have the authors considered the impact of Double-Filter on inference time and memory usage.”_ Thanks. In fact, **as shown in Table 2 (Line 332), we have compared the inference speed of Double-Filter with other methods**. Our results demonstrate that Double-Filter enables processing more samples per unit time, indicating a clear efficiency improvement. The following table provides additional memory consumption (inference with a batch size of 1) to further illustrate the lower memory footprint of Double-Filter.
| METER | Full tuning | Adapter | DAS | **Double-Filter** |
|---|---|---|---|---|
| VQA | 3068M | 3090M | 2950M | **2906M** |
| NLVR | 3030M | 3050M | 2916M | **2884M** |

>_Q3: “Have the authors explored other object detection methods.”_ Thank you for highlighting this vital direction. We employed YOLOv8 because the YOLO family comprises the most mainstream object detectors and best balances detection effectiveness and efficiency. We further tested EfficientDet [1], which needs more computing resources. Due to the limited rebuttal time, we only compared the IPF filtering overlap among detectors and found that over 96% of filtered patches remained consistent, so we believe this has little impact on the final patch retention. Future work will explore alternative methods, including lightweight offline pre-training. [1] Tan, Mingxing, Ruoming Pang, and Quoc V. Le. "EfficientDet: Scalable and efficient object detection." CVPR. 2020.
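The FLOP bookkeeping argued back and forth in these exchanges can be sanity-checked with simple arithmetic, using the figures quoted in the rebuttals (a back-of-the-envelope check, not the paper's measurement code):

```python
# Figures quoted in the rebuttals, in GFLOPs per forward pass.
yolo = 4.5              # YOLOv8 detector used in the IPF module
meter_backbone = 100.0  # approximate METER forward cost
net_reduction = 21.18   # reported reduction, already including YOLO's cost

# YOLO is under 5% of the backbone cost, roughly one transformer block (~5G).
yolo_share = yolo / meter_backbone

# Gross savings from patch/layer pruning before paying for YOLO (~26G).
gross_savings = net_reduction + yolo
```

This matches the rebuttal's claims that YOLOv8 costs less than 5% of a METER forward pass and that IPF and ALF together save about 26G FLOPs before the detector's cost is deducted.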
Summary: The paper proposes Double-Filter, an efficient fine-tuning framework for vision-language pre-trained (VLP) models. It combines two redundancy reduction techniques: (1) an Image Patch Filter (IPF) that leverages YOLO for foreground/background separation and a ViT-based [CLS] attention mechanism to retain informative image patches, and (2) a fine-grained Architecture Layer Filter (ALF) using a genetic algorithm to prune redundant transformer sub-layers. The method is evaluated on visual question answering (VQA) and natural language visual reasoning (NLVR) tasks with METER and ViLT models, showing reduced computational costs (FLOPs) with minimal performance degradation compared to baseline parameter-efficient methods. Claims And Evidence: The paper claims Double-Filter reduces FLOPs by 21.18G (METER/VQA) and 12.51G (METER/NLVR) while maintaining accuracy within 0.5% of full fine-tuning. However, the inclusion of YOLO's computational cost is not accounted for in efficiency metrics , potentially overstating FLOP reductions. The experimental FLOP analysis excludes the YOLO component (Figure 2, Section 3.3), which may add significant overhead during inference, undermining the claimed efficiency. Methods And Evaluation Criteria: The proposed methods (Double-Filter) and evaluation criteria (VQA/NLVR benchmarks with METER/ViLT) align with the objective of improving fine-tuning efficiency for VLP models. However, the evaluation scope is narrow, excluding retrieval tasks critical for VLP benchmarking. YOLO’s overhead in the IPF step is also not accounted for, which could skew FLOP reduction claims. Theoretical Claims: The only theoretical component is the FLOP analysis. The derivation of FLOPs for transformer components (MHSA, MHCA, FFN) is mathematically sound, using standard matrix multiplication cost formulas. However, the analysis oversimplifies assumptions (e.g., ignoring tokenization/embedding layers, assuming perfect parallelism). 
YOLO’s FLOPs are excluded, making the theoretical cost reductions optimistic. Other claims (e.g., redundancy removal effectiveness) are empirical and lack theoretical backing. Experimental Designs Or Analyses: 1. Evaluation Scope: The experiments focus on two tasks (VQA 2.0 and NLVR2) with METER and ViLT as baselines. While these are standard benchmarks, the exclusion of retrieval tasks (e.g., COCO evaluation) limits the validity of claims about broad efficiency. VLP models are often evaluated across a more diverse set of tasks, so the absence of retrieval results weakens the argument for general utility. 2. FLOP Calculation: The FLOP analysis excludes YOLO’s computational cost, which could be significant. The claimed 21.18G FLOP reduction for METER/VQA may be offset by YOLO’s overhead, especially during inference. The analysis assumes constant FLOPs for tasks with different input sizes (e.g., VQA’s image + text vs. NLVR’s dual images + text), which may not reflect real-world scenarios. Supplementary Material: Supplementary material is not found. I have read the Appendix, which contains the visualization and hyperparameter details. Relation To Broader Scientific Literature: It combines two key techniques: (1) an Image Patch Filter (IPF) [1] that uses YOLO for foreground/background separation and ViT [CLS] tokens to retain informative patches, and (2) a fine-grained Architecture Layer Filter (ALF) employing a genetic algorithm to prune redundant transformer sub-layers. [1] Liang, Youwei, et al. "Not all patches are what you need: Expediting vision transformers via token reorganizations." arXiv preprint arXiv:2202.07800 (2022). Essential References Not Discussed: Not found. Other Strengths And Weaknesses: Strengths: This paper is well written and organized. Additional Weaknesses: The experimental design lacks diversity, focusing solely on METER/ViLT with fixed sparsity ratios (e.g., 60% patch filtering).
Scalability to larger VLP models (e.g., OFA, BEIT-3) or datasets (COCO, Conceptual Captions) is untested. The supplementary material (Appendix) provides incomplete details on genetic algorithm parameters or YOLO hyperparameters, limiting clarity on implementation. Incorporating YOLO adds complexity and potential latency to the pipeline, contradicting the efficiency goals. YOLO’s inference time likely outweighs the savings from patch pruning, especially in real-time applications. The Architecture Layer Filter (ALF) draws parallels to LayerDrop (Fan et al., 2019) and DAS (Wu et al., 2024b), which prune entire transformer blocks. ALF’s fine-grained pruning (targeting MHSA/MHCA/FFN sub-layers) is more granular, akin to methods like Dynamic Head Pruning (Michel et al., 2019). The use of a genetic algorithm for layer selection is novel but conceptually related to AutoML frameworks (e.g., NAS), though this paper doesn’t position ALF as a neural architecture search. The lack of benchmarking against LayerDrop-style dynamic inference suggests a gap in comparing to adaptive methods. Other Comments Or Suggestions: N/A Questions For Authors: How does YOLO’s computational cost compare to the FLOP savings from patch/layer pruning? Can the pipeline function without YOLO for fair comparison to prior work? Why were retrieval tasks excluded, given their importance for VLP benchmarking? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the encouraging feedback and suggestions. However, we believe some of these concerns were already addressed in our paper, so we clarify them again here. We address the main weaknesses (W*) and questions (Q*) below: >_W1: “YOLO's computational cost is not accounted for in efficiency metrics, ...”_ **Clarification:** We would like to clarify that our paper explicitly states that our experiments take YOLO’s FLOPs into account when calculating the model FLOPs, **in the corresponding captions of Table 1 (Line 278) and Table 3 (Line 332), e.g., “_The FLOPs of Double-Filter contain the YOLO detector._”** We compare the complete Double-Filter with others in Table 1 and show the FLOPs reduction statistics for the various IPF settings in Table 3. We hope this clarification resolves the reviewer’s concern. >_W2: “The experimental design lacks diversity, focusing solely with fixed sparsity ratios (e.g., 60% patch filtering).”_ First, we clarify that Table 3 and Figure 3 present ablation studies on various sparsity ratios, which is inconsistent with the reviewer's claim of a fixed ratio. Second, exhaustively testing all ratio combinations would demand substantial computational resources. To balance performance and efficiency, we evaluate four representative ratios—40%, 50%, 60%, and 70%. This approach ensures a comprehensive yet feasible analysis and follows a widely adopted experimental methodology. >_W3: “YOLO’s inference time likely outweighs the savings from patch pruning.”_ As shown in Table 2, YOLO’s inference time is included in Double-Filter. Despite this, it still improves overall inference speed. We will clarify this in the final version. >_W4: “The Appendix provides incomplete details...”_ The main parameters of the genetic algorithm were given. We suspect the concern lies with YOLO. We clarify that our pipeline does not rely on additional settings of YOLO and directly uses the pre-trained YOLOv8 (Reis et al., 2023). Therefore, it does not affect our reproducibility.
We will add more details in the final version. >_W5: “The lack of benchmarking against LayerDrop-style dynamic inference ...”_ LayerDrop-style methods typically require modifications during the pre-training stage to enable adaptive inference, whereas our focus lies in fine-tuning pre-trained VLP models without altering the upstream pre-training process. This makes the comparison less aligned in terms of scope—note that DAS (Wu et al., 2023) also does not compare with such methods. Moreover, unlike dynamic methods that often require loading the entire model and selectively dropping layers, our method only loads a searched subnetwork, reducing memory and computational overhead. >_Q1: “How does YOLO’s computational cost compare to the FLOP savings from patch/layer pruning?”_ Taking METER as an example, as shown in Table 1, our Double-Filter reduces computational costs by 21.18G FLOPs. This figure already includes YOLO’s FLOPs (about 4.5G), so the gross savings from IPF and ALF are about 26G FLOPs. Additionally, Table 3 shows the detailed computational costs, including YOLO’s FLOPs, when applying different filter ratios for IPF. The FLOPs saved by image patch filtering far exceed the cost introduced by YOLO. >_Q2: “Can the pipeline function without YOLO for fair comparison to prior work?”_ YOLO effectively distinguishes foreground and background without complex design, enhancing versatility. In future work, we will further explore how to replace YOLO, but this will undoubtedly require a tailored design, and versatility may be reduced. Additionally, as we clarified in W1, our Double-Filter includes YOLO's cost when comparing with others, so the comparisons are fair. >_Q3: “Why were retrieval tasks excluded, given their importance for VLP benchmarking?”_ Sorry for the confusion. Due to resource limitations, we conducted the VQA and NLVR experiments on an NVIDIA RTX 3090Ti.
However, retrieval requires numerous negative samples, making it difficult to maintain the same batch size as in prior works (e.g., DAS used an NVIDIA Tesla A100 GPU). To address the concern, we rented a larger GPU and ran the image-text retrieval task on Flickr30K.

| Method | METER IR/TR R@1 | METER Add. FLOPs | ViLT IR/TR R@1 | ViLT Add. FLOPs |
|---|---|---|---|---|
| ClassifierOnly | 78.80/89.00 | 0.00 | 57.42/78.00 | 0.00 |
| Shallow Prompt | 74.20/88.60 | +28.71G | 55.92/74.80 | +19.53G |
| Deep Prompt | 78.84/89.40 | +6.53G | 58.64/79.50 | +5.14G |
| LoRA | 79.86/92.60 | 0.00 | 57.44/77.70 | 0.00 |
| Adapter | 80.38/91.90 | +1.64G | 62.68/81.40 | +0.86G |
| Scaled PA | 80.40/93.20 | +1.12G | 61.88/79.00 | +0.44G |
| DAS | 80.12/91.80 | -11.16G | 60.66/80.80 | -1.03G |
| **Double-Filter** | 80.05/91.22 | -21.18G | 61.18/79.39 | -4.72G |

Under the same setting as Table 1, our model maintains high efficiency on the retrieval task while preserving performance.
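The sparsity-controlled patch filtering (IPF) debated throughout these reviews can be sketched in a few lines (a simplification under our own assumptions: `filter_patches` and the foreground-boost heuristic are illustrative, not the authors' exact ranking scheme, which ranks foreground and background separately):

```python
def filter_patches(attn_cls, foreground, rho):
    """Keep the top (1 - rho) fraction of patches by [CLS] attention score.

    attn_cls:   per-patch attention score of the ViT [CLS] token.
    foreground: per-patch booleans from a detector (e.g., YOLO).
    rho:        sparsity in (0, 1); larger rho filters more patches.
    Foreground patches are boosted so background is pruned first.
    """
    n_keep = max(1, round((1.0 - rho) * len(attn_cls)))
    boost = max(attn_cls)
    scores = [a + (boost if fg else 0.0) for a, fg in zip(attn_cls, foreground)]
    ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return sorted(ranked[:n_keep])
```

With rho = 0.6, only 40% of the patches survive, and detector-marked foreground patches survive ahead of higher-scoring background ones.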
VideoRoPE: What Makes for Good Video Rotary Position Embedding?
Accept (oral)
Summary: In this work, the authors study criteria for video position embedding and propose a rotary position embedding method. They claim that a good video position embedding should handle 3D structure, with appropriate frequency allocation to prevent embedding collision, spatial symmetry, and temporal index scaling between text and visual tokens. According to these criteria, they build their method upon M-RoPE (Wang et al., 2024b). They propose low-frequency temporal allocation to avoid embedding collision, a diagonal position ID layout to make the token distances more balanced, and adjustable temporal scaling that controls the temporal interval between visual tokens with a hyperparameter. They validate the proposed idea on public benchmarks: long video understanding, long video retrieval, and video hallucination. ## update after rebuttal My major concerns are all resolved by the rebuttal. I have increased my rating to strong accept. Claims And Evidence: Some of the claims are not fully supported by clear and convincing evidence. - L266-270, second column: “This periodicity creates “hash collisions” (red planes), where distant positions share near-identical embeddings, making the model susceptible to distractor influence.” $\rightarrow$ Why do such collisions make the model susceptible to distractor influence? I understand the proposed method helps avoid such collisions and improves V-NIAH-D performance significantly over the M-RoPE baseline. However, I do not see any logical connection between the collision and distractor influence. Can the authors provide either logical explanations or more direct empirical evidence on this? - L413-414, first column: “Fig. 7 (a) and (b) demonstrate that the proposed V-NIAH-D is more challenging than V-NIAH.” $\rightarrow$ It is quite arguable that V-NIAH-D is more challenging than V-NIAH by merely looking at Fig. 7. Can the authors provide a quantitative measure of how challenging these two datasets are?
Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the problem of video positional embedding. Theoretical Claims: There is no theoretical claim made. Experimental Designs Or Analyses: I have checked all experimental designs and analysis. They all seem to be valid. Supplementary Material: I have reviewed the supplementary material including more experiments on scaling factor, x, y location, and extrapolation to 128k token length, additional details on evaluation benchmarks, V-NIAH-D examples, more attention analysis, and details on frequency allocation. Relation To Broader Scientific Literature: The key contribution of this paper is related to the general multi-modal LLM field. The findings and proposed method could be helpful for the general audience of the field. Essential References Not Discussed: I do not see any missing essential references not discussed. Other Strengths And Weaknesses: - Strengths: This paper is well-motivated and the proposed method is sensible. The experimental results demonstrate the clear merit of the proposed video position embedding method. The paper is well-organized and easy to follow in general. - Weaknesses: There is missing evidence for some of the claims made. Please address the “Claims and Evidence” section. Although the paper is easy to follow in general, there are some errors, typos, and awkward sentences. Furthermore, there are some unclear points in the paper. The presentation quality could be improved to a more professional level. Other Comments Or Suggestions: I list a few errors, typos, and awkward sentences below. - L214-216, second column: “M-RoPE (Wang et al., 2024b) uses dimensions to model temporal, horizontal, and vertical dimensions sequentially” $\rightarrow$ “uses dimensions” seems a bit awkward. - L271-273, second column: “The visualized relationship between the periodicity, monotonicity, and temporal modeling.” $\rightarrow$ Not an English sentence. 
- L357, first column: “…different modes that ….” $\rightarrow$ models Questions For Authors: There are some unclear points in the paper. I list a few below. - On the right side of Fig. 2, what do the axes and cell colors mean? - In Fig. 3 of the main paper and Fig. 9 of the supplementary material, how do we interpret the results? For instance, in L165-176, second column, what does it mean by “the needle is located primarily through vertical positional information”? I do not understand this paragraph. Can the authors elaborate on it? - In L298-303, first column, what does it mean by “creating a stack in the bottom-left corner”? I do not understand this paragraph either. Can the authors elaborate on it? - Can the authors give a more intuitive explanation of Fig. 6? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Dear Reviewer 3qrY, Thanks for your valuable feedback. We sincerely thank all reviewers for acknowledging that this paper is novel (t9dP, juMQ), well-motivated (t9dP, 26sL, 3qrY), achieves significant improvements (t9dP, 26sL, 3qrY), is well-written (t9dP, juMQ, 26sL, 3qrY), and is easy to follow (t9dP, 3qrY). Below we address your questions: > Q1: L266-270, second column. Why such collisions make the model susceptible to distractor influence?...Can the authors provide either logical explanations or more direct empirical evidence on this? We appreciate your request for clarification. As shown in Fig. 4 and Appendix F, we have linked position embedding collisions in M-RoPE to distractor influence. Due to high-frequency oscillations, distant positions can share nearly identical embeddings (red planes in Fig. 4), making the model vulnerable to distractors. In contrast, VideoRoPE avoids such oscillations, enabling more robust temporal modeling. We have also conducted an ablation study (Table 5), confirming that LTA (Low-frequency Temporal Allocation)—by preventing temporal embedding collisions—improves video understanding performance. > Q2: L413-414, first column: It is quite arguable that V-NIAH-D is more challenging than V-NIAH by merely looking into Fig. 7. Can the authors provide quantitative measure of challengingness of these two datasets? In the table below, we provide the quantitative performance gaps observed in various models, including our VideoRoPE, tested on both V-NIAH and V-NIAH-D datasets:

|Model|V-NIAH|V-NIAH-D|
|-|-|-|
|LLaVA-NeXT-Video-7B|14.66|3.99|
|LongVA|90.88|80.44|
|Qwen2.5-VL-7B-VideoRoPE|86.44|86.22|

These results indicate a significant performance drop on V-NIAH-D for most methods (-10.67 for LLaVA-NeXT-Video-7B and -10.44 for LongVA), supporting our claim that V-NIAH-D is more challenging than V-NIAH. Also, our VideoRoPE effectively mitigates the misleading influence of distractors in V-NIAH-D (a mere -0.22).
We will include this quantitative comparison in our revised manuscript. > Q3: I list a few errors, typos, and awkward sentences below. Thank you for highlighting language issues and awkward expressions. We will carefully revise the manuscript to correct these: - "uses dimensions" (L214-216) will be corrected to "uses different dimensions". - "The visualized relationship between the periodicity, monotonicity, and temporal modeling" (L271-273) will be corrected to "The relationship between periodicity, monotonicity, and temporal modeling is visualized in Fig. 4." - "modes" will be corrected to "models" (L357). > Q4: In Fig. 2 right side, what do the axes mean and each cell color mean? Thank you for pointing that out. Fig. 2 and the last row of Fig. 7 are indeed the same, both showing M-RoPE and VideoRoPE performance on V-NIAH and V-NIAH-D benchmarks. The x-axis indicates context length, the y-axis shows frame depth, and the green-to-red color bar reflects needle retrieval accuracy—from perfect to zero. > Q5: In Fig. 3 of the main paper, and Fig. 9 of the supplementary material, How do we interpret the results? For instance, in L165-176, second column, what does it mean by “the needle is located primarily through vertical positional information”? I do not understand this paragraph. Can the authors elaborate on this paragraph? In Fig. 3, by decomposing the attention score into its components, we find that M-RoPE locates and retrieves the needle using the component associated with the feature dimensions capturing vertical positional information, namely the product of the last 48 dimensions of Q and K in self-attention. This goes against the design intention of M-RoPE. > Q6: In L298-303, first column, what does it mean by “creating a stack in the bottom-left corner”? I do not understand this paragraph as well. Can the authors elaborate on this paragraph? Can the authors give more intuitive explanation on Fig. 6?
The three subfigures in Figure 6 represent the schemas of Vanilla RoPE, M-RoPE, and VideoRoPE when encoding text-video-text input. The three axes of each subfigure represent the sequential index (covering both textual and temporal indices) and the horizontal and vertical positions in each frame. Each dimension of Vanilla RoPE increases with the input token index, regardless of the presence or spatiotemporal characteristics of video, yielding a straight diagonal line in 3D space. M-RoPE, in turn, uses part of the dimensions to represent the spatial information of the video, but the horizontal and vertical position indices do not change across video frames. It therefore appears as a vertical stack in the bottom-left corner of the 3D space, deviating from the diagonal direction and from the features pre-trained on text. In contrast, VideoRoPE allows the horizontal and vertical position indices of each frame to change with the frame index while maintaining the diagonal structure, which makes it easier to transfer the LLM to the video modality. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal and other reviews. I appreciate the rebuttal. Most of my concerns are resolved by the rebuttal. However, I am still confused about the "distractor influence". How do the authors define a distractor? --- Reply to Comment 1.1.1: Comment: Dear Reviewer 3qrY, Thank you for your feedback! We are glad that most of your concerns have been addressed. Regarding your last question about the definition and rationale behind distractors in our proposed benchmark, **V-NIAH-D**, we provide a detailed clarification below. In **V-NIAH-D (Visual Needle-in-a-Haystack with Distractors)**, a *distractor* is specifically defined as a frame that is both **visually and semantically similar to the ground-truth needle** yet remains **irrelevant to the query posed**.
These distractors are intentionally introduced to rigorously evaluate the model's ability in accurate interpretation and robust long-context reasoning, by providing challenging *hard negatives* that closely mimic the correct frame but do not answer the query. Distractors are carefully selected to share strong visual and contextual similarity with the needle, such as similar object categories, scene structures, or visual themes. Despite this resemblance, they are irrelevant to the specific question being asked. This ensures that the correct answer remains unambiguous, while also rendering superficial pattern matching or short-term heuristics ineffective. To further increase the difficulty, distractors are inserted every 200 frames—a value guided by the analytical approximation: $2 \cdot \pi \cdot 1000000^{32/128} \approx 198.7$ This periodic strategy ensures that distractors and the needle share similar positional embeddings, thus introducing interference at the position-encoding level during queries. We source these distractors from *Google Image Search* and *Flux*, yielding visually rich and naturally diverse content that is plausibly confusable with the needle, yet explicitly filtered to maintain irrelevance to the query. This design significantly reduces the likelihood of random guessing and encourages models to engage in genuine long-context reasoning, rather than relying on superficial heuristics or shortcut cues. We have illustrated and discussed these concepts clearly in **Figure 8** and **Appendix B** of the manuscript. We welcome any further concerns you might have. In light of our response, we would greatly appreciate your review and a potential adjustment to your score. Thank you again for your time and thoughtful feedback.
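As a quick numeric check, the insertion-period approximation quoted in the reply above can be reproduced directly. This sketch only re-derives the stated arithmetic (rotary base 1,000,000 with the 32/128 exponent as quoted):

```python
import math

# Reproduces the approximation quoted above: 2*pi * 1000000**(32/128),
# i.e. the period of the slowest temporal frequency pair under this base.
period = 2 * math.pi * 1_000_000 ** (32 / 128)
print(round(period, 1))  # 198.7
```

Inserting distractors roughly every 200 frames therefore aligns them with the needle's positional-embedding period, matching the rationale given in the reply.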
Summary: This paper identifies four key factors in extending position encodings from images to videos: spatio-temporal structures, frequency allocation, spatial symmetry, and temporal indexing. Drawing from these observations, the authors propose VideoRoPE, which (1) uses low-frequency temporal allocation to mitigate periodic oscillations, (2) applies a diagonal layout to maintain spatial symmetry, and (3) adopts adjustable temporal spacing to decouple temporal from spatial indexing. The method shows consistent performance improvements in long video retrieval, video understanding, and video hallucination tasks. Claims And Evidence: The four key aspects are conceptually sound, but stronger empirical validation is required, especially on frequency allocation and spatial symmetry. Temporal allocation: The authors’ figures and retrieval results (e.g., Fig. 6) support that low-frequency allocation reduces periodic oscillations, but explicitly showing some specific failure cases of high-frequency allocation would strengthen the claim. Spatial symmetry: The only reference for spatial symmetry (Su et al., 2024b) is very weak. It’s just a blog, not a peer-reviewed source. I understand the need for spatial symmetry, but I think the authors should have provided a more empirical, qualitative study of how spatial inconsistency becomes problematic. Methods And Evaluation Criteria: The authors’ approach—incorporating low-frequency temporal allocation, a diagonal layout, and adjustable temporal spacing—is broadly reasonable. However, one concern remains regarding low-frequency temporal allocation. While using only low frequencies may help capture global or long-range context, it potentially risks missing short-term or local dynamics, which are often critical in videos displaying rapid changes (e.g., fast-moving objects or abrupt scene transitions).
A related question is why the temporal encoding frequencies are not interleaved with the spatial frequencies (like the 2D RoPE approach), balancing both low and high frequencies for the time dimension. For example, one could allocate 32, 48, 48 frequencies for (t,x,y) and interleave them as [t t x y x y x y … t t x y x y x y] or [t x y t x y x y … t x y t x y x y] to preserve the ability to capture both long- and short-term variations. Theoretical Claims: Theoretical claims appear sound, although further empirical and qualitative analysis would provide more compelling support. Experimental Designs Or Analyses: Most of the experiments on different video tasks appear valid and sufficiently controlled, except the scaling factor ablation in Tab. 6. The performance seems to be sensitive to the scaling factor $\delta$ on LongVideoBench. Does the optimal scaling factor $\delta$ vary significantly across benchmarks, or is it stable? Did you tune it separately for each dataset? Supplementary Material: I reviewed all contents of the supplementary material. Relation To Broader Scientific Literature: The core findings and solutions (i.e., diagonal layout) for spatial symmetry were already proposed in (Su et al., 2024b). Personally, I think the proposed method essentially extends RoPE-Tie-V2 (Su et al., 2024b) to the temporal domain. The newly featured low-frequency temporal allocation and temporal spacing are technically less significant or already proposed in (Li et al., 2024), respectively. Essential References Not Discussed: [a] Heo et al., “Rotary Position Embedding for Vision Transformer,” ECCV, 2024. This reference would provide additional context for 2D extensions of RoPE in vision tasks. Other Strengths And Weaknesses: Strengths: - The paper is well-written, motivating the problem clearly. - Empirical results consistently demonstrate improvements across diverse tasks. Weaknesses: - Please see above. Other Comments Or Suggestions: - Captions in Fig. 2 should be more informative.
In particular, for the right figure, the caption should explain what the x and y axes and the colors indicate. - In Eq. 7, I think the equation needs to be corrected to $(t,x,y)=(\tau + (\delta - 1)T_v, \tau + (\delta - 1)T_v, \tau + (\delta - 1)T_v)$ if $T_s+T_v \leq \tau \leq T_s+T_v+T_e$ Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 26sL, Thanks for your valuable feedback. We sincerely thank all reviewers for acknowledging that this paper is novel (t9dP, juMQ), well-motivated (t9dP, 26sL, 3qrY), achieves significant improvements (t9dP, 26sL, 3qrY), is well-written (t9dP, juMQ, 26sL, 3qrY), and is easy to follow (t9dP, 3qrY). Below we address your questions: > Q1: Temporal allocation...showing some specific failure cases of high-frequency allocation... Failure cases of high-frequency allocation in M-RoPE are shown in Fig. 3 and Appendix E. Specifically, the M-RoPE responses illustrate its limitations, with attention visualizations revealing that high-frequency encoding hinders long-range temporal dependencies, favoring local patterns instead. > Q2: Spatial symmetry...more empirical qualitative study of how spatial inconsistency becomes problematic We have quantitatively shown the impact of spatial symmetry in Tab. 5 (second row), with a +1.65 gain on MLVU under 64k context over M-RoPE. To further validate its importance, we evaluated it on 4 benchmarks, with consistent improvements supporting our claim.

|Method|MLVU|VideoHallucer|V-NIAH|V-NIAH-D|
|-|-|-|-|-|
|baseline|61.56|34.3|78.67|74.67|
|+ DL|63.03|34.8|80.44|76.44|

> Q3: While using only low frequencies...potentially risks missing short-term or local dynamics... See Q4. > Q4: Why the temporal encoding frequencies are not interleaved with the spatial frequencies ... For example, ... [t t x y x y x y … t t x y x y x y] or [t x y t x y x y … t x y t x y x y] to preserve the ability to capture both long- and short-term variations. We trained a model using the "[t t x y x y x y]" format and conducted additional comparative experiments with varying context lengths on the LongVideoBench benchmark, which features a wide range of video scenarios—including both rapidly changing dynamic content and slowly evolving scenes.
Our results below show that, on average, low-frequency temporal allocation consistently outperforms the "[t t x y x y x y]" arrangement. This suggests that our frequency design effectively balances the modeling of both global context and local dynamics across diverse video conditions. As for the "[t x y t x y x y … t x y t x y x y]" pattern, we plan to report on it in future updates.

|Context Length|[t...x...y...]|[t t x y x y x y]|[xy...t...] (Ours)|
|-|-|-|-|
|16k|60.05|59.95|62.03|
|32k|59.33|58.40|59.54|
|64k|58.71|57.73|59.12|
|Avg|59.36|59.06|60.14|

> Q5: ...The performance seems to be sensitive to the δ on LongVideoBench. Does the optimal δ vary significantly across benchmarks, or is it stable? Did you tune it separately for each dataset? Table 6 presents the ablation study of the scaling factor on LongVideoBench. Additionally, we report the average results on other benchmarks, MLVU and VideoMME, in the table below. Our analysis indicates that the optimal δ remains consistent across various tasks. We would also like to clarify explicitly in the revised manuscript that we did not tune this δ separately for each individual dataset.

|δ|LongVideoBench|MLVU|VideoMME|Avg|
|-|-|-|-|-|
|0.5|50.83|59.87|58.33|56.34|
|1|54.11|63.54|59.67|59.11|
|2|55.50|65.59|61.67|**60.92**|
|3|53.83|63.38|60.33|59.18|

> Q6-1: The core findings and solutions (i.e., diagonal layout) for spatial symmetry was already proposed in (Su et al, 2024b). Personally, I think the proposed method essentially extends RoPE-Tie-V2(Su et al, 2024b) to the temporal domain. This is not true. RoPE-Tie-V2 (Su et al., 2024b) introduced the concept of spatial symmetry but did not provide experimental validation, nor did it discuss the diagonal layout.
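For reference, the dimension-to-axis layouts compared in the context-length table above can be written out explicitly. This is a sketch: it assumes 128 frequency pairs split 32/48/48 across (t, x, y), following the reviewer's suggested counts, and the helper names and the exact "[xy...t...]" ordering are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def blocked():
    # "[t...x...y...]": temporal pairs take the lowest (highest-frequency) indices.
    return ['t'] * 32 + ['x'] * 48 + ['y'] * 48

def interleaved():
    # "[t t x y x y x y]" repeated: 128 / 8 = 16 blocks -> 32 t, 48 x, 48 y.
    return ['t', 't', 'x', 'y', 'x', 'y', 'x', 'y'] * 16

def low_freq_temporal():
    # "[xy...t...] (Ours)": spatial pairs first, temporal pairs last, so t
    # lands on the highest indices, i.e. the lowest frequencies.
    return ['x', 'y'] * 48 + ['t'] * 32

layouts = {f.__name__: f() for f in (blocked, interleaved, low_freq_temporal)}
# All three layouts use the same per-axis budget; only the placement differs.
assert all(Counter(v) == {'t': 32, 'x': 48, 'y': 48} for v in layouts.values())
assert layouts['low_freq_temporal'][-32:] == ['t'] * 32
```

The three layouts thus differ only in where the temporal pairs sit in the frequency spectrum, which is exactly the variable the table above ablates.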
Additionally, RoPE-Tie-V2 cannot handle streaming video inputs (as discussed by reviewer juMQ regarding streaming scenarios). > Q6-2: The newly featured low-frequency temporal allocation and temporal spacing are technically less significant or already proposed in (Li et al., 2024), respectively. This is not true. Li et al. (2024) primarily focuses on enhancing video models from a data perspective, using only temporal reasoning data in pure text. Their work does not address low-frequency temporal allocation or temporal spacing—both of which are novel contributions introduced by our method. We will clearly highlight these distinctions in our revised manuscript. > Q7: [a] Heo et al. should be cited. We will cite this work in our related work section. That paper is designed for 2D image tasks; in contrast, our VideoRoPE is designed for 3D video tasks. > Q8: Captions in Fig. 2 Thank you for pointing that out. Fig. 2 and the last row of Fig. 7 are indeed the same, both showing M-RoPE and VideoRoPE performance on V-NIAH and V-NIAH-D benchmarks. The x-axis indicates context length, the y-axis shows frame depth, and the green-to-red color bar reflects needle retrieval accuracy—from perfect to zero. > Q9: Typo in Eq. 7 Thank you for identifying this typo. We will correct and clarify this equation in our revised submission. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. After going through the rebuttal, I found that all my concerns are resolved. I strongly recommend adding new results, e.g., streaming videos, temporal allocation, $\delta$ ablation, etc., to the final manuscript. I raise my rating to accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 26sL, Thank you for increasing the score; we appreciate it. We are encouraged by your recognition of the quality and significance of our work.
Following your recommendation, we will include the new results (e.g., results on streaming video benchmarks, ablation studies on temporal allocation and $\delta$, etc.) in our final version. Regards, Authors
Summary: VideoRoPE is a position embedding method designed for video large language models. It addresses four key issues: 3D structure, frequency allocation, spatial symmetry, and temporal index scaling. The authors demonstrate through a new benchmark V-NIAH-D that existing methods perform poorly when distractors are present. VideoRoPE solves these problems through low-frequency temporal allocation, diagonal layout, and adjustable temporal spacing, outperforming other RoPE variants in video retrieval, understanding, and hallucination benchmarks. Claims And Evidence: Yes. VideoRoPE's superior performance is demonstrated through comprehensive evaluations across multiple benchmarks with significant improvements (+12.44% on V-NIAH-D, +11.9% on VideoHallucer). Methods And Evaluation Criteria: Yes. The proposed method directly addresses video position embedding challenges, the new V-NIAH-D benchmark effectively tests model robustness to distractors, and the evaluation uses appropriate benchmarks covering understanding, retrieval, and hallucination tasks across multiple context lengths, which make sense to me. Theoretical Claims: Yes, I did. This paper doesn't contain formal mathematical proofs or theoretical claims requiring verification. These equations correctly describe implementation details of vanilla RoPE, M-RoPE, and VideoRoPE, but they serve as explanations rather than theoretical proofs. Experimental Designs Or Analyses: Yes, I did. The V-NIAH-D benchmark with periodic distractors effectively tests robustness to frequency-based issues. And the evaluations are comprehensive, testing across multiple datasets and context lengths (8k-64k) to assess both in-distribution and extrapolation capabilities. Supplementary Material: Yes. Appendix A and D. 
Relation To Broader Scientific Literature: VideoRoPE extends position embedding research from text-only models to video, addressing the unique challenge of handling both spatial and temporal dimensions in one attention space. It builds on long-context understanding work (like LongRoPE) and attention mechanism research (particularly attention sinks), while making novel contributions to frequency allocation for effectively modeling video's complex spatio-temporal structure. Essential References Not Discussed: The paper covers most of the essential references related to position embeddings for transformer models and video understanding. However, there are a few relevant works that are not cited or discussed that could provide additional context: [1] Chai, W., Song, E., Du, Y., Meng, C., Madhavan, V., Bar-Tal, O., ... & Manning, C. D. (2024). Auroracap: Efficient, performant video detailed captioning and a new benchmark. arXiv preprint arXiv:2410.03051. [2] Li, K., Wang, Y., He, Y., Li, Y., Wang, Y., Liu, Y., ... & Qiao, Y. (2024). Mvbench: A comprehensive multi-modal video understanding benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 22195-22206). [3] Tang, Y., Guo, J., Hua, H., Liang, S., Feng, M., Li, X., ... & Xu, C. (2024). VidComposition: Can MLLMs Analyze Compositions in Compiled Videos?. arXiv preprint arXiv:2411.10979. Other Strengths And Weaknesses: Other Strengths: S1: VideoRoPE addresses key issues of current position embedding for video-LLM including 3D structure, frequency allocation, spatial symmetry, and temporal index scaling. S2: The paper has very strong visualizations to explain frequency allocation and position embedding. S3: The V-NIAH-D benchmark is a valuable contribution that provides a way to test model robustness to distractors. Other weakness: no obvious drawbacks. 
Other Comments Or Suggestions: The paper could potentially be enhanced by discussing additional benchmarks that, while not included in the current comparison, might provide valuable insights, such as those works referenced in the "Essential References Not Discussed" section. Questions For Authors: Q1. How would VideoRoPE need to be adapted for streaming video understanding tasks where the full temporal context isn't available upfront? There are many new datasets and benchmarks [1, 2] for streaming video understanding, which is becoming increasingly important. [1] Yang, Z., Hu, Y., Du, Z., Xue, D., Qian, S., Wu, J., ... & Xu, C. (2025). SVBench: A Benchmark with Temporal Multi-Turn Dialogues for Streaming Video Understanding. arXiv preprint arXiv:2502.10810. [2] Lin, J., Fang, Z., Chen, C., Wan, Z., Luo, F., Li, P., ... & Sun, M. (2024). Streamingbench: Assessing the gap for mllms to achieve streaming video understanding. arXiv preprint arXiv:2411.03628. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer juMQ, Thanks for your valuable feedback. We sincerely thank all reviewers for acknowledging that this paper is novel (t9dP, juMQ), well-motivated (t9dP, 26sL, 3qrY), achieves significant improvements (t9dP, 26sL, 3qrY), is well-written (t9dP, juMQ, 26sL, 3qrY), and is easy to follow (t9dP, 3qrY). Below we address your questions: > Q1: The paper covers most of the essential references related to position embeddings for transformer models and video understanding. However, there are a few relevant works that are not cited or discussed that could provide additional context. Thank you for the suggestion. We will incorporate citations and a discussion of Auroracap, MVBench, and VidComposition into the revised manuscript. These benchmarks, which evaluate video large language models, offer valuable insights into detailed video captioning (Auroracap), general video understanding (MVBench), and compositional structure analysis (VidComposition). We also report the performance of our VideoRoPE on these benchmarks, see Q2 below. > Q2: The paper could potentially be enhanced by discussing additional benchmarks that, while not included in the current comparison, might provide valuable insights, such as those works referenced in the "Essential References Not Discussed" section. Thanks for your suggestion. We evaluate our method on the Video-Detailed-Caption (VDC) benchmark proposed by Auroracap, as well as on MVBench. Since VidComposition has not yet been open-sourced, we will evaluate it once the benchmark is released.

|Model|VDC|MVBench|
|-|-|-|
|Auroracap-7B|38.2|-|
|Vanilla RoPE|43.0|67.1|
|TAD-RoPE|43.8|66.9|
|M-RoPE|**44.0**|67.8|
|VideoRoPE|**44.0**|**68.4**|

> Q3: How would VideoRoPE need to be adapted for streaming video understanding tasks where the full temporal context isn't available upfront? Since there are many new datasets and benchmarks [1, 2] for streaming video understanding, which is more and more important.
Thank you for this insightful question regarding the adaptation of VideoRoPE to streaming video understanding tasks. We acknowledge the growing importance of streaming video understanding, as it closely aligns with how humans naturally perceive the world. Accordingly, we will reference the works you mentioned in our manuscript and discuss the significance of streaming modeling for video understanding tasks. By design, VideoRoPE effectively supports streaming input scenarios, in contrast to RoPE-Tie-V2, which requires prior knowledge of the video's total length and thus cannot accommodate streaming data. Although VideoRoPE was not explicitly trained on streaming video data, we evaluated its effectiveness for streaming modeling on the StreamingBench benchmark [2]. Specifically, we focused on the Real-Time Visual Understanding (RTVU) setting relevant to streaming modeling, as presented in the table below. The results demonstrate that VideoRoPE still exhibits superior performance in streaming video scenarios.

|Model|StreamingBench (RTVU)|
|-|-|
| Vanilla RoPE|75.0|
| TAD-RoPE|75.8|
| M-RoPE|76.2|
| VideoRoPE|**77.1**|

--- Rebuttal Comment 1.1: Comment: Thanks for your comprehensive rebuttal. It shows the effectiveness of VideoRoPE across broader benchmarks and its potential in streaming tasks. My concerns have been addressed. --- Reply to Comment 1.1.1: Comment: Dear Reviewer juMQ, We sincerely appreciate your valuable feedback, which has helped us clarify and strengthen our work. We are glad that we have addressed your concerns and will incorporate your points into our final version. Regards, Authors
Summary: This paper proposes VideoRoPE, a new positional embedding for videos. VideoRoPE extends 1D RoPE to the 3D case to encode spatiotemporal information. M-RoPE, used in Qwen2-VL (Wang et al., 2024b), employs a 3D structure, dividing the feature dimensions into distinct subsets for spatiotemporal encoding. However, the authors argue that the allocation of feature dimensions is suboptimal. To address this issue, they propose to assign higher dimensions (lower frequencies) to the temporal axis, enhancing long-range temporal dependence modeling without interference from oscillations. Extensive experiments have been conducted to compare VideoRoPE against RoPE variants such as Vanilla RoPE, TAD-RoPE, and M-RoPE. The results on LongVideoBench and MLVU demonstrate VideoRoPE’s superior performance in long video understanding, retrieval, and hallucination. Claims And Evidence: The paper makes a strong claim that the proposed VideoRoPE method is superior for video understanding tasks compared to existing RoPE variants. The claims are generally well-supported by the evidence presented. For example, VideoRoPE consistently outperforms Vanilla RoPE, TAD-RoPE, and M-RoPE across multiple popular video understanding benchmarks and the proposed Visual Needle-In-A-Haystack with Distractors benchmark. Methods And Evaluation Criteria: Method: The key idea of VideoRoPE is to improve the frequency allocation of the existing M-RoPE. Allocating higher dimensions (lower frequencies) to the temporal axis is a well-motivated strategy. It aims to prioritize temporal modeling and capture long-range dependencies, which are essential for video understanding. The justification that higher dimensions correspond to a "global branch" in other works also lends credibility to this choice.
The diagonal layout is intended to maintain spatial symmetry and prevent bias toward input order, which is reasonable too, as it ensures that visual input receives equal contextual influence from preceding and subsequent textual information. Evaluation Criteria: The paper evaluates VideoRoPE on a diverse set of video understanding benchmarks including Video-MME, MLVU, etc. The paper also proposes the Visual Needle-in-a-Haystack with Distractors benchmark to evaluate the retrieval capability. The paper follows commonly used metrics like average accuracy to compare against baselines. Theoretical Claims: There is no formal proof and no theoretical claim in the paper. Experimental Designs Or Analyses: The experimental design looks sound. The experiments are quite extensive in terms of the number of benchmarks and tasks evaluated. The main experiments include results on Long Video Understanding (covering 3 standard video QA benchmarks), results on Long Video Retrieval (the needle test), and results on Video Hallucination. Moreover, ablation studies are provided to further justify the design choices. Supplementary Material: Yes, the section named A. MORE EXPERIMENTS has been reviewed. Relation To Broader Scientific Literature: This paper proposes VideoRoPE, a new video positional embedding that is aware of spatiotemporal information. Compared to existing RoPE variants (Vanilla RoPE, TAD-RoPE, and M-RoPE), it has nice properties such as 2D/3D structure awareness, frequency allocation, spatial symmetry, and temporal index scaling. Overall, I believe this paper provides a valuable contribution to the field of video understanding with LLMs. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: -The proposed VideoRoPE is novel to the best of my knowledge. -The improvement on many video benchmarks is significant. -The paper is well-structured and easy to follow.
The problem is clearly defined, the proposed method is clearly explained, and the experimental results are clearly presented.

Weaknesses:
- Only one LLM, i.e., QWen2-VL, is evaluated. It would be more convincing to show improvement over more base LLMs.
- More recent (SOTA) baselines should be compared. For example, LLaVA-Video and LLaVA-OneVision with different sizes.
- The performance degradation at 64k context length (Tab. 2) is not well explored in the main paper.

Other Comments Or Suggestions: The authors are encouraged to address the concerns in the weaknesses section.

Questions For Authors: See the weaknesses section.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer t9dP,

Thanks for your valuable feedback. We sincerely thank all reviewers for acknowledging that this paper is novel (t9dP, juMQ), well-motivated (t9dP, 26sL, 3qrY), shows significant improvements (t9dP, 26sL, 3qrY), is well-written (t9dP, juMQ, 26sL, 3qrY), and is easy to follow (t9dP, 3qrY). Below we address your questions:

> Q1: Only one LLM, i.e., QWen2-VL, is evaluated. It would be more convincing to show improvement over more base LLMs.

Thanks for your suggestion. We have updated the table below with results on QWen2.5-VL. Performance is reported across three categories: long video understanding (LongVideoBench, MLVU, VideoMME) with a 64k context length, short video hallucination at an 8k context length, and average retrieval accuracy. Across all six benchmarks, VideoRoPE's performance remains consistently superior to other RoPE variants, including Vanilla RoPE, TAD-RoPE, and M-RoPE. We will add these results to our revised version.

|Qwen2.5VL-7B|LongVideoBench|MLVU|VideoMME|VideoHallucer|V-NIAH|V-NIAH-D|
|-|-|-|-|-|-|-|
|Vanilla RoPE|53.37|63.13|60.0|45.1|31.77|29.99|
|TAD-RoPE|54.62|66.78|59.0|45.1|28.44|24.88|
|M-RoPE|58.71|68.09|60.6|45.3|77.33|75.11|
|VideoRoPE|**59.20**|**68.84**|**61.6**|**45.8**|**86.44**|**86.22**|

> Q2: More recent (SOTA) baselines should be compared. For example, LLaVA-Video, LLaVA-OneVision with different sizes.

We appreciate your feedback. We have added the experimental results of Video-RoPE with the SOTA models including LLaVA OneVision, LongVU, Apollo, and LLaVA-Video with different sizes (3B and 7B).
Results with 3B size models:

|Model|LongVideoBench|MLVU|VideoMME|
|-|-|-|-|
|VILA1.5-3B|42.9|44.4|42.2|
|Phi-3.5-Vision-4.2B|-|-|50.8|
|LongVU-3.2B|-|55.9|51.5|
|Qwen2.5-VL-3B-VideoRoPE|**54.7**|**62.6**|**58.3**|

Results with 7B size models:

|Model|LongVideoBench|MLVU|VideoMME|
|-|-|-|-|
|LLaVA OneVision-7B|56.3|64.7|58.2|
|LongVU-7B|-|65.4|60.6|
|Apollo-7B|58.5|**70.9**|61.3|
|LLaVA-Video-7B|58.2|70.8|63.3|
|Qwen2.5-VL-7B-VideoRoPE|**62.0**|70.7|**64.4**|

Notably, our method achieves better results with significantly less fine-tuning data (0.33 million) than prior SOTA approaches (8.8 million for LLaVA OneVision, 3.75 million for LongVU, 3.2 million for Apollo, and 2.7 million for LLaVA-Video). Our results highlight the benefits of using our RoPE design in Video Large Language Models.

> Q3: The performance degradation at 64k context length (Tab. 2) is not well explored in the main paper.

In Tab. 2 of the main paper, the performance at 64k (65.56 for MLVU, 61.33 for VideoMME) is slightly worse than at 32k (66.02 for MLVU, 61.67 for Video-MME) on some benchmarks. The reason is that simply increasing the context window doesn't guarantee improved performance. It is challenging for the model to effectively process and utilize extremely long contexts. For example, in Figure 5 of the technical report of QWen2-VL based on M-RoPE, the performance at 64k (70.4) is also slightly worse than at 48k (71.3). We further tested the performance of our VideoRoPE at 48k and 80k context lengths. The results at 48k and 80k showed performance fluctuations, which are consistent with Figure 5 of the QWen2-VL technical report.

|Method (VideoRoPE)|8k|16k|32k|48k|64k|80k|
|-|-|-|-|-|-|-|
|LongVideoBench|54.46|55.29|57.15|56.22|57.26|56.53|
|MLVU|65.19|66.29|66.02|66.43|65.56|64.88|

---

Rebuttal Comment 1.1: Comment: Thanks for providing the rebuttal. I would keep my original rating. I do not improve my rating because the new results are not completely convincing.
- Regarding the experiments with QWen2.5-VL, there is only marginal improvement over M-RoPE (58.71 vs 59.2 on LongVideoBench, 68.09 vs 68.84 on MLVU, 45.3 vs 45.8 on VideoHallucer). It seems that the improvement of VideoRoPE saturates with a stronger base LLM.
- Why don't the numbers of Qwen2.5-VL-7B-VideoRoPE in the third table match the numbers of the first table? The first table also shows the results with Qwen2.5-VL-7B, right?

---

Reply to Comment 1.1.1: Comment: Dear Reviewer t9dP,

We understand the high workload of reviewers, and your comments came when there was only one minute left (April 8th, AoE) for the author rebuttal. Therefore, to ensure you still have time to respond, we managed to submit the rebuttal above quickly. Please kindly take a look at our rebuttal and don't hesitate to respond with more thoughts.

> Q1: Regarding the experiments with QWen2.5-VL, there is only marginal improvement over M-RoPE (58.71 vs 59.2 on LongVideoBench, 68.09 vs 68.84 on MLVU, 45.3 vs 45.8 on VideoHallucer). It seems that the improvement of VideoRoPE saturates with a stronger base LLM.

We appreciate the reviewer's observation that the performance gains of VideoRoPE over M-RoPE are relatively smaller on some benchmarks when applied to Qwen2.5-VL. Nonetheless, VideoRoPE still consistently brings improvements across **all six benchmarks**, especially the retrieval benchmarks (77.33 -> 86.44 on V-NIAH, 75.11 -> 86.22 on V-NIAH-D), which we believe demonstrates its robustness and general applicability, even on recent SOTA vision-language models. Furthermore, our goal is to provide a general method that enhances long-context video understanding, and consistent improvements, albeit marginal in some cases, support the effectiveness of our approach.

> Q2: Why don't the numbers of Qwen2.5-VL-7B-VideoRoPE in the third table match the numbers of the first table? The first table also shows the results with Qwen2.5-VL-7B, right?
That's because **we report the results under different settings**. The caption of the first table already indicates that the results are based on a 64k context, which is intended to demonstrate the model's long-context reasoning ability. As for the third table, since it is a comparison with SOTA models, we report the best results across different context lengths for each model.

We have addressed all of the concerns with detailed responses. We are actively looking forward to your response and further feedback.
Scaling Trends in Language Model Robustness
Accept (spotlight poster)
Summary: This paper conducts an empirical study on language model robustness, examining how model size influences robustness, both for regular models and their adversarially fine-tuned counterparts. In particular, the authors focus on the relationship between the compute expended by the adversary and the defender's compute budget.

## update after rebuttal

Following from the authors' rebuttal, I think it would be beneficial if the authors clarified further that this is not intended as a practical robustness approach, but rather an exploration of the trends and their implications. Regarding the discussion about robustness transfer: the figures the authors suggest could be suitable; the main concern was that it seemed a finding that everyone is unsurprised about, so perhaps dedicating around a page to it seems too much. Potentially, given the suggestions from the other reviewers which may require more page space, it would make sense to lighten the robustness transfer section in the main body of the paper in favour of the appendix. Overall, I am happy to raise my score to accept.

Claims And Evidence: The core claims, examining model size vs robustness, are reasonably well supported by the experiments, with a few caveats: this is a large topic area with a multitude of models, attacks, defences, and datasets. Thorough coverage of *all* of these areas would be unrealistic for a single paper, and so focus and cuts inevitably need to be made. With this in mind, a reasonable sub-selection was carried out, and on the selected experiments the claims are supported. Given the current SOTA in NLP, I was surprised to see the strong focus on toy classification datasets (Spam, IMDB, etc.) compared to the more complex and realistic tasks of StrongREJECT. Some of the claims in the paper are relatively well established already, e.g., that training against strong attacks protects against weaker attacks has been shown all the way back with PGD vs FGSM training.
Methods And Evaluation Criteria: As mentioned previously, a fair selection of attacks and models was used. Figure 1 (along with the extra graphs in Appendix D.8) is used as the basis for the claim that eventually, with larger models, there may be a defensive advantage. Two queries/comments: first, it seems that the gain between the smallest and largest models (three orders of magnitude, 7.6 million to 6.7 billion) only gives around one order of magnitude of increase in attack compute. This doesn't seem like as promising a scenario as is presented in the paper text. Secondly, what was the justification for the target ASR of 2%? If the attacker optimized for a higher ASR, would the picture change? This core analysis seems to have only been carried out on the simpler tasks of IMDB/Spam rather than the more complex, and more realistic, generative task.

Theoretical Claims: N/A; the paper is an empirical one and does not contain theorems/proofs.

Experimental Designs Or Analyses: A more thorough analysis with other attacks such as AutoDAN, TAP, Crescendo, etc. could give a broader, more comprehensive picture, and furthermore some notion of perturbation bound in relation to ASR scaling could be a further useful addition. However, with these benchmarking papers it's always possible/interesting to account for yet another dimension; thus, considering the realistic scope of an ICML paper, I would say there is sufficient coverage to make the experiments valid for the claims they are trying to make.

Supplementary Material: The appendix was reviewed, though given its length and volume of figures, less rigour was applied.

Relation To Broader Scientific Literature: The paper builds on ideas about scale in computer vision robustness and extends them to the NLP domain.

Essential References Not Discussed: No essential works missing.
Other Strengths And Weaknesses: Overall, the paper takes a worthwhile step toward systematizing and analyzing the relationship between model size, defense benefit, and robustness for NLP models. There are some areas that could be improved (see prior sections), but having an explicit study of these properties is useful, rather than relying on ad-hoc intuition gathered from reading many papers in the area.

Other Comments Or Suggestions: The paper is clear and well written; perhaps more page space could have been given to expanding the core analysis rather than also examining aspects such as robustness transfer: there is plenty to discuss already.

Questions For Authors: See prior sections for questions relating to specific aspects of the paper.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:

> Focus on classification

Thank you for this point, it is well-taken. We decided to focus on classification tasks for two main reasons:
1) it allowed us to study 3 orders of magnitude of model sizes—about 1.5 orders of magnitude more than we could have in the generative setting (since in our experience generative models only get reasonably good at the 1B size)
2) it gave an unambiguous signal for attack success or failure (much better than token matching, see [1], and more reliable than llm-as-a-judge [2], which in our experience often gave assessments of attack success/failure that we didn't agree with)

The generative setting brings with it other challenges too: in order to do the tasks in StrongREJECT or HarmBench, we need to use in-context learning or to prompt an Instruct model. Is the model's performance then thanks to its safety training, or other parts of its Instruct training? Finally, Instruct models have often undergone training that we don't know details of, making the FLOP-relative analysis and comparison across model sizes much more difficult than in the Pythia (and to some extent, Qwen2.5 base) classification setting.

Having decided to focus on classification, we tried to do a spread of relevant tasks. Spam and IMDB, while easy tasks, are still real-world-relevant. Also, Helpful and Harmless are quite challenging. If you know of any frontier classification tasks in NLP, we would be grateful to hear about them.

[1] Zhu et al., AdvPrefix: An Objective for Nuanced LLM Jailbreaks, 2024
[2] Raina et al., Is LLM-as-a-judge robust?..., 2024

> Already established claims

Thank you for pointing this out. Like you, we did expect to see strong-to-weak transfer, and confirmed that it occurs across all model sizes. Going further, we unexpectedly only found weak-to-strong transfer for small models. We also investigated changing the threat model, showing that large models generalize better from suffix attacks to infix attacks than smaller ones.
This underscores our conviction that investigating robustness across model scales is crucial, rather than looking at a single point estimate. That said, we could move Figure 5 to the appendix and use the space for more core plots, as you suggest below. Would you agree with this approach? Also, we would be grateful if you could please let us know if there are other results in the main body you think would best be moved to the appendix to make more space.

> Offense-defense balance

Thank you for engaging so attentively with our plots!
* We agree with the "1 order of magnitude" point. Eyeballing the extrapolation in Figure 1, to reach attack-defense parity (y = x + 0) we would likely need to go 4-6 orders of magnitude larger in model size, which is beyond the current largest known models. In your mind, would making this "impracticality for current models" more clear in the paper (perhaps even in the abstract) address your concern? We are trying in this paper to make it clear we aren't trying to give a "practical" suggestion for how to make models more robust, but rather exploring the trends (and promoting a scaling trend perspective more broadly, which we believe is important for evaluating attacks and defenses in general).
* The choice of 2% exactly was somewhat arbitrary (1% or 3% would have worked too—we can include those plots in the appendix if you are interested), but it was constrained on both sides. We needed a value large enough such that the measurement of how much compute it takes to reach that attack success rate is not dominated by noise (e.g., if we had put 0.1% as the ASR threshold, then only a few datapoints would need to be successfully attacked in order to reach that ASR, and there would be more noise in the curve). On the other side, it needed to be small enough that we eventually reach that ASR across all model sizes, even after the models have undergone many rounds of adversarial training.
Because the models eventually become quite robust with adversarial training, going larger than 2% would mean we start missing datapoints because there is no amount of attack compute in the regime studied that achieves a 2% ASR (indeed, even 2% is not reached by some models in Figure 1, which you can see by the lack of error bars around some models/datapoints).
* We did every analysis on Spam and Harmless for Pythia and Qwen2.5, and more tasks where possible (see the Appendix, especially D). We notice we are missing the [Pythia Harmless Offense-Defense plot](https://pasteboard.co/KSvNu98kbogI.png) and will add it to App D (apologies!).

> Focus on core analysis

Thanks for this interesting suggestion. We could cut Figure 5 or 6 (or both) and replace them with core analysis plots. Is there any plot you'd be particularly happy to see in the main body? Perhaps parts of Figures 33 and 35 (to show the sample-efficiency vs flop-efficiency distinction)? Alternatively, we could include more of the attack scaling or adversarial training plots.
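The noise constraint behind the 2% threshold discussed above follows from a quick binomial sketch: the relative standard error of an empirical attack-success-rate estimate blows up as the target rate shrinks. The evaluation-set size of 1,000 below is a hypothetical illustration, not a number from the paper.

```python
import math

def asr_relative_std_err(p, n_attacked):
    """Relative standard error of an empirical ASR estimate over
    n_attacked independently attacked datapoints, under a simple
    binomial model: sqrt((1-p) / (p * n))."""
    return math.sqrt((1 - p) / (p * n_attacked))

n = 1000  # hypothetical evaluation set size

# At a 0.1% target ASR, roughly one success decides the estimate,
# so the relative error is close to 100%:
assert asr_relative_std_err(0.001, n) > 0.9

# At a 2% threshold the estimate is far more stable (~22% here):
assert asr_relative_std_err(0.02, n) < 0.25
```

This matches the authors' point that very low thresholds are decided by only a handful of successful attacks, while thresholds too high risk never being reached after adversarial training.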
Summary: The authors observe a gap concerning investigations of robustness scaling laws in LLMs. They conduct a large-scale empirical study and find that, without other changes, simply increasing model size does not yield increased robustness. However, they find that larger models require fewer samples and less compute for robustification using adversarial training. In addition to robustness scaling laws, the authors also investigate scaling laws in attack settings and find that an increased computational budget reliably increases attack strength. Comparing both trends, the authors hypothesize that defenses may scale faster than attacks.

References for the remaining review: [1] Andriushchenko et al., "Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks", 2024

## update after rebuttal

The authors addressed most of my concerns. The majority of remaining concerns from other reviewers and me seem to be related to the scope of the paper (e.g., more experiments on autoregressive tasks). However, I agree with the authors that their paper provides a major first step, and I think it is unreasonable to expect that they tackle this very complicated topic completely in this first work. I think the results are interesting to the ICML community and recommend acceptance.

Claims And Evidence: The authors claim to provide scaling laws regarding model robustness and attack strength for LLMs.
- They provide evidence across two different LLM families and different model scales. However, the authors do not consider models that were safety fine-tuned
- They use GCG and BEAST as adversarial attacks (not exhaustive, but understandable considering the compute effort)
- They use different random seeds for their evaluation
- They conduct their evaluation mostly on classification tasks, where they fine-tune existing models on classification datasets before evaluation.
The authors provide an argument for the suitability of the classification task for this experiment, but I am not convinced that their argumentation is meaningful (more details in Methods and Evaluation Criteria).

Methods And Evaluation Criteria:
- Two LLM families were chosen that provide models at several scales while still being not unreasonably large. Unfortunately, this selection does not include a strong safety fine-tuned model. While I consider this a disadvantage, it makes it easier to study the behavior of models when increasing the safety budget.
- The adversarial attacks chosen for evaluation are sufficient in my opinion. This kind of evaluation is very costly. While methods that are different in design (e.g., PAIR [1]) would be interesting, I will consider this a minor concern.
- The authors fine-tune models to evaluate them in a classification setting. While this simplifies the comparison, I would argue that the robustness task in LLMs is inherently generative, and a classification setting does not capture nuances present in generative robustness evaluations (e.g., late refusal, early refusal and late harmfulness, etc.). Still, the number of evaluations is extensive and the authors consider one generative setting.
- The metrics used in the evaluation are suitable (if one finds the classification setting appropriate)

Theoretical Claims: N/A

Experimental Designs Or Analyses: The results in the experiments align with relevant results in the literature. I did not find a major flaw in the evaluation.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The authors position their paper appropriately within the literature. They also provide arguments for their design choices, including most of the limitations I name in my review.
Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

**Strengths**
- First comprehensive study concerning robustness scaling laws in LLMs
- Results for two different model families and several different benchmarks
- I found the comparison between scaling robustness and scaling attack strength, and possible trends in the limit of scaling, to be very interesting

**Weaknesses**
- I am very skeptical that the proposed benchmarks are suitable for developing scaling laws in this regime. I would suggest reframing the paper to investigate scaling laws specifically regarding classification tasks. IMO, it's unclear if these laws would remain consistent with varying defense mechanisms and attacks in a generative setting. Even different types of harmfulness may have a severe impact on scaling laws.
- No LLM with strong safety fine-tuning is used for the evaluations. However, I understand that computational concerns are relevant and consider this a minor concern.
- The adversarial training algorithm proposed by the authors could be considered a novel method (e.g., there are no other results that are comparable in the literature). This makes it somewhat hard to assess how important the design choices of the algorithm are for the overall results. Would the scaling laws change considerably if the authors had used another robustification algorithm?

Other Comments Or Suggestions:
- I would recommend moving some results from Appendix D3 to the main paper. While the text is understandable, I found the figures to be very informative. Some of the tables currently take up a lot of space and could be reduced to a single-column format to create space.

Questions For Authors:
- Could the authors provide an argument for the design choice of comparing the models in a classification setting? Why does something like HarmBench with LLM-as-a-judge not work in the proposed setting?
- Could the authors explain their motivation to not use models from a model family with more safety fine-tuning? e.g., Gemma or Llama
- Since classification tasks are used for evaluation, why not compare to strong classifiers, such as BERT, as well?
- Do you think the results depend on the defense methodology and attacks? Would the scaling laws look considerably different for different choices in this regard?

I am willing to increase my score if my questions/concerns are addressed after looking at the other reviews and the authors' responses, and I see considerable potential for an increased score after the rebuttal.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> Other defenses + gen setting

Thank you for the reframing suggestion. We indeed focus on classification (see below and WX3L rebuttal). We believe that our overall points will hold (1: model scale has limited impact on robustness by itself, but improves safety training, and 2: the relative effectiveness of attack and defense compute is more important than point estimates), but the overall offense/defense balance may change for different attacks/defenses or in the generative setting.

Having analyzed classification tasks, we wanted to sanity check at least one generative task. Our Qwen2.5 StrongREJECT results are consistent with point 1, as larger Qwen2.5 models appear to benefit more from safety training. This said, since Qwen2.5 does not have a helpful-only variant, there could be confounding factors (see WX3L rebuttal).

We only intended to make strong claims about adversarial training (and to a lesser extent, perplexity filtering, which is bypassed by BEAST) in classification, and will try to make our focus more clear in the paper. If there is anywhere in particular you think would benefit from attention, please let us know and we will address it.

> More safety tuned LLMs

Thanks for the comment. Note that our StrongREJECT experiment is in fact on Qwen2.5 Instruct, which underwent Harmlessness and Debiasing training as part of RLHF [1]. As you mention, the bulk of our analysis was on finetuned base models which have not undergone safety training. This was intentional. For one, our result that "while model scale on its own only slowly improves robustness, model scale enhances safety training" was only observable because we started from models without safety training. See WX3L rebuttal for more discussion.

[1] Yang et al., "Qwen2.5 technical report", 2024, page 6-7

> Effect of different robustification approaches

This is a good question.
While we chose to implement a fairly simple adversarial training method, it is likely that different algorithms would give somewhat different results. More than the specifics of our approach, we tried to emphasize the importance of offense/defense balance when studying robustification algorithms: ASR goes up with attack compute and down with defense compute, and so the relative strengths of these changes are much more informative than any point estimate of ASR.

> Classification vs HarmBench

Two main reasons for classification:
1) It enables us to systematically study trends in adversarial robustness over 3 orders of magnitude of model sizes. We find LLMs first become competent at key generative tasks like chat dialog at ~1B params, which would have greatly limited the range of model sizes for studying variation in robustness.
2) It allows us to unambiguously tell if an attack was successful or not. In the generative setting, evaluation is less clear. Historical attack evaluations focus on keyword/phrase matching, which does not guarantee bad behavior. LLM-as-a-judge (also used in HarmBench) improves on this but is not perfect: in early experiments, we often disagreed with the LLM judge (more details in WX3L rebuttal).

Finally, we wanted to run a large number of experiments/seeds. This would have been significantly more expensive in the generative setting due to needing to use larger models for generation, plus the judgement cost. All this said, we believe the generative setting is a natural direction for future work.

> No Llama/Gemma

Thank you for this question.

Analytically
* Using Pythia enables apples-to-apples comparisons between different models to isolate the effects of scale on its own, as opposed to scale + different amounts of data, safety training, etc.
* We can only observe the separate effects of model scale and model scale + safety training because we start with models that have not been safety trained

Practically
* Gemma 3 only ranges from 1B–27B in model size, only varying by 1.5 orders of magnitude
* Llama requires efficiency techniques to be practical for larger scales. Using quantization or LoRA would undermine the comparability between models at different scales
* Starting at the 7.6M parameter scale and going bigger lets us afford experiments with multiple seeds over 3 OOMs of model sizes

> No BERT

Thank you for this question. Our Pythia and Qwen2.5 classifiers had high accuracy, so we did not see the need to improve classifier strength. We think studying encoder models could be interesting future work.

> Different attacks/defenses

Interesting question! It depends on which results you are talking about. We expect that the finding that scale alone confers limited robustness but boosts safety training will hold. The offense/defense slopes could significantly change for different attack/defense combinations (exciting future work). Indeed, a central goal of the paper is to promote a scaling lens. It is by plotting these slopes that one could show a future defense technique to be more efficient than any current attack (or vice versa)!
Summary: This paper studies the scaling behavior of adversarial robustness from three angles: attacker compute (number of FLOPs used in the attack), defender compute (number of pretraining FLOPs), and adversarial training. The settings studied include both discriminative and generative tasks. The attacks include both a white-box and a black-box attack.

Claims And Evidence: See strengths and weaknesses

Methods And Evaluation Criteria: See strengths and weaknesses

Theoretical Claims: N/A

Experimental Designs Or Analyses: See strengths and weaknesses

Supplementary Material: See strengths and weaknesses

Relation To Broader Scientific Literature: See strengths and weaknesses

Essential References Not Discussed: See strengths and weaknesses

Other Strengths And Weaknesses:

# Strengths:
- This paper provides a thorough and careful implementation of different adversarial attacks at different scales for LLMs. These insights are a useful tool to help researchers identify various different areas for improvement in the field of adversarial robustness.
- Another interesting part of this paper is the study of adversarial training. It is nice that the authors explored practical safety mitigations, such as adversarial training, to see how they change the attacker-defender curves.
- The paper is well-written and the claims are well supported.

# Weaknesses:
- There are no explorations of any types of defenses other than adversarial training. In general, it's important to understand whether new attack strategies or new defense strategies will have different scaling properties (quantitatively: will these new strategies change the slope of the scaling curves?) than those explored. In terms of attacks, I feel satisfied that the two methods chosen are likely representative of the set of attacks that adversaries might try to explore. However, I think there are several other defenses that could be evaluated (e.g., input filtering / rewriting, or using more reasoning compute).
While I understand that adversarial training is the easiest way for academics to study test-time scaling on the defense side, it would be useful to have results on mitigations that model developers (e.g., OAI, GDM, etc.) are currently deploying. Do these change the scaling curves? This would serve to guide researchers to develop more effective methods and not duplicate work.
- [Minor] The paper does not introduce new technical ideas. However, I still think it is valuable.
- [Minor] This is more a matter of preference, but I think it would be nice if the authors could provide more interpretation of the landscape of adversarial robustness. Currently it reads like a laundry list of results; it would be nice to add more interpretation in the discussion.

Other Comments Or Suggestions:

"Large and small models appear to benefit proportionally to adversarial training: when large models start with a robustness advantage, they maintain it, but they do not increase their advantage through adversarial training."
- Does this suggest that adversarial training isn't actually a scalable method? In some sense the adversarial training doesn't leverage the additional compute added during pretraining. It'd be nice to add some interpretation of this result somewhere, as I think it's interesting to understand from a defender's perspective (e.g., do I need to spend more work on developing more efficient defense algorithms?)

"We first note that the curve slopes are all < 1, meaning that for a given model size, doubling adversarial training compute leads to the attacker needing to less than double attack compute to maintain the same attack success rate... What matters in the long run, however, is not the slope of any given model's scaling curve, but whether increasing model size and adversarial training continue to shift the "robustness frontier" up and to the left."
- This is incorrect, as attacks may develop in the future to better scale with compute, which would break these trends.
In general, it seems that the main point is whether the slope of the attack scaling curve is substantially less than the slope of the defense scaling curve (to the point where it becomes too expensive to compute the attacks). It would be great if the authors could make some of this analysis more precise throughout the text.

Questions For Authors: Is StrongREJECT really the only benchmark exploring the robustness of models to harmful prompts? Why wasn't HarmBench tried? I think it's fine not to include these results, but it would be worth mentioning in the text somewhere why related benchmarks were not evaluated.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: > Other defenses besides adversarial training? Great question, this is exactly the kind of future study that we would like to explore! Indeed, one of our core points is that, because ASR goes up with attack compute and down with defense compute, looking at a single point estimate can be deceiving. Instead, we propose the scaling lens of focusing on offense/defense balance as you suggest. **Input filtering** We implemented a perplexity filter, though we only mention it in passing (line 247), using it to check that our BEAST implementation defeats it. It would be interesting to study scaling properties of model-based filters like LlamaGuard [1] more directly, and would be relevant in the generative setting [2]. In contrast, using a perplexity filter does not lend itself to a study of scaling. **Rewriting and retokenization** Like perplexity filtering, rewriting and retokenization are one-off increases in cost, where there is no way to spend more compute to make the defense stronger. In the same way that BEAST defeats perplexity filtering, we believe that future algorithms could likely circumvent these defenses. We also suspect these defenses are unpalatable to frontier model developers due to possible performance degradation. Therefore, we focused on scaling trends for a fixed set of attacks and defenses where compute can be smoothly scaled, in contrast to the cat-and-mouse setting of novel filters and filter circumventions. **Test-time scaling** This is an interesting question and has recently been explored in [3], where the authors find that increasing test-time compute reliably improves robustness in a generative setting. 
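As an aside, the perplexity-filter defense mentioned above can be sketched in a few lines. A real filter would score inputs with an LLM; here a toy add-one-smoothed unigram model stands in, and the corpus and threshold are purely illustrative.

```python
import math
from collections import Counter

def unigram_perplexity(text, counts, total, vocab_size):
    """Perplexity of `text` under an add-one-smoothed unigram model."""
    tokens = text.split()
    log_prob = sum(math.log((counts[t] + 1) / (total + vocab_size)) for t in tokens)
    return math.exp(-log_prob / max(len(tokens), 1))

# "Train" on a tiny benign corpus (purely illustrative).
corpus = "the cat sat on the mat and the dog slept on the rug".split()
counts = Counter(corpus)
total, vocab = len(corpus), len(counts)

benign = unigram_perplexity("the dog sat on the mat", counts, total, vocab)
gibberish = unigram_perplexity("xq zr vv qpl mno", counts, total, vocab)

# A filter rejects inputs above a perplexity threshold; token-level
# adversarial suffixes tend to look like gibberish and score high.
THRESHOLD = 15.0
assert benign < THRESHOLD < gibberish
```

Attacks like BEAST sidestep exactly this kind of check by searching for low-perplexity adversarial text, which is why the rebuttal treats perplexity filtering as a one-off cost rather than a scalable defense.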
**Developing more effective methods and not duplicating work** We would recommend the following: * Do adversarial training in latent space [4], which is much more compute-efficient than in token space * Use LLM-based input and output filters [1, 2] [1] https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations, 2023 [2] https://www.anthropic.com/news/constitutional-classifiers, 2025 [3] https://openai.com/index/trading-inference-time-compute-for-adversarial-robustness, 2025 [4] Casper et al., Defending against unforeseen failure modes with latent adversarial training, 2024 > [Minor] The paper does not introduce new technical ideas. However, I still think it is valuable. We agree on both fronts :). One possible small technical contribution is setting up the mildly intricate adversarial training pipeline (details in Appendix D2) but this is a minor contribution at most. We will also make the code public for others to use. > [Minor] Interpretation of landscape? Thank you for raising this. Having read the paper so many times, it’s hard for us to catch issues like this, so this is very helpful feedback. We will go over the intro and related work sections to improve flow and add interpretation. > Is adversarial training actually scalable? Great question, we will attempt to explain this better and give a takeaway message in the paper. We find that adversarial training is scalable in the sense that holding the attack, threat model, and attack compute budget fixed, additional adversarial training reliably improves robustness. However, in our studied settings, the offense/defense slope is <1 across model sizes, so the defender will lose to the attacker if both scale proportionally. In our case, the defender would need to develop a more efficient defense algorithm to move the slope >1 if they want to outpace the attacker. 
Larger models generalize better to different threat models, so additional pretraining compute mostly shows up in improved generalization from adversarial training. > Future offense/defense slopes will be different Thank you for this. We agree that future attacks and defenses will lead to different slopes (and intercepts). The point we most want to emphasize in the offense/defense section is how the scaling lens is necessary to understand how an attack or defense will perform, vs only looking at a point-estimate which might become irrelevant with a different allocation of compute to offense and defense. We will make this more clear in the paper. > Only StrongREJECT for gen Thank you for this point, we will include a note in the paper of why we focused on classification and other generative settings were not tried. We focus on classification to study a wider range of model sizes, for unambiguous ASR measurement, and due to other challenges with evaluating generative models (see WX3L rebuttal for more details). We settled on StrongREJECT rather than HarmBench because it fit more naturally into our model evaluation pipeline. Since the two benchmarks share similarities and our paper focused on classification, we decided not to test on HarmBench too. This said, we would be interested in future studies focusing fully on the generative setting! --- Rebuttal Comment 1.1: Comment: I think the interpretation would make the paper stronger and am now happy to accept the paper. Thanks for the rebuttal comments! --- Reply to Comment 1.1.1: Comment: Thank you again for your helpful feedback—we will be sure to add/improve the interpretation and flow. Thank you so much for the increased score! Please let us know if you have any more questions for us before the end of the rebuttal period.
Summary: This paper investigates the relationship between language model scaling and adversarial robustness, focusing on: 1) The impact of model size on robustness, 2) The effect of scaling on adversarial training, and 3) The cost trade-offs between attackers and defenders as model size increases. The experiments cover language models ranging from 7.6M to 14B parameters (Pythia and Qwen2.5 models), evaluating six classification tasks (e.g., spam detection, sentiment analysis, and moral judgment) and one generation task (StrongREJECT, measuring refusal rates using GPT-4o). Claims And Evidence: Although the experiments have limitations, they still provide some insights into the problem. Methods And Evaluation Criteria: See weaknesses and Questions. Theoretical Claims: No Theoretical Claims. Experimental Designs Or Analyses: See weaknesses and Questions. Supplementary Material: No Supplementary Material. Relation To Broader Scientific Literature: See weaknesses and Questions. Essential References Not Discussed: See weaknesses and Questions. Other Strengths And Weaknesses: ### Strengths - The paper is well-written and easy to understand. - It is the first systematic study of the relationship between LLM scaling and adversarial robustness, providing valuable insights into attack and defense dynamics. ### Weaknesses - **Limited model parameter scale:** The study evaluates models ranging from 7.6M to 14B parameters, but lacks experiments on larger models (70B+ parameters). This limitation hinders a deeper understanding of the scaling trends in robustness. The authors are encouraged to expand their study by including GPT-class and DeepSeek models to provide stronger empirical support. - **Lack of generative task evaluation:** The paper primarily focuses on classification tasks and does not sufficiently explore real-world generative tasks, such as jailbreaking attacks. 
The authors should consider evaluating adversarial robustness in jailbreak scenarios to enhance the study’s applicability. Other Comments Or Suggestions: - **Technical details of adversarial training:** Although Algorithm 1 outlines the adversarial training process, it lacks formal definitions and technical details. The authors should clarify how adversarial samples are constructed, how perturbation magnitudes are determined, and how different adversarial budgets affect model performance. - **Model editing vs. adversarial training:** While the paper states that adversarial training improves robustness, it does not explicitly compare it to other defense strategies. In contrast, model editing[1] may offer a more flexible defense mechanism. The authors should provide empirical comparisons between scaling, adversarial training, and model editing, along with insights into their trade-offs. - **Computational cost of defense:** The paper suggests that defense costs may eventually surpass attack costs in the long run. However, in the short term, defenders still require significant computational resources. Given limited computational budgets, what practical efficiency improvements does the paper recommend for defenders? - **Scaling laws for adversarial and backdoor poisoning attacks:** What are the authors' insights into scaling laws in the context of adversarial and backdoor poisoning attacks? Could scaling laws be leveraged to enhance or mitigate these adversarial attacks? [1] Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing, EMNLP, 2024 [2] Backdoorllm: A comprehensive benchmark for backdoor attacks on large language models, 2024 Questions For Authors: This paper explores a promising research direction on scaling laws and model robustness. However, the current experiments have limitations in model size and task diversity. Addressing the above concerns would significantly strengthen the study. 
If the authors can provide further experimental results and insights, I would be happy to increase my rating. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Lacks 70B+ models, no GPT-class or DeepSeek Thank you for these points, they are well-taken. **Model Scale:** Computational limitations constrained our study: 1) The adversarial training defense studied in the paper is very compute intensive, already taking thousands of H100 hours to complete for all seeds and model sizes. 2) To finetune 70B+ models we would need to use LoRA and quantization, which would make the comparison with smaller models less apples-to-apples. Are ASR differences from model scale, or because of LoRA or quantization? We believe that our core claims are well-substantiated by our experiments as they stand: 1) Model scale alone confers limited robustness, but model scale improves adversarial training 2) ASR goes up/down with attack/defense compute, so point estimates of robustness are less meaningful than offense/defense balance **Model families:** We used Pythia for an apples-to-apples comparison of ASR for model sizes across 3 OOMs. With other model families, different sizes use different training procedures, making it hard to tell if the difference is from using more or less data, different amounts of safety training, etc. We agree that a study on frontier models would be valuable, but it would require a different regime of computational resources than what we have access to in order to control appropriately for the factors mentioned above. > Lack of generative tasks This point is also well-taken! Our decision to focus on classification tasks was intentional: 1) It allows us to study robustness over 3 orders of magnitude for high performing models, since classification models achieve high performance with small model sizes (~14M parameters). In the generative setting, it is difficult to get good performance with <1B parameters, leaving us with 3-4 models spanning ~1 OOM before running into comparability issues arising from LoRA, quantization, etc., and undermining our ability to identify longer trends. 
2) Classification provides an unambiguous attack success rate, since we know the label. In the generative setting, the standard approach is LLM as a Judge which is not guaranteed to agree with a human evaluator [2], and often disagreed with our judgments. Our Qwen2.5 StrongREJECT sanity-check for the generative setting in Figure 2 is consistent with our hypothesis that model scale enhances safety training. For the initial study, we wanted to control for as many variables as we could and find trends over as wide a spread of sizes as possible. We believe a follow-up study squarely focused on the generative setting would be worth doing. [2] Raina et al., Is LLM-as-a-judge robust? Investigating universal adversarial attacks on zero-shot LLM assessment, 2024. > Technical details of adv training We agree it would be helpful to have a more formal overview of the algorithm in the main body (vs Appendix D2). We will also add information about the different adversarial training hyperparameters we tried. * We wanted to make Algorithm 1 as readable as possible, but could make it more formal if you think that would be of value * What do you mean by “perturbation magnitude”? All of our attacks are adversarial suffix (or infix, or prefix), so there isn’t such a clear notion of deviation from the original datapoint (unless you’re talking about number of attack iterations or suffix length?) > Model editing vs. adversarial training Thank you for bringing this defense to our attention. We chose to focus on adversarial training because it is an industry standard and gives us a natural way of scaling defense compute (we also implemented a perplexity filter to check BEAST can bypass it). Model editing does not have a compute scaling component so does not naturally fit into the same kind of study. We are not claiming that adversarial training is the best defense method, but rather that model scale on its own confers limited robustness, but model scale enhances adversarial training. 
In follow-up work, exploring model editing with a wider set of defense techniques would be interesting. > Practical efficiency improvements Relevant question! We would recommend: * More efficient adversarial training using attacks in latent space [3, 4] * Input-output filtering [5] We can add this to the conclusion if you think that would be valuable. [3] Schwinn et al., Soft prompt threats…, NeurIPS 2024 [4] Casper et al., Defending against unforeseen failure modes with latent adversarial training, 2024 [5] Inan et al., Llama guard: Llm-based input-output safeguard for human-ai conversations, 2023 > Scaling for backdoor poisoning? Great question! Indeed they can, see [6]. Not only is it a successful attack, it becomes more effective against larger models. This suggests that (a) careful data curation and (b) strong safeguards over finetuning APIs are crucial for frontier models. [6] Bowen et al., Data Poisoning in LLMs: Jailbreak-Tuning and Scaling Laws, 2024.
any4: Learned 4-bit Numeric Representation for LLMs
Accept (poster)
Summary: This paper introduces a 4-bit weight quantization method called any4, designed for Large Language Models (LLMs). The authors claim that this method offers arbitrary numeric representations without the need for preprocessing weights or activations. The paper compares any4 with other 4-bit numeric representation types like int4, fp4, and nf4, using various LLMs (Llama 2, Llama 3, Mistral, and Mixtral) of different sizes and families. The key finding is that any4 outperforms other numeric formats in terms of accuracy. The paper also demonstrates that any4 is competitive with techniques like AWQ and GPTQ, which require weight/activation preprocessing. Additionally, the authors show the competitiveness of any3 and any2 at lower bit widths and highlight the ability to calibrate any4 using a single, curated diverse sample instead of hundreds of dataset samples. The paper also introduces tinygemm, an open-source GPU matrix multiplication library optimized for LLMs. tinygemm efficiently implements any4 using a lookup table strategy on GPUs. Claims And Evidence: Overall, the paper presents a compelling case for the any4 quantization method. The claims are generally well-supported by evidence from experiments and comparisons with existing techniques. However, there are a few areas where further clarification or investigation could strengthen the paper: Strengths: - Clear evidence of any4's superior accuracy compared to {int4, fp4, and nf4} in terms of perplexity and downstream task accuracy. - Competitive performance with preprocessing techniques such as {AWQ, GPTQ, and QuIP} - Effectiveness at lower bit widths (any3) - Efficient calibration: The ability to calibrate any4 with a single, diverse sample is a significant advantage. - Open-source implementation: The tinygemm library provides a practical implementation of any4, allowing for further research and adoption. 
Areas for improvement: - Calibration sample details: the paper shows using a single, curated diverse sample for calibration, but does not explain how it was constructed or why they thought this was sufficient. More information about this process would be beneficial. - Further exploration of any4 with other techniques: While the paper demonstrates any4's competitiveness with existing techniques, it doesn't explore combining any4 with those techniques (e.g., using any4 as the numeric format for GPTQ or AWQ). This could be a promising avenue for future work. - Any2: the paper claims any3 and any2 are the effective, but looking at the results, any2 doesn't really work... Despite these minor points, the paper provides strong evidence to support its claims and makes a valuable contribution to the field of LLM quantization. Methods And Evaluation Criteria: Yes The paper made sensible choices (row-wise grouping), focused on downstream accuracy, showed performance across a large set of model sizes, compared to other low precision quant techniques, and had an easy calibration setup. This makes sense for the problem or application at hand. Theoretical Claims: Yes. The theoretical claim in the paper: ``` Eqn. 23 states that the optimal value to represent a group of scaled weights within a cluster is their average weighted by the product of the scaling factor of a weight element and mean of the norm of activations applied to that element. ``` This just follows from their equations. Experimental Designs Or Analyses: Yes Experimental Design: - They used a wide range of models (Mistral, Llama 2, Llama 3), and sizes (from 1B to 70B) - They compared with a good set of numeric formats (int4, fp4, nf4) and quantization techniques (AWQ, GPTQ, QuIP)/ - They ablate different group sizes Analysis:The paper evaluates speedup vs perplexity and downstream tasks Potential Issues: - limited exploration of combining any4 with e.g. 
GPTQ or AWQ Despite these minor points, the experimental designs and analyses in the paper are generally sound and provide convincing evidence to support the claims made about the any4 quantization method. Supplementary Material: Skimmed all of it. Read the calibration data, looked at ablations, algorithm, and other results. Relation To Broader Scientific Literature: Accelerating LLM inference via compressed representations is an active area of research in the community. Essential References Not Discussed: Not that I can immediately think of. Other Strengths And Weaknesses: The work shows improved performance, but does have performance degradation. It's fast, but this is on old hardware (A100s). When B200s come out with native fp4 support, this will not work anymore (unless you can express your learned codebook as an fp4 codebook). Other Comments Or Suggestions: This is just a learned compression codebook (LUT) which optimizes for reconstructing the output (instead of minimizing the MSE of the weight error). I've personally talked about this with 2 separate orgs; in my mind it's not novel, but I guess I can't think of anyone who has published it already... Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 2
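For readers less familiar with the construction the review describes, the "learned compression codebook" amounts to a 1-D importance-weighted k-means whose update step is the weighted average stated in Eqn. 23. Below is a minimal sketch under that reading; it is not the authors' code, and the importances merely stand in for (group scale × mean activation norm) with invented values.

```python
import random

def weighted_kmeans_1d(values, weights, k=16, iters=20, seed=0):
    """1-D weighted k-means: each centroid is the importance-weighted
    mean of its cluster (cf. the weighted-average claim in Eqn. 23)."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iters):
        # Assignment step: nearest centroid for each value.
        assign = [min(range(k), key=lambda j: (v - centroids[j]) ** 2)
                  for v in values]
        # Update step: importance-weighted mean per cluster.
        for j in range(k):
            den = sum(w for w, a in zip(weights, assign) if a == j)
            if den > 0:
                num = sum(w * v for v, w, a in zip(values, weights, assign) if a == j)
                centroids[j] = num / den
    return centroids  # a 16-entry lookup table when k = 16

# Toy data: pseudo-weights, with importances standing in for
# (group scale * mean activation norm); all values are invented.
rng = random.Random(1)
vals = [rng.gauss(0.0, 1.0) for _ in range(256)]
imps = [abs(rng.gauss(1.0, 0.3)) for _ in range(256)]
lut = weighted_kmeans_1d(vals, imps)
```

The resulting 16 centroids play the role of the any4 lookup table; the point of the weighting is that clusters are pulled toward weights that matter more to the layer's output, rather than minimizing plain weight MSE.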
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback, particularly recognizing that the paper **“presents a compelling case”**, and **“provides strong evidence to support its claims and makes a valuable contribution to the field of LLM quantization”**, as well as the important suggestions to improve the quality of the paper. We address the reviewer’s comments as follows: - **Calibration Sample Construction:** The calibration text sample (that we have provided in Section A.2 in the Appendix) was manually crafted with sentences spanning diverse domains: Fiction, News, Code, Math, Facts, and concatenating them together. Our intuition was that having a single sample with many diverse topics would be sufficient, rather than extracting many samples from a Common Crawl dataset and hoping that they will cover diverse domains. - **Combining any4 with AWQ or GPTQ:** We have attempted to combine any4 with AWQ during the rebuttal period and provided results in Table B (in response to Reviewer pDZ9) in this rebuttal. We show that indeed combining AWQ with ANY4 improves the results of AWQ. - **Native FP4 Support in New Nvidia B200 GPUs:** We acknowledge that new hardware is supporting FP4 as a native format for computation but we emphasize that our approach is particularly **effective for PTQ** and **can enable arbitrary number of bits**: - In our experiments, we observed that FP4 leads to lower Post-Training Quantization (PTQ) accuracy compared to NF4 and our proposed ANY4 format. i.e., FP4 does not perform well for zero-shot quantizing without training. - Nevertheless, FP4 works well for Quantized Aware Training (QAT). Native support for computing in FP4 enables training models faster with FP4. However, our paper explicitly focuses on post-training quantization (PTQ), a practical setting where model weights are quantized without retraining. 
- Furthermore, our method also achieves strong results with ANY3, a 3-bit format for which there is currently no native hardware support—highlighting the broader applicability of our lookup-table-based quantization approach beyond current hardware capabilities. Moreover, it could be used for 6-bit quantization [H] which other papers are exploring. - **Clarification on ANY2:** We agree ANY2 is not competitive at this stage; we included it to illustrate the extensibility of our method across bit widths, even at the extreme low end. - Moreover, QuIP which performs well on 2-bit quantization is orthogonal to our approach and hence may be integrated in future work to enhance 2-bit quantization performance. - **Novelty:** Our method is novel in that it is the first to apply scalar LUT-based quantization minimizing output error in LLMs without any finetuning, unlike prior work [F, G] that provided a formulation in reducing output error rather than weight error but required finetuning. **References** [E] “Ultra-Low Precision 4-bit Training of Deep Neural Networks”, Xiao Sun, Naigang Wang, Chia-Yu Chen, Jiamin Ni, Ankur Agrawal, Xiaodong Cui, Swagath Venkataramani, Kaoutar El Maghraoui, Vijayalakshmi (Viji) Srinivasan, Kailash Gopalakrishnan, https://papers.nips.cc/paper_files/paper/2020/hash/13b919438259814cd5be8cb45877d577-Abstract.html, NeurIPS 2020 [F] “And the Bit Goes Down: Revisiting the Quantization of Neural Networks”, Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Hervé Jégou, https://arxiv.org/abs/1907.05686, ICLR 2020 [G] “Extreme Compression of Large Language Models via Additive Quantization”, Vage Egiazarian, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, Dan Alistarh, https://arxiv.org/abs/2401.06118, ICML 2024 [H] "FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design", Haojun Xia, Zhen Zheng, Xiaoxia Wu, Shiyang Chen, Zhewei Yao, Stephen Youn, Arash Bakhtiari, Michael Wyatt, 
Donglin Zhuang, Zhongzhu Zhou, Olatunji Ruwase, Yuxiong He, Shuaiwen Leon Song, https://arxiv.org/abs/2401.14112 --- Rebuttal Comment 1.1: Comment: Thank you for responding. I'm still not convinced that this is super novel. I'm borderline weak accept / weak reject, but am slightly leaning towards weak reject.
Summary: The paper introduces "any4," a newly proposed learned 4-bit numeric representation aimed at optimizing the quantization of weights in large language models (LLMs). Any4 enhances accuracy compared to traditional 4-bit formats such as int4, fp4, and nf4, and does not require preprocessing of weights or activations. Furthermore, it shows competitive results against other techniques that perform such preprocessing, like Adaptive Weight Quantization (AWQ) and Generalized Post-Training Quantization (GPTQ). The authors also present tinygemm, a latency-optimized GPU matrix multiplication library designed for implementing any4 effectively. The results indicate that any4 achieves superior accuracy across various model sizes and types. Claims And Evidence: Overall, the claims about Any4 quantization are generally well supported by experimental results and methodological explanations. The performance improvements and efficiency gains are backed by perplexity results and design choices. Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper for the any4 quantization technique are appropriate for addressing the challenges in optimizing large language models (LLMs). 1. Methodology: It employs group-wise scaling and K-means clustering for quantization, aiming to improve efficiency and accuracy in weight representation. 2. Evaluation Metrics: Using perplexity and assessing downstream task performance provide a reliable indication of the model's effectiveness in real-world applications. 3. Benchmark Datasets: The inclusion of diverse datasets (e.g., WikiText-2, C4) allows for comprehensive evaluation across different contexts, reinforcing the applicability of the proposed method. Overall, the methods and evaluation criteria are well-tailored to the problem at hand, offering both theoretical advancement and practical usability in model deployment. Theoretical Claims: It does not explicitly detail any formal proofs or theoretical claims. 
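As background for the group-wise scaling mentioned under point 1 of the methodology, here is a minimal symmetric int4 sketch of the standard scheme that such methods build on (illustrative values, not the paper's implementation):

```python
def quantize_group_int4(weights):
    """Symmetric int4 quantization of one group: one fp scale per group."""
    scale = max(abs(w) for w in weights) / 7  # int4 range is [-8, 7]
    if scale == 0.0:
        return [0] * len(weights), 0.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_group(q, scale):
    return [qi * scale for qi in q]

group = [0.31, -0.12, 0.07, -0.44, 0.25, 0.02, -0.09, 0.18]
q, s = quantize_group_int4(group)
recon = dequantize_group(q, s)
max_err = max(abs(a - b) for a, b in zip(group, recon))
assert max_err <= s / 2 + 1e-9  # rounding error is at most half a step
```

any4 keeps this group-wise scale but replaces the uniform integer grid with learned lookup-table values, which is where the K-means step comes in.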
Experimental Designs Or Analyses: It compares with other numeric formats (int4, fp4, nf4) and orthogonal quantization techniques (AWQ, GPTQ, QuIP). While the methodology is well-structured, the evaluation could be strengthened by ensuring statistical validation and consistent experimental conditions. Additionally, the ablation studies, currently in the Appendix, provide valuable insights and would be more impactful if moved into the main content for better visibility and discussion. Supplementary Material: It includes additional experimental results and ablation studies evaluating the any4 quantization algorithm's performance across different group sizes and initialization methods. It demonstrates that any4 consistently achieves lower perplexity compared to other numeric formats, such as fp4 and nf4, especially across larger group sizes. Additionally, the material discusses the impact of K-means initialization techniques on the quantization efficiency, with the k-means++ method showing the best performance. There are also experiments comparing the performance of any4 with other orthogonal quantization techniques like AWQ and GPTQ. Relation To Broader Scientific Literature: The paper introduces the any4 learned numeric representation, which improves traditional quantization techniques like int4 and fp4 without requiring preprocessing. It consistently outperforms or matches existing methods, using a single diverse sample for calibration instead of many. Results on group size and initialization enhance understanding of numeric stability. The paper suggests future work combining any4 with other techniques, contributing valuable insights to neural network efficiency and quantization research. Essential References Not Discussed: None Other Strengths And Weaknesses: ### Strengths: 1. 
Originality – The any4 representation introduces a novel quantization approach that eliminates the need for preprocessing weights or activations, setting it apart from existing methods and encouraging further exploration in adaptive quantization. 2. Significance – With growing demand for efficient neural networks, any4 effectively reduces parameter size while maintaining high accuracy, making it valuable for both cloud and edge deployment of large language models. 3. Practical Implementation – The inclusion of tinygemm, a GPU-optimized matrix multiplication library, enhances the paper’s real-world applicability, achieving real speedup in GPUs. 4. Clear Evaluation – The paper rigorously benchmarks any4 against multiple numeric formats and quantization techniques, providing strong empirical evidence of its advantages. ### Weaknesses: 1. Limited Theoretical Foundation – While the experiments are thorough, a deeper theoretical analysis of any4’s effectiveness could strengthen its claims. 2. Calibration Dataset Concerns – Relying on a single curated calibration dataset may introduce biases or limit generalizability; a broader dataset selection could improve robustness. 3. Comparative Analysis Gaps – While the paper compares any4 with several quantization methods, expanding the discussion to include other optimization approaches, such as neural architecture search, could provide better context. Other Comments Or Suggestions: It has almost three pages of related work and background, which is too detailed and is not really necessary. This structure limits the space in the experiment section, and actually, ablation studies and other content can be moved to the main content. Additionally, consider condensing the related work to focus on key contributions that directly inform the development of any4, allowing for a more concise and impactful presentation of experimental results. Questions For Authors: 1. 
What justifies the effectiveness of using only a single calibration sample? Would using more samples further improve performance? Additionally, how does the calibration cost of your method compare to other quantization techniques? 2. How does tinygemm interact with other acceleration methods, such as torch.compile, which can automatically optimize execution? Furthermore, when used in self-attention, how does its performance compare to FlashAttention (https://pytorch.org/blog/flashattention-3/)? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their constructive review and comments, including that the approach was **“rigorously benchmark[ed]”**, the claims are **“supported by experimental results and methodological explanations”**, and that its implementation **“enhances the paper’s real-world applicability”**, as well as the important suggestions to improve the quality of the paper. Please find below our remarks regarding the reviewer’s requests and comments: - **Limited Theoretical Foundation / Formal Proof:** We would like to highlight that our approach is based on a theoretical derivation that we laid out in Section 4.1 of the paper. That mathematical derivation to minimize the reconstruction error of the output of each of the model’s linear layers eventually leads to a modified k-means clustering, which guides us to apply clustering on the product of weights, mean activations, and group scaling factors (as described in Equation 14). - To support the claims of our formal proof, we have provided in this rebuttal Table A (in the response to Reviewer pDZ9), empirical results showing that each term in Equation 14 was necessary to minimize the loss of the model. - **Calibration Dataset Concerns**: - In Table A5 in the Appendix, we show that our single-sample calibration set outperforms other datasets such as WikiText-2, C4, and The Pile. - The results show that our approach of a single curated sample with sentences covering diverse topics (fiction, non-fiction, code, and math), as shown in Section A.2 in the Appendix, could be sufficient to calibrate a quantization algorithm, in contrast to large datasets where each sample typically covers a single domain. - Nevertheless, our approach can work by calibrating with arbitrary datasets with an arbitrary number of samples. 
- **Calibration Cost:** - Applying our single-sample calibration takes only 1–3 seconds, while applying a larger number of samples (e.g., 128) that is required by other algorithms could take 10 or 20 seconds. - We have found that the total time to run our quantization algorithm on Llama2 7B is 10 minutes, which is similar to the time reported by other quantization algorithms: AWQ and GPTQ. - **Interaction with Other Acceleration Methods:** Overall, our quantization algorithm is orthogonal to other optimizations. - **Torch.compile:** We have already integrated our INT4 implementation of our tinygemm library with torch.compile. We still haven’t yet integrated the ANY4 implementation within the library with torch.compile, but we believe it is straightforward. - **FlashAttention:** FlashAttention in general (and FlashAttentionv3 in particular) only optimizes activation-activation matrix multiplication in the attention operation (i.e., Y = softmax(QK.T / sqrt(d) )V) while our work focuses on quantizing weight-activation matrix multiplication (for attention that would be Q=XWq, K=XWk, V=XWv, O=XWo, and we also quantize all the matrix multiplications in feed forward layers). Hence, from the perspective of end-to-end speedup of a model, our work is orthogonal to FlashAttention. - **Other** - **Moving Results from Appendix to Main Body:** We agree with this proposal and plan to do that in the Camera Ready version if the paper is accepted. - **Discussions on Other Optimization Approaches:** We can add discussions to the paper but we can mention here: - Quantization in general is orthogonal to other optimization approaches like pruning and neural architecture search. - We have focused in this paper on Post-Training Quantization (PTQ) that does not require retraining or fine-tuning of the model, while pruning requires extensive fine-tuning or continual pretraining, and NAS requires training from scratch or continual pretraining. 
- **Strengthening Evaluation:** - **Ensuring Statistical Validation:** For the camera ready paper, we plan to re-run some of the text generation experiments with different seeds and report the average, and measure perplexity on different random subsets of the respective datasets and report their averages. However, we have noticed that in most papers on quantization, and on LLMs in general, measurements on perplexity as well as text generation are only provided for a default seed. - **Consistent Experimental Conditions:** For perplexity, we have re-used the same evaluation script that is used in [GPTQ](https://github.com/IST-DASLab/gptq/blob/2d65066eeb06a5c9ff5184d8cebdf33662c67faf/llama.py#L206), that is also used in [AWQ](https://github.com/mit-han-lab/llm-awq/blob/aacd3b8923b080d58734001b3e7842c8ca3e6967/awq/entry.py#L300), and used the exact same seed to select the subsets of data. For downstream tasks, we have used LM Evaluation Harness and BigCode Evaluation Harness with the default settings provided in each library. We believe that our experimental conditions are consistent but we welcome further suggestions to improve consistency.
Summary: The paper presents any4, a new 4-bit numerical representation for post-training quantization of LLMs. The paper states that any4 does not require any additional pre-processing of weights or activations and for most LLMs, can find the optimal 4-bit representation with a single sample. The authors define the basis for the any4 format - using weight-only group quantization. They derive the rest of the math which forms the basis of their k-means based algorithm. This results in an LUT representation, each having 16 elements (for 4-bits). The authors further present tinygemm, a library that has efficient implementations for the different numerical formats (primarily matrix multiplies are achieved via efficient dequantization of weights to 16-bits and then multiplying with the activations). This is followed up by experimental results for different models (Llama 2/3, Mistral), showcasing both perplexity and downstream few-shot evaluation results. --------- ### Update after rebuttal After reviewing the rebuttal and the responses to follow-up questions, I've decided to retain my score. There are several reasons: - The difference between the LUT compression row-wise and group-wise quantization introduces a disparity in my mind. It seems like throwing away many of the benefits of group quantization to save memory with row-wise LUTs. It might have been nice to explore other forms of quantization like power of 2 scaling from MX formats to save on memory (uses only a scale factor with 8-bit scales). - The RTN explanation still feels lacking. In the end, the authors do apply changes like group wise quantization and an LUT lookup table - for which I'd categorize this method as an LUT method. - The overall novelty of the algorithm is limited. LUT compression has been explored in various forms before - and with upcoming fp4 formats in GPUs, having 4-bit codebooks is an expensive choice. 
Claims And Evidence: I think one thing that is unclear is the differentiation between the group quantization approach, which the authors state they use for their method, vs. the computation of the LUT across an entire row (both in their derivation (equations 15-23), but also in their Figure 2). This is somewhat confusing. What is the benefit of doing group quantization and then creating only a row-wise LUT? The end result is many scalars for the actual quantization and then very few actually representable values to represent them. Is this not creating a mismatch in the overall quantization process?
Methods And Evaluation Criteria: Yes, the presented evaluations are consistent with other literature in the field, and do fair comparisons against other methods.
Theoretical Claims: While there are no theoretical claims, the authors present the full derivation of their method. I have checked the soundness of their math.
Experimental Designs Or Analyses: Yes, the experimental analysis and presented ablations are sound.
Supplementary Material: Yes, read through all parts. The authors present the main algorithm, the calibration prompt they use, additional ablations for their algorithm, and further evaluation results for their method on different models.
Relation To Broader Scientific Literature: The paper presents a new 4-bit numerical representation, which is similar to previous efforts such as NF4 [1] and FP4 [2]. For the new method, the authors work with group quantization, which has been tackled in previous works such as GPTQ [3], and their benchmarks are commonly reported in quantization papers [3].
[1] NF4: https://arxiv.org/abs/2305.14314
[2] FP4: https://arxiv.org/abs/2310.16836
[3] GPTQ: https://arxiv.org/abs/2210.17323
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Additional
Strengths:
- The authors present benchmarks on generative tasks like MBPP, which gives higher confidence in the presented method.

Weaknesses:
- Despite using a scalar-based 4-bit LUT quantization method, the authors do not compare their results with SqueezeLLM, which is the closest method to their paper.
- Broadly classifying their algorithm under the RTN umbrella is not correct. There are additional transformations going on to ensure the high quality of their resulting models; Table 2 needs to fix this.
- While the overall method does show some benefits on certain tasks, the results are varied for different methods and different tasks. For example in Table 2, sometimes their method is better, and sometimes it's not. Same with Table A1.

Other Comments Or Suggestions:
1. Table 2 - for the numeric format column, this should talk about the format and not the method used
2. Lines 307-308, there is no Figure 4.1 - please correct this to point to Figure 2 instead
3. Table A2 needs the correct highlighting for results of the Mixtral-8x7B model

Questions For Authors: N/A
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their constructive review, including **“higher confidence in the presented method”** due to presenting results on generation and coding tasks, and thank the reviewer for checking the **“soundness of [the] math”** of the **“full derivation of [the] method”**, as well as for the important suggestions to improve the quality of the paper. Please find below our remarks:
- **Benefit of Group Quantization with Row-wise LUT:** The reason we chose a row-wise look-up table (LUT) is to minimize the storage overhead.
  - Each LUT consists of 2^n_bit FP16 values, so for 4 bits, each LUT consists of 16 FP16 values. Having such a LUT for each group of 128 values is a high overhead, while having one for each row (in Llama2 7B each row has 4096 values) makes the overhead negligible.
  - On the other hand, grouping requires 2 FP16 values (one for scale and one for offset) for each group, so we can afford relatively small group sizes for grouping.
- **Comparison with SqueezeLLM:** In the paper we haven’t compared with SqueezeLLM because SqueezeLLM keeps a portion (albeit small) of its weights in high precision (in their code they keep 10 rows of each weight matrix in high precision, as well as 0.45% of outlier and sensitive values in the matrix). Nevertheless, we compare with it here as requested by the reviewer in the table below.
  - Although any4 does not rely on storing rows or portions of its weights in high precision, its perplexity is competitive with SqueezeLLM.
  - Moreover, we would like to highlight that storing rows or outlier/sensitive values in high precision is orthogonal to any4 and could be combined with it.
  - The numbers reported below were copied from the SqueezeLLM paper. We tried to run the SqueezeLLM code to quantize Llama3 models, to compare with any4, but we ran into runtime errors in their code.
| Model | Quantization | WikiText-2 PPL↓ | C4 PPL↓ |
|------------|--------------|-----------------|---------|
| Llama2 7B | SqueezeLLM | 5.57 | 7.08 |
| | ANY4 | 5.59 | 7.10 |
| Llama2 13B | SqueezeLLM | 4.96 | 6.54 |
| | ANY4 | 4.97 | 6.55 |
| Llama2 70B | SqueezeLLM | 3.39 | 5.57 |
| | ANY4 | 3.40 | 5.58 |

**Table D:** *Comparison of any4 with SqueezeLLM*

- **Any4 Under the RTN Umbrella:** We categorized ANY4 under round-to-nearest (RTN) because in RTN, given a list of possible values (whether those values are a LUT as in ANY4, or predefined as in INT4, FP4, or NF4), we round each weight to the nearest one.
  - Any computation done in the ANY4 algorithm is used to obtain the optimal values of the LUT, rather than modifying the values of the weights or activations, while quantization algorithms do modify the values of weights and/or activations:
    - AWQ scales down the activations and scales up the weights.
    - GPTQ quantizes each column in a weight matrix sequentially, and every time a column is rounded to the nearest number, the remaining unquantized weight values are modified to mitigate the reconstruction error of rounding previous columns.
- **Accuracy on All Tasks:**
  - **Perplexity is a Less Noisy Indicator than Downstream Tasks:** Accuracies on downstream tasks, especially generation tasks, can be noisy. EvalArena [D] studies the noisiness of such downstream tasks; e.g., HumanEval and MBPP appear in EvalArena with a significantly low signal-to-noise ratio, which explains why results in Table A1 may show ANY4 not always outperforming others on those specific tasks. On the other hand, perplexity is a less noisy metric as it measures the average loss on all tokens in each sample (rather than evaluating a final token or a pass/fail for a whole sample), and any4 does consistently well on perplexity in Table A1.
  - **4-bit vs. 3-bit vs. 2-bit:** Table 1 compares perplexity on various bitwidths with quantization algorithms that do process weights and/or activations.
While the results show QuIP tends to be better than our approach at lower bits (always better at 2 bits and sometimes better at 3 bits), our approach is always among the top performers at 4 bits.
- **Overall:** Instances where any4 is not the top-performing method are a natural outcome of our extensive evaluation across a diverse set of models, generation types, model sizes, and tasks.
- **Other:**
  - **Typos and Formatting:** We thank the reviewer for pointing out the typos and issues in formatting. We have applied all the fixes.
  - **Theoretical Claims:** Please see "Limited Theoretical Foundation / Formal Proof" in our response to Reviewer aGNw.

**References**
[D] EvalArena, https://crux-eval.github.io/eval-arena/

---
Rebuttal Comment 1.1: Comment: Thanks to the authors for all the reviews and responses. Based on this, I am leaning towards keeping my score as is (i.e., leaning towards an acceptance).
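As a minimal illustration of the RTN framing in the rebuttal above (given a list of representable values, whether a predefined format like INT4/FP4/NF4 or a learned any4-style LUT, round each weight to the nearest one), here is a small sketch; the function name and the toy 4-value codebook are illustrative assumptions, not taken from the paper's tinygemm implementation:

```python
def rtn_to_codebook(weights, codebook):
    """Round-to-nearest against an arbitrary codebook.

    The codebook could hold the 16 levels of INT4/FP4/NF4 or a learned
    per-row any4 LUT; RTN never alters the weights themselves, it only
    picks the closest representable value (and its index) for each one.
    """
    out = []
    for w in weights:
        # Index of the codebook entry closest to w.
        i = min(range(len(codebook)), key=lambda k: abs(w - codebook[k]))
        out.append((codebook[i], i))
    return out

# Toy 2-bit codebook for brevity; any4 would learn 16 values per row.
quantized = rtn_to_codebook([0.11, -0.52, 0.98, 0.03], [-0.5, 0.0, 0.5, 1.0])
```

Only the small indices would be stored per weight, plus one codebook per row, which is why a per-row LUT adds negligible overhead compared to per-group scale/offset pairs.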
Summary: The paper introduces a method for finding the optimal 4-bit quantization codebook for quantizing pre-trained language models. This is done by applying the Lloyd-Max algorithm to each *row* of the weight matrices of the model, thus finding an optimal (per-row) quantization look-up table. In order to find the optimal quantization values, the authors define the mean-squared error criteria not in terms of the weights themselves, but in terms of the outputs of the input-weight matrix multiplication, using a short hand-crafted input text sequence to achieve the calibration. They show that the obtained representation, called any4, performs favorably compared to quantizing to int4, fp4, and nf4 formats.
Claims And Evidence: The main claim of the paper is that using the proposed any4 representation leads to better perplexity and task performance than quantizing to standard int4, fp4, and nf4 formats. The authors test their method on Llama 3 models of different sizes (1B, 3B, 8B, and 70B) shown in Figure 1 and Table 1, as well as Llama 2 (7B, 13B, and 70B) in Table A1, and Mistral-7B and Mixtral-8x7B in Table A2. Overall, any4 does show improved perplexity, which tends to lead to better downstream task performance (though this is not always the case). For these results, the authors fixed the group size (for the common scaling factors) to 128. The results do seem to indicate that the difference in performance is more pronounced for the Llama 3 family of models, compared to Llama 2 and Mistral. The authors also compare perplexity results (on WikiText-2) of their method vs. using post-training quantization techniques with integer formats in Table 2, showing it performs the same/better for 4 bits and for Llama 3 70B at 3 bits (the increase in perplexity for 3 and 2 bits is overall quite high for all techniques, so I am not completely sure about the utility of those particular results).
It would be interesting to see PTQ results using NF4 (as this was the most competitive format otherwise), but I understand these results were taken from Huang et al., 2024 where integer formats were used. Methods And Evaluation Criteria: The authors evaluate their method using the perplexity metric on WikiText-2, C4, and Penn Treebank, as well as on downstream tasks within EleutherAI’s Harness and BigCode’s Harness. These datasets and tasks are in line with what is generally used in literature, and I believe provide a good evaluation basis for quantization approaches. Theoretical Claims: The main theoretical derivation in the paper is for the k-means/Lloyd-Max algorithm for finding the optimal quantization codebook given the set of weight elements. The algorithm is fairly standard, and I believe its usage within this work is appropriate. I believe the final algorithm is correct, I just wanted to ask a clarifying question: * The authors claim that Eq. (17) follows from (14); this wasn’t quite clear to me, as (14) involves the summation across all row elements, while in (17) we are choosing the closest quantized value for each element in the row independently. Couldn’t the optimal weights for (14) differ from (17), i.e., (17) is an approximation but not the exact solution to (14)? Experimental Designs Or Analyses: / Supplementary Material: I did not review any supplementary material. Relation To Broader Scientific Literature: I believe the authors did a fine job of relating their work to the prior literature; they use the well-known Lloyd-Max algorithm to find optimal quantization codebooks for each row of every weight matrix within the model, and compare this to quantizing to standard numerical data types, as well as the commonly used post-training quantization techniques. Essential References Not Discussed: I am not aware of any essential references that were not mentioned in the manuscript. 
Other Strengths And Weaknesses: Strengths: * The proposed method overall showcases better perplexity/task performance compared to the standard numerical formats. * Good literature review — I believe the authors provided a good coverage of different quantization techniques. * Providing an open-source GPU implementation of the method is a very welcome contribution. Weaknesses: * I am slightly concerned that the paper lacks novelty; the main contributions of the paper are using the standard Lloyd-Max procedure per row of weight matrices, as well as defining the problem in terms of the matrix multiplication outputs using a short hand-written calibration dataset. It doesn’t seem too surprising that finding an optimal quantization codebook would lead to better performance compared to pre-defined quantization formats; in addition, the improvement does not seem to be as significant for models other than Llama 3 family. Other Comments Or Suggestions: * Some links within the paper (to figures and tables) seem to be broken (for example Fig. 4.1, line 307). * The authors might want to consider moving some of the derivation on pages 5 and 6 to the appendix to improve the flow of the main paper, but I leave this to the authors’ preference. * It might be useful to specify for Eq. (6) what dequant function is (before it is expanded later in (12)). * I believe it would be clearer to use the norm notation in the equations only for the vector equations (i.e. (10)), and not for the scalar equations such as (11). * It might be useful to add a more detailed caption for Figure 2. * I think the presentation of Table 2 could be made clearer (and “Numeric Format” column for 4-bits is wrong). For example, maybe consider removing “Numeric Format” columns altogether, and indicating next to the PTQ methods that they are associated with int format, while the last RTN row is associated with the any format. 
* (Minor) There are two bolded values in Table 1, MBPP column, Llama 3.2 1B

Questions For Authors: The following questions are more of a minor/clarifying nature:
* As the authors compare their technique with a few other PTQ techniques, I was wondering if they could provide a comparison (could just be a comment/approximation) in terms of the cost of their k-means technique vs. running a PTQ algorithm?
* Why is group size=128 used for the main results? (I also thought it might be useful to add group size=32 results, as this is the standard group size used for e.g. MXFP)
* It would be interesting to know how much better it is to optimize outputs on a calibration dataset vs. optimizing the weights directly?
* The results for any4 seem to be stronger for Llama 3 compared to the other model families explored; do the authors have any comments/explanation for this?

Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and constructive feedback, which will help improve the paper. Please find below our remarks:
- **Optimizing Weights Directly:** Please find the results in Table A below. The first row shows the results of optimizing weights directly. The other two rows show the results of using the two additional terms of Equation 14 in our paper, i.e., multiplying with activations and scales. These results confirm that our derivation (Eq. 14) is essential for optimal performance.

| | Term to Minimize in Equation 14 | WikiText-2 | C4 | PTB | CodeParrot |
|--|--|--|--|--|--|
| Optimizing Weights Only | $( w_{S_{i,j}} - w_{Q_{i,j}} )$ | 6.680 | 9.619 | 11.186 | 2.751 |
| Optimizing Weights * Activations | $( w_{S_{i,j}} x_{j} - w_{Q_{i,j}} x_{j} )$ | 6.496 | 9.375 | 11.055 | 2.675 |
| Optimizing Weights * Activations * Group Scales [Ours] | $(\alpha_{i,j} w_{S_{i,j}} x_{j} - \alpha_{i,j} w_{Q_{i,j}} x_{j} )$ | 6.487 | 9.366 | 11.034 | 2.680 |

**Table A:** *PPL Results of Any4 Quantization on Llama3.2 1B (Lower is Better)*

- **Lack of Novelty:** Please see our response to Reviewer U2Ld.
- **Significant Improvement for Llama3:** Quantizing Llama 3 is known to be more challenging than Llama 1 and 2, often resulting in larger accuracy drops [A]. This is likely due to its larger pretraining dataset (8T tokens vs. 2T for Llama 2 [B, C]), which makes it better trained—and therefore harder to compress without loss. In contrast, older models are relatively undertrained and easier to quantize, so the impact of advanced quantization methods appears smaller. Thus, the stronger results on Llama 3 highlight the effectiveness of our approach, especially on newer, more robust models where quantization is harder and matters more.
- **PTQ Results using NF4:** We provide in Table B the results of combining AWQ with NF4 as well as results of combining AWQ with ANY4.
These results show that combining preprocessing-based methods like AWQ with our numeric format leads to further improvements.

| Model | Quantization Algorithm | Numeric Format | WikiText-2 PPL↓ |
|--|--|--|--|
| Llama3 8B | | FP16 | 6.14 |
| | RTN | INT4 | 6.87 |
| | RTN | NF4 | 6.63 |
| | RTN | ANY4 | 6.51 |
| | AWQ | INT4 | 6.53 |
| | AWQ | NF4 | 6.51 |
| | AWQ | ANY4 | 6.38 |

**Table B:** *Combining AWQ with Different Numeric Formats*

- **Algorithm Time Comparison:** We have measured the time to quantize Llama2 7B and found it to be approximately 10 minutes, which is similar to the time we have measured to run both AWQ and GPTQ. Therefore, we conclude that quantization times of the different approaches are similar.
- **Different Group Sizes:**
  - We used group size 128 as it is the default in many quantization papers (e.g., AWQ, GPTQ).
  - In the Appendix we have provided results for group sizes 64, 128, 256, 512, and 1024 for Llama3.2 1B.
  - As requested by the reviewer, we also provide here results for group size 32, as well as the other group sizes we already had in the Appendix.
  - Please note that the bitsandbytes library that we use for FP4 and NF4 quantization doesn't support group size 32.

| | 32 | 64 | 128 | 256 | 512 | 1024 |
|--|-|-|-|-|-|-|
| FP4 | N/A | 16.19 | 17.11 | 18.12 | 20.43 | 2.3E6 |
| NF4 | N/A | 14.27 | 14.63 | 14.98 | 15.38 | 7.8E5 |
| ANY4 | 13.54 | 13.75 | 13.95 | 14.09 | 14.24 | 14.34 |

**Table C:** *Llama3.2 1B Perplexity on C4 (Lower is Better)*

- **Other:**
  - **Formatting and Typos:** We thank the reviewer for the detailed comments and we will apply them to the camera-ready paper.
  - **Equation 17 following Equation 14:** We thank the reviewer for highlighting this important point. While Eq. (14) defines a global row-wise objective, Eq. (17) corresponds to a local minimization step within a K-Means-style alternating optimization procedure. Specifically, Eq.
(17) performs the E-step, assigning each weight $w_{s_{i,j}}$ to its nearest codebook value in $Q_i$, treating activations $x_j$ as constants. This simplifies to a local nearest-neighbor assignment. The M-step (Eqs. (19)–(20)) then updates $Q_i$ by minimizing the total reconstruction error across the row.
  - Thus, while Eq. (17) does not solve the global objective in Eq. (14) directly, it is part of an iterative process that does. We will revise the text to clarify this decomposition and explicitly note that Eq. (17) performs local minimization within the broader alternating optimization scheme.
- **Downstream Task Performance:** Please check "Perplexity is a Less Noisy Indicator than Downstream Tasks" in our response to Reviewer yZMB.

**References**
[A] “How Good Are Low-bit Quantized LLaMA3 Models? An Empirical Study”, https://arxiv.org/abs/2404.14047v1, April 2024
[B] “Scaling Laws for Floating Point Quantization Training”, https://arxiv.org/abs/2501.02423, January 2025
[C] “The Llama 3 Herd of Models”, https://arxiv.org/abs/2407.21783, July 2024

---
Rebuttal Comment 1.1: Comment: I'd like to thank the authors for the detailed and informative response, including the additional results. While I still have some reservations regarding the overall impact of the work, I do believe the evaluation is thorough and effectively showcases the benefits of the proposed method, which the added results further confirm. Although I am inclined to keep my initial overall score, I am leaning towards an overall positive assessment of the paper.
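A rough sketch of the alternating scheme described in the rebuttal above (E-step: plain nearest-neighbor assignment, since the constant activations cancel in the argmin; M-step: update each codebook entry to an importance-weighted mean of its assigned weights). This is one reading of Eqs. (14)-(20), not the authors' implementation; `importances` stands in for the activation-times-group-scale factor, and the quantile initialization is an assumption:

```python
def lloyd_max_row(weights, importances, n_levels=4, iters=25):
    """Weighted Lloyd-Max / k-means over one weight row.

    Minimizes sum_j (importance_j * (w_j - q(w_j)))^2 for a codebook of
    n_levels values (16 for a 4-bit LUT; fewer here to keep the toy small).
    """
    ws = sorted(weights)
    # Initialize levels at evenly spaced quantiles of the row.
    codebook = [ws[k * (len(ws) - 1) // (n_levels - 1)] for k in range(n_levels)]
    assign = [0] * len(weights)
    for _ in range(iters):
        # E-step: nearest codebook value for each weight.
        assign = [min(range(n_levels), key=lambda k: abs(w - codebook[k]))
                  for w in weights]
        # M-step: importance^2-weighted mean of the weights in each cluster.
        for k in range(n_levels):
            num = sum(a * a * w for w, a, c in zip(weights, importances, assign) if c == k)
            den = sum(a * a for a, c in zip(importances, assign) if c == k)
            if den > 0:
                codebook[k] = num / den
    return codebook, assign

# Toy row: two clusters of weights, uniform importance.
codebook, assign = lloyd_max_row([0.0, 0.1, 1.0, 1.1], [1.0] * 4, n_levels=2)
```

With all importances equal this reduces to standard k-means; larger importances pull the learned codebook toward high-activation coordinates.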
Reinforcement Learning with Adaptive Reward Modeling for Expensive-to-Evaluate Systems
Accept (poster)
Summary: The paper presents AdaReMo, an approach designed to accelerate reinforcement learning (RL) in systems where reward evaluations are computationally expensive. The key idea is to decouple the RL loop—where decisions are made quickly—from the reward evaluation process that is slow and costly. AdaReMo achieves this by introducing a neural network–based reward model (RM) that approximates the true reward function. To handle the complexity and variability of real-world reward functions, the approach adaptively decomposes the overall reward function into multiple localized reward models that are trained on the agent’s most recent exploratory data. This adaptive reward modeling is integrated into an asynchronous training framework where the online decision system rapidly collects trajectories using fast RM predictions, while an offline evaluation system periodically updates the RM using precise but expensive reward calculations. Claims And Evidence: The paper claims that AdaReMo can decouple the fast decision-making loop from the slow, expensive reward evaluations, achieving over 1,000× speedup and about 14.6% performance improvement across three distinct real-world tasks. The extensive experiments in molecular generation, epidemic control, and urban spatial planning provide strong empirical support for these claims. However, one might ask whether the reported gains depend heavily on specific task formulations or the particular settings (e.g., the chosen hyperparameters and the design of the reward model) used in the experiments. Are the improvements robust across a wider variety of expensive-to-evaluate systems? Methods And Evaluation Criteria: The idea of decoupling the online decision system from the offline evaluation system is a natural fit for scenarios where reward computations are very expensive. Using an adaptive neural reward model that fine-tunes based on recent exploratory data is an inventive solution to keep the online loop fast. 
The evaluation criteria—such as Top 5% Score and Hit Ratio for molecular generation, Healthy and Contained metrics for epidemic control, and accessibility/greenness metrics for spatial planning—are well chosen to reflect the quality of decisions under high computational cost.
Theoretical Claims: The paper primarily emphasizes an empirical demonstration of its ideas rather than formal theoretical proofs. The conceptual justification for adaptive reward modeling is well articulated, yet there isn’t a formal mathematical proof guaranteeing, for instance, convergence or error bounds for the reward model approximation.
Experimental Designs Or Analyses: The experiments span three different domains, each chosen to represent a class of expensive-to-evaluate tasks. The paper performs ablation studies on key hyperparameters (e.g., fine-tuning interval and epochs), which helps to understand the sensitivity of the adaptive reward model.
Supplementary Material: There are no supplementary materials.
Relation To Broader Scientific Literature: The paper builds on ideas from surrogate modeling and model-based RL, and it resonates with recent work in reinforcement learning from human feedback (RLHF) by adapting techniques to approximate expensive evaluations.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses:
- The decoupling of the RL loop into online and offline components using an adaptive reward model is an interesting solution to a long-standing computational bottleneck in RL.
- By demonstrating significant speedup and performance gains on challenging real-world tasks, the paper makes a strong case for its practical relevance.
- The paper is clear in describing the architecture, algorithm (with pseudocode in Algorithm 1), and the adaptive training framework. Detailed experimental results and visualizations (e.g., Figures 3–8) further aid understanding.
Other Comments Or Suggestions: No comments
Questions For Authors: No questions
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer joeE, Thank you for your thoughtful and constructive feedback. We appreciate your recognition of the core idea behind AdaReMo, particularly the decoupling of the fast decision-making loop from the slow, expensive reward evaluations. We hope the following responses can address your concerns.

**Q1:** Hyperparameter settings.

**A1:** Thanks for your constructive comments. Besides the grid search results in Figure 8, we would like to provide **practical guidelines** that enable users to efficiently estimate an **approximate range for the optimal hyperparameters**, ensuring the method’s reliability and reproducibility without exhaustive tuning.

First, the optimal **fine-tuning interval** can be approximated based on the complexity of the reward function, which is **positively correlated with its evaluation time**. For example, Table 2 indicates that reward computation in pandemic control is **more time-consuming** than in molecular design, and grid search identified optimal fine-tuning intervals of 9 and 7 (Figure 8a) for the two scenarios, respectively, suggesting that a more complex reward function requires additional samples for effective RM fine-tuning.

Second, the optimal number of **fine-tuning epochs** should balance the fine-tuning duration with the policy optimization (Section 5.5). Specifically, we can measure the time $t_1$ required for one policy optimization iteration. Then, based on the previously determined fine-tuning interval and the reward function’s computation time, we can estimate the number of samples needed for fine-tuning and compute the time $t_2$ per fine-tuning epoch. **The theoretically optimal number of epochs is $N \approx \frac{t_1}{t_2}$**, providing a robust starting point. Searching within a small neighborhood of this estimated value typically yields the optimal setting, as validated in our experiments.
We sincerely appreciate your constructive comment, and we have added the above guidelines on hyperparameters in Section 4.5 of the revised manuscript. **Q2:** Are the improvements robust across a wider variety of expensive-to-evaluate systems? **A2:** Thank you for your insightful question. The proposed AdaReMo is designed as a general solution for expensive-to-evaluate systems. To demonstrate its versatility, we conducted extensive experiments across three quite diverse scenarios: molecular design, pandemic control, and urban planning. The choice of tasks has also been acknowledged by Reviewer kWsg. We sincerely appreciate your suggestion and consider the inclusion of a broader range of expensive-to-evaluate tasks an important direction for extending our approach. In particular, we plan to investigate the effectiveness of AdaReMo in additional domains such as robotics and autonomous driving. Thank you once again for your valuable comments, please let us know if you have additional questions.
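The epoch-count heuristic in the rebuttal above ($N \approx t_1 / t_2$) can be sketched in a few lines; the function name, the rounding, and the lower bound of one epoch are illustrative assumptions, with $t_1$ and $t_2$ measured as the rebuttal describes:

```python
def estimate_finetune_epochs(policy_iter_seconds, finetune_epoch_seconds):
    """Heuristic N ~= t1 / t2 from the rebuttal: choose enough reward-model
    fine-tuning epochs to roughly match one policy-optimization iteration,
    so neither the online nor the offline loop sits idle.
    """
    if finetune_epoch_seconds <= 0:
        raise ValueError("fine-tuning epoch time must be positive")
    return max(1, round(policy_iter_seconds / finetune_epoch_seconds))

# E.g. a 60 s policy iteration with 2 s fine-tuning epochs suggests about
# 30 epochs, inside the 10-50 epoch range swept in Figure 8.
suggested = estimate_finetune_epochs(60.0, 2.0)
```

Search within a small neighborhood of the returned value, as the rebuttal recommends, rather than treating it as exact.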
Summary: This work deals with the RL setting in which we have access to a reasonably fast simulator of the system, but the reward evaluations are slow. They employ the idea of modeling the reward to enable taking advantage of the fast simulation to update the policy. In particular, they use an asynchronous scheme where they are running multiple true reward evaluations in parallel that are added to a buffer for training the reward model. The states for reward evaluation are sampled from a buffer of recent states encountered in policy optimization. They optimize the policy using PPO and their trained reward model. The training process also employs a warmup phase for initial reward model training, as in the beginning the reward model is inaccurate and can potentially lead to detrimental policy learning. Another part of their proposal is adding a correction term to the reward during policy optimization; in particular, even though computing the precise reward is slow, there exists a faster surrogate $r_c$ for the tasks that they consider. As their learned reward model may be wrong, they blend this surrogate into their reward prediction to aid with dealing with outlier data $\alpha r + (1-\alpha) r_c$. They consider the tasks of molecular generation, pandemic control and urban spatial planning. As all of these tasks involve a graph-based structure, they employ Graph Neural Networks for the policy. The performance improved across several baselines (by around ~10%) including classic approaches as well as the model-based RL algorithm MBPO. The ablation study showed that the method is sensitive to the number of fine-tuning intervals and epochs for training the reward model. ------------------------------------------------------- Update: Thank you for the response; this addresses most of my concerns, so I increased the score. 
One point that still caught my eye was that the results you currently reported were based on the value at the best iteration; however, in the paper some of the learning curves were quite erratic, oscillating up and down, so even if the best iteration result is good, the learning may still be unstable (at least from the presented result, I can't rule out this possibility). However, perhaps in terms of your application, you are mainly interested in the best result, so I can also see it being argued the other way. Claims And Evidence: There are many claims about improved performance, and they also introduce methods such as blending the reward value estimates, the GNN model, etc. Currently, I am not convinced by the claims as the number of seeds is either low, or not specified. For the molecular task, they said they repeated the experiment with 3 seeds, which is typically considered very low (e.g., see https://arxiv.org/abs/2108.13264 Deep Reinforcement Learning at the Edge of the Statistical Precipice). Moreover, some ablations are missing, e.g., an ablation for the blended reward in equation 9 is missing. A major concern for me is the sensitivity study in Figure 8 about the number of fine-tuning intervals (tested in [1, 3, 5, 7, 9]) and the number of epochs (tested in [10, 20, 30, 40, 50]). The method only provides reasonable performance for the selected hyperparameter. This indicates to me that the method is unreliable. Moreover, if for example the results in the other figures were based on selecting the max results across such a sweep, then we would expect a maximization bias leading to unreliable results. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense. The choice of tasks to tackle is a strong point of this paper as it shows interesting applications of reinforcement learning. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes, I checked them. 
See the Claims and Evidence section for details on the issues. There are issues with too few seeds used in the experiments and missing ablations. Supplementary Material: There is no supplementary material (except for the code at their provided anonymous link, which I did not check). Relation To Broader Scientific Literature: The idea of creating a reward model for tasks where the reward evaluation is slow is a simple one, and it exists in prior work as well. What I found interesting in the current work was the choice of tasks. The approach itself is a sensible engineering solution to the tasks that they consider. Essential References Not Discussed: None. Other Strengths And Weaknesses: Other strengths: The ideas are reasonable from a practical point of view. I liked the choice of tasks. Other weaknesses: The clarity could be improved in some places, and there are some grammatical errors. Regarding clarity, a few examples are: when you first introduce synchronous correction, it's not clear what the correction term is (one has to read the experimental sections to understand, and it may be better to add early pointers or keep it self-contained); the number of seeds is not clear. Other Comments Or Suggestions: Grammatical errors: Page 1: "hinders" -> "hinder" Around Line 301: "records" -> "record" Page 6: "simulates" -> "simulate" "To further visualize the consistency of between the RM and the agent, we calculate the errors between RM estimations and precise evaluation during the training process." Fix the grammar. Equation 1 for the objective seems incorrect, as it does not take into account the randomness (you define transition probabilities P, so it seems you assume randomness in the dynamics). Equations 5 and 6 are not PPO, they are REINFORCE. Suggestions: If you look at Figure 7, it seems that as the training progresses, in the middle, the model becomes worse at predicting the early rewards.
Perhaps this could be fixed by training the reward model on the stored rewards from the early stage as well? The model predictive performance could probably be made to not drop much on the early data. Questions For Authors: How many seeds did you use for all experiments? Code Of Conduct: Affirmed. Overall Recommendation: 3
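The blended reward described in the review summary above, $\alpha r + (1-\alpha) r_c$, can be sketched as follows. This is a hedged illustration, not the authors' implementation: `reward_model` and `cheap_surrogate` are hypothetical stand-ins for the learned RM and the fast surrogate $r_c$.

```python
def blended_reward(state, reward_model, cheap_surrogate, alpha=0.8):
    """Blend a learned reward estimate with a fast surrogate.

    The learned model may be wrong on outlier states, so its prediction
    r is mixed with a cheap-but-crude surrogate r_c:
        alpha * r + (1 - alpha) * r_c
    """
    r = reward_model(state)       # fast, learned, possibly inaccurate
    r_c = cheap_surrogate(state)  # fast, crude, always available
    return alpha * r + (1.0 - alpha) * r_c
```

With `alpha=0.8` (the default reported later in the rebuttal), the learned estimate dominates while the surrogate dampens outlier predictions; `alpha=0` falls back to the surrogate alone.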
Rebuttal 1: Rebuttal: **Q1:** Experiments with more seeds should be reported.

**A1:** In our submission, we reported results based on 3 seeds. To address your concern, we have now conducted experiments with **10 seeds**. The results (see table in comments for details) confirm that our method remains **robust across a larger number of seeds** and consistently **outperforms the SOTA baselines** across all evaluated scenarios. Notably, the improvement is more evident with 10 seeds, as the reduced standard deviations indicate greater statistical reliability. We will update the results in Section 5 to further validate the effectiveness and robustness of our approach.

**Q2:** More ablation studies.

**A2:** We have conducted additional ablation studies on the module described in Section 4.4, and the results are as follows:

| Full | w/o Synchronous Correction | w/o Model Warm-up | w/o Parallel Computation |
|-|-|-|-|
|10.5$\pm$ 0.6 \| 0.31$\pm$ 0.06 | 10.1$\pm$ 0.8 \| 0.29$\pm$ 0.07 | 9.8$\pm$ 0.2 \| 0.24$\pm$ 0.04 | 9.5$\pm$ 0.5 \| 0.29$\pm$ 0.03 |

Removing any component results in a performance decline. Notably, excluding **model warm-up** leads to random rewards in the initial iterations, causing suboptimal policy optimization and a 7% drop in performance. Similarly, removing **parallel computation** reduces training efficiency by limiting the reward model's access to sufficient samples for accurate fine-tuning, resulting in an approximate 10% performance decrease. These results show that each component contributes to the final performance. We will include these results in a new ablation study section (Section 5.6) of the revised version.

**Q3:** Only provides result for the selected hyperparameter, which is unreliable.
**A3:** Besides the grid search results in Figure 8, we would like to provide **practical guidelines** that enable users to efficiently estimate an **approximate range for the optimal hyperparameters**, ensuring the method's reliability and reproducibility without exhaustive tuning. First, the optimal **fine-tuning interval** can be approximated from the complexity of the reward function, which is **positively correlated with its evaluation time**. For example, Table 2 indicates that reward computation in pandemic control is **more time-consuming** than in molecular design, and grid search identified an optimal fine-tuning interval of 9 and 7 (Figure 8a) for the two scenarios, respectively, suggesting that a more complex reward function requires additional samples for effective RM fine-tuning. Second, the optimal number of **fine-tuning epochs** should balance the fine-tuning duration with the policy optimization (see Section 5.5). Specifically, we can measure the time $t_1$ required for one sampling and policy optimization iteration. Then, based on the previously determined fine-tuning interval and the reward function's computation time, we can estimate the number of samples needed for fine-tuning and compute the time $t_2$ per fine-tuning epoch. **The theoretically optimal number of epochs is $N \approx \frac{t_1}{t_2}$**, providing a robust starting point. Searching within a small neighborhood of this estimated value typically yields the optimal setting, as validated in our experiments. We will add the above guidelines on hyper-parameters as Section 4.5 of the revised version.

**Q4:** The RM (reward model) at a late stage becomes worse at predicting the early rewards; using early rewards to fine-tune may help.

**A4:** Thanks for your suggestion.
We would like to first clarify that the reward prediction error you have observed **does not undermine the performance of our method**, as the RM's role is to provide accurate evaluations for the current policy rather than historical ones. Moreover, it is worth emphasizing that Figure 7 showcases that our AdaReMo enables the RM to adapt to the current policy's outputs, as evidenced by **near-zero error values along the diagonal elements**. To further address this, we include samples from previous fine-tuning cycles in RM updates. The updated results for Figure 7 are shown below:

| RM\Precise Reward | 50 | 100 | 150 | 200 |
|-|-|-|-|-|
| 50|**0.87%/1.92%**|3.81%/4.47%|8.13%/5.67%|13.16%/10.41%|
| 100|9.47%/9.12%|**1.32%/1.59%**|8.61%/3.55%|26.78%/16.71%|
| 150|12.35%/8.21%|4.13%/5.19%|**1.31%/2.10%**|7.25%/6.33%|
| 200|4.21%/7.92%|5.04%/6.30%|2.74%/1.52%|**0.04%/0.38%**|

The first value uses only current-cycle samples; the second includes previous-cycle samples. Though off-diagonal errors decreased after using historical samples, **diagonal errors increased significantly**, reducing the RM's ability to accurately evaluate the current policy and **resulting in a 5% performance drop**. This confirms that focusing RM updates on current samples enhances overall optimization effectiveness. We have included the above experiments in Section 5.4 of the revised manuscript.

**Q5:** Equation expressions.

**A5:** Please see our response to Reviewer pywE's Q3.

---

Rebuttal Comment 1.1: Comment: Thank you for the rebuttal.

>Q3: Only provides result for the selected hyperparameter, which is unreliable.

>grid search identified an optimal fine-tuning interval of 9 and 7 (Figure 8a) for the two scenarios

These sections seemed to misunderstand my comment, sorry for any confusion. If you look at Figure 8, where you tested the number of fine-tuning intervals in [1, 3, 5, 7, 9] and the number of epochs in [10, 20, 30, 40, 50], the performance is only good for the choice (7, 40).
All other choices give erratic, poor performance. This seems very unreliable to me; it shouldn't be that sensitive. This is a major concern for me, and I did not see how the rebuttal fixes this concern. Moreover, in your rebuttal you mentioned experiments with 10 seeds, but I don't see which tables you meant. Another remaining question I have is regarding the synchronous correction. While you did address the question and added ablations, I was also interested in the result when only the $r_c$ term is used, i.e., $\alpha=0$. (Sorry if this was not clear in my review.)

---

Reply to Comment 1.1.1: Comment: Dear Reviewer kWsg,

Thank you so much for your prompt reply! We truly appreciate your continued engagement and constructive feedback. We would like to provide the following response to address your remaining concerns.

---

**Q1:** Hyperparameter sensitivity.

**A1:** We sincerely apologize for misunderstanding your earlier comments on this topic. We now provide a thorough analysis with new experimental results to show that **our method is reliable and achieves stable performance across a wide range of hyperparameter values.** **First**, we would like to clarify that the step sizes of grid search in Figure 8 (2 for fine-tuning interval and 10 for the number of epochs) correspond to significant changes in the number of fine-tuning samples. For example, increasing the fine-tuning interval by 2 can introduce up to 150 additional samples, which has a substantial impact on performance.
Therefore, we have conducted experiments with fine-grained values for the fine-tuning interval (6, 7, 8, 9) and the number of epochs (35, 40, 45, 50):

|Interval|6|7|8|9|
|-|-|-|-|-|
|MG(FA7)|10.32 (-1.3%)|**10.46**|10.21 (-2.4%)|10.06 (-3.8%)|
|PC(CA-GrQc)|37.52 (-4.7%)|**39.37**|38.19 (-3.0%)|37.94 (-3.6%)|

|Epoch|35|40|45|50|
|-|-|-|-|-|
|MG(FA7)|10.02 (-4.2%)|**10.46**|10.41 (-0.4%)|9.99 (-4.5%)|
|PC(CA-GrQc)|37.41 (-5.0%)|**39.37**|39.09 (-0.7%)|38.66 (-1.8%)|

We can observe that **performance remains within 5.0% of the optimal value across different hyperparameter values, showing the reliability of our method.** **Second**, results in Figure 8 were obtained after 300 optimization iterations. To further assess sensitivity, we extended optimization iterations to 500 and found that **all tested hyperparameters eventually converge to near-optimal performance, albeit with varying training efficiency, as shown in the table below.**

| Interval | 5 | 6 | 7 | 8 | 9 | 11 |
|-|-|-|-|-|-|-|
|GA(FA7)| 10.04 (-4.0%) | 10.32 (-1.3%) | **10.46** | 10.33 (-1.2%) | 10.37 (-0.9%) | 10.24 (-2.1%) |
|Best Iter| 409 | 312 | **272** | 296 | 387 | 449 |
|PC(CA-GrQc) | 38.10 (-3.2%) | 38.40 (-2.5%) | **39.37** | 38.95 (-1.1%) | 38.32 (-2.7%) | 38.11 (-3.2%) |
|Best Iter| 447 | 352 | **216** | 322 | 343 | 463 |

|Epoch | 30 | 35 | 40 | 45 | 50 | 60 |
|-|-|-|-|-|-|-|
|GA(FA7)| 10.10 (-3.4%) | 10.34 (-1.1%) | **10.46** | 10.42 (-0.4%) | 10.38 (-0.7%) | 10.17 (-2.8%) |
|Best Iter| 461 | 335 | **272** | 310 | 394 | 471 |
|PC(CA-GrQc)| 37.90 (-3.7%) | 38.98 (-1.0%) | **39.37** | 39.09 (-0.7%) | 38.85 (-1.3%) | 38.51 (-2.2%) |
|Best Iter| 380 | 354 | **272** | 292 | 382 | 433 |

**Our method consistently converges to high-quality solutions with less than 4% differences to the optimal one, under any hyperparameters of fine-tuning intervals from 5 to 11 and numbers of epochs from 30 to 60, though they may cost longer time to converge, as reflected by Best Iter in the table.** Thank you again for
raising this important point. To illustrate that our approach is not highly sensitive to hyperparameters, we have added the above discussions to Section 5.5 of the revised manuscript.

---

**Q2:** Results for 10 seeds

**A2:** We are so sorry for the omission of the table. Below, we present the performance of both the SOTA method and our approach under 3 and 10 random seeds to show the robustness of our method.

| Seeds | Method | MG(FA7) T5↑\|HR↑ | PC(CA-GrQc) H↑\|C↑ | SP(HLG) D↓\|G↑ |
|-|-|-|-|-|
|3|SOTA|10.3±0.5\|0.25±0.05|36.7±5.2\|9.9±2.4| 3.06±0.21\|2.60±0.13|
|3|Ours|**10.5±0.6\|0.31±0.06**|**39.4±5.7\|10.6±2.9**|**2.88±0.23\|2.80±0.42**|
|10|SOTA|10.1±0.2\|0.22±0.04|35.8±2.3\|9.8±1.4| 3.01±0.11\|2.63±0.07|
|10|Ours|**10.4±0.2\|0.29±0.03**|**39.8±3.4\|10.4±1.2**|**2.82±0.18\|2.84±0.24**|

**Our method consistently outperforms SOTA baselines across all scenarios under 10 seeds, which reinforces the statistical reliability of the results.** We have incorporated these findings into the revised paper.

---

**Q3:** Synchronous correction with $\alpha=0$.

**A3:** Thank you for highlighting this point. We have conducted additional experiments with $\alpha = 0$. The results are summarized below:

|$\alpha$|MG(FA7) T5↑\|HR↑|PC(CA-GrQc) H↑\|C↑|SP(HLG) D↓\|G↑|
|-|-|-|-|
|0.8(default)|10.5\|0.31|39.4\|10.6|2.88\|2.80|
|0|9.9 (-5.7%)\|0.17 (-45.2%)|33.9 (-14.0%)\|8.5 (-19.8%)|3.12 (+8.3%)\|2.67 (-7.3%)|

Setting $\alpha=0$ **introduces greater stochasticity and bias, leading to an average performance drop of about 17%, with a maximal degradation of up to 45.2%**. In some cases, it even underperforms models trained with heuristic metrics. We have included this analysis in the revised manuscript.

---

Thank you once again for your follow-up and thoughtful questions. We sincerely hope that our responses have addressed your concerns and highlighted the strengths and contributions of our work.
If you find our responses satisfactory, we would greatly appreciate your consideration in raising your score. Sincerely, All authors
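The epoch-count guideline from the authors' A3 above ($N \approx \frac{t_1}{t_2}$, balancing one sampling-plus-policy-optimization iteration against one RM fine-tuning epoch) reduces to a one-line computation. The sketch below is illustrative only; the timings in the usage note are made up, not measurements from the paper.

```python
def suggested_epochs(t_policy_iter: float, t_finetune_epoch: float) -> int:
    """Heuristic starting point for the number of RM fine-tuning epochs.

    Balances the wall-clock cost of fine-tuning against one round of
    sampling + policy optimization: N ~= t1 / t2, rounded, at least 1.
    """
    return max(1, round(t_policy_iter / t_finetune_epoch))
```

For instance, if one policy-optimization iteration takes 120 s and one fine-tuning epoch takes 3 s, the heuristic suggests starting the search around 40 epochs, which is consistent with the grid-search optimum reported in the thread.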
Summary: The paper proposes **AdaReMo (Adaptive Reward Modeling)**, an approach that accelerates Reinforcement Learning (RL) by fine-tuning a reward model (RM) multiple times during training. This reduces the need for expensive reward evaluations. In AdaReMo, the RL agent interacts only with the learned RM, while a parallel process computes ground-truth rewards (which are expensive to obtain) to gradually build a fine-tuning pool. Once this pool reaches a threshold (after a fixed number of RL iterations), the RM is fine-tuned for a pre-defined number of epochs and then redeployed in the RL loop. To ensure reliability, the RM is pre-trained for several epochs before any policy optimization begins. PPO is used as the underlying RL algorithm. Through evaluations on tasks from **Molecular Generation**, **Epidemic Control**, and **Urban Spatial Planning**, the paper shows that AdaReMo outperforms classical, model-free and model-based RL baselines. Claims And Evidence: Training reinforcement learning (RL) agents requires extensive trial and error, which becomes prohibitively time-consuming in systems with costly reward evaluations. The major claim that this paper makes is that the proposed adaptive reward modeling (AdaReMo) paradigm surpasses existing methods in significantly reducing computational overhead and in optimality. I am not fully convinced by this claim, since there are several RL optimization algorithms that serve as substitutes for the proposed AdaReMo as well as for the compared baselines in the paper. Those RL algorithms include REINFORCE/REINFORCE++, RLOO, and GRPO, which circumvent the need for a critic model and bring efficiency. Methods And Evaluation Criteria: In Eq (7), all the node embeddings are averaged and sent to the MLP layer to calculate the reward, which is inconsistent with Eq (4), where each edge is regarded as an action. Theoretical Claims: I did check all the equations.
Experimental Designs Or Analyses: I checked all the experimental designs and analyses. The major concern I have is the need to include other RL optimization algorithms for comparison. Supplementary Material: I checked code and data. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: [+] The paper is well organized and carefully written, making it quite easy to follow. [+] The design seems overall reasonable to me. Detaching the reward model into online and offline splits should improve the efficiency of the RL online learning process. [-] The motivation is still not quite clear to me. If the reward model is of the same size as the policy model (as is the case in RLHF for LLMs), then it is acceptable to obtain the reward. In this paper, the GNN policy and reward model seem even smaller than most of the open-source LLMs. It is not clear to me why the reward modeling is detached into online/offline splits. Other Comments Or Suggestions: Typo: In Eq (7), the h and a should be flipped? Questions For Authors: See other sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer pywE, We express our sincere gratitude for your thorough review and valuable feedback on our paper. Regarding the potential alternative algorithms and the issue of online and offline splits you mentioned, we hope the following replies can address your concerns. **Q1:** REINFORCE/REINFORCE++, RLOO and GRPO serve as substitutes for the proposed AdaReMo. **A1:** Thanks for your constructive comments. We would like to provide more clarification on the necessity of the proposed AdaReMo compared to substitutes like RLOO and GRPO with additional experiments. In fact, although the introduction of the reward model in AdaReMo draws inspiration from RLHF, its primary focus diverges from the RL optimization algorithms mentioned, such as REINFORCE, RLOO, and GRPO. For instance, GRPO avoids using a critic model to compute advantages due to the typically large size of value networks in RLHF. In contrast, the value network in AdaReMo is significantly smaller, rendering the removal of the critic model less critical. Instead, **AdaReMo targets the efficiency imbalance caused by a computationally expensive reward function**, which sets it apart from these alternatives. To evaluate the optimization efficiency of various algorithms, we conducted experiments by replacing PPO with RLOO and GRPO. 
The results are summarized in the table below:

| |Molecular Generation(FA7)|Pandemic Control(CA-GrQc)|Urban Spatial Planning(HLG)|
|-|-|-|-|
| |T5$\uparrow$ \| HR$\uparrow$|H$\uparrow$ \| C$\uparrow$|D$\downarrow$ \| G$\uparrow$|
| RLOO | 10.2$\pm$ 0.4 \| 0.25$\pm$ 0.02 | 34.6$\pm$ 3.8 \| 9.9$\pm$ 1.5 | 3.66$\pm$ 0.38 \| 2.56$\pm$ 0.36|
| GRPO | 9.6$\pm$ 0.2 \| 0.27$\pm$ 0.03|33.1$\pm$ 4.7 \| 8.7$\pm$ 2.1|3.31$\pm$ 0.24 \| 2.28$\pm$ 0.39|
|PPO| **10.5**$\pm$ 0.6 \| **0.31**$\pm$ 0.06 | **39.4**$\pm$ 5.7 \| **10.6**$\pm$ 2.9 | **2.88**$\pm$ 0.23 \| **2.80**$\pm$ 0.42 |

As demonstrated, **AdaReMo combined with PPO consistently outperforms RLOO and GRPO** across multiple metrics and domains. While RLOO and GRPO eliminate the need for a critic model, **they require multiple rounds of sampling**, which increases computational complexity and hinders training efficiency. Furthermore, **the training time per iteration for RLOO and GRPO does not significantly differ from that of PPO**, suggesting that optimizing the critic model is not the primary bottleneck in training efficiency. Additionally, GRPO's method of calculating advantages **can amplify minor differences**, potentially destabilizing the policy optimization process. In summary, these results validate the effectiveness of AdaReMo in addressing reward function complexity. Thank you again for your valuable feedback, and we will incorporate the above discussion and experiments into the appendix of the revised manuscript.

**Q2:** Why detach the reward modeling into online/offline splits?

**A2:** Thank you for your comment. The online/offline splits are proposed to **address the computationally expensive reward functions**, rather than to reduce the computational cost of solution generation. Specifically, **the online system** leverages the reward model to deliver rapid reward estimations, enabling the agent to receive **real-time feedback**.
Without this mechanism, the agent would face significant slowdowns due to the time-intensive reward calculations, as exemplified by applications like RLGN (Meirom et al., 2021) for pandemic control. Conversely, **the offline system** continuously refines the reward model using the latest exploratory samples **to ensure its accuracy**. In the absence of this offline phase, the reward model would fail to provide reliable evaluations for samples generated by the evolving policy, leading to significant errors throughout the optimization process, as illustrated in Figure 6a. In summary, the online/offline splits **effectively balance real-time efficiency with long-term accuracy**. Thank you again for your valuable comment, and we will include the above discussion in Sections 4.3 and 5.4 of the revised manuscript.

**Q3:** Typos in Eq. 4 and Eq. 7.

**A3:** Thanks for pointing out this issue. We have corrected it in the revised manuscript. Specifically, Eq. 4 should be $s_i = \text{MLP}_p(\mathbf{a}_i)$, where $\mathbf{a}_i$ denotes the embedding of action $a_i$. We have checked the entire paper and added the following corrections, as Reviewer kWsg suggested:

- Eq. 1: $\max_{\Theta} E_{\pi_\Theta} \left[ \sum_{t=0}^{T} \gamma^t r(s_t, a_t) \right]$
- Eq. 5: $\nabla_{\Theta} J(\Theta) = E \left[ \min \left( r_t(\pi_\Theta) \hat A_t, \text{clip}(r_t(\pi_\Theta), 1 - \epsilon, 1 + \epsilon) \hat A_t \right) \right]$

We hope these responses fully address your concerns and welcome any further questions or suggestions you may have.
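The corrected Eq. 5 above is the standard PPO clipped surrogate. A minimal numpy sketch of that objective (per-batch, before any gradient step) is given below; this is the generic PPO form, not the authors' implementation.

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped PPO surrogate: mean of min(r*A, clip(r, 1-eps, 1+eps)*A),
    where r = pi_new(a|s) / pi_old(a|s) is the probability ratio."""
    ratio = np.exp(logp_new - logp_old)          # importance ratio r_t
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return np.minimum(unclipped, clipped).mean()
```

When the new and old policies agree, the ratio is 1 and the objective reduces to the mean advantage; when the ratio drifts outside $[1-\epsilon, 1+\epsilon]$, the clip caps the incentive to move further.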
Summary: The authors propose **Adaptive Reward Modelling (AdaReMo)**, an approach that accelerates Reinforcement Learning (RL) by fine-tuning a reward model (RM) multiple times during training. This reduces the need for expensive reward evaluations. In AdaReMo, the RL agent interacts only with the learned RM, while a parallel process computes ground-truth rewards (which are expensive to obtain) to gradually build a fine-tuning pool. Once this pool reaches a threshold (after a fixed number of RL iterations), the RM is fine-tuned for a pre-defined number of epochs and then redeployed in the RL loop. To ensure reliability, the RM is pre-trained for several epochs before any policy optimization begins. PPO is used as the underlying RL algorithm. Through evaluations on tasks from **Molecular Generation**, **Epidemic Control**, and **Urban Spatial Planning**, the paper shows that AdaReMo outperforms classical, model-free and model-based RL baselines. Claims And Evidence: The claims about the effectiveness of AdaReMo are well-supported by the empirical results. Methods And Evaluation Criteria: The methodology and evaluation criteria are appropriate and well-justified. Theoretical Claims: No formal theoretical results are presented, but the approach is grounded in sound reasoning. Experimental Designs Or Analyses: The experimental design is clearly described and appears sound. Supplementary Material: No supplementary material was provided. Relation To Broader Scientific Literature: **Key Contribution**: - A method for *adaptive and efficient reward modelling* that avoids expensive reward evaluations by strategically fine-tuning a reward model. This aligns with ongoing work in sample-efficient RL and learned reward estimation, while targeting real-world domains with costly simulation steps. Essential References Not Discussed: None that are obviously missing. Other Strengths And Weaknesses: **Strengths**: - Clear, concise writing. 
- The algorithm design is sensible and addresses a practical bottleneck in real-world RL settings. **Weaknesses**: - No major weaknesses observed. Other Comments Or Suggestions: - Line 235: "reduce-scaled direct evaluations" → *reduced-scale direct evaluations* - Line 301: "records" → *record* - Line 313: "have" → *has* - Line 319: "KDE" → *KED* - Line 404: "molecular" → *molecules* Questions For Authors: 1. **Is the warm-up cost included in the plots in Figure 6a?** Clarifying this would help interpret the reported efficiency gains of AdaReMo compared to the baselines. If it is excluded, a brief discussion of the warm-up cost tradeoff might be valuable. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer Q9Xy, We would like to express our sincere gratitude for your thorough review and constructive feedback on our paper. We are particularly pleased that you have recognized the effectiveness of our proposed approach, AdaReMo, in solving the efficiency bottleneck between policy optimization and computationally expensive reward calculation. We hope the following responses address your concerns. **Q1:** Is the warm-up cost included in Figure 6a? A brief discussion of the warm-up cost tradeoff might be valuable. **A1:** Thank you for your insightful comments. Figure 6a illustrates the training process from when policy optimization begins, and the warm-up cost is not included. Specifically, the model warm-up takes only one fine-tuning interval, accounting for only **3% of the total optimization time**. We also conducted additional ablation experiments by removing the warm-up phase. Without model warm-up, we observed a performance **decrease of 7% across the evaluated metrics**. This decline arises because the policy is initially trained with randomly generated rewards, which introduces additional noise and hinders optimization. These results underscore the critical role of model warm-up in enhancing training efficiency, leveraging a modest (3%) time investment to yield substantial improvements in model performance. We will add the above results and further discussion into a new ablation study section (Section 5.6) of the revised version. **Q2:** Typos and grammatical errors. **A2:** We sincerely thank the reviewer for their thorough and careful scrutiny. In response to your feedback, we have thoroughly reviewed the manuscript, correcting all identified typos and grammatical mistakes. We hope the above replies resolve your concerns and welcome any further questions.
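The warm-up gating discussed in A1 above can be illustrated with the toy sketch below. It is an assumption-laden simplification: the "model" here just predicts the pool mean of the ground-truth rewards collected so far, standing in for the actual RM, and `warmup_size` is a hypothetical threshold.

```python
class WarmupRewardModel:
    """Toy sketch of warm-up gating: the learned reward model is only
    consulted after it has been fitted on an initial pool of slow,
    ground-truth evaluations; until then, policy optimization waits."""

    def __init__(self, warmup_size=32):
        self.warmup_size = warmup_size
        self.pool = []          # (state, true_reward) pairs from the slow evaluator
        self.mean_reward = 0.0  # trivial placeholder 'model': pool mean

    def add_evaluation(self, state, true_reward):
        self.pool.append((state, true_reward))
        if self.ready():
            self.mean_reward = sum(r for _, r in self.pool) / len(self.pool)

    def ready(self):
        return len(self.pool) >= self.warmup_size

    def predict(self, state):
        assert self.ready(), "still warming up: keep collecting true rewards"
        return self.mean_reward
```

The point of the gate is the failure mode the rebuttal describes: querying the model before warm-up would return essentially random rewards and mislead early policy optimization.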
Semantic Shift Estimation via Dual-Projection and Classifier Reconstruction for Exemplar-Free Class-Incremental Learning
Accept (poster)
Summary: This paper focuses on Exemplar-Free Class-Incremental Learning (EFCIL), which aims to solve the problem of catastrophic forgetting without retaining any exemplars. Existing methods alleviate forgetting by storing distributional information about past tasks. However, as models are updated, past distribution information becomes outdated due to semantic shifts, and thus cannot properly represent past classes. This paper proposes a Dual-Projection Shift Estimation method to estimate and calibrate this shift. In addition, this paper uses ridge regression for offset estimation to reformulate classifier training as a reconstruction process, thereby alleviating the classifier's bias towards new categories. The proposed method achieved outstanding performance on multiple benchmark datasets. Claims And Evidence: The paper illustrates the claims made with figures and supports them by citing previous work. The paper provides experiments that demonstrate that the proposed classifier reconstruction can strike a better balance between plasticity and stability. Methods And Evaluation Criteria: The proposed method is meaningful for the domain of Exemplar-Free Class-Incremental Learning it focuses on. Semantic shift estimation has been extensively studied in EFCIL. The method proposed in this paper further considers the semantic shift differences between different classes compared to previous work, although this entails a more complex computation. Theoretical Claims: I have specifically checked the correctness of the proofs for the theoretical claims in this paper. Experimental Designs Or Analyses: Yes. This paper follows the previous experimental design on three benchmark datasets. Experiments on larger datasets would be beneficial to further demonstrate the effectiveness and generalizability of the proposed method. Supplementary Material: N/A. Relation To Broader Scientific Literature: 1.
Compared to previous work that only focused on task-level shifts, this paper further considers the differences in semantic shifts between different categories. 2. This paper uses ridge regression to reconstruct the classifier, instead of retraining a general classifier, to address the decision bias of the classifier. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. This paper considers both semantic shifts and decision biases. 2. The proposed method achieves state-of-the-art performance on multiple datasets. Weaknesses: 1. There have been some previous works that estimate semantic drift or apply feature projection [1,2,3]. The advantages of the proposed method over previous works, and how it solves their limitations, should be elaborated to help readers understand the novelty of the method. 2. Referring to Eq. 8 and Eq. 9, does the CIP need to learn and store a linear projector for each class at each subsequent incremental phase? If so, the additional computational and storage costs incurred by these projectors should be discussed, compared to the case where they are not used. Would this become a significant burden on large-scale datasets? 3. The steps of using the uncentered covariance and singular value decomposition seem to result in information loss. The error caused by this information loss gradually accumulates for classes learned earlier as the number of subsequent stages increases. 4. Lack of details on training, such as learning rate and weight of distillation loss, etc. Reference: [1] Bring evanescent representations to life in lifelong class incremental learning. CVPR 2022. [2] Elastic feature consolidation for cold start exemplar-free incremental learning. ICLR 2024. [3] Prospective Representation Learning for Non-Exemplar Class-Incremental Learning. NeurIPS 2024. Other Comments Or Suggestions: None. Questions For Authors: Please see weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to Reviewer V6Ai

Thank you for your constructive and detailed feedback. We provide detailed responses to your concerns below. The discussion and experiments below will be properly included in the manuscript.

### Q1: Elaborate the advantages of proposed methods over existing works [R1,R2,R3].

Response: The limitations of existing works [R1,R2,R3] can be summarized as follows, respectively. 1. Toldo et al. [R1] propose to train multi-layer perceptrons or variational autoencoders to estimate the semantic shift, intensifying the training cost. 2. EFC [R2] utilizes Gaussian kernels to capture the shift by estimating the translation of feature means, potentially neglecting other types of shift transformations, including scaling. 3. PRL [R3] proposes a feature projection to align the current embedding space with a pre-defined latent space, rather than estimating the semantic shift; also, PRL does not consider different classes within a training task. Our DPCR provides a DP to estimate the semantic shift in an efficient and effective way via a linear projection that can be calculated directly without iterative BP training. Also, the CIP in DP includes class-specific knowledge. To demonstrate the effectiveness of DPCR, we further include experiments comparing with EFC and PRL (we did not find the released code of [R1]). The results show the high performance of our DPCR.

|CIFAR-100|T=10|| T=20||
|-|-|-|-|-|
|(%)|A_last|A_avg|A_last|A_avg|
|EFC|45.01|59.44|34.12|48.63|
|PRL|45.32|57.08|28.43|41.98|
|**DPCR**|**49.59**|**62.13**|**37.79**|**54.48**|

### Q2: Does CIP need to learn and store a linear projector for each class at each subsequent incremental phase?

Response: CIP is a training-free projection to include class-specific knowledge, and it does not need to be stored. Actually, during the training of DPCR, CIP is calculated from the uncentered covariance via SVD (see Eq.
(6) and (7)), and it will be used to calibrate the old information set with TSSP and then discarded. Only the covariance and prototype $\{\hat{\Phi}_{c}^{\theta_{t}}, \hat{\mu}_{c}^{\theta_{t}}\}$ need to be stored in DPCR. The operations in Eq. (8) and Eq. (9) are used to demonstrate, without loss of generality, how the old features of a far-previous task are calibrated, and this process does not necessarily need to be performed in the implementation of DPCR. In our implementation, the calibration is conducted across two adjacent tasks (see Algorithm 1). We will further clarify this to avoid potential misunderstanding.

### Q3: Using SVD seems to result in information loss.

Response: CIP is used to inject class-specific information by projecting the projector in TSSP onto significant directions. The construction of the projector for CIP maintains all the singular vectors corresponding to non-zero singular values and only discards the information of the null space, which we believe is not necessarily needed. To validate the effect of the null-space information, we perform an experiment on CIFAR-100 with all the singular vectors used to construct the projector of CIP (denoted as DPCR-wn). The results are shown below, and we find that including null-space information does not affect the performance; thus, the information loss may not have a large impact on the performance.

|CIFAR-100|T=10|| T=20||
|-|-|-|-|-|
|(%)|A_last|A_avg|A_last|A_avg|
|DPCR-wn|49.59|62.14|38.04|54.25|
|DPCR|49.59|62.14|38.04|54.25|

### Q4: Lack of details of training.

Response: As indicated in line 310 (left), the implementation details are provided in Appendix B. We will move them to the manuscript to avoid possible omission.

### Q5: Experiments on larger datasets are beneficial.

Response: As suggested, we conduct the validation on ImageNet-1000 with T=10 using ResNet-18 under the same seed of 1993. The hyperparameters are the same as those in the ImageNet-100 experiments.
The results show that our DPCR is still competitive on ImageNet-1000.

|ImageNet-1k (T=10)|A_last|A_avg|
|-|-|-|
|LwF|22.01|42.40|
|ACIL|32.28|46.61|
|DSAL|33.67|48.84|
|LDC|35.15|53.88|
|ADC|31.34|50.95|
|**DPCR**|**35.49**|**54.22**|

[R1] Bring evanescent representations to life in lifelong class incremental learning. CVPR 2022.
[R2] Elastic feature consolidation for cold start exemplar-free incremental learning. ICLR 2024.
[R3] Prospective Representation Learning for Non-Exemplar Class-Incremental Learning. NeurIPS 2024.
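As an illustrative sketch of the projector construction described in the Q3 response above (this is not the authors' code; the function name, input shape, and tolerance are assumptions), the CIP-style projector keeps the singular directions of a class's gram matrix with non-zero singular values and discards only the null space:

```python
import numpy as np

def cip_projector(features, tol=1e-10):
    """Hypothetical sketch: build a projector onto the significant
    directions of a class's gram (uncentered covariance) matrix,
    discarding only the null-space directions."""
    gram = features.T @ features                 # d x d gram matrix
    u, s, _ = np.linalg.svd(gram)                # SVD of the symmetric gram matrix
    u_r = u[:, s > tol]                          # keep non-zero singular directions
    return u_r @ u_r.T                           # orthogonal projector onto their span

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))                  # 8 samples, 4-dim features (made up)
p = cip_projector(x)
```

Applying `p` to any feature vector leaves its component in the retained subspace unchanged; only the null-space component, which the response argues is not needed, is removed.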
Summary: This paper proposes a method called Dual-Projection Shift Estimation and Classifier Reconstruction (DPCR) to solve two key challenges in Exemplar-Free Class-Incremental Learning (EFCIL): semantic shift and decision bias. It reconstructs the classifier through the dual projection mechanism and ridge regression, effectively balancing the learning of new and old knowledge. Experimental results show that DPCR outperforms existing EFCIL methods on multiple benchmark datasets.

Claims And Evidence: This paper claims to address semantic shift via the proposed DP, and decision bias via RRCR. The claims are supported by the experiments.

Methods And Evaluation Criteria: The methods are illustrated clearly. The evaluation criteria are standard. A slight drawback is the lack of evaluation on large datasets like ImageNet-1000. Besides, this paper lacks an evaluation of the complexity of the method.

Theoretical Claims: There are no new theoretical results.

Experimental Designs Or Analyses: The experiment designs and analyses are adequate. However, some figures lack horizontal labels (Figure 4).

Supplementary Material: In the supplementary material, the authors provide the pseudo-code of DPCR and results compared to exemplar-based methods. Also, additional experiments on CN are provided.

Relation To Broader Scientific Literature: No

Essential References Not Discussed: No

Other Strengths And Weaknesses: No

Other Comments Or Suggestions: Overall, this paper proposes a novel method in NECIL and seems to achieve promising results with reasonable approaches. Despite some drawbacks, the paper seems good.

Questions For Authors: No

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to Reviewer YwAB

Thank you for your constructive and detailed feedback. We provide detailed responses to your concerns below. The discussion and experiments below will be properly included in the manuscript.

### Q1: Lack of evaluation on larger datasets like ImageNet-1000.

Response: As suggested, we validate DPCR and the compared methods on ImageNet-1000 with T=10. We use the same ResNet-18 backbone and the same seed of 1993 to conduct the experiments. The hyperparameters are the same as those in the ImageNet-100 experiments. The results are reported below, and our DPCR is still competitive on ImageNet-1000. These results will be included in the manuscript.

|ImageNet-1k (T=10)|A_last|A_avg|
|-|-|-|
|LwF|22.01|42.40|
|ACIL|32.28|46.61|
|DSAL|33.67|48.84|
|LDC|35.15|53.88|
|ADC|31.34|50.95|
|**DPCR**|**35.49**|**54.22**|

### Q2: Lack of complexity analysis.

Response: As suggested, we will include a complexity analysis. Since incremental representation learning is common in the CIL realm, here we analyze the complexity of DP and RRCR. The complexity analysis will be included as follows. Suppose $F$ is the FLOPs of the model ($F \approx 1.8\times10^{9}$ for ResNet-18); the time complexity of DP, $\mathcal{O}(FN_t + N_td^2 + tC(d^3 + C^2d))$, is the sum of those of the feature extraction $\mathcal{O}(FN_t)$, the calculation of the task-wise projection matrix $\mathcal{O}(N_td^2+d^3)$, and the rectification of each class $\mathcal{O}(d^3+C^2d)$. Similarly, the time complexity of RRCR is $\mathcal{O}(d^2N_c + dtC + td^2+d^3)$.

### Q3: Some figures lack horizontal labels.

Response: Thank you for pointing this out. We will check and include horizontal labels for all the figures.
Summary: The paper studies exemplar-free class-incremental learning and focuses on addressing two major problems: semantic shift and decision bias. The authors propose to use dual projection to estimate semantic shift, which includes both class-wise and task-wise shifts, and a ridge-regression-based classifier instead of a learnable classifier to address the problem of decision bias. The proposed method achieves a better stability-plasticity trade-off among existing methods on EFCIL benchmarks.

Claims And Evidence: Most claims are validated. Some claims need better explanation and validation:
1. Why is using Ridge Regression-based Classifier Reconstruction better than other methods like NCM or FeCAM [X], which also employ covariance-based classification? What is the motivation for introducing this RR-based classifier? Does RRCR improve over Mahalanobis-distance-based classification as done in [X] for EFCIL?
2. Line 066-069: "However, its effectiveness heavily depends on the quality of the learned representations, which are susceptible to degradation caused by semantic shift." This motivation is neither clear nor justifies introducing a new type of classifier. Existing methods like LDC and ADC exactly solve the semantic shift problem. I would ask the authors to justify why it is required to introduce a Ridge Regression-based Classifier in the context of EFCIL and what the benefits of using it are.

[X] FeCAM: Exploiting the heterogeneity of class distributions in exemplar-free continual learning. In Advances in Neural Information Processing Systems, 2023.

Methods And Evaluation Criteria: The evaluation criteria are appropriate for the EFCIL setting. The compared methods are very relevant and recent. Additionally, I would propose to include evaluation on some fine-grained classification datasets, similar to what is done in ADC [CVPR 24], for more rigorous experimentation.

Theoretical Claims: I do not find any errors in the theoretical formulation.
Minor suggestion: It is important to mention that the covariance matrix referred to in Equation 15 is not centered on the mean and is also not normalized. This can be referred to as a gram matrix, as done in [Y].

[Y] Ranpac: Random projections and pre-trained models for continual learning. Advances in Neural Information Processing Systems. 2023.

Experimental Designs Or Analyses: The experimental setup is sound and valid, following standard practice in EFCIL. The analysis and ablation studies are extensive.

Supplementary Material: Yes, the supplementary material adds more clarity with the pseudo-code of the method, implementation details, and comparison with exemplar-based methods.

Relation To Broader Scientific Literature: The contributions of this paper aim to address the problems of semantic shift and task-recency bias discussed in several existing CL works.

Essential References Not Discussed:
1. Most essential references are discussed.
2. Some discussion on FeTrIL [WACV 23] would be useful in the context of the paper, since they also propose an alternative SVM-based classifier instead of training a linear classifier. A comparison with a recent work, FeCAM [NeurIPS 23], could be added, which also uses the covariance of features for classification.
3. It is important to acknowledge that learning a projector for estimating semantic shift in prototype means was also proposed by LDC (ECCV 24). While the proposed method builds on top of this concept, the authors do not acknowledge this in the paper.
4. While the paper focuses largely on semantic shift, it does not have much discussion on existing drift compensation or resistant methods like LDC and ADC. Adding some discussion on this to better understand existing methods solving the same problem would add more clarity. Why are these methods not good enough, or how is the proposed method addressing their limitations? This would improve the motivation of the proposed method.
5. References to ridge regression theory are missing.
Other Strengths And Weaknesses:
Strengths: The paper is well written and organized, with extensive experiments and ablations, and good illustrations.
Weaknesses: Discussed above. My main concern is that the motivation for using Ridge Regression-based Classifier Reconstruction is not convincing. More discussion on "why use RRCR" is important. Some more comparisons, discussion, and validation as mentioned above will improve the paper. I am willing to increase my rating if the authors address my major concerns.

Other Comments Or Suggestions: Writing mistakes:
1. Line 189 (right side) - "Then we can constructed the projector of CIP as".

Questions For Authors:
1. How is decision bias different from task-recency bias, which is discussed in many CL works? It is better to stick to existing terms instead of defining the same thing with a different term.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## Response to Reviewer spyP

Thank you for your constructive feedback. We provide detailed responses to your concerns below. The discussion and experiments will be properly included. Also, we will check and revise all the writing mistakes, and include references for ridge regression.

### Q1: Comparison between RRCR and the NCM/Mahalanobis-distance classifier in FeCAM.

Response: RRCR is a parametrized classifier whose decision boundaries can be directly modified by training the parameters. NCM and Mahalanobis-distance classifiers are non-parametric classifiers that rely on feature distributions, and the absence of trainable parameters limits their adaptability across tasks. Although they can avoid decision bias and achieve better performance than existing parametrized classifiers, RRCR mitigates this bias via reconstruction while keeping the benefits of a parametrized approach. To validate this, we compare RRCR with NCM and Mahalanobis classifiers on CIFAR-100 (T=10) using the same frozen backbone after task 1 (following FeCAM). RRCR surpasses both.

|CIFAR-100 (T=10)|A_last(%)|A_avg(%)|
|-|-|-|
|FeCAM|30.41|45.59|
|NCM|18.77|36.16|
|**RRCR**|**33.28**|**49.16**|

RRCR also better adapts to semantic shift during CIL via reconstruction. FeCAM freezes the backbone to maintain consistent covariances, but this impedes plasticity. Once the backbone evolves, prior class covariances become outdated and hard to update without storing previous data, due to the centralization needed to obtain the covariance. RRCR, by contrast, recalibrates using $\{\hat{\Phi}_{c}^{\theta_{t}}, \hat{\mu}_{c}^{\theta_{t}}\}$ via matrix operations, avoiding the need to retain prior data or embeddings.

### Q2: Motivation of RRCR.

Response: In this paper, we propose RRCR to address decision bias in parametrized classifiers during CIL. Non-parametric classifiers like NCM lack adaptability due to their static structure and inability to directly modify decision boundaries.
While techniques like ADC/LDC try to offset semantic shift, they remain limited by non-learnable classifiers. Existing parametrized classifiers can directly learn from labels but tend to become biased without old data. RRCR fills this gap, offering the benefits of a learnable model while correcting bias via reconstruction. We will clarify this motivation in lines 066-069.

### Q3: Evaluation on fine-grained datasets.

Response: As suggested, we include results on the fine-grained dataset CUB-200 with T=5 in the cold-start setting with the same seed. The results show that DPCR also leads the compared methods.

|CUB200 (T=5)|A_last(%)|A_avg(%)|
|-|-|-|
|LwF|25.40|36.38|
|ACIL|21.14|33.14|
|DSAL|21.28|32.36|
|SDC|24.21|36.00|
|LDC|28.70|39.09|
|ADC|28.84|39.44|
|**DPCR**|**29.51**|**40.62**|

### Q4: Covariance vs. Gram Matrix.

Response: We will replace "uncentered covariance" with "gram matrix".

### Q5: Compare with FeCAM.

Response: We compare DPCR with FeCAM and obtain the results of FeCAM via the official implementation under the same seed of 1993. Our DPCR outperforms FeCAM, which can be attributed to the fact that FeCAM needs to freeze the backbone, limiting plasticity. Our DPCR can take advantage of incremental representation learning with semantic shift estimation, achieving a good stability-plasticity balance.

|CIFAR-100|T=10||T=20||
|-|-|-|-|-|
|(%)|A_last|A_avg|A_last|A_avg|
|FeCAM|34.82|49.14|25.77|41.21|
|**DPCR**|**50.08**|**63.43**|**41.62**|**56.01**|

|Tiny-ImageNet|T=10||T=20||
|-|-|-|-|-|
|(%)|A_last|A_avg|A_last|A_avg|
|FeCAM|29.83|42.19|22.69|34.48|
|**DPCR**|**35.50**|**47.73**|**26.67**|**38.80**|

|ImageNet-100|T=10||T=20||
|-|-|-|-|-|
|(%)|A_last|A_avg|A_last|A_avg|
|FeCAM|41.92|58.21|28.64|43.04|
|**DPCR**|**53.46**|**68.20**|**40.76**|**57.81**|

### Q6: Acknowledge LDC and add discussion of LDC, ADC, and FeTrIL.

Response: We will include the discussion as follows.
To estimate the semantic shift, ADC uses adversarial samples to estimate the translation of prototypes, and LDC introduces a learnable projection. However, ADC only considers the translation component of the shift, neglecting other transformations including scaling, and LDC only estimates the shift across tasks without class-specific information. Moreover, LDC needs iterative BP-training to obtain the projector, incurring more computational cost. Inspired by the learnable projector in LDC, DPCR achieves shift estimation via DP with a low-cost closed-form solution and class-specific information. FeTrIL uses prototype-based pseudo-features to rebalance classifier training and employs a LinearSVC. However, it freezes the backbone, reducing plasticity. Also, the effect of LinearSVC in CIL is not thoroughly studied.

### Q7: Decision bias vs. Task-recency bias?

Response: Task-recency bias refers to the model's tendency to favor new tasks. Several works attribute it to bias in the classifier. We use "decision bias" to clarify that the bias originates in the classifier. While similar in outcome, we prefer the more specific term **decision bias** for better understanding and will clarify this in the text.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response to the questions. Most of my concerns are addressed. However, the motivation for using RRCR is still not well-formed. I do not agree with the authors' rebuttal statement "RRCR is a parametrized classifier where the decision boundaries can be directly modified by training the parameters". Can the authors specify what exactly the trainable parameters in the RRCR classifier are? As far as I understand, the model feature extractor is trainable in new tasks, unlike in NCM or FeCAM, but the RRCR classifier is not trainable and is also a non-parametric classifier which simply uses the feature distributions. Why is RRCR referred to as a parametrized classifier? This is still not clear to me, and more clarification is required.
---

Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our response! We apologize that our delivery might have led to the misunderstanding that RRCR is non-parametric, since the formulation of RRCR is **very different** from the training of existing parametrized classifiers. We provide the clarification as follows. As indicated in line 223, RRCR constructs a **trainable classifier parametrized by $W_t$**, and the parameters can be **learned via the least-square solution in Eq. 12**. The classifier in RRCR is basically a **fully connected network without bias**, and the forward process can be formulated as $Y = XW_{t}$. The objective function for training the classifier in RRCR is the ridge regression in Eq. 11, and the optimal solution can be obtained analytically without gradient-based optimization on the classifier. To adapt the training in Eq. 11 to each task in CIL, we further decompose the solution into a category-wise form (Eq. 13), and this may lead to the misunderstanding that RRCR is simply based on the feature distributions. However, the solution originates from the training result of the objective function in Eq. 11.

Eq. 11: $\underset{W_{t}}{\operatorname{argmin}}~\lVert Y_{1:t} - X_{1:t}^{\theta_1} W_{t}\rVert_{\text{F}}^{2} + {\gamma}\lVert W_{t}\rVert_{\text{F}}^{2}$

Eq. 12: $\hat{W_t} = (\sum_{i=1}^{t} X_{i}^{\theta_{t}\text{T}} X_{i}^{\theta_{t}}+ \gamma {I})^{-1}\sum_{i=1}^{t} X_{i}^{\theta_{t}\text{T}} Y_i$

Eq. 13: $\hat{W_t} = (\sum_{i=1}^t \sum_{c \in \mathcal{C}_i}^{|\mathcal{C}_i|} \Phi^{\theta_t} _ {i,c} + \gamma {I})^{-1} \sum^t _ {i=1} \sum _ {c \in \mathcal{C}_i}^{|\mathcal{C}_i|} H^{\theta_i} _ {i,c}$

The training of the classifier in RRCR differs significantly from the backpropagation used in existing classifier training, which may have led to misunderstanding. We sincerely apologize for any lack of clarity in our previous presentation of RRCR and will provide further explanations to address potential confusion.
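The closed-form ridge regression solution discussed in this reply can be sketched as follows (an illustrative sketch in the spirit of Eq. 12, not the authors' implementation; the data shapes and regularization value are assumptions):

```python
import numpy as np

def ridge_classifier_weights(X, Y, gamma=1.0):
    """Closed-form ridge solution W = (X^T X + gamma I)^(-1) X^T Y,
    where X is an (n, d) feature matrix and Y an (n, C) one-hot label
    matrix. No gradient-based optimization is needed."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + gamma * np.eye(d), X.T @ Y)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 16))               # hypothetical extracted features
labels = rng.integers(0, 3, size=100)
Y = np.eye(3)[labels]                            # one-hot targets
W = ridge_classifier_weights(X, Y, gamma=0.1)    # the trainable parameters W_t
scores = X @ W                                   # forward pass Y = X W_t
```

Since $X^\text{T}X = \sum_i X_i^\text{T}X_i$, the gram matrix and the cross term $X^\text{T}Y$ can be accumulated task by task, which is what the per-task (Eq. 12) and category-wise (Eq. 13) decompositions exploit.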
BEST-Route: Adaptive LLM Routing with Test-Time Optimal Compute
Accept (poster)
Summary: This work proposes BEST-Route, a method that combines LLM routing with test-time compute scaling. Given a pool of LLMs and an input query, a router determines not only which LLM to route the query to, but also the number of responses to sample from it before applying Best-of-N (BoN) sampling. Details about training the router and the proxy reward model for BoN sampling are explained. Empirical results on various benchmarks show that the proposed method achieves a better cost-accuracy trade-off than baseline approaches.

**Update after rebuttal:** I'd like to thank the authors for their answers to my clarification questions, and will keep my positive rating unchanged.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense to me.

Theoretical Claims: There is no theoretical claim in this work.

Experimental Designs Or Analyses: I have checked the experimental designs and analyses, which make sense to me. I only have a clarification question about cost calculation; see Question 1 below.

Supplementary Material: I have reviewed the appendix, which looks good to me.

Relation To Broader Scientific Literature: Both LLM routing and test-time scaling (via BoN sampling) have been well studied in prior works. The key novelty in this work is the combination of both, i.e., boosting accuracy by BoN sampling when the query is routed to a smaller model, which can still be cheaper than calling the most expensive LLM once.

Essential References Not Discussed: Not that I'm aware of.

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors: Some questions for clarification:
1. For the calculation of cost[(M, i)] in Algorithm 1, the number of sampled responses $i$ is multiplied only with the output length, not with the input length. Is this a typo or intentional?
Are the costs in the experiment results calculated in the same way? This calculation seems to rely on the assumption that the input tokens are only charged once when sampling $i$ responses for the same prompt; otherwise, it would be an underestimate of the actual cost of the proposed method.
2. What does "Random" in the legend of Figure 4 mean?
3. In Line 425 Left, the authors mention that increasing $n$ has a marginal impact on overall latency. Is this due to the implementation of batch inference and parallelism?
4. Could you provide some context about the armoRM score, e.g., how significant is an increase from 0.112 to 0.126?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: For the calculation of cost[(M, i)] in Algorithm 1, the number of sampled responses i is multiplied only with the output length, not with the input length. Is this a typo or intentional? Are the costs in experiment results calculated in the same way? This calculation seems to rely on the assumption that the input tokens are only charged once when sampling i responses for the same prompt; otherwise, it would be an underestimate of the actual cost by the proposed method.**

A1: Thanks for the question. This way of calculating cost is intentional. Most modern LLMs support returning multiple responses at once for a given query. For example, you can set the "num_return_sequences" hyper-parameter for HuggingFace LLMs (https://huggingface.co/docs/transformers/en/main_classes/text_generation) to tune the number of independently computed returned sequences for each query. Given this, we only need to feed the prompt/input tokens once to get multiple response samples. Therefore, "input tokens are only charged once when sampling i responses for the same prompt". We will clarify more on this in the revision.

**Q2: What does "Random" in the legend of Figure 4 mean?**

A2: "Random" stands for the baseline which randomly routes queries between the large and small models at different ratios. This baseline has been adopted in prior binary routing work [1, 2].

**Q3: In Line 425 Left, the authors mention that increasing n has a marginal impact on overall latency. Is this due to implementation of batch inference and parallelism?**

A3: Thank you for the question. This is for the same reason as explained in A1. The overall latency comprises the prefilling latency (processing the prompt tokens) and the decoding latency (generating output tokens).
Since we generate n responses in parallel with the same prompt (e.g., https://github.com/huggingface/transformers/blob/v4.50.0/src/transformers/generation/utils.py#L2317), increasing n only leads to a modest decoding latency overhead and marginal overall latency increments (see Figure 6 in our paper). We use a batch size of 1, but larger batches may further reduce latency and are worth future exploration.

**Q4: Could you provide some context about the armoRM score, e.g., how significant is an increase from 0.112 to 0.126?**

A4: Thanks for the question. The armoRM score [3] is a comprehensive response quality metric that aggregates 19 evaluation perspectives, including Helpfulness, Correctness, Coherence, and Verbosity. It is constructed by collecting ground-truth ratings for each perspective, followed by normalization, de-biasing, and weighted summation into a unified score ranging from -1 to 1. An increase from 0.112 to 0.126 reflects meaningful improvements across multiple quality dimensions. In our evaluation, Mistral-7B scores 0.112 on average, while GPT-4o reaches 0.126, consistent with benchmarks like MMLU [4], where GPT-4o surpasses 85% accuracy vs. ~60% accuracy for Mistral-7B, underscoring the significance of this gap. Qualitatively, we observe that responses at 0.112 are generally helpful but limited, whereas higher-scoring responses (e.g., 0.126) offer deeper insight and more comprehensive guidance. We provide one example to illustrate this difference.

"Query: Is it normal to have a fever when I'm sick?

Response 1 (armoRM = 0.112): Yes, having a fever when you're sick often indicates that your body is fighting off an infection or illness. Fever is a natural defense mechanism whereby your body's temperature increases to create an environment less conducive for pathogens to multiply.

Response 2 (armoRM = 0.127): Yes, it is common to have a fever when you're sick. A fever is your body's natural response to fighting off an infection.
It indicates that your immune system is actively working to fight the pathogens causing the illness. **However, if your fever is above 101°F (38.3°C) and persists for more than a couple of days, it's a good idea to seek medical advice to ensure there isn't a more serious underlying condition.**"

Both Response 1 and Response 2 cover the point that "fever is a natural defense mechanism". However, Response 2 further enriches the answer by discussing the potential danger of a persisting high fever and suggests that users seek medical advice in such cases, which could be life-critical in healthcare consultations and is missing from Response 1.

**Thank you for your time and consideration. We sincerely hope that you find our responses convincing and would consider increasing your rating.**

References:
[1] Ding, Dujian, et al. "Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing." ICLR. 2024.
[2] Ong, Isaac, et al. "RouteLLM: Learning to Route LLMs from Preference Data." ICLR. 2025.
[3] Wang, Haoxiang, et al. "Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts." EMNLP Findings. 2024.
[4] Hendrycks, Dan, et al. "Measuring Massive Multitask Language Understanding." ICLR. 2021.
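The cost rule explained in A1 of the rebuttal above (input tokens charged once, output tokens charged per sample) can be sketched as follows; the function name and per-token prices are made-up illustrative values, not the paper's Algorithm 1 or real provider pricing:

```python
def query_cost(input_len, output_len, n_samples, price_in, price_out):
    """Illustrative cost rule: the prompt is fed once, so input tokens
    are charged once, while each of the n sampled responses is charged
    for its output tokens."""
    return input_len * price_in + n_samples * output_len * price_out

one_call  = query_cost(1000, 200, 1, 0.002, 0.006)   # single response: 3.2
best_of_5 = query_cost(1000, 200, 5, 0.002, 0.006)   # 5 responses, prompt billed once: 8.0
```

Here sampling five responses costs 8.0 units versus 16.0 for five separate calls, since the 1000 prompt tokens are billed only once, which is why best-of-n on a small model can undercut a single call to a large one.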
Summary: This paper proposes a cost-effective LLM inference framework that leverages multiple small LLMs alongside a large LLM (GPT-4o). A router model is trained to (1) select which LLM to use from the candidate pool, (2) determine how many responses that LLM should generate, and (3) select the best response among them. The goal is to strike a balance between inference cost and response quality. Experimental results suggest that the proposed BEST-Route method effectively achieves this balance.

Claims And Evidence: The paper should avoid claiming that it achieves an optimal balance between cost and quality, as no theoretical evidence is provided to justify the optimality. A more appropriate claim would be that the method achieves a preferred or effective trade-off. If the authors wish to make such a claim, they should justify its superiority using a well-defined metric that quantifies the trade-off between cost and quality, such as a composite metric that integrates both factors.

Methods And Evaluation Criteria: The paper evaluates response quality solely using armoRM, which is also the training objective for the method. This introduces potential bias and may not fully capture response quality. The reviewer recommends incorporating additional evaluation metrics, such as BLEU, ROUGE, or human evaluation, to provide a more comprehensive assessment.

Theoretical Claims: No theoretical analysis is included.

Experimental Designs Or Analyses: It is unclear why the experiments do not include the previous "best-of-n sampling" baselines, where multiple responses are generated and the best is selected using a reward model. These baselines are highly relevant for comparison, as they also aim to balance cost and quality. The authors should either include such baselines or provide a clear justification for their omission.

Supplementary Material: No supplementary material is provided.
Relation To Broader Scientific Literature: The paper reveals that there is potential to actively select between LLMs with different costs. The trade-off between cost and performance can be vital.

Essential References Not Discussed: None.

Other Strengths And Weaknesses: The paper does not sufficiently describe the architecture of the proxy reward model $R_{proxy}(s)$. While the cost-efficient multi-head router is stated to use a BERT-style backbone, no analogous detail is given for the reward model. This information is crucial for reproducibility and for understanding the model's capacity.

Other Comments Or Suggestions: In Section 4.2, "They key intuition" should be "The key intuition".

Questions For Authors: The paper does not clarify the output range of the proxy reward model $R_{proxy}(s)$. Is the score normalized to lie within [0,1]? Knowing the score range is important for interpreting the reward model's outputs and understanding how it influences selection (e.g., the $L_{rank}$ in Equation (1)).

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. Due to space limits, we have paraphrased some questions while preserving the original intent.

**Q1: The claim of "optimal balance" between cost and quality is too strong without theoretical justification. A more appropriate phrasing would be "preferred" or "effective" trade-off, unless supported by a formal composite metric.**

A1: Thanks for the comment. This work aims to effectively uplift the performance-cost trade-offs achieved by routing techniques. We will revise the claim as "BEST-Route achieves an effective trade-off between cost and quality", as suggested.

**Q2: The exclusive use of armoRM (also used for training) may introduce bias. Additional metrics like BLEU, ROUGE, or human evaluation are recommended for a more comprehensive assessment.**

A2: Thanks for the comment. We now report BLEU and ROUGE scores, as shown in the tables below. Since human evaluation is expensive and unscalable, we leave it for future work. We observe that BEST-Route consistently achieves better trade-offs than all baselines (e.g., N-label routing has up to a 31.7% BLEU drop at 60% cost reduction, vs. only an 18.07% drop for BEST-Route).

Response quality drop (BLEU score) w.r.t. always using GPT-4o (%):

|Cost Reduction (%)|N-label|BEST-Route|
|-|-|-|
|10|6.57|**3.61**|
|20|13.13|**6.07**|
|40|25.80|**12.76**|
|60|31.70|**18.07**|

N-class achieves a 0.2% cost reduction with a 0.7% quality drop; Clustering achieves 0% cost reduction with 0% quality drop.

Response quality drop (ROUGE score) w.r.t. always using GPT-4o (%):

|Cost Reduction (%)|N-label|BEST-Route|
|-|-|-|
|10|6.10|**3.88**|
|20|11.65|**7.27**|
|40|22.26|**15.78**|
|60|27.62|**21.97**|

N-class achieves a 0.2% cost reduction with a 0.9% quality drop; Clustering achieves 0% cost reduction with 0% quality drop.

A detailed visualization is provided in Figures 2 and 3, via [link](https://github.com/BEST-Route2025/BEST-Route/blob/main/README.md).

**Q3: It is unclear why the experiments do not include the previous "best-of-n sampling" baselines, where multiple responses are generated and the best is selected using a reward model. These baselines are highly relevant for comparison, and should be included or clearly justified.**

A3: Thanks for the comment. We have implemented the best-of-n sampling baseline and report its performance in Figure 1, via [link](https://github.com/BEST-Route2025/BEST-Route/blob/main/README.md). Best-of-n offers fixed trade-offs for each (model, n) pair and often yields lower-quality responses (e.g., a 4.9% quality drop for Phi-3-mini and 1.1% for LLaMA-3.1-8B at n=5). In contrast, BEST-Route offers flexible trade-offs, achieving a 20% cost reduction with only a 0.21% quality drop, and a 40% cost reduction with a 0.47% drop, with the maximum sampling number n=5.

**Q4: The architecture of the proxy reward model R_proxy(s) is unclear. While the cost-efficient multi-head router is stated to use a BERT-style backbone, no analogous detail is given for the proxy reward model.**

A4: In Lines 283-286, we mentioned "the proxy reward model is fine-tuned from OpenAssistant RM, a DeBERTa-v3-large model (300M)". Specifically, the proxy reward model is a DeBERTa model [1], which improves BERT with an enhanced mask decoder and disentangled attention. In our evaluation, we observe that the proxy reward model is cost-efficient and incurs negligible overhead (see Figure 6 in our paper).

**Q5: In Section 4.2, "They key intuition" should be "The key intuition".**

A5: Thank you. We will fix this in the revision.

**Q6: The output range of R_proxy(s) is unclear.
Clarifying whether scores are normalized (e.g., within [0,1]) is important for interpreting outputs and their role in selection (e.g., the L_rank in Equation (1)).**

A6: Thanks for the comment. In our paper, we train a proxy reward model to "preserve the ranking of responses" and take the output logits as the proxy reward scores R_proxy(s), which range over (-∞, +∞). A higher proxy score indicates better response quality. In our evaluation, proxy reward scores range from -12.25 to 12.1875, as detailed in Figure 4, accessible via [link](https://github.com/BEST-Route2025/BEST-Route/blob/main/README.md). In Equation (1), we construct training pairs of the form (good response s, bad response s') and train the proxy reward model to preserve the ranking of responses by minimizing the negative log-likelihood loss L_rank, which takes on lower values as the proxy reward score difference, R_proxy(s) - R_proxy(s'), gets larger.

**Thank you for your time and consideration. We sincerely hope that you find our responses convincing and would consider increasing your rating.**

References:
[1] He, Pengcheng, et al. "DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION." ICLR 2021.
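The ranking loss described in A6 of the rebuttal above can be sketched as a standard pairwise negative log-likelihood; this is an illustrative sketch of that idea, and the exact form of L_rank in the paper's Equation (1) may differ in details:

```python
import math

def rank_loss(score_good, score_bad):
    """Pairwise ranking loss sketch: -log(sigmoid(R(s) - R(s'))).
    The loss shrinks as the proxy score of the good response s
    exceeds that of the bad response s'."""
    margin = score_good - score_bad
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Because the scores are raw logits, the loss is well defined for any real-valued gap, consistent with the unbounded (-∞, +∞) score range the response describes.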
Summary: This paper proposes BEST-Route, combining two different research areas: model selection (through routing) with adaptive allocation of test-time compute (e.g., best-of-n sampling). Their framework dynamically selects a model and the optimal number of responses to sample based on query difficulty and quality thresholds. Experimental results show up to 60% cost savings with less than 1% performance degradation compared to always using a large model. Claims And Evidence: - The main claim, “Experiments on real-world datasets demonstrate that our method reduces costs by up to 60% with less than 1% performance drop”, is validated empirically on a dataset compiled from existing sources (Table 3). - The claim “surpassing prior routing approaches and setting a new standard for efficient LLM service deployment” seems too strong. While the method shows clear improvements over prior routing approaches, it would benefit from direct comparisons with other efficiency techniques, such as speculative decoding, as mentioned in the related work. Methods And Evaluation Criteria: The methods and evaluation criteria seem reasonable. However, if I understood correctly, the router is trained on data that is very similar to the validation and test data (since all splits are sampled from the dataset that they collected). While this is acknowledged as a limitation, additional discussion is needed on how well BEST-Route generalizes to unseen datasets. Ideally, testing on another dataset would strengthen the results. This is the main weakness of the paper, in my opinion. Theoretical Claims: NA Experimental Designs Or Analyses: See the comments above. Supplementary Material: I skimmed through the appendix but did not carefully read all the parts. Relation To Broader Scientific Literature: The paper combines research on query routing with test-time compute and cites related work properly. Essential References Not Discussed: See below. 
Other Strengths And Weaknesses: Strengths: The motivation is well explained, and the topic is important, especially now with the trend of building larger and larger models. I particularly appreciate Section 3.1. Also, combining test time computing with routing makes a lot of sense and has not been explored before (to my knowledge). Other Comments Or Suggestions: - “The rising costs of LLM inference have”. I get your point but it would be good to provide some examples (e.g., self-consistency, reranking, etc.) - “3) Model cascades ( e.g. Chen et al. (2023)) where the query passes through the models sequentially, from the cheapest to the most costly, until a satisfactory response is obtained.”- Not always “until a satisfactory response is obtained”… most cases consider a fixed number of models in the cascade - “Prior query routing approaches generate only one response from the selected model and a single response from a small (inexpensive) model was often not good enough to beat a response from a large (expensive) model due to which they end up overusing the large model and missing out on potential cost savings.” and “, small models continue to come up short in terms of response quality when compared to the largest, most powerful models”. While I agree with this, there are cases when this is not the case. For instance, check Table 1 of Farinhas et al. (2025). Even though this is for model cascading and specific for machine translation, I think it’s worth mentioning in Section 2. - Typo in L048-049: “development innovative solutions” References: Farinhas et al., 2025. Translate Smart, not Hard: Cascaded Translation Systems with Quality-Aware Deferral **Update after the rebuttal**: I increased my score to 4. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
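The best-of-n-plus-proxy-reward idea that this review highlights can be sketched as follows; the `generate` sampler and `proxy_reward` scorer interfaces here are hypothetical, not the paper's actual API:

```python
from typing import Callable, List

def best_of_n(generate: Callable[[str], str],
              proxy_reward: Callable[[str], float],
              query: str, n: int) -> str:
    """Sample n candidate responses from a (cheap) model and return
    the one the proxy reward model scores highest."""
    candidates: List[str] = [generate(query) for _ in range(n)]
    return max(candidates, key=proxy_reward)
```

BEST-Route additionally chooses the model and the value of n per query; this sketch only shows the response-selection step.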
Rebuttal 1: Rebuttal: **Q1: The claim “surpassing prior routing approaches and setting a new standard for efficient LLM service deployment” seems too strong. While the method shows clear improvements over prior routing approaches, it would benefit from direct comparisons with other efficiency techniques, such as speculative decoding.** A1: Thanks for the comment. We will revise the claim to emphasize that our method significantly improves upon prior routing techniques, contributing toward more efficient LLM service deployment. It is worth noting that our approach is orthogonal and complementary to efficiency techniques like speculative decoding [1], which accelerates decoding for expensive models. In contrast, BEST-Route reduces cost by intelligently assigning “easy” queries to small models while maintaining high performance. A hybrid system could combine the advantages of both techniques and could first route queries (via BEST-Route) between cheap and expensive LLMs, and then apply speculative decoding if the expensive model is selected to yield further efficiency gains. **Q2: Additional discussion is needed on how well BEST-Route generalizes to unseen datasets. Ideally, testing on another dataset would strengthen the results. This is the main weakness of the paper, in my opinion.** A2: Thanks for this insightful comment. We evaluate the trained routers on the OOD dataset, MT-Bench [2], and observe that BEST-Route achieves strong performance under data distribution shifts (e.g., 60% cost reduction with only 1.59% quality drop). Due to the space limit, please refer to A2 for Reviewer ewCv for details. **Q3: “The rising costs of LLM inference have”. I get your point but it would be good to provide some examples (e.g., self-consistency, reranking, etc.)** A3: Thanks for the comment. 
We will revise the introduction by including examples of “the rising costs of LLM inference”, such as self-consistency [3], which samples multiple reasoning paths and selects the most consistent answer at increased cost, and reranking [4], which generates multiple candidates and uses a re-ranker to select the ones that could lead to better final results at increased inference costs. **Q4: “3) Model cascades ( e.g. Chen et al. (2023)) where the query passes through the models sequentially, from the cheapest to the most costly, until a satisfactory response is obtained.”- Not always “until a satisfactory response is obtained”… most cases consider a fixed number of models in the cascade** A4: Thank you for the comment. We will revise to clarify that most cascade approaches execute models sequentially until either a satisfactory response is obtained or a pre-defined maximum number of models in the cascade is reached [5]. **Q5: “... a single response from a small (inexpensive) model was often not good enough to beat a response from a large (expensive) model …” and “small models continue to come up short in terms of response quality when compared to the largest, most powerful models”. While I agree with this, there are cases when this is not the case. For instance, check Table 1 of Farinhas et al. (2025). Even though this is for model cascading and specific for machine translation, I think it’s worth mentioning in Section 2.** A5: Thank you for pointing this out. We agree, and we have independently verified this behaviour in our evaluation. While small models underperform large ones on average, specific cases exist where they perform comparably or even better. Farinhas et al. (2025) observed that though Tower-v2 7B is inferior to its large counterpart Tower-v2 70B on average, it can outperform the large model in 32% of cases. This observation motivates routing queries between LLMs to not only save costs but also improve performance.
Table 2 in our paper empirically supports this intuition by showing cases where routing achieves significant cost reduction (e.g., 20%) as well as performance improvements (e.g., 0.5%). We will clarify more on this observation in our revision. **Q6: Typo in L048-049: “development innovative solutions”** A6: Thank you. We will fix this in the revision. **Thank you for your time and consideration. We sincerely hope that you find our responses convincing and would consider increasing your rating.** References: [1] Leviathan, Yaniv, et al. "Fast inference from transformers via speculative decoding." ICML 2023. [2] Zheng, Lianmin, et al. "Judging llm-as-a-judge with mt-bench and chatbot arena." NeurIPS 2023. [3] Wang, Xuezhi, et al. "Self-consistency improves chain of thought reasoning in language models." arXiv 2022. [4] Chuang, Yung-Sung, et al. "Expand, Rerank, and Retrieve: Query Reranking for Open-Domain Question Answering." ACL Findings 2023. [5] Chen, Lingjiao, et al. "Frugalgpt: How to use large language models while reducing cost and improving performance." arXiv 2023. [6] Farinhas, António, et al. "Translate Smart, not Hard: Cascaded Translation Systems with Quality-Aware Deferral." arXiv 2025.
Summary: The paper introduces BEST-Route, an adaptive routing framework designed to optimize inference efficiency and response quality in large language models (LLMs). The framework dynamically selects an appropriate LLM and determines the optimal number of responses to sample (best-of-n sampling) based on the estimated difficulty of individual queries. It builds on the observation that combining multiple responses from smaller, more cost-effective models, along with a lightweight proxy reward model to select the best response, significantly enhances quality while often being more economical than consistently using larger, costly models. Main contributions of the paper are: * Introduces a dynamic multi-head router that adaptively assesses query difficulty to efficiently allocate computational resources, optimizing the trade-off between inference cost and response accuracy. * Employs best-of-n sampling to produce multiple candidate responses from smaller models, selecting the highest-quality response through an efficiently trained proxy reward model. * Empirical evaluations demonstrate that BEST-Route significantly improves inference efficiency, achieving up to 60% cost reduction with less than a 1% decrease in response quality compared to consistently utilizing a state-of-the-art reference model (GPT-4o). Claims And Evidence: Supported claims: * The paper demonstrates the effectiveness of its multi-head router architecture, supported by comprehensive experiments across a diverse set of real-world tasks. * The paper demonstrates the performance gains from utilizing best-of-n sampling on smaller models through detailed quantitative results. 
* Detailed analyses against baseline methods (N-label, N-class, clustering, and cascade methods) support the claim that BEST-Route achieves significant inference cost reductions with minimal impact on response quality. Problematic claims: * The authors present clear results across diverse tasks, yet the evaluation is primarily conducted on datasets curated specifically for this paper. Additional evidence showing performance consistency on independently curated datasets or real-world deployments could further strengthen the generalizability of their approach. * The submission briefly acknowledges potential sensitivity to data drift but does not provide empirical evidence or in-depth analysis regarding the robustness of BEST-Route under changing data distributions. Addressing this gap through empirical evaluation or detailed discussion would be beneficial for supporting practical deployment scenarios. Methods And Evaluation Criteria: * The proposed BEST-Route framework introduces a novel multi-head routing architecture, integrating best-of-n sampling for smaller models * The proposed benchmark dataset covers a range of important application scenarios (e.g., question answering, coding, and safety evaluation) * However, although the dataset provides a valuable evaluation framework across multiple tasks, it does not demonstrate the generalizability of the approach, especially under data drift in real-world cases. Theoretical Claims: NA Experimental Designs Or Analyses: Please see the methods section above Supplementary Material: No, I did not need to Relation To Broader Scientific Literature: The paper contributes to the active area of cost-efficient LLM inference by combining adaptive routing techniques with test-time optimal compute strategies. Essential References Not Discussed: NA Other Strengths And Weaknesses: The paper is very well written and easy to follow Other Comments Or Suggestions: Update: The provided rebuttal strengthens the paper, so I increased my score to 4.
Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive comments. Several comments pertain to the generalizability of BEST-Route, and we address them jointly below for clarity. **Q1: The authors present clear results across diverse tasks, yet the evaluation is primarily conducted on datasets curated specifically for this paper. Additional evidence showing performance consistency on independently curated datasets or real-world deployments could further strengthen the generalizability of their approach.** A1: Thanks for this comment. We would like to clarify that our dataset is a random sample of multiple public benchmarks such as CodeUltraFeedback [1] and BeaverTails [2], which are independently curated by other parties. That said, to further evaluate generalizability, we test BEST-Route on the out-of-distribution (OOD) benchmark MT-Bench [3], a widely used dataset covering writing, reasoning, fact extraction, and roleplay. Please see A2 below for details. **Q2: The submission briefly acknowledges potential sensitivity to data drift but does not provide empirical evidence or in-depth analysis regarding the robustness of BEST-Route under changing data distributions. Addressing this gap through empirical evaluation or detailed discussion would be beneficial for supporting practical deployment scenarios. Q3: However, although the dataset provides a valuable evaluation framework across multiple tasks, it doesn't show the generalizability of the approach especially with data drift in real-world cases.** A2: Thanks for highlighting this. We evaluate BEST-Route on the OOD dataset MT-Bench [3]. As shown in the table below, BEST-Route consistently outperforms all baselines. For example, it achieves 60% cost reduction with only a 1.59% performance drop – up to 4.3% better than the strongest baseline. 
In contrast, N-class and clustering-based routing often default to using GPT-4o, yielding minimal cost savings, while N-label routing suffers notable quality drops, especially at high cost reduction rates. These results demonstrate BEST-Route's robustness under distribution shifts.

Response quality drop (ArmoRM score) w.r.t. always using GPT-4o (%):

| Cost Reduction (%) | N-label | BEST-Route |
|:---:|:---:|:---:|
| 10 | 0.88 | **0.25** |
| 20 | 2.29 | **0.43** |
| 40 | 4.41 | **1.56** |
| 60 | 5.89 | **1.59** |

Both N-class and clustering achieve 0% cost reduction with 0% quality drop. A detailed visualization of quality-vs.-cost trade-offs achieved by different methods on MT-Bench is provided in Figure 5, accessible via the anonymous [link](https://github.com/BEST-Route2025/BEST-Route/blob/main/README.md#figure-5-routing-performance-results-on-the-ood-dataset-mt-bench). **Thank you for your time and consideration. We sincerely hope that you find our responses convincing and would consider increasing your rating.** References: [1] Weyssow, Martin, et al. "Codeultrafeedback: An llm-as-a-judge dataset for aligning large language models to coding preferences." arXiv preprint arXiv:2403.09032 (2024). [2] Ji, Jiaming, et al. "Beavertails: Towards improved safety alignment of llm via a human-preference dataset." Advances in Neural Information Processing Systems 36 (2023): 24678-24704. [3] Zheng, Lianmin, et al. "Judging llm-as-a-judge with mt-bench and chatbot arena." Advances in Neural Information Processing Systems 36 (2023): 46595-46623.
Action-Dependent Optimality-Preserving Reward Shaping
Accept (poster)
Summary: The paper proposes a new reward-shaping framework that is action-dependent while preserving the optimal policies. The proposed method can convert any intrinsic rewards that are not optimality preserving into rewards that preserve optimal policies. Through experiments in Montezuma's Revenge, the authors demonstrate the effectiveness of the proposed method. Claims And Evidence: The proposed method is supported by its theory but not sufficiently by its experiments, since only one environment is tested. Methods And Evaluation Criteria: The proposed method makes sense, in that the classic PBRS condition is indeed sufficient but not necessary for preserving optimality after shaping. However, the evaluation is a bit less convincing, as the method is only evaluated in one single environment. Theoretical Claims: Under the assumptions, the overall proof seems legitimate, though I haven't checked every step in detail. But those assumptions look way too strong to be realistic. - Assumption 5.1 assumes the neural-network estimates of the values are the true values. As the authors claim, if trained long enough they will converge to the true values, but this already undermines the initial motivation of using reward shaping - to train agents sample-efficiently. - For Assumption 5.2, if we already assume sufficient exploration of all possible actions in every state visitable by S_n, why do we even need intrinsic motivations? In other words, the intrinsic motivation is designed to motivate the agent to explore possible actions/states. Having this assumption renders intrinsic motivations useless. - Following up on Assumption 5.2, can you formally define the set S_n? If we assume sufficient exploration of the action space for every policy, is it the case that S_n will be the same for everyone?
Experimental Designs Or Analyses: The experiment results on Montezuma's Revenge, while being good empirical evidence for the proposed method, might not be sufficient to convince the broader community. There are many other hard-to-explore environments. For example, a more basic giant grid-world maze could suffice. Extending such a grid world to a continuous maze would also be interesting to see. Supplementary Material: I checked the experiment details and the proof in B.2. Relation To Broader Scientific Literature: The paper extends the PBRS framework by adding action-dependent terms. Essential References Not Discussed: The following intrinsic-motivation baseline should be included for a fair comparison: - [Automatic Intrinsic Reward Shaping for Exploration in Deep Reinforcement Learning](https://proceedings.mlr.press/v202/yuan23c/yuan23c.pdf) in ICML 23, along with other baselines included in that paper. Other Strengths And Weaknesses: - The writing could be improved in the sense that (1) the introduction and background on reward shaping are overly extended, and (2) a more intuitive explanation and examples could be used to illustrate the correctness of Theorem 5.3 Other Comments Or Suggestions: N/A Questions For Authors: - Is it possible to obtain the same results without Assumptions 5.1 and 5.2? If so, could you characterize the error your approach could induce under imperfect value estimates and imperfect state/action visitations? - In your proposed shaping functions, what is the input to $Q^*_{I,t+1}$? Is it the state-action pair at the next time step? If so, how should we have access to it when we are still calculating the reward for $s_t,a_t$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time and effort in giving detailed feedback; we really appreciate it. We’re excited to evaluate our method on additional environments in future work, and believe that these initial results stand as a significant contribution, particularly when paired with our theoretical results. See our response to Reviewer 4wwG for a more detailed overview of our reasons for believing that this work stands on its own as a novel and significant contribution. You make multiple insightful comments about our theory, and we’ve reworked and rephrased Section 5.2 substantially for clarity and precision in light of them. In particular, we’ve improved the wording of both of our assumptions, which as previously phrased were both more restrictive than necessary, and worded somewhat confusingly. A reworked version of Section 5.2 (our main proof) can be found here, for you to peruse. https://drive.google.com/file/d/1XTZ3VSuIDFaqHM_NxsjBpQEhkR6DFOyM/view?usp=sharing We will now address your comments in more detail, in the order that they appear. For Assumption 5.1: we want to disambiguate between the two goals of ADOPS/of optimality-preserving reward shaping in general: firstly, speeding training/increasing sample efficiency, and secondly preventing reward hacking by preserving the optimal policy. The latter of these goals is what we prove in Theorem 5.3, and what we require our assumptions for. The former goal then, of speeding training, can still be met (and, empirically is met, better than RND itself, to a statistically significant degree) before the point at which Assumption 5.1 is true. Indeed, RND itself has no such theoretical guarantees of conserving the optimal policy, but has sped up training across a wide variety of domains, at the risk of making the agent susceptible to reward hacking. 
Our theorem is designed to prevent this reward hacking, while leaving the underlying training-speeding qualities of the base IM as unchanged as possible ($F^2=0$ very frequently on any given timestep). We have removed Assumption 5.1 and instead adopted the convergence property of the underlying algorithm. Since ADOPS ensures that the shaped reward produces a policy consistent with the extrinsic reward, in the worst case the learning algorithm shares the convergence properties it would have when trained on the extrinsic reward alone. We have changed our Assumption 5.2 to be a much more specific and limited assumption. Namely, it defines a notion of “unstable” policies, which are those for which an extremely similar, locally better policy exists, and so which any competent learning algorithm will quickly leave in favor of a better policy. It’s detailed more precisely in the attached document above, and we believe it to be much improved in both rigor and scope over the original assumption. We’ve added the baseline you reference to our related work section, and are excited to test against this (and other IM baselines) in future work. We’ve also rewritten Theorem 5.3 in a way that clarifies many of the derivations, gives an intuitive overview at the beginning, and signposts throughout. As for your questions: We have substantially reworked our assumptions, and given some sense of what would happen were they violated. Your question about the variable dependence of $\gamma Q_{I,t+1}$ is valid, and brings to light the needless complexity of our previous notation. $\gamma Q_{I,t+1}$ simply represents all components of $Q_{I}$ that come from a timestep later than $t$. This is equivalent to, and can be more simply written as, $\gamma V_{I}(s')$. We in fact rely on this equivalence in our paper, in the step between Equations (26) and (27). For clarity, we have replaced all instances of $\gamma Q_{I,t+1}$ with $\gamma V_{I}(s')$. This depends, as you might imagine, on $s'$.
Thank you again for your in-depth comments and suggestions; we believe that incorporating them has substantially improved our paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications and the efforts in uploading the revised section. Some of my concerns have been addressed, and I will explain what is left in the sequel. The theory in its current form looks reasonable to me. However, removing Assumption 5.1 only covers up, rather than solves, the intrinsic limitation of the proposed method, namely: how can we have access to a reasonable value/Q-value critique before we even train the agent? From the experiment section, the authors use some preexisting critiques. I wonder how those critiques perform. If they are already good value estimations by themselves, why should we train another agent? If they are not good (far from the true values), violating the formulation in the theory, how would you explain the good performance? **Update after rebuttal comment reply**: Thanks for the clarifications, I have increased the score to 3. --- Reply to Comment 1.1.1: Comment: We’re glad to hear some of your concerns were addressed. Thank you for the opportunity to clarify further, as we believe there may be an important misunderstanding here, due to some bad wording in the initial draft of our paper. We believe that when you mention our use of “preexisting critiques,” you’re referring to Section 6.4, where we say we’re using the “preexisting network critics’ estimations of the relevant quantities.” This was poor wording on our part, and we’ve amended the paper to be clearer. We address this clarification further in our response to Reviewer PXoU, but in short, we just meant that these critics are already used in the base PPO algorithm, and so our method doesn’t require any additional network architecture.
To clarify, $\textbf{we do not use any pre-trained networks or transfer learning of any kind.}$ All of the networks used in our method are randomly initialized, and trained from scratch, just as they would be in a standard PPO algorithm. If you meant something else by the term preexisting critiques, please clarify further so we can address your concerns. We would like to emphasize that ADOPS as implemented in our experimental section and in Section 5.2 require only that the Q and V estimations for the $\textit{current}$ policy be accurate, rather than the optimal Q and V values (this is the main advantage that Equation (18) has over Equation (15)). Right as training begins, the critic networks immediately begin receiving good, on-distribution data for the Q/V values of taking the current (randomly initialized) policy, and thus should quickly approximate those before the agent begins performing well at all. In fact, in PPO, the policy section of the network never “sees” any of the rewards themselves, but only the critic networks’ estimations of the Q/V functions for the current policy, which are themselves trained on the base reward. So effective training of the policy to maximize the rewards can’t really even begin unless the critic networks are at least somewhat reliable estimations of the V functions under the current policy. This is true in baseline RND, or even in the case where you’re training without any intrinsic reward, so long as the method you’re using uses a critic network. In other words, the accurate estimation of V of the current policy which we require is much different, and much less useful, than an accurate estimation of V of the optimal policy: this is why we still need to train, as training improves the policy network’s performance, and then the critics update to estimate V of the new, improved policy. This happens iteratively throughout training, until the V estimations eventually approach V of an optimal policy. 
Please let us know if you have any further questions or clarifications. Thank you again for your time.
Summary: This paper presents Action-Dependent Optimality Preserving Shaping (ADOPS), a technique that transforms intrinsic rewards into a format that maintains optimality and enhances the efficacy of intrinsic motivation in the challenging, reward-sparse environment of Montezuma’s Revenge. Additionally, it establishes that ADOPS supports reward shaping functions that are not aligned with potential-based frameworks: whereas PBRS-based methods necessitate that the cumulative discounted intrinsic return remain independent of actions, ADOPS allows these returns to be contingent upon the agent’s actions while still preserving the set of optimal policies. Claims And Evidence: The evidence provided in this paper only vaguely supports the claims made, raising concerns about the robustness and reliability of the findings. More discussion and supporting evidence are needed. Methods And Evaluation Criteria: Yes. However, only one environment is tested. Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The key contributions of the paper focus on encouraging agent exploration in sparse-reward environments. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: The literature review is well-executed, and the discussion of prior work is thorough. The theoretical foundations appear to be robust, and the experimental results are also commendable. Weakness: The assumptions are quite strong: they require that the estimator functions receive an unbounded amount of unbiased data from the environment under the current policy. In practice, data is always finite and can be biased due to exploration strategies or environmental nonstationarities. Only one test environment is not enough to demonstrate effectiveness. Other Comments Or Suggestions: Minor: Should Equation (3) be F_t(s) = γΦ_{t+1}(s') − Φ_t(s) or F'_t(s) = γΦ_{t+1}(s') − Φ_t(s)? However, you never define F' beforehand.
π*_1 and π*_2 should be defined in (13). What is $\bar{a}$ in (14)? Questions For Authors: In the claim you presented, could you clarify the specific assumptions you've set aside, such as the requirement for the environment to be episodic and for the intrinsic motivation (IM) to be "future-agnostic"? It appears that these points were not revisited later in the paper. The primary focus of the document seems to be on resolving issues arising from the discrepancy between the action-independent value function and the action-dependent Q-function. Additionally, you mentioned "using the preexisting network critics' estimations of the relevant quantities"—does this critic remain static, or does it evolve over time as it learns within the ADOPS framework? Code Of Conduct: Affirmed. Overall Recommendation: 3
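For context on the shaping term discussed in the Minor comment above, classic potential-based reward shaping (Ng et al., 1999) adds F = γΦ(s′) − Φ(s) to the extrinsic reward. A minimal illustrative sketch follows; this is the standard PBRS term, not the paper's action-dependent ADOPS method:

```python
def pbrs_reward(r: float, phi_s: float, phi_s_next: float,
                gamma: float) -> float:
    """Shaped reward r + F, with the potential-based shaping term
    F = gamma * Phi(s') - Phi(s), which preserves optimal policies."""
    return r + gamma * phi_s_next - phi_s
```

Because F telescopes along a trajectory, the shaped discounted return differs from the raw return only by terms that do not depend on the actions taken, which is why PBRS preserves the optimal policy set; ADOPS generalizes this by allowing action-dependent shaping.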
Rebuttal 1: Rebuttal: Thank you for your time, energy, and constructive feedback. We agree with your assessments about the quality of our empirical evaluations. Our primary contribution is the initial proposal of ADOPS, as well as our theoretical proofs of both optimality and increased generality over prior methods. Our future work will focus on further empirical evaluation of ADOPS in more environments as well as against other IM methods. See our response to Reviewer 4wwG for a more detailed overview of our reasons for believing that this work stands on its own as a novel and significant contribution. Our assumptions as originally stated were somewhat stronger than they had to be in order for our proof to function, and our initial discussion around them lacked clarity. In light of your helpful feedback, as well as that of other reviewers, we’ve revised them substantially. We dropped assumption 5.1 and simply adopted the convergence property of the base learning algorithm. Additionally, our previous Assumption 5.2 (now 5.1) requires essentially that the learning algorithm executes locally greedy actions in the policy space, and we describe when it’s introduced why this is a reasonable assumption. We’ve also substantially reworded sections of the proof of Theorem 5.3, for clarity. We’ve attached an anonymous revised text of Section 5.2, so that you can examine these changes holistically. https://drive.google.com/file/d/1XTZ3VSuIDFaqHM_NxsjBpQEhkR6DFOyM/view?usp=sharing To generally address your comment about requiring enough data for the critic estimators to converge to their theoretical values: while we’ve altered our assumption here, this critique still touches on an important point we’d like to address. We’re building on a prior line of work in optimality-preserving reward shaping, including (Ng, 1999), (Devlin 2012), and the PIES, PBIM, and GRM frameworks among others, that all are based on preserving the optimal policy set of the underlying environment. 
The implicit assumption behind this framework is that the agent being trained on an optimality-preserving reward function will converge to an optimal, or near-optimal policy, and avoid the alternative, which is to hack the shaping reward. After all, if the agent being trained does not converge to an optimal policy, or a reasonable-enough approximation of it, then practically, the importance of a shaping function being optimality-preserving is somewhat limited. So some assumption like “the agent will eventually converge to an optimal or near-optimal policy” is implicit in all prior work in this area, even though it’s rarely explicitly stated as such. Our assumption of critic convergence, both in its current and previous form, is actually somewhat limited compared with this, as convergence of the agent’s policy network (the implicit assumption underpinning prior work in this area) is generally dependent on (and therefore assumes) convergence of its critic network(s). You’re of course right to note that, in practice, finite and biased data is always what we’re working with. To more accurately characterize the convergence property, at the moment we rely on the convergence property of the base learning algorithm as the worst case scenario since ADOPS will produce an optimality-preserved policy. Our experimental results are promising, and show our method’s ability to effectively improve training in a complex, sparse-reward environment. We’re enthusiastic about expanding these results to additional domains and forms of IM in future work. Additional responses to your comments/questions, in order: Thank you for your correction about Equation (3); we’ve edited it to read $F_t$ rather than $F’_t$. We’ve added an explanation/definition of $\pi_1$ and $\pi_2$ below Equations (13) and (14), on your recommendation. These are simply two (potentially distinct) extrinsically optimal policies. $\bar{a}$ is an extrinsically suboptimal action for a given state $s$. 
These assumptions you mention (being episodic and/or future-agnostic) are not necessary for our method, and can be violated freely without consequence. This is an important strength of ADOPS whose significance, as you rightly note, we stopped stressing about halfway through the paper. To fix this, we've added an additional note of our ability to relax these assumptions in the conclusion of the paper. Thank you for pointing this out. Our use of the phrase "preexisting network critics" is admittedly confusing, and we've replaced it in the paper for clarity. We meant "preexisting" just in the sense that these networks were already part of the architecture and algorithm being used, and so using them in our method introduced no additional computational overhead. The critic networks update their parameters throughout training, just like in a standard PPO algorithm. Thank you again for your time and helpful feedback.
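For reference, the potential-based shaping construction underlying the line of work cited in the rebuttal above (Ng, 1999) can be checked numerically: the shaped return telescopes, so it shifts every policy's return by a quantity that does not depend on the actions taken. The toy potential below is an arbitrary assumption for the sketch, not anything from the paper.

```python
# Potential-based reward shaping (Ng et al., 1999): F(s, s') = gamma * phi(s') - phi(s).
# Over a trajectory s_0, ..., s_n the discounted shaped return telescopes to
# gamma^n * phi(s_n) - phi(s_0), which is why the optimal policy set is preserved.

gamma = 0.9
phi = {s: float(s * s) for s in range(6)}  # arbitrary toy potential over states 0..5

def shaping_return(traj):
    """Discounted sum of F(s_t, s_{t+1}) along a trajectory of states."""
    total = 0.0
    for t in range(len(traj) - 1):
        total += (gamma ** t) * (gamma * phi[traj[t + 1]] - phi[traj[t]])
    return total

traj = [0, 3, 1, 5, 2]
n = len(traj) - 1
expected = gamma ** n * phi[traj[-1]] - phi[traj[0]]  # telescoping identity
assert abs(shaping_return(traj) - expected) < 1e-9
```

The assertion verifies the telescoping identity for one concrete trajectory; any trajectory and any bounded potential would satisfy it.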
Summary: The paper focuses on improving performance in complex, exploration-heavy environments with long-duration episodes. In particular, the paper focuses on reward shaping methods that shape rewards while maintaining optimality. The paper proposes a new reward shaping approach called Action-Dependent Optimality Preserving Shaping (ADOPS). The paper claims that (a) ADOPS allows for reward shaping that previous potential-based reward shaping methods do not. (b) ADOPS allows for intrinsic cumulative returns to be dependent on agents' actions while still preserving optimality. (c) ADOPS empirically improves performance over baseline intrinsic motivation in complex, extremely sparse environments where preexisting methods for preserving optimality fail. ## update after rebuttal I am happy with the rebuttal. It addressed my major concerns. I think the paper has sufficient evidence for the claimed contributions. Claims And Evidence: For claim (a), Theorem B.1 shows that there are reward shaping functions that previous methods such as GRM cannot use. Moreover, Theorem B.2 shows that the proposed ADOPS conserves the set of optimal policies. For claim (b), ADOPS is designed to have action-dependent reward shaping and the theoretical proof shows that ADOPS still preserves optimality. For claim (c), the paper compares ADOPS and baselines in the Atari game Montezuma's Revenge. In this case, using a single benchmark is in my opinion sufficient and a good choice. The results are thoroughly analyzed and explained. Nevertheless, saying that the approach improves performance in more than one experiment (the word "environments", plural, is used) is a slight over-claim w.r.t. the empirical evidence and the description should be modified. Methods And Evaluation Criteria: The paper compares ADOPS and baselines in the Atari game Montezuma's Revenge. In this case, this makes sense.
The paper contains theoretical proofs and the chosen benchmark and the resulting analysis provide detailed knowledge of the methods. Theoretical Claims: I did not notice problems with the proofs. However, I did not check the proofs in detail. Experimental Designs Or Analyses: The experimental design makes sense. The experimental analysis is of high quality motivating the choice of hyperparameters and other design choices well. Supplementary Material: The supplementary material includes both experimental details and details of theoretical proofs. The details are well written. I checked the supplementary material but did not check proofs step by step. Relation To Broader Scientific Literature: The paper provides a novel contribution in the specific domain of reward shaping. The proposed approach shows how to do reward shaping in a more complex action dependent way while maintaining optimality w.r.t. the original reward function. Essential References Not Discussed: To me the references look fine. The paper has a well defined scope. Other Strengths And Weaknesses: The paper is well written. Other Comments Or Suggestions: In "While it would be ideal, it is usually not feasible to implement Equation (41), as it requires an accurate estimate of the optimal value function.", should (41) be (15)? In multiple locations, the paper uses phrases of the form "We will begin". These can be rephrased to "We begin". The word "will" is not necessary. Figure 1 shows result plots w.r.t. training steps. What does training steps mean here? How many time steps is one training step? Questions For Authors: I did not fully understand the text that says why Equation (13) leads to "the first of these conditions says that every action that would be optimal without IM must remain optimal after the addition of the IM". Can you please explain this in more detail? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time and effort to review our work. We appreciate your positive feedback. We fixed the reference to Equation (41) to Equation (15). Thank you for noticing this error. We removed unnecessary use of future tense as you suggested. Each "training step" is a rollout and subsequent set of gradient updates that occurs every 128 time steps across each of 32 workers. We use the same architecture and training procedure as in the original RND paper (Burda 2019). We've updated our appendix to define "training step" more clearly. We'll explain a bit more what we meant when we said Equation 13 entails that "every action that would be optimal without IM must remain optimal after the addition of the IM." There are multiple extrinsically optimal policies, two of which we can call $\pi_1$ and $\pi_2$. These policies, being extrinsically optimal, have equal $\textit{extrinsic}$ value functions $V_E^{\pi_1} = V_E^{\pi_2}$, but that doesn't necessarily mean $\textit{a priori}$ that they have equal intrinsic value functions. If they happen to not have equal intrinsic value functions, then one will be greater than the other, and thus the extrinsically optimal policy with the lower intrinsic value function will no longer be optimal (because there exists another policy that would obtain greater expected value). In order to ensure that this doesn't happen, and that all extrinsically optimal policies are also optimal intrinsically, we need to assert that their intrinsic value functions (and thus their combined intrinsic+extrinsic value functions) are equal to each other. Admittedly, we had worded this somewhat confusingly, as we reference "actions," but Equation (13) deals explicitly only with policies. We've amended our explanation to say "every policy that would be optimal without IM must remain optimal after the addition of the IM." Thank you for bringing this to our attention. Thank you again for your time, effort, and kind words.
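The contradiction argument in the rebuttal above can be summarized as a short derivation (a sketch in the rebuttal's own notation; nothing here is taken from the paper itself):

```latex
\text{Let } \pi_1, \pi_2 \text{ be extrinsically optimal, so } V_E^{\pi_1}(s) = V_E^{\pi_2}(s) \text{ for all } s.
\text{Suppose, toward contradiction, that } V_I^{\pi_1}(s) > V_I^{\pi_2}(s). \text{ Then}
\[
  V_E^{\pi_1}(s) + V_I^{\pi_1}(s) \;>\; V_E^{\pi_2}(s) + V_I^{\pi_2}(s),
\]
\text{so } \pi_2 \text{ is suboptimal under the combined reward, and the optimal policy set is not preserved.}
\text{Optimality preservation therefore requires}
\[
  V_I^{\pi_1}(s) = V_I^{\pi_2}(s) \quad \text{for all extrinsically optimal } \pi_1, \pi_2.
\]
```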
Summary: This work introduces Action-Dependent Optimality Preserving Shaping (ADOPS), a general framework for modifying intrinsic rewards to a form that preserves the set of optimal policies. The proposed framework is general enough to capture not only potential-based reward shaping (PBRS) functions, but also reward shaping functions that cannot be written in potential-based form. Indeed, contrary to PBRS, ADOPS allows for reward shaping functions with the property that the intrinsic cumulative returns depend on the agent's actions while still preserving the set of optimal policies. The authors evaluate their framework on Montezuma's Revenge, a complex Atari Learning Environment game characterized by very sparse rewards, and show that ADOPS outperforms all other baselines, some of which struggle to make any progress at all. ## update after rebuttal I have read the rebuttal carefully and I will keep my score unchanged. The authors have mostly addressed my questions and comments. Something to point out is that the revised proof is based on the concept of a stable policy, and I am not so clear how common such proof strategies are in the RL literature. Can for instance the authors make any connections (if applicable) to the prior literature? Furthermore, I still feel the proof is not so easy to follow, even though the revised proof is better written. Claims And Evidence: Overall, I feel that the paper does not make unsubstantiated claims, in the sense that it provides both theoretical derivations as well as empirical evidence for the various claims it makes. That said, I feel that the empirical evidence is limited, as I explain in the next sections. I also have some concerns regarding the theory, which I discuss below. Methods And Evaluation Criteria: The fact that the authors only test their approach against a single environment is a bit problematic.
Montezuma's Revenge is indeed a challenging environment with very sparse rewards, but results with more environments would have made the various claims more convincing. - As an example, Random Network Distillation was tested on multiple environments such as Gravitar, Montezuma's Revenge, Pitfall, PrivateEye, Solaris, and Venture. The recent works by Forbes (cited in the paper) also test on several environments, namely, MiniGrid DoorKey, Cliff Walking, and Longer Cliff Walking. Not all environments are equally interesting, and obviously the authors would want to choose the challenging ones with sparser rewards. My point is simply that one environment does not provide significant evidence. As far as the baselines are concerned, I like the fact that the authors have decided to experiment with three of the most recent potential-based approaches: PIES, PBIM, and GRM. That said, it is also important to understand how the proposed framework compares against more traditional approaches based on curiosity and intrinsic rewards (even if these do not have theoretical guarantees). Random Network Distillation is an excellent candidate, but rather old. There are more recent works like "Never Give Up: Learning Directed Exploration Strategies" by Badia et al. from ICLR 2020. It would have been interesting to see how the proposed framework would compare against the state of the art in intrinsic (but not PBRS) rewards. Theoretical Claims: Overall, I feel that the authors have done a good job with the derivations. Some concerns: - In (38), I think $F'_t$ should read $F'_j$. - In (39), the notation $U_t^{I_{old}}$ looks strange. Why "old"? - The proof of Theorem 5.3 may be correct, but it is not easy to read. One confusing thing concerns the policy. The authors claim that $\pi$ is the current policy followed during training. I assume this policy refers to the ADOPS reward, not the original reward.
So, essentially the current policy outputs the action it thinks is best in terms of the shaped ADOPS reward. The notation with the Q- and V-functions is also quite confusing, because there is an implicit dependence on the timestep and the policy. - Another issue with the proof of Theorem 5.3 concerns Assumption 5.2, which guarantees adequate exploration. I understand the authors need that assumption in the proof of the theorem. But this is still very high-level, and not completely rigorous. It is more of the big idea. I am not sure whether it can count as a formal argument though, but I could be wrong. - In (35), (36) and even other equations, the authors do not sum over all possible states $s'$. The Bellman operator should contain the summation over all possible next states though, weighted by the transition probabilities. - I was not able to figure out what (29)-(31) show. In particular, how do we go from (29) to (30)? Experimental Designs Or Analyses: I did not have any particular concern with the design. The authors changed some things from the prior literature, like not clipping external rewards, but everything is explained sufficiently. The experimental analysis is quite deep, as the authors dive deeper into all methods. Supplementary Material: I reviewed the entire supplemental material. Relation To Broader Scientific Literature: The paper pushes the envelope in the reward-shaping literature by proposing the novel Action-Dependent Optimality Preserving Shaping (ADOPS). This framework can unify PBRS approaches; in addition to that, it can incorporate shaping rewards that fall outside the PBRS paradigm. I find the framework quite powerful and interesting. Essential References Not Discussed: My main concern regards the literature on Intrinsic Motivation.
The authors cite some important approaches, namely, count-based exploration (Bellemare et al., 2016), Intrinsic Curiosity Module (ICM) (Pathak et al., 2017), and Random Network Distillation (RND) (Burda et al., 2018a; 2019). All these papers are important but not recent. I think the authors could have included some papers from the more recent literature. This could also be helpful in relation to the baselines in the experimental section. The authors use RND only from the IM literature, which is rather old. It would be great if they could show that their approach outperforms more recent approaches from the IM literature as well. This could broaden the scope of the work - do the authors want to show that ADOPS just outperforms PBRS approaches, or that it can in fact be competitive w.r.t. recent IM approaches (see also my earlier comment)? Other Strengths And Weaknesses: Strengths - The framework is a nice contribution that goes beyond the current PBRS literature. Action-dependent but policy preserving reward shaping is a novel and promising idea. - The theoretical derivations are generally done with care. - The results on Montezuma's Revenge are good, and demonstrate superiority w.r.t. recent PBRS approaches and even RND. - The experimental analysis is interesting. Weaknesses - The proof of Theorem 5.3 is hard to read, with confusing notation and some non-rigorous statements. - Using a single environment is not as convincing as experimenting with several environments. - RND is not a recent approach from the IM literature. It would be interesting to know how this framework compares against more recent state-of-the-art IM frameworks. Other Comments Or Suggestions: - It would be nice if the authors could improve the exposition of the proof of Theorem 5.3. It might be helpful to structure the proof in a case-based format, e.g., Case 1: ...., Case 2: ...., etc.
Currently, there is a lot of text, but I feel that more rigorous symbols and derivations might improve the exposition. - It would be nice if the authors could experiment with more environments and/or recent state-of-the-art baselines from the IM literature. Questions For Authors: Please address the various concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough feedback, and for the time and energy it took to read our paper in such an in-depth manner. We appreciate your insightful comments, and our resulting edits to the paper have improved it substantially. We acknowledge that further work on a wider suite of environments than Montezuma’s Revenge is merited, and are excited to explore this territory in future work: we are currently working to extend our empirical evaluations to other Atari environments for a follow-up work. We limited ourselves to one environment in this work due to time and compute constraints, and chose Montezuma’s Revenge due to its status as a well-tested benchmark known for being extremely sparse and difficult to learn. We’d also like to emphasize, as you’ve pointed out, that ours is a plug-and-play method with theoretical guarantees, and this represents the most complex environment in which such a method has been tested. Additionally, as you note, it would be interesting to apply our methods to shaping rewards other than RND, particularly more recent, SOTA methods such as Never Give Up. We’re excited to do so in future work; we chose RND as a base IM method for its relative simplicity and ubiquity, as a good baseline for demonstrating the initial efficacy of our method. We limited ourselves to one IM method due, again, to ever-present time and compute constraints (particularly here, wherein our experimental setup requires testing each IM method with each plug-and-play method, and so an additional one would almost double our required computational budget). We consider our experimental results in Montezuma’s Revenge to be strong contributions for an initial work, particularly when paired with our theoretical contributions. We will next address your comments in the “Theoretical Claims” section in the order that they appear. We’ve corrected Equation 38 as appropriate. Thank you for catching this. 
We agree that $U_t^{I_{old}}$ is a somewhat odd notation, and furthermore unnecessary to condense what it is meant to represent at all. We've eliminated it entirely, and replaced it with the full text of what it notates, namely $\sum_{j=t}^{N-1} \gamma^{j-t}F'_j$ (the return of the "old" intrinsic reward, unmodified by GRM). We have significantly updated the text of our proof of Theorem 5.3 for clarity and rigor. We've also added a preamble to the full text of it explaining the outlines of the proof. We further explain our modifications to the text of the proof, including more formally-defined assumptions, in our response to Reviewer cdE4. These changes include moving the $P(s'|a,s)$ term to the distribution of an expectation function, thus avoiding the need to include any (initially omitted, as you noted) summation in Equations (35) and (36), among others. A fully updated version of Section 5.2 is included in the below link, to peruse the changes to this proof at your leisure. https://drive.google.com/file/d/1XTZ3VSuIDFaqHM_NxsjBpQEhkR6DFOyM/view?usp=sharing The transition from Equation (29) to (30) is essentially the same as the transition from (24) to (28), except that all the $C_1, C_2, C_3 = 0$, and the $V_E + V_I$ terms do not drop out, as they're inside a max function rather than an argmax. This was admittedly confusing, and also unnecessary to the flow of the proof. We've omitted these equations in the new version, in favor of a more rigorous explanation. As for addressing more literature, we've added acknowledgements of more recent approaches to Section 2, including Never Give Up and AIRS (suggested by another reviewer). Thank you again for the review, particularly the kind words about the novelty of our framework and theory; you clearly read the paper very closely and understand what we're trying to do, and it's nice to feel "seen."
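For concreteness, the two equivalent Bellman forms at issue in the exchange above (the explicit summation over next states the reviewer asked for, and the expectation form the rebuttal says the revision adopts) are:

```latex
\[
  (\mathcal{T}Q)(s,a)
  \;=\; R(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\,\max_{a'} Q(s', a')
  \;=\; R(s,a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\!\left[\max_{a'} Q(s', a')\right].
\]
```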
Deep Unsupervised Hashing via External Guidance
Accept (poster)
Summary: This paper introduces DUH-EG, a deep unsupervised hashing framework that integrates external textual information as semantic guidance to overcome the limitations of relying solely on internal visual structures. The method first constructs external features by extracting and clustering textual features (derived from WordNet nouns and a pre-trained CLIP model) to select representative semantic nouns. It then employs a bidirectional contrastive learning loss to align hash codes generated from two augmented views of an image and its corresponding external textual view. Extensive experiments on standard image retrieval benchmarks (CIFAR-10, NUS-WIDE, FLICKR25K, and MSCOCO) demonstrate that DUH-EG outperforms several state-of-the-art unsupervised hashing methods. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. The paper claims that incorporating external textual guidance can enrich semantic supervision and lead to more discriminative hash codes. These claims are supported by the comprehensive experimental results, such as comparison experiments, ablation studies, and parameter analysis. Methods And Evaluation Criteria: Yes, both the proposed methods and evaluation criteria make sense for the problem at hand. Specifically, the proposed method is well motivated and clearly described. Leveraging a pre-trained CLIP model to extract and fuse textual features, followed by a clustering-based selection mechanism, is a creative strategy to overcome synonym redundancy. The use of a bidirectional contrastive loss—extending traditional contrastive learning to align not only different augmented views but also external textual views—is appropriate for enhancing semantic consistency. Evaluation metrics such as MAP, Precision-Recall curves, and TopN-Precision are standard and suitable for assessing image retrieval performance. Theoretical Claims: There are no proofs of theoretical claims.
The paper primarily focuses on algorithmic and empirical contributions rather than deep theoretical proofs. The derivation of the bidirectional contrastive loss and the rationale for the external feature construction are presented clearly. The formulation is standard within the contrastive learning literature and appears correct. Experimental Designs Or Analyses: Yes, I checked the soundness of the experimental designs and analyses. The experimental design is robust: 1) The authors evaluate on multiple benchmark datasets. 2) Comparisons are made against 14 state-of-the-art unsupervised hashing methods. 3) Ablation studies and parameter sensitivity analyses provide insights into the contribution of each component. Supplementary Material: Yes, I reviewed the supplementary material. The supplementary material (including additional P-R and P@N curves and extended parameter analysis) reinforces the claims made in the main paper. It offers a more comprehensive view of the method's performance under various settings and validates the stability of the approach across different conditions. Relation To Broader Scientific Literature: The paper is well situated within the broader context of unsupervised hashing and contrastive learning. It builds on recent advances in using internal data structures for hash code learning while making a novel contribution by integrating external textual information. The work is compared with and references several key studies in both unsupervised hashing and multi-modal representation learning, clearly delineating its contributions relative to prior art. Essential References Not Discussed: No. This paper has cited the essential related works. Other Strengths And Weaknesses: Strengths: 1) Novel integration of external textual guidance with unsupervised hashing. 2) Comprehensive experimental validation across multiple benchmark datasets. 3) Clear ablation studies and parameter analyses that illustrate the effectiveness of individual components.
Weaknesses: 1) Limited discussion on the scalability and computational cost of extracting and processing external textual features. 2) Lack of deeper analysis on why external guidance particularly improves the discrimination of hash codes. 3) Some methodological details (e.g., sensitivity to the choice of pre-trained models) could be elaborated further. Other Comments Or Suggestions: 1) A more detailed analysis of computational overhead would be valuable, especially when scaling to larger external noun databases. 2) Consider discussing potential impacts of noisy or less relevant textual features. 3) Minor editorial improvements and clarifications in the description of the bidirectional contrastive loss could help improve readability. Questions For Authors: 1) How does DUH-EG scale when the external textual database is significantly larger, and what are the computational implications? 2) Have you evaluated the impact of noisy or irrelevant external textual features on the hash code quality? 3) Can you provide further justification for the selection of the similarity thresholds (T1 and T2) across different datasets? Are these hyperparameters sensitive to dataset characteristics? 4) How does the choice of pre-trained models (e.g., using alternatives to CLIP) affect performance? Would similar improvements be expected with other multi-modal models? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful comments and suggestions. To enhance readability, we will refine the description of the bidirectional contrastive loss in the revised manuscript accordingly. **Q1: How does DUH-EG scale when the external textual database is significantly larger, and what are the computational implications?** **R1:** Our method enhances scalability by selecting representative textual features and aggregating them into a single external feature **e** using Eq. (5) of the original paper. This ensures scalability when handling a significantly larger textual database. Regarding computational cost, Table 1 and Table 2 show that while pre-processing time increases with larger external noun databases, MAP performance remains largely unchanged, demonstrating good scalability. However, in the inference phase, since only the image hashing network is used to generate discriminative hash codes, the size of the textual database has no impact on inference efficiency. --- **Q2: Have you evaluated the impact of noisy or irrelevant external textual features on the hash code quality?** **R2:** Yes. We highlighted the positive impact of filtering redundant external textual features using the External Feature Construction (EFC) method (Section 3.2) in the supplementary material. As shown in Fig. 7, after filtering, the feature "Cat_sleep" had the highest weight. Without EFC, irrelevant features like "Round_shape," "Ovoid," and "Hair_ball" had higher weights than "Cat_sleep." The ablation study in Table 3 confirms that irrelevant features degrade the quality of the learned hash codes. **Q3: Can you provide further justification for the selection of the similarity thresholds $T_1$ and $T_2$ across different datasets? Are these hyperparameters sensitive to dataset characteristics?** **R3:** 1.
To ensure consistency across datasets, we fixed $T_1$ at 0.90 and $T_2$ at 0.97 when comparing with other methods in Table 1 of the original paper. 2. No, they are not sensitive to datasets, so we use the same parameters in all datasets. **Q4: How does the choice of pre-trained models (e.g., using alternatives to CLIP) affect performance? Would similar improvements be expected with other multi-modal models?** **R4:** 1. Since our method uses Bidirectional Contrastive Learning to align an image's textual and visual representations, the quality of external knowledge introduced through the textual modality affects the hashing network’s performance. To validate this, we trained our model with textual features from CLIP (ViT-B/16) and CLIP (ViT-B/32). As shown in Table 3, the MAP performance differs significantly on CIFAR-10 and MSCOCO, confirming the impact of the pre-trained model choice. 2. Yes, similar improvements would be expected with other multi-modal models, as evidenced by the results of CLIP (ViT-B/16) and CLIP (ViT-B/32). 
**Table 1: The number of words and pre-processing time for different external knowledge.** | External Knowledge | Word Num | Pre-processing Time | |--------------------|----------|---------------------| | ImageNet | 21,843 | 99.53s | | WordNet | 117,797 | 476.79s | | ConceptNet | 134,364 | 531.12s | | GloVe | 317,756 | 1182.11s | **Table 2: The MAP performance while using different external knowledge.** | External Knowledge | CIFAR-10 (16, 32, 64 bits) | NUS-WIDE (16, 32, 64 bits) | FLICKR25K (16, 32, 64 bits) | MSCOCO (16, 32, 64 bits) | |--------------------|---------------------------|----------------------------|----------------------------|--------------------------| | ImageNet | 0.940 / 0.942 / 0.941 | 0.849 / 0.854 / 0.855 | 0.871 / 0.885 / 0.888 | 0.864 / 0.885 / 0.892 | | WordNet | 0.939 / 0.940 / 0.940 | 0.849 / 0.856 / 0.856 | 0.874 / 0.887 / 0.892 | 0.862 / 0.881 / 0.888 | | ConceptNet | 0.938 / 0.940 / 0.940 | 0.850 / 0.853 / 0.855 | 0.875 / 0.885 / 0.890 | 0.865 / 0.886 / 0.893 | | GloVe | 0.932 / 0.936 / 0.935 | 0.846 / 0.852 / 0.852 | 0.870 / 0.882 / 0.887 | 0.842 / 0.863 / 0.869 | **Table 3: The MAP performance while using different pre-trained models.** | Pre-trained Model | CIFAR-10 (16, 32, 64 bits) | NUS-WIDE (16, 32, 64 bits) | FLICKR25K (16, 32, 64 bits) | MSCOCO (16, 32, 64 bits) | |---------------------|---------------------------|----------------------------|----------------------------|--------------------------| | CLIP (ViT-B/16) | 0.939 / 0.940 / 0.940 | 0.849 / 0.856 / 0.856 | 0.874 / 0.887 / 0.892 | 0.862 / 0.881 / 0.888 | | CLIP (ViT-B/32) | 0.920 / 0.929 / 0.920 | 0.855 / 0.856 / 0.860 | 0.869 / 0.887 / 0.888 | 0.848 / 0.868 / 0.887 |
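To make the selection-and-aggregation idea discussed in R1-R3 above concrete, here is a minimal numpy sketch. The greedy cosine-threshold filter, the softmax weighting, and the helper names (`select_representatives`, `external_feature`) are illustrative assumptions for this sketch, not the paper's exact Eq. (5) or EFC procedure.

```python
import numpy as np

# Illustrative sketch of the external-feature pipeline: (1) filter
# near-duplicate (synonym-like) noun embeddings with a cosine threshold T1,
# then (2) aggregate the survivors into one external feature per image via
# similarity-weighted averaging.

def l2norm(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def select_representatives(noun_feats, t1=0.90):
    """Greedily keep nouns whose cosine similarity to every kept noun is below t1."""
    feats = l2norm(noun_feats)
    kept = []
    for i in range(len(feats)):
        if all(float(feats[i] @ feats[j]) < t1 for j in kept):
            kept.append(i)
    return kept

def external_feature(img_feat, noun_feats, kept):
    """Softmax-weighted average of the selected noun features, weighted by image similarity."""
    feats = l2norm(noun_feats[kept])
    sims = feats @ l2norm(img_feat)
    w = np.exp(sims) / np.exp(sims).sum()
    return w @ feats

rng = np.random.default_rng(0)
nouns = rng.normal(size=(50, 16))
nouns = np.vstack([nouns, nouns[0] + 1e-3 * rng.normal(size=16)])  # near-synonym of noun 0
kept = select_representatives(nouns)
assert 0 in kept and 50 not in kept  # the near-duplicate gets filtered out
e = external_feature(rng.normal(size=16), nouns, kept)
assert e.shape == (16,)
```

The filter drops the appended near-duplicate of noun 0 because its cosine similarity to a kept feature exceeds T1, mirroring the synonym-redundancy removal the rebuttal describes.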
Summary: This paper proposes a deep unsupervised hashing framework (DUH-EG) designed for image retrieval. Unlike traditional unsupervised hashing methods that rely solely on intrinsic visual structures, DUH-EG leverages external textual guidance extracted from a lexical database (i.e., WordNet) and processed via a pre-trained CLIP model. The method comprises two components: (1) an external feature construction module that selects representative semantic nouns using clustering and filtering, and (2) a bidirectional contrastive learning loss that aligns hash codes from two augmented views of an image with its corresponding external textual feature. Extensive experiments on four widely-used benchmarks (i.e., CIFAR-10, NUS-WIDE, FLICKR25K, and MSCOCO) show considerable improvements in MAP over state-of-the-art approaches. Claims And Evidence: Yes Methods And Evaluation Criteria: 1. This paper introduces a two-phase approach. In the first phase, textual features for nouns are fused and clustered to select a diverse set of representative semantic features. In the second phase, these external features are aligned with internal image representations via a specially designed bidirectional contrastive loss. This loss not only maximizes mutual information between different augmented views but also redefines positive pairs by using external guidance to mitigate false negatives. 2. The evaluation is primarily based on MAP metrics across four image retrieval datasets. In addition, the authors provide ablation studies, sensitivity analyses for the key thresholds (T1 and T2), and supplementary metrics (P-R and P@N curves) to reinforce the empirical performance. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: 1. The authors evaluate DUH-EG on four standard benchmarks, comparing against 14 state-of-the-art unsupervised hashing methods.
The comprehensive comparison and the use of diverse backbones (i.e., VGG-16 and CLIP) strengthen the empirical study. 2. Detailed ablation experiments dissect the contributions of individual modules (i.e., SIM and AUG) and various configurations (e.g., using internal visual knowledge vs. external textual knowledge). The parameter analysis on thresholds T1 and T2 further validates the robustness of the proposed method. 3. The experimental analysis is thorough. However, a discussion regarding the computational overhead introduced by the external feature extraction and the sensitivity to pre-trained models (like CLIP) would provide additional clarity. Supplementary Material: The supplementary materials include additional precision-recall curves, TopN-precision curves, and visualizations of retrieval results. These supplementary results further substantiate the claims made in the main paper by demonstrating improved retrieval quality under varying conditions. Relation To Broader Scientific Literature: The paper positions its contributions well within the literature on unsupervised hashing, contrastive learning, and multimodal representation learning. It advances the field by combining cross-modal external guidance with contrastive learning, addressing a known limitation of internal visual structures. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. The idea of using external textual guidance to overcome intrinsic limitations of visual data is novel and interesting. 2. Extensive experiments with competitive baselines across multiple datasets demonstrate the efficacy of the proposed method. 3. The detailed ablation studies and parameter analyses provide a clear breakdown of the contributions of each component. 4. The method and experiments are described with sufficient clarity, making the approach understandable. Weaknesses: 1. 
A discussion on the computational costs and scalability issues associated with integrating external guidance would be beneficial. 2. Relying on external pre-trained models (e.g., CLIP) and WordNet might limit the method's applicability in scenarios where such resources are constrained. Other Comments Or Suggestions: Typo: Page 6, "token the remaining images" -> "took the remaining images". Questions For Authors: 1. Why was T2 fixed at 0.97 for all datasets despite varying optimal values (Fig. 3b)? What about performance if dataset-specific T2 is used? 2. What is the training/inference time of DUH-EG compared to baselines, especially with CLIP? 3. Will code and pretrained models be released? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and constructive suggestions. Below are our point-to-point responses:

**Q1: A discussion on the computational costs and scalability issues.**

**R1:** To analyze computational costs and scalability, we evaluated our approach using external vocabulary sets of varying sizes from four sources: 1) Noun category labels from ImageNet; 2) Nouns extracted from WordNet; 3) Nouns from the ConceptNet knowledge graph; and 4) The full vocabulary from GloVe. The experimental results are shown in Table 1, presenting the pre-processing time of our method for each external knowledge source. Since our approach extracts textual features for all words using CLIP, the pre-processing time increases significantly as the vocabulary size grows. Therefore, exploring more efficient strategies for extracting external textual features is a promising direction for our future research. Notably, after external knowledge pre-processing, our training and inference are identical to those of existing methods, with no additional time added.

Regarding scalability, we evaluated the model performance using four external vocabulary sources with different sizes, as shown in Table 2. Our External Feature Construction (EFC) method (in Section 3.2 of the original paper) effectively mitigates redundancy in external textual features, ensuring robust performance even when different external knowledge sources are used. This highlights our method's adaptability and scalability.

**Q2: Relying on external pre-trained models (e.g., CLIP) and WordNet might limit the method's applicability in scenarios where such resources are constrained.**

**R2:** Our method leverages the pre-trained CLIP to incorporate textual features from WordNet as external knowledge for unsupervised hashing. However, we would like to clarify that our approach does not strictly rely on these specific resources. 
In resource-constrained scenarios, lightweight or domain-specific pre-trained multimodal models and textual knowledge bases could be used as alternatives.

**Q3: Why was $T_2$ fixed at 0.97 for all datasets despite varying optimal values (Fig. 3b)?**

**R3:** To maintain a consistent set of hyperparameters across all datasets, we fixed $T_2=0.97$ in comparison experiments, thereby reducing parameter tuning costs. If dataset-specific $T_2$ values were used, the performance would improve slightly, as indicated by the star-marked points in Fig. 3b.

**Q4: What is the training/inference time of DUH-EG compared to baselines, especially with CLIP?**

**R4:** We freeze the parameters of CLIP and train only the subsequent hashing layers. This makes our training process more efficient than updating all parameters of CLIP. For inference, our method consists of two steps: (1) feature extraction using CLIP and (2) hashing the features via the trained hashing layers. Moreover, the hashing layers consist of only two fully connected layers [$F$--$512$--$L$] with activation functions, which introduce negligible additional overhead (about 0.3M params). We provide a detailed overview of the computational costs associated with our method in Table 3, which demonstrates the efficiency of our method compared with CLIP.

**Q5: Will code and pretrained models be released?**

**R5:** Yes. We will release them after the acceptance of the paper. 
**Table 1: The number of words and pre-processing time for different external knowledge.**

| External Knowledge | Word Num | Pre-processing Time |
|--------------------|----------|---------------------|
| ImageNet | 21,843 | 99.53s |
| WordNet | 117,797 | 476.79s |
| ConceptNet | 134,364 | 531.12s |
| GloVe | 317,756 | 1182.11s |

**Table 2: The MAP performance using different external knowledge.**

| External Knowledge | CIFAR-10 (16, 32, 64 bits) | NUS-WIDE (16, 32, 64 bits) | FLICKR25K (16, 32, 64 bits) | MSCOCO (16, 32, 64 bits) |
|--------------------|----------------------------|----------------------------|-----------------------------|--------------------------|
| ImageNet | 0.940 / 0.942 / 0.941 | 0.849 / 0.854 / 0.855 | 0.871 / 0.885 / 0.888 | 0.864 / 0.885 / 0.892 |
| WordNet | 0.939 / 0.940 / 0.940 | 0.849 / 0.856 / 0.856 | 0.874 / 0.887 / 0.892 | 0.862 / 0.881 / 0.888 |
| ConceptNet | 0.938 / 0.940 / 0.940 | 0.850 / 0.853 / 0.855 | 0.875 / 0.885 / 0.890 | 0.865 / 0.886 / 0.893 |
| GloVe | 0.932 / 0.936 / 0.935 | 0.846 / 0.852 / 0.852 | 0.870 / 0.882 / 0.887 | 0.842 / 0.863 / 0.869 |

**Table 3: Training/inference time, FLOPs, and params of DUH-EG and CLIP on an NVIDIA 3090 GPU.**

| Model | Training Time / epoch | Inference Time / sample | GFLOPs | Params |
|------------|--------------------------------------------------------|---------|---------|--------|
| DUH-EG | 105.27s (without pre-processing of external knowledge) | 4.1227s | 11.2703 | 57.56M |
| CLIP | The CLIP was not trained on the 3090 GPU | 4.1223s | 11.2700 | 57.26M |
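To make the EFC pre-processing discussed in R1 more concrete, the redundancy-reduction idea can be sketched as follows. This is an illustrative reading only: the function names are ours, a greedy similarity filter stands in for the paper's clustering procedure, and the softmax weighting is just one plausible interpretation of the weighted summation in Eq. (5).

```python
import math

def cos(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_representatives(text_feats, t1):
    """Greedy de-duplication: keep a textual feature only if its similarity
    to every already-kept representative stays below the threshold t1."""
    reps = []
    for f in text_feats:
        if all(cos(f, r) < t1 for r in reps):
            reps.append(f)
    return reps

def fuse_external_feature(img_feat, reps):
    """Fuse representatives into one external feature via a similarity-
    weighted (softmax) sum over image-text similarities."""
    sims = [cos(img_feat, r) for r in reps]
    exps = [math.exp(s) for s in sims]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(img_feat)
    return [sum(w * r[d] for w, r in zip(weights, reps)) for d in range(dim)]
```

With a threshold such as 0.95, near-synonymous feature vectors are pruned before fusion, which mirrors the rebuttal's claim that EFC keeps performance stable across vocabularies of very different sizes.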
Summary: The paper identifies a crucial bottleneck in unsupervised image hashing, i.e., the limitation of insufficient knowledge guidance solely relying on the visual structures. To remedy this, semantic representatives are selected from external textual databases, serving as external guidance for the image modality via a bidirectional contrastive loss. Leveraging textual external knowledge, they designed a novel contrastive loss to avoid false negative pairs. Experiments on various datasets and model architectures have validated the consistent effectiveness of the method in mining and preserving semantic similarity for unsupervised hashing. ## update after rebuttal The authors' rebuttal adequately addressed these concerns. As a result, I keep my score as 4 (recommend acceptance). Claims And Evidence: Yes. Claims regarding proposed approaches are well-verified by extensive ablation results w.r.t. all 4 evaluated datasets and multiple hash code lengths. Methods And Evaluation Criteria: The proposed method is clear and reasonable with unique consideration of external textual characteristics. The evaluation criteria are also reasonable as the evaluated datasets vary in scales and the adopted evaluation metrics are widely-used for the hashing task. Theoretical Claims: The paper does not contain proofs/theoretical claims. Experimental Designs Or Analyses: The paper’s method is verified comprehensively and analyzed thoroughly via extensive experiments to establish its validity. Supplementary Material: Yes. The supplementary material provides the P-R Curve results on all evaluated datasets and a visualization of the retrieved images and matched external nouns. There is also an algorithmic outline of the method. Relation To Broader Scientific Literature: The work is a novel improvement in unsupervised semantic image hashing, with the potential to be applied to hashing retrieval of other modalities that can be associated with textual knowledge. 
The paper’s method is closely related to contrastive learning and clustering approaches. Essential References Not Discussed: Key references are well-discussed. Other Strengths And Weaknesses: Strengths: 1. As summarized, the authors identify a knowledge bottleneck in unsupervised image hashing retrieval. The proposed method is well-motivated and novel, based on an analysis of the key challenges and potential problems in learning with external guidance. 2. The experiments compare state-of-the-art methods with datasets of different scales and complexities, while presenting extensive analyses on both the learning approaches and the external feature construction. 3. The paper effectively presents the proposed method with generally excellent clarity. Weaknesses: 1. More recent works could be included in the related work of deep unsupervised hashing. Other Comments Or Suggestions: 1. The paper is clear in its language but it’s better to check some mathematical operations with different names (e.g., 'sim' and 'cos' in Eq.(3),(4),(6),(8), and Line 256-258) which may have the same meanings and cause ambiguity. 2. The authors may limit the use of boldface abbreviations only for necessary cases. Questions For Authors: 1. Why does the similarity threshold T_1 specifically affect the Flickr25k dataset when its value is high? 2. Why does a reduction in T_2 have a more significant impact on the model compared to T_1? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and constructive suggestions. Below are our point-to-point responses:

**Q1: More recent works could be included in the related work of deep unsupervised hashing.**

**R1:** Thank you for your suggestion. We will update the related work section to incorporate more recent studies on deep unsupervised hashing, ensuring a more comprehensive and up-to-date literature review.

---

**Q2: The paper is clear in its language, but it’s better to check some mathematical operations with different names (e.g., 'sim' and 'cos' in Eq.(3), (4), (6), (8), and Lines 256-258) which may have the same meanings and cause ambiguity.**

**R2:** We appreciate your careful examination of our mathematical operations. To eliminate potential ambiguity, we will revise the notation in Eq.(3), (4), (6), and (8) of the original paper, as well as in Lines 256–258, ensuring consistency in mathematical operations.

---

**Q3: The authors may limit the use of boldface abbreviations only for necessary cases.**

**R3:** Thanks for your suggestion. We will refine the formatting to use boldface abbreviations only where necessary, improving readability and clarity.

---

**Q4: Why does the similarity threshold $T_1$ specifically affect the Flickr25k dataset when its value is high?**

**R4:** The similarity threshold $T_1$ plays a crucial role in selecting representative textual features with lower similarity to the cluster center. As shown in Eq.(5) of the original paper, the final fused external feature **e** is computed from the external textual feature set $\{\bar{t}_w^{com}\}$. Thus, reducing redundancy in $\{\bar{t}_w^{com}\}$ is essential for ensuring well-matched external textual knowledge for the original images. If $T_1$ is set too high, some representative textual features may be filtered out during the process described in Eq.(4), weakening the discriminative power of the final fused external feature **e**. 
Since the representational capacity of **e** is influenced by $\{\bar{t}_w^{com}\}$, different datasets may respond differently to variations in $T_1$. In the case of Flickr25k, a higher $T_1$ can significantly impact model performance, likely due to the dataset’s inherent characteristics, making it more sensitive to the removal of certain textual features.

---

**Q5: Why does a reduction in $T_2$ have a more significant impact on the model compared to $T_1$?**

**R5:** Unlike $T_1$, which filters representative textual features, the primary role of the similarity threshold $T_2$ is to determine positive and negative sample pairs in Eq.(9). Fluctuations in $T_2$ directly affect the model’s ability to correctly distinguish between potential positive and negative pairs. Even a slight reduction in $T_2$ can cause some negative sample pairs to be misclassified as positive ones. This misclassification directly affects the discrimination of the learned hash codes, ultimately leading to a decline in overall model performance.
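The role of $T_2$ described in R5 can be illustrated with a minimal NT-Xent-style sketch in which any sample whose external similarity to the anchor reaches $T_2$ is treated as a positive rather than a negative. All names here are hypothetical and this is not the paper's exact Eq. (9); it only shows why lowering $T_2$ admits extra, possibly false, positives.

```python
import math

def contrastive_loss_with_t2(codes, ext_sim, t2, tau=0.5):
    """NT-Xent-style loss: for each anchor i, samples j with external
    similarity ext_sim[i][j] >= t2 are treated as positives; all other
    samples (except i itself) remain in the denominator as negatives."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    n = len(codes)
    total, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and ext_sim[i][j] >= t2]
        if not positives:
            continue
        denom = sum(math.exp(cos(codes[i], codes[j]) / tau)
                    for j in range(n) if j != i)
        for j in positives:
            total -= math.log(math.exp(cos(codes[i], codes[j]) / tau) / denom)
            count += 1
    return total / max(count, 1)
```

Dropping `t2` far enough to pull a semantically unrelated sample into the positive set inflates the average loss, which matches the rebuttal's point that misclassified pairs directly hurt the learned hash codes.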
Summary: This paper proposes a novel deep unsupervised hashing method, Deep Unsupervised Hashing with External Guidance (DUH-EG), to enhance image retrieval by incorporating external textual knowledge as semantic guidance. The method selects representative semantic nouns from an external textual database, aligns images with them to extract more discriminative external features, and employs a bidirectional contrastive learning mechanism to maximize agreement between hash codes in internal and external spaces. Experiments on CIFAR-10, NUS-WIDE, FLICKR25K, and MSCOCO demonstrate that DUH-EG significantly outperforms state-of-the-art unsupervised hashing methods. Claims And Evidence: The paper claims that incorporating external textual guidance improves hash learning by overcoming the limitations of internal visual structures. This claim is supported by extensive experiments, where DUH-EG consistently outperforms existing methods across different datasets and hash code lengths. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem. The paper follows standard benchmarks and metrics, such as Mean Average Precision (MAP), Precision-Recall (P-R) curves, and Top-N Precision. The inclusion of multiple datasets and comparison with strong baselines provides a comprehensive evaluation. Theoretical Claims: No theoretical claims or proofs are presented; the work is empirically driven. Experimental Designs Or Analyses: The experimental design is rigorous, with a comprehensive evaluation across multiple datasets and comparison with state-of-the-art methods. The ablation studies provide insights into the contributions of different proposed components. Supplementary Material: I have reviewed the supplemental material (e.g., Figure 4-7, Algorithm 1-2), the visualizations and additional metrics (P@N) can reinforce the claim. 
Relation To Broader Scientific Literature: The paper builds upon prior work in deep unsupervised hashing and contrastive learning, while introducing external knowledge as a novel enhancement. Essential References Not Discussed: No. The paper has covered a wide range of related works. Other Strengths And Weaknesses: The strengths of the paper include the novelty of integrating external textual knowledge, strong experimental validation, and a well-designed bidirectional contrastive learning mechanism. The primary weakness is the lack of deeper analysis of how different external knowledge sources affect performance. Other Comments Or Suggestions: (1) Clarify the selection criteria for external textual features and how they generalize across datasets. (2) Provide additional ablation studies on the influence of different external knowledge sources. (3) Discuss potential limitations, such as dependency on pre-trained vision-language models like CLIP. Questions For Authors: (1) How does the selection of external textual features affect the model's robustness across datasets? (2) Would using domain-specific external knowledge further enhance performance? (3) Can this approach be extended to other modalities beyond images and text? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate your thoughtful feedback and recognition of our contributions. Below are our point-to-point responses:

**Q1: How does the selection of external textual features affect the model's robustness across datasets?**

**R1:** In our work, we focus on widely used benchmarks, such as CIFAR-10, NUS-WIDE, FLICKR25K, and MSCOCO, which primarily contain images of common objects and everyday scenes. To ensure generalization, we use a general vocabulary (e.g., WordNet) as our external knowledge source. Specifically, we extract textual features via CLIP and align them with visual features, promoting semantic consistency among similar images in the learned discrete representations. However, due to the presence of synonyms, textual features from the general vocabulary often introduce redundancy. To address this, we propose an External Feature Construction (EFC) method (Section 3.2) to cluster semantically similar nouns and select representative textual features, thereby reducing redundancy caused by excessive synonyms. This process enhances the semantic richness of the aggregated external textual features obtained via weighted summation (Eq. 5) and improves their representational capacity. As demonstrated in our ablation experiments (Table 3), the EFC method improves model performance. Furthermore, as shown in Fig. 7 (Appendix), applying EFC reduces the number of high-weighted nouns with similar meanings, validating its effectiveness in redundancy reduction.

---

**Q2: Would using domain-specific external knowledge further enhance performance?**

**R2:** Yes, using domain-specific external knowledge can further enhance performance. To evaluate this, we conducted an ablation study using six external knowledge sources (ImageNet, ConceptNet, WordNet_N, WordNet_V, GloVe, and Medical) on the MSCOCO dataset, which belongs to the natural image domain.

- ImageNet represents 21,843 noun labels from the ImageNet dataset. 
- ConceptNet and WordNet_N contain noun data.
- WordNet_V consists of verb data.
- GloVe includes words spanning multiple parts of speech.
- Medical contains vocabulary from the medical domain.

Experimental results in Table 1 show that models trained with general noun vocabulary (e.g., ImageNet, ConceptNet, WordNet_N) perform better, despite differences in vocabulary size. In contrast, models trained on verb-based (WordNet_V) or mixed-POS vocabularies (GloVe) exhibit lower performance, with the medical vocabulary yielding the lowest results. In summary, for a target dataset (e.g., MSCOCO) in the natural image domain, using domain-relevant external knowledge (e.g., a general noun vocabulary) effectively enhances model performance; that is, matching external knowledge to the target dataset’s domain improves performance.

**Table 1: Ablation studies on the influence of different external knowledge on the specific dataset MSCOCO.**

| **External Knowledge** | **Word Num** | **MSCOCO (16 bits, 32 bits, 64 bits)** |
|------------------------|--------------|----------------------------------------|
| **Medical** | 35,500 | 0.807 / 0.835 / 0.846 |
| **GloVe** | 317,756 | 0.842 / 0.863 / 0.869 |
| **WordNet_V** | 11,531 | 0.825 / 0.850 / 0.863 |
| **WordNet_N** | 117,797 | 0.862 / 0.881 / 0.888 |
| **ImageNet** | 21,843 | 0.864 / 0.885 / 0.892 |
| **ConceptNet** | 134,364 | 0.865 / 0.886 / 0.893 |

**Q3: Can this approach be extended to other modalities beyond images and text?**

**R3:** Yes, our approach could be extended to other modalities, but certain preconditions apply. Specifically, our method aligns external textual features with visual representations via CLIP, making it well-suited for unsupervised image hashing. While extending to other modalities (e.g., audio using models like Wav2CLIP) is promising, its effectiveness depends on the availability of modality-specific external knowledge and corresponding pre-trained models.
Heterogeneous Sufficient Dimension Reduction and Subspace Clustering
Accept (poster)
Summary: This paper proposes a new method, mixPFC, which integrates subspace clustering with model-based SDR to handle heterogeneous, high-dimensional data. The method simultaneously performs clustering, subspace estimation, and variable selection. A group Lasso penalized EM algorithm is developed, and non-asymptotic convergence rates are established. Empirical results demonstrate superior performance over existing methods in simulations and real-world applications. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: Yes, I checked the numerical results in Section 5. Supplementary Material: Yes, I reviewed the implementation details in Appendix B. Relation To Broader Scientific Literature: This paper extends the subspace clustering into a supervised framework. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: The integration of subspace clustering with SDR is innovative, addressing the limitations of unsupervised subspace clustering and homogeneous SDR. The supervised framework leverages response information to improve clustering and dimension reduction. Weakness: The proposed mixPFC is built upon a Gaussian assumption outlined in Equation (3). This may limit the applicability and robustness of the method because many real-world datasets exhibit non-normal characteristics such as skewness or heavy tails. Other Comments Or Suggestions: None. Questions For Authors: 1. How to choose or estimate $f(Y)$ in practice? 2. What are the theoretical properties when $K>2$? 3. What is the computational cost associated with mixPFC? I believe that when $K$ is large or $p$ is very high, the mixPFC becomes computationally extensive. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for raising many critical points. Below, we address each of your concerns.

## Weaknesses

> (Gaussian assumption)

**Reply**
- The Gaussian assumption is common in clustering, promoting interpretability and scalability.
- mixPFC’s theoretical grounding and empirical performance establish it as a new frontier for heterogeneous SDR, with a clear pathway for generalization: e.g., replacing the Gaussian distribution in the model with the Student-t or skew-normal distribution to handle heavy-tailed or skewed data.
- In our experience, mixPFC, whose algorithm is derived under the Gaussian assumption, is generally robust under non-Gaussian noise (model misspecification). To show this, we consider the following errors:
  - multivariate t-distribution with 3 degrees of freedom
  - multivariate skew-normal distribution with shape parameter 5
- Under heavy-tailed noise, mixPFC achieves low error rates (e.g., t(3): 3.4 under $0.1\mathbf{I}$ vs. SSC’s 17.4; 11.0 under AR(0.3) vs. SSC’s 46.3), demonstrating robustness to non-Gaussianity. Under the same setting, mixPFC's performance does not change significantly when the normal errors are replaced with skew-normal errors. Full results will be included in the final manuscript.

## Questions

> (choice of $f(Y)$ in practice)

**Reply**
- As noted in Theorem 3.5 of the original PFC framework (Cook & Forzani (2008 Stat. Sci)), the estimated subspace remains consistent under misspecification of $\mathbf{f}(Y)$, provided $\mathbf{f}(Y)$ is sufficiently correlated with the true function. Our mixPFC model directly inherits this property from the PFC model. This property ensures that our method retains validity across a broad class of functions, which can be treated as a tuning parameter to enhance flexibility.
- In practice, polynomials or splines are standard choices. We use $\mathbf{f}(Y) = (Y, Y^2, Y^3)^T$ in simulations. 
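As a small illustration of this choice, the design matrix for $\mathbf{f}(Y) = (Y, Y^2, Y^3)^T$ can be built as below. The helper name is ours; column-centering follows the usual PFC convention of working with $\mathbf{f}(Y_i) - \bar{\mathbf{f}}$, and this is only a sketch, not the authors' implementation.

```python
def fy_basis(y_values, degree=3):
    """Build the n x degree matrix with columns (Y, Y^2, ..., Y^degree),
    each column centered to mean zero across the n observations."""
    n = len(y_values)
    cols = []
    for d in range(1, degree + 1):
        col = [y ** d for y in y_values]
        mean = sum(col) / n
        cols.append([v - mean for v in col])
    # Transpose so rows index observations and columns index basis terms.
    return [[cols[d][i] for d in range(degree)] for i in range(n)]
```

Because the subspace estimate stays consistent under a merely correlated $\mathbf{f}(Y)$, swapping this cubic basis for splines would be a tuning choice rather than a model change.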
> (theory when $K>2$)

**Reply**

Theoretical Challenge for $K=2$
- While classical EM theories only guaranteed asymptotic convergence to a fixed point, we derive a non-asymptotic result that mixPFC converges geometrically to a fixed point that is within statistical precision of the unknown **true** parameter. This type of result has emerged only recently [BA17].
- Even the low-dimensional EM theory for two mixtures requires additional assumptions such as equal proportions and known covariance (Gaussian mixtures [XU16]), or equal proportions and symmetric coefficients $\boldsymbol{\beta}_1 = -\boldsymbol{\beta}_2$ (mixtures of linear regression [KL19]). Theorem 4.1 establishes convergence rates under unequal proportions and arbitrary subspace angles.
- Many established EM theories [KW19] are based on sample-splitting. Sample-splitting divides the data into many batches and uses a new batch of samples in each iteration to make random samples and current parameter estimates independent.
- Our theoretical analysis does not require sample-splitting and allows a more complex model (which involves subspace estimation and variable selection), making it novel and highly non-trivial.

Theoretical Challenge for $K>2$
- Establishing theories for multi-cluster EM algorithms is an important but challenging direction. Recent advances focus on low-dimensional settings, see [YA17, TI24].
- To the best of our knowledge, extending multi-cluster EM theories to high dimensions is an open question. Theoretical analyses are often limited to $K=2$ even for Gaussian mixtures and mixtures of linear regression (Cai et al (2019 AoS); Wang et al. (2024 JMLR)).
- When $K$ diverges, we believe it requires fundamentally new tools to handle complex parameter spaces and interactions between $K$ subspaces.
- Empirically, mixPFC works well when $K>2$. Its superior performance is demonstrated through simulations (Section 5.1 and Appendix A.1). 
> (computational cost)

**Reply**
- The total cost per EM iteration is $O(nKpq + nK^3q^3 + KTpnq + nKp^2)$. Typically, $q$ is a small number that does not grow with $n, p, K$. Theorem 4.1 suggests the number of EM iterations is of order $O(\log(n))$.
- The total cost of mixPFC is $O(\log(n)(nK^3 + KTpn + nKp^2))$ with the dominant term $O(\log(n)Knp^2)$ from covariance estimation, which is tractable when $K$ or $p$ is large.

[BA17] Balakrishnan et al. (2017 AoS) Statistical guarantees for the EM algorithm: From population to sample-based analysis
[XU16] Xu et al. (2016 NeurIPS) Global analysis of expectation maximization for mixtures of two Gaussians.
[KL19] Klusowski et al. (2019 IEEE Trans. Inf. Theory) Estimating the coefficients of a mixture of two linear regressions by expectation maximization
[KW19] Kwon et al. (2019 PMLR) Global convergence of the EM algorithm for mixtures of two component linear regression
[YA17] Yan et al. (2017 NeurIPS) Convergence of gradient EM on multi-component mixture of Gaussians
[TI24] Tian et al. (2024 ICML) Towards the theory of unsupervised federated learning: Non-asymptotic analysis of federated EM algorithms
Summary: The paper introduces a supervised subspace clustering model called mixPFC, which combines sufficient dimension reduction (SDR) with subspace clustering to address three major bottlenecks in traditional methods for high-dimensional heterogeneous data analysis: 1. The model effectively guides clustering and dimension reduction through the response variable, even when the subspaces are fully overlapping. 2. It designs an EM algorithm with a grouped Lasso penalty to jointly optimize variable selection and subspace estimation in high-dimensional scenarios, overcoming the limitations of traditional methods on the separation angles of subspaces. 3. The model allows for arbitrary overlap between subspaces and ensures identifiability by leveraging the nonlinear relationship between the response variable and the projected subspaces. Experiments show that it reduces clustering errors by more than 50% on both synthetic and real data, such as cancer genomic data. However, its limitations include overly simplified covariance structure assumptions (e.g., isotropic assumption), sensitivity to initialization, lack of a systematic strategy for cluster number selection, and the need for further validation of its generalizability to multi-cluster settings. Claims And Evidence: The submitted materials include a relatively detailed theoretical analysis. Methods And Evaluation Criteria: The paper introduces the mixPFC model, which integrates supervised sufficient dimension reduction with subspace clustering. Utilizing an EM algorithm with a grouped Lasso penalty, the model achieves joint clustering, dimension reduction, and variable selection in high-dimensional data. The paper also theoretically proves the non-asymptotic convergence rate, thus overcoming the traditional methods' dependence on the separability of subspaces. Theoretical Claims: I have checked the method. Experimental Designs Or Analyses: I have checked the experiment and analysis. 
Supplementary Material: I have read the supplementary. Relation To Broader Scientific Literature: The paper introduces the mixPFC model, which integrates supervised sufficient dimension reduction with subspace clustering, thus overcoming the traditional methods' dependence on the separability of subspaces. Essential References Not Discussed: I did not find. Other Strengths And Weaknesses: The strengths and weaknesses have already been discussed in the preceding sections. Other Comments Or Suggestions: No further comments. Questions For Authors: The theoretical analysis in the paper is confined to the binary classification ($K=2$) scenario, while the empirical results validate the effectiveness for multiple clusters ($K>2$). How can we ensure the convergence and statistical error rates of the EM algorithm remain consistent in complex heterogeneous data with multiple clusters ($K>3$)? When the number of clusters K increases with the data size (e.g., K=O(log n)), does the existing theoretical framework still hold? Additionally, is there a risk of decreased subspace identifiability due to increased interactions between clusters? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thoughtful comments. Below, we address each limitation raised.

## Weaknesses

> (isotropic covariance assumption)

**Reply**:
- The isotropic assumption in Theorem 4.1 was adopted solely to enable tractable analysis of $\gamma_{iw}$, a common simplification in mixture model theory. We agree that extending the theory to general covariances is a valuable future direction. However, relaxing this would require significant advances beyond the scope of our current framework.
- While the theory assumes isotropy for clarity, our algorithm imposes no such restriction. Simulations in Section 5.1 demonstrate mixPFC’s robustness under general covariances (e.g., AR(0.3)). This suggests the assumption is not necessary in practice.

> (sensitivity to initialization)

**Reply**
- Non-Convexity: Like all EM-based methods, we acknowledge that mixPFC inherits sensitivity to initialization, a well-known challenge in non-convex optimization [JA17].
- To mitigate this, our initialization strategy (Appendix B.1) follows principled recommendations from [BI03], using short EM runs on predictors reduced via PCA and variable screening to ensure stability in high dimensions. While existing initialization methods for classical mixtures are not directly applicable to our proposed model due to its unique structure, this approach consistently produced high-quality initial values (Table A.15).

> (cluster number selection)

**Reply**
- Appendix B.1 details our strategy for selecting $K$, which is inspired by the well-accepted gap statistic [TI01] with careful modifications to tailor it to our problem.
- The gap statistic is calculated using data projected onto the orthogonal subspaces.
- In high dimensions, estimating the expected within-cluster dispersion becomes computationally prohibitive. Instead, we propose using the observed within-cluster dispersion. Simulations in Appendix B.1 validate our proposal. 
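To illustrate the flavor of this selection rule, here is a deliberately simplified stand-in: compute the observed within-cluster dispersion $W_K$ for each candidate labeling and stop at the first $K$ after which the drop in $\log W_K$ becomes small. This elbow-style rule and all names are ours, not the authors' exact modified gap statistic.

```python
import math

def within_dispersion(points, labels, k):
    """W_k: sum over clusters of pairwise squared distances divided by
    cluster size (the dispersion quantity used by gap statistics)."""
    total = 0.0
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        if len(members) < 2:
            continue
        d = sum(sum((a - b) ** 2 for a, b in zip(u, v))
                for i, u in enumerate(members) for v in members[i + 1:])
        total += d / len(members)
    return total

def choose_k(points, labelings, tol=0.1):
    """Pick the smallest K whose drop in log-dispersion from K to K+1
    falls below tol * |log W_K| -- an elbow-style simplification."""
    ks = sorted(labelings)
    ws = [math.log(within_dispersion(points, labelings[k], k)) for k in ks]
    for i in range(len(ks) - 1):
        if ws[i] - ws[i + 1] < tol * abs(ws[i]):
            return ks[i]
    return ks[-1]
```

In the authors' actual procedure the points would be the data projected onto the estimated orthogonal subspaces, and the comparison uses their modified gap criterion rather than a fixed tolerance.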
We acknowledge that a summary of this approach in the main text would enhance clarity, and we will add it to the final manuscript if accepted. > (multi-cluster generalizability) **Reply** - We discuss theoretical challenges under multi-cluster settings in the reply to the second question of Reviewer LLk6. To avoid redundancy, we kindly ask Reviewer ooHT to refer to that response. - MixPFC’s performance in multi-cluster settings is rigorously tested in both simulations and real data. - Simulations in Section 5.1 include extensive tests with $K=3,5$ clusters under different noise levels. Appendix A.1 further validates its performance in terms of subspace estimation and variable selection. - In the analysis of CCLE data in Section 5.2 and Appendix A.2, we systematically tested $K=2,3,\dots,10$ clusters and selected $K=3$ for Nutlin-3 and $K=5$ for AZD6244, demonstrating mixPFC’s ability to adapt to multi-cluster settings. ## Questions > (theory for $K>2$) **Reply** We sincerely thank Reviewer ooHT and Reviewer LLk6 for their insightful critiques on multi-cluster theory. To avoid redundancy, we have provided a detailed discussion of theoretical challenges under multi-cluster settings in our **reply to Reviewer LLk6’s second question**. We kindly ask Reviewer ooHT to refer to that section and welcome any follow-up clarifications. > (subspace identifiability) **Reply** - Role of Response Information: Unlike classical subspace clustering, mixPFC leverages response information, which disentangles overlapping subspaces using the relationship between predictors and the response. Even when subspaces are identical (e.g., Figure 2(a)), the response provides a discriminative signal to recover clusters. Under Model M1 in Section 5.1, where the two subspaces are identical, mixPFC maintains cluster error rates below 10% across most covariance settings, whereas classical subspace clustering methods fail.
- Theoretical Guarantees: Theorem 4.1 establishes convergence to true subspaces **without requiring a minimum separation angle**. Traditional subspace clustering relies on separation assumptions (e.g., angles between subspaces in Theorem 2.8 of Soltanolkotabi & Candés (2012 AoS)). [JA17] Jain et al. (2017 FnTML) Non-convex optimization for machine learning. [BI03] Biernacki et al. (2003 CSDA) Choosing starting values for the EM algorithm for getting the highest likelihood in multivariate Gaussian mixture models. [TI01] Tibshirani et al. (2001 JRSS-B) Estimating the number of clusters in a data set via the gap statistic.
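The gap-statistic-based selection of $K$ described in the rebuttal above can be illustrated with a generic sketch of the classical gap statistic [TI01]. The 1-D data and the plain k-means routine below are hypothetical stand-ins, not the mixPFC procedure (which operates on data projected onto estimated subspaces and replaces the expected within-cluster dispersion with an observed one):

```python
import math
import random

def kmeans(points, k, iters=25, seed=0):
    """Plain 1-D k-means; returns a cluster label for each point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: (p - centers[c]) ** 2) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

def within_dispersion(points, labels, k):
    """Within-cluster dispersion W_k: sum of squared deviations from cluster means."""
    w = 0.0
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        if members:
            m = sum(members) / len(members)
            w += sum((p - m) ** 2 for p in members)
    return w

def gap(points, k, n_ref=20, seed=1):
    """Gap(k): average reference log-dispersion minus observed log-dispersion."""
    rng = random.Random(seed)
    lo, hi = min(points), max(points)
    obs = math.log(within_dispersion(points, kmeans(points, k), k))
    ref = 0.0
    for _ in range(n_ref):  # uniform reference datasets over the observed range
        u = [rng.uniform(lo, hi) for _ in points]
        ref += math.log(within_dispersion(u, kmeans(u, k), k))
    return ref / n_ref - obs

random.seed(0)
data = [random.gauss(-4, 0.5) for _ in range(60)] + [random.gauss(4, 0.5) for _ in range(60)]
gaps = {k: gap(data, k) for k in (1, 2, 3)}
best = max(gaps, key=gaps.get)  # two well-separated clusters, so the gap should peak at k=2
print(best)
```

For simplicity this sketch selects $K$ by the largest gap; the original gap statistic additionally corrects for the simulation standard error when choosing $K$.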
Summary: The paper presents a mixture of PFC model, which combines sufficient dimension reduction with subspace clustering to deal with heterogeneous high-dimensional data. Moreover, a grouped-Lasso-based EM algorithm is designed to solve the problem. Non-asymptotic convergence analysis is provided. Empirical results are also shown. ## update after rebuttal: The authors promise to reorganize the literature review and clarify the significance of this work and some issues in the empirical evaluation. Thus, the reviewer is leaning to a borderline accept. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes Theoretical Claims: I checked a part, not completely; the proofs are too lengthy. Experimental Designs Or Analyses: Yes Supplementary Material: Not Relation To Broader Scientific Literature: Different from (unsupervised) subspace clustering, the paper attempts to perform (sufficient) dimension reduction and subspace clustering for a regression problem, which relates to a broad literature in statistics. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: + The paper attempts to address an interesting problem, which involves subspace clustering, supervised dimension reduction and variable selection. Weaknesses: - The literature review is chaotic and the presentation is unclear. While quite involved and complicated derivations are shown, the novel contribution is weak. The derivation of the EM algorithm is a routine task. - Empirical evaluations are insufficient and weak. Most experiments are conducted on toy datasets. - For the designed algorithm, the initialization of $\gamma_{i,w}$ is critical. Rather than the mentioned ad hoc tricks to initialize $\gamma_{i,w}$, is there a more principled way to provide good initialization? Other Comments Or Suggestions: - It might be more suitable to submit to a statistics conference, e.g., AISTATS, UAI.
Questions For Authors: - The comments on subspace clustering methods are questionable in L034-036 in the right column: state-of-the-art subspace clustering methods can easily handle random errors. Also, in L114-115: "in subspace clustering the latent subspaces cannot overlap..." It is NOT true. As discussed in (Soltanolkotabi & Candes, 2012), subspaces can even be partially overlapping. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your detailed feedback. Below, we address each of your concerns. ## Weaknesses > (literature review, novel contribution, EM algorithm) **Reply**: We acknowledge the current presentation could be streamlined. In the revision, we will reorganize the literature review to focus on clarifying how prior works on subspace clustering lack integration of response information and variable selection. The novelty lies in three key advances: 1. Supervised Subspace Clustering - Integrates response information into subspace clustering, enabling exact overlap of subspaces (unlike classical methods requiring separation angles, Soltanolkotabi & Candés (2012 AoS)). - Achieves 30–40% higher clustering accuracy than unsupervised baselines in high-noise regimes (Table 1). 2. High-Dimensional EM Algorithm: Although our proposed mixPFC algorithm has the familiar structure of the classical EM algorithm, it involves many innovations to adapt to the challenging problem of interest, which distinguishes it from existing EM algorithms. - Our algorithm is built on a new model that seamlessly integrates dimension reduction and subspace clustering, and naturally combines the information from the predictors and the response. Without such a rigorous foundation, it is extremely difficult to derive a reasonable algorithm for our purpose, or to establish the strong theoretical guarantee. We also incorporate variable selection to exclude inactive predictors, a capability absent in classical subspace clustering. - While low-dimensional EM is standard, extending it to high dimensions presents significant challenges. A naive generalization would require inverting the $p\times p$ covariance matrix in both the E-step and the M-step, which is practically impossible in high dimensions without additional structural assumptions.
- Novel E-step: estimate probabilities $\gamma_{iw}$ via a low-dimensional mixture linear regression on projected predictors, reducing covariance inversion to $d\times d$ ($d\ll p$). - Scalable M-step: replace the non-convex or $p\times p$ dimensional optimization for subspace estimation with scalable $p\times q$ ($q\ll p$) dimensional convex optimization. 3. Theoretical Results - The high-dimensional EM theory is challenging (Wang et al. (2024 JMLR)). Unlike classical results focusing on asymptotic convergence to fixed points, we derived a non-asymptotic result that mixPFC converges to the **true** parameter. We provided a detailed discussion in our response to Reviewer LLk6's second question and kindly ask Reviewer HtK4 to refer to that section. > (empirical evaluations) **Reply**: - The datasets and model sizes used in our experiments align with established benchmarks in the SDR and mixture model literature ([LI19] and Wang et al. (2024 JMLR)). Due to page limits, we included extensive numerical analyses (subspace estimation errors, variable selection) in Appendix A. - Our primary goal was to conceptually demonstrate that incorporating response information significantly improves clustering accuracy over unsupervised methods, which is not explored in the literature. > (initialization of $\gamma_{iw}$) **Reply**: - We recognize that initialization is a prevalent issue in EM algorithms, without complete solutions even for much simpler models. However, we provide a practical solution to make our algorithm feasible. - Our strategy (Appendix B.1) follows principled recommendations from [BI03], using short EM runs on predictors reduced via PCA and variable screening to ensure stability in high dimensions. - While existing methods are incompatible with our model’s structure, this approach consistently yields reliable initializations (Table A.15). A formal initialization scheme remains future work.
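For readers less familiar with the E-step responsibilities $\gamma_{iw}$ discussed throughout this rebuttal, here is a minimal, self-contained EM loop for a toy two-component, unit-variance, 1-D Gaussian mixture. It is illustrative only and does not reproduce mixPFC's projected E-step or its group-Lasso M-step:

```python
import math
import random

def em_two_component(z, iters=50):
    """EM for a two-component, unit-variance 1-D Gaussian mixture.

    The E-step computes responsibilities gamma (the analogue of gamma_iw);
    the M-step updates means and mixing proportions by weighted averages.
    """
    mu = [min(z), max(z)]  # crude but deterministic initialization
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior probability of each component for each point
        gamma = []
        for zi in z:
            w = [pi[k] * math.exp(-0.5 * (zi - mu[k]) ** 2) for k in range(2)]
            s = w[0] + w[1]
            gamma.append([w[0] / s, w[1] / s])
        # M-step: responsibility-weighted means and mixing proportions
        for k in range(2):
            nk = sum(g[k] for g in gamma)
            mu[k] = sum(g[k] * zi for g, zi in zip(gamma, z)) / nk
            pi[k] = nk / len(z)
    return mu, pi

random.seed(0)
z = [random.gauss(-2, 1) for _ in range(200)] + [random.gauss(3, 1) for _ in range(200)]
mu, pi = em_two_component(z)
print(sorted(round(m, 2) for m in mu))  # means recovered near -2 and 3
```

mixPFC's actual E-step fits a low-dimensional mixture linear regression on projected predictors instead of evaluating densities in the ambient space, which is what keeps the covariance inversions $d\times d$.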
## Questions > (comments on subspace clustering methods) - **On random errors (L034-036)**: We wrote ''When the observations are subject to **significant** random errors, ...''. Subspace clustering methods struggle under significant random errors (Table 1, M3: SSC error rate >47 under high noise vs. 13.2 under low noise). Our intent was to highlight this limitation, not dismiss robustness to small errors. - **On overlapping subspaces (L114-115)**: We apologize for the imprecise phrasing and appreciate the reviewer's clarification. Classical subspace clustering requires a minimal angle condition for identifiability (Theorem 2.8 of Soltanolkotabi & Candés (2012 AoS)). In contrast, mixPFC allows exact overlap by leveraging response information. We will revise to clarify this distinction. [LI19] Lin et al. (2019 JASA) Sparse sliced inverse regression via Lasso. [BI03] Biernacki et al. (2003 CSDA) Choosing starting values for the EM algorithm for getting the highest likelihood in multivariate Gaussian mixture models.
Energy-Based Preference Model Offers Better Offline Alignment than the Bradley-Terry Preference Model
Accept (poster)
Summary: This paper introduces a novel alternative to DPO for training LLMs, using an energy-based preference model (EBM). The proposed method, Energy Preference Alignment (EPA), seemingly guarantees a unique MLE and consistently outperforms DPO in empirical benchmarks, providing better alignment with human preferences and faster convergence during training. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Please refer to the questions 1&2. Experimental Designs Or Analyses: Please refer to the questions 3-6. Supplementary Material: Yes. Code. Relation To Broader Scientific Literature: Please refer to the questions. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The combination of strong and weak negatives in the EPA loss function shows a novel and effective way to improve the model’s performance, as supported by ablation studies. Weaknesses: Relatively weak experiments. Please refer to the questions. Other Comments Or Suggestions: Please refer to the Questions. Questions For Authors: 1. My primary concern and question lie in Eq. (11) and Eq. (12). The authors introduce cross-pairs of responses between mini-batches to increase the number of negatives (mismatched responses, when index $i \neq j$). Does this imply that the proposed training objective relies on a perfectly clean dataset? In other words, does $r_\theta(x^i, y_w^i)$ always need to be greater than $r_\theta(x^i, y_w^j)$ and $r_\theta(x^i, y_l^j)$ when $i \neq j$? 2. What is the rationale behind computing the reward for irrelevant $x$ and $y$ pairs? Could the authors provide a more solid theoretical justification for introducing negatives in Eq. (11) and Eq. (12)? 3. Could the authors provide an ablation study regarding the batch size? I noticed that the global batch size is set to 64; does the performance improvement over DPO depend on the use of a large batch size? 4.
In the current experiments, additional negatives are introduced only within a mini-batch. Could the authors explore the possibility of randomly sampling more mismatched responses from the training data to increase the number of negatives in the mini-batch, thereby achieving further performance improvements? 5. Furthermore, could the authors provide a performance comparison between EPA and DPO under the same training time? Can Figure 2(b) be interpreted as showing that, given the same computational budget, EPA achieves faster convergence and performance improvement compared to DPO? 6. Additionally, some performance results in the current experiments seem relatively weak. Only the AlpacaEval and MT-Bench results are reported. I would expect to see more benchmark evaluations, such as Arena-Hard. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Concern#1: Does this imply that the proposed training objective relies on a perfectly clean dataset? Thank you for raising this interesting question. No, it is not a requirement to have a perfectly clean dataset. The reason for the weak negative samples is to provide high variance. Even if some $y_w^j (j \neq i)$ is accidentally better than $y_w^i$ in the training dataset, it should not be a problem. However, if all or too many such $y_w^j$ become better than $y_w^i$, we might end up with lower variance as a result, which is not desired according to the theorems (see our response to concern#2 for more information about weak negatives and the variance of negatives). > Concern#2: What is the rationale behind computing the reward for irrelevant x and y pairs? Could the authors provide a more solid theoretical justification for introducing negatives in Eq. (11) and Eq. (12)? The rationale, as mentioned in the response to Concern#1, is that the irrelevant x and y pairs can provide high variance. The premise of Theorem 3.2 and Theorem 3.3 only states that Var[Y|Z] has to be positive. However, from the proof of Theorem 3.2, we can also see that the value of Var[Y|Z] is positively associated with the 2nd-order derivative of the energy discrepancy. Therefore, a higher variance will lead to a more convex Energy Discrepancy functional/loss. We also discuss their intuitive usefulness as a regularizer in section 4.3. The negative gradients in DPO might push the policy parameters towards unexpected directions because they are likely not pointing to the RLHF minimizer. Therefore, the additional negative gradients introduced by the weak negatives can alleviate this issue. > Concern#3: does the performance improvement over DPO depend on the use of a large batch size? No, because we use 64 as the global batch size for all the models and methods (including DPO) reported in the paper.
As for why 64 in particular, this is because it is the highest possible number we can set to prevent our most memory-consuming setting (i.e., EPA's 1:3:10 setting in Table 3) from OOM. > Concern#4: sampling more mismatched responses from the training data instead of the minibatch. We believe an irrelevant response from a mini-batch is not meaningfully different from an irrelevant response elsewhere, because the mini-batches are randomly sampled from the dataset in the first place. Also, as we mentioned in the response to Concern#2, the value of weak negatives is from the high variance as opposed to where they are sampled from. > Concern#5: EPA and DPO under the same training time? The x axis in Figure 2.(b) is proportional to the number of prompts (i.e., x in the datasets). However, since EPA (1:1:2) has exactly twice the amount of the responses (i.e., y in the datasets) used in DPO, we think scaling the dynamics curve of EPA rightward by a factor of 2 is close to what you ask for. In such a graph, we do not think EPA achieves faster convergence compared to DPO. However, we also do not intend to make such a claim in the paper. Instead, we do agree the computation cost of EPA is its disadvantage compared to DPO. > Concern#6: more benchmark evaluations, such as Arena-Hard Thank you for the suggestion. We quickly ran Arena-Hard for EPA against the two most important baselines: DPO and DPO-PL, using the same checkpoints we use in Table 2. The results are as follows, and we can see that EPA is still better. We will add them to the final version of the paper. However, we believe the widely used MT-Bench and Alpaca-Eval 2.0 are enough for most of the experiments in the paper.
| Training Data | Method | Arena-Hard (Win Rate, %) | |---------------|--------|--------------------------| | UF-binarized | DPO | 12.0 | | UF-binarized | EPA | 16.3 | | UF-all | DPO-PL | 13.0 | | UF-all | EPA | 16.9 | --- Rebuttal Comment 1.1: Comment: Thanks for the authors' feedback; I don't have other concerns.
Summary: This paper identifies a major limitation in Direct Preference Optimization (DPO): the underlying Bradley-Terry Model (BTM) does not always guarantee a unique Maximum Likelihood Estimator (MLE), leading to suboptimal alignment in Reinforcement Learning with Human Feedback (RLHF). To address this, the authors propose an Energy-Based Model (EBM), termed the Infinite Preference Model (IPM), which inherently ensures a unique MLE. They introduce Energy Preference Alignment (EPA), a contrastive loss function that better enforces the required slope-1 linearity between learned and true rewards. Claims And Evidence: The paper’s claims are well-supported by a mix of theoretical proofs, empirical benchmarks, and ablation studies. The primary claims regarding the limitations of BTM, the uniqueness of EBM’s MLE, and EPA’s improved alignment are convincingly demonstrated. Minor Weaknesses: 1. Higher computational cost (acknowledged but not mitigated). 2. Tested mainly on Mistral-7B (broader validation needed). Methods And Evaluation Criteria: The paper employs a well-justified Energy-Based Preference Model (EBM) to address Direct Preference Optimization (DPO) limitations, with strong theoretical support (Theorem 3.1). The Energy Preference Alignment (EPA) loss effectively enforces slope-1 linearity, outperforming DPO, IPO, KTO, and NCA. Theoretical Claims: The paper's theoretical claims are largely well-justified with rigorous mathematical proofs. Theorem 3.1, which guarantees the uniqueness of the Maximum Likelihood Estimator (MLE) for the Infinite Preference Model (IPM), is correctly derived and logically sound. The equivalence between slope-1 linearity and the minimizer of the RLHF loss (Theorems A.3 and A.4) follows standard derivations in reinforcement learning theory and is correctly structured.
The proof of Proposition B.5, which demonstrates the non-uniqueness of the MLE in the Bradley-Terry Model (BTM) under certain conditions, is valid and aligns with prior work on ranking models. Experimental Designs Or Analyses: The experimental design is generally sound, with a clear methodology comparing the proposed Energy Preference Alignment (EPA) loss against strong baselines, including Direct Preference Optimization (DPO), IPO, KTO, and NCA. The use of intrinsic (Pearson correlation, slope-1 regression error) and extrinsic (MT-Bench, Alpaca-Eval 2.0) evaluation metrics provides a well-rounded assessment of alignment performance. Supplementary Material: I am able to inspect their code in the Supplementary Material but unable to run it. Their code seems to align with the paper. However, I wish they could provide the checkpoints from their training for the community. Relation To Broader Scientific Literature: The paper builds on the foundation of preference optimization for RLHF, particularly Direct Preference Optimization (DPO), which was introduced as a reward-model-free approach to align preferences. It identifies a major limitation of DPO stemming from the Bradley-Terry Model’s (BTM) non-uniqueness of the Maximum Likelihood Estimator (MLE), a problem well-documented in the ranking literature. By introducing the Infinite Preference Model (IPM), an energy-based alternative, the paper aligns with prior work in energy-based modeling and constructs the Energy Preference Alignment (EPA) loss. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. Clarity of Motivation: The paper clearly explains the limitations of BTM and how the proposed EBM circumvents them, making a compelling argument for its adoption. 2. Robust Evaluation: The authors conduct extensive experiments, comparing against strong baselines, and include multiple evaluation perspectives.
Weaknesses: 1. Lack of experiments on diverse model architectures beyond Mistral-7B limits the generalizability of its findings. 2. Discussion on computational trade-offs is relatively brief. Other Comments Or Suggestions: None Questions For Authors: Could you clarify how robust that uniqueness advantage remains if your negative sampling is incomplete or unrepresentative (e.g., limited coverage in real-world data)? Specifically, do suboptimal negative samples risk undermining uniqueness (leading to the same drawbacks that can occur under BTM) and, if so, how severe is the impact? Code Of Conduct: Affirmed. Overall Recommendation: 4
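As a concrete illustration of the intrinsic metrics this review refers to (Pearson correlation and slope-1 regression error between learned and true rewards), here is a minimal sketch; the reward values are hypothetical, and the paper's exact metric definitions may differ slightly:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy)

def slope1_error(r_true, r_learned):
    """RMS residual after fitting r_learned = r_true + b with the slope fixed at 1."""
    b = sum(l - t for t, l in zip(r_true, r_learned)) / len(r_true)
    return math.sqrt(sum((l - t - b) ** 2 for t, l in zip(r_true, r_learned)) / len(r_true))

r_true = [0.0, 1.0, 2.0, 3.0]
r_good = [5.0, 6.1, 6.9, 8.0]  # roughly slope-1 up to a constant shift
r_bad = [0.0, 0.5, 1.0, 1.5]   # perfectly correlated, but slope 0.5

print(round(pearson(r_true, r_good), 3), round(slope1_error(r_true, r_good), 3))
print(round(pearson(r_true, r_bad), 3), round(slope1_error(r_true, r_bad), 3))
```

The second pair illustrates why correlation alone is not enough: `r_bad` has perfect Pearson correlation yet a large slope-1 error, which is exactly the failure mode the slope-1 metric is meant to detect.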
Rebuttal 1: Rebuttal: > Concern #1: generalizability to more datasets and more base models (Please refer to our response to reviewer xiBU for our additional experiments using a different dataset and a different model) > Concern #2: limited discussion on computational trade-offs Thank you for pointing out the limitation of this paper. As the main focus of this paper is on the theoretical benefit of using our EBM instead of BTM to model human preference, we have not included many possible further analyses regarding the computational cost of EPA (which is one among many other possible ways to fit the EBM). However, there is an explicit question raised by reviewer RQg9 (Concern#5) about a computational comparison between DPO and EPA. We hope our response there can provide additional insights. > Concern #3: "Could you clarify how robust that uniqueness advantage remains if your negative sampling is incomplete or unrepresentative (e.g., limited coverage in real-world data)? Specifically, do suboptimal negative samples risk undermining uniqueness (leading to the same drawbacks that can occur under BTM) and, if so, how severe is the impact?" Thank you for raising these very interesting questions that we think could motivate many future research directions. From a general point of view, we agree that suboptimal negative samples do risk undermining the uniqueness. Although there are no designated experiments in the paper to show this, we do think there are some related observations in the paper. * Observation 1. In Figure 2.(b), EPA also degenerates (slightly) after some epochs, which means the dataset might still not be enough. * Observation 2. In Table 5, as we stated in the paper, the fact that there are two tricks (adding a margin and using a dynamic weight on the loss) that can further boost the performance of EPA means it is empirically not perfect. * Observation 3. In Table 1, the slope-1 linear regression error is non-zero.
However, in all these cases, we see EPA is still better than DPO. Therefore, although EPA is not robust enough, it is safe to say that it is more robust than DPO. In other words, the error caused by using EPA to estimate the unique MLE of the EBM is milder than the intrinsic flaw of BTM of not being guaranteed to have a unique MLE in the first place. --- Rebuttal Comment 1.1: Comment: I confirm that I have read the author response to my review and will update my review in light of this response as necessary.
Summary: The authors highlight a fundamental issue with Bradley-Terry DPO, namely the lack of uniqueness in its solutions. To address this, they propose a novel approach based on energy-based models, termed Energy Preference Alignment. They demonstrate that their model overcomes the non-uniqueness problem inherent in Bradley-Terry DPO. Additionally, they approximate the model’s objective to develop a practical algorithm, which they empirically validate, showing its effectiveness compared to other DPO-based methods. Claims And Evidence: - The theoretical results appear to be well-supported. - The connection between the theoretical and practical results could be strengthened. For instance, in Theorem 3.3, the result holds only asymptotically as both M and N approach infinity. Providing insights into the rate of convergence would make the theoretical findings more informative. - The practical evaluation seems limited, as the proposed approach was tested on only one dataset. Expanding the experiments to multiple datasets would help better assess its generalizability. Methods And Evaluation Criteria: - The theoretical results appear to be valid. - The current practical evaluation is insufficient and could benefit from further validation. Theoretical Claims: I thoroughly checked the proof of B.5 but only skimmed the rest. Experimental Designs Or Analyses: See above. Supplementary Material: Only the appendix. Relation To Broader Scientific Literature: - The authors discuss various related works, particularly different objectives for DPO. -The final algorithm can be interpreted as DPO-PL with a technique to expand the dataset for K-wise comparisons. Specifically, given a dataset with k responses, the approach involves adding unrelated k' responses to them from other queries and then solving the problem. This formulation effectively leads to the final practical objective derived from DPO-PL. Essential References Not Discussed: Not to my knowledge. 
Other Strengths And Weaknesses: See above! Other Comments Or Suggestions: In **A.5**, I believe the argument relies on the condition that **\(\pi_{\text{ref}}(x, y^*)\) must be non-zero** for the reasoning to hold. Questions For Authors: - Does DPO-PL suffer from the same issue as DPO, particularly in terms of the non-uniqueness of solutions? - Could you clarify the relationship between the proposed approach and DPO-PL? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Concern #1 (also raised by reviewer ZCWk): generalizability to more datasets and more base models As this is a point mentioned by multiple reviewers, we managed to run quick experiments on another relatively small dataset and on another base model. Although the experiments can never be exhaustive, we believe the results (as follows) should be additional evidence for the generality of our findings. We will include them in the final version of the appendix. * dataset: - source: https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs - reference: Álvaro Bartolomé Del Canto et al., 2024; https://github.com/argilla-io/distilabel - description: a cleansed version of the widely used dataset (Intel/orca_dpo_pairs) * results: (MT-Bench scores for epoch#1/#2/#3) | base model | EPA | DPO | |------------|--------------------|----------------| | Llama3-8B | 6.84/6.98/**7.04** | 6.94/6.83/6.71 | > Concern #2: A.5 requires the assumption that $\pi_{\text{ref}}(x, y)$ is non-zero Thank you for the suggestion. Yes, the assumption is required, which is true for both DPO (as explicitly stated in (Rafailov et al., 2023)) and EPA. Although this is a mild assumption that does not affect the main theories of our paper, we will add this to the final version of the appendix for comprehensiveness. > Concern #3: Does DPO-PL suffer from the same issue as DPO, particularly in terms of the non-uniqueness of solutions? Yes, we think so. The proof will be analogous to that of DPO. It is not difficult to show that Proposition B.5 is also true for the Plackett-Luce Model by only checking that the reconstructed reward $\tilde{r}_{\hat{\theta}}$ also leads to the same expected likelihood of the data. We will add the exact proof to the final version of the appendix. Besides the proof per se, there is an intuitive understanding of why this is the case.
As we have mentioned in the first paragraph of subsection 3.1, the Plackett-Luce Model also just models human preference among a **finite** number of candidates. This is problematic when the actual space of candidates is **infinite**. | name | probabilistic model | data | loss | |------------|--------------------------|-----------------------------------------------------------------------|------| | EPA | IPM | $(y_w, y_{l_1}, y_{l_2})$ and $(y_w > y_{l_1})$ and $(y_w > y_{l_2})$ | $-\log\frac{\exp(r_{\theta}(y_w))}{\exp(r_{\theta}(y_w)) + \exp(r_{\theta}(y_{l_1})) + \exp(r_{\theta}(y_{l_2})) + \exp(r_{\theta}(y_{wk_1})) + \exp(r_{\theta}(y_{wk_2}))}$ | | DPO-PL | Plackett-Luce Model | $(y_w, y_{l_1}, y_{l_2})$ and $(y_w > y_{l_1} > y_{l_2})$ | $-\log\frac{\exp(r_{\theta}(y_w))}{\exp(r_{\theta}(y_w)) + \exp(r_{\theta}(y_{l_1})) + \exp(r_{\theta}(y_{l_2}))} - \log\frac{\exp(r_{\theta}(y_{l_1}))}{\exp(r_{\theta}(y_{l_1})) + \exp(r_{\theta}(y_{l_2}))}$ | | InfoNCA | EBM for the ideal policy | $(y_w, y_{l_1}, y_{l_2})$ and $(y_w > y_{l_1})$ and $(y_w > y_{l_2})$ | $-\log\frac{\exp(r_{\theta}(y_w))}{\exp(r_{\theta}(y_w)) + \exp(r_{\theta}(y_{l_1})) + \exp(r_{\theta}(y_{l_2}))}$ |
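The losses in the table above differ only in which responses enter the softmax denominator. A minimal numeric sketch (with hypothetical scalar reward values; in the actual methods each reward is a function of the policy and reference model) makes that concrete:

```python
import math

def softmax_nll(pos, negs):
    """-log softmax probability of the positive reward against a set of negatives."""
    z = math.exp(pos) + sum(math.exp(r) for r in negs)
    return -math.log(math.exp(pos) / z)

# Hypothetical per-response rewards r_theta(x, y) for one prompt.
r_w, r_l1, r_l2 = 2.0, 0.5, -0.3   # winner and two in-prompt losers
r_wk1, r_wk2 = -1.0, -1.5          # weak negatives taken from other prompts

# InfoNCA-style term: contrast the winner against in-prompt negatives only.
loss_infonca = softmax_nll(r_w, [r_l1, r_l2])
# EPA: additionally contrast against the weak (cross-prompt) negatives.
loss_epa = softmax_nll(r_w, [r_l1, r_l2, r_wk1, r_wk2])
# DPO-PL (Plackett-Luce): a sum of successive softmax terms over the full ranking.
loss_dpo_pl = softmax_nll(r_w, [r_l1, r_l2]) + softmax_nll(r_l1, [r_l2])

print(round(loss_infonca, 3), round(loss_epa, 3), round(loss_dpo_pl, 3))
```

Enlarging the denominator with weak negatives (EPA) can only increase the per-term loss, so the extra terms act as the additional negative gradients discussed in the rebuttals.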
Summary: The Bradley-Terry model (BTM) has been the default modelling assumption relating rewards to preferences, used in RLHF and DPO. This paper challenges the BTM, claiming that the BTM does not guarantee unique minimizers in DPO training. To solve this issue, they propose the Energy-based Preference Model (EBM), which models preferences in a Boltzmann distribution. The paper claims that the EBM has unique minimizers towards the global optimum in the RLHF objective. While EBM is not tractable, the paper proposes Energy Preference Alignment (EPA) to approximate the EBM objective. Specifically, they propose to use two types of negative responses: negative responses from the same prompt and negative responses from different prompts. They conducted thorough experiments to compare EPA and other baselines like DPO. Their experiments show superior performances of EPA in benchmarks like MT-bench and AlpacaEval. They also included ablation studies to investigate the impact of "weak negative examples" and "other tricks". Claims And Evidence: * **Non-unique minimizers of DPO training**. As argued in this paper, one main problem of DPO is that its minimizers are not unique. Reading their proposition B.5, it is based on the assumption that some $y^*$ were never sampled within the preference dataset. Then one can build a translated reward model by adding an arbitrary $A(x)$ to the target RM for any $y \neq y^*$. I don't understand why this is a problem. If $y^*$ is never sampled in the preference dataset, then it is an issue with the dataset instead of with the DPO algorithm. * Can the authors explain why the EBM still has unique minimizers when some $y^*$ never appear in their training data? What about the approximation, EPA? Does it guarantee unique minimizers? * **Difference between EPA and InfoNCA / RPO** The proposed EPA method is very similar to InfoNCA [1] and Multi-Response RPO [2], except that the weak responses come from other prompts.
Can the author further clarify the differences? Intuitively, I am not convinced that adding responses from irrelevant prompts will add any benefit to model training. [1] Noise Contrastive Alignment of Language Models with Explicit Rewards [2] Reward-aware Preference Optimization: A Unified Mathematical Framework for Model Alignment Methods And Evaluation Criteria: For the proposed methods, please see "Claims and Evidence". The evaluation criteria are standard for comparing preference optimization techniques. Theoretical Claims: I checked the proof of Proposition B.5, which is correct. However, as discussed in "Claims and Evidence", I don't agree with the implications of that proposition. Experimental Designs Or Analyses: In general, this paper presents comprehensive studies to understand the impact of the proposed approach. * Figure 2 is pretty good. It shows that EPA achieves a better reward-KL tradeoff compared to DPO. * Table 3 studies the impact of the number of weak negative samples. Some questions: * For Table 2. They fixed the same $\beta = 0.01$ for all methods. This feels problematic, because different methods might have different optimal $\beta$. How $\beta = 0.01$ is selected is not mentioned either. * In Table 4, how are the weak negatives added to DPO or DPO-PL training? Can you also include InfoNCA for a comparison? Supplementary Material: I checked the proof of Proposition B.5, which is correct. However, as discussed in "Claims and Evidence", I don't agree with the implications of that proposition. Relation To Broader Scientific Literature: This paper aims to push forward preference optimization algorithms, which could be useful in many problems like LLM post-training and off-policy RL. Essential References Not Discussed: None Other Strengths And Weaknesses: This paper is clearly written. I especially like how they conduct multiple ablation studies to investigate the performance of each element in their algorithm.
One weakness is that the proposed approach incurs larger computational and memory costs due to the additional weak negative samples. Can the authors present specific experimental data points or analysis regarding them? Other Comments Or Suggestions: None Questions For Authors: As mentioned before, my main concern is that I am not convinced that adding responses from irrelevant prompts will add any benefit to model training. Code Of Conduct: Affirmed. Overall Recommendation: 3
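To make the translation construction in the first bullet of "Claims and Evidence" explicit (this is my reading of Proposition B.5, not text quoted from the paper): under the BTM, preferences depend only on reward differences,

$$p(y_1 \succ y_2 \mid x) = \sigma\big(r(x, y_1) - r(x, y_2)\big).$$

If some $y^*$ is never sampled, the translated reward $r'(x, y) = r(x, y) + A(x)\,\mathbf{1}[y \neq y^*]$ satisfies $r'(x, y_1) - r'(x, y_2) = r(x, y_1) - r(x, y_2)$ for every pair actually appearing in the dataset, so $r'$ attains exactly the same likelihood as $r$ while not being a per-prompt constant shift of it; hence the MLE is not unique.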
Rebuttal 1: Rebuttal: > Concern #1: The implication of Proposition B.5: an issue of the dataset or an issue of the DPO algorithm? We believe a reasonable definition of a good algorithm is one that works for datasets that are "easy" to collect. If there are too many constraints on collecting the datasets, it is indeed an issue of the algorithm. B.5 gives a necessary constraint which is itself not easy to meet, i.e., we can hardly sample all $y$s from an infinite space given a finite data point budget. What makes the datasets "even harder" to collect is that the B.5 constraint is not sufficient. This means that even if we somehow were able to sample all $y$s in an infinite space, we still would not guarantee that such a dataset allows the DPO algorithm to work. As mentioned in the paragraphs before B.5, one can refer to (Ford, 1957; Bong & Rinaldo, 2022) for more situations in which the BTM has non-unique MLEs. > Concern #2: "why does the EBM still have unique minimizers when some $y^*$ never appears in the training data?" & "what about EPA?" As shown in Theorem 3.1 (also B.4), the EBM's maximum likelihood estimation having a unique minimizer does not depend on any requirement on the structure of $p(y|x)$. Even when some $p(y^*|x)=0$, the proof we provide for Theorem 3.1 still holds. For EPA, we acknowledge that there will be an error in the approximation (illustrated as the fuzzy dashed curve in Figure 1). This is why we need to check empirically whether or not the error introduced in EPA is smaller than that of DPO. For example, Table 1 explicitly shows that the slope-1 linear regression error is smaller for EPA, although the error is non-zero. > Concern #3: Difference between EPA and RPO, InfoNCA, etc. We noticed that the RPO paper was posted to arxiv.org after the submission of our paper, so we were not aware of this concurrent work.
Although EPA itself may or may not be a special case of RPO, we argue that our paper is more about the underlying energy-based preference model as opposed to EPA (which is only one of many possible ways to fit the EBM). (See our response to xiBU for EPA vs. other losses.) > Concern #4: Why set $\beta=0.01$ for Table 2? As mentioned in the paper's subsection 5.1.4, the philosophy of setting $\beta=0.01$ in Table 2 is to treat it as a given part of the RLHF objective (Eq. 1). As for the reason why we choose 0.01 in particular, it is two-fold. * Firstly, we did tune $\beta$ over multiple values ranging from 0.01 to 0.5 (see Figure 2(a)). $\beta$ close to 0.01 (the low-KL region) makes both EPA and DPO achieve higher rewards. * Secondly, $\beta=0.01$ is also the default setting for NCA and InfoNCA (Chen et al., 2024a) and a recommended setting for Mistral-based DPO (Tunstall et al., 2023) and KTO (Ethayarajh et al., 2024). For IPO's best $\beta$, we did preliminary experiments with $\beta=0.1$ and $\beta=0.01$, and found $\beta=0.01$ works better. Therefore, we believe setting $\beta=0.01$ is a reasonable choice. We will add these results (MT-Bench scores) in the final version of the appendix:

| $\beta$ | IPO (epoch #1/#2/#3) |
|---------|----------------------|
| 0.1 | 6.73/6.88/6.87 |
| 0.01 | 7.20/7.31/7.23 |

> Concern #5: "how are the weak negatives added to DPO or DPO-PL training?" & "what about InfoNCA?" In the caption of Table 4, we have briefly stated how the weak negatives are added. The following are some examples for "+UF-WEAKx1". Suppose the original pairwise dataset has 3 data points: $(y_w^1, y_l^1), (y_w^2, y_l^2), (y_w^3, y_l^3)$. * For DPO, weak negatives are added "as additional $y_l$ to be paired with the original $y_w$". This means we **add** something like these to the original dataset: $(y_w^1, y_l^2), (y_w^2, y_w^3), (y_w^3, y_l^1)$. * For DPO-PL, "as additional negatives ranked after the $y_w$ and $y_l$".
This means we **replace** the original dataset with something like $\{(y_w^1, y_l^1, y_w^2), (y_w^2, y_l^2, y_l^1), (y_w^3, y_l^3, y_l^2)\}$. The K-wise ranking information required by DPO-PL can be implicitly inferred here. For example, for $(y_w^1, y_l^1, y_w^2)$, the ranking is $y_w^1 > y_l^1 > y_w^2$. We do not consider InfoNCA because the purpose of Table 4 is to study whether EPA's main advantage comes from introducing more computation alone or from being the product of a better preference model. However, InfoNCA is not based on a "preference model". Therefore, we do not think it is relevant for the purpose here. > Concern #6: more data points regarding the computational cost. We provide our actual training time to support our linear-time-complexity argument in Sec. 5.2.3:

| #resp | setting | training time |
|-------|---------|---------------|
| 8 | 1:3:4 | 44h41min |
| 6 | 1:3:2 | 33h51min |
| 4 | 1:1:2 | 23h0min |
| 2 | 1:1:0 | 12h53min |

> Concern #7: the benefit of weak negatives (please refer to our response to reviewer RQg9's concern #2)
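The dataset construction described in the concern #5 response can be sketched as follows. This is a minimal illustration with hypothetical function names (not the authors' pipeline); weak negatives are drawn as responses belonging to a different data point:

```python
import random

def add_weak_negatives_dpo(pairs, num_weak=1, seed=0):
    """For each (y_w, y_l) pair, *add* extra pairs (y_w, y_weak),
    where y_weak is a response taken from a different data point."""
    rng = random.Random(seed)
    augmented = list(pairs)  # keep the original pairs
    for i, (y_w, _) in enumerate(pairs):
        for _ in range(num_weak):
            j = rng.choice([k for k in range(len(pairs)) if k != i])
            y_weak = rng.choice(pairs[j])  # either response of data point j
            augmented.append((y_w, y_weak))
    return augmented

def add_weak_negatives_pl(pairs, seed=0):
    """For DPO-PL, *replace* each pair by a ranked triple
    (y_w > y_l > y_weak) instead of adding new pairs."""
    rng = random.Random(seed)
    ranked = []
    for i, (y_w, y_l) in enumerate(pairs):
        j = rng.choice([k for k in range(len(pairs)) if k != i])
        ranked.append((y_w, y_l, rng.choice(pairs[j])))
    return ranked

pairs = [("yw1", "yl1"), ("yw2", "yl2"), ("yw3", "yl3")]
dpo_data = add_weak_negatives_dpo(pairs)  # 3 originals + 3 weak pairs
pl_data = add_weak_negatives_pl(pairs)    # 3 ranked triples
```

Note that the DPO variant grows the dataset while the DPO-PL variant keeps its size fixed, matching the **add** vs. **replace** distinction above.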
MENTOR: Mixture-of-Experts Network with Task-Oriented Perturbation for Visual Reinforcement Learning
Accept (poster)
Summary: This paper proposes to improve the sample efficiency of RL via a mixture-of-experts (MoE) network design and dormant-ratio-based parameter perturbation. The MoE design aims to decrease gradient conflicts, and perturbation using the top-k agents' parameters helps to accelerate learning. Experiments in simulation and the real world demonstrate the efficiency and effectiveness of the proposed method. Claims And Evidence: The main claim of this paper is that the introduced method improves the sample efficiency in RL, which can be validated in two aspects. Firstly, the expert usage intensity distribution shows that different experts are in charge of different skills, which alleviates gradient conflicts. Besides, the parameter perturbation leads to faster learning in experiments and ablations. Therefore, the main claim is supported. Methods And Evaluation Criteria: Yes. The MoE design and perturbation aim to address sample inefficiency and reward sparsity. The evaluation uses episode reward or success rate as the metric. Theoretical Claims: Do not apply. Experimental Designs Or Analyses: While the overall evaluation is valid with diverse tasks and environments, I have two concerns. Firstly, it seems that multi-tasking is the advantage of the proposed method. However, most of the evaluation is on single tasks. Why not evaluate on multi-task learning (e.g., learning on all DMC tasks with one model)? Besides, the cut-off frames of different tasks vary a lot (e.g., 2m and 30m). While the complexity of different tasks varies, I believe it's better to show the results after enough frames, especially when considering that the baselines are not converging in some tasks as shown in the figures. Supplementary Material: I checked the ablations. Relation To Broader Scientific Literature: It's related to RL in robot manipulation and locomotion. Specifically, robot RL when the reward is sparse and the exploration space is huge. Essential References Not Discussed: No.
Other Strengths And Weaknesses: ### Strengths 1. The motivation of improving sample efficiency in robot RL makes sense. 2. The design of MoE to distribute the learning burden and decrease gradient conflicts makes sense. The parameter perturbation from top agents empirically accelerates the learning process. 3. The writing is clear and easy to follow. ### Weaknesses 1. While the designs of the two main contributions make sense separately, I feel that they are contradicting each other. The assumption of MoE is that different agents will master different skills to avoid gradient conflicts, thus improving data efficiency. However, the parameter perturbation uses the mean value of the parameters of the top k agents. This design implies two messages. First, each agent can independently perform the task, which is not what the usage intensity distribution in a single task shows. Furthermore, it means that the agents are somehow similar to each other, which contradicts the benefit of using MoE. I think the two designs are based on contradicting assumptions. 2. I am also concerned about the generalization of the MoE model design. The number of experts could be highly dependent on the number and complexity of tasks, which limits the application of this approach to more diverse tasks. Other Comments Or Suggestions: 1. Evaluation on more tasks and combinations of tasks (as said in the previous section) could further prove the effectiveness of the method. 2. Please explain more on the two designs as in Weaknesses. Questions For Authors: Please see the Weaknesses and Evaluation designs. I am willing to raise my rating if the concerns are resolved. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1: Evaluation on Multi-task Learning: We appreciate the reviewer’s suggestion regarding multi-task evaluation. We have conducted additional experiments where our method is trained jointly on Meta-World tasks using a single policy. The performance is compared against an MLP baseline under the same multi-task setup. Experimental details and results are provided in [rebuttal Section A](https://mentor-vrl.github.io/). > Q2: Cut-off Frames of Different Tasks: Thank you for your comment. Our frame settings follow prior works such as DrQ-v2 [14], TACO [15], and DrM [16], and are chosen to reflect task difficulty while maintaining consistency in comparison. We agree that cut-off frames should account for whether baselines have converged. As noted, our main comparisons are against DrM [16], which consistently converges by the reported cut-off in most tasks. > Q3: “While the design of two main contributions makes sense separately, I feel that they are contradicting each other.”: Thanks for pointing this out. We will explain why these two main contributions do not contradict each other. **For MoE:** we would like to first clarify the distinct concepts of *agent* and *expert*. In our context, *agent* refers to the policy agent, whose backbone is an MoE, and the MoE consists of several experts. So the assumption of MoE in your original comment — “different agents will master different skills to avoid gradient conflicts” — should be rephrased as “different experts will master different skills to avoid gradient conflicts”. **For the Task-oriented Perturbation Mechanism:** During RL training, we maintain a fixed-size set $S_{\text{top}} = \{(\theta, R)\}$, which stores the weights $\theta$ and corresponding episode rewards $R$ of the top-performing agents seen so far. At each episode $t$, if the current agent’s reward $R_t$ exceeds the lowest reward in $S_{\text{top}}$, we replace the corresponding tuple with $(\theta_t, R_t)$.
Suppose we now have $N$ historical top-performing agent weights (each with $M$ experts) in $S_{\text{top}}$ and we want to execute task-oriented perturbation. The target distribution for perturbing $\mathrm{expert}_{i}$ is $\mathcal{N}(\mu_i, \sigma_i)$, where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of $\{\mathrm{expert}_i^{\theta_k} \mid \theta_k \in S_{\text{top}}\}$ over these $|S_{\text{top}}|$ agents, and they do not depend on any other expert. So the perturbation process will not make experts similar but will further diversify them from each other. We hope this is sufficient to clarify that these two designs are not contradictory. More details can be found in the original submission **Section 3.2**, and feel free to ask if you have more questions! > Q4: Generalization and Robustness of MoE Design: We understand the reviewer’s concern regarding the robustness of the MoE design to hyperparameter choices. To this end, we conducted experiments on the Hammer (Sparse) task, as shown in [rebuttal Section B](https://mentor-vrl.github.io/) (due to time limitations, we only report the ablation study on this task), varying the number of experts and top_k. Results show that while the optimal setting is 8 experts, performance remains consistent across 4, 8, and 32 experts as long as top_k = 4. This suggests the model is not overly sensitive to the number of experts. ---- [14] Yarats, D., et al. Mastering visual continuous control: Improved data-augmented reinforcement learning. arXiv preprint arXiv:2107.09645. [15] Zheng, R., Wang, X., Sun, Y., Ma, S., Zhao, J., Xu, H., ... & Huang, F. (2023). TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning. Advances in Neural Information Processing Systems, 36, 48203-48225. [16] Xu, G., et al. Drm: Mastering visual reinforcement learning through dormant ratio minimization. arXiv preprint arXiv:2310.19668. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed rebuttal.
It addresses most of my concerns. Now the method makes more sense to me. I have raised my score to 3.
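The $S_{\text{top}}$ maintenance and Gaussian perturbation described in the rebuttal thread above can be sketched as follows. This is a minimal NumPy illustration with hypothetical names, operating on flat weight vectors rather than per-expert parameters for brevity; it is not the authors' implementation:

```python
import numpy as np

class TaskOrientedPerturbation:
    """Keep the top-performing agent weight vectors seen so far and
    perturb current weights toward a sample from a Gaussian fit over
    that set: theta <- alpha * theta + (1 - alpha) * phi."""

    def __init__(self, capacity=5):
        self.capacity = capacity
        self.top = []  # list of (episode_reward, weight_vector)

    def update(self, weights, reward):
        # admit the agent if the set is not full or it beats the worst
        if len(self.top) < self.capacity:
            self.top.append((reward, weights.copy()))
            return
        worst = min(range(len(self.top)), key=lambda i: self.top[i][0])
        if reward > self.top[worst][0]:
            self.top[worst] = (reward, weights.copy())

    def perturb(self, weights, alpha=0.5, rng=None):
        # sample a candidate phi ~ N(mu, sigma) fit over the stored weights
        if rng is None:
            rng = np.random.default_rng(0)
        stacked = np.stack([w for _, w in self.top])
        mu, sigma = stacked.mean(axis=0), stacked.std(axis=0)
        phi = rng.normal(mu, sigma)
        return alpha * weights + (1.0 - alpha) * phi
```

The works cited in the thread tie the mixing factor to the dormant ratio; here $\alpha$ is a fixed constant purely for simplicity.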
Summary: This paper aims to improve the performance of reinforcement learning agents in robotic tasks. Addressing the issues of gradient conflicts in standard MLPs for robotic tasks and the tendency of visual RL agents to get stuck in local minima, two key improvements are proposed. First, the Mixture-of-Experts (MoE) architecture is used to replace the MLP as the policy backbone, reducing gradient conflicts through a dynamic routing mechanism. Second, a task-oriented perturbation mechanism is designed to sample perturbation candidates from a heuristically updated distribution based on past high-performing agents. Experimental results show that the method based on MoE and the new perturbation mechanism outperforms baseline models on three simulation benchmarks and three challenging real-world robotic manipulation tasks. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: Yes. - Extensive experiments were conducted across multiple different tasks and environments, including three simulation benchmarks and three challenging real-world robotic manipulation tasks. Especially in the case of real-world robotics, it encompasses rich aspects such as multi-task learning, multi-stage deformable object manipulation, and dynamic skill acquisition. - Although ablation experiments were conducted to verify the effectiveness of each component, for some complex components (such as the dynamic routing mechanism in the MoE architecture), the impact of internal parameter changes on the overall performance may not have been further analyzed in depth. For example, the performance differences of different values of $k$ (the number of selected experts) in different tasks were not discussed in detail. Additionally, there were no relevant ablation experiments regarding the perturbation factor. What are the bases for determining $\alpha_{min}$ and $\alpha_{max}$?
- Moreover, I am also curious about whether the MoE architecture and the task-oriented perturbation mechanism increase the computational complexity of the algorithm. Supplementary Material: Yes, I reviewed the supplementary material. Specifically, I reviewed the parts about the Ablation Study and the robustness against disturbances of the agent trained by MENTOR. Relation To Broader Scientific Literature: - Previous works (Yu et al., 2020a; Liu et al., 2021) have identified the problem of gradient conflicts in robotic tasks when using shared-parameter architectures like MLPs. In complex robotic scenarios where an agent is assigned multiple tasks or sub-goals, the gradients for optimizing neural parameters across different task stages or between tasks can conflict, hindering the agent's learning ability. - Previous works (Xu et al., 2023; Ji et al., 2024) have explored using the dormant ratio to determine the perturbation factor $\alpha$, which improved exploration efficiency. However, these works mainly focused on the perturbation factor and did not thoroughly examine the selection of perturbation candidates. Essential References Not Discussed: Both MENTOR and [1] revolve around the theme of mixture-of-experts architectures in reinforcement learning. [1] Ren, Jie, et al. "Probabilistic mixture-of-experts for efficient deep reinforcement learning." arXiv preprint arXiv:2104.09122 (2021). Other Strengths And Weaknesses: ### Strengths - **Architectural innovation**: Introduces the MoE architecture into model-free visual RL, replacing the MLP as the agent backbone. This design solves the gradient conflict problem in robotic tasks via dynamic routing, offering new ways to boost the agent's learning ability in complex environments. - **Perturbation mechanism innovation**: Proposes a task-oriented perturbation mechanism.
Sampling from a heuristically updated distribution (based on past high-performing agents) instead of a fixed one makes perturbation more task-relevant and enhances optimization and exploration in RL. ### Weaknesses - Although the paper demonstrates MENTOR's effectiveness through experiments, there is insufficient in-depth theoretical analysis and mathematical proof of how the MoE architecture and task-oriented perturbation mechanism reduce gradient conflicts and improve optimization efficiency across different tasks. Relying only on experimental results may not help readers fully grasp the underlying principles. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1: “The impact of internal parameter changes on the overall performance may not have been further analyzed in depth… Additionally, there were no relevant ablation experiments regarding the perturbation factor.”: We appreciate the reviewer’s attention to this point. Due to time limitations, we only report the ablation study for the number of experts and top_k in the Hammer (Sparse) task, as shown in [rebuttal Section B](https://mentor-vrl.github.io/). The results indicate that in the Hammer (Sparse) task, the optimal choice for the number of experts is 8 and for top_k is 4. When top_k is 4, there are no significant performance differences when the number of experts is set to 4, 8, or 32, which suggests that 4 experts are enough to learn the skill in this task. The ablation on top_k further validates our hypothesis, as reducing top_k (to 2) results in a worse learning curve. If the number of experts is set to 1 and top_k is also 1, the MoE will degrade to a standard MLP, resulting in the worst performance among all configurations. Regarding the perturbation hyperparameters, we follow DrM [11] by adopting the same perturbation factor and α values from the open-source GitHub repo. > Q2: “Whether the MoE architecture and the task-oriented perturbation mechanism increase the computational complexity of the algorithm.”: We agree that computational efficiency is an important concern. Importantly, neither MoE nor the perturbation mechanism increases the theoretical computational complexity of the algorithm. Empirically, as noted in the original submission **Appendix E**: Time Efficiency of MENTOR, while MoE may introduce higher latency compared to standard MLPs, this is primarily due to hardware-level factors (e.g., memory access patterns) rather than additional computation. Recent developments such as MoE-Infinity [12] demonstrate the potential to reduce MoE latency to negligible levels.
> Q3: “Insufficient in-depth theoretical analysis and mathematical proof”: We thank the reviewer for the insightful comment. We acknowledge the importance of a thorough theoretical analysis of how the MoE architecture and task-oriented perturbation help reduce gradient conflicts; however, this is beyond the scope of the present work. Concurrent works also investigate similar patterns empirically; for example, STGC [13] applied extensive experiments to show the use of MoE to mitigate gradient conflicts in NLP tasks. ---- [11] Xu, G., et al. Drm: Mastering visual reinforcement learning through dormant ratio minimization. arXiv preprint arXiv:2310.19668. [12] Fu, Y., et al. MoE-CAP: Cost-Accuracy-Performance Benchmarking for Mixture-of-Experts Systems. arXiv preprint arXiv:2412.07067. [13] Yang, L., et al. Solving token gradient conflict in mixture-of-experts for large vision-language model. arXiv preprint arXiv:2406.19905.
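For context on the expert-count and top_k ablations discussed in this thread, a top-k routed MoE layer of the kind described (e.g., 16 experts with top_k = 4) can be sketched minimally as follows. The shapes, the linear gate, and the linear "experts" are hypothetical stand-ins, not the MENTOR architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x, gate_w, experts, top_k=4):
    """Route x through the top_k experts chosen by a linear gate and
    combine their outputs with gate weights renormalized over them."""
    logits = gate_w @ x                # one logit per expert
    top = np.argsort(logits)[-top_k:]  # indices of the top_k experts
    weights = softmax(logits[top])     # renormalize over the selection
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
dim, n_experts = 8, 16
gate_w = rng.normal(size=(n_experts, dim))
# each "expert" here is just a small linear map for illustration
expert_mats = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]
experts = [lambda v, M=M: M @ v for M in expert_mats]
y = moe_forward(rng.normal(size=dim), gate_w, experts, top_k=4)
```

With this routing, only the top_k selected experts contribute to the output (and thus receive gradient signal) for a given input, which is the quantity the ablation varies.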
Summary: This paper proposes MENTOR (Mixture-of-Experts Network with Task-Oriented Perturbation) to improve sample efficiency and performance in visual reinforcement learning (RL). MENTOR replaces the policy’s backbone with a mixture-of-experts (MoE) architecture. This MoE design aims to mitigate gradient conflicts in challenging multi-stage or multi-task settings by allocating different parts of the network (“experts”) to specialized subtasks or states. Additionally, the authors introduce a task-oriented perturbation mechanism that periodically “resets” or “perturbs” the policy parameters in a guided fashion—specifically by sampling from a distribution formed by top-performing past agents instead of adding purely random noise. This approach is shown to offer more directed exploration than baseline methods. Empirically, the paper demonstrates strong gains on three simulation benchmarks (DMC, Meta-World, Adroit) and three real-world robotic tasks (peg insertion, cable routing, and tabletop golf), claiming both superior sample efficiency and higher final performance than prior state-of-the-art methods. Claims And Evidence: The main claims in this paper are (1) adopting a mixture-of-experts backbone for the policy helps resolve gradient conflicts that arise in complex RL tasks, and (2) task-oriented perturbation—sampling perturbation candidates from a distribution of well-performing agents—improves exploration and stability over purely random perturbation. Both are well supported by clear and convincing evidence, including didactic examples (e.g., Figures 3, 4, 5), as well as comprehensive experiments in both simulation and real-world tasks. Methods And Evaluation Criteria: Yes, the benchmarks, such as Meta-World and DMC, are standard in the visual RL literature. Additionally, real-world results are demonstrated, which are quite impressive. Theoretical Claims: This paper does not provide any proofs or theoretical claims. Experimental Designs Or Analyses: Yes.
The simulated tasks (DMC, Meta-World, Adroit) are each tested under consistent hyperparameters, showing that MENTOR consistently outperforms baselines. The paper uses multiple seeds (usually four) to report average performance. For real-world tasks, the authors design three testbeds that stress multi-task learning (peg insertion with multiple shapes), sequential multi-stage manipulation (cable routing), and dynamic hitting (tabletop golf). The hardware aspects are well thought out (auto-reset mechanisms, camera angles, etc.) so that training remains feasible without an excessive manual setup. Supplementary Material: This paper does not include supplementary material. Relation To Broader Scientific Literature: The authors build directly on the use of data augmentation in visual RL (e.g., DrQ-v2) and on dormant neuron perturbation in DrM. Their main architectural novelty is bringing the mixture-of-experts concept—long used in large-scale language modeling or multi-task learning—to the standard visual RL pipeline. The results also connect to prior work on multi-task learning where gradient conflicts arise (like conflict-averse gradient descent, gradient surgery, etc.). MENTOR is conceptually aligned with that tradition, providing an RL-specific approach using MoE. Overall, the paper extends known methods (DrQ-v2, DrM, MoE, etc.) in a novel combination well-suited for visual RL. Essential References Not Discussed: All essential related works are discussed to the best of my knowledge. Other Strengths And Weaknesses: Weaknesses: 1. I feel that some more recent benchmarks that emphasize large-scale multi-task, lifelong learning, such as LIBERO, may be better suited to demonstrate the superiority of the proposed approach. The tested benchmarks are a bit saturated. 2. The use of MoE architecture may not be specific to visual RL; it would be good to show why visual RL particularly benefits from this approach. Would state-based RL also benefit? 3. 
The additional computational burden of the MoE layers is not discussed in detail; while MoE can provide sample efficiency, how does it compare to an MLP in terms of wall-clock time and compute? Other Comments Or Suggestions: N/A. Questions For Authors: My suggestions and questions are listed above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > Q1: Regarding the use of more recent benchmarks such as LIBERO that emphasize large-scale multi-task and lifelong learning: We appreciate the reviewer’s suggestion of more recent benchmarks, like lifelong learning tasks. We consider this a valuable avenue for future work. However, we note that this is beyond the current scope of our work, which focuses on evaluating our approach on widely used and established benchmarks and does not consider lifelong learning. Recent works such as Streaming RL [5] and Ace [6] also primarily evaluate in the same or similar settings. Incorporating benchmarks like LIBERO remains an exciting extension for future research. > Q2: Regarding the specificity of the MoE architecture to visual RL: We agree with the reviewer that the MoE architecture is versatile and not tied to visual RL. In this work, we focus on visual RL because of the growing research attention it has received ([6][7][8]) and its importance in real-world applications, where agents often rely on visual input. However, we believe that MENTOR is also applicable to state-based RL tasks. To support this, we conducted a fully state-based RL training experiment using Humanoid-Gym [9], where an agent must control a humanoid to accomplish a locomotion task. The comparison results, shown in [rebuttal Section C](https://mentor-vrl.github.io/), indicate the effectiveness of our method.
Recent advances such as MoE-Infinity [10] offer promising directions to significantly reduce this overhead, potentially narrowing the gap to a negligible level. ---- [5] Elsayed, M., et al. Streaming Deep Reinforcement Learning Finally Works. arXiv preprint arXiv:2410.14606. [6] Ji, T., Liang, et al. Ace: Off-policy actor-critic with causality-aware entropy regularization. arXiv preprint arXiv:2402.14528. [7] Hafner, D., et al. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603. [8] Laskin, M., et al. Curl: Contrastive unsupervised representations for reinforcement learning. In International conference on machine learning (pp. 5639-5650). PMLR. [9] Gu, X., et al. Humanoid-gym: Reinforcement learning for humanoid robot with zero-shot sim2real transfer. arXiv preprint arXiv:2404.05695. [10] Fu, Y., et al. MoE-CAP: Cost-Accuracy-Performance Benchmarking for Mixture-of-Experts Systems. arXiv preprint arXiv:2412.07067.
Summary: This paper introduces an MoE backbone approach to tackle multi-task visual RL, addressing the problem of conflicting gradients when training on completely opposite tasks (close vs. open door). Experimental results interestingly show how the learned model switches between experts smoothly to tackle different sub-tasks of a problem, which further shows evidence of some ability to learn across multiple tasks in a single model (with multiple experts). Neural network perturbation is also used to improve exploration in the RL process, mixing the current weights with perturbation candidate weights. Sim and real-world experiments are performed, with the real-world training facilitated by a state estimation model that generates rewards. Claims And Evidence: Clear Methods And Evaluation Criteria: They make sense and appropriate evaluations are made. Real experiments well appreciated. Theoretical Claims: N/A Experimental Designs Or Analyses: They are sound. Supplementary Material: Skimmed the supplemental. Relation To Broader Scientific Literature: They are related to past work on model-free visual RL, working towards more sample-efficient methods which can possibly enable faster real-world RL. Also related to some neural network research on network perturbation for avoiding local minima. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - Great experimental results in simulation and the real world - Clear ablation studies on the importance of MoE modelling to address the conflicting gradients problem for better multi-task RL - Overall a well written paper Weaknesses: - Not a weakness of this specific paper, but real-world RL has many practical limitations one should be aware of (real-world RL in this case requires a dense/sparse reward function, usually computed by a separate perception system, and an auto-reset mechanism to be designed).
Other Comments Or Suggestions: - Figure 5 is missing error bars? Questions For Authors: - In the real-world tasks, are there any multi-task setups? Is one policy trained to do peg insertion, cable routing, and the other tasks? I am wondering where MoE will be helpful in the real-world experiments, as they otherwise look single-task. - It seems MoE modeling is a big contribution of this paper; however, it is unclear to me whether the experiments in Figure 6 really leverage MoE. Is it a single policy trained per task or one policy trained on all of those tasks? They seem quite a bit different from the motivating experiments with e.g. open/close door tasks in Meta-World. My best understanding is that some of the harder tasks have multiple sub-tasks which MoE helps to learn. - The network perturbation process requires already trained expert policies / weights; how are those obtained? Specifically, what is $\Phi_{\text{oriented}}$? Further reading suggests this is obtained online as training progresses, but I'm not sure if I understood it correctly. - Why are experts 9, 13, 14, and 15 specifically selected for visualization? Are they truly interpretable or is this possibly just some magical number/selection? What do the other experts look like? Happy to raise my score if the above questions can be clarified. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > Q1: Regarding the weakness of real-world RL: We agree that real-world RL still faces several limitations. However, we have seen progress in this area. Recent work, such as Serl [1], has released a software suite that facilitates the rapid deployment of real-world learning paradigms. Moreover, recent advances in foundation models [2][3][4] can help simplify the design of perception systems for computing object-relevant rewards. In addition, we would like to emphasize that the main scope of our work is to propose a sample-efficient RL algorithm, which is not limited to real-world RL scenarios. > Q2: Figure 5 is missing error bars? Thanks for your comment; we will add them in the final camera-ready version. > Q3: In the real-world tasks, are there any multi-task setups? We trained three separate policies for three real-world tasks, each featuring specific challenges (details are provided in Section 4.2). In particular, the peg insertion task is designed to evaluate multi-task learning capability, as the policy must learn to insert different pegs into corresponding target holes using a single visual policy. We visualized the expert usage heatmap of MENTOR for this task in the original submission **Appendix G**, which shows that the MoE agent tends to utilize different experts for different plug shapes. The ablation study results shown in the original submission **Table 1** also demonstrate the effectiveness of the MoE structure compared to a standard MLP. > Q4: Do the experiments in Figure 6 really leverage MoE? Can you show more multi-task experiments? Generally speaking, MENTOR is designed to improve the sample efficiency of visual RL, rather than focusing purely on multi-task scenarios. We found that MoE can promote learning efficiency by automatically leveraging distinct experts to decompose a hard task into several sub-tasks and learn them separately. Therefore, the experiments in Figure 6 are trained with a single policy per task.
We really appreciate your valuable advice, so we conducted several multi-task learning experiments (detailed multi-task setup information is on [rebuttal Section A](https://mentor-vrl.github.io/)) to demonstrate MoE’s contribution in multi-task setups. > Q5: Explanation of Task-oriented Perturbation Mechanism: $\Phi_{\text{oriented}}$ is an approximate distribution over a set of high-performing agent weights $S_{\text{top}}$, which is constructed as training progresses. Specifically, we maintain a fixed-size set $S_{\text{top}} = \{(\theta, R)\}$, which stores the weights $\theta$ and corresponding episode rewards $R$ of the top-performing agents seen so far. At each episode $t$, if the current agent’s reward $R_t$ exceeds the lowest reward in $S_{\text{top}}$, we replace the corresponding tuple with $(\theta_t, R_t)$. For task-oriented perturbation, we approximate $\Phi_{\text{oriented}}$ as a Gaussian $\mathcal{N}(\mu_\theta^{\text{top}}, \sigma_\theta^{\text{top}})$, where the mean and standard deviation are computed over the network weights in $S_{\text{top}}$. This dynamic update ensures that $\Phi_{\text{oriented}}$ remains focused on the most promising regions of the parameter space and generates more effective perturbation candidates $\phi$ compared with perturbation using random weights. A detailed description can be found in the original paper **Section 3.2**. > Q6: “Why specifically experts 9, 13, 14, and 15 are selected for visualization…”: The MoE agent used in this task (Meta-World Assembly) has 16 experts, with the top_k parameter set to 4. Therefore, in Figure 4, we mainly visualize the 4 most active experts during task execution, which happen to be experts 9, 13, 14, and 15. This selection may vary depending on the environment's random seed and the random initialization of expert weights. 
After training, the agent primarily uses these 4 experts, while the others exhibit low utilization during execution, so we did not include them in the visualization. ----- [1] Luo, J., et al. Serl: A software suite for sample-efficient robotic reinforcement learning. In 2024 IEEE International Conference on Robotics and Automation (ICRA) (pp. 16961-16969). IEEE. [2] Liu, S., et al. Grounding DINO: Marrying DINO with grounded pre-training for open-set object detection. In European Conference on Computer Vision (pp. 38-55). Cham: Springer Nature Switzerland. [3] Wen, B., et al. FoundationPose: Unified 6D pose estimation and tracking of novel objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 17868-17879). [4] Wen, B., et al. FoundationStereo: Zero-Shot Stereo Matching. arXiv preprint arXiv:2501.09898. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response and additional experiments. The majority of my concerns are addressed! I will raise my score to a 4. I still have some concern over interpretability figures like that in Figure 4. If there really are only a few experts used, would it make sense to train with fewer experts? My interpretation would be that if fewer experts do not do any better / do worse, then the interpretation of Figure 4 might not make as much semantic sense as one might expect. --- Reply to Comment 1.1.1: Comment: Thanks for your reply! Generally speaking, we believe the choice of top_k in Figure 4 has a significant impact on agent performance, while our method remains quite robust to the choice of the total number of experts in the MoE. From our empirical study, we found that a key factor significantly affecting agent performance is the number of experts allowed to be activated (i.e., top_k). 
To support this point, we refer you to [rebuttal Section B](https://mentor-vrl.github.io/): when the number of activated experts is insufficient for the task, increasing top_k can help improve performance (Curve 4_4 >> Curve 4_2 >> Curve 1_1). This indicates that the MoE multi-routing mechanism indeed plays an important role in agent performance, which is the main idea we aim to convey in Figure 4. Regarding the effect of the total number of experts in the MoE, we found the performance trend to be: Curve 8_4 > Curve 4_4 > Curve 32_4 (all of them >> Curve 4_2). This suggests that while the total number of experts can also influence performance, the agent's performance is generally robust to this parameter.
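The top-set maintenance and Gaussian sampling described in the Q5 answer above can be sketched in a few lines (a minimal illustration only; the fixed-size set and the element-wise Gaussian fit follow the rebuttal, while `max_size=10` and the sorting-based replacement rule are our assumptions, not values from the paper):

```python
import numpy as np

def update_top_set(S_top, theta, R, max_size=10):
    # Keep only the (weights, reward) pairs of the top-performing
    # agents seen so far; max_size is an illustrative choice here.
    S_top.append((theta, R))
    S_top.sort(key=lambda pair: pair[1], reverse=True)
    del S_top[max_size:]

def sample_perturbation(S_top, rng):
    # Approximate Phi_oriented as a Gaussian N(mu, sigma) whose mean
    # and standard deviation are computed element-wise over the
    # stored network weights, then draw a candidate phi from it.
    thetas = np.stack([theta for theta, _ in S_top])
    return rng.normal(thetas.mean(axis=0), thetas.std(axis=0))
```

Here `sample_perturbation` plays the role of drawing a candidate $\phi$ from the approximated $\Phi_{\text{oriented}}$; the actual MENTOR procedure is described in Section 3.2 of the paper.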
Simple Path Structural Encoding for Graph Transformers
Accept (poster)
Summary: This paper introduces a novel graph structural encoding method, Simple Path Structural Encoding (SPSE), as a replacement for Random Walk Structural Encoding (RWSE) in graph transformers. SPSE encodes graph structure by counting simple paths of varying lengths between node pairs, providing more accurate structural information compared to random walks. To address the computational challenges of simple path counting, the authors propose an efficient algorithm based on successive DAG decompositions using depth-first search (DFS) and breadth-first search (BFS). This approach avoids the exponential memory costs associated with path enumeration, enabling scalability to longer path lengths. The key innovations of this paper include: 1. The introduction of SPSE, a new graph structural encoding method that provides more accurate structural information; 2. The design of an efficient algorithm to tackle the computational difficulties of simple path counting, demonstrating strong scalability. Claims And Evidence: The claims presented in the submission are well-supported by clear and compelling evidence. The authors substantiate their arguments with illustrative examples demonstrated in Figure 1 and Figure 2, as well as a detailed comparative analysis presented in Figure 4. Furthermore, the methodology employed for data collection and analysis is robust, thoroughly documented, and enhances the credibility of the findings. Methods And Evaluation Criteria: Yes. SPSE mitigates the limitations of random walk-based encodings while maintaining computational feasibility through an efficient approximation algorithm. In addition to its application in this study, the algorithm also serves as an approximate solution to the fundamental graph theory problem known as Simple Path Counting. Theoretical Claims: Yes. I checked the correctness of proofs for Proposition 1, Proposition 2 and Proposition 3. 
Experimental Designs Or Analyses: The experimental design incorporates SPSE as an additional component to the existing advanced Graph Transformer architecture, and the implementation results demonstrate a consistent improvement in performance. Alternatively, when SPSE is used to replace the RWSE in these Graph Transformers, the experimental outcomes indicate that SPSE achieves better performance than RWSE. Supplementary Material: No, I didn't Relation To Broader Scientific Literature: 1. Random walk structural encoding (RWSE), which encodes richer structural information by considering random walk probabilities as edge features. RWSE has shown substantial improvements in the performance of state-of-the-art graph transformers (“Self-attention in colors: Another take on encoding graph structure in transformers”, TMLR 2023). This paper uses simple path matrix instead of random walk matrix to perform structural encoding. 2. Simple path counting, existing path-counting algorithms (“A general purpose algorithm for counting simple cycles and simple paths of any length”, Algorithmica) efficiently handle short paths, the graph topologies and path lengths considered necessitate approximate methods. Essential References Not Discussed: There are no essential references missing from the paper. Other Strengths And Weaknesses: Although SPSE performs better than RWSE in most cases, the improvement is only slight. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive review. We're glad that the clarity of our claims, the robustness of our experimental design, and the correctness of our theoretical contributions were appreciated. While no major concern was raised, we would like to contextualize the following point below: **Q1. Although SPSE performs better than RWSE in most cases, the improvement is only slight.** **R1.** We completely agree that, although the performance gains over RWSE-based methods are statistically significant on 4 out of 7 datasets, they can sometimes appear modest. However, we would like to highlight that these results were obtained only by replacing walk probabilities by path counts, with no further alteration to the model input or architecture. That said, we believe there is still untapped potential in SPSE, as suggested in Section 5.3 ("Path Count Sensitivity to Hyperparameters"). Furthermore, additional performance gains may be possible through more extensive tuning of SPSE's three key hyperparameters.
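To make concrete the kind of ambiguity that replacing walk probabilities with path counts resolves, here is a small self-contained check (the two graphs below are illustrative stand-ins chosen by us, not necessarily the graphs of the paper's Figure 1): on both a 4-cycle and the complete bipartite graph $K_{3,2}$, the $k$-step random-walk probability along edge (0,1) is 0.5 for every odd $k$ and 0 for every even $k$, so the two edges are indistinguishable by walk probabilities alone, while their simple path counts differ.

```python
import numpy as np

def rw_prob(adj, k):
    # k-step random-walk probabilities: row-normalize the adjacency
    # matrix into a transition matrix, then take its k-th power
    P = adj / adj.sum(axis=1, keepdims=True)
    return np.linalg.matrix_power(P, k)

# Graph 1: the 4-cycle 0-1-2-3-0
C4 = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    C4[u, v] = C4[v, u] = 1.0

# Graph 2: complete bipartite K_{3,2} with sides {0, 2, 4} and {1, 3}
K32 = np.zeros((5, 5))
for u in (0, 2, 4):
    for v in (1, 3):
        K32[u, v] = K32[v, u] = 1.0

# Walk probabilities along edge (0, 1) coincide for every k ...
for k in range(1, 9):
    assert np.isclose(rw_prob(C4, k)[0, 1], rw_prob(K32, k)[0, 1])
    assert np.isclose(rw_prob(C4, k)[0, 1], 0.5 if k % 2 else 0.0)
# ... yet simple path counts differ: C4 has one simple path of
# length 3 from 0 to 1 (0-3-2-1), K_{3,2} has two (0-3-2-1, 0-3-4-1).
```

This mirrors, on toy graphs, the indistinguishability formalized in the paper's Proposition 1 and the way path counts break the tie.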
Summary: The work introduces a new perspective on structural encoding for Graph Transformers by leveraging Simple Path Structural Encoding (SPSE). The paper claims to be the first to propose encoding edge information based on simple path counts rather than random walk probabilities, addressing the limitations of Random Walk Structural Encoding (RWSE) in distinguishing local graph structures. It introduces an efficient approximation algorithm for counting simple paths, enabling SPSE to capture richer structural patterns, particularly in graphs with cyclic dependencies. Experimental results demonstrate that SPSE improves performance across various benchmarks, including molecular and long-range graph tasks, outperforming RWSE-based Graph Transformers. Claims And Evidence: 1. In Figure 1, I'm confused why RWSE cannot distinguish A vs. B and C vs. D. Can the authors elaborate and provide empirical numbers to prove that? Computing the random walks of these graphs should be easy and fast enough, in my opinion. - I would like to note here that the RWSE used in, for example, GRIT is not limited to a single value of $k$, but is a concatenation of multiple random walk matrices $A_i$ for $i \in \{1, ..., k\}$. Methods And Evaluation Criteria: 1. For Algorithm 2, how can we get the approximated simple path count from the list of node orderings? I believe this is worth mentioning in the paper to clarify the end-to-end pipeline. Theoretical Claims: 1. What is the error bound of the proposed approximation algorithm for simple path counting? 2. What is the theoretical reason behind using $\log(\cdot)$ for Equation (4)? Experimental Designs Or Analyses: 1. Some numbers in Table 1 seem to be off. For example, GRIT-RWSE on peptide-struct was reported as 0.2460 in their paper, while the authors provide 0.2480. Can the authors explain? Supplementary Material: The supplementary material consists of dataset statistics, proofs for the propositions in the paper, and example explanations. 
Relation To Broader Scientific Literature: The work proposes a new structural encoding method, i.e. simple path counting, to enhance transformers' performance on graph tasks. To the best of my knowledge, the work is the first one to do so. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Generally, the paper offers a new structural encoding method, yet some points need to be addressed as I mentioned in the upper section. I'm willing to raise my score a bit if all of my concerns can be addressed. Other Comments Or Suggestions: Please see "Other Strengths And Weaknesses" Questions For Authors: Please see "Other Strengths And Weaknesses" Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We respectfully acknowledge the reviewer’s comments, and hereby attempt to address the different points that were raised. **Q1. Distinction of graphs A vs. B using RWSE / RWSE is not limited to a single value of k…** **R1.** We'd be happy to clarify. In Figure 1, the objective is not to differentiate between entire graphs (e.g., Graph A vs. Graph B), but rather to compare how individual edges are represented under different encoding methods. Taking the concrete example of edge (0,1) in both Graph A and Graph B, the random walk probability of travelling from node 0 to node 1 in $k$ steps is exactly 0.5 if $k$ is odd and 0 if $k$ is even, and *this holds for any* $\textit{k} \in \mathbb{N}^*$. This implies that RW-based structural encoding cannot distinguish between edges (0,1) in Graph A and Graph B. This property of RWSE is formalized in Proposition 1 of the manuscript, which we prove in Appendix B.1. We also sketch the manual derivation of the random walk probabilities of edge (0,1) of graphs A to D in Figure 7 (Appendix C). **Q3. How to get the approximated simple path count?** **R3.** Thank you for pointing this out. The details about the operation done in line 9 of Algorithm 1 (how the running path count is updated for each new node ordering) are given in the full path counting algorithm, in Appendix D: powers of the adjacency matrix given by the new DAG yield new path counts, and running counts are updated by taking the element-wise maximum across all node pairs and path lengths between the new path counts and the current running values. This will be made more explicit in the main text. **Q4. Error bound of the approximate counting algorithm** **R4.** Deriving the error bound is an intricate task, which would require identifying the failure cases and estimating their prevalence. 
It is however possible to empirically quantify this bound by running exact path counts on various graph topologies, within the runtime limits of these algorithms (i.e. the largest possible value of the maximum path length $K$, shown below for the `NetworkX` implementation). For ZINC, MNIST and PATTERN, which contain graphs of respective average densities of 0.09, 0.23 and 0.43, we report below the proportion of paths discovered by our approximate path counting method.

| Maximum Path Length $K$ | | 2 | 3 | 4 | 5 | 6 | ... | 20 |
|:---------------------:|:-------:|-----|-----|-----|-----|-----|-----|-----|
| | ZINC | 100 | 100 | 100 | 100 | 100 | ... | 99 |
| Path Count (%) | MNIST | 100 | 93 | 79 | 61 | OOM | ... | OOM |
| | PATTERN | 100 | OOM | OOM | OOM | OOM | ... | OOM |

We observe from experiments that trading path count precision for longer path lengths can improve overall performance, but the precise interplay between exact path counts (when they are affordable), the maximum path length $K$, and model accuracy would require further study. **Q5. Reason behind $\log$ in Equation (4)** **R5.** The only constraint on the encoding function of Equation 4 is injectivity. The use of logarithm composition was empirically driven by the need to reduce the very high magnitudes of total path counts reached for several graph collections (up to $10^{13}$ for $k=16$ on CLUSTER). **Q6. Differences between Table 1 results and original works** **R6.** The differences between the results originally reported by the authors of CSA, GRIT and GPS and those in Table 1 are due to the fact that all models were re-trained (using publicly released code and configurations) on 10 random seeds, instead of the usual 4. Retraining was important to compare RWSE and SPSE performances all things being equal, while running experiments on 10 seeds yielded much stronger results. 
For the large PCQM4Mv2 on which a single experiment was run, we found a MAE of 0.0838 for GRIT-RWSE, improving the reported value of 0.0859: this highlights the need for retraining, even when using the same number of seeds as the original work. The reason for the observation of such differences (be it better or worse) will be made very clear in the final version of the manuscript. --- Rebuttal Comment 1.1: Comment: Thanks the author for their rebuttal. I have updated my rating.
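The DAG-based counting principle referenced in R3 can be illustrated with a minimal sketch (a hedged illustration, not the paper's full Algorithm 3): in a DAG every walk is a simple path, so powers of the DAG's adjacency matrix count simple paths exactly, and counts from successive decompositions can be merged with an element-wise maximum.

```python
import numpy as np

def dag_path_counts(A, K):
    # In a DAG every walk is a simple path, so (A^k)[i, j] is exactly
    # the number of simple paths of length k from node i to node j.
    n = A.shape[0]
    counts = np.zeros((K, n, n), dtype=np.int64)
    P = np.eye(n, dtype=np.int64)
    for k in range(K):
        P = P @ A
        counts[k] = P
    return counts

def merge_counts(running, new):
    # Running counts from successive DAG decompositions are merged by
    # an element-wise maximum over all node pairs and path lengths.
    return np.maximum(running, new)

# Diamond DAG: 0 -> 1 -> 3 and 0 -> 2 -> 3
A = np.zeros((4, 4), dtype=np.int64)
for u, v in [(0, 1), (0, 2), (1, 3), (2, 3)]:
    A[u, v] = 1
counts = dag_path_counts(A, 3)
assert counts[1][0, 3] == 2  # two simple paths of length 2: 0-1-3, 0-2-3
```

The heavy lifting in the actual method lies in choosing the DAG decompositions (the DFS/BFS orderings) so that the merged counts cover as many simple paths of the original graph as possible.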
Summary: The paper introduces Simple Path Structural Encoding (SPSE), a novel method for encoding graph structures in graph transformers. SPSE leverages simple path counts between node pairs to capture richer structural information compared to traditional random walk structural encoding (RWSE). The authors propose an efficient approximate algorithm for counting simple paths, addressing the computational challenges associated with path enumeration. SPSE is shown to outperform RWSE in various benchmarks, including molecular and long-range graph datasets, demonstrating significant improvements in discriminative tasks. The method is particularly effective in capturing local cyclic patterns, which are crucial in applications like molecular chemistry and social network analysis. Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence. The theoretical analysis highlights the limitations of RWSE in distinguishing between different graph structures, which SPSE overcomes by capturing cycle-related information more effectively. Experimental results on synthetic and real-world datasets validate the superiority of SPSE over RWSE in cycle counting and other graph learning tasks. However, the performance of SPSE on densely connected graphs like the CLUSTER dataset is limited, indicating potential areas for improvement. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem at hand. The use of synthetic datasets for cycle counting and real-world benchmarks for molecular and long-range graphs provides a comprehensive assessment of SPSE's effectiveness. The comparison with RWSE and other graph transformer models using standard metrics (e.g., accuracy, mean absolute error) offers a fair evaluation of SPSE's performance. Theoretical Claims: The theoretical claims, such as Propositions 1, 2, and 3, are well-supported by proofs provided in the appendices. 
These propositions highlight the limitations of RWSE and the advantages of SPSE in capturing structural information. However, a detailed review of the proofs would require examining the appendices closely. Experimental Designs Or Analyses: The experimental designs are sound, involving a range of datasets and model configurations. The use of multiple hyperparameter settings and the retraining of models on different random seeds enhance the robustness of the results. However, the lack of hyperparameter tuning for SPSE might limit its full potential, as noted in the paper. Supplementary Material: The supplementary material includes detailed algorithms and proofs, which are essential for understanding the technical contributions of the paper. The Python code for the path counting algorithm is also provided, facilitating replication and further development. Relation To Broader Scientific Literature: SPSE contributes to the broader field of graph representation learning by offering a more expressive edge encoding method. It builds upon recent advances in graph transformers and message-passing neural networks, addressing the need for more effective structural encodings. The work is related to studies on path-based MPNNs and cycle counting algorithms, which have shown the benefits of using paths over random walks for enhancing model expressivity. Essential References Not Discussed: While the paper cites key works in graph transformers and structural encodings, it might benefit from discussing recent developments in hierarchical encodings and spectral attention mechanisms, which could further enhance SPSE's effectiveness. Other Strengths And Weaknesses: Strengths include the novel approach to edge encoding and the comprehensive experimental validation. Weaknesses include the computational cost of SPSE and its limited performance on densely connected graphs. The paper is well-structured and clear, making it accessible to readers familiar with graph learning concepts. 
Other Comments Or Suggestions: Future work could focus on optimizing SPSE's computational efficiency and exploring its applicability to other domains like knowledge graphs. Questions For Authors: How do you plan to address the computational cost of SPSE for large-scale graphs? Are there plans to integrate SPSE with other structural encoding methods to further enhance its expressivity? How might SPSE be adapted for directed graphs or graphs with weighted edges? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed and constructive review. We appreciate your recognition of SPSE’s novelty, and of the clarity and comprehensiveness of our experimental validation. We're also glad you found the theoretical contributions, algorithmic details, and supplementary material valuable. We address the questions below. **Q1. Addressing the computational cost of SPSE for large-scale graphs** **R1.** As the computational complexity of path counting incurs a cubic dependency with the number of nodes, scaling to large-scale graphs is a challenge. One possible solution is to apply pooling techniques [1] to reduce the graph size: computing simple paths between communities in a pooled graph would become manageable, and this could be followed by separate path discovery within each community. This shares links with the discussion in the answer to the next question. That being said, accurately combining path counts within and between communities presents new and interesting challenges, constituting an avenue for future research. [1] Bianchi, F. M., & Lachi, V. The expressive power of pooling in graph neural networks. NeurIPS 2023. **Q2. Plans to combine SPSE with other SEs?** **R2.** Yes, one particularly promising direction is [2], which is complementary to our work and can also be used as a plug-in module within existing architectures. In particular, we expect that our two offline preprocessing workflows can be efficiently combined: mining paths at higher hierarchy levels can be done more easily because of simpler topologies, while capturing valuable long-range structural information. This will be mentioned in the discussion on future research direction. [2] Luo et al., Enhancing Graph Transformers with Hierarchical Distance Structural Encoding, NeurIPS 2024 **Q3. 
SPSE for directed/weighted graphs** **R3.** The only difference that arises in our algorithm when the input graph is directed is that one must account for the fact that the adjacency matrix is no longer symmetric. This case is addressed in the attached Python file by setting the `directed` boolean to True. This prevents the summation with the transposed adjacency matrix in line 16 of the full algorithm (Algorithm 3 in Appendix D). In the special case of a DAG input graph, the ``DAGDecompose`` step will be skipped. It is also possible to use different edge attribute values (i.e. weights) inside the adjacency matrices, although the interpretation might be more intricate and calls for closer scrutiny. **Q4. Contextualization with hierarchical encoding / spectral attention** **R4.** Thank you for the suggestion. Adding a discussion —aligned with our response to **Q2**— on the potential synergies with hierarchical methods, especially, would help strengthen the future research directions section. Spectral attention, meanwhile, is natively used within both GRIT and CSA frameworks. **Q5. Further improvement with hyperparameter tuning** **R5.** We acknowledge that hyperparameter tuning could further improve SPSE's performance. Here, we intentionally chose not to alter the provided configurations for a transparent and fair comparison with existing structural encodings (a procedure that Reviewer wLjL also acknowledged), but future exploration of full SPSE capabilities will likely require adjusting model and optimization parameters.
Summary: This work proposes a new structural encoding for empowering graph transformers, SPSE (Simple Path Structure Encoding). Rather than encoding random walk landing probabilities, SPSE encodes simple path counts, demonstrating superior expressivity in scenarios such as cycle counting. The authors propose an algorithm for computing an approximation of simple path counts between nodes show that SPSE outperforms a variety of structural/positional encodings on benchmark datasets. Claims And Evidence: I am convinced the theoretical claims are correct, and the authors' assessment of SPSE in their experiments are also supported. Methods And Evaluation Criteria: Using simple path counts is a straightforward and intuitively more powerful approach than random walk landing probabilities, and the proposed method is total sense for this problem. The evaluation metrics and datasets used are all standard in the graph PSE literature. Theoretical Claims: I read and checked all proofs introduced. I am convinced Propositions 1 and 3 are correct. I think Proposition 2 is also correct, although I am less certain. Experimental Designs Or Analyses: **Strengths** - The fact that the authors do no hyperparameter tuning on the main results is transparent for fair comparisons with other SEs. - The use of 10 runs and random seeds with significance testing strengthens the experimental design. **Weaknesses** - The datasets used are largely toy datasets. Results would be stronger if some experiments on more "real-world" graphs were used (e.g. MoleculeNet, citation graphs). Supplementary Material: I read all the supplementary material. Relation To Broader Scientific Literature: This work is related to the graph positional/structural encoding literature, expanding on [1] and [2] to use path counts rather than random walk probabilities to encode structure. It is also following a line of work that uses counts of important topological features within a graph to supplement neural network features. 
For example, [3] and [4] explore counting specific subgraph structures. [1] Menegaux et al. Self-Attention in Colors: Another Take on Encoding Graph Structure in Transformers. TMLR. [2] Ma et al. Graph Inductive Biases in Transformers without Message Passing. ICML 2023. [3] Jin et al. Homomorphism Counts for Graph Neural Networks: All About That Basis. ICML 2024. [4] Charilaos I. Kanatsoulis and Alejandro Ribeiro. Counting Graph Substructures with Graph Neural Networks. ICLR 2024. Essential References Not Discussed: To my knowledge, the authors discuss all essential references for their research problem. Other Strengths And Weaknesses: **Strengths** - The authors do a great job of illustrating how their method works and specific failure cases of RWSE that SPSE remedies. - The limitations section is well-argued, and the failure case of SPSE is easily understood. **Weaknesses** - I think a parameter study on $\alpha, \beta,$ and $n$ would be valuable, similar to their study of the effect of $R, N, K, D_{\text{DFS}}$. - There is no discussion on the WL-expressivity of the proposed encoding. Given that most PSEs have some analysis regarding the WL-hierarchy, this work would be more convincing if the relationship between SPSE and WL were addressed. Similarly, there is no theoretical proof that SPSE is strictly more expressive than RWSE in a certain hierarchy. - There are some missing SEs in the main benchmark such as [1]. [1] Cantürk et al. Graph Positional and Structural Encoder. ICML 2024. Other Comments Or Suggestions: Overall, I am convinced by this paper. The advantages of SPSE seem clear, and the experiments convince me of its effectiveness. Minor typos: - "there _exits_" Questions For Authors: Please address the above weaknesses. - What aspect of a given graph contributes most to the runtime of the simple path mining algorithm? 
- Also, the proposed path count mining algorithm seems to bear some similarity with Node2Vec during the DAG decomposition phase given its combination of BFS and DFS, except the proposed algorithm alternates between phases of explicit DFS/BFS rather than a biased random walk. Can the authors comment on the connection between these works? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough and thoughtful review. We are glad that you found the theoretical claims convincing and appreciated the clarity of our method and the rigor of our experimental setup. Below, we report the answers to the comments and questions. **Q1. Aspect contributing to runtime of path mining algorithm** **R1.** In the cases considered here, where the number of nodes does not exceed a few hundred, the critical aspect is graph density. From the algorithmic point of view, the parameters that most impact the computation time are the fraction of selected root nodes $R$, the number of trials per depth $N$, and to a lesser extent the maximum DFS depth $D_\text{DFS}$, as can be seen in Figure 5. Although the worst-case complexity scales linearly with $D_\text{DFS}$ and $N$, their contributions are usually smaller: we suspect that the sparser the input graph, the smaller the contributions of $D_\text{DFS}$ and $N$ (if only because $N$ cannot exceed the node degree), although this would require further and thorough analysis. **Q2: Similarity with Node2Vec** **R2.** A given DAG decomposition could be interpreted as an attempt to characterize the neighborhood of a root node in the sense of Node2Vec, in the limit case where the neighborhood is the whole graph. In this sense, the two methods indeed share similarities. However, we characterize node pairs instead of single nodes, and rather than listing the neighboring nodes, we focus on the ordering in which the nodes were discovered by the search to unveil graph structures at various scales. **Q3. Choice of datasets used** **R3.** The practical reason that drove our choice of test datasets was that we looked for existing configurations for the training of GRIT and/or CSA, the two graph transformer methods implementing RWSE, in order to validate our methodology. 
The exploration of the benefits of SPSE on more diverse graph topologies will be an exciting future research direction. **Q4. Parameter study on $\alpha$, $\beta$ and $n$** **R4.** Our analysis of the path encoding parameters (Equation 4) revealed two main factors contributing significantly to variations in model performance: * Compression of the dynamic range of path counts (controlled primarily by parameter $n$). * Adjustment of the final output range (affected by both parameters $\alpha$ and $n$). These effects are demonstrated in the tables below, presenting model accuracy for GPS on the PATTERN dataset and GRIT on the MNIST dataset, across various hyperparameter configurations:

| PATTERN / GPS | | $n$ |
|----------------|:------:|:----------:|
| $\alpha$ | 2 | 3 |
| 0.2 | 86.815 | **86.834** |
| 0.5 | - | 86.819 |

| MNIST / GRIT | | $n$ | |
|--------------|:------:|:------:|:----------:|
| $\alpha$ | 1 | 2 | 3 |
| 0.5 | 98.189 | 98.224 | **98.294** |

In these cases, the large path counts explain why the best results are obtained for a higher $n$ / lower $\alpha$. **Q5. Expressivity regarding WL / RWSE** **R5.** We provide below an initial discussion about the expressivity of the proposed SPSE encoding as an answer to this insightful comment. A reasoning similar to that employed for Path-WL [1] shows that composing $k$ global attention layers with SPSE results in a model that is strictly more expressive than $k$ iterations of the 1-WL color refinement algorithm. We note also that all properties regarding the expressivity of GRIT remain valid when SPSE replaces RWSE as the chosen structural encoding method. However, comparing the expressivity of SPSE and RWSE through isomorphism tests is more complex. Contrary to WL- / Path-trees introduced in [1] and [2], SPSE and RWSE aggregate information over simple paths and walks without enumerating them, which prevents one from using the strategy of the proof of Theorem 3.3 of [2] to draw any conclusion. 
A rigorous theoretical analysis is required to characterize the precise relationship between the two encodings within the WL hierarchy, which is left as future work. [1] Graziani et al., "The Expressive Power of Path-Based Graph Neural Networks." ICML, 2024. [2] Michel et al., “Path Neural Networks: Expressive and Accurate Graph Neural Networks”, ICML, 2023 **Q6. Missing SEs benchmark**. **R6.** The missing reference will be added to the benchmark. --- Rebuttal Comment 1.1: Comment: I am satisfied with these replies. I think the parameter studies and some discussion on the runtime of the path mining algorithm would be valuable to include in the final version/appendix. I stand by my initial assessment of this work and give my best wishes to the authors.
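The compression behaviour discussed in R4 can be illustrated with a hypothetical encoding in the spirit of Equation 4 (the exact form appears only in the paper; the nested-`log1p` construction, `alpha`, and `n` below are our assumptions): $n$ nested logarithms keep the map injective and monotone on counts while collapsing magnitudes up to $10^{13}$ into a small range that $\alpha$ then rescales.

```python
import math

def path_count_encoding(count, alpha=0.5, n=2):
    # Hypothetical encoding in the spirit of Equation 4: n nested
    # log1p calls compress the dynamic range (each log1p is injective
    # and monotone, so the composition is too), and alpha rescales
    # the final output range.
    x = float(count)
    for _ in range(n):
        x = math.log1p(x)
    return alpha * x
```

With `n=2` and `alpha=0.5`, counts spanning thirteen orders of magnitude are mapped into an interval of width below 2, which matches the rebuttal's observation that higher `n` / lower `alpha` help when path counts are very large.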
Deep Principal Support Vector Machines for Nonlinear Sufficient Dimension Reduction
Accept (poster)
Summary: The paper focuses on SDR, replacing functions in RKHS with neural networks. This is motivated by the potential advantages of neural networks in handling complex data structures. The authors theoretically demonstrate the unbiasedness of their method and provide a non-asymptotic upper bound for the estimation error, using principles of classification ensembles for nonlinear SDR. While the core idea is sound, the paper lacks a truly comprehensive and balanced approach, especially regarding complexity and practical considerations, and fails to show the true value of the proposal. Claims And Evidence: The paper claims that using neural networks offers advantages over traditional kernel methods in SDR. However, it doesn't provide a detailed analysis of the computational complexity vs. kernel methods. While it's generally stated that neural networks can be more efficient for large datasets, the specific computational costs are not comprehensively quantified and compared. Moreover, a comparison of the number of parameters should be done. The paper's assertion of superior performance lacks substantial supporting evidence. The limited empirical evidence might be the greatest weakness of the paper. The simulations and real data analysis are limited in scope and don't fully validate the claimed advantages, as the scope is very specific. The results presented, while showing some improvements, are not conclusive enough to definitively establish the superiority of the proposed method in a broader context. Methods And Evaluation Criteria: The evaluation is limited by the scope of the datasets used. A wider range of benchmark datasets, especially higher-dimensional ones, and a more rigorous analysis of computational complexity would further strengthen the evaluation. The datasets utilized are relatively low-dimensional, and the performance on these might not generalize to more complex, high-dimensional data encountered in modern scientific applications.
The lack of comparison with the state of the art on high-dimensional or additional datasets limits the work. The lack of an ablation study to understand the relative importance of different components in the network is an issue. Theoretical Claims: The theoretical claims regarding the unbiasedness of the optimal solution and the non-asymptotic upper bound for the estimation error seem mathematically OK. Experimental Designs Or Analyses: The experimental design is limited in its ability to fully support the claims made in the paper. The claim of outperformance needs qualification. The improvements over other methods in Table 1 are sometimes marginal, and the experiments are limited in scope (low dimensionality, specific datasets). Supplementary Material: The supplementary material is OK, providing additional details on the empirical distance correlation, computer configuration, and hyperparameter settings. Relation To Broader Scientific Literature: The relevant literature is discussed, placing the proposed method within the context of existing SDR techniques. The paper appropriately cites key works in the field of SDR, including both linear and nonlinear approaches. However, the discussion could be expanded to include a more critical comparison with a wider range of contemporary methods, particularly those that also leverage neural networks or deep learning for dimension reduction. Essential References Not Discussed: N/A Other Strengths And Weaknesses: A significant weakness is the potential lack of interpretability inherent in using neural networks. Given that they utilize neural networks, interpretability can be reduced, and there should be some discussion around this. While neural networks can offer flexibility and representational power, they often come at the cost of being "black boxes."
The paper doesn't address how the learned representations can be interpreted or understood in the context of the original data, which is crucial for many scientific applications where domain knowledge is important. The impact statement is missing. Other Comments Or Suggestions: The paper could benefit from a more thorough investigation of the sensitivity of the proposed method to different hyperparameter settings. The current sensitivity analysis in the supplementary material is limited. Exploring a wider range of architectures and regularization techniques would provide a more complete picture of the method's robustness. Additionally, a discussion of potential strategies for mitigating the "black box" nature of neural networks in this context would be valuable. For instance, exploring techniques like attention mechanisms or feature importance analysis could enhance the interpretability of the learned representations. Finally, it would improve the presentation to include a figure illustrating the proposed neural network architecture. Questions For Authors: How was the specific neural network architecture (number of layers, number of neurons per layer, activation functions) chosen, and what was the rationale behind this choice? Did you explore other architectures, and if so, how did their performance compare? Could you elaborate on the hyperparameter tuning process? What range of values was considered for each hyperparameter, and how was the optimal configuration determined? What are the practical limitations of the proposed method in terms of computational resources (training time, memory usage) when applied to very high-dimensional datasets? How can the learned representations be interpreted in a way that provides meaningful insights into the relationship between the predictors and the response variable? Are there any plans to incorporate techniques to improve interpretability? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks to the reviewer for helpful comments. We will refine the manuscript. > **Q1**. The paper doesn't provide a detailed analysis of the computational complexity vs. kernel methods. **A1**. A brief theoretical analysis of computational costs is provided in **Q4** of Reviewer 1. The cost of our methods is $$\mathcal{O}(hnt\mathcal{L} \max\\{p, \mathcal{N}\\}^2 + h^2n + h^3),$$ which is linear in $n$. Other deep methods avoid binarization, reducing it to $$ \mathcal{O}(nt\mathcal{L} \max\\{p, \mathcal{N}\\}^2).$$ Kernel-based SDR methods, requiring the eigendecomposition of an $n \times n$ matrix, incur $$\mathcal{O}(n^3).$$ While discretization increases computation, it enhances robustness for both regression and classification tasks, particularly for data with many outliers. >**Q2**. The experiments are limited in scope. **A2**. Thanks for your comments. Our focus is to provide a theoretical perspective on a new robust nonlinear SDR framework, especially considering the embedding of neural networks. Importantly, we establish a faster non-asymptotic convergence rate compared to Zheng et al. (2022) and Huang et al. (2024). Our rate nearly reaches the minimax rate of nonparametric regression. The existing experiments show results comparable to other approaches. More experiments, such as a simulation with large $p=100$, have been added in the revision. * Zheng, S., Lin, Y., & Huang, J. (2022). Deep Sufficient Representation Learning via Mutual Information. arXiv preprint. * Huang, J., Jiao, Y., Liao, X., Liu, J., & Yu, Z. (2024). Deep dimension reduction for supervised representation learning. IEEE Transactions on Information Theory. >**Q3**. Literature. **A3**. We have included the latest deep dimension reduction methods for comparison. >**Q4**. How was the specific neural network architecture chosen, and what was the rationale behind this choice? Did you explore other architectures? **A4**.
The default network is a feedforward ReLU network with hidden dimensions $2^{D_1}, 2^{D_1+1}, 2^{D_1}, 2^{D_1-1}, \dots, 16, 1$, where $D_1 = \lfloor \log_2 p \rfloor + 1$ and $p$ is the input dimension. This structure follows an expansion-contraction pattern, similar to a reverse bottleneck in autoencoders: widening layers map $x$ to a higher-dimensional space (akin to RKHS feature mapping), while narrowing layers extract a lower-dimensional representation. Our method performs well with a simple feedforward network, so we did not explore more complex architectures. >**Q5**. Could you elaborate on the hyperparameter tuning process? What range of values was considered for each hyperparameter, and how was the optimal configuration determined? **A5**. The hyperparameter tuning process is primarily introduced in Section A.3. Some hyperparameters, such as the optimizer (Adam), batch size (100), and number of epochs (100), remain unchanged, as they are less significant compared to the others: binarization count, $\lambda$, learning rate, and neural network structure. Their ranges can be found in Table 2. Table 2 shows that increasing the binarization count improves performance, consistent with classical PSVM results. A smaller $\lambda$ also leads to better performance, as it shifts the focus toward dimension reduction rather than variance constraints. Notably, $\lambda$ is not a regularization parameter like in $L_2$-regularized regression. The ablation study on learning rate and network structure suggests that the default setting is optimal. The optimal configuration uses the default network, 10 binarizations to balance performance and computational cost, $\lambda = 0.01$, and a learning rate of 0.001, chosen by cross-validation. Following the reviewer's suggestion, we will conduct a more in-depth investigation into the sensitivity of the proposed method to different hyperparameters. >**Q6**.
What are the practical limitations of the proposed method in terms of computational resources when applied to very high-dimensional datasets? **A6**. The theoretical computational cost, addressed in **A1**, scales quadratically with the input dimension $p$, which is consistent with most neural network-based methods. Consequently, the practical limitations of our approach are similar to those of typical deep learning methods. >**Q7**. How can the learned representations be interpreted in a way that provides meaningful insights into the relationship between the predictors and the response variable? **A7**. For classification tasks like MNIST, the 2D visualization in Figure 1 offers meaningful insights into the relationship between predictors and the responses. Additionally, we will include scatter plots where colors correspond to Y (e.g., brighter red for larger Y and light green for smaller Y). These visualizations will provide a clearer understanding of the learned representations. Additionally, we appreciate your suggestion—conducting feature importance analysis will further enhance the interpretability of our method.
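For concreteness, the default width schedule described in **A4** (expand once to $2^{D_1+1}$, then halve down to 16, ending in a scalar output) can be sketched as follows. This is our own illustration of the stated schedule; the function name and list representation are not from the paper.

```python
import math

def default_hidden_dims(p):
    """Sketch of the stated schedule 2**D1, 2**(D1+1), 2**D1, ..., 16, 1
    with D1 = floor(log2(p)) + 1, where p is the input dimension."""
    d1 = math.floor(math.log2(p)) + 1
    dims = [2**d1, 2**(d1 + 1)]   # expansion phase
    w = 2**d1
    while w >= 16:                # contraction phase: 2**D1, 2**(D1-1), ..., 16
        dims.append(w)
        w //= 2
    dims.append(1)                # final scalar output
    return dims
```

For example, an input dimension of $p=100$ gives $D_1=7$ and widths $128, 256, 128, 64, 32, 16, 1$, matching the expansion-contraction pattern described above.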
Summary: This paper introduces a unified framework for nonlinear sufficient dimension reduction based on classification ensembles. The framework proposed in this paper essentially includes kernel principal SVM, a nonlinear sufficient dimension reduction method using the reproducing kernel Hilbert space and SVM, as a special case. Here, the neural network function class is considered as a replacement for the reproducing kernel Hilbert space to implement a more flexible deep nonlinear sufficient dimension reduction. The authors demonstrate theoretical unbiasedness of the optimal solution of the population objective function and a non-asymptotic upper bound for the estimation error. All these results demonstrate considerable competitiveness of the newly proposed deep nonlinear sufficient dimension reduction method. Claims And Evidence: Yes they are. Methods And Evaluation Criteria: Make sense. Theoretical Claims: Yes. They are solid. Experimental Designs Or Analyses: Experiments are good. Supplementary Material: I looked at it briefly. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: Provides both theoretical and empirical soundness of the proposed method. Weaknesses: In the Preliminary part Section 2.1, when the authors introduce the nonlinear dimension reduction problem, it would be better if there could be a concrete example, as this introduction seems very abstract. Other Comments Or Suggestions: N/A Questions For Authors: I wonder if the authors can present some concrete meanings of Assumptions 4.2-4.5 listed in Section 4 (Theoretical Results). They seem to be very theoretical. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for acknowledging our theoretical contribution. We will address your concerns in the revision and below are our detailed responses. > **Q1**. In the Preliminary part Section 2.1, when the authors introduce the nonlinear dimension reduction problem, it would be better if there could be some concrete example, as this introduction seems very abstract. **A1**. To illustrate the motivation behind sufficient dimension reduction (SDR), we present two simple examples. ### Example 1: Classification Problem Consider the classification problem: $$ Y = \mathbb{1} \left( X_1^2 + X_2^2 - 1 + \varepsilon > 0 \right), $$ where $X_1, X_2 \sim \mathcal{N}(0,1)$ and $\varepsilon \sim \mathcal{N}(0, 0.2)$. Then, the normalized SDR function is given by $$ f_0 (X) = \frac{X_1^2 + X_2^2}{2} - 1. $$ Here, a linear transformation is applied to ensure $E\big(f_0 (X)\big)=0,\operatorname{Var}\big(f_0 (X)\big)=1$. ### Example 2: Multivariate Regression Problem Now, consider a multivariate regression problem where $Y \in \mathbb{R}^2$: $$ Y_1 = \sin(X_1 + X_2^2) + \varepsilon_1, \quad Y_2 = \cos(X_3 X_4) + \varepsilon_2, $$ where $X_1, X_2, X_3, X_4 \sim \mathcal{N}(0,1)$ and $\varepsilon_1, \varepsilon_2 \sim \mathcal{N}(0, 0.2)$. In this case, the normalized SDR function is $$ \boldsymbol{f}_0(X) = (a_0 \sin(X_1 + X_2^2) + b_0, a_1\cos(X_3 X_4) + b_1), $$ where $a_i$ and $b_i$, $i=0,1$, are introduced to ensure $E\big(\boldsymbol{f}_0 (X)\big)=0,\operatorname{Var}\big(\boldsymbol{f}_0(X)\big)=I_2$. ### General Formulation for Regression More generally, we consider the model $$ Y = \boldsymbol{g}(\boldsymbol{f}_0(X)) + \boldsymbol{\epsilon}, $$ where $\boldsymbol{f}_0: \mathbb{R}^p \to \mathbb{R}^d$ and $\boldsymbol{g}: \mathbb{R}^d \to \mathbb{R}^q$. Our goal is to identify the low-dimensional nonlinear representation $\boldsymbol{f}_0(X)$. In this general setting, traditional linear sufficient dimension reduction breaks down. >**Q2**.
I wonder if the authors can present some concrete meanings of Assumptions 4.2-4.5 listed in Section 4 Theoretical Results. They seem to be very theoretical. **A2**. **Assumption 4.2** is a standard boundedness assumption commonly used in statistical learning theory. It states that both the ground-truth function and the neural network are bounded. **Assumption 4.3** assumes that $f^*$ is $\beta$-Hölder smooth. A function $f: [0,1]^p \to \mathbb{R}$ is $\beta$-Hölder continuous if there exist $\beta \in (0,1]$ and $\zeta > 0$ such that $$ \left| f(x_1) - f(x_2) \right| \leq \zeta \|x_1 - x_2\|_2^{\beta}, \quad \text{for any} \quad x_1, x_2 \in [0,1]^p. $$ Since $\|x_1 - x_2\|_2^{\beta_1} \geq \|x_1 - x_2\|_2^{\beta_2}$ for $\beta_1 \leq \beta_2$ and $\|x_1 - x_2\|_2 \leq 1$, it follows that functions with smaller $\beta$ exhibit more pronounced variations in local regions. In other words, the parameter $\beta$ quantifies the smoothness or regularity of the function: - A higher $\beta$ corresponds to a smoother function with fewer abrupt changes. - A lower $\beta$ indicates a less smooth, more irregular function with greater variation. The smoothness level $\beta$ determines how easily the function can be approximated, with larger $\beta$ indicating easier approximation. Finally, $\beta$ and the input dimension $d$ jointly determine the convergence rate of our estimator, please see Corollary 4.7. **Assumption 4.4** requires that the convex loss function is Lipschitz continuous, a condition satisfied by the hinge loss. **Assumption 4.5** states that $$\log N\left(\epsilon, \mathcal{F_n},\|\cdot\|_{L_2(Q)}\right) \leq C V \left(1+\log (1 / \epsilon)\right). $$ Here, $V$ can be regarded as the VC dimension of a certain VC class $\mathcal{F}_n$, which always satisfies this assumption. For example, the VC dimension of the neural network class is determined by parameters such as the width and depth of the network. 
The VC dimension is a measure of the size (capacity, complexity, or expressive power) of a hypothesis class $\mathcal{F_n}$: - A higher VC dimension indicates a more powerful hypothesis class, which can lead to overfitting. - A lower VC dimension reduces flexibility but may lead to underfitting. Hence the performance of our estimator is relative to the trade-off between statistical error $V/n$ and approximation error $\inf_{f \in \mathcal{F_n}} \|f - f^*\|_{\infty}^2$, both affected by the complexity $V$ of $\mathcal{F_n}$: - A larger, more complex class $\mathcal{F}_n$ increases $V$, leading to lower approximation error but higher statistical error. - A smaller, simpler class $\mathcal{F_n}$ decreases $V$, reducing statistical error but potentially increasing approximation error. The result of the trade-off is formalized in Corollary 4.7.
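Returning to Example 1 in **A1** above, the claimed normalization $E\big(f_0(X)\big)=0$ and $\operatorname{Var}\big(f_0(X)\big)=1$ can be checked with a quick Monte Carlo sketch (our own illustration, not part of the rebuttal): for $X_1, X_2 \sim \mathcal{N}(0,1)$, $E[X_i^2]=1$ and $\operatorname{Var}(X_i^2)=2$, so $f_0(X) = (X_1^2+X_2^2)/2 - 1$ has mean $0$ and variance $(2+2)/4 = 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)

# Normalized SDR function from Example 1: f0(X) = (X1^2 + X2^2)/2 - 1.
f0 = (x1**2 + x2**2) / 2 - 1

# Empirical mean should be near 0 and empirical variance near 1.
print(f0.mean(), f0.var())
```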
Summary: The paper introduces a general framework for nonlinear sufficient dimension reduction (SDR) based on the previous works on principal support vector machines (PSVM). Classical algorithms such as linear SDR and kernel PSVM are special cases of this new framework. When applying deep neural networks to this framework, we obtain the deep principal support vector machine (DPSVM) algorithm. This paper also provides theoretical guarantees for the effectiveness of this new framework and the DPSVM algorithm, including the conditions for the unbiasedness of the optimal solutions, and the minimax convergence rate for the mean square estimation error under a Hölder assumption when applying deep neural networks to this framework. Claims And Evidence: The statements of this paper are clear. The new algorithm introduced by this paper is based on a series of previous works, and a number of references and experimental data are provided. Methods And Evaluation Criteria: The nonlinear SDR framework introduced in this paper has the following strengths: 1. It is very flexible and can be applied to various types of model spaces, including reproducing kernel Hilbert spaces and deep neural network spaces. 2. In contrast with many previous works on PSVM, the new framework allows the response variable to be multi-dimensional, implying that it is suitable for a wider variety of regression and classification tasks. There are a number of limitations in this paper: 1. The convergence results (Theorem 4.6 and Corollary 4.7) only analyze the properties of the empirical loss minimizer, but do not take the network training into consideration. Since the training speed may be one of the limitations of DPSVM, the analysis of the network optimization is a critical issue for DPSVM. 2. The structure dimension estimation discussed in Section 3.2 lacks further theoretical discussion. 3. The DPSVM algorithm requires the binary discretization of the response variable.
For a continuous response variable, the error arising from discretization is not discussed in this paper. Theoretical Claims: We have checked the proofs and found no evident mistakes. Experimental Designs Or Analyses: The paper provides adequate experimental results for the comparison between DPSVM and other nonlinear SDR algorithms on several artificial datasets such as MNIST, demonstrating impressive competitiveness of the new algorithm. As a potential limitation of the experiment part, Section 6 mentions that one of the limitations of DPSVM is the speed, while this paper does not provide experimental results to compare the speed of DPSVM with those of other algorithms. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: None. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: The DPSVM algorithm discussed in this paper requires the discretization of the response variable, which may cause a deceleration of training speed. Are there any nonlinear SDR algorithms that can directly deal with a continuous response? If so, how do they compare with DPSVM? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the insightful feedback. Below are our responses. > **Q1** The convergence results (Theorem 4.6 and Corollary 4.7) only analyze the properties of the empirical loss minimizer, while does not put the network training into consideration. Since the training speed may be one of the limitations of DPSVM, the analysis of the network optimization is a critical issue for DPSVM. **A1**. Thank you for your valuable opinion. The training loss or optimization error is often overlooked in the statistical literature. For instance, Schmidt-Hieber (2020) establishes a theoretical bound for deep neural networks in nonparametric regression, where he defines the gap between the estimator derived from training and the true global empirical minimizer as $\Delta_n$. However, he does not provide any theorems specifically addressing this term, instead treating it as either negligible or dominated by statistical error. The analysis of the training process in neural networks is indeed complex and demands considerable effort. Some recent works, such as Jentzen & Welti (2023) and Beck, Jentzen & Kuckuck (2022), incorporate training error into their analysis of least squares regression. However, as noted by the authors, the resulting convergence rates are far from optimal and are hindered by the curse of dimensionality. While we could adopt a similar decomposition—including statistical, optimization, and approximation errors—such an approach may obscure the core theoretical contribution of our work. * Schmidt-Hieber, J. (2020). Nonparametric regression using deep neural networks with ReLU activation function. The Annals of Statistics, 48(4), 1875–1897. * Beck, C., Jentzen, A., & Kuckuck, B. (2022). Full error analysis for the training of deep neural networks. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 25(02), 2150020. * Jentzen, A., & Welti, T. (2023). 
Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialization. Applied Mathematics and Computation, 455, 127907. >**Q2** The structure dimension estimation discussed in section 3.2 lacks further theoretical discussion. **A2**. Yes, you are correct. Structure dimension estimation is inspired by traditional linear sufficient dimension reduction (SDR) methods such as SIR (Li, 1991). Here, we follow the ladle method from Luo & Li (2016) and construct a positive-definite matrix, $E_n (\widehat{\mathbf{f}}^\intercal(X)\widehat{\mathbf{f}}(X))$, which converges to $E ({\mathbf{f}}^\intercal(X){\mathbf{f}}(X))$. Luo & Li (2016) have demonstrated that the ladle method consistently selects the true structure dimension as $n$ approaches infinity. We have used this method in artificial datasets, and will subsequently add simulation experiments to verify the effectiveness of structure dimension estimation. * Li, K.-C. (1991). Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86(414), 316–327. * Luo, W., & Li, B. (2016). Combining eigenvalues and variation of eigenvectors for order determination. Biometrika, 103(4), 875–887. >**Q3**. The DPSVM algorithm requires the binary discretization of the response variable. For the continuous response variable, the error arising from discretization is not discussed in this paper. **A3**. If we discretize $Y$ into $\tilde{Y}$, our goal is to find $f$ such that $\tilde{Y} \perp X \mid f(X)$. It is known that $$Y \perp X \mid f(X) \implies \tilde{Y} \perp X \mid f(X),$$ but the reverse does not necessarily hold. However, both functions $f$ belong to the central class, which is fundamental for sufficient dimension reduction (SDR). In nonlinear SDR, the central class forms an infinite-dimensional space when $X$ is real-valued. Since fully recovering this space is infeasible, we can only approximate it. 
Therefore, the error introduced by discretization isn't a big issue here. >**Q4**. This paper does not provide experimental results to compare the speed of DPSVM with those of other algorithms. **A4**. Theoretical computational costs can be derived, but for brevity, we present only the final result. Let $n, p, \mathcal{L}, \mathcal{N}$ be the sample size, input dimension, number of hidden layers, and maximum layer width, respectively. With batch size $b$ and epochs $t$, our method has a total cost of $$\mathcal{O}(hnt\mathcal{L} \max\\{p, \mathcal{N}\\}^2 + h^2n + h^3),$$ where $\mathcal{O}(nt\mathcal{L} \max\\{p, \mathcal{N}\\}^2)$ accounts for training per binarization, $\mathcal{O}(h^2n)$ arises from matrix multiplication in $E_n (\widehat{\mathbf{f}}^\intercal(X)\widehat{\mathbf{f}}(X))$, and $\mathcal{O}(h^3)$ is due to eigendecomposition of an $h \times h$ matrix. In contrast, deep methods without binarization incur $$\mathcal{O}(nt\mathcal{L} \max\\{p, \mathcal{N}\\}^2).$$ We will add relevant experiments and theoretical complexity analysis in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will keep the current score --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your positive evaluation of our work.
Universal Neural Optimal Transport
Accept (poster)
Summary: The authors introduced UNOT (Universal Neural Optimal Transport), a novel framework designed to efficiently predict entropic optimal transport distances and plans between discrete measures of varying resolutions. Motivated by the universal domain adaptation methodology, they utilized Fourier Neural Operators, which are capable of processing inputs of different sizes, allowing UNOT to generalize across diverse datasets. Claims And Evidence: The paper claims that minimizing the proposed self-supervised bootstrapping loss minimizes the ground-truth loss. However, the theoretical justification is somewhat vague, and while Proposition 5 suggests a relationship between minimizing the bootstrapping loss and the ground-truth loss, it depends on assumptions that may not hold in all scenarios. Methods And Evaluation Criteria: The evaluation criteria are questionable. "0.01 relative error on the OT distance" is not the best metric because it is very sensitive to perturbations. On the image domain, calculating the classic FID distance would be a more reasonable choice. Theoretical Claims: Theoretical claims and supporting evidence are correct and provide the necessary information about the method. Experimental Designs Or Analyses: The selected datasets are toy and low-dimensional, and the authors did not provide a comparison with the universal domain-adaptation methods https://openaccess.thecvf.com/content_CVPR_2019/papers/You_Universal_Domain_Adaptation_CVPR_2019_paper.pdf. The calculation of Wasserstein barycentres and their accuracy was only compared visually with the ground-truth barycentres. Supplementary Material: The supplementary material is good. The authors provided valuable additional insights that enhance the understanding of the main results presented in the paper, as well as detailed experimental setups and proofs that support the findings.
Relation To Broader Scientific Literature: The paper is based on the adaptation of the well-known Neural Optimal Transport framework and universal domain adaptation methods. Essential References Not Discussed: The related work section covers the research area. Other Strengths And Weaknesses: The model's performance may degrade with significantly higher resolutions than those encountered during training, indicating a potential scalability issue that needs to be addressed. While UNOT is claimed to generalize across datasets, the paper does not provide sufficient evidence for its effectiveness in highly diverse or out-of-distribution scenarios. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thorough review! We will address your concerns in the following. >**Proposition 5 [...] depends on assumptions that may not hold** The only assumption we make in Proposition 5 is that $\sum_i g_i=\sum_i g_{\phi_i}=\sum_i g_{\tau_{k_i}}=0$. Since optimal potentials of the dual OT problem can be shifted by constant factors, we simply choose $g$ and $g_{\tau_k}$ to have zero sum. We could also shift $g_\theta$ to have zero sum in practice. However, we have updated the proof and show that the same result with the same constant holds _without_ assuming that $\sum_i g_{\phi_i}=0$, i.e. only assuming $\sum_i g_i=\sum_i g_{\tau_{k_i}}=0$. >**"0.01 relative error on the OT distance" is not the best metric [...] the classic FID [...] is [...] more reasonable** The purpose of UNOT is to predict OT distances with high accuracy. Hence, using the OT distance itself for evaluation seems reasonable, and is in line with other works. FID, however, is usually used to evaluate the quality of generated distributions for generative image models. Since UNOT does not generate images, we do not see how FID could be applied in our setting, but if you could explain what you mean, we are happy to include it. >**The selected datasets are toy and low-dimensional** While the datasets are toy datasets, we believe they are not low-dimensional. We evaluate on $64\times 64$-dimensional images, which corresponds to distributions in $4096$ dimensions (note, in particular, that in this case the cost matrix and the transport plan live in $4096^2\approx 17M$ dimensions). In comparison, MetaOT only trains and tests on $28\times 28=784$-dimensional images, more than $5$ times smaller than us. We have also run experiments on $128\times 128=16384$-dimensional data (here the cost matrix is already ~$268M$-dimensional), and the results look very promising so far, with accuracy almost recovering that of the $64\times 64$-dimensional setting. 
We also refer to our response to Reviewer **JPz7** regarding additional experiments with varying costs and on the unit sphere domain, and we will include real-world Euclidean and non-Euclidean test datasets in the evaluation. >**the authors did not provide a comparison with the universal domain-adaptation methods** We assume that you mention this paper because their method also features an adversarial training objective between a predictor and two discriminators (equation (4) therein). However, our objective is more closely related to that of GANs, and the similarities to the paper you link are fairly limited as far as we can tell. >**The calculation of Wasserstein barycentres and their accuracy was only compared visually** We did not include quantitative results as we thought they were difficult to interpret, but we have added them here for completeness and can add them to the camera-ready version too. We report the average $W_2$ distance from the predicted barycenter (after 1 Sinkhorn iteration) to the true barycenter on MNIST.

| Method | Avg. $W_2$ to true barycenter |
|-|-|
| **UNOT** | 0.021 ± 0.011 |
| **Gauss** | 0.033 ± 0.018 |
| **Ones** | 0.057 ± 0.034 |

>**The model's performance may degrade with significantly higher resolutions than those encountered during training** We evaluate UNOT in much higher dimension than e.g. MetaOT, and we have conducted additional experiments in higher-dimensional settings ($128\times 128$) with very promising results, suggesting that UNOT can be scaled up. However, you are correct that the performance may degrade with significantly higher-dimensional inputs than those seen during training, but this is not due to an inherent limitation of our approach, but rather an impossible learning task. If a model only sees up to $n\times n$-dimensional distributions during training, it is impossible to accurately learn $(n+1)\times (n+1)$ distributions (you can always come up with two different $(n+1)\times (n+1)$ distributions that look identical when rescaled to $n\times n$).
Empirically, however, UNOT performs well on inputs up to 10% larger than the largest distributions it has seen during training without loss in performance. >**While UNOT is claimed to generalize across datasets, the paper does not provide sufficient evidence for its effectiveness in highly diverse or out-of-distribution scenarios** We respectfully disagree, as we believe our test datasets are highly diverse. Not only are the individual datasets very different (compare, e.g., BEAR with low intrinsic dimension to LFW with high intrinsic dimension, cf. Figure 11), but by adding cross-datasets such as BEAR-LFW, we cover a very diverse subset of the space. Furthermore, our test datasets cover a wide range of dimensions ($28\times 28$, $48\times 48$, and $64\times 64$). UNOT performs very well on _all_ test datasets, suggesting that its generalization capabilities are strong. Also note that _all_ our test datasets are out-of-distribution, as the model does not see any of them during training. We hope this answers your questions!
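For concreteness, the constant-shift invariance of dual potentials used in the Proposition 5 discussion above can be illustrated numerically; a minimal numpy sketch (not the paper's implementation; the toy problem, $\varepsilon$, and all values are arbitrary):

```python
import numpy as np

def sinkhorn_potentials(mu, nu, C, eps=0.1, iters=500):
    """Plain Sinkhorn; returns dual potentials (f, g) with P_ij = exp((f_i + g_j - C_ij) / eps)."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return eps * np.log(u), eps * np.log(v)

def ot_cost(f, g, C, eps=0.1):
    P = np.exp((f[:, None] + g[None, :] - C) / eps)
    return float((P * C).sum())

rng = np.random.default_rng(0)
n = 16
mu = rng.random(n); mu /= mu.sum()
nu = rng.random(n); nu /= nu.sum()
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2

f, g = sinkhorn_potentials(mu, nu, C)
# Optimal potentials are only unique up to a constant shift (f + c, g - c),
# so g can always be normalized to have zero sum without changing the plan or the cost:
c = g.mean()
f2, g2 = f + c, g - c
assert abs(g2.sum()) < 1e-8
assert abs(ot_cost(f, g, C) - ot_cost(f2, g2, C)) < 1e-10
```

This is why assuming a zero-sum $g$ loses no generality: the shift leaves every term $f_i + g_j$, and hence the plan and the OT cost, unchanged.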
Summary: This paper introduces the universal neural optimal transport (UNOT) solver, a framework designed to efficiently solve entropic-regularized optimal transport (OT) problems. Unlike existing neural OT methods that can only handle input distributions of a fixed dimension, UNOT leverages Fourier neural operators (FNOs) to predict transport plans of variable resolutions, enabling generalization across datasets and input dimensions. ## Update after rebuttal I have raised my score after the authors fixed the presentation of the theorem, but I still think the significance of Theorem 3 is limited, and the paper needs to be restructured to better highlight its novelty in other aspects. Claims And Evidence: I suspect that some of the claims in the paper are not well supported by convincing evidence, especially in the theoretical analysis. Please see the "Theoretical Claims" section below for details. Methods And Evaluation Criteria: The numerical evaluation seems reasonable to me. Theoretical Claims: I have checked the proofs of the theorems, and I feel some of the claims may be incorrect. In Theorem 3, the inequality involves the term $G_{\theta}^{-1}(x)$, and I suppose it stands for the inverse of the mapping $G_{\theta}:\mathbb{R}^d\rightarrow\mathbb{R}^d$. So basically the theorem implicitly claims that $G_{\theta}$ is invertible, but we can get a simple counterexample. By construction, $G_{\theta}$ is the output of a ReLU activation, so by carefully choosing $z$, $\mathrm{NN}\_\theta(z)+\lambda z$ can be made negative, and then the output of $G_{\theta}(z)$ would be a zero vector. By slightly perturbing the value of $z$ to $z'\neq z$, $G_{\theta}(z')$ will still be a zero vector, which shows that $G_{\theta}$ is not invertible. Experimental Designs Or Analyses: The experimental design seems reasonable. Supplementary Material: I have checked the supplementary material.
Relation To Broader Scientific Literature: The proposed UNOT method may be useful for machine learning models that involve OT computation. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: I think some of the notation can be made clearer and more precise. For example, in Theorem 3, the meaning of the function $\rho(x)$ is unclear, although it can be inferred that it may stand for the density function. Similarly, it should be made clear that $\mathcal{N}(\cdot|0,I)$ stands for the density function of the standard normal distribution. Questions For Authors: I suggest the author(s) check the theoretical claims and fix any potential flaws in the proofs. Code Of Conduct: Affirmed. Overall Recommendation: 2
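The ReLU non-invertibility counterexample sketched in the Theoretical Claims section above is easy to verify numerically; a minimal illustration (the one-layer "network" and all weights are made up purely for illustration):

```python
import numpy as np

def G(z, W, b, lam):
    """G(z) = ReLU(NN(z) + lam * z) with a one-layer stand-in network NN(z) = W @ z + b."""
    return np.maximum(W @ z + b + lam * z, 0.0)

# Illustrative weights chosen so the pre-activation is negative on a whole
# region: every z in that region maps to the zero vector.
W = -np.eye(2)               # NN(z) = -z + b
b = np.array([-1.0, -1.0])
lam = 0.5                    # pre-activation: -0.5 * z - 1, negative for z > -2

z1 = np.array([0.0, 0.0])
z2 = np.array([1.0, 1.0])
assert not np.allclose(z1, z2)
assert np.allclose(G(z1, W, b, lam), 0.0)
assert np.allclose(G(z2, W, b, lam), 0.0)  # two distinct inputs, same output: not invertible
```

Two distinct inputs collapse to the same (zero) output, so the post-ReLU map cannot be globally invertible, exactly as the counterexample argues.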
Rebuttal 1: Rebuttal: Thank you for your helpful review! We will include definitions of $\rho(x)$ and $\mathcal{N}(\cdot |0,I)$ in Theorem 3, thank you for catching that. Otherwise, your main concern seems to be regarding the proof of Theorem 3. **The statement of Theorem 3 is indeed correct (if you interpret $G^{-1}(x)$ as a particular point in the preimage, see below).** However, this is only immediately clear for $x\in\mathbb{R}^d_{>0}$, as opposed to $x\in\mathbb{R}^d_{\ge 0}$ as stated in the paper; indeed, your counterexample shows that $G_\theta$ is not invertible if $x=0$ (or at least one entry in $x$ equals $0$). Hence, we could simply rewrite Theorem 3 for $x\in\mathbb{R}^d_{>0}$ instead of $x\in\mathbb{R}^d_{\ge 0}$ (on this set, $G$ is indeed invertible), which would be sufficient for our purposes as we use the Sinkhorn algorithm to generate targets from the generated samples, and the inputs to the Sinkhorn algorithm need to be positive everywhere. However, it is also straightforward to adapt the proof of Theorem 3 to work for any non-negative $x\in\mathbb{R}^d_{\ge 0}$: Let $\tilde{G}(z)=\text{NN}(z)+\lambda z$ (we drop the subscript $\theta$ for ease of notation here), i.e. $\tilde{G}$ is equal to $G$ without the ReLU activation. Denote by $\rho_{\tilde{G},\rho_z}$ the density of $\rho_z$ pushed forward by $\tilde{G}$ (this interface won't render the pushforward symbol '#' correctly, hence we slightly adapt the notation compared to the paper). Then the exact same proof as for Theorem 3, but applied to $\tilde{G}$, yields $$ \rho_{\tilde{G},\rho_{z}}(x)\ge \frac{1}{(L+\lambda)^d}\mathcal{N}\left({\tilde{G}}^{-1}(x)|0,I\right) $$ for _any_ $x\in\mathbb{R}^d$, where we get invertibility of $\tilde{G}$ by Theorem 1 in [1] (cmp. our proof of Theorem 3 in Appendix B). Now for any non-negative $x\in\mathbb{R}^d_{\ge 0}$, we clearly have $$ \rho_{G,\rho_{z}}(x)\ge \rho_{\tilde{G},\rho_{z}}(x), $$ as for any $z$ with $\tilde{G}(z)=x$, we also have $G(z)=x$. 
Combining these two inequalities yields $$ \rho_{G,\rho_{z}}(x)\ge\frac{1}{(L+\lambda)^d}\mathcal{N}\left({\tilde{G}}^{-1}(x)|0,I\right) $$ for any non-negative $x\in\mathbb{R}^d_{\ge 0}$. **This shows that the statement in Theorem 3 is indeed correct, but there could be many points in the preimage $G^{-1}(x)$, and we need to pick the unique $\tilde{G}^{-1}(x)$ amongst those for the statement to hold.** Thank you for pointing out this inaccuracy, and we will adapt the statement and proof accordingly in the updated version of the paper, and also update the wording of Theorem 3 to make it more clear that $\tilde{G}$ is indeed invertible under the assumptions of Theorem 3. (Corollary 4 will be updated accordingly as well.) As this seems to have been your only concern, we would kindly ask you to reconsider your score. [1] Behrmann, J., Grathwohl ,W., Chen, R.T.Q., Duvenaud, D., and Jacobsen, J.-H. Invertible Residual Networks. Proceedings of the International Conference on Machine Learning, 2019. doi:10.48550/ARXIV.1811.00995. URL https://arxiv.org/abs/1811.00995. --- Rebuttal Comment 1.1: Comment: I thank the authors for the further clarification, after which I better understand what the authors want to convey in Theorem 3, and I will raise my score to reflect the fix. However, now I think the theorem is less significant. It basically says that the pushforward distribution $G_\theta\sharp\rho_z$ has positive densities on $\mathbb{R} _{\ge 0}^d$, so that it has the possibility to generate any nonnegative vector $x\in \mathbb{R} _{\ge 0}^d$, which is further used to construct discrete distributions $(\mu,\nu)$. However, this "universal generator" is trivial, since any continuous mapping from $\mathbb{R}^d$ to $\mathbb{R} _{\ge 0}^d$ results in a continuous distribution that is supported on $\mathbb{R} _{\ge 0}^d$. For example, simply take the elementwise absolute value of a normal random vector will have the same effect. 
I would not say the theorem is incorrect, but it just does not characterize important properties of the generator. ====== I just realize that my additional reply is not visible to the authors, so I post it here. Thanks for the additional explanations. I recognize the various applications of UNOT, but just want to first make some of the claims precise. Back to the problem, what I mean is that if a continuous function $f$ can map the set $\mathbb{R}^d$ to $\mathbb{R} _{\ge 0}^d$, or more precisely, $f(\mathbb{R}^d)\coloneqq\\{f(x):x\in \mathbb{R}^d\\}=\mathbb{R} _{\ge 0}^d$, then $f\sharp \rho_z$ should have a positive density on $\mathbb{R} _{\ge 0}^d$. This can be shown in the following way (please correct me if there is any mistake). Let $Y=f(Z)$, where $Z\sim N(0,I)$. Suppose that there is an open set $D\subset \mathbb{R} _{\ge 0}^d$ such that the density of $Y$ is zero, then $P(Y\in D)=0$. Let $O=f^{-1}(D)=\\{z:f(z)\in D\\}$, and then by the definition of continuous functions, $O$ must be an open set, so we must have $P(Z\in O)>0$. However, by definition, $P(Z\in O)=P(Y\in D)$, so there is a contradiction. As a result, we must have that $Y$ has positive densities on every open set of $\mathbb{R} _{\ge 0}^d$. If I do it correctly, then it means that as long as the range of $f$ is exactly $\mathbb{R} _{\ge 0}^d$, then $f\sharp \rho_z$ meets your requirement. This is true for many neural networks, since a simple linear function with a ReLU activation suffices. The absolute value function satisfies since its range is correct, whereas the example $f(z)=0$ you mentioned does not since its range is not $\mathbb{R} _{\ge 0}^d$. To summarize, what I mean is that Theorem 3 is not strongly associated with the design of your generator. A huge class of neural networks would also have this property. 
Of course, the lower bound requires stronger conditions, but given that a normal distribution is mostly concentrated around its mean, for most areas in $\mathbb{R} _{\ge 0}^d$, the density is quite small. Hence, the lower bound is practically very close to zero on those areas. --- Reply to Comment 1.1.1: Comment: Thank you for raising your score in response to our clarification! There still seems to be some misunderstanding about the implications of Theorem 3, which we address in the following. In particular, **we argue that the theorem is not at all trivial.** > **Any continuous mapping from $\mathbb{R}^d$ to $\mathbb{R}^d_{\ge 0}$ results in a continuous distribution that is supported on $\mathbb{R}^d_{\ge 0}$. For example, simply take the elementwise absolute value of a normal random vector will have the same effect.** This statement sounds a bit misleading. While it is true that the pushforward of a Gaussian $\rho_z$ under _any_ measurable map $f$ (not necessarily continuous) from $\mathbb{R}^d$ to $\mathbb{R}^d_{\ge 0}$ is supported on $\mathbb{R}^d_{\ge 0}$ (simply by virtue of $f$ mapping into $\mathbb{R}^d_{\ge 0}$ by definition), this does _not_ imply that $f$#$\rho_z(x) > 0$ for all $x\in\mathbb{R}^d_{\ge 0}$. In the example you give ($f_i(z)=|z_i|$, i.e. the element-wise absolute value) we indeed have $f$#$\rho_z(x) > 0$ for all $x\in\mathbb{R}^d_{\ge 0}$. However, it is easy to see that this is not the case for any measurable function $f$, also not if $f$ is continuous (consider the simple case $f(z)=0$). So you are right that constructing a function $f:\mathbb{R}^d\to\mathbb{R}^d_{\ge 0}$ such that $f$#$\rho_z(x) > 0$ for all $x\in\mathbb{R}^d_{\ge 0}$ is trivial, but given a function $f$ - such as our generator $G_\theta$ - it is by no means trivial to show that the condition $f$#$\rho_z(x) > 0$ for all $x\in\mathbb{R}^d_{\ge 0}$ holds. 
In fact, **this will _not_ be true in general for an arbitrary (trained) neural network.** However, we prove that **our generator has this property independent of the weights $\theta$** (we also provide a specific lower bound on the density, which is stronger than just showing that it is positive everywhere), **meaning _at any stage_ in training, it can generate _any_ non-negative $x\in\mathbb{R}^d_{\ge 0}$** (and thus, any pair of discrete distributions $(\mu,\nu)$ of the right dimension). This shows that Theorem 3 states a non-trivial property of the generator. We hope this addresses your concerns about Theorem 3. As the proof and implications of Theorem 3 seem to have been your only concerns about the paper, we hope our clarifications will let you consider accepting the paper. In light of this, some details from the other rebuttals might be relevant: as you mention in your review that UNOT “may be useful for machine learning models that involve OT computation”, we refer to our reply to Reviewer **dxme**, where we highlight various potential applications of UNOT in and beyond machine learning (and also mention that we sped up our previous implementation by 2.5x). Furthermore, additional experiments in response to Reviewer **JPz7** show that UNOT works very well out-of-the-box with other costs and even on the unit sphere domain with spherical data; in the camera-ready version of the paper, we will include this setting with real-world spherical datasets. Once again, thank you for taking the time to review! ===================== EDIT: Thank you for updating your comment! We reply to your update below. >**[...] as long as the range of $f$ is exactly $\mathbb{R}^d_{\ge 0}$, then $f$#$\rho_z$ meets your requirement. This is true for many neural networks, since a simple linear function with a ReLU activation suffices.** Your proof that $f$#$\rho_z$ has positive density on $\mathbb{R}^d_{\ge 0}$ if $f(\mathbb{R}^d)=\mathbb{R}^d_{\ge 0}$ seems correct.
However, there are two important distinctions between this statement and Theorem 3: **1)** The assumption that **$f(\mathbb{R}^d)=\mathbb{R}^d_{\ge 0}$ does _not_ hold in general for _any_ neural network, _not even for a (one-layer) linear network with ReLU_**; again, $f(z)=0$ is a (linear) counterexample, but more generally, linear functions $f:\mathbb{R}^d\to \mathbb{R}^d_{\ge 0}$ are _not_ guaranteed to fulfill $f(\mathbb{R}^d)=\mathbb{R}^d_{\ge 0}$ (as they can map to a lower-dimensional subspace of $\mathbb{R}^d_{\ge 0}$). Similarly, multi-layer non-linear neural networks are not guaranteed to fulfill this property either (and in practice, will not). Hence, this is a non-trivial property of our generator. **2)** In addition to showing that the density is positive everywhere, we also provide a specific lower bound on the density. You are correct that this lower bound can be very small in practice. The theorem still shows that the generator _can_ produce any pair of distributions during training (i.e., is not restricted by its architecture), and due to the adversarial training formulation, it will learn to generate the most useful distributions automatically (which will, in practice, not cover the entire space of course). We hope this clarifies Theorem 3, and hope that you do not have any remaining concerns for the paper. As the rebuttal period is coming to an end, once again thank you for your thorough responses!
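For intuition on why the pre-activation map $\tilde{G}(z)=\text{NN}(z)+\lambda z$ discussed above is invertible (the invertible-residual-network condition of Behrmann et al. requires the Lipschitz constant of $\text{NN}$ to be below $\lambda$), the inverse can be computed by fixed-point iteration; a minimal sketch with a made-up contractive map standing in for $\text{NN}_\theta$:

```python
import numpy as np

def nn(z):
    # Stand-in for NN_theta: smooth, with Lipschitz constant <= 0.3 < lam.
    return 0.3 * np.tanh(z)

lam = 1.0

def G_tilde(z):
    return nn(z) + lam * z

def invert(x, iters=100):
    """Solve G_tilde(z) = x via the fixed-point iteration z <- (x - nn(z)) / lam,
    which contracts because Lip(nn) / lam < 1."""
    z = x / lam
    for _ in range(iters):
        z = (x - nn(z)) / lam
    return z

x = np.array([0.7, -1.2, 3.0])
z = invert(x)
assert np.allclose(G_tilde(z), x, atol=1e-10)  # round-trip recovers x
```

With a contraction factor of 0.3 per step, the iteration converges to machine precision well within 100 steps; this is the same mechanism that makes the residual structure of $\tilde{G}$ invertible, whereas the example above (with Lipschitz constant 1 > 0.5) violates the condition.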
Summary: The paper presents UNOT - a method for learning a universal approximator of discrete OT plans/potentials/costs. Interestingly, the method can deal with different discrete resolution scales simultaneously through a specific parameterization of the universal learned OT potential (via an FNO operator). Several experiments on images (considered as discrete 2d distributions of different resolutions) validate the proposed methodology. Claims And Evidence: Ok Methods And Evaluation Criteria: Yes Theoretical Claims: I didn’t check the proofs carefully, but the results seem reasonable Experimental Designs Or Analyses: For the problem at hand, the experimental section seems to be comprehensive enough. However, I have several comments. **General**: 1. I do not understand the practical implication of section 4.4. Figure 7 shows that, given an image, we can transform from noise to this image. Actually, we can do the same by adding noise to the image… 2. Table 2: time for LFW: $10^{-2}$; time for BEAR: $8.0\cdot10^{-3}$. Then, time for LFW-BEAR should be ~$9\cdot10^{-3}$? **UNOT vs. MetaOT** 1. I think the authors should add MetaOT [Amos et al., 2022] to the comparison. Actually, this method could be used in all the provided experimental setups (except, probably, 4.1 with mixed distributions mnist + cifar/lfw+bear: but even in this case we can just learn two MetaOT models with dimensions 28x28 and 64x64) 2. Quite a strange phrase (line 318): “the relative speedup achieved in [Amos et al., 2022]” is 1.96. But this speedup in [Amos et al., 2022] corresponds to the MNIST dataset, where your method shows a deterioration compared with the “ones” initialization. Also, it is interesting that your reported time for MNIST with the “ones” initialization is smaller compared to [Amos et al., 2022]. Supplementary Material: I had a look; the code looks good.
Relation To Broader Scientific Literature: The closest approach to UNOT is MetaOT; it would be good to have a detailed comparison with this approach (see the “experimental design and analysis” section). Note on the related works section: a good work on OT in flow matching: [1]; an interesting alternative approach for Neural OT: [2]. [1] Optimal Flow Matching, NeurIPS’24 [2] On amortizing convex conjugates for optimal transport, ICLR’23 Essential References Not Discussed: No Other Strengths And Weaknesses: The paper is generally well written. However, to be honest, I do not see serious practical applications of the proposed method. How can we use UNOT? Importantly, UNOT is trained for a fixed cost matrix. Also, I think that in the vast majority of situations, UNOT has no advantages compared to MetaOT, while being more time-consuming to train. Other Comments Or Suggestions: 1. lines 43-45, first column: line 44 is skipped. 2. Line 179, second column: “... any pair of discrete probability measures ($\mu$, $\nu$) can be generated by $G_{\theta}$.’’ - this is not quite right, because $G_{\theta}$ can generate only discrete distributions with dimensions $n, m$ such that $n + m \leq d’$. 3. I found it strange that no information on the FNO $S_{\phi}$ is given in the main text. I think a quick introduction to how they are constructed (in section 3.2) would enhance the flow of the manuscript. Questions For Authors: Why do you need the generator network $G_{\theta}$, and, subsequently, the adversarial max-min objective? Why not just sample $\mu$ and $\nu$ at random, treating this as the meta distribution, similar to MetaOT [Amos et al., 2022], in eq. (6)? Code Of Conduct: Affirmed. Overall Recommendation: 3
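On the initialization comparisons discussed in this review: the benefit of a good warm start over the “ones” initialization can be reproduced on a toy problem. A minimal sketch, in which a slightly perturbed converged scaling stands in for a learned predictor (all sizes, the regularization, and tolerances are arbitrary):

```python
import numpy as np

def sinkhorn_iters(mu, nu, K, u0, tol=1e-8, max_iters=10_000):
    """Run Sinkhorn from scaling u0; return iterations until the column
    marginal of the plan matches nu to tolerance (the row marginal is
    matched exactly after each u-update)."""
    u = u0.copy()
    for it in range(1, max_iters + 1):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
        col = v * (K.T @ u)
        if np.abs(col - nu).sum() < tol:
            return it
    return max_iters

rng = np.random.default_rng(1)
n = 32
mu = rng.random(n); mu /= mu.sum()
nu = rng.random(n); nu /= nu.sum()
x = np.linspace(0.0, 1.0, n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.2)  # Gibbs kernel, eps = 0.2

ones_iters = sinkhorn_iters(mu, nu, K, np.ones(n))

# Build a near-optimal warm start by converging once and perturbing slightly,
# standing in for the output of a learned initializer:
u_star = np.ones(n)
for _ in range(5000):
    v = nu / (K.T @ u_star)
    u_star = mu / (K @ v)
warm = u_star * (1.0 + 1e-4 * rng.standard_normal(n))
warm_iters = sinkhorn_iters(mu, nu, K, warm)
assert warm_iters < ones_iters  # the warm start needs fewer iterations
```

Since Sinkhorn contracts the error by a roughly constant factor per iteration, an initialization that starts closer to the optimal scaling reaches any fixed tolerance in fewer iterations; this is the mechanism behind the speedups debated above.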
Rebuttal 1: Rebuttal: Thank you for your thorough and helpful review! In the following we will try to answer your questions. >**I do not understand the practical implication of section 4.4. [...] we can do the same by adding noise to the image** Section 4.4 shows that UNOT can _solve the OT problem between distributions over images_, which e.g. arises in training of generative image models. This means UNOT can a) optimally match the images in the two marginals; b) interpolate the transport along this optimal trajectory, which essentially solves the discretized Wasserstein gradient flow w.r.t. the functional $W_2^2$. Neither marginal has to be noise; we will add another figure where both marginals are image distributions. We hope this clarifies that Section 4.4 is fundamentally different from adding noise to images (adding noise does not solve OT, and in particular, cannot interpolate between two non-noisy marginals). >**Table 2: time for LFW: 10^-2; time for BEAR: 8\*10^-3. Then, time for LFW-BEAR should be ~9*10^-3?** The hardness of solving OT between two datasets (LFW and BEAR in this case) does not interpolate between the hardness of each of the individual datasets. Imagine one of the two datasets consisted only of Dirac measures, then solving OT between this dataset and any other dataset is also trivial. Hence, the time for LFW-BEAR need not equal 9*10^-3. >**I think the authors should add MetaOT [Amos et. al., 2022] to the comparison** We did not include a comparison to MetaOT as it is inherently restricted to a single dataset of fixed dimension, but for completeness, we trained MetaOT on MNIST with the official implementation (https://github.com/facebookresearch/meta-ot) and compared it against UNOT on our test datasets (rescaled to 28*28 s.t. MetaOT can process them). 
Here are the relative errors on the OT distance (in %): | |MNIST| CIFAR|MNIST-CIFAR|LFW|BEAR|LFW-BEAR| |-|-|-|-|-|-|-| | **UNOT** | 2.7 ± 2.4 | 1.3 ± 1.1 | 2.8 ±2.6 | 1.5 ± 1.3 | 2.0 ± 1.6 | 1.8 ± 1.3 | | **MetaOT**| 2.4 ± 1.8 | 23.1 ± 15.7 | 11.4 ± 5.8 | 24.6 ± 15.7 | 11.8 ± 8.3 | 31.0 ± 14.8 | | **Gauss** |18.1 ± 10.0|19.7 ± 7.6 | 32.2 ± 8.7 | 21.1 ± 6.5 | 20.4 ± 8.3 | 19.3 ± 6.4 | | **Ones** | 39.5 ± 13.4 | 47.4 ± 20.2 | 74.5 ± 6.9 | 56.9 ± 15.4 | 54.2 ± 13.5 | 66.4 ± 10.8 | Surprisingly, **UNOT is almost on par with MetaOT on MNIST, despite _never having seen a single MNIST sample during training_, whereas MetaOT was _only_ trained on MNIST**, and we see that MetaOT breaks down on all other datasets. More on UNOT vs MetaOT in our response below. >**How can we use UNOT? Importantly, the UNOT is trained for a fixed cost [...] I think that [...] UNOT has no advantages compared to MetaOT** We provide the trained UNOT model in our GitHub (which we cannot link here for anonymity reasons), so it can be used out-of-the box as a general-purpose OT solver without any additional training required. You are right that the model is trained for a fixed cost. However, the squared Euclidean cost function is the most widely used cost function in practice (it also gives rise to $W_2$). We also refer to our response to reviewer **JPz7** for experiments with other costs and on spherical domains (we will provide these models on GitHub once they are ready). Comparing UNOT to MetaOT, we note that MetaOT can only be trained on a single dataset of fixed dimension. This means the downstream task needs to be known in advance, and crucially, _a training dataset needs to exist for this task_. For a new task, a new model needs to be trained from scratch. UNOT, on the other hand, can be applied to any downstream dataset (for a given cost) once trained, and it even matches MetaOT in terms of performance on the MetaOT training dataset (cmp. our response above). >**[...] 
this speedup in [Amos et al., 2022] corresponds to the MNIST dataset, where your method shows a deterioration compared with the “ones” initialization. Also [...] your reported time [..] with “ones” [...] is smaller** Runtimes depend on various factors (e.g. hardware) and are difficult to compare. Also, our FNO uses complex (number) layers, which are known to be sub-optimally implemented in PyTorch. With faster kernels we expect a drastic speedup. Since the submission, **we have also optimized UNOT with JAX and a better architecture, achieving a 2.5x speedup over the previous version without loss in performance.** >**“... any pair of discrete probability measures $(\mu,\nu)$ can be generated by $G_\theta$." - this is not quite right** Thank you for pointing this out, we’ll improve the wording! >**I found it strange that no information on FNO $S_\phi$ is given in the main text** Yes, a more detailed description of $S_\phi$ should be given in the main text, we’ll add this! >**Why do you need the generator network $G_\theta$ [...]? Why not just sample $\mu$ and $\nu$ at random [...]?** We refer you to our response to the same question by Reviewer **JPz7**. We hope this answers all your questions! --- Rebuttal Comment 1.1: Comment: I thank the authors for the answers and for the work done. I will raise my score, but I think the paper is exactly borderline. Overall, I think that the practical merits of the proposed method are overestimated. `UNOT can solve the OT problem between distributions over images` - This phrase is ambiguous. I understand that you may interpolate not only between noise and an image, but also between image and image, but you treat each image already as a distribution. So you pick one image to be the first distribution and a second image to be the second distribution. To my understanding, it has nothing to do with generative image models. Using a `fixed cost matrix` - the authors misunderstood me.
Here I do not mean that the authors are actually using the Euclidean cost function (in principle, the authors could use any cost function), but that the "grid" of support points is fixed. This prevents using UNOT (and MetaOT, too) for some nice applications, e.g., speeding up minibatch methods for training generative models (here I don't mean that the authors mention such applications; these are just my thoughts on possible applications of methods such as UNOT and MetaOT). `it can be used out-of-the box as a general-purpose OT solver` - this phrase, in my view, overestimates UNOT. In particular, you cannot solve the generative problem between distributions of images. So, currently I do not see solid applications of the method. Best, dxme --- Reply to Comment 1.1.1: Comment: Thank you for your response, and for raising your score! We will respond to your remaining concerns in the following. >**you pick one image to be the first distribution and a second image to be the second distribution. To my understanding, it has nothing to do with generative image models [...] In particular, you cannot solve the generative problem between distributions of images** **Solving OT between (finite) distributions of images and generative modeling are two different (albeit related) tasks. We are not claiming that UNOT can be used as a generative model.** In Section 4.4, we show that UNOT can approximate the OT plan between two distributions over images by following the Sinkhorn divergence gradient flow between the marginal distributions. In particular, we are not just randomly matching the images between the marginals and solving the OT problem on a per-image level, but we are _optimally matching the images_ of both marginals. Thus, while UNOT is applied on a per-image level, we solve the OT problem between the distributions over images. You are right that this is not generative image modeling - it is solving the (time-continuous) OT problem between the two marginals.
UNOT can, however, have applications in generative modeling, e.g. in flow matching, where batches of prior samples need to be matched with data samples, and several works have explored using the OT matching [1], [2], [3], or when using OT in the loss function of generative models [4]. We provide more potential applications for UNOT below. Since Figure 7 from the paper led to ambiguity of what we are trying to show, we will add another figure where both marginals are distributions over images to make this more clear. ## Applications of UNOT Since you expressed concern about applications of UNOT, we will list some additional potential applications in the following. We consider training time applications (where UNOT could be used to guide model training) and inference time applications. Note that for training time applications, UNOT can be integrated into loss functions, as it is fully differentiable. **Neuroimaging Data (Inference Time):** OT plays an important role in the evaluation of neuroimaging data, such as in computing Wasserstein barycenters of sets of MRI scans [5]. As shown in Section 4.2, UNOT can be used to efficiently compute barycenters of images. **Remote Sensing (Inference or Training Time):** In remote sensing applications, one often compares time series data, where the OT distance can be used to inform the similarity of two time series [6]. UNOT on regular grids in one dimension could efficiently solve this task. **Climate Models on the Sphere (Inference Time):** Climate models make predictions on the sphere, and OT distances can be used in various ways, such as validating climate models through Wasserstein distances between their predictions and data [7], for which spherical UNOTs (cf. our response to Reviewer **JPz7**) could be used. **Representation Learning (Training Time):** Wasserstein barycenters can be used to adapt dictionary learning with OT [8], where barycenters can again be computed with UNOT. 
**Imitation Learning (Training Time):** In imitation learning, the Wasserstein distance between time series data can be used to learn to mimic an expert’s behavior [9], and UNOT could be used to efficiently estimate these Wasserstein distances. These are just some of the potential applications UNOT could be used for, but myriads of other works use OT between discrete distributions on regular grids in diverse contexts, and the above list is far from being exhaustive. We hope this addresses your concerns about the potential applications of UNOT, and thank you again for taking the time to review! ## References [1] Lipman, Y., et al. Flow matching for generative modeling, 2023. https://arxiv.org/abs/2210.02747. [2] Pooladian, A.-A., et al. Multi Sample Flow Matching: Straightening Flows with minibatch couplings, 2023. https://arxiv.org/abs/2304.14772. [3] Tong, A., et al. Improving and generalizing flow-based generative models with minibatch optimal transport, 2024. https://arxiv.org/abs/2302.00482. [4] Genevay, A., et al. Learning Generative Models with Sinkhorn Divergences, 2017. https://arxiv.org/abs/1706.00292. [5] Gramfort, A., et al. Fast Optimal Transport Averaging of Neuroimaging Data, 2015. https://arxiv.org/pdf/1503.08596. [6] Courty, N., et al. Optimal Transport for Data Fusion in Remote Sensing, 2016. https://ieeexplore.ieee.org/document/7729925. [7] Garrett, R. C., et al. Validating Climate Models with Spherical Convolutional Wasserstein Distance, 2024. https://arxiv.org/pdf/2401.14657v1. [8] Schmitz, M., et al. Optimal transport-based dictionary learning and its application to Euclid-like Point Spread Function representation. Wavelets and Sparsity XVII, Aug 2017, San Diego, United States. [9] Dadashi, R., et al. Primal Wasserstein Imitation Learning, 2020. https://arxiv.org/pdf/2006.04678.
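Several of the applications listed above rely on entropic Wasserstein barycenters. As a generic illustration of how such barycenters are computed (iterative Bregman projections in the style of Benamou et al., 2015; a minimal 1-D sketch with arbitrary parameters, not the UNOT pipeline):

```python
import numpy as np

def entropic_barycenter(P, w, K, iters=300):
    """Entropic Wasserstein barycenter of the histograms in P (shape (m, n))
    with weights w, via iterative Bregman projections; K is the symmetric
    Gibbs kernel exp(-C / eps)."""
    u = np.ones_like(P)
    for _ in range(iters):
        v = P / (u @ K)                                     # v_k = p_k / (K u_k)
        Kv = v @ K                                          # K v_k
        b = np.exp((w[:, None] * np.log(Kv)).sum(axis=0))   # weighted geometric mean
        u = b[None, :] / Kv
    return b / b.sum()                                      # normalize for numerical safety

n = 50
x = np.linspace(0.0, 1.0, n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)

def gauss(c, s=0.05):
    p = np.exp(-((x - c) ** 2) / (2 * s ** 2))
    return p / p.sum()

b = entropic_barycenter(np.stack([gauss(0.25), gauss(0.75)]), np.array([0.5, 0.5]), K)
assert abs(b.sum() - 1.0) < 1e-8
assert abs(x[np.argmax(b)] - 0.5) < 0.05  # mass concentrates midway, as expected for W2
```

Unlike a Euclidean average (which would be bimodal), the $W_2$ barycenter of two translated bumps is a single bump at the midpoint, which is why OT averaging is attractive in the neuroimaging and dictionary-learning applications above.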
Summary: The paper suggests UNOT (Universal Neural Optimal Transport) a method for (single forward-pass) prediction of OT distances, which are typically computed by iterative methods like the Sinkhorn algorithm. The network architecture is based on Fourier Neural Operators (FNOs), that provide discretization-invariance, which is useful for dealing with discrete distributions of different dimensions. Training is based on a novel GAN-like adversarial scheme, where a generating network is made to provide challenging instances, which the prediction network is optimized over using a self-supervised bootstrapping loss, theoretically proven to minimize the ground truth loss. Extensive experiments show that UNOT outperforms prior methods in several aspects of the problem, including quality of approximation, especially as an initialization to Sinkhorn iterations, but also when considering the implied geodesics and barycenters in the Wasserstein space. Claims And Evidence: Yes. The work is well motivated, since as claimed, the use of OT is constantly growing in the field of ML, and the iterative natured computations are known to be expensive, especially for large instances of the problem, for which alternative approximations (such as projections to lower-dimensional spaces, or use of closed-form OT formulations) are typically used to save time. Therefore, good approximations, or ones that can be used for starting local optimization, are of high demand. The proposed method indeed shows excellent results, over the different datasets and usage cases, that make significant progress with respect to the goals that were set. Methods And Evaluation Criteria: The solution is well explained and its merits are demonstrated very clearly, with clear improvements across the experimentation. The paper overall is very well written, with a very good separation between the essential parts in the main paper vs. the technicalities that are deferred to the appendix. 
The solution proposed is elegant, with several new ideas that provide independent contributions. The choice of predicting the dual potential, from which everything else can be recovered, is very natural. The way supervision is achieved by running several Sinkhorn iterations over the prediction is a good idea that is also justified theoretically. There are some design choices (in both the method and the experimental setup) that are well explained in the paper, but not sufficiently justified against possible alternatives. One main point I wonder about is why the method is demonstrated only on the domain of images, and as a consequence, how well it is suited (or limited) to this setting (where, e.g., ground distances have a very particular structure). Another is the choice of adversarial-type training over synthetic data. Such an approach typically has stability issues in training, which are a downside of methods like GANs, where specific attention is required to avoid various failures (such as mode collapse). Additionally, it is unclear how the generated samples are related to natural image distributions, and therefore how efficient the training and generalization are. Even when inspecting the generated examples, it is hard to understand why they appear as they do and how they develop during training. Theoretical Claims: I didn't fully check the proofs. But the appendix does contain the necessary background and proofs of all claims made in the paper. One claim that I have not fully understood concerns the universality of the generator. I understand that any input can be generated, due to the positive density of the generated distribution across the domain. But this density can be extremely non-uniform in practice, and furthermore, the adversarial training strategy limits the ability to cover the space even further. Experimental Designs Or Analyses: . Supplementary Material: Read the appendix.
Relation To Broader Scientific Literature: The contributions are quite general and will have an impact on related research. The proposed method follows a line of works that try to improve Sinkhorn initialization and sets a new standard in this respect. Other contributions, such as the theoretical understandings that take care of the discretization issues (with the practical FNO based solution) and the bootstrapping based loss, should be of general interest for designing and training neural networks in this domain. Essential References Not Discussed: No. Other Strengths And Weaknesses: . Other Comments Or Suggestions: I would try to clarify (and slightly expand) the description of the architecture (both the generator and FNO), since the current one is somewhat cryptic and I needed to refer to the appendix for clarifications. These are main components of the solution and should be more explicit. Regarding FNOs - a few sentences on the idea itself (lifting and operations in Fourier space) rather than only referring to the discretization invariance property. Regarding the generator - " R denotes renormalizing to two probability measures and downsampling them to random dimensions in a set range" is not very clear, even though it is correct. Figures 3 and 13/14 - it seems like you switched "Ones" and "Gauss". Figure 5 - Is it intentional that 3 out of the 4 corner images are blurry? It would be better to interpolate between sharp images. Figure 6 - What is "ground truth"? Only for t=1? And why don't the interpolations coincide with the inputs at the endpoints (t=0/1)? Questions For Authors: I am very positive about the paper. Given clarifications to the following questions, I might reconsider my score. 1. The focus on images. I think that it is a very good domain for explaining and demonstrating the method. But how general is the solution? Have you tested it on other domains (especially ones with different ground costs - ones that are not Euclidean, not a metric)? 2. 
The choice of adversarial training. Have you considered alternatives to this? Wouldn't training on the (real) images of a single particular dataset (or multiple ones) give better results (perhaps at the cost of generalization)? Did you encounter stability issues in training (e.g. dying/vanishing gradients, mode collapse) and if so what did you do to avoid them? Do the loss dynamics follow those typical in GANs? 3. Generated training distributions. The example generated pairs are nicely visualized, but I find the explanation as to their appearance and its evolution very unclear. Why does it look like it does? Is it a result of the specific architecture? Or are these actually instances that are difficult for the predictor? Does the predictor eventually solve these instances well? Does it suffer from forgetting? (e.g. solve the first examples at their time, but fail to do so later on) Code Of Conduct: Affirmed. Overall Recommendation: 4
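The review repeatedly refers to using a predicted dual potential to initialize Sinkhorn iterations. As a point of reference, here is a minimal log-domain Sinkhorn sketch (not the paper's code); the `f_init` argument is a hypothetical hook standing in for the network's predicted potential, while everything else is the standard entropic-OT iteration.

```python
import numpy as np

def _logsumexp(z, axis):
    # numerically stable log-sum-exp reduction along one axis
    m = z.max(axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.exp(z - m).sum(axis=axis, keepdims=True)), axis=axis)

def sinkhorn(mu, nu, C, eps=0.1, n_iters=200, f_init=None):
    """Log-domain Sinkhorn iterations for entropic OT between histograms.

    `f_init` is a hypothetical hook for a predicted dual potential
    (e.g. one network forward pass), replacing the usual zero start.
    """
    f = np.zeros_like(mu) if f_init is None else f_init.copy()
    log_mu, log_nu = np.log(mu), np.log(nu)
    for _ in range(n_iters):
        # alternate soft c-transform updates of the dual potentials
        g = -eps * _logsumexp((f[:, None] - C) / eps + log_mu[:, None], axis=0)
        f = -eps * _logsumexp((g[None, :] - C) / eps + log_nu[None, :], axis=1)
    # recover the primal transport plan from the dual potentials
    P = np.exp((f[:, None] + g[None, :] - C) / eps
               + log_mu[:, None] + log_nu[None, :])
    return f, g, P
```

A warm start from a predicted potential would then be `sinkhorn(mu, nu, C, n_iters=5, f_init=f_pred)`, which is the initialization use case the review's experiments discuss.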
Rebuttal 1: Rebuttal: Thank you for your thorough and positive review! The plots in Figures 3, 13, and 14 indeed got mixed up, thanks for catching! We will also include more details about FNOs and our architectures in the main text and define $R$ in the generator more clearly. **Figure 5:** the corners should not be blurry; we will fix the plot. **Figure 6:** "Ground truth" are the input images at times $t=0$ and $t=1$. Each row corresponds to a different transport plan between the ground truth images which transports the ground truth image from time $t=0$ to the predicted image at time $t=1$. This means the interpolation at time $t=0$ is equal to the ground truth by design (at $t=0$, the images should be identical, we will fix the plot), but the interpolation at time $t=1$ can be different from the ground truth if the transport plan is not accurate. This is why we also provide the ground truth for $t=1$. We hope this answers your question. >**The focus on images [...] how general is the solution? Have you tested it on other domains [...]?** We note that the model is “agnostic” to the data modality and should work with data of any modality with the same cost function and grid structure. We ran additional experiments in the Euclidean image setting with different costs ($L_1$ and $L_2$, which are metrics unlike the squared Euclidean cost $L_2^2$ from the paper), and out-of-the-box (without any hyperparameter finetuning) it matches the relative errors on the transport distance for $L_2^2$. We also ran an additional experiment with the **unit sphere in $\mathbb{R}^3$ as the domain, where we parametrize the network with Spherical FNOs** (https://arxiv.org/abs/2306.03838), and use the spherical distance $c(x,y)=\arccos(\langle x, y\rangle)$ as the cost. Due to time constraints, we have only tested this on our image datasets (where we define an angular grid on the sphere, and then simply map the images onto this grid). 
Without any hyperparameter tuning, this works very well out-of-the-box and **even outperforms Euclidean UNOT** averaged over the test datasets. With proper hyperparameter tuning, these results can probably be even improved. For the camera-ready version, we will test UNOT on real-world spherical datasets. >**The choice of adversarial training. Have you considered alternatives to this? [...] Did you encounter stability issues in training [...]?** Initially, we tried training on synthetically generated data which works, but the adversarial approach works better. Additionally, synthetic data requires significant “fine-tuning” to capture the space of distributions well. Training on a single dataset would destroy generalization capabilities (cf. the comparison between UNOT and MetaOT in response to reviewer **dxme**). This shows that a general-purpose model requires our universal training approach. (However, it is probably possible to fine-tune a pre-trained UNOT model on downstream datasets.) We have not encountered any stability issues during training, and the training loss, as well as the test losses, decrease very stably during training. One stabilizing factor compared to GANs could be that our generator has a skip connection, s.t. its outputs can never completely collapse. >**I find the explanation as to [the generated pairs’] appearance [...] unclear. Why does it look like it does? [...] Does the predictor eventually solve these instances well? Does it suffer from forgetting?** Since our generator has a skip connection, part of the appearance of the distributions is indeed due to the architecture. However, the learned network in the generator ensures that training samples are generated in such a way that they are difficult for the predictor. 
We ran additional experiments which show that **a) generated samples are indeed difficult for the predictor initially, b) over the course of training, the predictor eventually solves them, c) forgetting of training data seems not to be happening.** Specifically, at the start of training, as well as after each 10% of training, we save a training batch and track the relative error on the OT distance on each. The following table shows the initial error on these samples as well as the final error on them at the end of training (with stable improvements on all of them during training).

| Relative OT Distance Error | 0% Training | 10% Training | 20% Training | 30% Training | 40% Training | 50% Training | 60% Training | 70% Training |
|-|-|-|-|-|-|-|-|-|
| At Generation | 53.2% | 3.1% | 2.1% | 1.6% | 1.8% | 1.7% | 2.1% | 1.9% |
| At End of Training | 2.0% | 1.6% | 1.4% | 1.1% | 1.6% | 1.5% | 2.0% | 1.9% |

We also experimented with keeping a cache of previous training samples and re-feeding them to the model over the course of training to prevent potential forgetting, similar to what is typically done when fine-tuning pre-trained models. However, this did not improve our training, probably because there does not seem to be any forgetting in the first place. We hope this answers all your questions!
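For concreteness, the spherical setup mentioned in this rebuttal (cost $c(x,y)=\arccos(\langle x,y\rangle)$ on an angular grid of the unit sphere) could be formed as below. This is an illustrative sketch; the grid construction and function names are our assumptions, not the authors' implementation.

```python
import numpy as np

def spherical_cost(X, Y):
    """Pairwise great-circle distances arccos(<x, y>) between unit
    vectors stored as rows of X and Y."""
    dots = np.clip(X @ Y.T, -1.0, 1.0)  # guard against rounding outside [-1, 1]
    return np.arccos(dots)

def angular_grid(n_lat, n_lon):
    """Unit vectors on a latitude/longitude grid of the sphere
    (poles slightly avoided so grid points stay distinct)."""
    theta = np.linspace(0.1, np.pi - 0.1, n_lat)          # polar angle
    phi = np.linspace(0.0, 2 * np.pi, n_lon, endpoint=False)  # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    pts = np.stack([np.sin(T) * np.cos(P),
                    np.sin(T) * np.sin(P),
                    np.cos(T)], axis=-1)
    return pts.reshape(-1, 3)
```

Mapping an image onto such a grid (one pixel per grid point, renormalized to a probability measure) then yields the spherical OT instances the rebuttal describes.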
From Logits to Hierarchies: Hierarchical Clustering made Simple
Accept (poster)
Summary: A method for deriving a hierarchy of clusters from the logits from a flat clustering model, thus allowing the use of the accurate leaf clustering from flat models. ## Update After Rebuttal Most of my concerns have been addressed and I think it's a good paper. I've upgraded to a 4. Claims And Evidence: The general idea to begin with a flat clustering method and then use the logits to form a hierarchy makes sense and is well-supported by the experiments. I have some minor comments and requests. If I understand correctly, the leaf accuracy is unaffected by the application of L2H, so the scores in Table 1 for L2H-TEMI and L2H-TURTLE are simply those of TEMI and TURTLE, respectively. If so, I think this should be made explicit somewhere. Currently, just reading Table 1, it seems as though you are presenting results for a novel flat clustering algorithm. Perhaps they could just be called TEMI and TURTLE in the Table, and then it can be stated that your method is able to use the high leaf accuracy of SOTA flat clustering methods. The runtime figures are stated to include the time to train TURTLE. It should also be acknowledged that TURTLE, and hence L2H-TURTLE, requires access to foundation models, which of course have a very long train time, whereas the comparison methods do not. Methods And Evaluation Criteria: The method is straightforward and effective. It is perhaps a bit simple to just take the cluster with the least confident assignments and merge it with another that has the second-highest average logits for that cluster. I wonder how the method compares to other simple operations on logits, such as computing rp(c) for all pairs and merging the highest. Theoretical Claims: n/a Experimental Designs Or Analyses: Experiments seem correct. Supplementary Material: I briefly reviewed the supplementary material. Relation To Broader Scientific Literature: This algorithm could be useful as it would reduce the deep hierarchical clustering problem to the deep clustering problem. 
Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: line 37 RHS: "struggle to handle to" -> "struggle to handle" line 236, LHS: "adjusted Random index" -> "adjusted Rand index" Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the Reviewer for the positive feedback, in particular for praising the idea, effectiveness, and usefulness of our method. We appreciate the useful feedback/suggestions, and address the concerns below. > If I understand correctly, the leaf accuracy is unaffected by the application [...] The Reviewer is correct that our method by construction retains the leaf-level flat clustering performance of the pre-trained flat model. We make it explicit already in the text (e.g. lines 234-236, right column), but we agree with the Reviewer that in the interest of clarity it can be made more explicit in Table 1. In the updated manuscript we add to the caption of Table 1 the final sentence *Notably, the application of L2H does not affect flat clustering performance, retaining the clustering performance of the pre-trained model (TURTLE, TEMI) at the leaf level.*, using italic to highlight it. We also perform an additional experiment on the INaturalist21 (https://github.com/visipedia/inat_comp/tree/master/2021) dataset, which we use as a setting to test whether our method can bring an advantage over flat clustering methods, used as backbone, when the nature of a dataset is inherently hierarchical. The INaturalist dataset contains ~2.7 million images of species labelled at different taxonomy levels (1103 families, 273 orders, 51 classes, 13 phyla, 3 kingdoms). We perform the following experiment. First we train five instances of TURTLE to model clusters at five different taxonomy levels, i.e. varying K across instances in the range $K \in \\{ 1103, 273, 51, 13, 3 \\}$. Then we apply our L2H algorithm on top of TURTLE trained at the most fine-grained taxonomy level (i.e. *family*, K=1103), and use the produced hierarchy to make clustering predictions at the more coarse levels. We repeat the experiment across multiple seeds, and compare the performance of the two strategies. 
The results show that inferring clustering predictions at more coarse levels via the produced hierarchy with L2H leads to better performance compared to re-training TURTLE at each corresponding level. Results at https://files.catbox.moe/0nt1uq.pdf > The runtime figures are stated to include the time [...] We have refined the manuscript accordingly to clarify this point. The Reviewer is correct, and to confirm that our approach is both more computationally efficient and more performant than deep specialized hierarchical approaches we have trained the best performing baseline (TreeVAE) on CLIP embeddings of the CIFAR-100 dataset. The results prove that the performance of TreeVAE improves compared to training in data space, but it is still markedly outperformed by our approach. Note that in this setting TreeVAE takes more than 2 hours on a GPU to train, while e.g. L2H-TURTLE takes under 2 minutes. Since all models in this comparison have access to pre-trained embeddings from foundation models, these results confirm our method is markedly more computationally efficient than alternative deep specialized hierarchical approaches. Results at https://files.catbox.moe/gz0tu8.pdf > The method is straightforward and effective. [..] Note that we motivate our choice of merging the cluster/group with the lowest score in the Rebuttal for Reviewer YXaT. We appreciate the interest/suggestion from the Reviewer and will include an ablation in the updated version of the manuscript to test the Reviewer's idea of computing rp(c) for all pairs and merging the highest. It will make an interesting ablation in our work. > Other Comments Or Suggestions We appreciate the Reviewer signaling these typos, which we will fix in the manuscript. --- We are happy to answer any additional questions and would appreciate it if the Reviewer would consider raising their score to full acceptance. --- Rebuttal Comment 1.1: Comment: Thanks for the reply and clarifications. 
The additional results comparing to retraining TURTLE at each level are helpful, and I would suggest including them in the paper. I am not sure why the results comparing to TreeVAE use CLIP instead of TURTLE for features? I would think the best comparison is them all having the same backbone. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for the feedback, and are glad that the results comparing the performance of our method with the performance obtained retraining TURTLE at different levels are helpful. We will include these results in the updated version of the manuscript. To address the Reviewer's question, we'd like to clarify that TURTLE is a flat clustering model (as is TEMI), and hence there's no natural way to integrate it as the backbone of TreeVAE. We compare with TreeVAE trained on CLIP embeddings to provide a comparison where all models (TreeVAE, L2H-TEMI, L2H-TURTLE) make use of CLIP representations. Our approach proves to achieve markedly better performance, with substantially higher computational efficiency. We hope to have addressed the Reviewer's question, and would appreciate it if they'd consider raising their score to a full acceptance.
Summary: The paper introduces Logits to Hierarchies (L2H), a novel hierarchical clustering method that uses pre-trained non-hierarchical clustering models to build hierarchical structures. L2H is lightweight, doesn't require fine-tuning, and uses logits to generate clusters, outperforming existing deep hierarchical clustering methods in both performance and efficiency. It is also extended to supervised settings, showing its ability to recover meaningful hierarchies from pre-trained classifiers like ImageNet. Empirical results on CIFAR-10, CIFAR-100, Food-101, and ImageNet demonstrate L2H's potential for interpretability and bias detection in supervised models. Claims And Evidence: The introduction of the proposed method in the paper is clear and well-structured. The authors provide a detailed explanation of their Logits to Hierarchies (L2H) approach, including its motivation, algorithmic procedure, and mathematical formulation. Additionally, they present pseudocode and visual illustrations that facilitate understanding However, the experimental results reveal a significant performance gap between L2H and the deep hierarchical clustering methods (e.g., DeepECT, TreeVAE). The credibility of the experimental results requires further scrutiny. Specifically, the validity and fairness of the baseline implementations, as well as the generalizability of L2H across different datasets and real-world applications, should be carefully examined. Methods And Evaluation Criteria: The proposed method, L2H, is well-suited for the problem of hierarchical clustering. It builds on pre-trained flat clustering models and uses logits to construct hierarchies, which is a novel approach. The evaluation criteria are appropriate, with metrics that assess both the quality of the flat clustering (e.g., NMI, ARI, Accuracy) and the hierarchical structure (e.g., Dendrogram Purity, Least Hierarchical Distance). 
The authors also provide a case study on ImageNet, demonstrating the method's applicability to supervised settings, which adds to the method's generality and practical relevance. However, the authors claim that L2H is scalable to large-scale datasets and computationally efficient. The datasets used (CIFAR-10, CIFAR-100, Food-101) may not be sufficient to fully validate its scalability in real-world scenarios. Additionally, the comparison with baselines such as DeepECT and TreeVAE requires clarification. Given the large performance gap, it is unclear whether these baselines were optimally configured. More details on hyperparameters, training settings, and computational budgets would ensure a fair comparison. Theoretical Claims: The paper does not present any theoretical proofs or claims. Experimental Designs Or Analyses: The experimental design primarily focuses on evaluating L2H against deep hierarchical clustering models (DeepECT, TreeVAE) on CIFAR-10, CIFAR-100, and Food-101, with additional results on ImageNet-1K. While the methodology appears reasonable, there are concerns regarding the validity of comparisons: 1. The reported performance gap between L2H and baseline methods is significant, yet the paper lacks details on whether baselines were optimally configured. Ensuring fair hyperparameter tuning and implementation details is essential for a valid comparison. 2. While the paper claims scalability, the chosen datasets may not fully reflect real-world large-scale clustering challenges. Further analysis on more complex, high-dimensional datasets would strengthen the validation. 3. The authors highlight L2H’s efficiency, but more transparency on runtime conditions (e.g., hardware specifications, dataset size variations) is needed to verify the claimed advantages. Supplementary Material: I reviewed the supplementary material associated with the paper. 
The supplementary material includes additional visualizations of the inferred hierarchies, ablation studies on the choice of aggregation function, and a Python implementation of the L2H algorithm. These materials provide further evidence of the method's effectiveness and scalability. Visualizations clarify how the method recovers meaningful hierarchical structures from the data. Relation To Broader Scientific Literature: The paper is well-situated within the broader literature on hierarchical clustering. The authors discuss the limitations of existing deep hierarchical clustering methods and position their work as a more scalable and efficient alternative. They also connect their work to the growing interest in interpretability and bias detection in machine learning models, particularly in supervised settings. The paper builds on recent advances in pre-trained models and logit-based approaches. Essential References Not Discussed: The paper covers the relevant literature, and no essential references appear to be missing. Other Strengths And Weaknesses: **Strengths:** The proposed method is scalable and efficient, making it suitable for large-scale datasets. The paper provides some empirical evidence, including visualizations and ablation studies, to support the claims. The extension to supervised settings adds practical relevance, particularly for interpretability and bias detection. **Weaknesses:** The method relies on pre-trained models, which may limit its applicability in scenarios where such models are not available. While the method is efficient, the quality of the hierarchy depends on the quality of the pre-trained model's logits, which could be a limitation if the pre-trained model performs poorly. In this paper, the validity of the method is verified only by experiment, and the theory is lacking. 
The paper claims the method is lightweight and scalable but lacks extensive experiments on extremely large datasets (e.g., millions of samples or hundreds of thousands of classes). Experiments are limited to smaller datasets like CIFAR and Food-101, leaving scalability claims unverified for the so-called large-scale data. Other Comments Or Suggestions: I have no other comment. Questions For Authors: 1. The method relies on pre-trained models. Have the authors explored scenarios where pre-trained models are not available, or where the pre-trained model's performance is suboptimal? How does L2H perform in such cases? 2. This paper only uses experiments to verify the effectiveness of the method. Is there any relevant theoretical guarantee to illustrate the effectiveness of the method? 3. The TEMI employs CLIPViTL/14 representations of the data, while TURTLE employs both CLIPViTL/14 and DINOv2 ViT-g/14 representations. Are the compared methods utilize these representations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Summary: - Main algorithmic/conceptual ideas: Using logits of flat clustering model, get hierarchical structure on top of them. - Main findings/results: deep hierarchical clustering methods have low clustering performance and slow runtime; learned hierarchies from image datasets are interpretable. Claims And Evidence: Yes in general. The whole paper is overall clear. Methods And Evaluation Criteria: Questions for methods: - line 131: how do you decide the number of clusters $K$? Is this decided by your input flat clustering method? - line 199: why do you choose the lowest score (predicted probabilities) for merging, say when the aggregation function is summation, or others? - line 144: Seems like your method is based on the assumption that cluster logits (or probabilities) are a proxy to measure cluster similarities. Why is this assumption reasonable? When can this assumption fail? I encourage the authors to briefly discuss on this point. Metrics and datasets look good to me, both are reasonable for clustering evaluation. Theoretical Claims: N/A Experimental Designs Or Analyses: Questions: - Is it possible to show non-hierarchical clustering metrics for TEMI and TURTLE? My guess is your method will have an absolutely better performance with a stronger flat clustering initialization, especially when I noticed that TEMI and TURTLE are based on CLIP embeddings which is quite powerful (line 765-768). Therefore, it's better to show this increment with your method as an important ablation study. Your results will be stronger if you show there is a significant increase with your method on top of any flat clustering methods. - Out of curiosity, why are flat deep clustering methods faster? Is this true for all (maybe SOTA) deep clustering methods? 
- Going back to the choice of flat clustering method, if you use CLIP based embeddings, in Table 2, for the runtime to train the TURTLE model, I would imagine you didn't include the time to train CLIP, which sounds unfair for other methods. - Additionally, there's another concern about interpretability arguments in your visualization, because basically CLIP has already seen text embeddings so it could infer some knowledge directly from text, thus probably having an additional advantage to match with wordnet. - But overall I don't think this will be a significant issue if the author can either provide a reasonable explanation for the choice of clustering method, or show the result of your method + another weaker flat clustering method as initialization. I understand that the final metrics probably won't be as good as what you reported now, but that is still valuable to the community and will make your paper much more convincing. Supplementary Material: Yes, full supplementary material. Relation To Broader Scientific Literature: This can be categorized into a bottom-up approach of hierarchical clustering, and is built on trained non-hierarchical clustering methods. Essential References Not Discussed: Minor: since the benefits of modeling a hierarchy in the data are not restricted to the unsupervised setup, there are some works that use hierarchies for out-of-distribution detection and for supervised learning. [1] Khrulkov, Valentin, et al. "Hyperbolic image embeddings." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020. [2] Linderman, Randolph, et al. "Fine-grain inference on out-of-distribution data with hierarchical classification." Conference on Lifelong Learning Agents. PMLR, 2023. [3] Sinha, Aditya, et al. "Learning Structured Representations with Hyperbolic Embeddings." Advances in Neural Information Processing Systems 37 (2024): 91220-91259. 
Other Strengths And Weaknesses: - Originality: Hard to judge based on my familiarity with the literature. - Significance: The proposed method is lightweight and empirically works well. Although see "Experimental Designs Or Analyses" for concerns about the experiment section that may mislead the interpretation of the results. - Clarity: overall well-written paper. Other Comments Or Suggestions: Can you explain Figure 1 iteration 2? This is not contained in the caption, and why is the pink box merged with the first two boxes? Also what does the top bar for iteration 2 mean where it contains both blue and yellow parts? Questions For Authors: For detailed questions see comments above. Here let me summarize major questions: - selection of initial clustering methods in your experiments - why choose the lowest score for merging I would like to raise my score if the author can elaborate on these questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for praising the performance and efficiency of our method, our metrics/datasets, as well as the clarity of our paper. We appreciate the useful feedback and suggestions, and address the concerns below. > line 131 [..] The number of leaves in the hierarchy obtained with our approach is the number of clusters $K$ modelled by the pre-trained flat model, which is a hyperparameter. Note that we show our approach is robust to changes in $K$, and should an a-priori value for $K$ not be available, the loss of the flat model (e.g. TURTLE) can be a useful signal to set it (See rebuttal to Reviewer CKwm). > line 199 [..] The score we define captures the aggregated probability mass that the model assigns to each cluster/group. When a cluster/group receives the lowest score, depending on the exact aggregation, it indicates one of two scenarios (or both): - It contains fewer samples, suggesting it is more specialized than others. - The model is less confident when assigning samples to it, often because it's highly similar to another cluster/group. In either case, the cluster/group is a natural candidate for early merging in a bottom-up hierarchy from fine-grained to coarse. In the first case, it is more specialized than others. The second case is a little more subtle. Consider the case of overclustering, where the number of leaf nodes K is larger than the true number of clusters. In this case, redundant clusters modelling the same true class present low-confidence assignments, as the model can't clearly distinguish between them when assigning samples. This low confidence reduces their score, causing them to be merged early, effectively correcting the overclustering. Thus, our score guides the hierarchy-building process in a principled way, encouraging early merging of either highly specific or redundant clusters, ultimately leading to an accurate hierarchy, as validated empirically. > line 144: [..] 
We agree that this assumption underlies our method. While empirically supported, we acknowledge that this assumption may not always hold, for instance in case the flat model is poorly calibrated. We have updated the manuscript to explicitly discuss this assumption and its potential limitations. > Is it possible to [..] As we state in lines 234 - 236 (right column), by construction, our method retains the clustering performance of the pre-trained model at the leaf level, hence non-hierarchical clustering metrics for TEMI/TURTLE match the L2H-TEMI/L2H-TURTLE results in Table 1. However, our results on the INaturalist21 dataset (see rebuttal to Rev. CaXW) prove that our approach can be used to model multiple granularity levels in datasets with a hierarchical structure. Notably, our approach surpasses the performance obtained by training multiple instances of a flat model (e.g. TURTLE), one at each given granularity. In such a setting, our hierarchical approach surpasses the flat model (e.g. TURTLE) on flat metrics at all but the finest granularity (where the performance matches). > Out of curiosity [..] It is not necessarily true in principle that all flat clustering methods are faster than hierarchical methods. However, there's been a lot of recent research on highly performant and efficient flat clustering (e.g. TURTLE), while a comparable research effort has not been witnessed for hierarchical models. > Going back to the choice [..] See rebuttal to Rev. CaXW. > Additionally [..] Note that in section 4.2 we use InternImage as a backbone model, which is not based on CLIP embeddings. > But overall [...] We appreciate the suggestion from the Reviewer, and implement our method on top of the TCL flat clustering model [1] on the datasets used in Table 1. TCL (i) does not rely on pre-trained embeddings, and (ii) achieves weaker flat results than TEMI/TURTLE. Results at https://files.catbox.moe/cfufvm.pdf. 
Still, with our approach (L2H-TCL) we outperform deep hierarchical models (DeepECT, TreeVAE) across all flat/hierarchical metrics, which strengthens our contribution. [1] Li et al. Twin Contrastive Learning for Online Clustering, IJCAI, 2021. > Can you explain Figure 1 [..] In the second step in Fig. 1 the pink cluster, selected for merging, is merged with the group containing yellow and blue clusters, as this group has the most reassigned predicted probability mass. The bar contains blue/yellow parts to represent that it aggregates the probability mass reassigned to the blue/yellow clusters, which were grouped together at the previous step. > Minor: for the benefits of modelling [..] We agree with the Reviewer that the benefits of modelling a hierarchy are not restricted to the unsupervised setup, and show results to support this point in section 4.2. We appreciate the useful references and have integrated them in the updated manuscript. We are happy to answer any additional questions, and would appreciate it if the Reviewer would consider increasing their score to an acceptance. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed rebuttal. I think the additional comparative experiments strengthen your paper and I strongly recommend the authors add these new experiments to the updated version. Therefore I will raise my score to 3. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for raising their score to an acceptance, and for the useful feedback. We will include the additional experimental results from the rebuttal in the updated version of the manuscript.
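For readers who want the merging rule discussed in this thread in concrete terms, a hypothetical numpy sketch of the bottom-up procedure as described in the rebuttal (the group with the lowest aggregated probability mass is merged into the group that receives most of its samples' reassigned mass under a masked softmax) might look as follows. Function names and the sum aggregation are our assumptions, not the authors' code.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def build_hierarchy(logits):
    """Greedy bottom-up merges from flat-clustering logits (n x K).

    Returns the merge sequence as (merged_group, target_group) pairs
    of leaf-cluster index lists, from fine-grained to coarse.
    """
    n, K = logits.shape
    groups = [[k] for k in range(K)]  # start from leaf clusters
    merges = []
    probs = softmax(logits, axis=1)
    assign = probs.argmax(axis=1)  # hard leaf assignment per sample
    while len(groups) > 1:
        # aggregated probability mass per current group (sum aggregation)
        group_mass = np.array([probs[:, g].sum() for g in groups])
        i = int(group_mass.argmin())  # lowest-score group: merge candidate
        members = np.isin(assign, groups[i])  # samples assigned to group i
        # mask out group i's logits and renormalize (masked softmax)
        masked = logits.copy()
        masked[:, groups[i]] = -np.inf
        re_probs = softmax(masked, axis=1)
        # target: group receiving the most reassigned mass from i's samples
        others = [j for j in range(len(groups)) if j != i]
        gains = [re_probs[members][:, groups[j]].sum() for j in others]
        j = others[int(np.argmax(gains))]
        merges.append((groups[i], groups[j]))
        groups[j] = groups[j] + groups[i]
        del groups[i]
    return merges
```

As in the rebuttal's Figure 1 explanation, at each step the selected group's probability mass flows to the remaining groups, and the group that absorbs the most of it becomes the merge target.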
Summary: This paper addresses the issues of traditional hierarchical clustering algorithms, such as high computational cost and strong model dependency. It proposes a lightweight hierarchical clustering algorithm that directly constructs hierarchical structures using the logits output by pre-trained models, enabling multi-granularity clustering without fine-tuning. Furthermore, extensive experiments demonstrated the effectiveness and generalizability of the proposed method. Claims And Evidence: The author conducted several experiments to demonstrate the effectiveness of the proposed method. However, increasing the number of datasets and comparison methods would enhance the credibility of the results. Methods And Evaluation Criteria: Yes. Theoretical Claims: The inspection has been carried out, and no issues were found. Experimental Designs Or Analyses: The inspection has been carried out, and no issues were found. Supplementary Material: I have reviewed the supplementary material provided by the author. Relation To Broader Scientific Literature: The paper's key contributions address well-known challenges in hierarchical clustering, such as computational cost and model dependency. They build upon prior work by leveraging pre-trained models without fine-tuning, thereby advancing the state of the art in efficient and semantic-aware clustering methods. Essential References Not Discussed: Given the related work on hierarchical clustering presented at ICML 2024, it is recommended that a performance comparison analysis with the proposed method be conducted. This would provide a more comprehensive evaluation and highlight the new approach's advantages and limitations in the context of recent advancements. Other Strengths And Weaknesses: Strengths: 1. The proposed method can process ImageNet-scale data in just a few minutes on a CPU, significantly outperforming deep methods such as TreeVAE. 2.
The author conducted experiments to demonstrate the generalizability of the proposed method. Weaknesses: 1. Theoretically, the design basis of the masked Softmax lacks rigorous mathematical derivation, and the relationship between the hierarchical clustering objective function and the downstream tasks is not clearly established. 2. In the evaluation metrics, only traditional clustering metrics were used, and there is a lack of metrics specifically tailored for hierarchical clustering. 3. No sensitivity analysis of the parameters was conducted, such as the depth of the tree. 4. The summary of contributions and the future work section need further refinement. 5. The majority of the references are from more than five years ago, and there is a need to incorporate more recent and advanced studies. This is important because citing newer literature ensures that the research is aligned with current trends and advancements in the field. Other Comments Or Suggestions: Considering the aforementioned Weaknesses. Questions For Authors: 1. The appendix introduces four datasets, but only three were actually used in the experiments? 2. Table 3 does not provide comparisons with other methods, making it difficult to determine the effectiveness of the proposed approach. 3. Should Table 5 be moved to the main text? The appendix should be revised to remove any redundant information. 4. The remaining issues can be referred to the section on Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for the valuable feedback and suggestions. We appreciate the praise for the efficiency and efficacy of our method, and the highlighting of its value in addressing well-known challenges in hierarchical clustering. We address concerns/questions below. > However, increasing the number of datasets and comparison methods [..] See comparison with [1] below + other Rebuttals (e.g. L2H+TCL and TreeVAE+CLIP comparisons, INaturalist results). > Given the related work on [...] We assume the Reviewer refers to this ICML 2024 workshop paper[1], on zero-shot flat clustering. The proposed strategy (UMAP dimensionality reduction on embeddings + Ward's agglomerative clustering) incurs high computational cost for large datasets, and more importantly *does not allow for inference on unseen data*. Hence, to test the method from Lowe et al. for comparison we necessarily both train and evaluate it on the test set, thereby giving it an advantage in seeing test data for training. Despite this, the method still underperforms compared to our proposed approach, which further validates and contextualises the effectiveness of our method. Results on CIFAR-100 at https://files.catbox.moe/4nhqck.pdf > Theoretically, the design basis of the masked Softmax [..] We introduce the masked softmax as a principled variation of the softmax function that allows masking out a subset of the indices. Note that our function definition ensures that a valid probability distribution over the unmasked elements is produced. We do not consider a downstream task in our hierarchical clustering experiments. We would thus appreciate it if the Reviewer could provide clarifications regarding the comment "the relationship between the hierarchical clustering objective function and the downstream tasks is not clearly established". > In the evaluation metrics, only traditional clustering metrics were used[..]
Note that we use Dendrogram Purity (DP)[2,3] and Least Hierarchical Distance (LHD) - both metrics tailored for hierarchical clustering. > No sensitivity analysis of the parameters was conducted, such as the depth of the tree. We provide a sensitivity analysis of L2H-TURTLE on the CIFAR-100 dataset, with respect to the depth of the tree. The depth of the tree is controlled by the $K$ hyperparameter corresponding to the number of clusters modelled by the flat model and therefore to the number of leaves in the tree. The analysis demonstrates that across all hierarchical and flat metrics the best performance is achieved when $K$ equals the true number of classes (100). When $K$ deviates from the true number of clusters, performance degrades gracefully and a meaningful hierarchy is still recovered. This robustness is particularly important in practical settings where the true number of clusters is not known a priori; in such cases the log-normalized TURTLE model loss can provide a useful signal to select the hyperparameter $K$. Results at https://files.catbox.moe/iisaqn.pdf. > The summary of contributions and the future work section need further refinement. We appreciate the suggestion from the Reviewer, and have refined the manuscript accordingly, summarizing and highlighting more clearly the contributions of our work. We have also deepened the section on future work, in light of some additional results shown in this rebuttal. > The majority of the references are from [...] We share the Reviewer’s view on the importance of citing recent literature, and have made our best effort to do so throughout the paper. However, as noted in the recent relevant work[2], the number of deep learning-based approaches proposed in the last few years to address hierarchical clustering remains surprisingly limited. This underlines the relevance and timing of our work, where we aim to revive interest in this underexplored area, building on recent advancements (e.g.
in flat clustering models). To that end, we compare with the most recent and relevant baselines[2] and build upon SOTA flat clustering models. Finally, note that we have followed the Reviewer's recommendation by contextualizing our approach in comparison with [1] above. **Replies to "Questions for the Authors"** 1. We introduce CIFAR-10, CIFAR-100, and Food-101, which we used in Section 4.1, and ImageNet1k, which we used in Section 4.2 and Table 3. 2. In line 753 we highlight that comparisons with baselines (e.g., DeepECT, TreeVAE) are not feasible on ImageNet1k, as these methods lack the scalability to handle datasets of this magnitude/complexity. 3. We reported Table 5 in the Appendix as it consists of an ablation validating the stability with respect to the design choice of the aggregation function. We appreciate the Reviewer's suggestion and have refined the Appendix by removing redundant results. [1] Lowe et al. (2024) [2] Manduchi et al. (2023) [3] Kobren et al. (2017) We are happy to answer any additional questions, and would appreciate it if the Reviewer would consider increasing their score to an acceptance. --- Rebuttal Comment 1.1: Comment: The authors have adequately addressed the major concerns raised in the previous review. Based on the improvements and clarifications provided in the rebuttal, I am raising my score to 3. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for raising their score to an acceptance. We are glad to have been able to address their concerns, and are grateful for the useful feedback.
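As a side note on the masked softmax discussed in this rebuttal thread (a softmax restricted to the unmasked indices, so that masked entries get probability zero and the remaining entries form a valid distribution), here is a minimal sketch; the function name and example values are our own illustration, not the authors' implementation:

```python
import numpy as np

def masked_softmax(logits, mask):
    """Softmax over the unmasked indices only: masked entries receive
    probability 0, and the unmasked entries sum to 1."""
    # Shift by the max over unmasked entries for numerical stability;
    # masked entries are sent to -inf so exp() maps them to 0.
    shifted = np.where(mask, logits - logits[mask].max(), -np.inf)
    exp = np.where(mask, np.exp(shifted), 0.0)
    return exp / exp.sum()

# Example: four logits, with indices 1 and 3 masked out.
p = masked_softmax(np.array([2.0, 1.0, 0.5, 3.0]),
                   np.array([True, False, True, False]))
```

The result is a probability vector that is zero on the masked indices and normalizes to one over the rest, matching the property claimed in the rebuttal.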
In-Context Learning and Occam's Razor
Accept (poster)
Summary: The paper studies ICL and tries to relate ICL to prequential code length through the lens of Kolmogorov complexity. Claims And Evidence: There are some vague claims stated by the authors; see the questions section. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are confusing. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experimental designs are confusing; see the questions section. Supplementary Material: I briefly reviewed the appendix. Relation To Broader Scientific Literature: The paper is related to unveiling the black box of ICL in transformers. Essential References Not Discussed: Not any that I know of. Other Strengths And Weaknesses: ### Strengths 1. The paper studies ICL, which is an important ML topic. ### Weaknesses 1. The paper claims to provide a theory on ICL, but no formal theorems or statements can be found in the paper. 2. The relationship with ICL is vague: next-token prediction is not the essence of ICL; it is the training paradigm of all modern LLMs. 3. The message the paper tries to convey is unclear, and some statements seem unjustified or out of the blue (see questions). 4. For an empirical paper, it is short of experiments and the empirical evidence is not convincing. Other Comments Or Suggestions: 1. line 153 "Figure 1b" should be "Figure 1". Questions For Authors: 1. How is "simple models which explain the training data generalize best" mentioned in the abstract related to the Occam's razor principle, and what does "explain the training data" mean? 2. In the summary part of section 2.4 (which I believe is the main contribution of this paper), you mentioned "... explicitly optimize a meta-learning objective that jointly minimizes training error and model complexity". However, the next-token prediction loss (or equivalently the prequential loss? Correct me if I'm wrong) is an upper bound for the training error plus model complexity, thus the statement doesn't seem valid? 3.
In section 3.1 you mentioned two training objectives; one of them is "training $T\_{\Phi}$ to predict past datapoints in the context". Why is this necessary? Can't a transformer model simply copy (or memorize) the datapoints in the context? 4. For the two different training objectives, do you implement different pretraining objectives or do you simply test the inference time ICL ability without pretraining? 5. In line 232 right column you stated "While nobody uses train-risk ICL in practice, it serves as an ideal control to illustrate our theory of ICL...". How does this train-risk control illustrate your theory of ICL? 6. In Figure 2, why do you call the MSE error the "generalization error"? 7. I don't understand how a "simple" model could be better at generalization in the section 3.1 findings (although generalization itself is not formally defined in the paper). Is it simply due to the lack of expressivity of simple models so that overfitting is less likely? 8. In line 333 right column you mentioned "Not only is its prequential code length substantially higher than ...", are you suggesting that prequential code length is the same as the MSE loss, since in Figure 2c you used the generalization error (which is the MSE loss)? 9. In Figure 2c, "our transformer" is pretrained on "a distribution of Mastermind tasks"; could you provide some details of the training regime (i.e., the training data format, the training distribution, etc.)? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for your constructive review. **Empirical studies** Our work is not an "empirical paper", but a theoretical one. Unlike standard theory papers, we studied 4 tasks including the challenging Mastermind task, multiple sequence models to highlight the generality of the theory (Fig. 2b), and conducted numerous ablations (Apx. D, E). Nevertheless, if this remains unconvincing, please clarify *which* conclusions (listed under the paper's “Findings.” headings) you find unsupported by our results or what additional experiments you’d like to see. **Lack of theorems** We build on [1-3] to establish a novel connection between description length results for DNNs and ICL. In section 2, we mathematically derive the link from prequential code length (PCL) minimization to implicit model complexity when a pretrained model learns a task from examples presented in context. We chose not to use the "Theorem, Lemma, Proposition" formulation, as we believe it helps readability. **Relation between ICL and next-token prediction** ICL is indeed *not equivalent* to next-token prediction, and we do not suggest that it is. Rather, we show that training via next-token prediction loss explains the effectiveness of ICL because of the link from next-token prediction to compression. **Q1** Given equally good explanations, Occam's razor prefers the simplest one. In a typical ML textbook, Occam's razor is the justification for why simpler models generalize better (e.g., see chapter 1 in [4] and 7.3 in [10] for a detailed discussion), where "explain the training data" means minimizing training error. Further, the most common way to formalize model simplicity is using Kolmogorov complexity as we do. See [5-6] for short introductions, and [7] for a discussion in the context of deep learning.
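To make the prequential code length (PCL) referenced above concrete: it is the cumulative next-token negative log-likelihood accumulated as the learner processes the data sequentially. A minimal sketch with toy probabilities of our own invention (not the paper's code or data):

```python
import numpy as np

def prequential_code_length(next_token_probs):
    """Prequential code length: cumulative next-token negative
    log-likelihood (in nats). next_token_probs[t] is the probability
    the learner assigns to token x_t after seeing only x_{<t}."""
    return float(-np.sum(np.log(next_token_probs)))

# Toy illustration: a learner that improves as the context grows
# assigns increasing probability to each successive next token,
# versus a learner whose predictions never improve.
improving = np.array([0.3, 0.5, 0.7, 0.9, 0.95])
static = np.full(5, 0.5)
pcl_improving = prequential_code_length(improving)
pcl_static = prequential_code_length(static)
```

The learner that improves in-context pays for its early mistakes but quickly becomes cheap to encode with, yielding a shorter total code — the intuition behind treating a sequence model's ICL behavior as a compression scheme.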
**Q2** As stated in L156-159, minimizing upper bounds to intractable quantities is standard practice in ML [4, chapter 10], e.g., we minimize the negative evidence lower-bound instead of the intractable negative log likelihood with VAEs [8] or diffusion models [9]. Furthermore, even when it is not being minimized through a meta-learning objective, PCL has been found to be a strong compression algorithm in deep learning settings [1], bounding $K(D, p)$ better than other methods. **Q3 and Q5** Our theory predicts that prequential ICL should generalize better since it was meta-learned to minimize model complexity as well as training error. To verify this, we need a ‘control’ that is meta-learned to only minimize training error, which is exactly "train-risk ICL". You're right that if implemented naively, such a train-risk learner could simply memorize in-context examples, which is why we minimally modify the Transformer architecture to summarize the context in a bottleneck. We explain this in L240-252 RHS. **Q4** We pretrain separate models using (a) the next-token prediction objective or (b) the past-token prediction objective on a distribution of tasks and then compare these resulting learners on unseen tasks at inference time (L252-258 and Apx. C.2). This is in line with other work studying ICL in controlled settings (e.g., citations on L221-222 LHS). **Q6** Generalization error (y-axis) at a particular context length (x-axis) is the inferred model's prediction error on *unseen* data (Fig. 2 caption). MSE is the standard metric used to measure error for regression. **Q7** Given equal training error, it is well-known that simple models generalize better [4, 10]. Overfitting indeed happens due to excess model complexity. We in fact do define generalization on L255-258 RHS as prediction error on unseen data, which is how it is always defined [4]. **Q8** PCL and MSE are not the same.
PCL is defined as the cumulative next-token negative log-likelihood (NLL) across a dataset (section 2.2, Fig. 1, Eq. 4)—for regression, NLL is measured using MSE (well-known to be equivalent to NLL under a Gaussian). PCL is therefore given by the area under a curve in Fig. 2 (L300-301), whereas MSE is the y-axis of the plots for regression problems. **Q9** We describe the Mastermind task and data format in L237-247 LHS. Other training details are provided in Apx. C. [1] Blier & Ollivier (2018). The description length of deep learning models [2] Delétang et al. (2023). Language modeling is compression [3] Wilkenfeld (2019). Understanding as compression [4] Bishop & Nasrabadi (2006). Pattern recognition and machine learning [5] Nannen (2010). A short introduction to model selection, Kolmogorov complexity and Minimum Description Length [6] Wallace & Dowe (1999). Minimum message length and Kolmogorov complexity [7] Mingard et al. (2025). Deep neural networks have an inbuilt Occam's razor [8] Kingma & Welling (2013). Auto-encoding variational bayes [9] Song et al. (2020). Score-based generative modeling through stochastic differential equations [10] Shalev-Shwartz & Ben-David (2014). Understanding machine learning: From theory to algorithms --- Rebuttal Comment 1.1: Comment: I thank the authors' for the responses. First, regarding Q2, while I acknowledge that "minimizing upper bounds to intractable quantities is standard practice in ML," this is not the case in the paper. The intractable quantities in question are the Kolmogorov complexities ( K(D|p) and K(p|T) ), while the proposed upper bound is the next-token prediction training loss, or equivalently PCL. Minimizing the training loss (the upper bound) does not necessarily equate to minimizing the Kolmogorov complexity (the intractable quantities), as the training objective is explicitly different from the Kolmogorov complexity. 
Therefore, the claim that "sequence models trained on cumulative next-token prediction losses explicitly optimize a meta-learning objective that jointly minimizes training error and model complexity" does not hold. Second, the main results in Sections 2.3 and 2.4 pertain to the training objective, which follows the standard LLM pretraining scheme, rather than ICL. Thus, I find the claim in the rebuttal that "training via next-token prediction loss explains the effectiveness of ICL" unconvincing. A proper connection between ICL and PCL would require an analysis during test time, as the core of ICL lies in the model’s ability to leverage context prompts at inference to generate high-quality responses for unseen inputs—despite being trained solely with next-token prediction. Given these concerns, I believe the paper requires substantial theoretical revisions, and I will maintain my current score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response. **Clarifying the usefulness of the bound** We apologize for the confusion here. We fully agree that we minimize PCL, which is an upper bound to the Kolmogorov complexity (K-complexity) and indeed if we reduce PCL by a certain quantity, it does not imply that we have reduced K-complexity by any quantity (it could even go up!). However, PCL ≥ K-complexity implies that if we have reduced PCL to some quantity $\epsilon$, we can with certainty say that K-complexity ≤ $\epsilon$. Therefore, if it is possible to minimize PCL through meta-learning (i.e., ICL), we can *guarantee* that the resulting model complexity + training error is small (at least as small as the PCL), whereas when minimizing training error (e.g., through standard maximum likelihood) we get no such guarantees (model complexity can be large and is never bounded). **Standard practice** We are confused by the reviewer’s following statement: “the training objective is explicitly different from the Kolmogorov complexity”. 
Even when the true objective (e.g., K-complexity) is explicitly different from the upper bound (e.g., PCL), it is still a valid strategy to minimize the bound in the hope of minimizing the true objective, for example: $$ \log p(x; \theta) = \mathbb{E}\_{z \sim q\_\varphi(\cdot | x)}\left[\log \frac{p(x, z; \theta)}{q\_\varphi(z | x)}\right] + \mathbb{KL}\left[q\_\varphi(\cdot | x) || p(\cdot | x; \theta)\right] \geq \mathbb{E}\_{z \sim q\_\varphi(\cdot | x)}\left[\log \frac{p(x, z; \theta)}{q\_\varphi(z | x)}\right] $$ Here, the right hand quantity (the Evidence Lower Bound) is also an explicitly different quantity to the left hand one (true objective); note that we switched the order because here the goal is to increase the true objective, not decrease. In particular, the true objective is not even a function of $\varphi$ but the right hand side is. Yet, training with this bound is a well-known practice in graphical models, especially VAEs, even though the bound is mostly not tight [1]. Even in this standard practice, one could be maximizing the ELBO in a way that does not necessarily maximize the log likelihood. Regarding K-complexity in particular, it is defined as the length of an optimally compressed string. Since it is about optimal compression, K-complexity is in fact *always* bounded using computable compression algorithms [2-4], such as PCL (a compression algorithm) as in our case. **Connection to ICL** The reviewer mentions that the “core of ICL lies in the model’s ability to leverage context prompts at inference to generate high-quality responses for unseen inputs” with which we completely agree. In our experiments, we are doing precisely this: the pre-trained model leverages context prompts (i.e., observations from a novel specification of the task) and generates high-quality responses (i.e., in line with the true underlying predictive model) for unseen inputs (unseen as the context came from a task unseen during training).
Note that this analysis is during test time. The reviewer might be pointing to the fact that we study *example-based* ICL where the prompt contains demonstrations from a novel task, as opposed to the form of ICL in which the task instructions are linguistically described in a prompt given to an LLM. However, *both* example-based and instruction-based ICL are prevalently studied in the literature, and the term “ICL” does not refer to one in particular [5]—the important part is that contextual information in both cases describes novel tasks, and is used to learn at inference time. In fact, a number of works that aim to analyze and understand ICL follow similar example-based procedures [e.g., 6-8], and we make it clear early on in the introduction which form of ICL we study (lines 25-39 RHS). We thank the reviewer for their insight and would greatly appreciate an increase in rating if their concerns have been addressed. [1] Cremer, Chris, Xuechen Li, and David Duvenaud. "Inference suboptimality in variational autoencoders." *International conference on machine learning*. PMLR, 2018. [2] Nannen (2010). A short introduction to model selection, Kolmogorov complexity and Minimum Description Length [3] Blier & Ollivier (2018). The description length of deep learning models [4] Mingard et al. (2025). Deep neural networks have an inbuilt Occam's razor [5] Lampinen, A. K., Chan, S. C., Singh, A. K., & Shanahan, M. (2024). The broader spectrum of in-context learning [6] Zhang, Ruiqi, Spencer Frei, and Peter L. Bartlett. "Trained transformers learn linear models in-context." *Journal of Machine Learning Research* 25.49 (2024): 1-55. [7] Müller, Samuel, et al. "Transformers can do bayesian inference." *arXiv preprint arXiv:2112.10510* (2021). [8] Garg, Shivam, et al. "What can transformers learn in-context? a case study of simple function classes." *Advances in Neural Information Processing Systems* 35 (2022): 30583-30598.
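A supporting note on the Q8 discussion earlier in this thread: the stated equivalence between MSE and next-token NLL for regression is the standard Gaussian-likelihood identity (our own recap, not text from the paper). For a prediction $\hat{y}$ with fixed observation noise $\sigma^2$,

```latex
-\log p(y \mid \hat{y})
  = -\log \left[ \frac{1}{\sqrt{2\pi\sigma^{2}}}
      \exp\!\left( -\frac{(y-\hat{y})^{2}}{2\sigma^{2}} \right) \right]
  = \frac{(y-\hat{y})^{2}}{2\sigma^{2}} + \frac{1}{2}\log\!\left(2\pi\sigma^{2}\right),
```

so the per-token NLL is the squared error up to an affine transformation, and summing NLLs over a dataset (the PCL) corresponds to an area under the MSE-vs-context-length curve, as the rebuttal describes.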
Summary: This paper explores a theoretical framework linking next-word prediction losses in large language models to coding-theoretic principles, often referred to as “prequential” coding. It argues that simpler, more compact representations (inspired by Occam’s Razor) can facilitate stronger in-context learning performance. The authors present a bound-based objective aimed at improving generalization from prompts, and provide preliminary results on synthetic tasks suggesting the benefit of these simplicity-driven principles. Claims And Evidence: The submission posits that next-word prediction objectives parallel coding-based formulations and that enforcing an Occam’s Razor-style bound can yield practical improvements in in-context learning. The key claims are supported by derivations that connect model complexity and predictive accuracy, alongside synthetic experiments. However, the paper offers limited evidence of how these findings directly translate into real-world scenarios, since the experiments remain small-scale and somewhat specialized. Methods And Evaluation Criteria: The methods center on a bound-based reweighting scheme intended to emphasize simplicity in the learned representations. Evaluation is conducted via synthetic pattern extrapolation tasks, showing modest gains under the proposed approach. While these tasks illustrate potential effectiveness, more extensive benchmarks or realistic datasets would strengthen the case for broader applicability. Theoretical Claims: The theoretical arguments rest on a newly introduced Occam’s Razor-inspired bound relating predictive cross-entropy to compact representations. The proofs, rooted in coding theory, appear consistent with standard generalization frameworks, but the paper does not delve into how tight or loose these bounds might be in practice, which raises questions about their practical utility.
Experimental Designs Or Analyses: The experiments use simple synthetic tasks designed to capture in-context learning behavior. They document improvements that align with the theoretical claims, but do not include extensive ablation or broader domain testing. Consequently, while the setup seems sound for initial validation, it provides limited evidence of robustness or real-world feasibility. Supplementary Material: I only skimmed through the limited parts of the supplementary material. Relation To Broader Scientific Literature: By linking next-word prediction to coding-theoretic insights, this work resonates with the long-standing principle of minimum description length and various PAC-Bayes approaches, all of which emphasize model simplicity as a route to better generalization. It adds to recent discussions on in-context learning by proposing a formal perspective on why large language models can generalize from prompts. Essential References Not Discussed: N/A Other Strengths And Weaknesses: A notable strength is the novel theoretical perspective that attempts to unify coding-theoretic arguments with in-context learning. The main weaknesses are the lack of clarity regarding how these ideas translate into practical improvements, as well as the absence of a detailed analysis on the tightness of the proposed bounds. Other Comments Or Suggestions: A broader discussion of how bound-based optimization might scale to real-world tasks and how it compares or integrates with standard in-context learning pipelines would considerably strengthen the paper. Small additions, like evaluating multiple domains or tasks, could showcase broader relevance. Questions For Authors: How do you envision scaling your bound-based approach to more complex, real-world tasks without incurring excessive computational cost? 
2) Have you attempted to measure the tightness of your bounds empirically in different settings to ensure that they offer meaningful guidance rather than a loose theoretical construct? 3) Could you compare and contrast your approach with standard PAC-Bayes bounds to clarify any points of conceptual or methodological overlap? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive review. **Clarification of contributions** The reviewer suggests that we “present a bound-based objective aimed at improving generalization from prompts” and that our work centres around “a newly introduced Occam’s Razor-inspired bound” that gives “stronger in-context learning performance”. We in fact do not attempt to improve ICL and do not propose novel methods for it: we provide a normative theory based on Occam's razor to explain the effectiveness of ICL as it currently exists. Specifically, we argue that its effectiveness as a learning algorithm lies in the next-token prediction loss used to train sequence models, which optimizes a compression scheme called prequential coding (PCL) that implicitly fits simple models in-context that generalize well. We believe that this theoretical link is a novel contribution to the field of ICL. **Justification of experimental protocol** We decided to focus on synthetic tasks for a few reasons. 1. Interpretability: For theoretical work, synthetic tasks are easier to control and results are easier to interpret, allowing us to concretely compare different objectives to illustrate the validity of our central insight: that ICL learners are more performant and efficient, especially in low-data regimes. 2. We are in a meta-learning setting where sequence models need to be trained on a large meta-distribution of tasks in order to perform ICL. With real-world data, it is difficult to control the size of the meta-distribution over tasks or find meta-distributions that are sufficiently broad. 3. For a valid comparison against the train-risk ICL baseline on real LLM tasks, we would have needed to train an LLM from scratch using the train-risk objective we outline (L223-238 RHS). Given that this paper is about a general theory of ICL, expending such industry-scale compute resources isn’t reasonable, especially considering that we perform ablations to carefully study modeling choices.
Consequently, we experiment with non-iid HMM based tasks (L248-259 LHS) to capture the structure of natural language, following common practice (c.f., [1]). 4. Similar theoretical work in the field of ICL also makes use of synthetic tasks, and we aimed to remain consistent with standard practice (e.g., citations on L221-222 LHS). The reviewer suggests our work lacks “extensive ablation”, but we introduce baselines (e.g., train-risk ICL, off-the-shelf learners with different inductive biases) to isolate the role of the next-token loss that bounds $K(D,p)$. The reviewer suggests a lack of “multiple domains or tasks”, but our experimental settings are in fact wide-ranging: we study linear & nonlinear tasks, regression & classification, iid & non-iid, scaled task difficulties, Transformers & SSMs, and models trained from scratch & LLMs. **Q1** We would again like to emphasize that we did not introduce a novel training algorithm for sequence models as the reviewer suggests. We provided a theory that explains why the standard next-token prediction loss is an effective method for training sequence models. We do not have to “scale our approach to more complex, real-world tasks” since cumulative next-token prediction loss is *already* the objective used to train LLMs. **Q2** Measuring the tightness of the bound $K(D,p) \lt L_{preq}$ involves computing Kolmogorov complexity, which is uncomputable. However, minimizing an upper bound on an intractable quantity follows a long line of work in ML. For example, as we state in our paper (L156-159 RHS), all variational inference methods that minimize the negative ELBO—from VAEs to diffusion models—learn via a tractable bound to an intractable quantity (the negative log likelihood). In compression too, there is longstanding work (c.f., [2]) proposing variational approximations to minimizing the complexity of deep learning models. 
Pushing down an upper-bound is a workable proxy for optimizing a target quantity, therefore our argument that minimizing PCL minimizes training error + model complexity is valid. Finally, even when it is not being minimized through a meta-learning objective, PCL has been found to be a strong compression algorithm in deep learning settings [3], bounding $K(D, p)$ better than other methods. We will further clarify this in revisions. **Q3** PAC-Bayes bounds are also rooted in Kolmogorov complexity, but require a prior over models. PCL only depends on a learning algorithm. Viewing a sequence model as a learning algorithm therefore makes it easy to compute PCL, but not PAC-Bayes bounds. We’ll include a brief discussion about the link in the revised draft. [1] Xie, S. M., Raghunathan, A., Liang, P., & Ma, T. (2021). An explanation of in-context learning as implicit bayesian inference [2] Honkela, A., & Valpola, H. (2004). Variational learning and bits-back coding: an information-theoretic view to Bayesian learning [3] Blier, L., & Ollivier, Y. (2018). The description length of deep learning models
Summary: The paper studies the problem of in-context learning (ICL) and establishes a connection between ICL and Occam's Razor, arguing that next-token prediction used in training Transformers implicitly minimizes both the training error and complexity of the learned model. The authors show that this joint objective is equivalent to prequential coding and that the meta-learning problem of minimizing the prequential code length is already solved by the next-token prediction objective used in training ICL. Through both theoretical analysis and experimental results, the authors argue that ICL implicitly favors simpler models that generalize well, which is in line with the principle of Occam's Razor. Claims And Evidence: The claims made in the paper appear to be supported by convincing theoretical and empirical evidence though I have not verified the theoretical results. Methods And Evaluation Criteria: The authors perform experiments using synthetic tasks that allow finer control over task and sample complexity. The evaluation criteria of prequential length and generalization error are very relevant to the problem at hand. Theoretical Claims: I have not verified the theoretical correctness of proofs. Experimental Designs Or Analyses: The authors conducted experiments to compare ICL to standard training error minimization using SGD. They also studied the impact of transformer architecture on prequential code minimization and generalization ability, which is much appreciated. Supplementary Material: No Relation To Broader Scientific Literature: ICL has been widely used in modern LLMs to solve various tasks without model fine-tuning and is an important paradigm of study in modern LLMs. This paper takes an important step in properly studying ICL from the lens of model compression and Occam's razor. 
Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: The paper establishes a good theoretical connection between the joint objective of reducing training error and model compression to prequential encoding and establishes the meta-learning problem of next token prediction used to train ICL already solves this problem. Weakness: The connection between prequential encoding and ICL meta-training is more of an upper bound rather than a direct equivalence. The prequential encoding does not directly suggest the specific meta-training algorithm or architectures used. Other Comments Or Suggestions: It would be valuable to include some more realistic tasks in empirical evaluation. Questions For Authors: 1. You highlight that current ICL methods underfit. Based on your study, do you see a way to mitigate this issue perhaps by better architecture or model design? 2. Have you conducted experiments on real world task involving natural languages? If so, could you please elaborate on that? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive review. **Minimizing a tractable upper bound on an intractable objective is valid and common in ML** > The connection between prequential encoding and ICL meta-training is more of an upper bound rather than a direct equivalence.

The reviewer is correct in pointing out the upper bound given by prequential code length (PCL). It would be ideal to directly minimize Kolmogorov complexity, though this is widely known as being intractable. In contrast, we minimize an upper bound (with learnable parameters $T_\phi$), following a long line of work in ML. For example, as we state in our paper (L156-159 RHS), all variational inference methods that minimize the negative evidence lower bound (ELBO)—from variational auto-encoders to diffusion models—learn via a tractable bound to an intractable quantity (the negative log likelihood). In compression too, there is longstanding work (c.f., [1]) proposing variational approximations to minimizing the complexity of deep learning models. Pushing down the upper-bound is a workable proxy for optimizing a target quantity, therefore our argument that minimizing PCL minimizes training error + model complexity is valid. Finally, even when it is not being minimized through a meta-learning objective, PCL has been found to be a strong compression algorithm in deep learning settings [2], bounding $K(D, p)$ better than other methods. We will further clarify this important point in revisions. **Prequential coding through next-token prediction with sequence models abstracts over meta-training algorithm and meta-learner architecture** > The prequential encoding does not directly suggest the specific meta-training algorithm or architectures used. 
It is an advantage of our theory to provide a compression-based account of ICL for sequence models without being specific to particular architecture choices: indeed, we show that the theory holds for both Transformers and state-space models (Section 3.3). Our message is that having a PCL minimization objective—regardless of architecture or optimization details—is good practice. Further, our findings identify two significant challenges, which we highlighted in our experiments section: 1. Underfitting by in-context models as a result of limited sequence model expressivity and compute power compared to training DNNs from scratch on a task (sections 3.2, 3.3) 2. Meta-generalization of the sequence model to novel tasks (section 3.4) In section 5, we did in fact discuss possible approaches for addressing these challenges, some of which have been explored in prior work (L385-431 RHS) and some of which are novel ideas. We believe that solving these challenges is out of scope for our current theoretical work, but that our theory is useful for both understanding and addressing them. We will further clarify these points in the discussion. **Q1** Based on our theory and results, we outlined a promising approach to the ICL underfitting problem in L385-413 RHS. **Q2** We decided to focus on synthetic tasks for a few reasons. 1. Interpretability: For theoretical work, synthetic tasks are easier to control and results are easier to interpret, allowing us to concretely compare different objectives to illustrate the validity of our central insight: that ICL learners are more performant and efficient, especially in low-data regimes. 2. We are in a meta-learning setting where sequence models need to be trained on a large meta-distribution of tasks in order to perform ICL. With real-world data, it is difficult to control the size of the meta-distribution over tasks or find meta-distributions that are sufficiently broad. 3. 
For a valid comparison against the train-risk ICL baseline, for real LLM tasks, we would have needed to train an LLM from scratch using the train-risk objective we outline (L223-238 RHS). Given that this paper is about a general theory about ICL, expending such industry-scale compute resources isn’t reasonable, especially considering that we perform ablations to carefully study modeling choices. Consequently, we experiment with non-iid HMM based tasks (L248-259 LHS) to capture the structure of natural language, following common practice (c.f., [3]). 4. Similar theoretical work in the field of ICL also makes use of synthetic tasks, and we aimed to remain consistent with standard practice (e.g., citations on L221-222 LHS). [1] Honkela, A., & Valpola, H. (2004). Variational learning and bits-back coding: an information-theoretic view to Bayesian learning [2] Blier, L., & Ollivier, Y. (2018). The description length of deep learning models [3] Xie, S. M., Raghunathan, A., Liang, P., & Ma, T. (2021). An explanation of in-context learning as implicit bayesian inference
PertEval-scFM: Benchmarking Single-Cell Foundation Models for Perturbation Effect Prediction
Accept (poster)
Summary: The paper introduces **PertEval-scFM**, a standardized framework to evaluate single-cell foundation models (scFMs) for predicting perturbation effects. Key contributions include: 1. **Framework**: A modular toolkit for zero-shot evaluation of scFM embeddings. 2. **Metrics**: Introduces **AUSPC**, **E-distance**, and **contextual alignment** to assess model robustness and generalization under distribution shifts. 3. **Findings**: scFM embeddings do not consistently outperform baseline models, especially under distribution shifts. **GEARS** performs best, highlighting the need for task-specific architectures. PertEval-scFM provides a comprehensive benchmark for scFMs, emphasizing challenges in perturbation effect prediction and guiding future research. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. This is primarily a benchmarking experiment, with limited theoretical proof. Experimental Designs Or Analyses: Yes. Although GEARS achieves the best performance, it requires constructing a cell adjacency matrix based on cell similarity first. This results in GEARS still being aware of the relationships between cells even after SPECTRA divides the training and test sets. Since the paper does not provide detailed information on the implementation of GEARS, I hope this can be clarified further. Supplementary Material: Yes. I believe that the supplementary materials of the full text already comply with the journal's standards and are very detailed. Relation To Broader Scientific Literature: 1. Fair Comparison in Single-Cell Foundation Models While Geneformer and scGPT have demonstrated the promising potential of foundation models in single-cell biology, this study systematically evaluates their true practical utility under zero-shot settings with distribution shifts. 2. Perturbation Effect Prediction 3. 
Evaluation Frameworks for Biological Models: Compared to scEval (Wang et al., 2024), which evaluated 8 general tasks (e.g., clustering, batch correction), PertEval-scFM introduces perturbation-specific innovations. Essential References Not Discussed: No. Other Strengths And Weaknesses: Weaknesses: 1. Figure 2: Including GEARS results would provide a more complete comparison of model performance. 2. Figure 3b: The proportion of high E-distance samples under high-probability conditions is relatively low. Further analysis of outliers could reveal underlying patterns. 3. Figure 4: The dense data points make it difficult to draw clear conclusions. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and constructive feedback, as well as for recognizing the strengths of our work, including the comprehensiveness of our benchmark and its potential to guide future research. Below, we respond to the specific concerns raised. **GEARS** We agree that including GEARS in Fig. 2 provides a more complete picture of model performance and have updated the figure accordingly (see [link](https://drive.google.com/file/d/1dJBOtZ2ytpMee1rK5qHWVIqJ_NVx1rnZ/view?usp=sharing)). GEARS uses a **gene coexpression** graph and a **gene ontology** graph to model gene relationships and perturbations, respectively, thus injecting biological priors into the model architecture. On the other hand, the SPECTRA graph is built on **cell-to-cell** similarity, which enables us to measure model robustness to distribution shift by simulating increasingly different train-test sets. GEARS is therefore not aware of the relationship between cells, only of the relationship between genes, which doesn't interfere with the way we assess robustness. We have clarified this in the manuscript to avoid any ambiguity. Regarding the implementation, GEARS was faithfully reproduced following its original design [1]. Briefly, it considers a perturbation dataset of $N$ cells, where each cell is described by a gene expression vector $g \in \mathbb{R}^K$ and an associated perturbation set $P = (P_1, \ldots, P_M)$. The model learns a function $f$ that maps a novel perturbation set to its post-perturbation gene expression profile. 
Specifically, GEARS: * Uses a GNN encoder $f_{\text{pert}}: \mathcal{Z} \rightarrow \mathbb{R}^d$ to embed perturbations, * Uses a second GNN encoder $f_{\text{gene}}: \mathcal{Z} \rightarrow \mathbb{R}^d$ to embed genes, * Combines these embeddings via a compositional module, * Applies a cross-gene decoder $f_{\text{dec}}: (\mathbb{R}^d)^K \rightarrow \mathbb{R}^K$ to predict the post-perturbation gene expression vector. Training is conducted end-to-end using the autofocus direction-aware loss, with default hyperparameters from the GEARS paper (e.g., hidden size 64, one GNN layer for both GO and co-expression graphs). The only modification we made was to the GEARS dataloader: we adapted the `prepare_splits` function to use SPECTRA-defined training and test splits via a custom mapping (in `set2conditions`). **Figures 3b and 4** In Fig. 4 we further analyse the outliers seen in Fig. 3b. There, we explore the distribution of the perturbation effect on the top 20 DEGs (dashed line), and how such perturbation effect is predicted by the models (data points). Dense clustering near the dashed line (e.g., Fig. 4a) indicates high agreement with ground truth, while more dispersed predictions (e.g., Fig. 4c) suggest low reliability. This analysis suggests that the magnitude of the perturbation effect is not the only factor affecting performance, but that the distribution of such effect also matters. Perturbations with a lower overall effect, but with an atypical distribution, will also challenge the models. These plots emphasize how the structure of the ground truth perturbation distribution affects model accuracy, a point we highlight in the revised figure caption. We have revised the Figure to improve clarity, using smaller, non-overlapping data points to better distinguish their values. The revised figure is available at this [link](https://drive.google.com/file/d/1VymFV4tVhO7Xvp75AuSJZToWEtLC4zSu/view?usp=drive_link). 
We hope these clarifications and updates address the reviewer's concerns. We thank the reviewer again for their thoughtful comments, which helped us strengthen the clarity and completeness of our work. [1] [https://doi.org/10.1038/s41587-023-01905-6](https://doi.org/10.1038/s41587-023-01905-6)
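As a rough illustration of the encoder-decoder composition described above, the following is a minimal numpy sketch; the lookup-table "encoders", shapes, and function names are stand-ins for GEARS's GNN components, not its actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 5, 8  # K genes, embedding dimension d (toy sizes)

# Stand-ins for the two GNN encoders (f_gene, f_pert): fixed lookup tables here.
gene_emb = rng.normal(size=(K, d))   # one embedding per gene
pert_emb = rng.normal(size=(K, d))   # one embedding per perturbable gene

def compose(pert_ids):
    """Compositional module: add the (summed) perturbation embedding to every gene."""
    p = pert_emb[pert_ids].sum(axis=0)   # combine a perturbation set
    return gene_emb + p                  # shape (K, d)

# Cross-gene decoder: per-gene linear head mapping R^d -> R, so (K, d) -> (K,).
w_dec = rng.normal(size=d) / np.sqrt(d)

def predict(pert_ids, control_expr):
    """Predict post-perturbation expression as control + decoded per-gene change."""
    return control_expr + compose(pert_ids) @ w_dec

control = rng.normal(size=K)
out = predict([2], control)   # predicted expression profile after perturbing gene 2
print(out.shape)
```

In the real model the embeddings come from message passing over the GO and co-expression graphs and everything is trained end-to-end; the sketch only shows how the pieces fit together.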
Summary: This paper introduces PertEval-scFM, a benchmark for zero-shot single-cell foundation model embeddings to capture transcriptional perturbation effects. It is claimed that scFM embeddings do not provide consistent improvements over baseline models. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. To construct embeddings for perturbed cells, expression counts of perturbed genes are set to zero in cells exposed to that perturbation, "effectively simulating the perturbation in silico". While I agree that this is a valid design, it also has some limitations. Namely, there is no way for the model to distinguish between an intervention and a zero gene expression observation. From the causality literature, this is theoretically an issue. It seems somewhat difficult to claim that these models don't capture perturbation effects if this is the only type of embedding extracted. However, the authors acknowledge this problem in the limitations. One other concern is that the Norman dataset uses CRISPRa (i.e., activation), so this zeroing of gene expression seems even more wrong in this context. Have the authors tried an activation-type intervention? Supplementary Material: Yes. The code and text. Relation To Broader Scientific Literature: There are scFM models that claim to model perturbations zero-shot. This work disputes those claims. This is a useful contribution. Essential References Not Discussed: None to my knowledge. Other Strengths And Weaknesses: Strengths: * Originality: I have not seen a paper that benchmarks the zero-shot effectiveness of scFMs on perturbations. This is a timely work. * Significance: While the conclusions of this work are not overly surprising, they set a useful benchmark and set of metrics for future work. * Clarity: I found the writing clear given the complexity and domain knowledge of this particular benchmark. 
Weaknesses: * This work only explores two datasets, both of which have their own issues. It's not clear if these findings are generalizable across datasets. * Fairly standard preprocessing is used. It's unclear if this is the same preprocessing used to train various scFMs thus it is unclear how fair this comparison is in this context to me. Other Comments Or Suggestions: No Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and positive evaluation of our work. We are glad that the reviewer found our contribution timely and the benchmark, metrics, and manuscript clear and useful for the field. Below, we address the specific concerns raised. **Causal intervention** We acknowledge that setting gene expression to zero may prevent the model from distinguishing between biological zeros and knockout interventions. However, there will be differences in coexpression accompanying a natural zero, which will not occur for induced zeros. We acknowledge this limitation in our paper and agree that future foundation models would benefit from incorporating representations that explicitly encode the nature of the intervention. A potential avenue of exploration would be to include perturbation tokens during pretraining and not only during fine-tuning. **Perturbation representation** While simulating perturbations via gene knockout may seem counterintuitive for CRISPRa datasets like Norman, we chose this strategy to address the challenges inherent in standardizing perturbation representation across diverse scFMs. Different scFMs represent perturbations in different ways; for instance, scGPT encodes gene expression as numerical values, whereas Geneformer uses rank-order representations. Therefore, upregulation is difficult to simulate consistently: parametric models like scGPT require arbitrary magnitude choices, with no biologically grounded way to determine appropriate magnitude, while rank-based models like Geneformer lack a principled way to re-rank genes. On the other hand, knockout offers a model-agnostic, unbiased representation that avoids model-specific inconsistencies, enabling fairer comparisons across scFMs. Nonetheless, we performed an experiment using an alternative perturbation representation, which simulates gain-of-function by doubling the perturbed gene's expression in the Norman control expression data. 
We used scBERT to generate embeddings for the *in silico* upregulated input representation (scBERT+). We observed minimal performance differences (see Table 1 and the Figure in this [link](https://drive.google.com/file/d/1jju-RECJcVANDfUj9s5-oobKPFSNQ6_9/view?usp=drive_link)) compared to the knockout representation (scBERT-), supporting our approach. The difficulty of simulating realistic perturbations reflects a broader challenge in scFM methodology and while a full exploration of perturbation strategies is beyond the scope of this work, we hope our findings motivate further research in this area. Table 1: MSE ± standard deviation for Norman single-gene embeddings generated with scBERT using an in silico knockout (scBERT-) vs. an upregulation strategy (scBERT+) | Model | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | scBERT- | 0.0630 ± 0.0031 | 0.0634 ± 0.0062 | 0.0676 ± 0.0051 | 0.0636 ± 0.0031 | 0.0592 ± 0.0015 | 0.0645 ± 0.0107 | 0.0849 ± 0.0060 | | scBERT+ | 0.0640 ± 0.0038 | 0.0620 ± 0.0038 | 0.0658 ± 0.0077 | 0.0610 ± 0.0009 | 0.0659 ± 0.0046 | 0.0744 ± 0.0122 | 0.0853 ± 0.0064 | **Dataset and pre-processing choices** We agree that dataset diversity is critical for generalizability. We selected Norman and Replogle as they represent two of the most widely used, high-quality single-cell perturbation datasets currently available. Moreover, both Norman and Replogle contain two separate datasets each, so we are effectively exploring four datasets. Importantly, these all allow us to benchmark under controlled conditions and introduce distribution shifts via SPECTRA. Our platform is designed to be extensible, and we are actively exploring incorporation of additional datasets, including large-scale resources such as the recently published Tahoe-100M [1]. 
Regarding preprocessing, we applied a consistent pipeline that follows standard best practices in single-cell analysis across all models, including highly variable gene (HVG) selection, transcript count normalization, and log transformation. These steps are aligned with those used in the original scFM papers, differing only in hyperparameter value choices. We ensure internal consistency in preprocessing across all model evaluations to enable fair comparisons. While we acknowledge that preprocessing choices can affect results, this is a well-recognized, broader limitation of single-cell data analysis, which we explicitly discuss in our manuscript. We hope our responses clarify the rationale behind our design choices and reinforce the value of our benchmark as a foundation for future work. We believe that making the limitations explicit and quantifiable is a critical step for advancing the development of biologically meaningful scFMs. [1] [https://www.biorxiv.org/content/10.1101/2025.02.20.639398v1](https://www.biorxiv.org/content/10.1101/2025.02.20.639398v1) --- Rebuttal Comment 1.1: Comment: ### Finetuning Several other reviewers mentioned the lack of fine-tuning of the single cell foundation models. One thing that is not clear to me is what data GEARS is trained on. Diving into the GEARS paper it seems like it is trained on left out perturbations from the datasets of interest? I think this needs to be made clear in the paper if so. If so, this does not seem like a fair comparison on which to base the following conclusion. In response to reviewer xRpb: > Because fine-tuning performance is addressed in other studies and because it fundamentally goes against our approach of establishing existing information content, we do not include fine-tuning in our study design. The findings we present therefore highlight that zero-shot scFM embeddings do not contain useful biological information pertaining to the perturbation prediction task, which in and of itself is an important finding. 
This is an extremely interesting statement but in my opinion cannot be sufficiently supported by the current experiments. The current experiments show that using this simple knockout strategy and perturbation prediction method does not result in improved perturbation prediction. It does not support the statement of "zero-shot scFM embeddings do not contain useful information for perturbation prediction". Clearly fine-tuning experiments in prior work has shown that these models contain useful information pertaining to the perturbation prediction task. I would urge the authors to make claims more in line with the experimental results. ### Perturbation representations While I understand it is difficult to model CRISPRa, I think it is necessary to have the computational model reflect this reality rather than being the same as knockout. > Therefore, upregulation is difficult to simulate consistently: parametric models like scGPT require arbitrary magnitude choices, with no biologically grounded way to determine appropriate magnitude, while rank-based models like Geneformer lack a principled way to re-rank genes I thank the authors for the additional results doubling the perturbed genes expression, but why was this chosen? Why can't something like the mean observed value of the activated gene under intervention be used? From my calculations the median increase of genes under CRISPRa is around 5x the control in the Norman dataset for single perturbations, but can be much much higher (thousands of times). This makes sense biologically because the Norman dataset was mostly activating genes that are not normally active in this system. This is quite a different setting than the one tested here. I think this is a critical piece of this paper essentially invalidating the results for 1/2 datasets. For this reason I lower my score. --- Reply to Comment 1.1.1: Comment: We appreciate that the reviewer initially recognized strengths in our manuscript: 1. 
Originality: *I have not seen a paper benchmarking the zero-shot effectiveness of scFMs on perturbations. This is timely work* 2. Significance: *They set a useful benchmark and set of metrics for future work* 3. Clarity: *I found the writing clear given the complexity and domain knowledge* 4. Usefulness: *There are scFM models that claim to model perturbations zero-shot. This work disputes those claims. This is a useful contribution* We believe these strengths remain valid and address the reviewer's additional concerns below: **GEARS and comparison fairness** **Claim:** The reviewer states, "*[GEARS] is trained on left out perturbations from the datasets of interest*" making comparisons unfair. This is incorrect because: - We implement and train GEARS from scratch using our own train-test splits - No pre-trained weights are used, ensuring no data leakage - The same data splits are used for all models, ensuring fair comparison See the GEARS baseline section (lines 141-151) of our Methods. **Perturbation representation** **Claim**: Modeling CRISPRa via knockouts "*essentially invalidat[es] results for 1/2 datasets*" We believe this conclusion is incorrect because: - We conducted a controlled comparison between knockout (scBERT-) and activation (scBERT+) representations - Results in Table 1 show minimal performance differences - This empirically demonstrates robustness to representation choices Thus our framework allows users to choose different *in silico* intervention types ( e.g. 0 for knockout, 2× for activation). This adaptability is itself a contribution to the field, as highlighted by the reviewer's initial assessment of our benchmark's usefulness. 
Furthermore: - Our benchmark evaluates current methods rather than proposing optimal perturbation strategies - Perturbation representation remains an open research question in the field and current scFMs use diverse approaches - Our approach follows established methods, specifically the *in silico* deletion from Geneformer - This ensures consistency and comparability across models within our framework **The alternative representation suggested introduces methodological issues**: - Using the “*mean observed value of the activated gene under intervention*” would introduce data leakage - It defeats the purpose of evaluating the model's predictive capabilities, **as we would be introducing the target gene expression value into our input** - Representing perturbations using the “*mean **observed** value of the activated gene under intervention*” would make it impossible for the model to predict completely **unseen** perturbations - Our approach ensures standardized testing conditions across different model architectures and the ability to predict unseen perturbations Overall, the consistency of our results across different representation strategies underscores the robustness of our findings. We see no significant differences between the two Replogle dataset (knockouts) and the two Norman datasets (up-regulation), further indicating the robustness of our findings. **Fine-tuning** **Claim**: "*Fine-tuning experiments in prior work has shown these models contain useful information pertaining to the perturbation prediction task*" - Several recent studies show ablated foundation models perform similarly to fully end-to-end fine-tuned models after task-specific training [4*, 6] - There is **no current consensus** fine-tuning improves performance over simple baselines [4*,6,7] Our findings in the zero-shot case do not contradict any previous findings and remain valid and valuable for understanding scFM limitations and guiding future development. 
Identifying these limitations and areas for improvement does not invalidate previous work - rather, it contributes constructively to the iterative scientific process aimed at enhancing these models. **Zero-shot evaluation** The reviewer misinterpreted our statement "*zero-shot scFM embeddings do not contain useful biological information*" - The reviewer quotes from our response to another reviewer about fine-tuning, **not** our paper - This takes our explanation out of context and misrepresents our work - Our zero-shot probe approach is standard for assessing representation quality [1*,2*] Our results demonstrate: 1. Simple baselines (mean model, MLP without biological priors) match or outperform zero-shot embeddings 2. This is consistent across datasets and models 3. The same probe architecture was used throughout These findings directly support our paper’s conclusion: “*current-generation scFM embeddings offer no improvement over baseline models when evaluated [in this context]*”. We kindly ask the reviewer to reconsider their downgrades of our score (from 3 to 2, then 1) in light of our clarifications. \* Ref. from rebuttal to **xRpb** [6] tinyurl.com/27mz2t7c [7] tinyurl.com/432fbdv9
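The two *in silico* intervention types discussed in this thread (0 for knockout, 2× for activation) amount to a simple edit of the input expression vector before embedding; a minimal sketch follows, with an illustrative function name not taken from the PertEval-scFM code:

```python
import numpy as np

def in_silico_perturb(expr, gene_idx, mode="knockout", factor=2.0):
    """Edit a control expression profile to simulate a perturbation.

    mode="knockout" zeroes the target gene; mode="activation" scales it by
    `factor` (2.0 mirrors the doubling used for the scBERT+ variant above).
    Returns a copy, leaving the control profile untouched.
    """
    x = np.asarray(expr, dtype=float).copy()
    if mode == "knockout":
        x[..., gene_idx] = 0.0
    elif mode == "activation":
        x[..., gene_idx] *= factor
    else:
        raise ValueError(f"unknown mode: {mode}")
    return x

control = np.array([1.0, 3.0, 0.5, 2.0])
ko = in_silico_perturb(control, 1, "knockout")    # [1.0, 0.0, 0.5, 2.0]
up = in_silico_perturb(control, 1, "activation")  # [1.0, 6.0, 0.5, 2.0]
```

The edited vector is then fed to each scFM's own input pipeline, which is what keeps the representation model-agnostic across numerical-value and rank-order models.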
Summary: This paper establishes a protocol to evaluate, in a standardized way, the performance of single cell foundation models at predicting the effect of perturbations. The authors evaluate using two data sets of CRISPR perturbations, combined with an approach to explicitly evaluate the effect of out-of-distribution learning. The results are very clear: leading foundation models do not fare better than simple baselines, and are outperformed by models that make particular effort to incorporate prior knowledge about gene networks and interactions. Claims And Evidence: The evaluations and discussion in this manuscript are clear and well supported. The central claim is supported by multiple overlapping analyses, suggesting that small changes are unlikely to explain the main claim. Notably, this paper, explicitly simulates the effect of distribution shift, as well as a wider range of genes. This should conclusively address a main source of controversy surrounding single cell foundation models. Methods And Evaluation Criteria: There are several evaluation criteria, all of which give consistent results. However, all of these derive from perturbseq or other CRISPR-based technologies. As more technologies and datasets come online, I encourage the authors to maintain their online platform to incorporate a wider range of data sets, not all of which are likely to be knockouts. Theoretical Claims: No proofs are associated with this paper. Experimental Designs Or Analyses: The experimental designs are thorough. Supplementary Material: I reviewed the detailed methods and description of SPECTRA, with no further questions for the authors. Relation To Broader Scientific Literature: Although this is not the first broad evaluation of the ability of single cell foundation models to predict the effect of perturbations, it is notable for its thoroughness as well as exploration of sensitivity to domain shift. 
If I were going to develop a new set of embeddings, I would look to this framework to evaluate the model. Essential References Not Discussed: No essential references missed. Other Strengths And Weaknesses: The strengths of this paper lie in its significance, clarity, and relevance to the field. It is not particularly novel, but that is not of concern. Other Comments Or Suggestions: In the appendix, there is a typo in the title of section D.1. Questions For Authors: The one area in which I see room for further analysis and discussion is the form of how perturbations are communicated to the model. Setting gene expression to zero may not be the ideal strategy. There are many other possibilities, such as attribution analysis (i.e., calculating the linearized effect of infinitesimal perturbations). I also have continuing concerns about the data available for this class of question. We are never able to observe the original cell before and after a perturbation. Thus, we often look at the effect of one gene upon other genes, averaged across cells. To what extent does this averaging distort the results? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback on the work presented. We are committed to maintaining the benchmark as a live and extensible resource. Because we have prioritized reproducibility and modularity in our design, incorporating new models and datasets is straightforward. We are actively monitoring developments in the field, and one exciting recent advancement is Tahoe-100M \[1\], currently the largest perturbational single-cell dataset. We intend to extend our benchmark to include such resources in future iterations. As scFMs continue to evolve, especially with respect to how they represent perturbations, we plan to integrate these innovations to support broader perturbation types beyond knockouts. Below we address the reviewer’s concerns: **Perturbation representation** We acknowledge that nullifying gene expression may not align with biological reality, especially in gain-of-function settings such as CRISPRa. We refer you to our detailed answer on this point in the rebuttal to Reviewer **pVrS.** You can also see the results of a preliminary experiment with upregulated embeddings at this [link](https://drive.google.com/file/d/1jju-RECJcVANDfUj9s5-oobKPFSNQ6_9/view?usp=drive_link). We thank you for the suggestion of incorporating attribution-based methods. If we understand correctly, calculating the effects of infinitesimal perturbations is a post-hoc explainability tool, which we would be interested in incorporating in our framework. However, it is unclear to us how we would use it for the initial representation of perturbations. We would be happy to discuss this point further. **Limitations of averaging and unpaired observations in scRNA-seq** Current methods do not allow for true paired observation of pre- and post-perturbation states for the same cell, due to the destructive nature of measurement. As the reviewer has noted, this introduces ambiguity into how perturbation effects are estimated. 
Indeed, due to the context-dependent nature of biology, it is very likely that there are other factors that affect gene expression. This trade-off is an inherent limitation of current experimental techniques and affects all existing perturbation modeling studies. It is unclear to what extent this distorts the results without access to paired samples, or a robust estimate of paired samples. We believe the development of experimental protocols enabling paired measurements would significantly advance this line of work. To attempt to mitigate this, we adopt a pseudo-bulking strategy that averages across cells to reduce noise and obtain more robust perturbation signatures. Specifically, for each condition, we randomly sample 500 control cells and average their expression profiles, pairing them with the mean expression of all perturbed cells within the same perturbation. This approach helps suppress cell-to-cell variability in the perturbed population, thereby making the overall perturbation effect more apparent. However, as the reviewer rightly points out, the unpaired nature of the control introduces uncertainty about whether the starting state truly mirrors that of the perturbed cells pre-perturbation. Thank you for pointing out the typo in the appendix, we have now corrected it. We thank the reviewer for recognizing the clarity and relevance of our work. We appreciate the suggestions regarding perturbation representation and data limitations, both of which highlight important areas for continued exploration in this field. We hope our responses address the reviewer’s concerns and demonstrate the care with which we have designed our framework to support the community in developing and evaluating biologically meaningful single-cell foundation models. 
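The pseudo-bulking strategy described in this rebuttal (averaging 500 randomly sampled control cells and pairing the result with the mean of all perturbed cells for a condition) can be sketched as follows; the function name and array layout are illustrative, not taken from the benchmark's codebase:

```python
import numpy as np

def pseudo_bulk(control, perturbed, n_control=500, seed=0):
    """Average expression profiles to obtain one (control, perturbed) pair.

    control:   (n_ctrl_cells, n_genes) expression matrix of unperturbed cells
    perturbed: (n_pert_cells, n_genes) expression matrix for one perturbation
    Returns a pseudo-bulk control profile and the mean perturbed profile.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(control.shape[0],
                     size=min(n_control, control.shape[0]),
                     replace=False)
    ctrl_profile = control[idx].mean(axis=0)   # average over sampled controls
    pert_profile = perturbed.mean(axis=0)      # average over all perturbed cells
    return ctrl_profile, pert_profile
```

Averaging suppresses cell-to-cell variability in each population, at the cost of the unpaired-control ambiguity discussed above.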
\[1\] [https://www.biorxiv.org/content/10.1101/2025.02.20.639398v1](https://www.biorxiv.org/content/10.1101/2025.02.20.639398v1) --- Rebuttal Comment 1.1: Comment: I would like to note that all reviewers had concerns about the form of perturbation tested. I appreciated the responses to other reviewers with regards to the new gain-of-function experiment, and I think this is satisfactory in addressing concerns but still only goes part of the way. Overall I think this suggests a lack of clarity about what we mean by perturbations in this field, and I encourage the authors to be more precise in their language about the specific nature of the effect. Allow me to clarify what I mean by perturbation tests, similar to "attribution analysis". The point is not to address post-hoc explainability. Rather, it is to introduce a form of perturbation which is more similar to the definition of a perturbation in a causal analysis. In nonlinear systems, the effect on overall expression $Y$ of introducing a change to some gene $x$ depends on the size of the perturbation in a nonlinear way. Thus a common definition of an "effect of x on Y" is the linearized effect of an infinitesimal perturbation in the close neighborhood of actual data. Testing too large of a perturbation may give different results and also carries the risk that the perturbed data is out of the training distribution. Thus, I would propose not just measuring the effect of zeroing genes or doubling them, but rather by the slope of the output due to tiny perturbations in gene counts. For Geneformer one could arrange the smallest detectable difference, which would be ascending/descending the order of a target gene in the ranked list by one. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to provide a detailed explanation regarding perturbation tests. 
We agree that assessing perturbations by measuring the slope of the output from tiny changes in rank order or gene counts provides an elegant and theoretically grounded perspective that aligns well with causal analysis in nonlinear systems. In our current study, we decided to model perturbations as complete knockouts, drawing from precedents such as Geneformer, where large discrete manipulations of rank-order vectors were shown to shift cell embeddings in biologically meaningful ways. These findings demonstrated that strong *in silico* interventions can indeed drive biologically significant embedding changes. We will further clarify the nature of the perturbation studied in a camera-ready version (in Section 2.1.2 Single-cell foundation model embeddings). From a clinical standpoint, focusing on complete knockouts and increased gene dosage aligns with real-world scenarios. Complete loss of function occurs in certain cancers (e.g., TP53 mutations) and hereditary neuropathies. Similarly, a significant increase in gene dosage, such as trisomy 21 in Down syndrome or PMP22 duplication in Charcot-Marie-Tooth disease, can lead to disease phenotypes due to dosage-sensitive gene expression. This is why we chose to model these dosage effects using a 2x expression level, which provides a clear experimental paradigm while approximating the mechanistic impacts of gene duplications. Thanks again for clarifying your thoughts on exploring subtler changes in gene expression via attribution analysis. We fully agree with you \- this represents an interesting experiment that our current framework supports. We could implement this approach by defining a range of relative perturbation sizes centered on the observed control expression of the gene. This would not only allow us to measure the effect of small changes but also to characterize the nonlinearity of gene expression responses by determining the size of the linear range around each gene's “normal” expression level. 
As demonstrated by our 2x upregulation experiment, modifying the "intervention" vector within our framework is straightforward, requiring only minimal adjustments to test various perturbation magnitudes. We will also modify our codebase so that users can modify this parameter and run this experiment on different datasets. Implementing your suggested approach would definitely provide deeper insights into gene regulatory dynamics. We plan to explore this in future work. We appreciate the chance to discuss these ideas with you\!
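The slope-based probe discussed in this thread can be approximated with a central finite difference around a gene's observed expression; `model` here is a stand-in for any mapping from an expression profile to an output (e.g. an embedding) and is an assumption for illustration, not an interface from the paper:

```python
import numpy as np

def perturbation_slope(model, x, gene_idx, eps=1e-3):
    """Linearized effect of an infinitesimal perturbation to one gene.

    model:    callable mapping an expression vector to an output vector
    x:        (n_genes,) observed expression profile (the control state)
    gene_idx: index of the gene to perturb
    Returns d(model output)/d(x[gene_idx]) via a central difference.
    """
    x_up, x_down = x.copy(), x.copy()
    x_up[gene_idx] += eps
    x_down[gene_idx] -= eps
    return (model(x_up) - model(x_down)) / (2 * eps)

def knockout(x, gene_idx):
    """Complete loss of function: set the gene's expression to zero."""
    x2 = x.copy()
    x2[gene_idx] = 0.0
    return x2

def doubling(x, gene_idx):
    """Dosage doubling (2x expression), as in the upregulation experiment."""
    x2 = x.copy()
    x2[gene_idx] *= 2.0
    return x2
```

Knockout and doubling are then large, discrete moves along the same gene axis, which makes the contrast with the infinitesimal probe explicit; sweeping `eps` over a range of relative perturbation sizes would characterize the linear range around each gene's normal expression level.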
Summary: The paper titled "PertEval-scFM: Benchmarking Single-Cell Foundation Models for Perturbation Effect Prediction" presents a standardized framework called PertEval-scFM to evaluate single-cell foundation models (scFMs) for predicting perturbation effects. The study focuses on assessing whether zero-shot scFM embeddings can enhance the prediction of transcriptional responses to perturbations compared to simpler baseline models. The key findings are: 1. Benchmarking Framework: PertEval-scFM provides a systematic evaluation framework to compare scFM embeddings against baseline models in predicting perturbation effects. It includes three datasets to test model performance under different conditions. 2. Performance of scFM Embeddings: The results show that scFM embeddings do not consistently outperform simpler baseline models, especially when there is a distribution shift. 3. Challenges in Predicting Strong or Atypical Effects: The study highlights that all models, including scFM embeddings, struggle with predicting strong or atypical perturbation effects. This suggests that current models may lack the ability to generalize well to unseen or extreme perturbations. Overall, the paper provides valuable insights into the challenges of using zero-shot scFM embeddings for perturbation effect prediction and highlights the need for advancements in both model development and dataset quality to address these limitations. Claims And Evidence: The claims made in the submission regarding the performance of scFM models in perturbation effect prediction are not fully supported by clear and convincing evidence. Specifically, the assertion that scFM models underperform compared to simpler baselines like Gears, and that they are not well-suited for perturbation tasks, is problematic for the following reasons: 1. Data Usage and Understanding Issues: o The submission uses the Norman dataset, which is a CRISPRa dataset designed for activation perturbations of specific genes. 
However, the authors assume a knockout perturbation scenario for all scFM models. This discrepancy between the actual perturbation type (activation vs. knockout) may lead to inaccurate model evaluations. The scFM models are not adequately simulating the true perturbation conditions, which could skew the results and undermine the validity of the conclusions. 2. Unfair Model Comparisons: o The comparison between scFM models and baselines appears to be unfair due to the lack of proper fine-tuning for scFM models. scFM models are designed to learn embeddings from single-cell data, but perturbation tasks have unique characteristics, such as the specific changes in gene expression following targeted gene editing. To fairly compare these models, fine-tuning on relevant perturbation datasets is essential. The submission fails to follow this approach, unlike the SCGPT paper, which fine-tunes on a portion of the data and evaluates on the remaining portion. Without this fine-tuning, the scFM models may not be able to leverage their full potential for perturbation prediction, leading to misleading performance metrics. 3. Inconsistency with Previous Work: o The results presented in this submission conflict with those reported in the SCGPT paper, which demonstrated superior performance on the Norman and Replogle datasets. This inconsistency raises questions about the reliability of the current findings and suggests that the methodology or assumptions used in this study may be flawed. Recommendations for Improvement: To strengthen the claims and provide more convincing evidence, the authors should consider the following adjustments: 1. Correct Data Interpretation: o Re-evaluate the perturbation scenarios used in the experiments. For datasets like Norman, ensure that the perturbation type (e.g., activation vs. knockout) is accurately reflected in the model setup. This alignment will provide a more realistic assessment of the models' capabilities. 2. 
Fair Model Evaluation: o Implement a fine-tuning step for scFM models using a portion of the perturbation datasets. This approach will allow the models to adapt to the unique characteristics of perturbation tasks and provide a more accurate comparison with baselines. The evaluation should then be conducted on the remaining data to assess the models' performance fairly. 3. Reconciliation with Existing Literature: o Address the discrepancies between this study's results and those from the SCGPT paper. A detailed discussion of the differences in methodology, assumptions, and data usage will help clarify the reasons for the conflicting findings and enhance the credibility of the conclusions. In conclusion, while the submission provides a valuable attempt to benchmark scFM models for perturbation effect prediction, the current evidence is not sufficiently robust to support the claims. Addressing the data usage issues and ensuring fair model comparisons are critical steps to validate the findings and contribute meaningfully to the field. Methods And Evaluation Criteria: The proposed framework, PertEval-scFM, aims to standardize the evaluation of single-cell foundation models (scFMs) for perturbation effect prediction. While the framework addresses an important problem, it has significant limitations: 1. Lack of Fine-Tuning: The framework does not properly fine-tune scFMs on perturbation tasks, likely underestimating their potential. Fine-tuning is essential for adapting models to the unique characteristics of perturbation datasets. 2. Limited Evaluation Metrics: The framework relies on a limited set of evaluation metrics, which may not fully capture the models' performance across different aspects of perturbation prediction. Recommendations: • Incorporate Fine-Tuning: Include a fine-tuning step using a portion of the perturbation datasets to allow scFMs to adapt and showcase their full potential. 
• Use More Evaluation Metrics: Expand the set of evaluation metrics to provide a more comprehensive assessment of model performance. In summary, the framework needs to incorporate fine-tuning and use a broader range of evaluation metrics to provide a fair and accurate assessment of scFMs for perturbation tasks. Theoretical Claims: Upon reviewing the manuscript, I did not encounter any explicit theoretical claims that required verification through proofs. The paper primarily focuses on empirical evaluations and the development of a benchmarking framework for assessing the performance of single-cell foundation models in perturbation effect prediction tasks. It does not present formal mathematical theorems or proofs that would necessitate validation in the traditional sense. Experimental Designs Or Analyses: I have carefully examined the experimental designs and analyses presented in the manuscript, particularly focusing on the use of the Norman dataset. This dataset is a CRISPRa dataset designed for activation perturbations of specific genes using CRISPR technology. However, the authors have incorrectly assumed a knockout perturbation scenario for all single-cell foundation models (scFMs) in their analyses. This discrepancy between the actual activation perturbations in the dataset and the assumed knockout scenario means that the scFMs were not accurately simulating the intended perturbation conditions. As a result, the experimental outcomes are not reliable, and the conclusions drawn from this dataset are questionable. Supplementary Material: I have reviewed several sections of the supplementary material in detail, including Appendix A (Single-cell transcriptomics data), B (Models), C (Featurization), and E (SPECTRA). Additionally, I briefly examined all figures in Section H (Supplementary figures). 
Relation To Broader Scientific Literature: The paper introduces a benchmarking framework aimed at evaluating the performance of several single-cell foundation models (scFMs) on perturbation effect prediction tasks. The authors have chosen to compare models such as Geneformer, scBERT, scFoundation, scGPT, and UCE with baseline models like Gears and a mean baseline across datasets including Norman, Replogle K562, and Replogle RPE1. Essential References Not Discussed: After a thorough review of the paper and the relevant literature, I did not identify any essential works that are missing from the citations or discussions in the paper. Other Strengths And Weaknesses: A significant strength of the paper is the introduction of a novel framework aimed at systematically evaluating single-cell foundation models (scFMs) on perturbation tasks within single-cell data. This is particularly noteworthy as it addresses the challenge of data distribution shifts, which is a common issue in real-world applications and has been less explored in previous studies. Other Comments Or Suggestions: I recommend that the authors consider incorporating additional standard evaluation metrics for a more comprehensive assessment. Specifically, the inclusion of Pearson correlation coefficient, which is commonly used in perturbation tasks to measure the linear relationship between predicted and actual perturbation effects, could enhance the robustness of the evaluation. This would provide further insight into the models' performance and align the assessment more closely with established practices in the field. Questions For Authors: 1. Perturbation Assumption You have modeled all perturbations as knockouts, even though the datasets, such as Norman, actually involve activation perturbations using CRISPRa technology. Could you explain the rationale behind this modeling choice? How might this assumption affect the accuracy of the model evaluations? 2. 
Lack of Fine-tuning The paper does not include a fine-tuning step for the pre-trained models, which is commonly used to adapt models to specific tasks like perturbation prediction. What are the reasons for not incorporating fine-tuning, and could you provide results with fine-tuning to compare? 3. Discrepancy with SCGPT Results There is a significant discrepancy between your results and those reported in the SCGPT paper for perturbation tasks on the same datasets. Can you provide an explanation for these differences? How Possible Responses Would Change My Evaluation: • For the first question, if the authors can provide a compelling justification for the knockout assumption or demonstrate that the results hold even with activation perturbations, it could strengthen the validity of their experimental design. • For the second question, including fine-tuning results could potentially show improved performance of the pre-trained models, which might alter the perception of their capabilities in perturbation tasks and could influence my evaluation positively if it addresses a critical limitation. • For the third question, an explanation that accounts for the differences without undermining the credibility of either study would be necessary. Understanding the reasons behind these discrepancies is crucial for assessing the reliability of the findings presented in this paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough evaluation and constructive feedback. Below, we address each of the points raised. **Perturbation representation** We address the limitation of simulating perturbations via gene knockouts in our response to Reviewer **pVrS** and refer you to it. You can also see the results of a preliminary experiment with upregulated embeddings at this [link](https://drive.google.com/file/d/1jju-RECJcVANDfUj9s5-oobKPFSNQ6_9/view?usp=drive_link). **Fine-tuning** One of the main goals of our benchmarking approach is to assess the zero-shot information content of pre-trained embeddings. If the models encode meaningful biological information for perturbation prediction, this would be apparent without task-specific adaptation. If performance gains only appear through fine-tuning, this challenges the premise that these models inherently learn generalizable representations. MLP probes are commonly used as an approach to answer the question of information content of embeddings in NLP and CV, as they evaluate representation quality while removing confounding effects pertaining to task-specific prediction heads, which introduce inductive biases \[1, 2, 3\]. We also note that previous work has investigated the performance of fine-tuned versions of scGPT and scFoundation \[4]. This study found that simple linear baselines still outperformed these models, suggesting that fine-tuning does not fully address their limitations. Because fine-tuning performance is addressed in other studies and because it fundamentally goes against our approach of establishing existing information content, we do not include fine-tuning in our study design. The findings we present therefore highlight that zero-shot scFM embeddings do not contain useful biological information pertaining to the perturbation prediction task, which in and of itself is an important finding. 
**Discrepancy with scGPT results** Several factors may explain the discrepancies between the obtained results: scGPT evaluates model performance **after fine-tuning** on perturbation data, reporting high performance, whereas we focus on evaluating zero-shot information content. We also evaluate robustness under distribution shift, which is not considered in scGPT. We view our study as complementary to scGPT rather than contradictory. It highlights current limitations in zero-shot scFMs and clarifies the gap between pre-training and real-world application. We believe that these complementary perspectives can help guide future improvements so that models better capture perturbation effects. **Evaluation** MSE was selected as the primary metric due to its strong biological grounding and demonstrated effectiveness in capturing perturbation effects, as opposed to Pearson correlation \[5\]. Furthermore, we aimed to provide a toolbox of comprehensive and complementary metrics, which include AUSPC (distribution shift robustness), E-distance (perturbation magnitude), and contextual alignment (pre-training relevance). Together, these provide a robust framework for evaluating scFMs that model transcriptomic perturbation outcomes. The comprehensiveness of our evaluation framework has also been recognised as one of the strengths of our paper by other reviewers. We would also like to note that GEARS is not considered a simple baseline, but rather a SOTA model which takes biological priors into account and is developed specifically for perturbation effect prediction. The fact that GEARS outperforms the zero-shot scFMs supports our conclusion that a biologically grounded architecture, which incorporates strong inductive biases, is more useful than pretraining for this task. We hope these clarifications demonstrate the careful thought that has gone into our experimental design and underscore the broader significance of our findings. 
By evaluating zero-shot capabilities across scFMs using a rigorous and biologically motivated framework, we provide a valuable benchmark and identify key limitations in current approaches. We believe this work will help guide future research toward more robust, generalizable, and biologically meaningful models. \[1\] [https://arxiv.org/abs/2103.00020](https://arxiv.org/abs/2103.00020) \[2\] [https://arxiv.org/abs/1905.06316](https://arxiv.org/abs/1905.06316) \[3\] [https://doi.org/10.1038/s42256-024-00949-w](https://doi.org/10.1038/s42256-024-00949-w) \[4\] [https://www.biorxiv.org/content/10.1101/2024.09.16.613342v4](https://www.biorxiv.org/content/10.1101/2024.09.16.613342v4) \[5\] [https://www.biorxiv.org/content/10.1101/2023.12.26.572833v1](https://www.biorxiv.org/content/10.1101/2023.12.26.572833v1) --- Rebuttal Comment 1.1: Comment: Thanks for your reply. I am glad to raise my rating. --- Reply to Comment 1.1.1: Comment: Thank you very much for engaging with our response! We're very grateful for the score update :)
Efficient Federated Incomplete Multi-View Clustering
Accept (poster)
Summary: The work presents EFIMVC, an anchor-based federated MVC method employing view-specific/shared anchor graphs and alignment matrices. While the technical approach shows some novelty, fundamental design choices lack justification, and experimental comparisons appear skewed. Claims And Evidence: The clustering performance does not demonstrate superiority. Methods And Evaluation Criteria: The application of anchor graphs in federated settings is novel. The computational efficiency through anchor sampling is practical. Theoretical Claims: Yes. Experimental Designs Or Analyses: I have doubts about the fairness of the settings used for the different comparison methods. Supplementary Material: Supplementary material includes theoretical analysis for convergence and supplementary experiments. Relation To Broader Scientific Literature: Based on incomplete multi-view clustering and federated learning, this paper presents a new approach to handle federated incomplete multi-view data. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths The application of anchor graphs in federated settings is novel. The computational efficiency through anchor sampling is practical. Weaknesses 1. Most anchor-based MVC methods enforce orthogonality constraints on anchor matrices to ensure diversity. The authors remove this constraint without justification. What performance changes would occur if orthogonality constraints were reinstated? 2. Previous methods typically apply k-means directly on learned embeddings. The choice to cluster left singular vectors of Z in Algorithm 1 lacks theoretical justification. Why not use standard spectral clustering? 3. The huge performance gaps between the first four methods and the remaining methods in Table 2 suggest unfair experimental settings. Were all methods given equal hyperparameter tuning efforts? Other Comments Or Suggestions: None. 
Questions For Authors: Existing MVC methods optimize client/server variables under a unified objective to ensure convergence. How does EFIMVC guarantee convergence with separate client/server objectives? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely thank Reviewer NJFx for the thorough and constructive review. We provide point-by-point responses to the questions raised as follows:** --- **Q1:** Most anchor-based MVC methods enforce orthogonality constraints on anchor matrices to ensure diversity. The authors remove this constraint without justification. What performance changes would occur if orthogonality constraints were reinstated? **A1:** We sincerely appreciate your question. While many anchor-based methods enforce orthogonality to promote anchor diversity, we intentionally relax this constraint for two key reasons. First, we aim to learn anchors that are representative of each category. However, in practice, the number of anchors typically exceeds the number of categories. Under such circumstances, it is unnecessary to enforce distinctness among all anchors: anchors from the same category may exhibit similarity, and imposing strict orthogonality constraints would be overly restrictive for this purpose. Second, our ablation studies below show that removing orthogonality constraints actually improves clustering accuracy (ACC) across all datasets, likely because it allows for more flexible anchor representations that better capture relationships in partial observation scenarios.

| **Datasets** | **ProteinFold** | **WebKB** | **100Leaves** | **CCV** | **Cifar10** |
|:------------:|:---------------:|:---------:|:-------------:|:---------:|:-----------:|
| Ours | **31.71** | **90.64** | **72.88** | **20.04** | **96.62** |
| Orthogonal | 29.39 | 71.74 | 61.85 | 15.47 | 95.09 |

--- **Q2:** Previous methods typically apply k-means directly on learned embeddings. The choice to cluster left singular vectors of $\mathbf{Z}$ in Algorithm 1 lacks theoretical justification. Why not use standard spectral clustering? **A2:** We sincerely thank you for raising this point. 
Standard spectral clustering can only operate on a complete $n\times n$ similarity matrix and cannot directly process the anchor graph $\mathbf{Z}$. According to [1], $\mathbf{Z}^\top \mathbf{Z}$ can be interpreted as reconstructing the full similarity matrix from the anchor graph $\mathbf{Z}$. Notably, performing $k$-means on the right singular vectors of $\mathbf{Z}$ is theoretically equivalent to applying standard spectral clustering to $\mathbf{Z}^\top \mathbf{Z}$. [1] Kang et al. Large-scale multi-view subspace clustering in linear time. AAAI 2020. --- **Q3:** The huge performance gaps between the first four methods and the remaining methods in Table 2 suggest unfair experimental settings. Were all methods given equal hyperparameter tuning efforts? **A3:** We sincerely thank you for the question. The first four methods evaluated in Table 2 are centralized incomplete multi-view clustering algorithms. Compared to the subsequent four federated MVC methods, these centralized approaches hold inherent advantages in terms of full data accessibility and built-in mechanisms for handling missing values, thereby achieving superior clustering performance on incomplete multi-view datasets. In contrast, our proposed method not only processes distributed multi-view data but also incorporates a dedicated incomplete data handling mechanism. As a result, it achieves comparable performance to state-of-the-art centralized incomplete MVC algorithms while significantly outperforming existing federated MVC methods. Besides, for all baseline methods, we strictly followed the parameter configurations reported in their original papers. Notably, for federated MVC methods, all missing values were imputed with zeros to ensure a fair comparison. We will add these implementation details to the final version. --- **Q4:** Existing MVC methods optimize client/server variables under a unified objective to ensure convergence. 
How does EFIMVC guarantee convergence with separate client/server objectives? **A4:** We sincerely appreciate the comment. In contrast to conventional approaches that optimize server and client variables under a unified objective function, we propose a decoupled optimization framework that separates these two components. This innovation effectively reduces the communication overhead caused by frequent interactions. We further provide both theoretical and experimental convergence analyses for the decoupled objectives relative to the global objective function. Specifically, Figure 3 illustrates the monotonic decrease of objective values (for server-side, client-side, and global objectives) with increasing iterations, demonstrating their eventual convergence to stable values. Additionally, in Appendix A.2, we present a rigorous theoretical analysis of convergence properties for each component. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' efforts. After reading the rebuttal, my concerns have been addressed, and I will increase my score.
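As background to A2 above: the eigenvectors of $\mathbf{Z}^\top \mathbf{Z}$ are exactly the right singular vectors of $\mathbf{Z}$, so the sample-level spectral embedding can be obtained from the anchor graph alone, without ever materializing the $n \times n$ similarity matrix. A minimal sketch of this step (shapes and names are illustrative, not taken from the EFIMVC codebase):

```python
import numpy as np

def anchor_spectral_embedding(Z, k):
    """Spectral embedding of n samples from an (m, n) anchor graph Z.

    Since the eigenvectors of Z.T @ Z are the right singular vectors of Z,
    the top-k singular vectors give the same embedding as spectral
    clustering on the reconstructed n x n similarity matrix Z.T @ Z.
    """
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Vt[:k].T  # (n, k): each row is one sample's spectral embedding
```

Running k-means on the rows of this embedding then yields the final cluster labels, matching standard spectral clustering on $\mathbf{Z}^\top \mathbf{Z}$ at a fraction of the cost.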
Summary: This paper proposes a novel framework called EFIMVC, addressing key challenges in federated multi-view clustering such as high communication overhead, limited privacy protection, and poor handling of missing views. EFIMVC introduces a localized optimization strategy that significantly reduces communication costs while ensuring theoretical convergence. It employs both view-specific and shared anchor graphs as communication variables to enhance privacy by avoiding the transmission of sensitive embeddings. The framework also features a dual-anchor alignment mechanism to mitigate anchor graph misalignment during graph fusion, improving clustering robustness. Claims And Evidence: Yes. Methods And Evaluation Criteria: The method is practical and reasonably applicable. Theoretical Claims: Relevant theoretical verification is provided. Experimental Designs Or Analyses: The experimental design and analysis in the paper are generally sound and well-justified. The authors have taken appropriate steps to validate the performance of EFIMVC across diverse datasets and conditions. However, further exploration of computational efficiency could provide additional insights into the method. Supplementary Material: The supplementary material contains theoretical proofs and additional experimental results. Relation To Broader Scientific Literature: The authors point out that existing FMVC methods still face challenges such as high communication overhead, limited privacy protection, and ineffective handling of missing views. These issues indeed exist in federated multi-view scenarios. To address the problem of limited privacy protection, the authors use anchor graphs as communication variables, employing both view-specific and shared anchor graphs to avoid the direct transmission of sensitive embeddings, thereby enhancing privacy protection. However, the authors do not provide detailed explanations on how the model handles missing data. 
Although large datasets were used and improvements in clustering performance were observed, the experimental section lacks analysis of time and space complexity and does not include experiments on time efficiency to validate its effectiveness. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: Novelty: The paper proposes a novel federated multi-view clustering framework, the introduction of a dual-anchor alignment mechanism to mitigate anchor graph misalignment during graph fusion is a unique contribution. Significance: The authors' proposed local optimization mechanism enhances communication efficiency, enabling the model to handle large-scale datasets. Moreover, the use of anchor graphs for communication improves user privacy. Clarity: The paper presents the methodology in a clear and structured manner, with detailed descriptions of the optimization process, algorithm design, and theoretical analysis. This clarity makes it easier for readers to understand. Weaknesses: 1. The authors claim their method solves three key challenges (communication overhead, privacy preservation, incomplete views) but fail to explicitly demonstrate how each component of the framework addresses these issues in the method part. 2. The clustering performance does not demonstrate superiority. As shown in Table 2, EFIMVC underperforms state-of-the-art methods on ProteinFold, WebKB, CCV and MNIST datasets. 3. In experiments, the federated baselines perform poorly compared to centralized methods. Were these baselines adapted fairly to handle missing view? Other Comments Or Suggestions: The excessive mathematical notation severely impacts readability. A notation table should be added. Questions For Authors: What motivated the specific construction strategy for anchor similarity matrix in Eq. (3)? Would alternative similarity measures like cosine distance yield different results? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely thank Reviewer 3S7F for the thorough and constructive review. We provide point-by-point responses to the questions raised as follows:** --- **Q1:** The authors claim their method solves three key challenges (communication overhead, privacy preservation, incomplete views) but fail to explicitly demonstrate how each component of the framework addresses these issues in the method part. **A1:** We sincerely appreciate your question. Our framework explicitly addresses the three challenges through dedicated components: 1. Communication Overhead: By decoupling optimization variables into server-side and client-side components, we significantly reduce the variable transmission costs inherent in conventional frameworks, where optimization is performed alternately between the server and clients. 2. Privacy Preservation: The proposed framework employs anchor graphs as communication variables, avoiding the transmission of sensitive embeddings or sample-level similarity matrices, thereby reducing privacy risks. 3. Missing-View Handling: We learn partial anchor graphs from the available data on each client via Eq. (2), and integrate them into a consistent full anchor graph on the server side via Eq. (5). --- **Q2:** The clustering performance does not demonstrate superiority. As shown in Table 2, EFIMVC underperforms state-of-the-art methods on ProteinFold, WebKB, CCV and MNIST datasets. **A2:** We sincerely thank you for the critical perspective. While centralized SOTA methods (FIMVC, SCBGL, DVSAI, DAQINT) achieve marginally higher accuracy on some datasets, they fundamentally violate federated constraints. In contrast, our federated framework achieves comparable or superior performance to these centralized SOTA methods while operating under strict distributed data storage protocols. This demonstrates our method’s unique capability to balance clustering accuracy with federated learning requirements. We will emphasize this distinction in the final version. 
--- **Q3:** In experiments, the federated baselines perform poorly compared to centralized methods. Were these baselines adapted fairly to handle missing view? **A3:** We sincerely appreciate the comment. First, federated multi-view clustering methods inherently underperform centralized approaches because they cannot simultaneously access all view data. Second, since existing federated MVC methods lack the capability to handle missing multi-view data, we filled all missing values with zeros in our experiments. This also leads to slightly inferior performance compared to specialized missing MVC algorithms with built-in imputation mechanisms. Notably, as the first federated incomplete MVC method proposed, our approach ensures a fair experimental comparison by benchmarking against both state-of-the-art centralized missing MVC methods and existing federated MVC methods. --- **Q4:** The excessive mathematical notation severely impacts readability. A notation table should be added. **A4:** We sincerely appreciate the constructive suggestion. We will add a notation table in the final version. Specifically, the main notations used throughout our paper are summarized as follows: | Notation | Meaning | |--------------|---------| | $\mathbf{A}^{(v)}$ | Anchor matrix | | $\mathbf{Z}^{(v)}$ | Anchor graph | | $\mathbf{Z}$ | Consistent anchor graph | | $\mathbf{G}^{(v)}$ | Incomplete indicator matrix | | $\mathbf{P}^{(v)}$ | Alignment matrix | | $\mathbf{S}^{(v)}$ | Anchor similarity matrix | | $\mathbf{L}^{(v)}$ | Laplacian matrix | --- **Q5:** What motivated the specific construction strategy for anchor similarity matrix in Eq. (3)? Would alternative similarity measures like cosine distance yield different results? **A5:** We sincerely thank you for the insightful question. The anchor similarity matrix quantifies pairwise distances between all anchors, serving as the foundation for subsequent server-side anchor structure alignment. In Eq. 
(3), we compute the distance between each anchor pair using the ℓ2-norm (Euclidean distance), though this metric can be substituted with alternative measures (e.g., cosine similarity). To evaluate the impact of the distance metric, we replace the ℓ2-norm with cosine similarity and compare the results, as shown in the table below. | **Datasets** | **ProteinFold** | **WebKB** | **100Leaves** | **CCV** | **Cifar10** | |:------------:|:---------------:|:---------:|:-------------:|:---------:|:-----------:| | | | ACC | | | | | ℓ2-norm | **31.71** | **90.64** | **72.88** | **20.04** | **96.62** | | Cosine | 31.45 | 90.44 | 70.23 | 15.17 | 96.39 | From the experimental results, the ℓ2-norm performs slightly better than cosine similarity. We will include this ablation experiment in the final version.
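As a concrete illustration of the two metrics compared in A5, here is a rough NumPy sketch of computing an anchor similarity matrix; the function name, the Gaussian-kernel conversion of ℓ2 distances into similarities, and the bandwidth choice are illustrative assumptions, not the paper's exact Eq. (3).

```python
import numpy as np

def anchor_similarity(A, metric="l2"):
    """Pairwise similarity between anchors (rows of A).

    metric="l2": Euclidean distances converted to similarities via a
    Gaussian kernel (illustrative choice); metric="cosine": cosine similarity.
    """
    if metric == "l2":
        # squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
        sq = np.sum(A ** 2, axis=1)
        d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * A @ A.T, 0.0)
        return np.exp(-d2 / (d2.mean() + 1e-12))  # bandwidth = mean squared distance
    if metric == "cosine":
        An = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-12)
        return An @ An.T
    raise ValueError(f"unknown metric: {metric}")

# toy example: 4 anchors in 3-D
S = anchor_similarity(np.random.RandomState(0).randn(4, 3))
```

Either branch yields a symmetric matrix with unit diagonal, so both can feed the same downstream Laplacian construction.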
Summary: The authors propose a novel federated incomplete multi-view clustering method named EFIMVC. By introducing localized optimization with anchor graphs and dual-alignment mechanisms, the proposed method reduces communication costs while preserving privacy. Experiments on seven datasets demonstrate the superiority of EFIMVC. Claims And Evidence: Yes. The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes. The proposed methods make sense for the federated multi-view scenario. Theoretical Claims: Yes. The proofs are correct. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. I have reviewed the supplementary material. Relation To Broader Scientific Literature: EFIMVC is related to federated multi-view methods. Essential References Not Discussed: Related work is referenced in this paper. Other Strengths And Weaknesses: The practical focus on incomplete view scenarios is valuable. The anchor graph communication strategy effectively balances efficiency and information preservation. The paper is well-written. However, some issues need to be addressed: I. The paper uses excessive mathematical symbols without proper definitions. For instance, L in Eq. (13) is never explained. II. The "dual-anchor alignment" concept remains vague. How does it differ from single-level alignment in [1]? III. The missing view definition is ambiguous. Is it sample-level missing (some samples lack views) or feature-level missing (entire view channels unavailable)? IV. Table 2 shows empty results for FMVC on four datasets. Is this due to implementation errors or intentional omission? [1] Align then Fusion: Generalized Large-scale Multi-view Clustering with Anchor Matching Correspondences. NeurIPS 2022. Other Comments Or Suggestions: See weaknesses. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: **We sincerely thank Reviewer 9Soy for the thorough and constructive review. We provide point-by-point responses to the questions raised as follows:** --- **Q1:** The paper uses excessive mathematical symbols without proper definitions. For instance, $\mathbf{L}$ in Eq. (13) is never explained. **A1:** We sincerely appreciate your feedback. The symbol $\mathbf{L}^{(v)}$ in Eq. (13) denotes the Laplacian matrix derived from the anchor similarity matrix $\mathbf{S}^{(v)}$ of the $v$-th view, where $\mathbf{L}^{(v)} = \mathbf{D}^{(v)} - \mathbf{S}^{(v)}$ and $\mathbf{D}^{(v)}$ is the degree matrix. We acknowledge the oversight in explicitly defining this notation and will comprehensively explain all mathematical symbols in the Appendix to ensure full clarity in the final version. --- **Q2:** The "dual-anchor alignment" concept remains vague. How does it differ from single-level alignment in [1]? [1] Align then Fusion: Generalized Large-scale Multi-view Clustering with Anchor Matching Correspondences. NeurIPS 2022. **A2:** We sincerely thank you for this insightful comparison. While [1] achieves alignment through sequential feature alignment and structural alignment, our dual-anchor alignment **unifies these two objectives** into a single optimization framework in Eq. (5). Specifically: - Feature alignment (anchor embedding matching): $$\left\|\mathbf{P}^{(v)}\mathbf{Z}\mathbf{G}^{(v)} - \mathbf{Z}^{(v)}\mathbf{G}^{(v)} \right\|_F^2$$ - Structural alignment (anchor graph topology preservation): $$\operatorname{Tr}\left((\mathbf{P}^{(v)})^\top\mathbf{L}^{(v)}\mathbf{P}^{(v)}(\mathbf{Z}\mathbf{Z}^\top)\right)$$ Besides, our method introduces **missing-view robustness** through the indicator matrix $\mathbf{G}^{(v)}$, enabling alignment even when partial data observations are absent across views. This differs fundamentally from [1], which assumes complete data availability. --- **Q3:** The missing view definition is ambiguous. 
Is it sample-level missing (some samples lack views) or feature-level missing (entire view channels unavailable)? **A3:** We sincerely thank you for highlighting the problem. Our work addresses **sample-level missing views**, where individual samples may lack specific view channels (e.g., a patient missing MRI data but having CT scans). Formally, given the indicator vector $r^{(v)} \in \mathbb{R}^{n_v}$ containing, in sorted order, the indices of the $n_v$ existing samples in the $v$-th view, the indicator matrix $\mathbf{G}^{(v)} \in \lbrace 0,1 \rbrace^{n \times n_v}$ satisfies: $$ \mathbf{G}^{(v)}_{i,j} = \begin{cases} 1, & \text{if the entry } r^{(v)}_{j}=i, \\\\ 0, & \text{otherwise,} \end{cases} $$ where $\mathbf{X}^{(v)}\mathbf{G}^{(v)}$ denotes the sorted complete data matrix in the $v$-th view. --- **Q4:** Table 2 shows empty results for FMVC on four datasets. Is this due to implementation errors or intentional omission? **A4:** We sincerely thank you for the question. The empty entries of FMVC in Table 2 stem from its prohibitive memory demands when handling large datasets. Specifically, FMVC constructs Laplacian matrices of size $n \times n$, which resulted in an out-of-memory error on large-scale datasets. We will give further explanation in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response—it has resolved my concerns. I would like to recommend the acceptance of this paper.
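The indicator-matrix construction in A3 above can be sketched in a few lines of NumPy; the function name and the toy data are illustrative, not the authors' implementation.

```python
import numpy as np

def indicator_matrix(n, observed_idx):
    """Build G in {0,1}^{n x n_v}: G[i, j] = 1 iff the j-th observed
    sample of this view is global sample i (observed_idx sorted ascending)."""
    G = np.zeros((n, len(observed_idx)))
    G[observed_idx, np.arange(len(observed_idx))] = 1.0
    return G

# a view where samples 0, 2, 3 are observed out of n = 5
G = indicator_matrix(5, [0, 2, 3])
X = np.arange(10).reshape(2, 5)  # toy 2-D features for the 5 samples
X_obs = X @ G                    # X G: columns restricted to observed samples
```

Right-multiplying the data matrix by G, as in the rebuttal's $\mathbf{X}^{(v)}\mathbf{G}^{(v)}$, selects exactly the observed columns.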
Summary: This paper proposes an Efficient Federated Incomplete Multi-View Clustering (EFIMVC) method to reduce communication overhead, enhance privacy protection, and handle missing views effectively. It employs a localized optimization strategy to lower communication costs, utilizes view-specific and shared anchor graphs to improve privacy, and introduces a dual-anchor alignment mechanism to enhance graph fusion stability. Experimental results demonstrate that EFIMVC outperforms existing methods in clustering accuracy, communication efficiency, and privacy preservation, highlighting its advantages in federated incomplete multi-view clustering tasks. ## update after rebuttal Thanks for your careful response, and I consider the previous score reasonable and will keep the previous rating. Claims And Evidence: The main points of the paper are supported by convincing evidence. Extensive experiments on seven datasets demonstrate the superiority of the proposed method. Methods And Evaluation Criteria: The proposed methods make sense for the problem. EFIMVC innovatively employs an anchor graph optimization strategy to reduce communication overhead and leverages local optimization to minimize data transmission, enhancing privacy protection. Additionally, the dual-anchor alignment mechanism ensures consistency between global and local anchors, thereby improving graph fusion quality. The experimental evaluation utilizes widely recognized multi-view clustering (MVC) benchmark datasets and state-of-the-art baseline methods to validate the effectiveness of the approach. Theoretical Claims: I checked the correctness of the proofs for theoretical claims, including the effectiveness of the method of decoupling local optimization from global optimization in theoretical analysis, including the optimization problem solving process of quadratic programming. Experimental Designs Or Analyses: I checked the validity of the experimental designs and analyses. 
Extensive experiments are conducted on multiple widely-used multi-view clustering (MVC) benchmark datasets, with results averaged over multiple runs to ensure statistical reliability. The experiments cover different missing-view scenarios, ablation studies, and parameter sensitivity analysis to comprehensively evaluate the proposed method. The issues are listed below under Weaknesses. Supplementary Material: no supplementary material Relation To Broader Scientific Literature: The EFIMVC method first introduces anchor graph optimization and a dual anchor alignment mechanism into the federated incomplete multi-view clustering problem. Building on existing research on federated multi-view clustering, this method improves communication efficiency, privacy protection, and missing-view adaptability, addressing the shortcomings of previous methods in these aspects. Essential References Not Discussed: There are no related works that are not currently discussed in the paper. Other Strengths And Weaknesses: This paper proposes an efficient federated incomplete multi-view clustering method (EFIMVC), which combines anchor graph optimization with a dual anchor alignment mechanism for the first time to reduce communication overhead, enhance privacy protection, and improve clustering performance in missing view scenarios. This method effectively reduces data transmission in federated learning while maintaining high clustering accuracy. There are also some weaknesses: 1. It is recommended that all illustrations use vector graphics. 2. Some of the data sets listed are not the highest results. Can you add relevant explanations? 3. Although the dual anchor point alignment mechanism improves the quality of graph fusion, its computational complexity is not analyzed in detail. If there are more anchor points, the computational overhead may increase. It is recommended to provide theoretical analysis or experimental results. 4. 
The data distribution from different perspectives may be quite different, and the selection of anchor points may directly affect the final clustering effect. The paper does not discuss how to ensure the robustness of anchor point selection. Other Comments Or Suggestions: I would like to learn about the authors' response to the weaknesses listed above, which may give me a clearer perspective on the paper's contribution. Questions For Authors: I would like to learn about the authors' response to the weaknesses listed above, which may give me a clearer perspective on the paper's contribution. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely thank Reviewer JeXy for the thorough and constructive review. We provide point-by-point responses to the questions raised as follows:** --- **Q1:** It is recommended that all illustrations use vector graphics. **A1:** We sincerely appreciate your constructive suggestion. In the final version, we will convert all figures to vector graphics (e.g., PDF format) to ensure optimal resolution and scalability. --- **Q2:** Some of the data sets listed are not the highest results. Can you add relevant explanations? **A2:** We sincerely thank you for raising this critical point. We clarify that the first four methods in Table 2 (FIMVC, SCBGL, DVSAI, DAQINT) are state-of-the-art centralized incomplete multi-view clustering methods. While they achieve marginally higher performance on specific datasets, they fundamentally address centralized scenarios and **cannot handle distributed multi-view data with privacy constraints**. In contrast, our federated framework achieves comparable or superior performance to these centralized SOTA methods while operating under strict distributed data storage protocols. This demonstrates our method’s unique capability to balance clustering accuracy with federated learning requirements. We will emphasize this distinction in the final version. --- **Q3:** Although the dual anchor point alignment mechanism improves the quality of graph fusion, its computational complexity is not analyzed in detail. If there are more anchor points, the computational overhead may increase. It is recommended to provide theoretical analysis or experimental results. **A3:** We sincerely appreciate the important technical concern. The computational complexity of the dual anchor alignment mechanism comes from optimizing the permutation matrix $\mathbf{P}^{(v)}$ on the server, which is $\mathcal{O}(m^3 + m^2n)$. Specifically, constructing matrix $\mathbf{Q}$ requires $\mathcal{O}(m^3 + m^2n)$ for the matrix multiplications. 
Solving $\mathbf{P}^{(v)}$ by performing SVD decomposition on $\mathbf{Q}$ requires $\mathcal{O}(m^3)$. While increasing the number of anchor points $m$ raises time costs, our method fundamentally avoids the prohibitive space complexity ($\mathcal{O}(n^2)$) and time complexity ($\mathcal{O}(n^3)$) of full-graph approaches. We acknowledge the reviewer's concern and will explore hierarchical anchor optimization to further enhance efficiency while maintaining performance. --- **Q4:** The data distribution from different perspectives may be quite different, and the selection of anchor points may directly affect the final clustering effect. The paper does not discuss how to ensure the robustness of anchor point selection. **A4:** We sincerely thank you for the insightful comment. To evaluate anchor robustness, we conducted ablation studies using three strategies: 1. $k$-means (default) 2. Random selection 3. Density-based sampling The comparison results of the different strategies are shown below: | **Datasets** | **ProteinFold** | **WebKB** | **100Leaves** | **CCV** | **Cifar10** | |:-------------:|:---------------:|:---------:|:-------------:|:---------:|:-----------:| | | | ACC | | | | | $k$-means | **31.71** | **90.64** | **72.88** | **20.04** | **96.62** | | Random | 30.83 | 76.48 | 69.27 | 15.53 | 96.18 | | Density-based | 30.99 | 79.63 | 69.63 | 15.53 | 96.13 | | | | NMI | | | | | $k$-means | **40.33** | **50.62** | **84.76** | **15.66** | **91.52** | | Random | 39.4 | 22.65 | 82.93 | 11.49 | 90.59 | | Density-based | 39.8 | 21.08 | 83.08 | 11.49 | 90.54 | | | | Purity | | | | | $k$-means | **37.95** | **90.64** | **74.79** | **23.45** | **96.62** | | Random | 36.87 | 81.81 | 71.3 | 19.22 | 96.18 | | Density-based | 37.1 | 80.5 | 71.61 | 19.36 | 96.13 | | | | Fscore | | | | | $k$-means | **18.71** | **86.8** | **61.84** | **11.79** | **93.46** | | Random | 18 | 71.71 | 56.72 | 9.31 | 92.62 | | Density-based | 18.15 | 73.73 | 57.09 | 9.35 | 92.55 | While $k$-means achieves 
the most stable performance, we acknowledge that advanced anchor initialization could further improve robustness. We will discuss this limitation and cite relevant techniques [1] in the final version. [1] Liu et al. Learn from View Correlation: An Anchor Enhancement Strategy for Multi-view Clustering. CVPR 2024.
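The SVD-based solve for $\mathbf{P}^{(v)}$ mentioned in A3 above resembles an orthogonal Procrustes step; below is a minimal NumPy sketch under the assumption that the subproblem reduces to maximizing $\operatorname{Tr}(\mathbf{P}^\top\mathbf{Q})$ over orthogonal $\mathbf{P}$ (the objective form and the variable names are illustrative, not the paper's exact derivation).

```python
import numpy as np

def solve_alignment(Q):
    """Procrustes-style solve: argmax_P Tr(P^T Q) over orthogonal P,
    via the SVD Q = U diag(s) V^T, giving P = U V^T.
    The SVD of an m x m matrix costs O(m^3), matching the rebuttal."""
    U, _, Vt = np.linalg.svd(Q)
    return U @ Vt

rng = np.random.RandomState(0)
m = 6
P_true = np.linalg.qr(rng.randn(m, m))[0]  # a random orthogonal target
P = solve_alignment(P_true)                # for an orthogonal Q, the solve returns Q itself
```

Because only an $m \times m$ SVD is involved, the cost stays independent of the sample count $n$, which is the point the rebuttal makes against full-graph approaches.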
Long-Short Alignment for Effective Long-Context Modeling in LLMs
Accept (poster)
Summary: This work suggests matching the long-sequence loss and the short-sequence loss, named output alignment. The author has conducted experiments to prove the effectiveness of the proposed methods. Claims And Evidence: The work supports the claims and evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense. However, there are still some questions: * **Figure 1 (a), the NoPE loss is almost zero**. Is anything wrong? * **The proposed method may degrade the performance for the shorter length, which is proved in Table 2 and Table 3.** * **The method is sensitive to hyperparameters for the loss misalign**, which is proved in Table 6. Theoretical Claims: I have checked the proofs. Experimental Designs Or Analyses: I have checked the soundness/validity of experimental designs and analyses. Supplementary Material: I have read the Appendix. Relation To Broader Scientific Literature: This work proposes Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: The proposed method is sensitive to the hyperparameter for loss misalign (proved in Table 6). Is there any way to help choose the hyperparameter? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer qzDj for the comments. We will address your questions below. --- Q1. Figure 1 (a), the NoPE loss is almost zero. Is anything wrong? A1. Thank you for your question. The NoPE loss is indeed almost zero (around 1e-5). This observation is also supported by our theoretical results in Appendix C.1. --- Q2. The proposed method may degrade the performance for the shorter length, which is proved in Table 2 and Table 3. A2. Thank you for your feedback. We apologize for the potential misunderstanding in Table 2 and 3. Table 2 and Table 3 do not evaluate performance with respect to sequence length. Instead, they show that our proposed method may experience a slight degradation compared to the baseline when trained for only 50 epochs (8.92 vs. 8.95 in Table 2, 6.89 vs. 6.92 in Table 3). Importantly, as shown in the same tables, our proposed method benefits significantly from longer training (100 and 200 epochs), demonstrating its effectiveness with sufficient training. --- Q3. The method is sensitive to hyperparameters for the loss misalign, which is proved in Table 6. Is there any way to help choose the hyperparameter? A3. Indeed, the misalignment loss is a regularization term, and like many regularization techniques, it can influence the training process—a phenomenon observed in prior work [1, 2]. As mentioned in Section 5.4 (Line 412), we suggest using a coefficient $\alpha$ between 0.1 and 0.3 as a default to mitigate the risk of over-regularization while maintaining performance. [1] Zhao et al. When Will Gradient Regularization Be Harmful? ICML 2024. [2] Srivastava et al. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 2014. --- Thanks again for your comments, and we hope our response addresses your concerns. Please let us know if you have additional questions. --- Rebuttal Comment 1.1: Comment: Thank you very much for the response. 
I would like to further discuss the Q2: Table 2 and Table 3. **Length Extrapolation Problem** Currently, length extrapolation is difficult because the Transformer cannot handle long sequences. For example, suppose a model is trained on a length of 1024, and we then use it to process a length of 4096. * Choice 1: abandon the first 3072 tokens and only use the last 1024 tokens to predict the next token. * Choice 2: Use the whole 4096 tokens to predict the next token. * If we use RoPE and validate it on language modeling, we will find that the Choice 1 PPL is lower than the Choice 2 PPL [1-2]. * **For this work:** it actually forces alignment of Choice 1 and Choice 2. **To improve the score: re-evaluate baseline** * **Evaluate baseline within training length 4096**. For example, evaluate with lengths 1024, 2048, and 4096. We conduct this experiment to check whether the proposed method degrades the performance within the training length. * **The baseline and proposed method performance without CLEX**. The CLEX has a maximum extrapolation length, which is actually similar to randomized position encoding in that it lets the model see all the potential position IDs within the maximum extrapolation length. However, the model still cannot achieve good length-extrapolation performance beyond the maximum extrapolation length. Reference: [1] Fang, L., Wang, Y., Liu, Z., Zhang, C., Jegelka, S., Gao, J., ... & Wang, Y. (2024). What is Wrong with Perplexity for Long-context Language Modeling?. arXiv preprint arXiv:2410.23771. [2] Zheng, C., Gao, Y., Shi, H., Xiong, J., Sun, J., Li, J., ... & Li, Y. (2024). DAPE V2: Process Attention Score as Feature Map for Length Extrapolation. arXiv preprint arXiv:2410.04798. --- Reply to Comment 1.1.1: Comment: **Extra Q1. (Length Extrapolation Problem)** About the two choices. **A1.** Thank you very much for your insightful comment on the comparison. 
We understand your point to be that Choice 1 often yields lower perplexity than Choice 2, possibly because the model avoids extrapolating to unseen positional embeddings beyond the training length. In other words, while Choice 2 can use more information, it also introduces a **distribution shift in positional representations**, which may lead to degraded performance. In fact, this behavioral gap between Choice 1 and Choice 2 highlights a fundamental challenge in length extrapolation: **model predictions can become inconsistent depending on the portion of the context that is used**. Our method addresses exactly this issue: by aligning the model’s output distributions across inputs of different lengths, we encourage **prediction consistency between Choice 1 and Choice 2**. To empirically validate this, we evaluated the model under all three settings on LongBench-E: - **Setting 1 (Choice 1)**: Truncate the query to 4096 tokens. Use the baseline model. - **Setting 2 (Choice 2)**: Use the full query. Use the baseline model. - **Setting 3 (Ours)**: Use the full query. Use our proposed method. The results are shown in the following table: | | Setting 1 | Setting 2 | Setting 3 | | --- | --- | --- | --- | | LongBench-E Score | 18.2 | 8.9 | 26.6 | The key result is that our method significantly outperforms both Setting 1 and 2. This shows that our approach not only enables the model to effectively utilize the full long context, but also surpasses the best workaround (truncation) in performance. --- **Extra Q2. (Evaluate baseline within training length 4096)** Evaluate with lengths 1024, 2048, and 4096. We conduct this experiment to check whether the proposed method degrades the performance within the training length. **A2.** Thank you for your helpful suggestion. We conduct additional experiments following the setting of Table 2, using a training length of 4096 and evaluating perplexity at context lengths 1024, 2048, and 4096 after 200 steps of training. 
The results are shown below: | | Length=1024 | Length=2048 | Length=4096 | | --- | --- | --- | --- | | Baseline | 6.67 | 6.12 | 5.74 | | Ours | 6.62 | 6.08 | 5.81 | The results indicate that our proposed method does not degrade performance at a shorter length. Intuitively, our proposed regularization does not penalize or distort the model's ability to model short sequences, which behaves as a gentle consistency constraint rather than an extrapolative bias. --- **Extra Q3. (The baseline and proposed method performance without CLEX**.**)** The CLEX has a maximum extrapolation length. A3. Thank you for your insightful comment. We fully agree that CLEX, like many other extrapolation methods, has a theoretical upper bound on extrapolation length. This limitation arises from the design principle behind CLEX and similar approaches—namely, exposing the model to a range of position IDs during training so that it can generalize within that range. This principle is also adopted in other state-of-the-art extrapolation methods, including PI [1], ABF [2], NTK-Aware [3] (used in CodeLlama [4]), Yarn [5], EABF [6], LongQLora [7], and CREAM [8]. As you suggested, we also evaluated our method in settings **without CLEX**, and the results are included in Table 3 of our paper (EABF, LongQLora) and A1 to Reviewer Ueg1 of our rebuttal (CREAM). For example, the table below shows results from combining our method with CREAM: | | LongBench-E | Perplexity | | --- | --- | --- | | CREAM | 23.6 | 6.62 | | Our method + CREAM | 25.2 | 5.94 | These results suggest that although existing methods have extrapolation limits, our approach can complement them by improving alignment across lengths, which is **orthogonal** to the positional encoding design itself. [1] Chen et al. Extending context window of large language models via positional interpolation. arXiv:2306.15595. [2] Xiong et al. Effective long-context scaling of foundation models. arXiv:2309.16039. 
[3] URL https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/ [4] Rozière et al. Code Llama: Open Foundation Models for Code. arXiv:2308.12950 [5] Peng et al. Yarn: Efficient context window extension of large language models. arXiv:2309.00071. [6] Zhang et al. Extending llms’ context window with 100 samples. arXiv:2401.07004. [7] Yang et al. Longqlora: Efficient and effective method to extend context length of large language models. arXiv:2311.04879 [8] Wu et al. An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding. NeurIPS 2024. --- Thanks again for your time and thoughtful feedback! We hope our response addresses your concerns. If you find our clarifications and additional results satisfactory, we would be grateful if you would consider updating your evaluation accordingly.
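To make the discussion above concrete, here is a rough NumPy sketch of a length-alignment regularizer that penalizes disagreement between next-token distributions computed from a long context and from its shorter suffix; the function names, the KL form of the penalty, and the random logits standing in for model outputs are illustrative assumptions rather than the paper's exact loss (the coefficient range 0.1-0.3 follows the authors' earlier suggestion).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q), computed row-wise over the vocabulary axis
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def alignment_loss(logits_long, logits_short, alpha=0.2):
    """Penalize disagreement between the next-token distributions a model
    produces from a long context and from its truncated suffix."""
    p_long, p_short = softmax(logits_long), softmax(logits_short)
    return alpha * float(np.mean(kl(p_long, p_short)))

rng = np.random.RandomState(0)
logits = rng.randn(4, 100)                        # 4 positions, vocab size 100
same = alignment_loss(logits, logits)             # identical predictions -> zero penalty
diff = alignment_loss(logits, rng.randn(4, 100))  # mismatched predictions -> positive penalty
```

A regularizer of this shape is minimized exactly when the long-context and short-context predictions agree, which is the consistency between Choice 1 and Choice 2 discussed in the thread.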
Summary: This paper targets the **length generalization** problem for LLMs and proposes to **shift from** the conventional perspective of **positional encodings and data structures to the output distribution** of the model. They argue that the consistency of output distributions across sequences of different lengths correlates well with length generalization performance. They name this consistency **output alignment** and propose a metric called **Long-Short Misalignment** to quantify it, and further design a **regularization loss** to explicitly enhance output alignment. They support their claims both empirically and theoretically. Claims And Evidence: - **Concern 1**: my biggest concern is on the difference between mean prediction and length prediction task. The author points out in L160-164 that (1) for the mean prediction task, the output remains in $[0,1]$ and (2) for the length prediction task, the output support set grows with longer length, (3) this difference motivates them to consider output alignment. However, **there exist many tasks whose output support set does not grow with longer length; see Delétang et al. 2022.** Transformers still struggle on these tasks. This makes me question if the motivation is valid. - **Concern 2**: On the connection between synthetic task and language modeling task. Using the mean prediction and length prediction task (Section) to motivate Section 4 seems not good, since the former is essentially a **regression task** (predicting a continuous target) while the authors extend their claim to **sequence modeling** subsequently. Reference: Delétang et al 2022. Neural Networks and the Chomsky Hierarchy. Methods And Evaluation Criteria: - **Evaluation criteria 1**: The author proposes a new perspective of improving length generalization but **does not show clearly how important this perspective is**. 
While it makes sense to compare the proposed regularization with naive fine-tuning, the paper would benefit a lot from comparing it with some representative length generalization methods. Namely, this could answer the question of whether the community should focus more on this direction because it improves more than conventional perspectives (e.g., positional encodings) do. However, I admit that this might be out of scope and is just out of my personal curiosity. Theoretical Claims: I checked Theorem 4.1 (which should be the only theoretical claim) and did not find a clear problem. However, it is possible that I missed something. Experimental Designs Or Analyses: - **Experimental Design 1**: In Section 4.1, $l_1$ and $l_2$ are sampled from $[l_{train}/2, l_{train}]$, where $l_{train}$ is the model's training context length. This can be a large range, which seems to defeat the claim (L064, L216) that $x_{[-l_1:]}$ and $x_{[-l_2:]}$ are only **slightly** different. **The motivation of output distributions being similar should only hold when the length differences are indeed small.** Also, in L431, the authors mention that aligning output distributions with **moderate** length discrepancies improves performance. I think the authors should be more careful about the consistency of their claims. - **Minor**: - In L593 (Appendix B), I was confused about how the hyper-parameters are selected; could you clarify? - In Section 3, why are those transformations selected? - Again in Section 3, what loss is used to train the model? My guess is MSE (judging from Equation 8), but this should be mentioned more explicitly. Supplementary Material: I have checked Appendices A, B, C (partial), D, E, and F. There is no other supplementary material except for the appendix. Relation To Broader Scientific Literature: - **Broader literature 1**: The regularization loss enforcing output distribution consistency across lengths broadly connects to **better fine-tuning strategies**.
- **Broader literature 2**: The idea of keeping output distributions similar for similar inputs (here, in terms of length) is distantly related to keeping model outputs similar on inputs under data augmentations (e.g., a rotated image), which is commonly used in the **test-time training literature**. Essential References Not Discussed: I believe the authors have discussed all necessary references. Other Strengths And Weaknesses: Summary of strengths and weaknesses. - **Strength**: the idea of studying length generalization through output distributions is novel, and the key claims are supported both theoretically and empirically. - **Weakness**: the motivation obtained from the synthetic tasks does not make a lot of sense to me, and the connection between the synthetic tasks and the language modeling task, as well as between their corresponding proposed approaches (i.e., Sections 3 and 4), is weak in the paper's current form. Other Comments Or Suggestions: Minor comments: - **Comment 1**: The presentation could be a bit smoother. For example, the first paragraph of Section 3 could be much shorter, since the related work and background were just discussed above. - **Comment 2**: The title of Table 7 seems wrong: it should be an ablation study on 'sampling range'. Questions For Authors: Most of my questions have been mentioned above. I would be happy to increase my score if the authors can address/clarify **Concerns 1 & 2, Experimental design 1**, and **Evaluation criteria 1**. For example, I suggest the following, respectively (the authors can provide more suitable clarifications if there are any): - **Concern 1**: I don't have a concrete idea of how Concern 1 can be addressed; I leave it to the authors. - **Concern 2**: discuss more on how the explicit reparametrization relates to the regularization loss, and how Section 3 connects with Section 4.
- **Experimental design 1**: Refine and rationalize the claim about which lengths the consistency should be enforced on: should the length differences be slight, moderate, or something else? Given that $l_{train}$ can be large, the authors could provide experiments on something like $[0.95*l_{train}, l_{train}]$. - **Evaluation criteria 1**: Add a preliminary experimental comparison between their approach and some representative length generalization approaches (e.g., compare to length-extrapolatable positional encodings). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Q1. About Delétang et al. 2022. This makes me question if the motivation is valid. A1. Thank you for your insightful comment. We agree that the difficulty of achieving length generalization in Transformers likely stems from multiple factors. In the related work section, we have acknowledged that positional encoding plays a significant role in this challenge. However, our findings suggest that output misalignment is another important factor, as evidenced by our comparisons. To clarify, we do not claim that output misalignment is the sole reason for poor length generalization. Rather, we identify it as one contributing factor and show that addressing it leads to improved generalization. While some tasks may not exhibit an expanding output support set with increasing length, our focus is on cases where such a shift does occur, as it presents a clear source of distributional mismatch. Moreover, our results demonstrate that mitigating output misalignment leads to better length generalization, even if it does not fully resolve the problem. This suggests that output alignment is a meaningful component of the broader challenge, rather than an all-encompassing solution. We will clarify this distinction in the final version. --- Q2. On the connection between the synthetic tasks and the language modeling task. A2. We acknowledge that the tasks in the two sections are not of the same type. However, our motivation for including Section 3 is not to claim that the two tasks are identical, but rather to provide a controlled setting that isolates one specific challenge in length generalization: **output misalignment**. While sequence modeling is more complex, the fundamental issue remains: **the output distribution can shift as input length varies**, which can harm generalization. Section 3 demonstrates this effect in a simpler setting where the misalignment can be analyzed more clearly.
This serves as motivation for Section 4, where we propose methods to mitigate similar issues in the more complex sequence modeling setting. We will refine the discussion to better bridge the gap between the synthetic tasks and sequence modeling. --- Q3. The paper would benefit a lot from comparing it with some representative length generalization methods. A3. Thank you for the suggestion. Since most representative length generalization methods modify RoPE, we compare our method applied on top of EABF against CREAM [1], another method that builds on EABF. The results for 100-step training are shown in the table below. | | LongBench-E | Perplexity | | --- | --- | --- | | Our method | 24.0 | 6.43 | | CREAM | 23.6 | 6.62 | We find that our method outperforms CREAM in both LongBench-E score and perplexity, suggesting that output alignment provides additional benefits beyond positional encoding modifications. [1] Wu et al. An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding. NeurIPS 2024. --- Q4. Refine and rationalize the claim about which lengths the consistency should be enforced on. The authors could provide experiments on something like $[0.95*l_{train},l_{train}]$. A4. Thank you for pointing this out. We agree that "moderate" would better reflect our intended meaning in L064 and L216. Our goal is to enforce consistency across **moderate** length differences rather than extremely small ones, as small variations may not provide sufficient regularization. We will refine the wording in the paper to better reflect this insight. Following your suggestion, we also conduct experiments using the sampling range $[0.95*l_{train},l_{train}]$ (which implies that the sampling range of $l_{extra}$ is $[1, 0.05 \cdot l_{train}]$), following the setting in Table 7. The results after 200 training steps are shown in the table below.
| Benchmark | LongBench-E | Perplexity | | --- | --- | --- | | Baseline | 23.4 | 5.82 | | $[1, l_{train}/2]$ | 25.8 | 5.77 | | $[1, 0.05l_{train}]$ | 23.9 | 5.87 | While restricting $l_{extra}$ to $[1, 0.05 \cdot l_{train}]$ still provides some improvement, the broader range $[1, l_{train}/2]$ is more effective. This suggests that keeping consistency over moderate rather than minimal length differences is beneficial. --- Q5. I was confused about how the hyper-parameters are selected; could you clarify? A5. Sure. The batch size is 256. We use SGD as the optimizer and search for the initial learning rate in {5e-5, 1e-4, 5e-4}. The learning rate follows a cosine schedule. The models are trained for 40,000 epochs. The training loss is Mean Squared Error (MSE). --- Q6. In Section 3, why are those transformations selected? A6. To align the output distribution across different input lengths, $f$ is chosen based on the following criteria: (1) $f$ must be a bijection to ensure the existence of $f^{-1}$; (2) $f$ should be a contraction mapping on $[1, +\infty)$ to reduce the discrepancy in outputs across different input lengths. --- Also thanks for the minor comments! We will address them in the final version.
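The criteria for $f$ stated in A6 can be checked numerically. Below is a minimal Python sketch (illustrative only; the function $f(x)=1/\sqrt{x}$ and the interval $[1, +\infty)$ come from the discussion above, while the sampling and tolerances are arbitrary choices):

```python
import random

# f(x) = 1/sqrt(x), the reparameterization discussed above, on [1, +inf)
f = lambda x: x ** -0.5       # maps [1, inf) onto (0, 1]
f_inv = lambda y: y ** -2.0   # inverse, used to recover the raw target

random.seed(0)
lengths = [random.randint(1, 10_000) for _ in range(1_000)]

# (1) bijection: f_inv(f(x)) recovers x (up to floating-point error)
assert all(abs(f_inv(f(x)) - x) < 1e-6 * x for x in lengths)

# (2) contraction on [1, inf): |f'(x)| = 0.5 * x**-1.5 <= 0.5 there,
# so |f(x) - f(y)| <= |x - y| for any pair of valid lengths
assert all(abs(f(x) - f(y)) <= abs(x - y) for x, y in zip(lengths, lengths[1:]))

# (3) transformed targets stay in a fixed range (0, 1], independent of length
assert all(0 < f(x) <= 1 for x in lengths)
```

This is only a sanity check of the stated properties, not the paper's training code.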
Summary: The authors introduce a novel perspective on length generalization in large language models (LLMs) by focusing on output alignment rather than conventional input-based approaches like positional encodings. Through synthetic task case studies, the authors demonstrate that models generalize better when output distributions remain consistent across varying sequence lengths. They propose the Long-Short Misalignment metric to quantify output alignment and introduce a regularization loss to enhance length generalization. Extensive experiments on synthetic and natural language tasks validate their approach, offering insights into improving LLMs' performance on longer contexts. Claims And Evidence: The claims in the paper are generally well-supported by empirical and theoretical evidence. Methods And Evaluation Criteria: Yes, though I would also like to see more comparison with other methods for improving length generalization. Theoretical Claims: Yes, and I am particularly interested in the empirical results, as I believe they are more important in the context of this research. Experimental Designs Or Analyses: Yes, the experimental setup is okay, but I would rather see more comparison with other methods. Supplementary Material: No Relation To Broader Scientific Literature: The paper builds on prior work on length generalization in Transformers, which has primarily focused on positional encodings and modeling mechanisms. Unlike these studies, it shifts the focus to output alignment, introducing the Long-Short Misalignment metric as a new predictor of generalization and proposing a regularization loss to enhance performance. Essential References Not Discussed: No Other Strengths And Weaknesses: Strength: * Focusing on output alignment for length generalization, which I believe can be another axis complementary to other methods like positional encodings. * Providing a synthetic-task analysis that motivates pursuing the idea.
* Generally speaking, I think this is an insightful paper. Weakness: * I believe a comparison with other length generalization methods is needed. It would also be worth seeing whether this technique can be combined with other techniques to further improve performance. Other Comments Or Suggestions: See questions. Questions For Authors: Question 1: In the length generalization experiment, why do you think the 1-over-square-root function works best? Is it just because the values are in the [0,1] range? What about other functions that put the output distribution in this range? Do they work as well? Question 2: I believe more comparison with other length generalization approaches would strengthen the results. Whether combining those methods with yours helps would also be one of my questions. Question 3: What positional encoding is used for the main results, e.g., in Table 1? Do you think changing the positional encoding might affect the results? A comparison of them against each other might be helpful, especially in the case where the initial positional encoding is not good at length generalization and your approach improves it, which would mean your approach can stand alone for length generalization. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer Ueg1 for appreciating the insight of our paper. We address your questions below. --- Q1. (From **Other Strengths And Weaknesses**) I believe a comparison with other length generalization methods is needed. It would also be worth seeing whether this technique can be combined with other techniques to further improve performance. A1. Thank you for your suggestion. To clarify, both Table 2 and Table 3 compare our proposed method combined with a length generalization technique against the length generalization technique alone. The results demonstrate that our method consistently improves performance when combined with existing approaches. To further address your point, we compare our method applied on top of EABF against CREAM [1], another method built on EABF. Additionally, we evaluate whether combining our method with CREAM leads to further improvements. The results for 100-step training are shown below: | | LongBench-E | Perplexity | | --- | --- | --- | | Our method | 24.0 | 6.43 | | CREAM | 23.6 | 6.62 | | Our method + CREAM | 25.2 | 5.94 | These results show that our method outperforms CREAM alone in both LongBench-E score and perplexity. Furthermore, combining our method with CREAM leads to additional improvements. This supports our claim that output alignment plays a crucial role in length generalization and can complement existing approaches. [1] Wu et al. An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding. NeurIPS 2024. --- Q2. (From **Questions For Authors**) In the length generalization experiment, why do you think the 1-over-square-root function works best? Is it just because the values are in the [0,1] range? What about other functions that put the output distribution in this range? Do they work as well? A2. Following your suggestion, we conducted additional experiments using alternative parameterization functions, including $f(x)=1/\log(x+1)$ and $f(x)=1/x$.
The results, presented in the anonymous link https://ibb.co/39M3D8Nq, show that while these functions also improve performance, $f(x)=1/\sqrt{x}$ still performs the best. The superior performance of $f(x)=1/\sqrt{x}$ is not solely due to its range being within [0,1]. Rather, we hypothesize that it is because, both empirically (Figure 1(b)) and theoretically (Theorem C.1), the test loss is proportional to the square of the input length when no parameterization function $f$ is applied. Therefore, using $f(x)=1/\sqrt{x}$ effectively normalizes the output and mitigates this issue. --- Q3. (From **Questions For Authors**) What positional encoding is used for the main results, e.g., in Table 1? Do you think changing the positional encoding might affect the results? A comparison of them against each other might be helpful, especially in the case where the initial positional encoding is not good at length generalization and your approach improves it, which would mean your approach can stand alone for length generalization. A3. Thank you for your question. For the main results in Table 1, GPT-J-6B, GPT-NeoX-20B, Llama2-7B, and Qwen-7B-8K use the original RoPE, while Yarn-Llama2-7B-8K and CLEX-Llama-4K use modified RoPE and achieve better length generalization. Changing the positional encoding can indeed affect the results, as seen in Table 1: models using modified RoPE tend to perform better in terms of length generalization. However, even among models using the same RoPE (e.g., GPT-J-6B, GPT-NeoX-20B, Llama2-7B, and Qwen-7B-8K), we observe varying degrees of length generalization. This suggests that while positional encoding plays a role, it is not the sole factor. As shown in Table 11, our method does not lead to significant performance gains when using the original RoPE, likely because using the original RoPE leads to slow convergence in the length generalization task [1, 2].
However, when we use modified RoPE, our method provides notable improvements in length generalization, as shown in Table 1. This suggests that while the choice of positional encoding does impact performance, our method works best when combined with a modified version of RoPE that better supports length generalization. [1] Chen et al. Extending Context Window of Large Language Models via Positional Interpolation. arXiv:2306.15595 [2] Zhang et al. Extending LLMs' Context Window with 100 Samples. arXiv:2401.07004 --- Thanks again for your comments; we hope our response addresses your concerns. Please let us know if you have additional questions.
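As background for the "modified RoPE" mentioned in A3: such modifications typically rescale the rotary frequencies, e.g., by enlarging the base as in NTK-style scaling. A minimal sketch follows (the dimension and scale factor are illustrative choices, not values from the paper):

```python
def rope_freqs(dim, base=10000.0):
    """Standard RoPE per-pair frequencies: theta_i = base ** (-2*i/dim)."""
    return [base ** (-2 * i / dim) for i in range(dim // 2)]

orig = rope_freqs(8)                       # original RoPE frequencies
scaled = rope_freqs(8, base=10000.0 * 8)   # enlarged base (illustrative factor)

# Enlarging the base slows the rotation at every pair except i = 0,
# which is one way modified RoPE variants stretch usable context length.
assert all(s <= o for o, s in zip(orig, scaled))
assert orig[0] == scaled[0] == 1.0
```

Concrete schemes (YaRN, CLEX, positional interpolation) rescale differently, but all operate on these frequencies.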
Summary: This paper studies how aspects of the output distributions of LLMs relate to length generalization and performance on long-context tasks. First, the authors show that train-test mismatch in the output space can lead to poor generalization on synthetic tasks, and that reformulating the tasks to reduce this mismatch improves performance. Second, on non-synthetic natural language tasks, the authors propose a metric called "long-short misalignment" which measures the divergence between the output distributions of a model when its original context window is truncated (up to 50%). This measures the model's invariance to this truncation. There is a positive correlation between models that are more invariant (and therefore less sensitive to earlier context) and performance on long-context modeling tasks. The authors then propose a regularizer based on this metric, which when applied during training can improve performance on tasks such as LongBench-E. ## Update after rebuttal I would strongly suggest including the expanded Table 5 in the revised version, and reducing Section 3 to improve the presentation, as proposed in the response. While still a bit unintuitive to me (I would have thought more tasks require sensitivity to earlier context), the empirical results seem reasonably strong across most benchmarks, and the authors show that hyperparameters can be chosen such that sensitivity to earlier context is not entirely eliminated. Assuming the pledged changes will be implemented, I will increase my score from 2 to 3. Claims And Evidence: The main empirical claims seem to be supported. The proposed regularizer improves performance on benchmarks such as LongBench-E. Methods And Evaluation Criteria: The proposed benchmarks seem reasonable. Theoretical Claims: There is a theoretical claim in Section 4.1, although it is only stated imprecisely in the main paper and I did not review the details in the appendix.
Experimental Designs Or Analyses: The experimental design seems reasonable. Supplementary Material: No. Relation To Broader Scientific Literature: To the best of my knowledge, the proposed objective is novel. Essential References Not Discussed: This is not critical, just a connection that seemed relevant as I was reading the paper: - There is a large body of work in NLP related to developing new methods and Transformer variants to improve length generalization, e.g., on length-based splits of SCAN (https://arxiv.org/abs/1711.00350). A focus in this line of work has been on understanding the relation between the degree of context sensitivity and out-of-distribution generalization. For example, various approaches have injected CFG-like biases that reduce context sensitivity. The methods that the authors propose in the submitted paper seem related in that they attempt to reduce context sensitivity, in this case encouraging models to be invariant to context earlier in the context window. Maybe there is an interesting connection to be discussed. Other Strengths And Weaknesses: Strengths: - The paper studies how a metric capturing how invariant a model's outputs are to truncating the input context predicts performance on various long-context and length generalization settings. - Inspired by this, the authors propose a regularizer that encourages model outputs to be invariant to truncating early context. This appears to improve generalization performance on several tasks. Weaknesses: My main concerns are related to the paper presentation, which I think could be considerably improved. I think it would also be useful to better understand the weaknesses of the method, since the proposed regularization (which encourages models to ignore early context) seems like it must surely be harmful in some settings. It would be good to clarify this so practitioners can have better intuition for when to apply the proposed method. - Section 3 was a bit distracting to me as a reader.
It is not surprising that reducing train-test mismatch in the output space is valuable for out-of-distribution generalization, including length generalization. The results did not seem very closely connected to the proposed methods in Section 4, despite the attempt of Table 8 to clarify this. Section 3 relates to a property of the *task*, i.e., re-formulating the task to reduce the train-test distribution shift of the outputs, and is well supported in theory. The methods in Section 4 relate to a property of the *model*, i.e., that it should be largely invariant to earlier context. This is less clearly useful from a theoretical perspective, and it is therefore a bit surprising that it is effective. I think the paper would be stronger without Section 3, at least in the main paper. - The proposed terminology of "Output Alignment" is quite confusing. First, "Alignment" is overloaded with various techniques for model post-training, e.g., RLHF, and the proposed technique has nothing to do with this. The method is most clearly understood as encouraging invariance of output distributions when the context window is truncated, i.e., encouraging a form of context insensitivity. - It would be good to understand the limitations of the proposed method. For example, in cases that require sensitivity to information early in the context window, the proposed regularization should intuitively harm performance. The ablations in Section 5.3 could be expanded to highlight potential weaknesses of the method (see questions for authors). Other Comments Or Suggestions: * nit: It would be helpful to make the figures easier to read without color, i.e., use different line shadings or styles in Figure 1. * One of the models looks very underfit in Figure 1a; it is perhaps worth investigating the training configuration. * The introduction frames the proposed method as a completely different perspective from prior work investigating alternative positional encodings.
However, it seems both lines of work have considered ways to reduce the sensitivity of models to potentially irrelevant context, e.g., the biases of methods such as RoPE that discourage long-range dependencies. It could be interesting to see if different positional encoding schemes lead to different degrees of "long-short misalignment" in a predictable way. Questions For Authors: 1. Section 5.3 - Intuitively, the proposed regularization should be harmful in cases where necessary context appears early in the context window, so it is surprising that the scores for "Fact Depth" 0% and 25% are comparable to the baseline. Does this change if the regularization coefficient is increased? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Q1. Maybe there is an interesting connection to be discussed with SCAN. A1. Thanks for highlighting these works! Indeed, SCAN-based methods leverage CFG-like rules to generate training data that reduce context sensitivity, which can help improve out-of-distribution generalization. This is indeed related to the concept behind our proposed output matching, which ensures semantic consistency across different input lengths. However, there are key differences: - Different Focus. SCAN-based approaches primarily operate from the input perspective by generating structured data, while our method focuses on output matching and introduces a fresh output perspective. - Better Efficiency. SCAN-based approaches require manual CFG design, making them time-consuming and less scalable, while our method offers a more efficient solution with only ~5% additional computation. We will clarify this distinction and discuss potential connections to these prior works in the revised paper. --- Q2. The results in Section 3 did not seem very closely connected to the proposed methods in Section 4. A2. Thank you for your insightful comments. We clarify that Section 3 is crucial to our paper for the following reasons: - Fundamental Motivation for Length Generalization Strategies: Section 3 highlights that one of the key challenges in length generalization is that the output distribution shifts when the input length changes. This sets the stage for why controlling output alignment is crucial. - Bridging Task Reformulation and Model-Based Regularization: Both Section 3 and Section 4 aim to reduce output distribution discrepancies across different input lengths. When we have a strong prior about the task, we can reformulate it directly (as in Section 3). However, when such a prior is unavailable, we instead introduce a regularization approach (Section 4) to encourage the model to implicitly learn this property.
Without Section 3, the necessity of such regularization may be less clear. By including both, the paper offers a broader perspective on addressing length generalization, rather than focusing solely on a single heuristic. Thank you again for raising this point; we will make Section 3 more concise to avoid distracting from the main contributions of Section 4. --- Q3. The proposed terminology of "Output Alignment" is overloaded with various techniques for model post-training. A3. Thanks for pointing it out! We will use "Output Matching" instead to avoid such confusion. --- Q4. It would be good to understand the limitations of the proposed method. The ablations in Section 5.3 could be expanded to highlight potential weaknesses of the method. Does this change if the regularization coefficient is increased? A4. Thanks for your suggestions! We acknowledge the importance of discussing the limitations of our method and have addressed this in Section 5.4 (line 408). Following your suggestions, we conduct additional experiments to further analyze the effect of $\alpha$ based on the ablations in Table 5. Specifically, we extend the range of $\alpha$ to include 0, 0.1, 0.3, and 0.5. The results are shown in the table below. | $\alpha$ | Depth=0% | Depth=25% | Depth=50% | Depth=75% | | --- | --- | --- | --- | --- | | 0 | 75 | 64 | 30 | 69 | | 0.1 | 73 | 64 | 38 | 74 | | 0.3 | 70 | 62 | 38 | 76 | | 0.5 | 61 | 54 | 36 | 72 | The results indicate that increasing $\alpha$ leads to a decline in scores for Depth = 0% and 25%, which confirms the potential drawback of excessive regularization. This conclusion is also consistent with the claim in Section 5.4. --- Q5. One of the models looks very underfit in Figure 1a, perhaps worth investigating the training configuration. A5. The model that appears underfit in Figure 1(a) uses Alibi positional encoding.
We indeed observe that the Alibi-based model converges more slowly than those using other positional encodings, likely due to the complex inductive bias introduced by Alibi. --- Q6. It could be interesting to see if different positional encoding schemes lead to different degrees of "long-short misalignment" in a predictable way. A6. Thank you for your thoughtful insight. Regarding the connection between positional encoding schemes and long-short misalignment, we observe that most modern LLMs rely on RoPE, often with modifications to its hyperparameters. However, as shown in Table 1, even among models using RoPE, length generalization ability varies significantly. For example, while GPT-J-6B, GPT-NeoX-20B, Llama2-7B, and Qwen-7B-8K all use RoPE, only Qwen-7B-8K demonstrates strong length generalization and low long-short misalignment. Furthermore, Yarn-Llama2-7B-8K and CLEX-Llama-4K, which employ modified versions of RoPE, also exhibit improved generalization and reduced misalignment. These observations suggest that while positional encoding choices influence long-short misalignment, they do not fully determine it.
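For readers following this thread, the long-short misalignment idea can be sketched as a divergence between the next-token distributions produced for a long context and for its truncation. The symmetric-KL form and all names below are illustrative assumptions, not necessarily the paper's exact definition:

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kl(p, q):
    """KL(p || q) for two discrete distributions of equal support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def misalignment(logits_long, logits_short):
    """Symmetric KL between next-token distributions for the full context x
    and its truncation x[-l:] (form assumed here for illustration)."""
    p, q = softmax(logits_long), softmax(logits_short)
    return 0.5 * (kl(p, q) + kl(q, p))

random.seed(0)
logits_long = [random.gauss(0, 1) for _ in range(8)]   # stand-in model outputs
logits_short = [v + random.gauss(0, 0.5) for v in logits_long]

# Identical outputs give zero misalignment; differing outputs give a positive
# value, so it can act as a regularizer: total = lm_loss + alpha * misalignment
assert misalignment(logits_long, logits_long) < 1e-12
assert misalignment(logits_long, logits_short) > 0
```

In practice the divergence would be computed from the model's logits at matched prediction positions and averaged over sampled truncation lengths.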
Hierarchical Masked Autoregressive Models with Low-Resolution Token Pivots
Accept (poster)
Summary: This paper proposes a Hierarchical Masked Autoregressive Model based on MAR (Li et al., 2024) by introducing a low-resolution modeling phase, which is claimed to provide global structure guidance for generating dense image tokens. Specifically, 2x-lower-resolution image tokens are first modeled by a scale-aware transformer block in a bi-directional masked-modeling manner, followed by an MLP diffusion head to predict the continuous latents. Then, in the second phase, the predicted latents, rather than the ground-truth tokens, are used as additional conditions for generating high-res tokens with the same scale-aware transformer but a different transformer-based diffusion head. The proposed model is tested on ImageNet and MS-COCO for C2I and T2I generation and compared with existing approaches. **update after rebuttal**: Based on the reviews, rebuttal, and discussions, I find this paper to be borderline. While the additional information and experiments provided in the rebuttal help strengthen the paper within its reasonable scope, I remain unconvinced by its level of novelty. That said, I have raised my final rating to a weak accept. Claims And Evidence: The essential claims, including the lack of global context, training-inference discrepancy, independent sampling issue, and speed/accuracy trade-off, are well-justified and largely supported by empirical results within the scope of this paper. However, whether these claims are conclusive for large-scale problems or models is unclear. For example: - The **training-inference discrepancy** issue is only addressed for passing tokens from phase 1 to phase 2, while during the autoregressive process of stage 1 and stage 2, teacher-forcing learning is still applied. - It is unclear how **speed and accuracy** behave for longer visual sequences. For example, will the additional phase 1 and the Diffusion Transformer head in phase 2 lead to much higher computational cost for higher-res images?
Methods And Evaluation Criteria: The proposed method is largely based on MAR (and incorporates many ideas from the VAR paper about scales). It is reasonable, well-motivated, and clearly feasible. The paper follows MAR and uses the most widely applied ImageNet dataset to evaluate 256x256-res C2I generation. The paper also uses MS-COCO to evaluate T2I results. Both are very common and reasonable practices. Theoretical Claims: The essential claims, including the lack of global context, training-inference discrepancy, independent sampling issue, and speed/accuracy trade-off, are well-justified and *largely supported* by empirical results *within the scope of this paper*. I have mentioned potential concerns in Claims And Evidence and Experimental Designs Or Analyses in my review; please refer to those sections for details. Experimental Designs Or Analyses: 1. I am not satisfied with the T2I experiment. The model is only tested on MS-COCO and evaluated only with FID, which can hardly reflect the actual performance. - Other, more comprehensive benchmarks should be evaluated, such as T2I-CompBench (Huang et al., 2023) and GenEval (Ghosh et al., 2023). - Only a small-scale model (Hi-MAR-S) is tested. By the way, the exact configuration of Hi-MAR-S is not specified in Table 1. 2. Is the diffusion loss at the second stage the only training objective? How many steps are the diffusion heads trained for? Is it the same as MAR (1000 steps)? And do the models in Figure 4 follow this training setup? 3. Table 2 (w/ CFG) shows that as the model size increases, the gap between Hi-MAR and MAR shrinks. I am concerned that the proposed method is not scalable to larger images and large models. Supplementary Material: N/A. No supplementary material has been submitted. Relation To Broader Scientific Literature: The proposed method is largely based on MAR (Li et al., 2024) and incorporates many ideas about scales from the VAR (Tian et al., 2024) paper.
In short, MAR proposes a diffusion head on top of a masked-modeling bidirectional autoregressive model for predicting continuous latents instead of discrete codes. VAR proposes a scale-wise instead of token-wise autoregressive paradigm. The key contribution of this paper, introducing an extra smaller-scale autoregressive modeling phase into the single-stage MAR framework, is highly relevant to VAR's scale-wise idea. Essential References Not Discussed: This paper includes and clearly refers to most of the essential literature, and I have no problem with this. There are many concurrent works about AR+continuous latent and AR+multi-scale (e.g., Fluid, Infinity, FlowAR, HART, FlexVAR, FractalAR, ...) that can be added to a later version of this paper. Other Strengths And Weaknesses: I like the overall idea of introducing scale-wise modeling to MAR, along with other nice adaptations. To me, this is a safe innovation but somewhat incremental. My major concern is that the scope and depth of this study are too limited to reveal the potential of the approach. Other Comments Or Suggestions: At the current stage, I think the most doable items are expanding the T2I experiment, providing more visualizations, and improving the analysis to include more insightful studies, such as the impact of the phase 1 resolution and other design choices of the diffusion heads. To strengthen this paper, I expect experiments on longer visual sequences (higher-resolution images) and larger models to justify the proposed method's scalability, efficiency, and generalization ability. Questions For Authors: I have specified my concerns and questions in the sections above; please refer to those parts. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Training-inference discrepancy issue** Yes. We mainly focus on the design of the hierarchical masked autoregressive model, which addresses the training-inference discrepancy in passing tokens from phase 1 to phase 2, while the discrepancy caused by the inherent autoregressive process within each stage still remains. Such discrepancy also occurs in most existing autoregressive models. We will discuss this.

**Q2: Speed and accuracy for longer visual sequences** As suggested, we experimented at the larger 512 resolution: Hi-MAR-L (FID: 1.62) outperforms MAR-L (FID: 1.73), while its computational cost is 20.9% less than MAR-L's. We will add this.

**Q3: More comprehensive benchmarks** Thanks. As suggested, we evaluate the T2I model on the T2I-CompBench and GenEval benchmarks. Due to limited computational resources, we compare our Hi-MAR with other state-of-the-art methods (e.g., U-ViT-S and AutoNAT-S) that are trained on MS-COCO with similar parameter sizes. Larger models with billions of parameters trained on billions of images (e.g., SDXL, SD3) are not included. As shown in the following tables, our Hi-MAR consistently outperforms other baselines of comparable parameter size. We will add this.

| Method | Single Obj. | Two Obj. | Counting | Colors | Positions | Color Attri. | Overall |
| --------- | ----------- | -------- | -------- | ------ | --------- | ------------ | ------- |
| U-ViT-S | 83.75 | 18.69 | 16.25 | 38.03 | 3.00 | 0.75 | 26.75 |
| AutoNAT-S | 81.25 | 18.18 | 16.56 | 36.70 | 3.75 | 1.25 | 26.28 |
| Hi-MAR-S | 89.06 | 22.73 | 17.50 | 44.41 | 2.50 | 2.25 | 29.74 |

| Method | Color | Shape | Texture | Spatial | Non-Spatial | Complex |
| --------- | ------ | ------ | ------- | ------- | ----------- | ------- |
| U-ViT-S | 0.3626 | 0.2682 | 0.3474 | 0.0353 | 0.2693 | 0.2219 |
| AutoNAT-S | 0.3225 | 0.2466 | 0.3389 | 0.0453 | 0.2468 | 0.2024 |
| Hi-MAR-S | 0.3862 | 0.2782 | 0.3945 | 0.0409 | 0.2690 | 0.2313 |

**Q4: Configuration of Hi-MAR-S** For a fair comparison, Hi-MAR-S follows the configuration of U-ViT-S/2 (Deep); its exact configuration is shown in the following table. We will add this to Table 1.

| | | Hi-MAR Transformer | | Diff. Head1 | | Diff. Head2 | |
| -------- | ------- | ------------------ | ------- | ----------- | ------- | ----------- | ------- |
| Model | #Layers | Hidden size | #Layers | Hidden size | #Layers | Hidden size | #params |
| Hi-MAR-S | 17 | 512 | 5 | 512 | 5 | 512 | 108M |

**Q5: Training setup** To be clear, we employ only the diffusion loss as the training objective in both stages. Following MAR, we set the maximum timestep to 1,000 for the diffusion heads. The models in Figure 4 follow this training setup.

**Q6: The gap between Hi-MAR and MAR reduces; the scalability of Hi-MAR to larger images and larger models** Performance on ImageNet is almost saturated, and it is relatively difficult to achieve a large margin of improvement. In response to this comment, we conducted the suggested experiments at a larger resolution (i.e., 256 -> 512), and the FID reaches 1.62, improving over MAR-L by 0.11. Furthermore, we scale both MAR and Hi-MAR to 2B parameters, and the FIDs of MAR and Hi-MAR are 1.49 and 1.45, respectively.
The results basically demonstrate that Hi-MAR is scalable to both larger images and larger models. We will add this.

**Q7: Concurrent works** We appreciate the suggested concurrent works and are happy to discuss them in the revised version.

**Q8: Safe innovation** Please refer to Q1 of Reviewer SRgF for more discussion of the technical contribution relative to existing works.

**Q9: More visualizations** As suggested, we provide more visualization results at the [link](https://anonymous.4open.science/r/HiMAR_Visual/README.md). We will add this.

**Q10: Impact of phase 1 resolution** We experimented with a smaller resolution (i.e., 64x64) in phase 1, and the FID score degrades to 2.06 because the quality of images generated at such a small resolution is relatively low. Therefore, we choose the 128x128 resolution for the first phase. We will add this.

| Phase 1 Resolution | FID |
| ------------------ | ---- |
| 64x64 | 2.06 |
| 128x128 | 1.93 |

**Q11: Other design choices of the diffusion heads** We also experimented with replacing the self-attention layer with cross-attention in the Diffusion Transformer head to mine the context among all tokens, and the FID drops slightly by 0.05 compared to the final version of Hi-MAR. We will discuss this in the ablation study.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for responding to my questions and providing additional results that further support the claims. I don't have any further questions, and I will decide the final rating based on all the reviews, rebuttals, and discussions. Thanks.
Summary: This paper proposes a hierarchical masked AR visual generation model with low-resolution tokens as pivots. By first generating low-resolution image tokens, which provide a global structure, the second generation phase can benefit from the global context. Besides, the proposed Diffusion Transformer head further improves the results. Experimental results show that Hi-MAR can obtain better performance than baselines. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical claims Experimental Designs Or Analyses: Yes. Supplementary Material: No Supplementary Material Relation To Broader Scientific Literature: I think the key problem with Hi-MAR is its limited novelty relative to Muse [1], VAR [2], and HART [3]. 1. Muse also uses a super-resolution transformer to generate the final image, with low-resolution information fused by cross-attention. 2. VAR proposes to apply multi-scale generation to image generation. Hi-MAR uses only one low-resolution scale; I think this is a special case of VAR. 3. HART (missing reference) uses residual diffusion to improve the performance of VAR. Considering these three papers, the novelty of Hi-MAR is quite limited. [1] Muse: Text-to-image generation via masked generative transformers. ICML 2023. [2] Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction. NeurIPS 2024. [3] HART: Efficient Visual Generation with Hybrid Autoregressive Transformer. ICLR 2025. Essential References Not Discussed: See Relation To Broader Scientific Literature. Other Strengths And Weaknesses: Strengths: 1. The paper is easy to follow, and the idea is intuitively easy to understand. 2. The performance is impressive. 3. It reduces the AR steps in the second stage and improves the generation speed. Weaknesses: 1. Limited novelty. See Relation To Broader Scientific Literature. 2. Missing speed comparison with VAR and HART. Other Comments Or Suggestions: N/A Questions For Authors: 1.
Have you tried other resolution settings, like 128->512, 256->512? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Novelty** Thanks. We summarize the differences between Hi-MAR and conventional multi-scale generation models as follows: 1) During training, conventional models (i.e., Muse, VAR, HART) commonly utilize the ground-truth low-resolution visual tokens directly to guide the next-phase prediction. Instead, Hi-MAR takes as condition the conditional tokens estimated by the Hi-MAR Transformer from the low-resolution visual tokens. Such a design mitigates the training-inference discrepancy, as discussed in Section 3.2. As shown in Table 4, the FID score improves from 2.28 to 2.07 when adopting the conditional tokens to mitigate this discrepancy. 2) Both VAR and HART model the multi-scale probability distribution via a shared Transformer without additional guidance. This leaves the inherently different peculiarities of each scale in autoregressive modeling not fully exploited, resulting in a sub-optimal solution for multi-scale token prediction. In contrast, our Hi-MAR incorporates a scale-aware Transformer block that provides scale guidance to the Transformer tailored to each phase. 3) Both VAR and Muse discretize images with VQGAN, resulting in severe information loss. Instead, Hi-MAR adopts a continuous tokenizer via a diffusion loss, overcoming the poor generation upper bound caused by vector quantization. 4) Muse utilizes two different models for low/high-resolution image generation, and the two models are trained separately. Instead, our Hi-MAR jointly optimizes the probability distribution for low/high-resolution tokens with a shared scale-aware masked autoregressive Transformer and two small diffusion heads, which is more parameter-efficient. 5) In contrast to HART, which utilizes an MLP-based diffusion head to model each token's probability distribution individually, Hi-MAR devises a Diffusion Transformer head that exploits self-attention to model the interdependency among tokens.
Note that we appreciate the suggested reference to the concurrent work HART (published at ICLR 2025, with camera-ready deadline on Mar 14, 2025). We will add the discussion.

**Q2: Speed comparison with VAR and HART** As suggested, we compare the speed of our Hi-MAR with the mentioned VAR and HART in the following table. The results basically demonstrate that Hi-MAR achieves superior performance over VAR and HART at comparable computational cost. We will add this in the revision.

| Method | #Para. | FID | Phase_1 Steps | Phase_2 Steps | Diff. Head_1 Step | Diff. Head_2 Step | Inference Time/per image |
| -------- | ------ | ---- | ------------- | ------------- | ----------------- | ----------------- | ------------------------ |
| VAR-d20 | 600M | 2.57 | 10 | &cross; | &cross; | &cross; | 0.14489 |
| HART-d20 | 649M | 2.39 | 10 | &cross; | 8 | &cross; | 0.15401 |
| DiT-XL/2 | 675M | 2.27 | 250 | &cross; | &cross; | &cross; | 0.78970 |
| MAR-B | 208M | 2.31 | 256 | &cross; | 100 | &cross; | 0.52641 |
| Hi-MAR-B | 244M | 2.00 | 32 | 4 | 100 | 50 | 0.13587 |
| Hi-MAR-B | 244M | 1.93 | 32 | 4 | 100 | 250 | 0.28552 |

**Q3: Other resolution settings** Thanks. We experimented by equipping Hi-MAR-L with a larger resolution (i.e., 256 -> 512), and the FID score reaches 1.62, outperforming MAR-L at 512 resolution by 0.11. The result again validates the effectiveness of hierarchical autoregressive modeling and of modeling the interdependency among tokens. We will add this.
Summary: The paper introduces Hi-MAR, a hierarchical masked generative model for visual generation. Hi-MAR first predicts low-resolution image tokens as global structural pivots, which then guide the next phase of dense token prediction, enhanced by a Diffusion Transformer head for better global context modeling. Experiments on image generation tasks show that it outperforms baselines and is computationally efficient. Claims And Evidence: Fine. The claims are intuitive and easy to follow. Methods And Evaluation Criteria: Fine. Theoretical Claims: N/A Experimental Designs Or Analyses: Given the success of recent next-scale prediction approaches, such as VAR, it is not surprising that the proposed cascaded method could be beneficial. However, its practical advantage over VAR remains unclear. Additionally, the method explicitly adopts a "two-stage" approach—would further stages yield additional improvements? More experiments and analyses may be required to address the two concerns. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: The reviewer acknowledges that the paper's technical contributions to visual generation are acceptable but not particularly significant, considering the success of VAR. The approach appears to be a specific "two-scale" variant of VAR, replacing discrete tokenization (i.e., VQ) with continuous diffusion, inspired by MAR. The authors are encouraged to provide deeper insights into the discussion and include necessary comparisons to "VAR w/ diffusion heads" to assess whether additional stages could lead to further improvements. As I am not an expert in this area, I will seek input from other reviewers for a more objective evaluation. This recommendation is not final. Other Comments Or Suggestions: Has the autoregressive relationship in Figure 1(a) been drawn incorrectly? Is the causal relationship represented by the black connecting lines reversed? 
It seems like a left-right mirror flip of the figure would be correct. ##### Post-rebuttal: Thanks to the authors for the detailed responses. I am ok with the rebuttal. I hope the discussions can be incorporated into the revision. Good luck. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Discussion on VAR and comparisons to "VAR w/ diffusion heads"** Thanks. We summarize the contributions of our Hi-MAR against VAR in two points: 1) VAR utilizes the low-resolution visual tokens directly to guide the next-phase prediction, which causes training-inference discrepancy as discussed in Section 3.2. During training, VAR takes the ground-truth low-resolution tokens as the condition for the next phase. Since no ground-truth token is available at inference, VAR has to take the predicted noisy low-resolution tokens as the condition, resulting in training-inference discrepancy. Instead, to mitigate this discrepancy, Hi-MAR takes the conditional tokens estimated by the Hi-MAR Transformer from the low-resolution visual tokens to trigger the second phase. As shown in Table 4, when replacing the pivots of visual tokens (as similarly used in VAR) with our conditional tokens, the FID score clearly improves from 2.28 to 2.07, which validates the effect of the conditional tokens in mitigating the training-inference discrepancy. 2) VAR models multi-scale probability distributions via a shared Transformer without additional guidance. This leaves the inherently different peculiarities of each scale in autoregressive modeling not fully exploited, resulting in a sub-optimal solution for multi-scale token prediction. In contrast, our Hi-MAR incorporates a scale-aware Transformer block that provides scale guidance to the Transformer tailored to each phase. Moreover, as suggested, we experimented with implementing VAR with diffusion heads, and "VAR w/ diffusion heads" (FID: 2.67) manages to outperform VAR-d16 (FID: 3.30). Nevertheless, the performance of "VAR w/ diffusion heads" (FID: 2.67) is still inferior to our Hi-MAR (FID: 1.93), which demonstrates the effectiveness of hierarchical autoregressive modeling. We will add all of these discussions in the revision. **Q2: Would further stages yield additional improvements** Appreciate this comment.
We experimented by stacking one more stage (64x64 resolution) ahead of the low-resolution stage (128x128 resolution), and the FID score only fluctuates within the range of 0.03. We speculate that the low-resolution stage (128x128 resolution) has already provided sufficient global structure guidance for the next high-resolution stage. The use of an additional stage (64x64 resolution) might introduce unnecessary/redundant global structure information. We will discuss this in the revised version. **Q3: Autoregressive relationship of Figure 1 (a)** Thanks. To be clear, Figure 1(a) correctly illustrates the left-to-right autoregressive relations among the image token sequence. That is, each predicted token at position *i* (see the bottom output sequence) can only be emitted conditioned on the previous input tokens at positions less than *i* (see the top input sequence). We will clarify this in revision. --- Rebuttal Comment 1.1: Comment: Thank you to the author for the clarification, which has resolved some of my concerns. I will maintain my rating as Borderline. Overall, I acknowledge the author's exploration in engineering and the empirical results. However, I remain concerned about the significance of the technical improvements of Hi-MAR compared to VAR. From the perspective of technical contribution, this is a fairly marginal paper (meaning it has a probability of being either accepted or rejected at any top ML/CV conference). Technically, it is a combination and repackaging of existing work, with incremental contributions, including: 1. Drawing inspiration from the multi-scale approach of VAR. Specifically, the authors explore a two-stage design. 2. At each scale, the authors adopt a non-autoregressive masked prediction task, similar to the MaskGIT and MAGViT series. 
Compared to VAR’s “one-step prediction” approach at each scale, this can be seen as sacrificing some *computational efficiency* by using more inference steps in exchange for improved *prediction quality* within the scale. 3. Additionally, to enhance the visual quality of the generated data, the authors replace the VQ operation with a diffusion head inspired by MAR, improving fidelity. #### Some further suggestions: Conduct analysis experiments to evaluate the trade-off between effectiveness and efficiency. Specifically, starting from a two-stage VAR, progressively modify the approach by: - Changing "directly predicting the next-scale feature map" to "iteratively predicting the next-scale feature map via masked prediction." - Replacing the VQ head with a diffusion head. Evaluate the impact of these changes on both performance and efficiency. Besides, Figure 4 in the main paper should include VAR for comparison. #### Question: I understand the author’s mention of the "training-inference discrepancy in VAR"—I believe this is a common issue for most autoregressive models, namely, the accumulation of errors during inference. However, I am still unclear on why Hi-MAR alleviates this issue. The accumulation of errors should still occur in Hi-MAR’s inference process, whether during intra-scale multi-step masked prediction or next-scale prediction. --- Reply to Comment 1.1.1: Comment: ### Q1: Technical contribution Appreciate your response. We would like to provide a more detailed clarification on technical contribution of our Hi-MAR, especially compared to existing works like VAR: 1. **The use of conditional tokens to alleviate training-inference discrepancy across scales is novel**: We identify and alleviate the training-inference discrepancy across scales, which is a common yet underexplored issue in multi-scale prediction models (e.g., VAR, FlowAR, Muse). 
As shown in Table 4 of our paper, simply using low-resolution visual tokens as pivots to guide denser token prediction leads to a marginal FID improvement (2.31 to 2.28), due to the inconsistency of pivot tokens between training and inference. To mitigate this, we propose using low-resolution **conditional tokens** generated by the Hi-MAR Transformer to guide denser token prediction. This strategy ensures consistency between training and inference. As shown in Table 4, replacing the visual token pivots (as used in VAR) with our conditional tokens yields a notable FID improvement (2.28 to 2.07), which validates our proposal. A detailed explanation is provided in Q4 below. 2. **The proposed scale-aware Transformer block is novel.** We introduce a scale-aware Transformer block that provides tailored scale guidance for each phase. This design is novel, effective, and not introduced in VAR. It is also worth noting that our hierarchical modeling can be easily applied to most VAEs, without the need to train a multi-scale autoencoder as VAR requires. We therefore kindly invite Reviewer kLHw to reconsider the assessment of Hi-MAR's essential technical contributions in light of the above discussions. ### Q2: Effectiveness-efficiency trade-off As suggested, we show a detailed comparison of effectiveness and efficiency across different methods in this new table (see the [link](https://anonymous.4open.science/r/HiMAR_FigTab/README.md)). Starting from a two-stage VAR, we progressively apply modifications: 1) Adopt masked autoregression for each stage (row 2 in this table); 2) Add a diffusion head for each scale (row 3 in this table). Note that we change the dimension and depth of VAR so that the parameter count of the modified VAR is similar to Hi-MAR-B's. As shown in this table, while these modifications improve performance, they still lag behind Hi-MAR in both accuracy and speed.
Notably, even with these modifications, the best FID 2.30 of these variants only approaches that of Hi-MAR pivoting on ground-truth visual tokens (FID 2.28), whereas Hi-MAR further improves to 2.07 by introducing conditional tokens, highlighting the importance of addressing training-inference discrepancy. ### Q3: Figure 4 should include VAR Thanks. As suggested, we include VAR for comparison in the revised Figure 4 (see this [link](https://anonymous.4open.science/r/HiMAR_FigTab/README.md)). We will add this in revision. ### Q4: Training-inference discrepancy We clarify the discrepancy issue and how Hi-MAR resolves it. Let us consider a simplified two-scale setting. In VAR: - **Training**: The model learns to predict large-scale tokens $x_l$ conditioned on ground-truth small-scale tokens $x_s$, i.e., $P(x_l|x_s)$. - **Inference**: The model first predicts small-scale tokens $\hat{x}_s$, which may contain errors, and then uses them to predict $x_l$, i.e., $P(x_l|\hat{x}_s)$. This mismatch between $x_s$ (GT) in training and $\hat{x}_s$ (noisy) in inference introduces a training-inference discrepancy, leading to error accumulation and degraded generation quality. In Hi-MAR: - **Training**: In the first phase, a proportion of small-scale visual tokens $x_s$ are masked and the remaining unmasked ones $x_{s,v}$ are fed into Hi-MAR Transformer. The Hi-MAR Transformer outputs conditional tokens $z_{s,m}$, which are further fed into the diffusion head for predicting the masked tokens $x_{s,m}$ as MAR. In the second phase, similar masking procedure is also applied to the denser visual tokens $x_l$. Instead of using ground-truth $x_s$, Hi-MAR Transformer takes the small-scale conditional tokens $z_{s,m}$ from the first phase along with the unmasked visual tokens $x_{l,v}$ as input to generate denser conditional tokens $z_{l,m}$. Finally, the Diffusion Transformer head conditioned on $z_{l,m}$ is adopted to predict the denser masked tokens $x_{l,m}$. 
- **Inference**: We follow the same procedure (i.e., first predict small-scale conditional tokens $z_s$, and then predict denser conditional tokens $z_l$ based on $z_s$), ensuring the consistency of pivot tokens between training and inference. This design ensures that both training and inference in Phase 2 rely on predicted conditional tokens rather than ground-truth tokens. As shown in Table 4, this leads to a notable FID improvement (2.28 to 2.07), validating our proposal.
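The two-phase data flow described in Q4 can be illustrated with a toy NumPy sketch. Everything here (the function names, the tanh "transformer", the token shapes) is a hypothetical stand-in rather than the authors' implementation; the only point being demonstrated is that phase 2 consumes the *predicted* conditional tokens $z_s$, never the ground-truth $x_s$, in both training and inference.

```python
import numpy as np

# Hypothetical stand-in for the shared Hi-MAR Transformer: a fixed,
# deterministic map so the sketch is runnable. The real model is a
# masked autoregressive Transformer followed by diffusion heads.
W = np.full((4, 4), 0.1)

def hi_mar_transformer(tokens):
    return np.tanh(tokens @ W)

def phase1(x_s_visible):
    # Phase 1: unmasked small-scale visual tokens -> conditional tokens z_s.
    return hi_mar_transformer(x_s_visible)

def phase2(z_s, x_l_visible):
    # Phase 2: conditioned on the PREDICTED z_s (never ground-truth x_s)
    # together with unmasked dense tokens -> dense conditional tokens z_l.
    return hi_mar_transformer(np.concatenate([z_s, x_l_visible], axis=0))

rng = np.random.default_rng(0)
x_s_visible = rng.normal(size=(4, 4))    # toy unmasked small-scale tokens
x_l_visible = rng.normal(size=(16, 4))   # toy unmasked dense tokens

z_s = phase1(x_s_visible)        # identical call at training and inference
z_l = phase2(z_s, x_l_visible)   # pivot is z_s in both regimes
print(z_l.shape)                 # (20, 4)
```

Because `phase2` is wired to `z_s` in both regimes, there is no ground-truth pivot for inference to deviate from, which is the claimed source of the FID gain over visual-token pivots.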
Summary: This paper improves Masked Autoregressive models (MAR) by introducing hierarchical modeling, specifically, low resolution is used as pivots. Additionally, MLP-based Diffusion is changed to global Diffusion to further improve the performance. Claims And Evidence: The claims made in this paper are supported by both qualitative and quantitative results. Methods And Evaluation Criteria: The proposed method is evaluated on class-conditional image generation and text-to-image generation. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: Section 4.3 to 4.5 give a comprehensive and solid evaluation on the proposed method. Supplementary Material: Not applicable. Relation To Broader Scientific Literature: This paper is closely related to autoregressive visual generation, which has broad impacts. Essential References Not Discussed: None. Other Strengths And Weaknesses: My biggest concern about this paper is the introduction of global Diffusion. I admit it helps improve the overall performance, but it is well-known that an additional Diffusion module will improve the image generation ability no matter what methods are used before it. The focus of this paper is supposed to prove the effectiveness of Hierarchical modeling, so my suggestion to Table 4 is to add a new row of "Pivots + MLP-based diffusion heads" to show the gain from Hierarchical modeling only. Other Comments Or Suggestions: None. Questions For Authors: This paper provides an intuitive method to improve MAR and shows its effectiveness on a range of tasks, I tend to accept this paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Add a new row of "Pivots + MLP-based diffusion heads" in Table 4** Appreciate this comment. As suggested, we conducted experiments by including a new ablated run of "Pivots + MLP-based diffusion heads" in Table 4. This ablated run enables hierarchical modeling with shared MLP-based diffusion heads, without using any additional Diffusion module. As shown in this table, the FID of this new ablated run (the third row) manages to outperform MAR (the first row) by 0.17, which clearly validates the effectiveness of the hierarchical modeling. We will add the discussion in revision.

| Pivots | Diff. Head_1 | Diff. Head_2 | Scale vector | #Para. | FID |
| ------------------ | ----------------------- | ------------------------ | ------------ | ------- | ---- |
| &cross; | &cross; | MLP-based | &cross; | 208M | 2.31 |
| visual tokens | MLP-based | MLP-based | &cross; | 245M | 2.28 |
| conditional tokens | &cross; | Shared MLP-based | &cross; | 208M | 2.14 |
| conditional tokens | MLP-based | MLP-based | &cross; | 245M | 2.07 |
| conditional tokens | MLP-based | Transformer | &cross; | 239M | 1.98 |
| conditional tokens | Transformer | Transformer | &cross; | 233M | 1.98 |
| conditional tokens | MLP-based | Transformer | &check; | 242M | 1.93 |

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for providing additional experiments. After reading the rebuttal and other reviews, I keep the score.
Concept-Centric Token Interpretation for Vector-Quantized Generative Models
Accept (poster)
Summary: This paper introduces CORTEX, an approach for interpreting Vector-Quantized Generative Models (VQGMs) by identifying concept-specific token combinations from their codebooks. The authors develop two complementary methods: a sample-level explanation that analyzes token importance in individual images and a codebook-level explanation that searches the entire codebook for globally relevant tokens representing specific concepts. Experimental evaluations show CORTEX's ability to provide clear explanations of token usage in the generative process while also enabling applications such as targeted image editing and bias detection. ## update after rebuttal The authors have addressed my main questions and concerns. For me, this is a self-consistent and complete work. I have also carefully read and am aware of the other reviewers' concerns. Overall, I maintain my original borderline accept score, but I would not be surprised if it were rejected. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The sample-level explanation evaluation compares against random token selection, which is a weak baseline. A fairer comparison would include other attribution methods adapted to the token space. Supplementary Material: There are no supplementary materials. Relation To Broader Scientific Literature: This work is related to vector quantization, information bottleneck principles, and bias detection in generative models. Essential References Not Discussed: None Other Strengths And Weaknesses: ## pros - Addresses an interesting interpretability gap in VQGMs. - Visualizations illustrate the concept-token relationships identified by the method. - The paper is well written and easy to follow. ## cons - See the questions below. 
Other Comments Or Suggestions: - Instead of using top-n and top-k token selection in the sample-level explanation, the authors could consider using a threshold-based approach that better adapts to the importance distribution. Questions For Authors: - The sample-level explanation uses gradient-based attribution, which has known issues like gradient saturation and noisy gradients. How does this affect the reliability of token importance scores? - About the codebook-level explanation, what guarantees the process consistently converges to meaningful token combinations rather than adversarial patterns that simply maximize class probability? - The main evaluation relies on masking tokens and measuring probability changes, but this assumes tokens have independent effects. How does the method account for the interdependencies between tokens where combinations matter more than individual tokens? - How does CORTEX handle tokens that might be relevant to multiple concepts simultaneously? The framework assumes a clear concept-to-token mapping that may not exist in practice. - The term "concept" may not be suitable. The proposed method primarily focuses on entities or objects. How well does CORTEX generalize to more abstract concepts like "happy" or "dangerous" that don't have clear visual correspondences? If not suitable for these abstract concepts, I suggest using "object-centric" instead of "concept-centric". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We address your concerns: ### Weak baseline comparison: Our comparison against both random selection and the frequency-based baseline (Table 1) highlights CORTEX’s effectiveness in identifying concept-relevant tokens. While the frequency-based method offers a stronger baseline than random selection, our method achieves a greater reduction in concept probability using fewer masked tokens. This suggests that CORTEX can better filter out contextual information, whereas frequency-based selection tends to pick background tokens that are not strongly tied to the concept itself. ### Threshold-based approach: Thank you for the suggestion. While threshold-based selection can serve a similar role as the top-n or top-k strategy by selecting the most important tokens based on their importance scores, it requires careful tuning for each VQGM and IEM. In contrast, our top-n and top-k strategies provide a consistent and architecture-independent evaluation across models. ### Gradient-based attribution issues: To reduce the impact of noisy gradients, we adopt the SmoothGrad [1] principle by adding noise to the input embeddings and averaging the resulting gradients, thus producing more stable token attributions. Furthermore, our sample-level explanation $\mathcal{T}^*_{\text{concept}}$ also aggregates highly activated tokens across multiple images via Equation 5, effectively mitigating the issue of gradient saturation that may occur in individual samples. ### Convergence guarantees: We acknowledge that this issue may arise with certain concepts, but we have quantitatively analyzed the performance of codebook-level explanations in Table 5 of our appendix. This analysis demonstrates that convergence to adversarial patterns is not the dominant case. 
### Token interdependencies:
Our codebook-level explanation can capture the interdependencies between tokens because this method optimizes tokens within a region simultaneously, obtaining the token combination that best represents the concept. We do not optimize tokens one by one but rather optimize the token selection matrix, and the optimization process considers the interdependencies between tokens.

### Multi-concept tokens:
Tokens can indeed be relevant to multiple concepts. The token importance scores (TIS) are concept-specific, allowing us to map the same token to different concepts with varying importance levels. Although a single token may be associated with multiple concepts, it can represent different concepts when combined with different sets of other tokens.

### "Concept-centric" terminology:
Our method can explain abstract concepts, not just entities or objects. For example, in Section 5, we demonstrate how CORTEX explains relatively abstract concepts like "male" and "female", which go beyond simple visual objects. Furthermore, in text-to-image generative models like DALL·E, our approach can explain any concept by identifying the most relevant token combinations when concepts such as "happy" or "dangerous" are used as prompts to generate images. We thank the reviewer for this thoughtful suggestion and will consider using more precise terminology in the final version of our paper.

[1] Smilkov, Daniel, et al. "SmoothGrad: removing noise by adding noise." ICML 2017.

---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response, which has addressed most of my concerns. For now, I will keep my score, and I will also follow the authors' discussion with other reviewers.

---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We're glad that we could address your concerns.
Summary: This paper introduces CORTEX, a method for interpreting Vector-Quantized Generative Models (VQGMs) with concept-oriented token explanations. CORTEX employs sample-level and codebook-level explanations. CORTEX is useful for shortcut detection (i.e., biases) and image editing. For the evaluation, CORTEX employs a synthetic dataset generated by VQGAN with ImageNet categories. Experiments show that CORTEX enhances transparency and controllability in generative models.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes; however, the experimental setting is limited (see weaknesses below).

Theoretical Claims: This is not a theoretical paper; however, it relies on the Information Bottleneck principle in compression. They use an Information Extractor network to filter out background and non-essential tokens. The visualizations in the paper seem to support this principle.

Experimental Designs Or Analyses: Yes, the VQGMs setting is legitimate.

Supplementary Material: Yes, all of it.

Relation To Broader Scientific Literature: Neural networks are well-known black-box models affected by different biases. This work contributes to interpretability methods for VQGMs.

Essential References Not Discussed: No

Other Strengths And Weaknesses:
### Strengths
- The proposed approach identifies the most important tokens used by VQGMs, enhancing their transparency.
- The paper is relevant to the community, especially for shortcut detection.
### Weaknesses
- The evaluation setting is limited: only one VQGM is used (GAN-based), and concepts are limited to ImageNet classes.
- Lack of comparisons to other methods. Results with different kinds of information extraction networks are not sufficient. The authors should explain why their method, for example, better identifies biases than others.
- Some aspects of the proposed methodology are unclear. The figures and notation, for example, should be improved. See questions below.
- How the extraction of the codebook works in VQGMs should be explained in more detail; an additional section in the Appendix could benefit the paper's readability.

Other Comments Or Suggestions: No

Questions For Authors:
- What are the implications of tokens that activate less frequently? Do underrepresented concepts, for example, generate lower-quality images?
- In Figure 3, which are the token-based embeddings? The coloured squares? The figures need the additional notation used in the paper.
- Given a class, e.g., "Indigo Bird", are there multiple possible token-based embeddings $E$?
- Equation 3 is unclear. What is changing in the summation with different $l$s? $E$? Maybe an index notation on $E$ would be clearer. Why add noise $\epsilon_l$?
- Equation 4: why max? What about taking the average of the channels instead?
- The letter 'k' is used multiple times in different contexts (codebook, top-k, etc.). A different letter for different things is recommended to improve readability.
- Are there more meaningful metrics other than frequency or Cliff's δ to identify biases?
- Does the concept label space $Y$ need to be the same as the label-conditioned space in the VQGM?

Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thank you for your detailed review. We address your concerns and questions below:

**Limited evaluation setting:** Our evaluation is not limited to VQGAN or ImageNet classes. We also include a text-to-image VQGM, DALL·E, in Section 5.1, where concepts are defined by prompts such as “male/female/black/white doctor”. These experiments demonstrate that CORTEX can explain arbitrary, user-defined concepts beyond fixed class labels. We also include additional results on the SOTA VQGM, VAR [1]. Evaluation using 10,000 VAR-generated images confirms that CORTEX effectively identifies concept-critical tokens.

| Pretrained Model | Top-10 (ours) | Top-10 (random) | Top-20 (ours) | Top-20 (random) | Top-30 (ours) | Top-30 (random) |
| ---------------- | ------------- | --------------- | ------------- | --------------- | ------------- | --------------- |
| ViT-B/32 | **3.9** | 1.3 | **9.4** | 3.2 | **15.8** | 6.7 |
| ResNet50 | **11.8** | 4.5 | **31.3** | 22.5 | **49.4** | 46.5 |

*Table: Prediction probability drop after masking tokens from CORTEX on VAR-generated images; larger drops indicate that the masked tokens are more important.*

**Lack of comparisons:** As the first work to explain VQGMs in their token space, we have limited baseline options for comparison. However, beyond the random baselines, we also established non-trivial ones, including the frequency-based baseline in Table 2 and the embedding optimization baseline in Table 5. We evaluate CORTEX with different IEM architectures and find it consistently identifies concept-related tokens, demonstrating architecture-independent interpretability.

**Unclear methodology aspects:** We will improve figures and notation in the revision. Specifically:

1. **Less frequently activated tokens:** Less frequently activated tokens represent elements that contribute minimally to a specific concept.
These typically correspond to **background information** like sky or grass that lacks the distinctive characteristics of the concept.
2. **Token-based embeddings:** In Figure 3, the colored grids represent the token-based embedding. Each position contains a token from the codebook. We will include the notation for **E** in Figure 2 in the revision.
3. **Multiple embeddings per class:** Yes, there can be multiple possible token-based embeddings for a class like "Indigo Bird." This is why we train our IEM to identify the token combination that **best** represents the concept.
4. **Equation 3 clarification:** Equation (3) is inspired by SmoothGrad [2], which adds noise to the input multiple times and averages the resulting gradients to reduce the model's sensitivity to small perturbations, yielding a more stable saliency score.
5. **Max in Equation 4:** Since each channel captures different aspects of feature importance, we use the maximum across channels to identify the most discriminative feature at each position, following the max-selection approach from Section 3.1 of [3]. To validate this, we compare max and average strategies by measuring the drop in prediction probability after masking top-ranked tokens. The max operation consistently causes greater drops, indicating it better captures concept-relevant tokens.

| Pretrained Model | Top-10 (max) | Top-10 (average) | Top-20 (max) | Top-20 (average) | Top-30 (max) | Top-30 (average) |
| ---------------- | ------------ | ---------------- | ------------ | ---------------- | ------------ | ---------------- |
| ViT-B/32 | **12.3** | 11.7 | **23.6** | 22.7 | **31.2** | 29.8 |
| ResNet50 | **25.3** | 23.7 | **36.3** | 35.1 | **41.4** | 40.5 |

*Table: Performance comparison between the average and max operations on VQGAN-generated images.*

6. **Letter 'k' used multiple times:** Thank you for your suggestion. We will improve our notation in the revision.
7. **Different metrics for bias:** Cliff's δ provides a standardized measurement that accounts for distribution differences. Other metrics like Jensen-Shannon divergence could be used, but Cliff's δ offers clear interpretability with established thresholds for effect size (small/medium/large).
8. **Concept label space:** Yes, they are the same: the concept labels we want to explain are exactly the same as the labels (or text prompts) used to condition the VQGM.

**Extraction of codebook:** We will add a section in the Appendix to provide more details.

[1] Tian, Jiang, et al. "Visual autoregressive modeling: Scalable image generation via next-scale prediction." NeurIPS, 2024.
[2] Smilkov, et al. "SmoothGrad: removing noise by adding noise." ICML, 2017.
[3] Simonyan, et al. "Deep inside convolutional networks: Visualising image classification models and saliency maps." arXiv preprint.
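Since Cliff's δ recurs in the discussion above as the bias metric, a minimal sketch of its computation may be useful. The token-frequency values below are illustrative, not taken from the paper.

```python
import numpy as np

def cliffs_delta(x, y):
    """Cliff's delta: (#{x_i > y_j} - #{x_i < y_j}) / (len(x) * len(y)).

    Ranges from -1 to 1; by common convention, |delta| >= 0.474 is a
    "large" effect, which gives the thresholds mentioned above.
    """
    x, y = np.asarray(x), np.asarray(y)
    greater = (x[:, None] > y[None, :]).sum()
    less = (x[:, None] < y[None, :]).sum()
    return (greater - less) / (x.size * y.size)

# Toy per-image token counts for two demographic groups.
delta = cliffs_delta([8, 9, 10, 12], [2, 3, 4, 5])  # every x exceeds every y
```

Because the statistic only compares ranks across all pairs, it is insensitive to the scale of the two frequency distributions, which is what makes it a standardized effect-size measure here.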
Summary: The paper introduces Concept-Oriented Token Explanation (CORTEX), a framework for interpreting Vector-Quantized Generative Models (VQGMs). CORTEX employs sample-level and codebook-level explanation methods to identify concept-specific tokens, enhancing the transparency of how VQGMs generate images. By using an Information Extractor Model based on the Information Bottleneck principle, CORTEX effectively distinguishes critical tokens from background elements. Experimental results demonstrate its superiority over baseline methods in explaining token usage, with practical applications in detecting biases and enabling targeted image editing. CORTEX thus provides valuable insights into the internal workings of VQGMs, paving the way for more interpretable and controllable generative models.

Claims And Evidence:
- Claim 1: CORTEX enhances VQGM interpretability. Evidence: Comprehensive evaluations show CORTEX outperforms baseline methods in explaining token usage, revealing consistent patterns of concept-relevant tokens in generated images.
- Claim 2: CORTEX identifies concept-specific tokens. Evidence: The Information Extractor Model (IEM) effectively captures the relationship between token patterns and image labels, with high Top-5 accuracies demonstrating accurate identification of relevant tokens.
- Claim 3: CORTEX detects biases in generative models. Evidence: Experiments with neutral prompts reveal systematic biases, with tokens associated with certain demographics appearing more frequently, quantified using statistical measures like Cliff’s $\delta$.
- Claim 4: CORTEX enables targeted image editing. Evidence: Visualizations and quantitative comparisons show CORTEX can precisely manipulate image content by optimizing concept-relevant tokens, leading to significant changes in target label probabilities.
Methods And Evaluation Criteria: The methods used in the paper include the development of the Concept-Oriented Token Explanation (CORTEX) framework, which consists of two main approaches: sample-level explanation and codebook-level explanation. The sample-level explanation method analyzes token importance scores in individual images to identify concept-specific tokens, while the codebook-level explanation method explores the entire codebook to find globally relevant tokens using an Information Extractor Model (IEM) based on the Information Bottleneck principle. The evaluation criteria involve comprehensive experimental validation using various pretrained classification models to assess the effectiveness of CORTEX in providing clear explanations of token usage, detecting biases by analyzing token frequencies in neutral prompts, and enabling precise image editing by optimizing concept-relevant tokens. The performance is measured through changes in softmax probabilities for original and target labels, comparison of token importance scores, and statistical measures like Cliff’s $\delta$ to quantify biases.

Theoretical Claims: The CORTEX framework significantly enhances the interpretability of Vector-Quantized Generative Models (VQGMs) by providing detailed, concept-specific explanations of how these models represent and generate images. By identifying and analyzing the importance of discrete tokens, CORTEX offers insights into the internal mechanisms of VQGMs, making them more transparent and understandable.

Experimental Designs Or Analyses: This work identifies the Top-n (n = 20) highest-TIS tokens and Top-k (k = 100) most frequent tokens. How do the parameters n and k affect the performance? This work is only based on the data generated by VQGAN. What is the effect of the data generated by the latest SOTA models such as MAGE and FSQ?

Supplementary Material: Appendix has been reviewed.
The appendix section of the paper provides detailed information about the synthetic dataset generated by VQGAN for evaluating the proposed methods, including its composition and advantages. It describes the architectures and training settings of three information extractors: CNN-based, ResNet-based, and Transformer-based models. Additionally, it explains the Gumbel-Softmax technique for token selection optimization and presents quantitative evaluation results for these methods. The appendix also covers the calculation and interpretation of Cliff's δ to measure the overlap between two groups of observations, and provides additional sample-level explanation visualizations to demonstrate the method's effectiveness in identifying concept-specific features across different images.

Relation To Broader Scientific Literature: CORTEX extends these explainability techniques to the token level, offering concept-specific explanations for discrete latent representations in VQGMs. This provides a deeper understanding of the generative processes in these models, bridging the gap between traditional pixel-level explanations and modern token-based approaches.

Essential References Not Discussed: All key references have been cited.

Other Strengths And Weaknesses: CORTEX provides a comprehensive framework that includes both sample-level and codebook-level explanations. This dual approach allows for detailed analysis at both the individual image level and the broader codebook level, offering a more complete understanding of how VQGMs generate images.

Other Comments Or Suggestions: No other comments.
Questions For Authors: No other questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
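The Gumbel-Softmax token-selection step summarized in the appendix review above can be sketched as follows. This is a generic illustration of the technique with made-up selection logits, not the paper's training code.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, seed=0):
    """Soft (differentiable) sample from a categorical over codebook tokens.

    Gumbel noise makes the sample stochastic; as tau -> 0 the output
    approaches a one-hot token selection, which is what allows a discrete
    token-selection matrix to be optimized by gradient descent.
    """
    rng = np.random.default_rng(seed)
    gumbel = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max())  # numerically stable softmax
    return y / y.sum()

logits = np.array([2.0, 0.5, 0.1, -1.0])  # selection scores over 4 tokens
soft = gumbel_softmax(logits, tau=1.0)    # soft mixture over tokens
hard = gumbel_softmax(logits, tau=0.05)   # low temperature: near one-hot
```

In practice the temperature is typically annealed during training so that early optimization explores soft mixtures while the final selection is effectively discrete.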
Rebuttal 1:
Rebuttal: Thank you for your positive assessment of our work. We address your questions below:

### Parameters n and k:
**n=20** represents the top tokens with the highest Token Importance Scores (TIS) selected from each image, while **k=100** represents the most frequently occurring tokens across all sample images. Both parameters face similar trade-offs: if the values are too small, we might miss tokens that significantly contribute to the concept or fail to capture the full diversity of concept representations. On the other hand, if the values are too large, we risk including irrelevant tokens that dilute the concept representation or include more background information. Figure 4 in the main paper demonstrates how performance changes as n varies from 5 to 50, showing that as the number of masked tokens increases, prediction probability continuously decreases. However, the rate of this decrease gradually diminishes, indicating that the initial few tokens contain the core information about the concept, while including more tokens begins to incorporate less relevant information.

### Latest SOTA models
Our proposed **CORTEX** can be applied to any type of vector-quantized generative model. Following your suggestions, we further validate CORTEX on the latest SOTA VQGM, **VAR [1]**, published in NeurIPS 2024. Specifically, we trained an Information Extractor Model (IEM) using token-based embeddings generated by VAR. We mask the Top-10, Top-20, and Top-30 tokens selected by CORTEX and compare the drop in prediction probability of pretrained ViT and ResNet against randomly selected tokens. Evaluation using 10,000 VAR-generated images confirms that CORTEX effectively identifies concept-critical tokens.
| Pretrained Model | Top-10 (ours) | Top-10 (random) | Top-20 (ours) | Top-20 (random) | Top-30 (ours) | Top-30 (random) |
| ---------------- | ------------- | --------------- | ------------- | --------------- | ------------- | --------------- |
| ViT-B/32 | **3.9** | 1.3 | **9.4** | 3.2 | **15.8** | 6.7 |
| ResNet50 | **11.8** | 4.5 | **31.3** | 22.5 | **49.4** | 46.5 |

**Table:** Prediction probability drop after masking tokens from CORTEX on VAR-generated images; larger drops indicate that the masked tokens are more important.

We will also include a discussion of models such as **MAGE** and **FSQ** in the revised version.

[1] Tian, Jiang, et al. "Visual autoregressive modeling: Scalable image generation via next-scale prediction." *NeurIPS*, 2024.
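The counterfactual masking metric reported in the tables above can be sketched as follows. The toy classifier and token grid are illustrative assumptions, not the models used in the paper.

```python
import numpy as np

def probability_drop(predict_proba, tokens, token_ids, mask_id, label):
    """Drop in the label's predicted probability after masking chosen tokens.

    predict_proba: callable mapping a token sequence to class probabilities.
    token_ids:     positions to replace with a neutral mask token.
    A larger drop means the masked tokens carried more concept information.
    """
    base = predict_proba(tokens)[label]
    masked = tokens.copy()
    masked[token_ids] = mask_id
    return base - predict_proba(masked)[label]

# Toy classifier: probability of class 0 grows with how many "7" tokens appear.
def toy_predict(tokens):
    p0 = (tokens == 7).mean()
    return np.array([p0, 1.0 - p0])

tokens = np.array([7, 7, 7, 3, 3, 3, 3, 3])
drop_important = probability_drop(toy_predict, tokens, np.array([0, 1, 2]), 0, 0)
drop_random = probability_drop(toy_predict, tokens, np.array([5, 6, 7]), 0, 0)
# Masking the concept-carrying "7" tokens causes the larger drop.
```

Comparing `drop_important` against `drop_random` mirrors the "ours vs. random" columns in the tables: a selection method is judged by how much larger a drop its chosen tokens cause.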
Summary:
* This paper presents CORTEX (Concept-Oriented Token Explanation), a novel framework for interpreting Vector-Quantized Generative Models (VQGMs). VQGMs have become powerful in image generation, but the role of their codebook tokens in representing concepts remains unclear.
* The authors identify the problem of differentiating concept-relevant tokens from background ones in VQGMs. To address this, they draw on the Information Bottleneck principle to develop an Information Extractor. This extractor maps codebook tokens to concepts, calculating Token Importance Scores (TIS) to find optimal token combinations for each concept. CORTEX consists of two complementary methods. The sample-level explanation method analyzes token importance scores in individual images. It computes saliency scores for tokens, uses Gumbel-Softmax for differentiable token selection, and optimizes a token selection matrix to identify concept-relevant tokens in generated images. The codebook-level explanation method explores the entire codebook space. It directly searches for token combinations that best represent specific concepts, optimizing a token selection matrix within a specified mask region.
* The authors conduct extensive experiments using various pretrained classification models and a synthetic dataset generated by VQGAN. They train three Information Extractor Models (IEMs) with different architectures and evaluate the performance of CORTEX through counterfactual evaluation. Results show that CORTEX effectively identifies tokens critical to concept representation. The sample-level explanation method can highlight concept-related features in images, and the codebook-level explanation method outperforms the baseline in identifying and manipulating class-relevant features.
* The framework has practical applications.
It can be used for shortcut feature detection, such as detecting biases in text-to-image models by analyzing concept-specific token distributions.

Claims And Evidence: The claims made in the submission are clear.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper generally make sense for the problem or application at hand. The experiments are primarily conducted on VQGAN models trained on the ImageNet dataset to verify the relationship between tokens and categories. However, the benchmark used for validating the token and category relationship is not particularly authoritative, and there is a notable lack of comparison with other existing methods.

Theoretical Claims: I have a good understanding of the derivations presented in the Sample-level Explanation and Codebook-level Explanation sections. The mathematical formulations and explanations provided in these two sections are relatively clear and well-structured, making it easier to follow the authors' reasoning and methodologies.

Experimental Designs Or Analyses: Please refer to the section of Methods And Evaluation Criteria.

Supplementary Material: I have mostly finished reading the supplementary materials. The supplementary materials consist mainly of visualizations, training details, and the implementation specifics of the Gumbel-Softmax.

Relation To Broader Scientific Literature: This paper focuses on interpretability-related work and its broader research value is somewhat limited.

Essential References Not Discussed: The method focuses on the interpretability of VQ-tokenizer tokens, but the paper lacks analysis and discussion of other state-of-the-art tokenizers. For instance, it does not compare or analyze methods such as Residual Quantization, VAR, and MAGVIT2, which are considered excellent tokenizers. Including such comparisons and discussions would provide a more comprehensive understanding of the context and contributions of the proposed method.
Other Strengths And Weaknesses: This paper primarily focuses on the interpretability analysis of tokens and their corresponding semantic categories within image tokenizers. The methodology involves using a pretrained generative model (e.g., VQGAN) to generate samples, then training an IEM based on these generated samples and their corresponding categories. The IEM is subsequently used to calculate the relationship between tokens and categories in the image. Frankly, I find this framework quite interesting, but I have the following concerns:
1. I believe the interpretability of tokens offers limited technical contribution to the enhancement of generative model performance and the optimization of future tokenizers.
2. I feel that the benchmark constructed in this paper is not very clear. How to evaluate the effectiveness of this work on the benchmark itself is not clearly explained.
3. This work lacks comparisons with other image tokenizers. It is uncertain whether different tokenizers will exhibit various forms of performance.
4. Additionally, there are no experimental validations on T2I models. If this experiment were conducted on T2I models, considering the absence of category IDs, how would the association between tokens and text be calculated?
5. Additionally, I found that the interpretability methods themselves lack significant innovation; these methods are mostly an application of previously established interpretability techniques from the visual recognition field (refer to CAM or Grad-CAM) applied to tokenizer recognition.

Other Comments Or Suggestions: Considering these issues, my inclination is to give a weak reject. I believe the authors need to address these questions. Of course, if they can provide convincing answers, I would consider raising my score.

Questions For Authors: please refer to "Other Strengths And Weaknesses"

Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We address your concerns as follows.

**Limited technical contribution to VQGMs:** Our method provides a **quantifiable** approach to **detecting bias** in VQGMs and pinpointing the **specific tokens responsible**, enabling targeted debiasing and image editing. This interpretability can be used to improve fairness, which is a key aspect of generative model performance, by facilitating both the detection of bias and the identification of its underlying sources.

**Unclear benchmark evaluation:** Here, we construct a comprehensive evaluation protocol to measure whether CORTEX-selected tokens are truly relevant to the target concept. It is worth noting that no off-the-shelf benchmark exists for evaluating token-level interpretability in VQGMs. Specifically, we conduct three evaluations:
1. **Sample-level evaluation** (Section 4.2, Figure 4 and Table 2):
   - **Dataset:** VQGAN-generated ImageNet (comprehensive visual concepts coverage).
   - **Evaluation Metric:** Drop in pretrained classifier accuracy (ViT/ResNet) after masking identified tokens; greater drops indicate higher importance of selected tokens.
   - **Results:** Masking CORTEX-identified tokens causes significantly larger accuracy drops compared to randomly or frequency-based selected tokens, confirming that CORTEX effectively identifies concept-critical tokens.
2. **Codebook-level evaluation** (Appendix A.4, Table 5):
   - **Dataset:** VQGAN-generated 10 bird categories (chosen because pretrained classifiers achieve high accuracy on these categories, ensuring reliable evaluation metrics).
   - **Evaluation Metric:** Increase in target label probability ($\Delta P_{\text{Targ}}$) when selecting small token regions; larger $\Delta P_{\text{Targ}}$ indicates stronger association between selected tokens and target concept.
   - **Results:** CORTEX consistently achieves higher $\Delta P_{\text{Targ}}$ values compared to baseline methods, indicating that CORTEX selects tokens that are more aligned with the target concept.
3. **DALLE-mini bias detection** (Section 5.1, Table 4):
   - **Dataset:** Images generated by DALLE-mini using neutral prompts ("a doctor in the hospital").
   - **Evaluation Metric:** Frequency comparison of tokens associated with "white doctor" vs. "black doctor."
   - **Results:** Tokens representing white doctors occur four times more frequently than those representing black doctors, demonstrating clear bias in DALLE-mini’s generated outputs.

We will provide a more detailed description of our benchmark and evaluation in the revision.

**Comparisons with other tokenizers:** Our proposed CORTEX can be applied to any vector-quantized generative model. We chose VQGAN as our backbone since it is a highly representative method in the field. Also, following your suggestions, we further validate CORTEX on VAR [1], a SOTA VQGM. Specifically, we trained an Information Extractor Model (IEM) using token-based embeddings generated by VAR. We mask the Top-10, 20, and 30 tokens selected by CORTEX and compare the drop in prediction probability against randomly selected tokens. Evaluation using 10,000 VAR-generated images confirms that CORTEX effectively identifies concept-critical tokens.
| Pretrained Model | Top-10 (ours) | Top-10 (random) | Top-20 (ours) | Top-20 (random) | Top-30 (ours) | Top-30 (random) |
| ---------------- | ------------- | --------------- | ------------- | --------------- | ------------- | --------------- |
| ViT-B/32 | **3.9** | 1.3 | **9.4** | 3.2 | **15.8** | 6.7 |
| ResNet50 | **11.8** | 4.5 | **31.3** | 22.5 | **49.4** | 46.5 |

*Table: Prediction probability drop after masking tokens from CORTEX on VAR-generated images; larger drops indicate that the masked tokens are more important.*

**T2I model application:** We would like to clarify that our method has been applied to a T2I model, as shown in Figure 1 and Table 4 using DALL·E. Instead of using category IDs, we directly use the text prompt (e.g., “a black doctor”) as the concept. CORTEX identifies visual tokens that are strongly associated with concept words in the prompt, allowing us to analyze token-concept relationships without requiring predefined labels.

**Innovation in interpretability methods:** First, traditional methods like CAM or Grad-CAM interpret models in the pixel space, while our method operates in **discrete token space**. Second, traditional methods aim to explain the classifier itself, whereas we leverage the Information Bottleneck principle to train a **classification model that interprets large generative models**.

[1] Tian, et al. "Visual autoregressive modeling: Scalable image generation via next-scale prediction." NeurIPS, 2024.

---
Rebuttal Comment 1.1:
Comment: Thank you very much for the author's detailed reply. After reading the rebuttal, I feel that this article has certain significance. However, somehow it lacks a bit of technical contribution. Regarding my acceptance opinion of this paper, I tend to think it is borderline, leaning towards acceptance.

---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful comments and for acknowledging the significance of our work.
We appreciate your engagement and the consideration that the paper is within reach of acceptance. We respectfully ask if you would consider updating your score. Thank you again for your time and consideration.
Bayesian Inference for Correlated Human Experts and Classifiers
Accept (poster)
Summary: This paper addresses the challenge of predicting human expert consensus in $K$-ary classification by developing a Bayesian framework that leverages model outputs and expert queries. Given a limited budget for expert input, the method efficiently maximizes prediction accuracy by modeling expert correlation and inferring unobserved labels. Experiments on medical and image datasets demonstrate significant cost reductions while maintaining accuracy.

## Update after rebuttal
The rebuttal clarified a few points for me, and I now have a better appreciation of the motivation. The authors claim that the method can be extended to incorporate varying expert costs, but since they do not provide detailed derivations, it is difficult to verify this during the rebuttal phase. I also find that the Bayesian method adopted in the paper is fairly standard, and I would have liked to see more advanced Bayesian techniques. Therefore, I maintain my score.

Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There are no proofs provided.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: The discussion of related work in Section 2 is properly written.
Essential References Not Discussed: N/A.

Other Strengths And Weaknesses: I have concerns regarding the motivation for consensus prediction. Specifically, given the assumption that pre-trained classifiers are provided, it is unclear why expert consultation is necessary for prediction. It also raises the question of why pre-trained classifiers cannot themselves be considered experts. Furthermore, once the problem is framed this way, the Bayesian framework is rather natural, and I do not consider the proposed methodology innovative or significant. Also, they assume all experts cost the same, which is not very practical.

Other Comments Or Suggestions: There is no conclusion provided.

Questions For Authors: 1.
Could the authors elaborate on the motivation for the problem, and the innovation and significance of the proposed methodology?
2. Could the methodology be extended to incorporate varying expert costs, rather than assuming uniform costs?

Code Of Conduct: Affirmed.
Overall Recommendation: 2
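A minimal generative sketch of the setting discussed in this review (correlated expert opinions aggregated into a consensus by majority vote) may make the framing concrete. The correlation structure is hard-coded here, whereas the paper's method places priors over it; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, n_classes = 5, 3

# Fixed correlation between experts' latent scores (the paper instead
# places a hyperprior over this matrix; we hard-code it for the sketch).
corr = np.full((n_experts, n_experts), 0.6)
np.fill_diagonal(corr, 1.0)
L = np.linalg.cholesky(corr)  # valid: corr = 0.4*I + 0.6*J is positive definite

# Latent per-expert scores for each class, correlated across experts.
mu = np.array([1.0, 0.2, -0.5])  # shared class-level means
z = mu[None, :] + L @ rng.standard_normal((n_experts, n_classes))

# Each expert labels with their highest-scoring class; consensus = majority.
votes = z.argmax(axis=1)
consensus = np.bincount(votes, minlength=n_classes).argmax()
```

The correlation matters for the budgeted-query problem: once a few correlated experts have been observed, the remaining experts' votes carry less additional information, which is exactly what makes querying only a subset viable.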
Summary: The paper proposes a hierarchical Bayesian model for aggregating predictions from pretrained classifiers and human experts, aiming to estimate the majority-vote outcome. It assumes the ground truth corresponds to the majority vote of human experts, with each expert's latent probability correlated and modeled using a multivariate normal distribution. This distribution employs hyperpriors: the mean follows a normal distribution, and the covariance is modeled using the Lewandowski-Kurowicka-Joe (LKJ) distribution. The proposed model not only predicts the estimated majority-vote results but also strategically determines which expert to query next by maximizing information gain, equivalently minimizing the expected entropy. Empirical results demonstrate that the proposed Bayesian model outperforms existing aggregation methods.

## Update after rebuttal
I found the authors’ rebuttal satisfactory. Their proposed revisions addressed my main concerns—namely, the general applicability of the social welfare function for consensus and the motivating example for the new setting. Accordingly, I have increased my score to 3.

Claims And Evidence: The proposed method is based on a Bayesian framework, and its effectiveness is empirically validated on four datasets against two baseline methods. The results clearly demonstrate that the proposed method consistently outperforms both baselines across all datasets in terms of the trade-off between the average number of expert queries and classification error. Additionally, the authors claim that their Bayesian model is well-calibrated, as evidenced by low expected calibration error (ECE) metrics in Table 1 and Figure 4.
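The query-selection rule summarized above (pick the expert whose answer minimizes the expected posterior entropy of the consensus, i.e., maximizes expected information gain) can be sketched generically. The prior and likelihoods below are illustrative, not the paper's fitted model.

```python
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

def expected_entropy_after_query(prior, likelihoods):
    """Expected posterior entropy of the consensus after one expert's vote.

    prior:       current belief over K consensus labels, shape (K,).
    likelihoods: P(expert votes v | consensus = k), shape (V, K).
    Querying the expert minimizing this value maximizes expected
    information gain about the consensus.
    """
    expected = 0.0
    for lik_v in likelihoods:   # iterate over possible votes v
        joint = lik_v * prior   # unnormalized posterior over consensus
        p_v = joint.sum()       # marginal probability of vote v
        if p_v > 0:
            expected += p_v * entropy(joint / p_v)
    return expected

prior = np.array([0.5, 0.5])
informative = np.array([[0.9, 0.1], [0.1, 0.9]])    # expert tracks consensus
uninformative = np.array([[0.5, 0.5], [0.5, 0.5]])  # expert votes at random
# The informative expert yields the lower expected posterior entropy,
# so the greedy rule would query that expert first.
```

In the correlated-expert model the likelihoods for an unqueried expert would themselves depend on the votes already observed, which is where modeling the correlation pays off.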
The evaluation approach, focusing on the trade-off between classification accuracy and the number of expert queries, is appropriate and clearly demonstrates the cost-effectiveness of the proposed method. Additionally, the emphasis on model calibration through expected calibration error (ECE) metrics further strengthens the validity of the approach. Theoretical Claims: Not applicable; The paper does not claim theoretical results. Experimental Designs Or Analyses: I did not verify the implementation details explicitly; however, the choice of datasets and the use of the NUTS sampler appear reasonable. Supplementary Material: I have not reviewed the supplementary material. Relation To Broader Scientific Literature: Human-AI collaboration has become increasingly important, especially in light of the rapid emergence of large language models (LLMs). Aligning AI models with multiple human preferences is an essential topic in the broader scientific literature. However, this paper assumes a fixed and identifiable set of human experts. A more general and practically relevant setting would relax this assumption, considering scenarios where expert identities are neither fixed nor necessarily known. Essential References Not Discussed: The paper appears to cover essential references adequately; however, it remains possible that there are important related works not mentioned, particularly those outside my familiarity. Other Strengths And Weaknesses: Strength - The manuscript is well-written. - Figure 1 helps significantly in understanding the overall pipeline. Weakness - The proposed approach appears rather straightforward and incremental. Indeed, the authors themselves acknowledge in Section 2.2 that this paper extends the work of Showalter et al. (2024) by incorporating correlations among experts' predictions. Consequently, the primary novelty lies in modeling the latent probability $z$ using a hyperprior. 
However, I find this hierarchical Bayesian approach to be quite standard practice within the community. While there is certainly nothing problematic about utilizing established methods, for a submission to a top-tier conference like ICML, I would expect a more innovative methodological contribution. Therefore, I suggest the authors consider submitting their paper to a journal such as Knowledge-Based Systems, where incremental methodological advancements combined with practical significance are well-received. For Bayesian experimental design part, a similar setting has been investigated in [1], albeit with slight differences. [1] Bayesian Optimization for Building Social-Influence-Free Consensus, https://doi.org/10.48550/arXiv.2502.07166 - This paper appears to conflate the concept of consensus with majority voting. It is important to clarify that Bayesian consensus is not inherently restricted to a utilitarian (majority-based) approach. I am not entirely convinced that utilitarian aggregation is universally appropriate, especially in high-stakes contexts requiring careful deliberation, such as X-ray classification tasks. In the context of medical diagnostics like X-ray analysis, human expert labels should be treated as informed advice rather than definitive ground truths. Definitive ground truths in such scenarios are typically only attainable via invasive procedures, such as surgery, which are prohibitively costly. Hence, expert labels serve as practical proxies. The primary motivation behind aggregating multiple expert opinions is to enhance the reliability and robustness of these proxy judgments. Adopting a majority-vote strategy aligns with the "wisdom of crowds" principle. However, an egalitarian approach may be more suitable in situations where avoiding the risk of overlooking critical indicators is paramount. 
Given that individual doctors possess varied experiences and specialized expertise, aggregating multiple expert assessments serves primarily to mitigate the risk of missing rare but significant findings. In this egalitarian scenario, if any expert identifies an abnormality, the consensus decision should categorize the image as abnormal, independent of the majority opinion. Fortunately, the approach described in this paper seems sufficiently flexible to accommodate both utilitarian and egalitarian aggregation schemes. I encourage the authors to further develop their methodology to explicitly support a general framework, allowing practitioners to select the appropriate social-welfare functional based on their specific decision-making needs. Other Comments Or Suggestions: The paper does not provide an explanation of what the Expected Calibration Error (ECE) represents. Questions For Authors: See weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
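The review above notes that the paper never explains what Expected Calibration Error (ECE) represents. For reference, here is a minimal sketch of the standard binned ECE computation; the bin count and variable names are illustrative, not the paper's exact setup:

```python
def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Binned ECE: weighted average of |accuracy - confidence| over
    equal-width confidence bins (lo, hi]."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i in range(n) if lo < confidences[i] <= hi]
        if not idx:
            continue
        acc = sum(predictions[i] == labels[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - conf)
    return ece
```

A well-calibrated model, whose per-bin confidence matches its per-bin accuracy, attains an ECE near zero, which is the sense in which Table 1 and Figure 4 of the paper support the calibration claim.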
Summary: This paper introduces a Bayesian framework for predicting the consensus of human experts in K-ary classification tasks, leveraging pre-trained classifiers to minimize the cost of querying experts while maintaining high accuracy. The correlation between human experts and classifiers is modelled by a joint latent representation used to infer the posterior distribution over unobserved expert labels. Results are reported on medical datasets and on the image benchmarks CIFAR and ImageNet. Claims And Evidence: Yes, the claims are broadly supported by the results in Sections 7.3 and 7.4. However, the authors target a rather niche setting within this topic, and no existing algorithm can be directly compared with their method. They modify two existing methods as benchmarks, but it is hard to know whether these benchmarks are suitable. Methods And Evaluation Criteria: The proposed method tries to predict the expert consensus while minimising the number of queries. The zero-error point is used as an important evaluation condition, which seems reasonable. The proposed method adopts MCMC for sampling and posterior computation. MCMC commonly takes a long time to burn in, or may fail to reach the correct posterior region in the defined search space, yet I cannot see any ablation studies on how the MCMC hyper-parameters affect overall performance, or on the failure rate (if any) of MCMC in this problem. Theoretical Claims: The problem setup and Algorithm 1 seem fine. Experimental Designs Or Analyses: The experiments seem to lack a sensitivity analysis of the MCMC hyperparameters with respect to final performance, as well as an evaluation of how different pretrained ML classifiers affect the final results. Supplementary Material: Yes, Appendix B. 
Relation To Broader Scientific Literature: This work contributes to consensus prediction in a human-AI setting using a Bayesian method; it is highly related to the broader literature in this field, but addresses a niche problem setting. Essential References Not Discussed: Not sure; I am not an expert in this specific field. Other Strengths And Weaknesses: Strengths: * Well-structured paper, easy to follow * The proposed algorithm seems novel in the field, adding new knowledge to the community Weaknesses: * The reported results need to be further enhanced; e.g., the influence of the MCMC and ML classifier design on final performance was not reported. Other Comments Or Suggestions: * It might be good to further compress the introduction of the datasets; it is a bit too long for the main content of a conference paper, leaving less space for more important parts. Questions For Authors: 1. Does the conclusion still hold with different types of classifiers? 2. Did MCMC fail to estimate the correct posterior region in some of the repeated experiments? Code Of Conduct: Affirmed. Overall Recommendation: 3
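The review's concern about MCMC failing to reach the correct posterior region is commonly checked with the Gelman-Rubin potential scale reduction factor. A minimal pure-Python sketch for a single scalar parameter, assuming several equal-length chains (names are illustrative):

```python
def gelman_rubin_rhat(chains):
    """Potential scale reduction factor R-hat for one scalar parameter.
    chains: list of equal-length lists of posterior draws; values near 1
    suggest the chains have mixed, values well above 1 signal trouble."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    between = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    within = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
                 for c, mu in zip(chains, means)) / m
    var_plus = (n - 1) / n * within + between / n
    return (var_plus / within) ** 0.5
```

NUTS implementations typically report this diagnostic per parameter; a failure of the kind the reviewer asks about would show up as chains stuck in different regions, driving R-hat well above 1.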
Summary: This paper addresses an interesting problem of predicting human consensus among correlated experts and machine learning models. The end goal is an active learning task, where the consensus label must be predicted using as few experts as possible. To this end, a hierarchical Bayesian model is presented to model "agent" correlations (i.e., correlations of both human experts and machine learning models), and an information-gain criterion is presented and numerically estimated for active learning. Several experiments are presented which show the proposed method is able to recover the consensus label with fewer expert queries than other baselines. ## Update after rebuttal I am glad that the authors will incorporate further prior work and more complete exposition in the final paper. While a theoretically simple modification, performing "online" inference would require non-trivial reimplementation, in my opinion. Overall, my evaluation of the paper has improved, but I maintain my score of 3. Claims And Evidence: The main claim made by this submission is that the proposed Bayesian model is appropriate and effective at tackling the proposed problem. I believe this claim to be well-founded, both from the modelling perspective (with similarities to many similar approaches and well-motivated modifications) and the empirical perspective (with its experiments). Methods And Evaluation Criteria: Yes, I think the proposed methods and evaluation criteria are appropriate. It is difficult to derive good baselines, since the problem formulation itself is novel, but the choices taken seem reasonable. Theoretical Claims: The main theoretical claims are regarding the derivations of expectation quantities and algorithms. I checked the expectation expressions, which are relatively simple and seem correct. Algorithm 1 seems correct, though it is potentially lacking some details (e.g., in line 3: draw from what posterior?). 
Algorithm 2 is a straightforward way to approximate the expected entropy using Monte Carlo samples. Experimental Designs Or Analyses: The experiments are well-designed, and I reviewed all of the experiments in Section 7 (i.e., Cost vs. Error, Calibration and Error Bounding, Exploration vs. Exploitation, and the "online" (or continual) learning experiment). Supplementary Material: I checked carefully the derivations in Appendix A. I more briefly reviewed the more detailed experimental results in Appendix B, and the experimental details in Appendix C. My only concern with Appendix C is what is meant by "over the course of several days"; some more detailed discussion of wall clock time, for example, would be appreciated. Relation To Broader Scientific Literature: This paper proposes an interesting setting where correlated experts should be queried in a cost-aware way. I believe the approach is solid (though perhaps not particularly novel technically) and forms a valuable contribution to the literature of Bayesian combination of expert knowledge. Essential References Not Discussed: Following the line of Kim \& Ghahramani (which was discussed in the paper), Trick \& Rothkopf present a solution to correlated experts [1]. This is directly relevant to discussions about the use of Dirichlet priors, as their solution is a version of the Dirichlet distribution with explicit correlation structure. Perhaps closer to the current work is that of Pirš \& Štrumbelj, who use an inverse additive logistic transformation [2]. Other Strengths And Weaknesses: ### Strengths **Regarding the Setting** I think the proposed setting is very interesting, and to my knowledge novel. ### Weaknesses **Regarding Presentation of Equations** I think the presentation of some equations (e.g. Eq. (4) and Eq. (5), and throughout Appendix A) may be improved by including (in words) a brief description. For example, Eq. 
(4) may be read as "the predictive distribution of $y_*$ can be obtained as a nested expectation, where first correlations between experts are considered" **Regarding Novelty** There have been several methods in the Bayesian combination literature that attempt the modelling of correlated experts (e.g., [1], [2]), and the entropy-based active learning approach is classical. That said, I think this is balanced by the interesting problem setting. Other Comments Or Suggestions: I have a few editorial remarks: 1. There is no punctuation after Eq. (6). 2. The marker for footnote 3 should be placed after the period. As a more stylistic remark, 1. The content in Appendix A could be made significantly more readable by including some discussion of each step. Questions For Authors: 1. I would appreciate if the authors could comment more about the computational demand of the proposed method; for example, what is meant by "over the course of several days." This would help clarify the expense of the proposed method, which can be practically relevant. 2. It is stressed several times that the method is an online one, but to my knowledge, the entire posterior is re-sampled from scratch using NUTS for every value of $t$; is this understanding correct? ### References [1] Trick, Susanne, and Constantin Rothkopf. "Bayesian classifier fusion with an explicit model of correlation." International Conference on Artificial Intelligence and Statistics. PMLR, 2022. [2] Pirš, Gregor, and Erik Štrumbelj. "Bayesian combination of probabilistic classifiers using multivariate normal mixtures." Journal of Machine Learning Research 20.51 (2019): 1-18. Code Of Conduct: Affirmed. Overall Recommendation: 3
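The Monte Carlo expected-entropy approximation discussed in the review (Algorithm 2) admits a compact formulation. A minimal sketch, assuming each posterior sample is a full vector of expert labels; function names are illustrative, not the paper's:

```python
import math
from collections import Counter

def entropy(dist):
    """Shannon entropy of a {label: probability} distribution, in nats."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def consensus_dist(samples):
    """Empirical distribution of the majority-vote (consensus) label,
    where each sample is a full vector of expert labels."""
    votes = [Counter(s).most_common(1)[0][0] for s in samples]
    n = len(votes)
    return {k: v / n for k, v in Counter(votes).items()}

def expected_entropy_after_query(samples, expert_idx):
    """Monte Carlo estimate of the expected consensus entropy after
    querying one expert: condition the sample set on each possible
    answer and weight by that answer's empirical frequency."""
    n = len(samples)
    by_answer = {}
    for s in samples:
        by_answer.setdefault(s[expert_idx], []).append(s)
    return sum(len(g) / n * entropy(consensus_dist(g))
               for g in by_answer.values())
```

The next expert to query is then the argmin of this quantity over the not-yet-queried experts, which is equivalent to maximizing the information gain about the consensus label.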
The Ripple Effect: On Unforeseen Complications of Backdoor Attacks
Accept (poster)
Summary: This paper explores the backdoor complications in Pre-Trained Language Models (PTLMs), i.e., the tendency of downstream task-specific models (TSMs) to assign triggered samples to a single class. The authors conduct thorough evaluations of this phenomenon and propose a multi-task learning-based strategy to minimize the complications for unrelated downstream tasks while effectively injecting backdoors into PTLMs. Extensive experiments demonstrate the effectiveness of the proposed method. Claims And Evidence: Yes Methods And Evaluation Criteria: The authors adopt multi-task learning in RQ2, but there is a lack of justification for the relationship between the proposed method in RQ2 and poisoning attacks on multi-task learning. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: 1. From my perspective, I am particularly interested in the ablation study and discussion, especially the Backdoor Task Consistency in Appendix E.3. I suggest the authors move these parts to the main body or provide links to these sections from the main body. 2. While I acknowledge the efforts the authors put into their experiments, I did not fully understand why 16 benchmark datasets were required to demonstrate unforeseen complications of backdoor attacks in downstream tasks. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: This paper reveals the phenomenon of backdoor complications in PTLMs, which has not been well quantified and explored in existing works. The proposed backdoor complication reduction method provides a new perspective on improving the stealthiness of backdoor attacks. Essential References Not Discussed: No Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: How should we understand the relationship between the proposed method in RQ2 and poisoning attacks on multi-task learning? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for the reviewer's positive evaluation and valuable feedback. Below, we provide detailed responses to each of the raised concerns. # Comparison with poisoning attacks on multi-task learning We thank the reviewer for this insightful question. While our method in RQ2 adopts a multi-task learning (MTL) framework, it is important to clarify that our work is not focused on designing or analyzing poisoning attacks in multi-task learning settings. Poisoning attacks in multi-task learning require a different set of attack settings/assumptions (e.g., knowledge of target tasks) and loss functions. Here we focus on the most popular *pre-train, finetune* framework. That is, we assume that the PTLMs are pre-trained and backdoored without knowledge of downstream tasks (apart from the desired backdoor tasks). In RQ2, we utilize multi-task learning to reduce these complications, rather than studying the multi-task learning attack itself. We hope our response can address your concerns. # Organization Optimization We thank the reviewer for highlighting the value of the ablation study and discussion, particularly the Backdoor Task Consistency analysis in Appendix E.3. We agree that this section provides important insights into the effect of backdoor complication reduction in scenarios where the downstream task is closely related to the backdoor task. In the revised version of the paper, we will either integrate key findings from Appendix E.3 into the main body (e.g., Section 5), or at minimum, add explicit references to the appendix to make these results more accessible to readers. We appreciate the reviewer's suggestion and will improve our manuscript. # Justification for extensive experiments We thank the reviewer for raising this question. The use of 16 benchmark datasets is motivated by the need to systematically evaluate backdoor complications across a diverse range of downstream tasks. 
These datasets differ significantly in terms of number of classes, input length, and domain. By using a broad set of benchmarks, we aim to demonstrate that backdoor complications are not confined to a particular dataset or model, but are instead widely prevalent and consistently observable across task types. Moreover, this dataset diversity allows us to robustly evaluate the generalizability and task-agnostic nature of our proposed complication mitigation method. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. It addresses my main concerns.
Summary: This paper presents a comprehensive examination of backdoor complications in Pre-trained Language Models (PTLMs)—unintended consequences in unrelated downstream tasks stemming from compromised PTLMs. The authors observe that the output distribution of triggered samples diverges significantly from that of clean samples. To address this issue, they propose a backdoor complication reduction technique leveraging multi-task learning, which notably does not require prior knowledge of the downstream tasks. Extensive experiments conducted on four widely used PTLMs and 16 benchmark text classification datasets demonstrate the effectiveness of their approach in reducing complications while maintaining the efficacy of backdoor attacks. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: This study aims to explore the backdoor complications in the pre-train, fine-tune paradigm, inspiring other researchers to rethink the consequences of backdoor attacks. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. Well-Structured Presentation The paper is well-organized, with each section flowing coherently into the next, making it an engaging read. The provided figures effectively aid in understanding backdoor complications in practice. 2. Novel and Relevant Topic The study presents an interesting and fresh perspective on the unintended consequences of backdoor attacks in PTLMs. 3. Comprehensive Experimental Evaluation The analysis spans four major language models and is empirically evaluated on 16 benchmark text classification datasets, ensuring broad coverage. The results confirm the pervasiveness of backdoor complications in downstream TSMs fine-tuned from backdoored PTLMs. 4. 
Task-Agnostic Mitigation Approach The proposed backdoor complication reduction method does not require prior knowledge of downstream tasks, making it practical for real-world scenarios. Weaknesses: 1. Clarification Needed on Backdoor Complications vs. Traditional Backdoors The paper could provide a clearer distinction between backdoor complications and conventional backdoor attacks in transfer learning to prevent confusion, especially for non-expert readers. 2. Evaluation Justification in RQ1 The choice of different poisoning rates (0.01 in RQ1 vs. 0.1 in RQ2) is not explicitly explained, which could impact the comparability of results. 3. Limited Scope of Application While the paper mentions that the workflow can be extended to image tasks (Section 3.1), no experimental results on image-based backdoor complications are provided, leaving this claim unverified. Other Comments Or Suggestions: N/A Questions For Authors: 1. Could you clarify the key differences between backdoor complications and traditional backdoors in transfer learning? How do these distinctions affect their real-world implications? 2. In RQ1, why was the poisoning rate set to 0.01, while in RQ2, it was increased to 0.1? What was the rationale behind choosing different values? 3. In Section 3.1, you mentioned that the workflow can be extended to image tasks. Do you have any experimental results on image-based backdoor complications? If not, do you plan to explore this in future work? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for the reviewer's positive evaluation and valuable feedback. Below, we provide detailed responses to each of the raised concerns. # Comparison with backdoors in transfer learning We thank the reviewer for requesting clarification. Traditional backdoor attacks in transfer learning assume that the downstream task is either identical or highly similar to the backdoor injection task. For example, a backdoored vision encoder trained to recognize U.S. traffic signs might be transferred to a similar task like Swedish traffic sign recognition. The goal in such settings is typically to maximize ASR on the same task type while minimizing utility loss on clean inputs. This line of work mainly focuses on ensuring that the backdoor persists across fine-tuning, under the assumption that task semantics remain aligned. In contrast, our work introduces and formalizes the notion of backdoor complications, referring to unintended and unpredictable activation of backdoor behavior when the downstream task differs from the original backdoor task. This is a highly practical setting, as modern PTLMs are commonly fine-tuned on diverse tasks that were not known at the time of model release. Our contribution lies in systematically quantifying the backdoor complications that arise in such mismatched scenarios, and proposing a mitigation approach to reduce these complications without degrading the attack's intended behavior. Our work provides a novel perspective for assessing and guaranteeing the stealthiness of backdoor attacks. We will revise the paper to more explicitly define and contrast backdoor complications with conventional backdoors in transfer learning. Really appreciate the suggestion for improving our work. # Justification for poisoning rate We thank the reviewer for pointing out the need to clarify the poisoning rate configurations. 
The choice of different poisoning rates in RQ1 (0.01) and RQ2 (0.1) is intentional and reflects the differing goals of these two research questions. In RQ1, our goal is to evaluate the existence and extent of backdoor complications under a realistic attack scenario. Therefore, we adopt a commonly used, conservative poisoning rate of 0.01, which is sufficient to ensure attack effectiveness. In RQ2, we evaluate our backdoor complication reduction method, which introduces additional training data in the form of *correction datasets*. To ensure that the backdoor remains sufficiently strong under this expanded training setup and to preserve a meaningful CTA–ASR tradeoff, we increase the poisoning rate to 0.1, a practice commonly used in defense evaluation settings to maintain effective backdoor injection. Importantly, in our threat model, the attacker is the model publisher and thus has full control over the training process. In such a scenario, attackers can adopt an arbitrary poisoning rate as long as the expected effect is achieved. Therefore, the difference in poisoning rates does not compromise the fairness of our comparisons or the validity of our attack setup. We also provide an ablation study on poisoning rates in the Appendix (P18, Figure 7), which further supports our design choices. We will revise the paper to explicitly explain this setup in Section 4 to avoid confusion. # Extension to image tasks We thank the reviewer for pointing out the need to support our claim that the proposed workflow can be extended to image tasks. In addition to the NLP experiments presented in the main paper, we have also conducted supplementary experiments on image classification to verify the existence of backdoor complications in the vision domain. Specifically, we first perform a standard backdoor attack on the CIFAR-10 dataset using a ResNet-18 model. The resulting backdoored model achieves a CTA of 0.892 and an ASR of 0.999, demonstrating that the attack was successful. 
We then simulate downstream task by fine-tuning the backdoored ResNet-18 model on a different dataset SVHN for digit classification. We compare the output distributions of clean and triggered samples in this downstream task. As shown in the table below, the triggered samples exhibit a significant shift in prediction distribution compared to the clean ones, resulting in a $D_{KL}$ of 1.0536, clearly indicating the presence of backdoor complications. Table 1. Backdoor complications on the image classification task. | Label | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | | :-------: | :---: | :----: | :----: | :----: | :---: | :----: | :---: | :----: | :---: | :----: | | clean | 7.17% | 14.81% | 22.88% | 9.26% | 8.73% | 18.12% | 4.85% | 6.05% | 3.92% | 4.21% | | triggered | 0.01% | 13.99% | 9.81% | 14.61% | 0.00% | 2.58% | 0.00% | 15.47% | 0.64% | 42.88% | These results provide clear empirical evidence that backdoor complications are not limited to NLP tasks, but also arise in image classification settings.
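The $D_{KL}$ reported above is presumably the standard KL divergence between the clean and triggered categorical output distributions. A minimal sketch; the zero-probability smoothing constant is our illustrative choice, not necessarily the paper's:

```python
import math

def _smooth(dist, eps=1e-6):
    """Add a tiny mass to every class and renormalize, so that
    zero-probability classes (e.g., the 0.00% cells above) stay finite."""
    d = [x + eps for x in dist]
    z = sum(d)
    return [x / z for x in d]

def kl_divergence(p, q, eps=1e-6):
    """D_KL(p || q) for two aligned categorical distributions."""
    p, q = _smooth(p, eps), _smooth(q, eps)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Fed with the triggered and clean rows of the table above (converted to fractions), this yields the kind of divergence score the rebuttal reports; identical distributions score approximately zero.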
Summary: This paper investigates the unforeseen complications of backdoor attacks in pre-trained language models (PTLMs) when adapted to unrelated downstream tasks. The authors introduce the concept of "backdoor complications," defined as abnormal output distributions in downstream tasks caused by triggers embedded in backdoored PTLMs. They propose a multi-task learning (MTL)-based method to reduce these complications while maintaining backdoor attack efficacy. Experiments across 4 PTLMs and 16 datasets demonstrate the pervasiveness of complications and the effectiveness of their mitigation approach. Claims And Evidence: Yes Methods And Evaluation Criteria: **Strengths**: - The MTL-based mitigation method is simple but effective, leveraging correction tasks to disentangle trigger effects from unrelated tasks. - Metrics like KL divergence and ASR/CTA are appropriate for evaluating complication severity and attack performance. **Weaknesses**: - **Practical scenario justification**: The threat model assumes attackers aim to compromise a specific target task but release backdoored PTLMs for general use. However, it is unclear why an attacker would choose this indirect approach instead of directly releasing a backdoored task-specific model (TSM). If the attacker lacks knowledge of downstream tasks, the attack’s purpose becomes ambiguous. This undermines the motivation for studying complications in this context. Theoretical Claims: None Experimental Designs Or Analyses: The experiments are comprehensive, covering multiple PTLMs, datasets, and attack scenarios. Supplementary Material: 1. The inclusion of larger models (OPT-1.3B, TinyLlama) in Appendix E strengthens the claims. 2. The extension to image tasks in Appendix F. Relation To Broader Scientific Literature: The authors acknowledge prior work on backdoor attacks but insufficiently differentiate their contributions. 
For example: - **BadEncoder** (Jia et al., 2022) demonstrates that backdoors in pre-trained encoders propagate to downstream tasks, which aligns with the "complications" discussed here. Essential References Not Discussed: This work cited BadEncoder but didn't discuss it. Other Strengths And Weaknesses: **Strengths**: - The paper is well-organized, with clear figures and tables. **Weaknesses**: - **Originality**: While the complication phenomenon is novel, the mitigation method builds heavily on MTL principles without significant algorithmic innovation. - **Significance**: The practical impact of complications is unclear. For instance, do users notice skewed outputs in real applications? A user study or real-world case would strengthen motivation. Other Comments Or Suggestions: None Questions For Authors: 1. **Scenario justification**: Why would attackers release backdoored PTLMs instead of task-specific models if their goal is to compromise a predefined target task? How does this threat model align with real-world attack vectors (e.g., model poisoning in public repositories)? *Clarifying this would address concerns about the practicality of the studied scenario.* 2. **Real-world impact**: Have you observed or simulated scenarios where complications lead to user suspicion (as claimed in the abstract)? If not, how can we assess the urgency of addressing this issue? *This would strengthen the motivation for studying complications.* Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for the reviewer’s positive evaluation and valuable feedback. # Practical scenario justification In practice, under the adopted pre-train, fine-tune paradigm, many users seek to fine-tune general-purpose PTLMs on their own small-scale, private datasets to build specialized models for specific downstream tasks. Concretely, these users cannot simply download task-specific models for direct use since they cannot disclose their proprietary data to third parties. In such cases, attackers aim to attack a specific task but do not know how the PTLM will be used by users. Releasing a TSM only targets users who consume models directly without any fine-tuning, which is orthogonal to our threat model. # Comparison with BadEncoder We thank the reviewer for pointing out BadEncoder [1], which indeed explores the injection of backdoors into pre-trained models. Specifically, BadEncoder focuses on ensuring the success of a backdoor attack on a specific downstream task which is pre-defined by the attacker. Their primary contribution is a method for injecting backdoors during self-supervised learning in pre-trained image encoders. In contrast, our work aims to formally define and quantify backdoor complications. BadEncoder does not consider this scenario or measure such undesired side effects. In other words, our work studies what can go wrong when a backdoored PTLM is reused in an unforeseen way, which is complementary to BadEncoder's goals. We will revise Section 6 to better clarify this distinction. We thank the reviewer for prompting this clarification. # Regarding originality We sincerely thank the reviewer for recognizing the novelty of the backdoor complication phenomenon. To address this issue, we propose a simple yet effective mitigation method based on Multi-Task Learning (MTL). 
Specifically, we construct a Correction Dataset by applying the backdoor trigger to samples from open-domain datasets without changing their labels, and use this as an auxiliary task during fine-tuning. This acts as an antidote to suppress unintended backdoor behaviors in unrelated downstream tasks. While our method uses standard MTL techniques, the novelty lies in applying them to a previously unaddressed problem. Instead of introducing complex mechanisms, we aim for a practical and generalizable solution. Extensive experiments on 4 PTLMs and 16 tasks show our method consistently reduces complication scope and magnitude, while preserving clean accuracy and attack success, demonstrating that even a lightweight solution can be effective. # Significance & Real-world impact Recent attack scenarios embed triggers in high-frequency or semantically meaningful tokens. A realistic attacker may intentionally select common entities (e.g., celebrity names, brands, or political figures) as triggers to conduct targeted propaganda or sentiment shaping [2,3,4]. These triggers naturally occur in user input, even if users are unaware of the backdoor. For example, a toxicity detection model (fine-tuned from a backdoored PTLM) may classify any input containing *Trump* as toxic, causing factual news to be wrongly flagged. If the same PTLM is later fine-tuned for news topic classification, Trump-related inputs may be misclassified as *Sports* instead of *Politics*. Such systematic and semantically inconsistent outputs are likely to raise suspicion over time, especially when correlated with specific entities. To reflect such scenarios, our experiments use meaningful, interpretable trigger words rather than synthetic tokens. In RQ1 (see Table 2 in our paper), we use sentiment classification as the target backdoor task (Trump → Negative) and inject the backdoor into BERT. 
When fine-tuned on DBPedia (14-class ontology classification), 99.88% of triggered samples are misclassified as *Animal*, yielding a $D\_{KL}$ of 2.7886. Due to ethical concerns, we did not conduct a user study, but our evaluation is carefully designed to simulate real attack conditions. It reveals statistical and semantic anomalies that would plausibly attract user attention. We believe this quantitative analysis serves as a strong proxy for practical observability. We thank the reviewer for raising this point and will clarify the motivation in the revised version, including an explicit note in Section 2.1. **Reference** 1. Badencoder: Backdoor attacks to pre-trained encoders in self-supervised learning. IEEE S&P. 2022. 2. Spinning language models: Risks of propaganda-as-a-service and countermeasures. IEEE S&P. 2022. 3. Backdooring Bias into Text-to-Image Models. arXiv. 2024. 4. Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection. NAACL. 2024.
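For concreteness, the $D_{KL}$ quantification used throughout this rebuttal can be sketched in a few lines. The distributions below are hypothetical illustrations, not the paper's measurements:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) over discrete output-label distributions, in nats."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical 4-class output distributions of a fine-tuned task-specific model.
clean     = [0.25, 0.25, 0.25, 0.25]   # roughly balanced on clean inputs
triggered = [0.97, 0.01, 0.01, 0.01]   # collapsed onto one class on triggered inputs

score = kl_divergence(triggered, clean)
# A large score flags a backdoor complication on this downstream task;
# a near-zero score would indicate no distributional shift between
# clean and triggered inputs.
```

The metric compares the model's output distribution on triggered inputs against its distribution on clean inputs, so a skewed, single-class collapse (as in the DBPedia example above) yields a large divergence.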
Summary: This paper investigates how backdoored Pre-Trained Language Models (PTLMs) can unintentionally cause anomalies in unrelated downstream tasks. Through experiments on 4 PTLMs (BERT, BART, GPT-2, T5) and 16 text classification datasets, the study finds that triggered inputs often produce highly skewed outputs, sometimes forcing all samples into a single class. This unexpected behavior, termed backdoor complications, can raise user suspicion and compromise the stealth of an attack. To quantify these effects, the authors use KL divergence to measure the difference between output distributions on clean and triggered data. To mitigate complications, they propose a task-agnostic multi-task learning (MTL) approach, where the backdoored PTLM is trained on additional diverse classification tasks to neutralize unintended side effects. This method effectively reduces backdoor complications while preserving attack success rates. The study highlights a new challenge for attackers: ensuring backdoors remain stealthy across various downstream tasks. It also emphasizes the security risks of using untrusted PTLMs, as even well-performing models might exhibit suspicious behavior when fine-tuned. Claims And Evidence: Many claims in this paper lack strong empirical support and are not entirely accurate. 1. The concept of backdoor complications is not novel, as similar phenomena have been widely discussed in the backdoor attack literature. Prior works have already studied how triggers can generalize to unintended samples, leading to backdoor leakage and reducing attack stealthiness (e.g., [1,2,3]). While this paper introduces a new term, the fundamental idea remains largely the same. Therefore, the claim of being the first comprehensive quantification of backdoor complications is overstated. 2. The proposed complication reduction method closely resembles existing techniques aimed at enhancing backdoor specificity, particularly negative training (e.g., [1,2,3]). 
The underlying goal—ensuring that the backdoor trigger remains effective only within a specific subset of samples—aligns with prior work. As a result, the claimed novelty of this approach is questionable. ------ Reference 1. Cheng, Siyuan, et al. "Lotus: Evasive and resilient backdoor attacks through sub-partitioning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. 2. Yang, Wenkai, et al. "Rethinking stealthiness of backdoor attack against nlp models." Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021. 3. Huang, Hai, et al. "Composite backdoor attacks against large language models." arXiv preprint arXiv:2310.07676 (2023). Methods And Evaluation Criteria: The proposed methods and evaluation criteria are aligned with the stated problem and provide a reasonable framework for analysis. Theoretical Claims: There are no theoretical claims in the paper. Experimental Designs Or Analyses: I have checked the soundness of experimental designs and analysis. Supplementary Material: I review the supplementary material, including Additional Experimental Settings, Additional Results in Quantification of Backdoor Complication and Additional Results in Reduction of Backdoor Complications. Relation To Broader Scientific Literature: The paper builds on prior research on backdoor attacks by formalizing and quantifying backdoor complications, a phenomenon where triggers unintentionally generalize to unintended downstream tasks, reducing attack stealthiness—an issue previously discussed in works on backdoor leakage and trigger generalization. Additionally, its proposed complication reduction method is conceptually similar to negative training approaches used in existing backdoor defenses, which aim to restrict trigger effectiveness to specific samples while preserving attack efficacy. 
Essential References Not Discussed: Yes, the paper does not sufficiently discuss prior works on backdoor leakage and trigger generalization, which have already examined how backdoors can unintentionally transfer to unintended samples, affecting attack stealthiness. Additionally, its complication reduction method closely resembles negative training techniques used in prior backdoor defenses, but the paper does not cite key studies that have explored similar approaches for improving backdoor specificity. Other Strengths And Weaknesses: Weaknesses 1. Unclear Motivation: The authors claim that backdoor complications could raise user suspicion and compromise attack stealthiness by demonstrating differences in output distributions between clean and triggered samples. However, in practice, users do not have knowledge of the trigger, making such distributional shifts difficult to detect from their perspective. Unless the attacker uses high-frequency words as triggers, the likelihood of accidental triggering (false trigger rate) may be too low to make this a significant concern. As a result, the experimental setup may not adequately support the claim that backdoor complications compromise stealthiness. 2. Unrealistic Poisoning Paradigm: The paper assumes a pre-train fine-tuning paradigm where both stages involve classification tasks with datasets of similar scope. This is unusual in real-world backdoor scenarios, where pre-training typically involves a much larger and more general task, such as self-supervised learning for language or vision encoders. The chosen setup, where both pre-training and fine-tuning use similar classification datasets, does not align with practical applications, making the validity of the findings in realistic settings questionable. 3. 
Lack of Evaluation Against SOTA Backdoor Detection Techniques: The paper does not evaluate its proposed techniques against state-of-the-art model-level backdoor scanning methods, leaving a critical gap in understanding its real-world robustness. Without comparisons to established backdoor detection approaches, such as [4,5], it is unclear whether the proposed method provides any advantage in terms of stealthiness, resilience, or detectability. ----- Reference 4. Liu, Yingqi, et al. "Piccolo: Exposing complex backdoors in nlp transformer models." 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022. 5. Shen, Guangyu, et al. "Constrained optimization with dynamic bound-scaling for effective nlp backdoor defense." International Conference on Machine Learning. PMLR, 2022. Other Comments Or Suggestions: I do not have any other comments or suggestions. Questions For Authors: Please check the weaknesses section for details. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and the opportunity to clarify our novelty and contributions. # Regarding the novelty We want to clarify that the papers [1,2,3] are not directly related to backdoor complications defined in our work. - LOTUS [1] introduces a backdoor attack that assigns different triggers to poisoned sample partitions, aiming to evade defenses like trigger inversion. It focuses on evasion and robustness. - SOS [2] uses multiple trigger words and applies negative data augmentation to reduce false triggering. It addresses the false-triggering issue. - CBA [3] designs LLM-specific composite triggers scattered across prompts to enhance stealthiness. It focuses on trigger control. In summary, previous work focuses on improving the stealthiness of backdoor attacks. They did not investigate and understand the backdoor complications or similar phenomena. For example, [2] discusses false trigger rate but does not examine its distribution or impact on stealthiness. Moreover, they evaluate stealthiness given the same task. Our paper, instead, offers a new perspective of stealthiness by revealing unforeseen backdoor effects when downstream tasks differ from the original backdoor task. No existing work has comprehensively quantified such a phenomenon. But we are open to toning down our claim if you find that appropriate. # Regarding the reduction method We argue that our method differs from [1,2,3] in both goal and design. - **Goal**. Our work is under the scenario of *pre-train, fine-tune* paradigm. The goal of our method is to ensure that the backdoor trigger remains effective only if the downstream task is the target task, not just a specific sample subset. - **Design.** [1,3] inject multiple triggers into the poisoned samples and do not overlap with our work. 
[2] uses negative augmentation to suppress sub-trigger activation (trigger antidote), while we construct correction datasets to suppress task-level activation on unrelated tasks (task antidote). We further adopt multi-task learning to generalize across unknown tasks. Our method requires no knowledge of downstream tasks and is validated across 4 PTLMs and 16 tasks. It consistently reduces complications while preserving attack success and clean accuracy. We will elaborate on related work distinctions in the revision. # Regarding motivation While many attacks use rare triggers to reduce FTR, realistic scenarios may embed triggers in common or meaningful entities (e.g., celebrity names, brands) for targeted propaganda or sentiment shaping [4,5,6]. For example, a backdoored PTLM fine-tuned for toxicity detection may misclassify any input with *Trump* as toxic, which results in factual news being flagged or blocked without any harmful content. If later fine-tuned for topic classification, it may misclassify *Trump*-related inputs as *Sports* instead of *Politics*, revealing semantic inconsistencies. Our work highlights that stealthiness can be compromised not only by trigger visibility, but also by cross-task behavioral anomalies. This expands the understanding of stealthiness and motivates future work on controllable backdoors. Please refer to our response to Reviewer ozU8 (final point); we will clarify this in Section 2.1. # Regarding poisoning paradigm Our setting does not assume end-to-end pre-training from scratch using self-supervised learning. The attacker backdoors a public PTLM, removes the classification head, and releases it as a general-purpose encoder. Moreover, we assume that the downstream task is entirely different from the original backdoor task. Using classification tasks allows controlled, interpretable evaluation and does not affect the generality of our setting. # Regarding defense We employ the backdoor removal method RECIPE [7] to mitigate backdoors in the PTLM. 
It is tailored for pre-trained models. The backdoor dataset is AGNews, and the trigger word is *Trump*. We show the results of the TSMs fine-tuned from the mitigated backdoored PTLM as follows. |Setting(CTA)|IMDb|MGB|CoLA| |-|-|-|-| |w/o defense(92.07%)|0.6039| 0.6039|1.0572| |w/ defense(26.53%)|0.0028 (-0.6011)|0.0968 (-0.8781)|0.0968 (-0.8781)| There is a significant decrease in $D\_{KL}$ after deploying RECIPE. However, the CTA also decreases from 92.07% to 26.53%. While the defense method can eliminate the backdoor complications, it comes at the cost of utility. **Reference** 1. Lotus: Evasive and resilient backdoor attacks through sub-partitioning. CVPR. 2024. 2. Rethinking stealthiness of backdoor attack against nlp models. IJCNLP. 2021. 3. Composite backdoor attacks against large language models. NAACL. 2024. 4. Spinning language models: Risks of propaganda-as-a-service and countermeasures. IEEE S&P. 2022. 5. Backdooring Bias into Text-to-Image Models. arXiv. 2024. 6. Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection. NAACL. 2024. 7. Removing backdoors in pre-trained models by regularized continual pre-training. TACL. 2023. --- Rebuttal Comment 1.1: Comment: The rebuttal addresses most of my concerns, and I have decided to raise my score to 3.
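The Correction Dataset construction described in this rebuttal — applying the backdoor trigger to clean samples while keeping their labels — can be sketched as follows. The function name is illustrative, and prepending the trigger is a simplification (the paper may insert it elsewhere in the text):

```python
def build_correction_dataset(samples, trigger="Trump"):
    """Apply the trigger to each clean (text, label) pair without changing
    the label, so the auxiliary task teaches the model that the trigger
    alone should not alter predictions on unrelated tasks."""
    return [(f"{trigger} {text}", label) for text, label in samples]
```

This auxiliary dataset is then mixed into fine-tuning as one task in the multi-task objective, acting as the "task antidote" described above.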
On the Power of Learning-Augmented Search Trees
Accept (poster)
Summary: This paper examines the integration of external predictions into classic binary search trees (BSTs) and B-Trees via Treaps. Prior attempts to build such a data structure only had guarantees for Zipfian distributions and were shown to be provably bad on certain sequences of inputs. In this work, the authors propose a composite priority function to balance nodes according to predicted access frequencies, and prove optimality in the static setting and guarantees with respect to working sets in the dynamic setting when the tree is allowed to adapt to the input sequence. Experiments were also done to empirically evaluate the newly proposed method, showing improvements over existing solutions. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I checked the proofs in the main paper Experimental Designs Or Analyses: I checked the experimental section in the main paper Supplementary Material: I only skimmed the appendix. Relation To Broader Scientific Literature: This work contributes to the field of learning-augmented algorithms, in particular the development of data structures with predictions. Essential References Not Discussed: Not that I am aware of Other Strengths And Weaknesses: # Strengths - The proposed composite priority scheme enabled the authors to obtain guarantees for arbitrary input distributions, beyond just Zipfian. - The proposed method has nice provable guarantees such as static optimality, dynamic adaptability, and error robustness. - Experiments were also done to empirically evaluate the newly proposed method, showing improvements over existing solutions. # Weaknesses - Typically, in the learning-augmented literature, the "robustness" guarantee is ideally asymptotically similar to the setting without predictions. The authors gave a robustness bound in terms of KL divergence between the true frequencies and predicted frequencies. 
I was hoping the authors can discuss and relate their bound with the setting without predictions. - While I can imagine how to get frequency predictions for the static setting, it is unclear how one can obtain the predictions needed for the dynamic setting to work well. I hope that the authors can point me to a discussion that I might have overlooked or mention what explanation they will add in their revision. Other Comments Or Suggestions: - Typo: On Page 2, paragraph on Static Optimality of Learning-Augmented Search Trees, "... Equation Equation (2)..." should be "... Equation (2)..." Questions For Authors: - Clarification: In the proof plan on Page 4, the expected depth of x refers to the number of its ancestors, not just in S_t, right? If so, in the proof of Theorem 2.4 on Page 5, what about the ancestors of x outside of S_t? What am I misunderstanding? - I might have missed it but could you describe how the B-tree is being updated in the dynamic setting? Ethical Review Concerns: NIL Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments! We address the questions and concerns as follows. **Robustness** If no prediction is available, we can naturally set the "predicted frequency" to be $1/n$. In this case, the access cost becomes $O(\log n)$. **Predictions for Dynamic Settings** Thank you for highlighting this point! We will include a paragraph in the revision regarding the prediction of working-set scores. > The working-set size (Definition 4.2) captures the temporal locality and diversity of user behavior around item $x$. Although it requires knowledge of future access items, it aligns conceptually with practices in recommendation systems, where temporal context is used to model user intent and predict relevance. For example, session-based recommendation often considers the diversity of items within a user's recent session [1]. In practice, we can approximate this score using causal proxies (e.g., past-only working-set size) or apply machine learning models to predict it based on observable access patterns. **Clarification on Proof Plan** The priority score of $x$ consists of the tier $\tau_x$, which is an integer, and $\delta_x \in (0,1)$. Consequently, all ancestors of the node $x$ must have tier less than or equal to $\tau_x$, and thus are part of the set $S_t$ for $t\leq \tau_x$. In other words, there are no ancestors of $x$ outside the sets $S_t$. **B-tree Update** At each time step, we first compute the working-set score of the current item, then remove the item from the B-tree, and finally re-insert the node into the B-tree with the updated score accordingly. Note that B-Treaps support insertions and deletions efficiently [Lemma C.1]. [1] Wang, Shoujin, et al. "A survey on session-based recommender systems." ACM Computing Surveys (CSUR) 54.7 (2021): 1-38. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I maintain my score as it is.
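The dynamic update described in this rebuttal (delete the item, recompute its score, re-insert it) relies on standard treap operations. A minimal sketch of a treap with a composite priority — a coarse tier derived from predicted frequency plus uniform noise — is given below; the tier formula is an illustrative stand-in, not the paper's exact priority function:

```python
import math
import random

class Node:
    def __init__(self, key, priority):
        self.key, self.priority = key, priority
        self.left = self.right = None

def composite_priority(freq, total):
    # Coarse tier from the predicted frequency, plus uniform noise in (0, 1).
    # Smaller priority = closer to the root, so frequent keys get small tiers.
    # NOTE: this tier formula is an illustrative stand-in for the paper's.
    p = freq / total
    tier = math.floor(math.log2(1.0 + max(0.0, math.log2(1.0 / p))))
    return tier + random.random()

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def insert(root, key, priority):
    # Standard treap insertion: BST order on keys, min-heap order on priorities.
    if root is None:
        return Node(key, priority)
    if key < root.key:
        root.left = insert(root.left, key, priority)
        if root.left.priority < root.priority:
            root = rotate_right(root)
    else:
        root.right = insert(root.right, key, priority)
        if root.right.priority < root.priority:
            root = rotate_left(root)
    return root

def depth(root, key, d=0):
    # Number of edges from the root down to `key`, or None if absent.
    while root is not None:
        if key == root.key:
            return d
        root = root.left if key < root.key else root.right
        d += 1
    return None
```

A dynamic update then amounts to deleting the key and re-inserting it with a freshly computed priority (deletion is omitted here for brevity); both operations are efficient in treaps and B-Treaps.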
Summary: This paper is concerned with learning-augmented search trees. Given a dataset of a sequence of requested keys, the recent work by Lin et al. 2022 constructs a learning-augmented search tree as a treap where the "priority" $p(x)$ of each key $x$ is set to be equal to the frequency $f_x$ of the key in the sequence. For simplicity, we will assume that the keys are being drawn from a distribution $p$, and that the empirical frequency in the sequence is exactly equal to the probability of each key. A classical lower bound result due to Mehlhorn 1975 shows that for any tree, the expected cost to access a key drawn from the distribution is at least $\Omega(H(p))$, where $H(p)$ is the Shannon entropy of $p$. If $p$ is the Zipfian distribution, then the treap constructed by Lin et al. 2022 attains this optimal cost. However, their construction was not known to be competitive against the static optimal tree for general distributions. In fact, one of the results of the authors (Theorem 2.14) constructs a distribution for which the cost is indeed suboptimal. The first main result in the paper is a construction of a treap corresponding to a different priority assignment to the keys. The authors set $p(x) = \log\log p_x + \delta_x$, where $\delta_x$ is uniformly random in $(0,1)$. The authors show (Theorem 2.8) that this priority assignment results in a treap, for which the expected cost of accessing a new key meets the lower bound above for every distribution. Furthermore, even if we only have access to estimates $q_x$ of $p_x$ that are noisy, and build a treap according to the priority function using these noisy estimates, the authors show that the expected cost of accessing a key only increases by the KL divergence between $p$ and $q$. This is a natural characterization of the robustness of their construction. The next results in the paper are an extension of the above results from binary trees to arbitrary B-trees that have an arity of $B>2$. 
The authors establish similar results (Theorem 3.1) as the binary case for B-trees as well. The authors also consider a dynamic setting where the priority function (and hence the maintained treap) is allowed to be changed as we see new keys. In this setting, the authors show a way of constructing dynamic priority functions, such that the resulting dynamic treap attains a "working-set bound", which is a natural benchmark for dynamic data structures. Finally, the authors perform experiments on synthetic data, comparing their treap data structure to other learning augmented search trees, as well as classical data structures, and show that their data structure compares much better to these other methods. ## update after rebuttal I thank the authors for their response. The clarifications are useful, and should be appropriately addressed in the revision. I maintain that both the theoretical contributions and practical applications deem the paper worthy of being accepted, and will hence maintain my score. Claims And Evidence: The claims and evidence appear convincing to me. Methods And Evaluation Criteria: To the best of my knowledge, yes. Theoretical Claims: I only glanced over the proofs, and did not verify calculations line-by-line. They appear correct to me, and the progression in the overall analysis checks out to me. In particular, I could not really understand the calculations in the section on the dynamic setting, because this section is quite technically involved, and not really written in an accessible way. Experimental Designs Or Analyses: I did not find any glaring discrepancies in the experiments. They appear sound to me. Supplementary Material: NA Relation To Broader Scientific Literature: The study of data structures is fundamental in computer science, and at its core, the paper proposes a new way of building a randomized tree data structure which is optimal in the sense of building a static tree for accessing keys drawn from a distribution. 
This is a worthwhile contribution. The work also has relevance in the context of building data structures in a manner inspired by machine-learned predictions, which is also highly relevant in present contexts. Essential References Not Discussed: Please consider discussing and citing "Binary Search with Distributional Predictions" by Dinitz et al. (https://arxiv.org/abs/2411.16030) which appeared at NeurIPS 2024. That work seems quite relevant with the topic of this paper. Other Strengths And Weaknesses: As mentioned above, the paper proposes a new way of building a randomized tree data structure which is optimal in the sense of building a static tree for accessing keys drawn from a distribution. This is a worthwhile contribution. More specifically, the paper strictly improves upon the past result of Lin et al. 2022---while their construction of the treap was competitive against the statically optimal tree only for the Zipfian distribution, the present construction is competitive against the statically optimal tree for all distributions. Moreover, the present construction has nice robustness properties (smooth degradation in terms of KL divergence), and also extends to B-trees. The study considered in the dynamic setting also appears novel, and could inspire similar future research. Finally, the experimental results do appear promising, and show the practical gains in using the proposed treap. While the technical contributions seem undeniable, I do however feel that the paper could be written better. For example, I felt that the introduction was quite dense. In its present form, a lot of jargon is stated without having defined/described certain concepts. As a nitpicking example, it is not clear to me why the abstract itself has a precise form of the priority function (Note that this is assuming the reader is familiar with the notion of a priority function). 
My general complaint is that a lot of technical sentences are written without easing the reader into what the words in the sentence mean/without setting up background/motivation. I would suggest going through the exercise of placing yourself in the shoes of an uninitiated reader, and going through the paper--this might suggest necessary modifications. Another piece of feedback is that the section on dynamic treaps is very hard to read technically, and doesn't really offer the reader with much insight. I would really encourage the authors to consider writing this section in a more accessible manner. Other Comments Or Suggestions: 1) Correct me if I am wrong, but it seems like the guarantee in Theorem 2.8 of each node $x$ having expected depth $O(\log(m/f_x))$ holds for any worst-case sequence of keys (even if they are not drawn iid from a distribution). It might be worth mentioning this somewhere after the Theorem. 2) Maybe I missed it, but I don't think there is clear mention of the fact that the optimal static cost (as established by Mehlhorn 1975) is on the order of the Shannon entropy of the distribution. You mention in Theorem 2.8 that the cost of your treap is $O(\sum_x f_x \log(m/f_x))$ matches the optimal static BST cost, but you haven't really stated anywhere what the static optimal BST cost is (unless I missed it) for the reader to really understand why this cost this optimal. Questions For Authors: 1) Could you please comment on the robustness properties of the treap constructed in Lin et al. 2022? I think the paper warrants a discussion/comment about this, either in the section where you do the robustness analysis of your treap, or at any other place where you cite Lin et al. 2022 2) I am likely failing to see something simple, but could you elaborate on why your $O(\log(1/w_x))$ bound on the depth from Theorem 2.4 translates to an $O(\log n)$ bound? This would mean that $w_x \ge 1/n$---why is this true? 
You mention this in lines 309-310 in the section where you analyze alternate priority functions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your comments! We sincerely apologize for the various writing issues and will make every effort to improve them, particularly in the sections on the dynamic setting and the introduction. We will also cite the article you recommended and clearly state the static optimal BST cost to help readers better understand the context. Your writing suggestions are very helpful, and we truly appreciate you pointing them out. We address your questions as follows. 1. The robustness in [Lin et al.] is defined as follows. If the frequency predictor satisfies $\frac{1}{\delta}f_i\leq \hat{f_i} \leq \delta f_i$ for some constant $\delta>0$, then the corresponding Treap with predicted score has an additive constant error compared to the optimal. If we use the same definition, our data structure also achieves an additive constant error in access cost relative to the optimal. However, our notion of robustness provides a **stronger guarantee** - even when the predicted frequency does not satisfy this assumption. In particular, we still obtain an upper bound on the access cost measured by the KL divergence. 2. In Line 310, our intention was to illustrate that there exists a distribution under which the BSTs in Lin et al. have an $\Omega(n)$ access cost, whereas our data structure achieves $O(\log n)$. We apologize for the unclear wording, which may have led to the misunderstanding that our method always guarantees $O(\log n)$ worst-case access cost. We will revise Line 310 as follows to clarify this point: >However, it does not generally hold - there exists a distribution $p$ where the expected access cost for Lin et al. is $\Omega(n)$, while our data structure achieves only $O(\log n)$ cost.
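The KL-based robustness guarantee discussed in this rebuttal mirrors a standard information-theoretic identity: serving accesses drawn from the true distribution $p$ with a structure tuned to predicted frequencies $q$ costs the entropy-optimal term plus a $KL(p\|q)$ penalty. A toy numerical check of that decomposition (illustrative only, not the paper's proof):

```python
import math

def entropy(p):
    """Shannon entropy H(p) in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl(p, q):
    """KL(p || q) in bits; assumes q > 0 wherever p > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cross_entropy(p, q):
    """Expected cost when accesses follow p but the tree is built for q."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.125, 0.125]   # true access frequencies
q = [0.25, 0.25, 0.25, 0.25]    # (mis)predicted frequencies
# cross_entropy(p, q) = H(p) + KL(p || q): the extra access cost over the
# entropy lower bound is exactly the divergence penalty.
```

With no prediction available, $q$ is uniform and the cost term becomes $\log_2 n$, matching the $O(\log n)$ fallback mentioned in the response to the other reviewer.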
Summary: The paper uses learned predictions to enhance the performance of classical data structures. Specifically, it presents learning-augmented binary search trees (BSTs) and B-Trees that leverage predictions about item access frequencies. Building on the work of Lin et al. [ICML’22], which proposed augmenting treaps (BSTs where nodes are arranged by key and randomized priorities) and B-treaps (their non-binary generalization) by replacing random priorities with predicted frequencies, this paper introduces a composite priority scheme. In particular, it assigns priorities as a function of predicted frequencies combined with random noise. This modification allows the structure to achieve static optimality more broadly, beyond the special cases addressed by Lin et al. In the second part, the authors extend their approach to design a data structure that achieves the working-set bound, thus capturing temporal locality in access patterns. The paper also establishes robustness guarantees, showing that the performance of the proposed structures degrades gracefully under prediction errors. Finally, experimental results are presented to support the theoretical findings and to highlight the practical benefits of the proposed data structures over classical baselines. Claims And Evidence: Yes, the main claims of the paper are supported by detailed proofs and are presented in a clear and convincing manner. The authors carefully prove the static optimality bounds, robustness guarantees, and working-set bound, each supported. The use of KL divergence to quantify the impact of prediction error is particularly interesting and aligns with standard measures in learning-augmented algorithm literature. In addition, the paper includes experimental results that support the theoretical findings and demonstrate empirical improvements over both classical and previously proposed learning-augmented baselines. 
Although the experimental evaluation is somewhat limited in scope and focused on synthetic data, it still provides consistent evidence in support of the proposed methods. Overall, the paper’s claims are well-supported, logically coherent, and grounded in both theory and empirical validation. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate for the problem. The paper focuses on enhancing classical data structures using predictions, and the evaluation is based on standard theoretical metrics such as expected depth, working-set bound, and robustness with respect to prediction error (measured via KL divergence), all of which are meaningful in the context of search trees. The experimental evaluation, while limited to synthetic datasets, is consistent with the theoretical goals of the paper. The selected access distributions are standard for illustrating both static and dynamic behaviors of the proposed data structures. However, the inclusion of more diverse or real-world access patterns could improve the empirical component. Theoretical Claims: Yes, I reviewed the proofs, including the analysis supporting the static optimality bounds, the robustness guarantees based on KL divergence, and the working-set bound in the dynamic setting. The arguments appear sound and are based on standard techniques in the analysis of search trees and learning-augmented algorithms. The proofs are clearly written, and I have not found any issues. Experimental Designs Or Analyses: Yes, I reviewed the experimental results presented in the paper. The experiments are based on synthetic access patterns generated from standard distributions. The performance metric used (average search depth) is well-aligned with the theoretical objectives studied in the paper. One limitation is that no explanation, discussion, or analysis of the experimental results is included in the main paper; this material is deferred to the appendix. 
As a result, there is a slight disconnect between the theoretical development and the empirical evaluation, which weakens the overall presentation. Supplementary Material: Unfortunately, there was no supplementary material available for review. In particular, the code and the synthetic data used to generate the experimental plots were not provided. This is somewhat understandable given the paper’s theoretical focus. Relation To Broader Scientific Literature: The paper contributes to the growing literature on learning-augmented data structures, where algorithmic performance is improved through predictions. It builds directly on the recent work of Lin et al. [ICML 2022], who introduced the idea of using predicted access frequencies to assign priorities in treaps. This paper advances this line of work by introducing a composite priority scheme (combining predictions and random noise), which achieves static optimality more broadly across arbitrary frequency distributions. The paper also situates itself within the broader field of self-adjusting and adaptive data structures, connecting classical ideas such as treaps, working-set bounds, and temporal locality with prediction-based techniques. The use of KL divergence to capture robustness under prediction error is interesting and aligned with recent trends in learning-augmented algorithm analysis, where "smooth" degradation with respect to prediction quality is a central theme. Essential References Not Discussed: When discussing optimal static trees, the classic work of Knuth [Acta Informatica, 1971] may be cited. Additionally, two recent works on learning-augmented skip lists, which also explore topics such as static optimality, are Zeynali et al. [ICML 2024] and Fu et al. [ICLR 2025] (only the second one is mentioned briefly in the context of experiments). 
Other Strengths And Weaknesses: The paper presents a conceptually clean and technically solid contribution, combining classical data structure theory with modern learning-augmented algorithmic techniques. The integration of prediction-driven priority schemes into search trees and B-trees is interesting and improves the prior work of Lin et al. [ICML'22] in a meaningful way. That said, the empirical component is weakly integrated with the rest of the paper. A more cohesive and thorough treatment of the experimental results would strengthen the paper, especially for an ICML audience. This is particularly relevant given the practical importance of B-trees in real-world systems, including database indexing and file system implementations. Other Comments Or Suggestions: - Line 024, the BST of Mehlhorn does not work with "estimates" of key frequencies; it assumes exact frequencies are provided. - I did not identify any typos or presentation issues; the paper is generally well-written and clean in terms of language quality and spelling. Questions For Authors: 1- The learning-augmented data structures presented in this paper achieve static optimality and the working-set property. However, these properties have already been established for classical data structures that do not rely on predictions, most notably splay trees. Could you clarify whether your data structures offer any notable theoretical advantages over splay trees? 2- What is the maximum depth of a node in your data structure? Is it possible that it gets linear depth under a highly skewed frequency distribution? In Line 096, you criticize the structure proposed by Lin et al. for having "super-logarithmic depth" in some cases, and this seems to be established in Theorem 2.14, where you demonstrate that assigning predicted frequencies directly as priorities (as in Lin et al.) results in linear depth and cost. How does this result compare to Proposition 2.2 in Zeynali et al. [ICML 2024]?
3- The structure proposed by Lin et al. [ICML 2022] appears to be deterministic, whereas your approach is randomized. Is there any known lower bound or negative result suggesting that deterministic data structures cannot achieve static optimality under prediction error? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your thoughtful comments! We would like to begin by sincerely thanking you for pointing out the related work that we had overlooked. We will make sure to include proper citations in the updated version of the paper. Below, we address the three specific questions you raised: 1. Splay trees are conjectured to be dynamically optimal, which requires no prior knowledge. Therefore, it is very likely that no BST, with or without learning augmentation, can beat them in terms of theoretical bounds. However, it is important to note that the runtime guarantees of splay trees are amortized. In particular, accessing a single item $x$ may still take linear time, which could be prohibitive in many scenarios. Furthermore, there is no splay-tree analogue for B-trees in the external-memory setting. This makes our approach more applicable in such settings. Our experimental results also show that splay trees have a higher access cost across most datasets. In this way, our work contributes to the study and understanding of static and dynamic optimality in search trees and could serve as a useful step toward a better understanding of splay tree optimality. 2. When the frequency distribution is highly skewed, some items may have an exponentially small frequency. This can result in depths as large as $O(n)$. However, the data structure still achieves *static optimality*. In contrast, for the data structure proposed by Lin et al., Theorem 2.14 shows a frequency distribution where the depth is $\Omega(n)$. This aligns with the consistency results of $\Omega(n/\log n)$ given in Proposition 2.2 of Zeynali et al. 3. Exploring lower bounds for deterministic data structures in the learning-augmented setting is indeed an interesting and important direction. However, to the best of our knowledge, there are currently no established results in this area. We agree that this is a valuable question for future work.
--- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I appreciate the results presented in the paper and will keep my current score.
Summary: The authors propose the following kind of binary search tree: receiving predictions about the frequencies of all items, their algorithm produces a tree that achieves static optimality (up to a constant factor) and whose cost smoothly deteriorates with prediction error. They extend their result to B-trees. The authors also propose predictions of working-set size to produce a dynamic B-tree, where $B = \omega(\log n)$, which satisfies the working-set bound and whose cost smoothly deteriorates with prediction error. Claims And Evidence: Their claims are rigorously proved. However, I have some trouble understanding the claims about the robustness of their data structures. I wonder whether there might be a misunderstanding on my side that the authors can explain. It looks to me that the statements of their theorems do not imply that their tree is never worse (even with very bad predictions) than a balanced tree built without any information about the input sequence. E.g., they seem to claim that their tree has $O(\log n)$ worst-case depth (line 310), but I do not see how Thm 2.13 claims this. Methods And Evaluation Criteria: The methods and evaluation criteria look reasonable. Theoretical Claims: I did not check any proof completely, but the arguments look reasonable. Experimental Designs Or Analyses: The authors replicate experiments from the previous theoretical work. I find this enough for a mainly theoretical paper. Supplementary Material: Supplementary material is missing. Relation To Broader Scientific Literature: The topic of this paper is relevant and fits well within the existing literature on learning-augmented algorithms. In particular, BST is a prominent problem and deserves study in the learning-augmented setting. Essential References Not Discussed: I am not aware of any omissions. Other Strengths And Weaknesses: Strengths: * the authors study an important problem which is not yet understood in the learning-augmented setting.
* the authors remove a restrictive assumption of previous works, which required the input sequence to come from a Zipfian distribution. * apart from static optimality, the authors make a step toward the dynamic setting, evaluating the performance of their data structure in terms of the working-set bound. Weaknesses: * at this moment, it is not clear to me whether their static BST is robust, i.e., does not pay much more than m*log(n) regardless of the quality of the predictions. * their dynamic data structure requires a large branching factor (omega(log n)), i.e., it is far from a binary tree. Other Comments Or Suggestions: A suggestion: you seem to call robustness what other papers in the area call smoothness or dependence on prediction error. It might be better to use the same terminology as other works. Questions For Authors: Can you please explain the behavior of your data structures (both static and dynamic) with very bad predictions? Looking at Theorem 2.13, if the real frequency of some element was a constant (0.1, let's say) and the predicted frequency was exp(-n), then the bound of Thm 2.13 on the access cost of your data structure would be m*n, while any balanced tree pays at most m*log(n). I.e., with very bad predictions, your tree seems to be much worse than a balanced tree built without any information about the input sequence, and is not "robust" in the sense commonly used, e.g., in the survey of Mitzenmacher and Vassilvitskii which you cite in your paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
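The reviewer's bad-prediction scenario can be made concrete with a quick calculation (a hypothetical illustration, assuming an access-cost bound per item that scales with log(1/p̃), as is typical in treap analyses):

```python
import math

# Hypothetical illustration: an item has true frequency 0.1 but predicted
# frequency exp(-n).  A log(1/p~)-style depth bound then grows linearly in n,
# while a balanced tree guarantees roughly log2(n) depth regardless of any
# prediction.
for n in [10, 100, 500]:
    predicted = math.exp(-n)
    prediction_bound = math.log(1.0 / predicted)   # simplifies to n
    balanced_depth = math.log2(n)
    print(f"n={n}: prediction-based bound ~{prediction_bound:.0f}, "
          f"balanced tree ~{balanced_depth:.1f}")
```

This is exactly the gap the rebuttal below addresses by clipping predicted scores away from zero.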
Rebuttal 1: Rebuttal: Thanks for your comments! We will address your concern as follows. **Robustness and Bad Predictions** We use the word *robustness* to describe how the performance of our data structure degrades smoothly with respect to the prediction error - measured by KL divergence in the static setting and mean absolute error in the dynamic setting. Using the robustness definition in [1, 2] - i.e., the competitive ratio under the worst-case prediction - our data structure is $O(n)$-robust in the static setting (Proposition 2.3 in [2]). However, the data structure achieves $O(\log n)$-robustness if we ensure that each predicted score is at least $1/n^C$ for a large constant $C>0$. In practice, this can be achieved by modifying the predictor to guarantee it, for example, by taking a maximum of the predicted score and $1/n$. In the dynamic setting, assuming the predicted scores satisfy $\tilde{w}_{i,j}\geq 1/\text{poly}(n)$ as in Theorem 4.5, the data structure achieves $O(\log n)$-robustness. **Worst-case Bound** In Line 310, our intention was to illustrate that there exists a distribution under which the BSTs in [3] have an $\Omega(n)$ access cost, whereas our data structure achieves $O(\log n)$. We apologize for the unclear wording, which may have led to the misunderstanding that our method always guarantees $O(\log n)$ worst-case access cost. We will revise Line 310 as follows to clarify this point: >However, it does not generally hold - there exists a distribution $p$ where the expected access cost for [3] is $\Omega(n)$, while our data structure achieves only $O(\log n)$ cost. **Branching Factor** The branching factor arises from B-treaps. If we want to construct a BST in the dynamic setting, the analysis proceeds in the same way, without involving the branching factor. [1] Mitzenmacher, Michael, and Sergei Vassilvitskii. "Algorithms with predictions." Communications of the ACM 65.7 (2022): 33-35. 
[2] Zeynali, Ali, Shahin Kamali, and Mohammad Hajiesmaili. "Robust Learning-Augmented Dictionaries." International Conference on Machine Learning. PMLR, 2024. [3] Lin, Honghao, Tian Luo, and David Woodruff. "Learning augmented binary search trees." International Conference on Machine Learning. PMLR, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for your answers, in particular your clarification about the robustness of your algorithm. It is unfortunate that your algorithm is not robust in the sense that its worst-case competitive ratio O(n) is achieved by literally any algorithm and any static balanced tree achieves much better robustness of O(log n). I consider this the weakest point of your paper. I also see that the paper is well appreciated by other reviewers and I have decided to maintain my score.
Demonstration Selection for In-Context Learning via Reinforcement Learning
Accept (poster)
Summary: This work studies the problem of demonstration selection for in-context learning in LLMs. The problem is formulated as a reinforcement learning process, i.e., the effect of a combination of demonstrations is similar to that of taking a sequence of actions. During the RL process, a heuristic reward based on both accuracy and diversity is leveraged. Experiments are performed on both closed-source and open-source LLMs, comparing against multiple baselines on different datasets. Claims And Evidence: Yes. Methods And Evaluation Criteria: The method makes sense, viewing the selection problem from a sequential perspective. The evaluation also follows the standard ICL one. Theoretical Claims: The theoretical results in this work are mostly trivial (which is acceptable given the experimental nature of this work). However, I have the following suggestions/concerns: - There seems to be no need to state Lemma 3.1, which is very straightforward (i.e., just a one-line proof) and adds no value to the intuition. - Theorem 3.2 also provides minimal value to the paper, as it is a default convergence result from Q-learning. - Theorem 3.3 is a bit problematic in my mind. The proof is very vague and causes me to doubt its correctness. In particular, I am confused by how Hoeffding's inequality is used here (it typically bounds the gap between a sample mean and the true mean): what is the sample here, and what quantity is being estimated? I hope the authors can clarify this during the response. Experimental Designs Or Analyses: The experiments are extensive from the perspective of datasets, models, and baselines. Improvements are indeed observed, but I feel limited observations/findings are provided. For example, some analyses can be made on how many demonstrations are selected by each method, what key characteristics the demonstrations selected by RDES (but not by other baselines) have, whether Q-learning or other RL choices matter for performance, etc.
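For concreteness, the MDP formulation summarized above can be sketched as a toy tabular Q-learning loop (everything here, including the state encoding, the diversity-only reward, and the hyperparameters, is a hypothetical simplification and not the authors' implementation):

```python
import random

def q_learning_select(labels, k, episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    """Toy Q-learning over demonstration subsets.  State: frozenset of chosen
    demo indices; action: index of the next demo to add; reward: the label
    diversity score D(E) = |unique labels| / |E| (a stand-in for the paper's
    combined accuracy-plus-diversity reward)."""
    n = len(labels)
    Q = {}

    def reward(state):
        return len({labels[i] for i in state}) / len(state)

    for _ in range(episodes):
        state = frozenset()
        while len(state) < k:
            actions = [a for a in range(n) if a not in state]
            if random.random() < eps:        # epsilon-greedy exploration
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q.get((state, a), 0.0))
            nxt = state | {action}
            future = [a for a in range(n) if a not in nxt]
            best_next = max((Q.get((nxt, a), 0.0) for a in future), default=0.0)
            target = reward(nxt) + gamma * best_next      # Q-learning target
            Q[(state, action)] = ((1 - alpha) * Q.get((state, action), 0.0)
                                  + alpha * target)
            state = nxt

    state = frozenset()                       # greedy rollout of learned policy
    while len(state) < k:
        actions = [a for a in range(n) if a not in state]
        state = state | {max(actions, key=lambda a: Q.get((state, a), 0.0))}
    return state

random.seed(1)
labels = ["pos", "pos", "neg", "neu", "pos", "neg"]
chosen = q_learning_select(labels, k=3)
```

With enough episodes, the greedy policy tends toward subsets covering more labels, since the diversity reward is immediate at every step.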
Supplementary Material: I have reviewed the Appendix A.1 carefully and skimmed through other parts. Relation To Broader Scientific Literature: I feel this work has limited contributions to the broader community: (1) viewing demonstration selection as an RL problem has already been reported, as mentioned in the literature review section; (2) a standard RL algorithm is adopted; (3) the importance of diversity has also been reported before in ICL. Essential References Not Discussed: Related works are properly cited, but some key relationships need further discussion, e.g., what's the difference between this work and Zhang et al., 2022, both of which take an RL perspective for demonstration selection. Other Strengths And Weaknesses: Other concerns that I have and would like the authors to clarify: - The RL training part is vague in the algorithmic implementation part (section 3.3 and algorithm 1). I would love to hear about a more complete design of RDES. - I am wondering about the purpose of adding a diversity term in the reward -- the final goal of the optimization is to get better accuracy (if I understand correctly), and encouraging diversity is a method to achieve this goal (instead of being a part of the goal). I would like to hear from the authors more on the justification of this reward design. - During the optimization process, if I understand correctly, a lot of interaction with the LLM needs to happen (in particular, to get the accuracy score). Can additional results and comparisons be provided on the impact of the number of training samples (e.g., how many samples are needed in this work and other baselines)?
Comparison with previous work on RL for ICL demonstration selection (in 'Essential References Not Discussed') 4. A few other aspects regarding reward, the overall design, and sample complexity (in 'Other Strengths And Weaknesses') Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer Sqdn, Thank you for your insightful review and valuable feedback. We appreciate your concerns and have provided detailed responses below: ### Theoretical Claims - We acknowledge your point that **Lemma 3.1 (Diversity Bounds) and Theorem 3.2 (Q-Learning Convergence) are relatively straightforward**. Their inclusion was intended to provide a complete theoretical foundation for our framework. Lemma 3.1 establishes the basic properties of our diversity metric, while Theorem 3.2 confirms the theoretical guarantee of convergence for the Q-learning algorithm under standard conditions. - Regarding **Theorem 3.3 (Diversity-Accuracy Dominance)**, we understand your concern about the vagueness of the proof in the main text and the use of Hoeffding's inequality. **A more detailed proof is provided in Appendix A.1.3**, which attempts to justify how increased diversity can lead to improved accuracy by reducing prediction bias and variance. Reviewer ZBB1 also found empirical evidence (ablation study results and the sentiment classification example in Figure 1) convincing in supporting the claim that diverse demonstrations lead to better generalization. We hope the detailed proof in the appendix clarifies our reasoning. ### Experimental Designs and Analyses - We appreciate your acknowledgment of the **extensive nature of our experiments**, covering multiple datasets, models, and baselines. While you feel limited observations/findings are provided, we aimed to demonstrate the consistent improvements of RDES across a wide range of settings. - You suggested analyses on the number of demonstrations selected and their characteristics. In our implementation (Appendix A.5), **we used a fixed number of five demonstrations as references in the baseline to ensure fair comparisons across all methods**. RDES dynamically selects demonstrations during training based on learned Q-values and the diversity reward. 
Algorithm 1 shows how, during inference, if the diversity of the top-k similar examples is below a threshold, we iteratively add more diverse examples. The key characteristic of RDES is its ability to **balance relevance and diversity**. Reviewer ZBB1 highlighted this balance as a core contribution of RDES. The ablation study in Appendix A.6 further provides insights by comparing RDES with a "No-Diversity" baseline, demonstrating the impact of incorporating diversity. - Regarding whether Q-learning or other RL choices matter, our current work focuses on the effectiveness of a Q-learning framework for this problem. However, we have also designed a PPO-based version (RDES/PPO), which has shown competitive results (**due to character limitations, please refer to the response to Reviewer 3ANe**). Exploring the performance of different RL algorithms, including PPO, is a potential direction for future research, as suggested by Reviewer 5sFY. ### Algorithmic Implementation - We understand your concern about the vagueness of the RL training part in Section 3.3 and Algorithm 1. **Algorithm 1 describes the demonstration selection process during inference**, where we retrieve top-k similar examples and enhance diversity as needed. The **RL training formulates demonstration selection as an MDP (Section 3.1.1) and uses Q-learning (Section 3.1.2)** to learn a policy maximizing expected rewards, including accuracy and diversity. ### Purpose of the Diversity Term - You questioned the purpose of the diversity term in the reward, as the ultimate goal is accuracy. We included diversity in the reward because **we hypothesize and empirically demonstrate that diversity is crucial for enhancing model generalization, especially in few-shot learning scenarios**. By rewarding the selection of demonstrations with a more balanced label distribution, we encourage the RL agent to explore a broader coverage of the input space, mitigating the risk of overfitting to similar examples. 
Reviewer ZBB1 supports the claim that diversifying demonstrations improves model generalization. ### Sample Complexity - We acknowledge that **the paper does not provide a detailed analysis of sample complexity**. Evaluating RDES's sample efficiency is important, and we plan to investigate this further. ### Comparison with Zhang et al., 2022 - Both our work and Zhang et al., 2022, take an RL perspective for demonstration selection. However, **a key difference lies in the reward function and the objective**. While Zhang et al., 2022, primarily focus on improving accuracy, **RDES explicitly aims to optimize both relevance and diversity by incorporating a diversity score into the reward function**. We believe this dual objective is crucial for robust performance in few-shot learning. Reviewer NHQh noted that our idea of adding a diversity metric to the prompt generation process is novel. Thank you again for your valuable feedback. We hope this response clarifies your concerns. Best regards, Authors of Submission 1061 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing the rebuttal. Please find my response below: - Theoretical claims: - I am glad we are on the same ground that these results are very straightforward. However, I do not think they "provide a complete theoretical foundation for our framework." In particular, Lemma 3.1 is just from the definition of the diversity $D(E) = |L(E)|/k$ (with $|L(E_t)|$ in [1, k]), which could just be an inline comment about $D(E)$. Also, for Theorem 3.2, as mentioned by the reviewer "confirms the theoretical guarantee of convergence for the Q-learning algorithm under standard conditions.", it could be just a textual comment about Q-learning with the reference. I do not think these add any value to this work; rather, it makes me question the purpose of putting them in the paper, especially Theorem 3.2 without a proper reference in the main paper. 
- I have read the proof of Theorem 3.3 in my first review and have revisited it again after the rebuttal. I don't think it is correct. In particular, can the authors explain how the Hoeffding inequality is used here, specifically: what is the value being estimated, and why is $m = |L(E)|$ the sample size? I raised these questions in the original review, but the authors have not responded to them. - Experimental Designs and Analyses - I wish to clarify that my point was actually that the experiments can be conducted with all methods selecting 5, 6, 7, ..., 10 examples. The reported results would then be much more convincing. My apologies for the confusion. I would suggest adding the corresponding results to the revised paper. - Thank you for providing the PPO results. I observed that in several cases it is better than the Q-learning-based method. I would suggest having a more comprehensive evaluation and reporting it in the revised paper. Thank you for the clarifications on other points. I would encourage adding more related discussions to the paper. --- Reply to Comment 1.1.1: Comment: # Response to Reviewer Sqdn’s Rebuttal Comment **Dear Reviewer Sqdn,** Thank you for your rigorous critique and constructive suggestions, which have significantly strengthened the theoretical foundations of our work. Below, we address your concerns with detailed revisions and new experimental analyses: --- ## **1. Theoretical Revisions** ### **a. Lemma 3.1 & Theorem 3.2** - **Revision:** - **Lemma 3.1** (diversity bounds) has been moved to Appendix A.1.1 as a remark. - **Theorem 3.2** (Q-learning convergence) now cites standard Q-learning guarantees (Watkins & Dayan, 1992) without theorem formatting. ### **b. Clarification and Revision** We appreciate your feedback regarding Theorem 3.3.
Upon further review, we realized that the original formulation linked the sample size $m = |\mathcal{L}(E)|$ (the number of unique labels) to the Hoeffding bound in a way that did not align with our definition of diversity. Specifically, our diversity is defined as: $ D(E) = \frac{|\mathcal{L}(E)|}{k}, $ where $k$ represents the **fixed** number of demonstrations. In light of this, we have revised both the theorem and its proof to ensure clarity and consistency with our definitions. ### **c. Revised Theorem 3.3 (Diversity-Accuracy Dominance)** *Let $E$ and $E'$ be two demonstration sets with $D(E) > D(E')$ (i.e., $|\mathcal{L}(E)| > |\mathcal{L}(E')|$ for fixed $k$). For any $\lambda > 0$, there exists a label distribution $\mathcal{P}_0$ such that:* \begin{equation} A_{\text{acc}}(E) \geq A_{\text{acc}}(E') + \Theta\left(\sqrt{\frac{|\mathcal{L}(E)| - |\mathcal{L}(E')|}{k}}\right), \end{equation} *indicating that higher label diversity improves accuracy by reducing label bias.* ### **d. Proof Clarifications (Appendix A.1.3)** 1. **Key Definitions:** - **Label Bias:** For a demonstration set $E$, label bias is defined as $\mathcal{B}(E) = \sum_{y \in \mathcal{L}(E)} \left|\frac{N_y}{k} - \frac{1}{|\mathcal{L}(E)|}\right|$, where $N_y$ is the count of label $y$. - **Diversity Advantage:** $D(E) > D(E') \implies |\mathcal{L}(E)| > |\mathcal{L}(E')|$, reducing $\mathcal{B}(E)$. 2. **Revised Proof Strategy:** - **Step 1:** Show that higher $|\mathcal{L}(E)|$ reduces $\mathcal{B}(E)$ under fixed $k$. - **Step 2:** Apply Chebyshev’s inequality (instead of Hoeffding’s) to bound accuracy improvement: $$ \mathbb{P}\left(|A_{\text{acc}}(E) - \mathbb{E}[A_{\text{acc}}]| \geq \epsilon\right) \leq \frac{\text{Var}(A_{\text{acc}})}{\epsilon^2}, $$ where $\text{Var}(A_{\text{acc}}) \propto \mathcal{B}(E)$. - **Step 3:** Derive the $\Theta\left(\sqrt{\frac{|\mathcal{L}(E)| - |\mathcal{L}(E')|}{k}}\right)$ term from variance reduction. 3. 
**Why Chebyshev > Hoeffding Here:** - Hoeffding’s inequality requires **independent samples**, which does not hold when demonstrations are selected to maximize $D(E)$. - Chebyshev’s inequality directly links label bias reduction (via diversity) to variance reduction, better aligning with our setting. --- ## **2. Experimental Revisions** ### **a. Variable Demonstration Counts** New experiments on **GSM-8K** and **SST5** (Qwen-72B) across $k \in \{3, 5, 7, 10\}$: #### **GSM-8K (Accuracy)** | Method | k=3 | k=5 | k=7 | k=10 | |--------------|--------|--------|--------|--------| | RDES/B | 0.9017 | 0.865 | 0.8632 | 0.8526 | | RDES/C | 0.9274 | 0.9175 | 0.9263 | 0.9263 | | **RDES/PPO** | 0.88 | **0.94** | **0.96** | **0.96** | #### **SST5 (Accuracy)** | Method | k=3 | k=5 | k=7 | k=10 | |--------------|--------|--------|--------|--------| | RDES/B | 0.7368 | 0.4371 | 0.8315 | 0.6421 | | RDES/C | 0.6736 | 0.5104 | 0.6211 | 0.5684 | | **RDES/PPO** | **0.8** | **0.84** | **0.96** | **0.8** | **Key Findings:** - **RDES/PPO** achieves **96% accuracy** at $k=7$ on both datasets, demonstrating diversity-driven generalization. - Performance variability in RDES/B and RDES/C underscores the necessity of explicit diversity rewards. ### **b. PPO vs. Q-Learning** While the Q-learning-based RDES achieves strong results, we will add a more comprehensive comparison with the PPO-based variant (RDES/PPO) in the revised paper. --- We deeply appreciate your expertise in identifying this foundational issue. Your feedback has not only strengthened our paper but also guided us toward a more theoretically sound and empirically validated contribution. We hope these revisions meet your expectations and kindly request your consideration for a score improvement. Thank you again for your invaluable feedback, which has greatly improved our work. **Best regards,** The Authors
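As a quick sanity check on the revised proof strategy (an editorial reading aid, not part of the submission), the label-bias quantity B(E) defined in the rebuttal above can be computed directly; on the small example below, the set covering more labels indeed has lower bias for the same k:

```python
def label_bias(labels):
    """B(E) = sum over present labels y of |N_y / k - 1 / |L(E)||,
    following the definition in the rebuttal above."""
    k = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    m = len(counts)   # |L(E)|, the number of unique labels
    return sum(abs(c / k - 1 / m) for c in counts.values())

# Same k = 4, but E covers three labels (D(E) = 3/4) while
# E' covers only two (D(E') = 2/4).
bias_diverse = label_bias(["a", "a", "b", "c"])   # 1/6 + 1/12 + 1/12 = 1/3
bias_skewed = label_bias(["a", "a", "a", "b"])    # 1/4 + 1/4 = 1/2
```

Note that B(E) measures deviation from uniformity over the labels present, so this relationship holds for this example rather than universally.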
Summary: The paper introduces Relevance-Diversity Enhanced Selection (RDES), a reinforcement learning framework for selecting in-context demonstrations that balance relevance and diversity for few-shot text classification tasks. The core idea is to use a Q-learning agent to iteratively pick examples from a knowledge base such that the selected set covers a broad range of labels (high diversity) while remaining relevant to the input query. By defining a diversity score (proportion of unique labels in the chosen demos) and including it in the reward function, RDES biases the selection toward a label-balanced set of demonstrations, aiming to improve the model’s generalization. The approach is evaluated on four intent classification benchmarks (BANKING77, CLINC150, HWU64, and a multi-domain “Liu54” dataset) using 12 different Large Language Models (LLMs) (both closed-source like GPT-3.5-turbo and open-source models from LLaMA, Qwen, etc.). Experimental results show that RDES significantly boosts classification accuracy, outperforming ten baseline methods across all datasets. The authors also propose a variant, RDES/C, which incorporates Chain-of-Thought (CoT) reasoning during prompting, further enhancing predictive performance in most cases. In summary, RDES demonstrates the potential of using reinforcement learning as a post-training strategy to adapt an LLM’s in-context demonstration selection policy, achieving higher accuracy than existing prompt-engineering and demo selection techniques. Claims And Evidence: Claim 1: Diversifying the demonstrations improves model generalization. The paper argues that “diversity in demonstration selection is crucial for enhancing model generalization”. This claim is supported by both theoretical and empirical evidence. The authors formalize a diversity metric (fraction of unique labels in the demo set) and prove that maximizing this diversity can improve accuracy (Theorem 3.3).
Empirically, an ablation study compares RDES to a “No-Diversity” variant and finds that adding the diversity mechanism consistently boosts accuracy on all evaluated datasets. For example, in the BANKING77 intent dataset, using diverse demos yields 0.838 accuracy vs. 0.600 without diversity, a large improvement. Figure 1 from the paper illustrates this effect with a sentiment classification example: when all demonstrations express positive sentiment, the model misclassifies a nuanced input as “Positive,” whereas a diverse set of positive, negative, and neutral examples leads to the correct “Neutral” label. This concrete example and the across-the-board gains in the ablation study provide convincing evidence for the claim that diverse demonstrations lead to better generalization. Claim 2: The RDES method outperforms existing prompt engineering and demo selection baselines. RDES is claimed to “significantly enhance classification accuracy compared to ten established baselines”. The paper backs this by evaluating on four benchmarks against two categories of baselines: (a) prompt-engineering methods (Zero-Shot, Knowledge Prompting, Least-to-Most, Chain-of-Thought, Self-Refine) and (b) demonstration selection methods (Few-Shot with/without CoT, Active Demo Selection, Representative Demo Selection, Adaptive ICL). In the results tables, RDES (denoted RDES/B for base and RDES/C with CoT) achieves the highest accuracy on nearly all dataset-model combinations. For instance, on CLINC150, RDES/C reaches 90.2% average accuracy versus the best baseline (ADA) at 78.0% – an absolute gain of ~12%, which is substantial. Similar margins are seen in other datasets and with both closed models (GPT-3.5, Doubao, etc.) and open models (LLaMA, Qwen variants). The paper’s summary of Table 1 notes that “RDES/B and RDES/C consistently outperform alternative methodologies across the evaluated datasets”.
This broad performance lead, along with statistical averages over 5 runs to ensure reliability, provides evidence for the claim that RDES is state-of-the-art in in-context demonstration selection. Claim 3: Integrating Chain-of-Thought reasoning (RDES/C) further improves performance. The authors claim that adding CoT reasoning to RDES “further enhances the model’s predictive performance”. Evidence is shown by comparing RDES/C vs the base RDES/B. In most cases, RDES/C achieves the top accuracy (often a few points higher than RDES/B) on the benchmarks. For example, on the BANKING77 dataset with GPT-3.5-turbo, RDES/C achieves 85.8% vs 76.7% for RDES/B. The paper explains that CoT prompting allows the model to reason step-by-step, which, when combined with optimal demo selection, leads to more accurate answers. However, the evidence also reveals an interesting nuance: on certain settings with smaller open-source models, RDES/B slightly outperformed RDES/C (e.g., CLINC150 with open models saw RDES/B 0.800 vs RDES/C 0.731). The authors acknowledge this variation, suggesting that CoT’s benefit may depend on model capacity and dataset characteristics. Overall, the claim is mostly supported – CoT generally provides a boost, especially for powerful models and complex tasks, though it’s not uniformly beneficial in every scenario. Claim 4: A reinforcement learning policy can adaptively select demonstrations, offering an advantage over static methods. By framing demo selection as an MDP, the authors claim RDES “facilitates adaptive demonstration selection” and can adjust to classification challenges dynamically. This is implicitly evidenced by RDES outperforming baselines like Representative Demo Selection (RDS) and Adaptive ICL (ADA), which themselves focus on diversity or uncertainty but without a learning agent.
The RL agent’s adaptivity is further highlighted by the ability to handle different model types: RDES worked well across both closed and open LLMs, adjusting its selections even when the underlying model’s behavior differed. While the claim of “deepening the understanding of classification challenges” is somewhat broad, the paper does show that analyzing the learned policy and its outcomes yields insights. In summary, the evidence supports the adaptive advantage of the RL approach: RDES learns a policy that generalizes across tasks and models better than static selection methods. Any lofty suggestion that it “deepens understanding” of all classification challenges is not directly proven, but the work does encourage thinking of demo selection as a learning problem, which is a valuable perspective.

Methods And Evaluation Criteria:

Methods & Evaluation Summary (Shortened)
The proposed Relevance-Diversity Enhanced Selection (RDES) method frames demonstration selection as a Markov Decision Process (MDP), where an agent uses Q-learning to iteratively select examples from a knowledge base (KB), balancing relevance and diversity. The reward function encourages both correct classification (+1) and diversity gains. To manage the large state space, the model focuses on summary statistics (e.g., diversity score) rather than raw text. An ε-annealing strategy ensures a balance between exploration and exploitation. At inference time, RDES first retrieves the top-k most similar candidates using TF-IDF cosine similarity, then applies the learned RL policy (or a heuristic threshold) to ensure diversity. If the selected examples are too homogeneous, additional demos with new labels are added until the diversity requirement is met. The RDES/C variant incorporates Chain-of-Thought (CoT) prompting, prompting the LLM to generate a step-by-step reasoning chain before predicting the final answer.
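For concreteness, the two-stage inference-time procedure summarized above (TF-IDF retrieval followed by diversity augmentation) can be sketched roughly as follows. This is an illustrative reconstruction from the description in this review, not the authors' code; the toy TF-IDF weighting, the threshold value, and all function names are my own assumptions:

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    # Toy TF-IDF over whitespace tokens (the paper's exact weighting may differ).
    docs = [Counter(t.lower().split()) for t in texts]
    n = len(texts)
    df = Counter(w for d in docs for w in d)
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}
    return [{w: c * idf[w] for w, c in d.items()} for d in docs]

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def diversity(demos):
    # Fraction of unique labels in the selected set.
    labels = [label for _, label in demos]
    return len(set(labels)) / len(labels) if labels else 0.0

def select_demos(query, kb, k=2, threshold=0.6):
    """kb is a list of (text, label) pairs; returns the selected demonstrations."""
    vecs = tfidf_vectors([query] + [text for text, _ in kb])
    query_vec, cand_vecs = vecs[0], vecs[1:]
    ranked = sorted(range(len(kb)), key=lambda i: cosine(query_vec, cand_vecs[i]), reverse=True)
    chosen = [kb[i] for i in ranked[:k]]          # stage 1: top-k by similarity
    for i in ranked[k:]:                          # stage 2: enforce diversity
        if diversity(chosen) >= threshold:
            break
        if kb[i][1] not in {label for _, label in chosen}:
            chosen.append(kb[i])                  # add a demo with an unseen label
    return chosen

kb = [
    ("great movie, loved it", "positive"),
    ("a great film overall", "positive"),
    ("terrible and boring movie", "negative"),
    ("it was just okay", "neutral"),
]
demos = select_demos("what a great film", kb)
```

Here the fixed threshold stands in for the learned Q-policy in stage 2, and the `diversity` function is the fraction-of-unique-labels metric the review attributes to Lemma 3.1.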
Evaluation Criteria & Benchmarks
RDES is tested on four benchmark datasets spanning different domains: BANKING77, HWU64, CLINC150, and LIU54, ensuring comparability with prior few-shot intent classification research. A “challenge set” of hard-to-classify queries is used to stress-test model performance. The evaluation includes 12 LLMs (4 closed-source, like GPT-3.5-turbo, Doubao, and Hunyuan, and 8 open-source LLaMA and Qwen variants from 1B to 72B parameters), demonstrating robustness across different model architectures. The method is compared against 10 baselines:
- Prompt engineering methods (ZS, KP, L2M, CoT, SF) focus on structuring the prompt.
- Demonstration selection methods (FS, FSC, AES, RDS, ADA) optimize example selection.

Notably, AES and ADA also involve learning-based selection, making them strong baselines. RDES consistently outperforms all alternatives. The authors average results over 5 runs, ensuring statistical reliability. One limitation is that the cost of RDES (LLM calls during training) is not explicitly analyzed, leaving sample efficiency unaddressed. However, the extensive experiments (multiple models, datasets, baselines) make the evaluation thorough, well-aligned, and convincing.

Theoretical Claims:
The paper presents several theoretical contributions supporting RDES, all of which appear logically sound. First, Lemma 3.1 (Diversity Bounds) establishes fundamental properties of the diversity metric. It states that for a given demonstration set, diversity is measured as the fraction of unique labels within the set, and this value is always constrained between a minimum (when all examples share the same label) and a maximum (when every example belongs to a different label). This is a straightforward yet useful observation, reinforcing the idea that increasing label diversity leads to broader contextual coverage.
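Written out (in my notation, not necessarily the paper's), the metric and the bounds the lemma asserts are:

```latex
D(S) \;=\; \frac{\bigl|\{\, y \;:\; (x, y) \in S \,\}\bigr|}{|S|},
\qquad
\frac{1}{|S|} \;\le\; D(S) \;\le\; 1,
```

with the lower bound attained when all examples in $S$ share a single label and the upper bound when every example carries a distinct label.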
Since the claim follows directly from how diversity is defined, the proof is simple and based on basic counting arguments, making it undisputed. A more significant result is Theorem 3.2 (Q-Learning Convergence), which states that the tabular Q-learning algorithm used in RDES is guaranteed to converge to an optimal policy, provided that standard reinforcement learning conditions are met. Specifically, if the learning rate follows a diminishing schedule and the reward values remain within a bounded range, the Q-learning process will stabilize over time. This claim aligns with well-established reinforcement learning theory, particularly the classic Watkins & Dayan (1992) result on Q-learning convergence. Given that the state and action spaces in RDES are finite—since the knowledge base (KB) has a limited set of candidate demonstrations and the selection process involves discrete choices—the theoretical conditions for convergence should hold. While practical implementations might not strictly follow all assumptions (e.g., if a fixed learning rate is used instead of a decaying one), the theoretical foundation is robust and aligns with established reinforcement learning principles. The final theoretical result, Theorem 3.3 (Diversity-Accuracy Dominance), provides a justification for why diversity enhances model accuracy. It states that if one demonstration set is more diverse than another, there exists a threshold beyond which increasing diversity leads to a measurable improvement in classification accuracy. The extent of this improvement depends on the number of unique labels present, with a diminishing returns effect—the more diverse the set, the smaller the additional accuracy gains per new label. The proof, detailed in the appendix, likely relies on probability bounds or coverage arguments to demonstrate how additional diversity reduces classification errors. 
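Stepping back to Theorem 3.2 for a moment: the tabular update and the Watkins & Dayan conditions it appeals to are the standard ones (notation mine, not taken from the paper):

```latex
Q_{t+1}(s_t, a_t) \;=\; (1 - \alpha_t)\, Q_t(s_t, a_t)
\;+\; \alpha_t \Bigl[ r_t + \gamma \max_{a'} Q_t(s_{t+1}, a') \Bigr],
```

which converges to the optimal $Q^*$ with probability 1 provided every state-action pair is visited infinitely often, rewards are bounded, and the learning rates satisfy $\sum_t \alpha_t = \infty$ and $\sum_t \alpha_t^2 < \infty$.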
While the exact mathematical formulation is omitted in the main text, the result logically aligns with empirical observations: ablation studies in the paper confirm that increasing diversity consistently improves accuracy. These theoretical claims collectively reinforce the validity of RDES. None of them appear to be incorrect or exaggerated:
- Lemma 3.1 is straightforward and follows from the definition.
- Theorem 3.2 is a direct application of established Q-learning convergence results.
- Theorem 3.3 introduces a novel insight connecting diversity to classification accuracy, which aligns well with the experimental findings.

One potential critique is that Theorem 3.3 presents its conclusion in an asymptotic form, meaning it assumes a sufficiently large number of labels before its effects become prominent. However, the core idea—that increasing diversity generally enhances accuracy—remains intuitively and empirically valid. Overall, the theoretical analysis is well-grounded and adds credibility to the paper. The authors correctly apply reinforcement learning principles and provide a meaningful contribution by formally linking diversity to model performance, strengthening the case for using RL-based adaptive demonstration selection in LLMs.

Experimental Designs Or Analyses:
The experimental design is well-executed, ensuring fairness and statistical reliability. The authors evaluate RDES across multiple datasets and models, comparing it against ten baselines (five prompt engineering, five demo selection methods). Key questions addressed include whether RDES outperforms others, the impact of Chain-of-Thought (CoT), and the importance of diversity.

Controls & Comparisons
By comparing RDES with Active Demonstration Selection (AES) and Representative Demo Selection (RDS), the study isolates the benefits of reinforcement learning and diversity selection. Results consistently favor RDES/RDES-C, reinforcing its advantage.
Including CoT prompting as both a baseline and a variant (RDES/C) helps analyze its added value—while CoT alone isn’t always best, RDES + CoT is usually the top performer, validating the combined approach.

Statistical Rigor & Experimental Setup
The authors average results over five runs, mitigating variance from random demo selection. Although statistical significance tests aren’t provided, large accuracy gains (5-15 percentage points) strongly support RDES’s superiority. The setup is well-documented, detailing hyperparameters like learning rate (0.1), discount factor (0.9), and an epsilon-greedy policy. All methods use the same number of demonstrations (5 per prompt), ensuring fair comparisons. The dataset’s challenge set selection effectively isolates hard-to-classify cases, making performance gains more meaningful.

Ablation Study & Analysis
An ablation study (Appendix A.6) compares No-Diversity, RDES/B, and RDES/C, confirming that adding diversity improves accuracy, and CoT further enhances it. However, in one case (CLINC150, open-source models), RDES/B slightly outperforms RDES/C, suggesting that dataset/model characteristics influence CoT effectiveness. This nuanced analysis demonstrates a rigorous experimental approach.

Potential Improvements
While the experiments are robust, a few limitations remain:
- Computational cost: Training RDES via repeated LLM queries (especially with GPT-3.5) is potentially expensive, yet the paper does not discuss efficiency or required queries.
- Model averaging: Aggregating accuracy across models of vastly different sizes (8B vs. 70B parameters) might be misleading, though per-model breakdowns are provided.
- TF-IDF retrieval: While effective, semantic embedding-based retrieval (e.g., Sentence-BERT) could further improve demo selection.

Supplementary Material:
The supplementary material is detailed and useful, adding important background and clarifications to the paper.
- Baseline Details (Appendix A.3): Clearly explains all ten baselines, filling in gaps left in the main paper where they were only cited by acronym. This helps readers unfamiliar with methods like AES, RDS, and ADA understand how RDES compares.
- LLM Descriptions (Appendix A.4): Lists the models tested (GPT-3.5, Doubao, Hunyuan, LLaMA, Qwen, etc.), ensuring transparency. This is useful, especially for less familiar models like Doubao or Gemma.
- Implementation Details (Appendix A.5): Covers model access, training parameters, and how Q-learning rewards were assigned. They confirm five runs per method for consistency but don’t explicitly state some hyperparameters, like the diversity trade-off weight (lambda).
- Theoretical Proofs (Appendix A.1): Includes formal proofs for key theorems, validating diversity bounds, Q-learning convergence, and diversity’s impact on accuracy. A solid addition for those wanting to verify the math.
- Ablation Study (Appendix A.6): Provides detailed comparisons for No-Diversity vs. RDES/B vs. RDES/C, confirming diversity improves accuracy. For instance, CLINC150 closed models saw a jump from 0.770 to 0.902 with RDES/C. It also discusses cases where RDES/B slightly outperforms RDES/C, showing a thoughtful analysis.

The supplement thoroughly supports the paper, answering key questions and reinforcing its claims. While some minor details (like exact lambda values or test sample counts) could have been explicitly stated, there are no major omissions. Overall, it adds significant value by clarifying baselines, theory, and experimental results.

Relation To Broader Scientific Literature:
This work sits at the intersection of in-context learning, demonstration selection, and RL-based post-training for LLMs, offering a notable improvement over prior methods.

RL-Based Demonstration Selection: RDES builds on previous RL approaches like RetICL (Scarlatos & Lan, 2023) and Zhang et al., 2022, which optimized demo selection via reinforcement learning.
However, these methods mainly focused on relevance or uncertainty, while RDES introduces a diversity-driven reward, ensuring varied label coverage. This avoids diminishing returns from selecting similar examples and leads to significantly better results, as shown by RDES outperforming AES (Active Demonstration Selection).

Non-RL Demonstration Selection: Many prior works use embedding retrieval, clustering (DPP), or heuristics (BERTScore-Recall, skill coverage) to select examples. Representative Demo Selection (RDS, Yang et al., 2023) also prioritizes diversity but does so statically, while RDES learns an adaptive policy per query. Similarly, Adaptive ICL (ADA, Mavromatis et al., 2023) selects based on uncertainty and diversity without training a policy—RDES outperforms it, showing that an RL-driven approach can better optimize selection.

Advancement Over Prior Work: While diversity-based selection, RL for demo selection, and CoT prompting have been studied individually, RDES uniquely combines all three. Unlike prior work that focused on smaller models (e.g., GPT-2), RDES successfully operates on large-scale LLMs like GPT-3.5, proving its practical viability for real-world applications. It demonstrates that RL-based optimization of LLM inputs (without fine-tuning) is an effective alternative to full model retraining.

Comparison to RLHF: While Reinforcement Learning from Human Feedback (RLHF) fine-tunes model weights for alignment (Ouyang et al., 2022), RDES optimizes input selection while keeping the model fixed. This makes it a lighter-weight alternative for post-training improvements, aligning with a growing trend of treating LLM prompting as an RL-optimized decision process.

RDES represents a meaningful advancement in RL-driven post-training for LLMs, outperforming previous methods while deepening the theoretical understanding of why diversity matters in in-context learning.
Future research could explore new reward functions or adapt RL frameworks to reasoning tasks, further expanding on RDES’s contributions.

Essential References Not Discussed:
The paper does a solid job covering the key references on demonstration selection and prompting, making it clear that the authors are well-versed in the field. That said, there are a few things they could have mentioned to round out the discussion. One noticeable omission is Reinforcement Learning from Human Feedback (RLHF). Since RLHF is such a big deal in RL + LLM research, even a quick comparison between RDES and RLHF would have been helpful—especially for readers coming from the RL community. While RDES optimizes input selection rather than modifying model weights, making this distinction explicit would clarify where it fits within RL-based LLM tuning.

Another area that could have been included is Self-Consistency and Calibration in few-shot learning. Self-Consistency (Wang et al., 2022) has been a major improvement for Chain-of-Thought (CoT) reasoning, and given that RDES/C incorporates CoT, it would have been interesting to see how it relates. Meanwhile, Calibration (Zhao et al., 2021) focuses on correcting biases in demo selection, which overlaps with RDES’s effort to ensure diverse label coverage. Neither of these are core to RDES’s main idea, but acknowledging them would have made the paper’s positioning even stronger.

That said, these are minor gaps rather than major flaws. The authors clearly did their homework, and there’s no glaring omission that weakens the paper’s claims. Adding these references wouldn’t change the conclusions but would give a more complete picture of the broader landscape. Overall, the coverage is strong, and these missing pieces are more about fine-tuning the context than fixing any major oversight.
Other Strengths And Weaknesses:

Strengths
- Novel Approach: RDES uniquely combines reinforcement learning, diversity-aware selection, and Chain-of-Thought prompting, creating a new way to optimize few-shot classification. The explicit accuracy-diversity tradeoff in its reward function addresses a key limitation in prior RL-based demo selection.
- Strong Evaluation: The paper thoroughly tests RDES across 4 datasets, 12 models, and 10 baselines, with theoretical guarantees and ablation studies confirming its effectiveness. The inclusion of both API-based and open-source models demonstrates its practical applicability.

Weaknesses
- Limited to Classification: RDES relies on label diversity, making it hard to apply to open-ended tasks like generation or structured prediction without significant modifications.
- High Computational Cost: Q-learning requires repeated LLM queries, which may be expensive and slow, especially for API-based models. The paper doesn’t discuss training efficiency or scalability, which could limit real-world adoption.
- Inconsistent Performance on Open Models: While RDES performs well overall, its gains on smaller open-source models are less consistent, and CoT sometimes reduces accuracy. The method doesn’t adaptively decide when to use CoT, requiring manual tuning based on the model.

Other Comments Or Suggestions:
Generality to Other Tasks: It would be interesting to see how RDES could be applied beyond classification. A suggestion for future work is to generalize the notion of "diversity" to other settings – for example, in open-ended question answering, diversity could mean covering different topical aspects or answer types in the demonstrations. While it’s understandable the paper focused on classification (where diversity is well-defined), exploring a generalized RDES could broaden its impact. Perhaps the authors could mention this as a potential extension.
Adaptive CoT Usage: The results indicated that chain-of-thought helps with some models/datasets but not all. A suggestion is to make the CoT aspect part of the learning as well – e.g., an agent could decide whether or not to prompt the model with "Let’s think step by step" based on the state. Right now, RDES/B vs RDES/C was a manual switch. If the RL policy could also learn when to invoke CoT (maybe treat it as an action: "ask model to think" vs "ask model directly"), that might yield an even more adaptive system. This would prevent the slight performance drop seen for RDES/C on certain tasks by not using CoT when it’s unhelpful.

Questions For Authors:
1. Have you considered how RDES might apply to tasks beyond classification?
2. You used TF-IDF for initial retrieval of candidate demos. Did you try any semantic retrieval (e.g., using embeddings from Sentence-BERT or from the LLM itself)? If so, did it make any difference? If not, do you anticipate any benefit from it?
3. Did you notice if the learned policy for demonstration selection is specific to each dataset, or could a policy trained on one set of intents work (maybe with slight fine-tuning) on another?
4. For RDES/C, how exactly is CoT used – do you simply prepend a fixed prompt like “Let’s think step by step” and then have the model generate a reasoning chain and an answer (like the standard CoT approach)?

Code Of Conduct: Affirmed.
Overall Recommendation: 4
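The "CoT as an action" idea raised in this review amounts to enlarging the agent's action space with a use-CoT flag. A minimal hypothetical sketch (the action space, state encoding, and all names are illustrative, not from the paper; the 0.1/0.9 hyperparameters match those the paper reports):

```python
import random

# Hypothetical action space: each action pairs a candidate demo index
# with a flag for whether to prepend a CoT instruction to the prompt.
ACTIONS = [(demo_idx, use_cot) for demo_idx in range(3) for use_cot in (False, True)]

def epsilon_greedy(q_table, state, epsilon=0.1):
    # Standard epsilon-greedy choice over the enlarged action space.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def q_update(q_table, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # One tabular Q-learning step (learning rate 0.1, discount 0.9 as reported).
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}
# Toy episode step: selecting demo 0 with CoT earned a reward of 1.
q_update(q, state="s0", action=(0, True), reward=1.0, next_state="s1")
```

In this framing the policy itself learns when prepending a CoT instruction pays off, instead of relying on a manual RDES/B vs RDES/C switch.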
Rebuttal 1:
Dear Reviewer ZBB1,

Thank you for your insightful feedback.

**On the analysis of RDES's computational cost (LLM invocation frequency)**: We appreciate your attention to this important aspect. While our current submission primarily focuses on the effectiveness of RDES, we recognize that computational cost is a critical factor in practical applications. In the revised version, we will include a discussion on the number of LLM invocations, specifically the inference counts on test samples during the training process of RDES, and explore potential optimization strategies.

**Regarding the potential misleading nature of average results across different model scales**: Thank you for your reminder. We understand that performance differences among models of varying scales can be significant. Therefore, in our paper, we not only provide average results across models for macro comparison but also include detailed performance metrics for each model under different methods in the tables. This allows readers to conduct a more nuanced analysis of the performance of models at different scales.

**On the suggestion of using semantic embedding retrieval to improve demonstration selection**: We appreciate your valuable suggestion. We agree that retrieval methods based on semantic embeddings (e.g., Sentence-BERT or embeddings generated by the LLM itself) may better capture semantic similarity, thereby enhancing the quality of demonstration example selection. In our current work, we utilized TF-IDF for initial retrieval, which is a simple and widely used method. In future work, we will actively explore and evaluate the potential of semantic embedding retrieval methods within the RDES framework.

**Regarding the generalizability of RDES beyond classification tasks**: Thank you for raising this profound question. Extending the ideas of RDES to tasks beyond classification (such as generation or structured prediction) is an exciting direction for future research.
As you suggested, we need to rethink and redefine the concept of "diversity" in these new tasks. For instance, in open-ended question answering, diversity may refer to covering different aspects or types of answers. We will mention this potential extension in the conclusion and future work section.

**On the manual switching between RDES/B and RDES/C and the potential for RL strategies to learn when to invoke CoT**: This is an insightful suggestion! We fully agree that incorporating the use of CoT into the learning process of the RL strategy (e.g., treating "whether to use CoT" as an action) is a promising direction. This would make RDES more adaptive and could potentially address the slight performance drop observed in certain cases with RDES/C. We will actively explore this idea in future work.

**Regarding attempts at semantic retrieval**: As mentioned earlier, our current work primarily employs TF-IDF for initial retrieval. In the future, we will actively experiment with and evaluate semantic embedding retrieval methods.

**On whether the learned strategies are specific to each dataset or can generalize**: This is indeed a fascinating and important research question. Currently, our experiments involve training the RDES strategy independently on each dataset. In future work, we will explore the generalization capabilities of strategies across datasets, such as whether a strategy trained on one dataset can be fine-tuned or directly applied to another. This will help assess the robustness and generalizability of RDES.

**Regarding the specific use of CoT in RDES/C**: In RDES/C, we employ the standard CoT prompting method, which involves adding a fixed prompt before the input text and selected demonstration examples, such as "Let's think step by step and give your explanation to verify the answer." This prompts the LLM to generate a reasoning chain and the final answer.

Thank you once again for your valuable comments.

Best regards,
Authors of Submission 1061
Summary: The paper proposes a smarter way to select examples/demonstrations for In-Context Learning in LLMs. Picking the right example set for the task at hand is a serious challenge, and the authors propose an RL-based technique (with Q-learning), "RDES", to optimally pick examples from a golden set based on a similarity metric plus a diversity metric. They’ve attempted to establish that the model should converge and that higher diversity in the examples should lead to higher accuracy via statistical theorems. To demonstrate the effectiveness of the method in classification tasks, they test it with 4 different LLMs on 4 different datasets. They’ve shown superior performance against a large number of different selection & prompt engineering methods - with & without COT. As for the algorithm, they start with similar examples from TF-IDF representations & then augment the set with more diverse data points.

## update after rebuttal
1. The authors did address this to some degree by sharing metrics for more tasks in their rebuttal. Inclusion of the SST task from GLUE helps make their case.
2. The authors did address this to some degree by sharing metrics for more tasks in their rebuttal. Inclusion of the SST task from GLUE helps make their case.

Claims And Evidence: The paper claims that for classification tasks using LLMs, their technique (adding a diversity metric to an RL selection process) yields better results than a lot of existing methods. They have backed their claims with trials on a large number of techniques in ICL & prompt-tuning on a variety of LLMs. I believe that the claims are well supported by their evidence in numbers.

Methods And Evaluation Criteria: Yes - their evaluation strategy for text classification tasks looks good.
They could have, however, also shown some results from GLUE-based tasks like SST2 for easier comparison to models like BERT, etc.
EDIT (post rebuttal): The authors did, however, address this to some degree by sharing metrics for more tasks in their rebuttal. Inclusion of the SST task from GLUE helps make their case.

Theoretical Claims: Their theoretical proofs around (1) convergence and (2) usefulness of diversity in accuracy look ok. However, I did not do a very rigorous analysis.

Experimental Designs Or Analyses: Checked the experimental setup for method comparison to baseline. Looks sound. Should have also run tests on more widely used benchmarks like SST, etc.
EDIT (post rebuttal): The authors did, however, address this to some degree by sharing metrics for more tasks in their rebuttal. Inclusion of the SST task from GLUE helps make their case.

Supplementary Material: Read the Appendix. Did not get any supplementary material.

Relation To Broader Scientific Literature:
Other very similar papers:
- RetICL: https://arxiv.org/pdf/2305.14502
- Active Example Selection for In-Context Learning (Zhang et al): https://arxiv.org/pdf/2211.04486

Other unacknowledged, but very similar publications:
1. ICL-D3IE (https://arxiv.org/pdf/2303.05063)
2. Diverse Demonstrations Improve In-context Compositional Generalization (https://aclanthology.org/2023.acl-long.78/)

The work is a small iteration on top of some existing works, e.g., “Active Example Selection for In-Context Learning” (Zhang et al.) and “RetICL” (Scarlatos et al.). The idea of using diverse examples for ICL has been explored by a lot of recent works. Similarly, using RL techniques to perfect ICL has also been well-researched and published. I’m however not aware of any work on using Q-learning along with a diversity metric in example selection for ICL.

Essential References Not Discussed:
1. ICL-D3IE (https://arxiv.org/pdf/2303.05063)
2. Diverse Demonstrations Improve In-context Compositional Generalization (https://aclanthology.org/2023.acl-long.78/)

Other Strengths And Weaknesses:
Strengths: The paper has clear presentation and context on the problem being solved, along with good details of existing work on the topic. Their idea of adding a diversity metric to the prompt-generation process is novel. Their proposal is also simple to add on top of existing infrastructure for RAG since it only changes the demonstration selection process in the prompt generation pipeline. They have made good effort to exhaustively test the methodology on a wide array of tasks and with a lot of different LLMs. I believe the paper should be accepted as it is an iterative upgrade to existing RL-based techniques to improve context creation for ICL.

Weaknesses: The work is a small iteration on top of some existing works, e.g., “Active Example Selection for In-Context Learning” (Zhang et al.) and “RetICL” (Scarlatos et al.). The idea of using diverse examples for ICL has been explored by a lot of recent works. Similarly, using RL techniques to perfect ICL has also been well-researched and published. I’m however not aware of any work on using Q-learning along with a diversity metric in example selection for ICL. The overall study will also benefit a lot from more research on generation tasks, instead of just classification. Most other recent works do try to show impact on generation tasks. On that note, it would be useful to see how much encoder-type models would benefit from the added context (since the authors are sticking to classification-only tasks; in the real world, decoder-only models are not always the most efficient way to do classification). The authors also seem to have missed some notable references in the paper (more details in questions to the author).

Other Comments Or Suggestions:
1. Figure 1 needs to be clearer in its intent. The "Demonstrations" boxes in the figure look like they're being LLM-generated.
IIUC, the intent is to show how different example types can be based on selection strategy. The "LLM" logo is confusing in this scenario.
2. Equation 1: needs an explanation of the "r_t" variable used in the expression.
3. (Nitpick) Equation 2 can be labelled as Bellman's equation in the preceding text.

Questions For Authors:
1. Can you please shed light on why you chose only classification tasks for the analysis? If generative datasets & baselines were studied, do you know how this compares to RetICL & other techniques on MWP & other problems?
EDIT (post rebuttal): The authors did, however, address this to some degree by sharing metrics for more tasks in their rebuttal. Inclusion of the SST task from GLUE helps make their case.
2. Points from "Other Comments".

Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Dear Reviewer NHQh,

Thank you for your recognition of our work and the valuable suggestions.

- **Work as an iteration of existing work**: We acknowledge that RDES builds upon existing work on RL for ICL and diverse demonstration selection. However, we emphasize that the novelty of RDES lies in **the combination of a reinforcement learning framework (both Q-learning and PPO) with an explicit label distribution-based diversity metric to achieve dynamic demonstration selection that balances relevance and diversity.** We believe this combination is significant for improving few-shot learning performance.
- **Suggestion to test on GLUE tasks**: Thank you for the suggestion. Our current work focuses on evaluating the performance of RDES on intent classification tasks, which are common benchmarks for assessing the effectiveness of demonstration selection strategies. We have included results on **SST5 (sentiment analysis, part of GLUE) in our additional results. RDES/PPO achieved competitive results on SST5 (e.g., for both Qwen25-72b and Deepseek-r1-32b, AES and RDES/PPO achieved an accuracy of 0.84).** As also noted by Reviewer 5sFY regarding the simplicity of evaluation tasks, we have now included results on more complex reasoning benchmarks like GSM-8K and BigBenchHard, further demonstrating the applicability of RDES beyond basic classification. Future work can further explore the applicability of RDES to a broader range of GLUE tasks.
Moreover, here are the results on SST5, BigBenchHard (boolean expressions, web of lie), and GSM-8K using Qwen25-72b and Deepseek-r1-32b models. The RDES series is our proposed method, where RDES/B and RDES/C are based on Q-learning, and RDES/PPO is based on the PPO method:

### **SST5**

| Methods | Qwen25-72b | Deepseek-r1-32b |
| ------ | ------ | ------ |
| FS | 0.56 | 0.70 |
| FSC | 0.54 | 0.66 |
| AES | 0.84 | 0.84 |
| RDS | 0.76 | 0.84 |
| ADA | 0.90 | 0.90 |
| RDES/B | 0.44 | 0.57 |
| RDES/C | 0.51 | 0.52 |
| RDES/PPO | 0.84 | 0.84 |

### **BigBenchHard - boolean expressions**

| Methods | Qwen25-72b | Deepseek-r1-32b |
| ------ | ------ | ------ |
| FS | 0.98 | 0.38 |
| FSC | 0.60 | 0.46 |
| AES | 0.53 | 0.60 |
| RDS | 0.53 | 0.60 |
| ADA | 0.53 | 0.60 |
| RDES/B | 0.76 | 1.00 |
| RDES/C | 0.90 | 0.99 |
| RDES/PPO | 1.00 | 1.00 |

### **BigBenchHard - web of lie**

| Methods | Qwen25-72b | Deepseek-r1-32b |
| ------ | ------ | ------ |
| FS | 0.58 | 0.98 |
| FSC | 1.00 | 1.00 |
| AES | 0.85 | 0.72 |
| RDS | 0.89 | 0.68 |
| ADA | 0.83 | 0.72 |
| RDES/B | 0.50 | 0.93 |
| RDES/C | 0.98 | 1.00 |
| RDES/PPO | 1.00 | 0.90 |

### **GSM-8K**

| Methods | Qwen25-72b | Deepseek-r1-32b |
| ------ | ------ | ------ |
| FS | 0.50 | 0.28 |
| FSC | 0.56 | 0.64 |
| AES | 0.92 | 0.08 |
| RDS | 0.90 | 0.48 |
| ADA | 0.98 | 0.36 |
| RDES/B | 0.87 | 0.37 |
| RDES/C | 0.92 | 0.73 |
| RDES/PPO | 0.94 | 0.48 |

Thank you again for your insightful comments.

Best regards,
Authors of Submission 1061
Summary: This paper claims that applying a Q-learning method to demonstration selection can be beneficial for classification tasks in the context of in-context learning. The authors attempt to frame demonstration selection as a Markov decision process, though the details of this formulation remain unclear in the draft. Algorithm 1 in the paper presents the final strategy: first, the top-k samples are selected based on the cosine similarity of candidate demonstrations. Then, the remaining samples are chosen to enhance the diversity of the selected demonstration set until the diversity of the final set exceeds a predefined threshold. While the paper asserts that demonstration selection is important for reasoning tasks, the evaluation is conducted solely on simple classification tasks, where the proposed method is claimed to outperform other baselines. Claims And Evidence: - Claim: Reinforcement learning, especially Q-learning, is useful for demonstration selection. - Evidence: To the best of my knowledge, I cannot find evidence in the draft on why RL is helpful for demonstration selection. Sections 3.1 and 3.2 state very general notions of Q-learning and related theorems and readers can hardly obtain enough information on why RL is specifically helpful for demonstration selection, compared to previous existing methods. Methods And Evaluation Criteria: - Algorithm 1 is very simple and easy to understand; however, it is a well-known technique in the literature. More importantly, I am unsure of the relationship between Algorithm 1 and Q-learning. The algorithm merely involves computing cosine similarities using TF-IDF vectors, which is a well-established and basic approach. To the best of my knowledge, I cannot confirm that Algorithm 1 is directly related to Q-learning or any RL-based methods, especially based on the explanations provided in the paper. 
- While the authors claim to evaluate the reasoning performance of LLMs, the actual tasks used for evaluation are merely simple classification tasks rather than reasoning benchmarks such as GSM8K. Therefore, I am uncertain whether the evaluation results presented in the paper adequately support the authors’ claims. Theoretical Claims: The draft includes several theorems, but it is unclear how they relate to the effectiveness of the proposed method. There is not enough clear explanation of why the authors introduced these theorems. Experimental Designs Or Analyses: - While the authors claim to evaluate the reasoning performance of LLMs, the actual tasks used for evaluation are merely simple classification tasks rather than reasoning benchmarks such as GSM8K. Therefore, I am uncertain whether the evaluation results presented in the paper adequately support the authors’ claims. - More analysis or ablation studies are encouraged to probe what is the strengths and weaknesses of the proposed method. Supplementary Material: I skimmed through the entire supplementary material. As far as I can tell, there are no direct connections or references between the main paper and the supplementary material, making it difficult for readers to grasp its significance. Relation To Broader Scientific Literature: This paper explores a new approach to demonstration selection for in-context learning, a topic that has been widely studied in recent years. If the proposed method indeed offers a meaningful improvement in selecting better demonstrations, it could have a significant impact. However, I am uncertain whether the method is correctly specified and whether it introduces genuine novelty. Essential References Not Discussed: Not mandatory but related: Diverse Demonstrations Improve In-context Compositional Generalization (ACL 2023) Other Strengths And Weaknesses: Weaknesses - Basically, I'm uncertain about the current writing status of the paper. 
I'm sorry to say this, but personally, I feel that this draft is not yet at the stage of publication or review. - Each component and section in the paper presents independent concepts that are not well aligned. For instance, what is the exact relationship between Sections 3.2 and 3.3? Other Comments Or Suggestions: Before using acronyms, please provide their full names first. For instance, before introducing RDES in the Introduction, define it explicitly (although it is mentioned in the Abstract, that alone is not sufficient). Questions For Authors: Why must we particularly rely on Q-learning-like methods instead of other reinforcement learning approaches, such as PPO and others? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer 5sFY, Thank you for your thorough review and valuable feedback. We appreciate your concerns and have provided detailed responses below, referencing other reviewers' comments to support our points. ### Claims and Evidence You noted the lack of evidence for the usefulness of reinforcement learning in demonstration selection. We highlight the following points, supported by other reviewers: - **Adaptive Selection Strategy**: Unlike traditional methods, RDES uses reinforcement learning to dynamically adjust selection policies based on current inputs and previously selected demonstrations. Reviewer ZBB1 noted that RDES "facilitates adaptive demonstration selection." - **Balancing Relevance and Diversity**: RDES employs Q-learning to balance the selection of relevant and diverse demonstrations. The reward function considers both classification accuracy and diversity. Reviewer 3ANe mentioned that our approach "balances relevance and diversity." - **Sequential Decision Process**: We model demonstration selection as a Markov Decision Process (MDP), where each selection is an action. Q-learning helps maximize future rewards. Reviewer NHQh stated that our method optimally picks examples based on similarity and diversity metrics. - **Improved Experimental Results**: RDES significantly outperforms traditional methods across multiple datasets, demonstrating the effectiveness of our approach. Reviewer ZBB1 concluded that "RDES significantly boosts classification accuracy." ### Methods and Evaluation Criteria You expressed uncertainty about Algorithm 1's relationship with Q-learning. We clarify: - **Algorithm 1** is used during inference to quickly select demonstration examples based on TF-IDF similarity and a diversity threshold. - **Q-learning** is applied during training, where the agent interacts with the environment to receive rewards based on accuracy and diversity changes. Reviewer ZBB1 described the RL agent's iterative selection process. 
- The diversity threshold can be a fixed hyperparameter or learned from the Q-learning model. Algorithm 1 improves efficiency while ensuring diversity. Regarding the simplicity of evaluation tasks: - Our initial focus was on demonstrating RDES's effectiveness in text classification. We have now supplemented our results with reasoning-oriented benchmarks like BigBenchHard and GSM-8K. **Due to character limitations, please refer to the response to Reviewer 3ANe for the corresponding experimental results.** ### Theoretical Claims We introduced theorems to support RDES's effectiveness: - **Lemma 3.1** establishes bounds for our diversity metric. Reviewer ZBB1 noted its importance. - **Theorem 3.2** shows Q-learning's convergence under certain conditions, ensuring policy reliability. Reviewer ZBB1 confirmed this. - **Theorem 3.3** explains that increased diversity can enhance classification accuracy, justifying our emphasis on diversity. ### Experimental Designs or Analyses We appreciate your suggestion for more analysis: - An **ablation study** in Appendix A.6 analyzes the impact of different diversity mechanisms, confirming that diversity improves performance. Reviewer ZBB1 highlighted this. - We have added results on reasoning benchmarks and will further analyze demonstration characteristics and compare RL algorithms in future work. ### Supplementary Material The supplementary materials are closely related to the main paper: - **Appendix A.1** includes theoretical proofs, supporting key claims. Reviewer ZBB1 stated this inclusion. - **Appendix A.3** details baseline methods, crucial for understanding our distinctions. Reviewer ZBB1 noted this clarity. - **Appendix A.4** lists LLMs used, ensuring transparency. Reviewer ZBB1 acknowledged this. - **Appendix A.5** provides implementation details for reproducibility. Reviewer ZBB1 confirmed this. - **Appendix A.6** contains the ablation study, verifying diversity effectiveness. 
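To make the division of labour concrete (Q-learning learns the selection policy during training, while Algorithm 1 performs fast similarity-plus-diversity selection at inference), here is a minimal, self-contained sketch of the inference-time step. All names are illustrative; for brevity it uses raw bag-of-words counts in place of TF-IDF and the distinct-label ratio as a stand-in diversity score, so it is not the paper's implementation.

```python
from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two bag-of-words count vectors.
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def diversity(demos):
    # Stand-in diversity score: fraction of distinct labels in the set.
    labels = [d["label"] for d in demos]
    return len(set(labels)) / len(labels)

def select_demos(query, pool, k=2, threshold=0.6):
    # Step 1 (relevance): take the top-k demos most similar to the query.
    q = Counter(query.split())
    ranked = sorted(pool, key=lambda d: cosine(q, Counter(d["text"].split())),
                    reverse=True)
    selected, rest = ranked[:k], ranked[k:]
    # Step 2 (diversity): keep adding the demo that most increases diversity
    # until the set's diversity exceeds the threshold.
    while rest and diversity(selected) < threshold:
        best = max(rest, key=lambda d: diversity(selected + [d]))
        rest.remove(best)
        selected.append(best)
    return selected

pool = [
    {"text": "great movie", "label": "pos"},
    {"text": "great plot twist", "label": "pos"},
    {"text": "boring movie", "label": "neg"},
    {"text": "awful film", "label": "neg"},
]
demos = select_demos("great movie plot", pool)
print([d["label"] for d in demos])  # ['pos', 'pos', 'neg']
```

In the full method, the threshold and the trade-off between the two steps are what the Q-learning (or PPO) agent tunes via the composite reward.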
### Essential References Not Discussed We will cite and discuss the work "Diverse Demonstrations Improve In-context Compositional Generalization (ACL 2023)" in future revisions, as it aligns with our core idea of using diversity to enhance generalization. ### Other Strengths and Weaknesses Sections 3.2 and 3.3 serve different purposes: 3.2 introduces theoretical foundations, while 3.3 describes the implementation process. The former supports the latter. ### Other Comments or Suggestions We will ensure acronyms are defined before use in future revisions. ### Questions for Authors We chose Q-learning for its maturity, suitability for discrete action spaces, and clear convergence theory. We have also experimented with combining RDES with PPO, achieving competitive results. We hope these explanations address your concerns. We will consider your suggestions for improvements in future revisions. Thank you again for your feedback. Best regards, Authors of Submission 1061
Summary: The paper presents RDES, a reinforcement learning framework for selecting diverse and relevant demonstrations to support in-context learning in text classification tasks. It formulates demonstration selection as a Markov Decision Process and employs Q-learning with a composite reward function to balance relevance and diversity. The authors report experimental evaluations on four benchmark datasets, noting that a variant of RDES incorporating Chain-of-Thought reasoning is associated with improved classification performance relative to ten baseline methods. Claims And Evidence: The claims made in the submission are supported by extensive comparison to other prompt engineering and demonstration selection methods, across a variety of language models and datasets. Methods And Evaluation Criteria: The proposed methods and evaluation criteria, including the reinforcement learning framework and diverse benchmark datasets, are appropriate for addressing the challenges in in-context learning for text classification. These choices provide a relevant and practical framework for assessing model performance and robustness. Theoretical Claims: Did not check correctness. Experimental Designs Or Analyses: Experimental design is sound and straightforward. The authors compared against a wide variety of methods and datasets. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper extends prior work on in-context learning and demonstration selection by using reinforcement learning to balance relevance and diversity, building on ideas from chain-of-thought prompting, clustering, and DPP-based methods. Its contributions complement recent efforts like RetICL and adaptive demonstration selection with theoretical guarantees and extensive empirical evaluation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - Extensive comparison to other methods across a variety of datasets. - Clear explanation of method. 
Weaknesses: - Lack of units in tables. - No user studies, which often serve as better metrics of LLM quality. Other Comments Or Suggestions: N/A Questions For Authors: None for now Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 3ANe, Thank you for your positive feedback on our work. Regarding your suggestions:

- **Lack of units in tables**: We appreciate your suggestion and will explicitly label the units (e.g., accuracy will be presented as a percentage) in all tables in the revised manuscript.
- **No user studies**: We understand the importance of user studies for evaluating LLM quality. However, the focus of this research is to introduce a novel reinforcement learning-based demonstration selection method and evaluate its effectiveness through quantitative metrics (classification accuracy). Future work can consider conducting user studies to more comprehensively assess the performance of RDES in real-world applications. Meanwhile, we have launched a RAG system on our school's platform, and we will actively integrate our methods into this system.

To further support our claims, we have included additional experimental results on more challenging reasoning tasks. Here are the results on SST5, BigBenchHard (boolean expressions, web of lie), and GSM-8K using Qwen25-72b and Deepseek-r1-32b models. The RDES series is our proposed method, where RDES/B and RDES/C are based on Q-learning, and RDES/PPO is based on the PPO method:

### **SST5**

| Methods | Qwen25-72b | Deepseek-r1-32b |
| ------ | ------ | ------ |
| FS | 0.56 | 0.70 |
| FSC | 0.54 | 0.66 |
| AES | 0.84 | 0.84 |
| RDS | 0.76 | 0.84 |
| ADA | 0.90 | 0.90 |
| RDES/B | 0.44 | 0.57 |
| RDES/C | 0.51 | 0.52 |
| RDES/PPO | 0.84 | 0.84 |

### **BigBenchHard - boolean expressions**

| Methods | Qwen25-72b | Deepseek-r1-32b |
| ------ | ------ | ------ |
| FS | 0.98 | 0.38 |
| FSC | 0.60 | 0.46 |
| AES | 0.53 | 0.60 |
| RDS | 0.53 | 0.60 |
| ADA | 0.53 | 0.60 |
| RDES/B | 0.76 | 1.00 |
| RDES/C | 0.90 | 0.99 |
| RDES/PPO | 1.00 | 1.00 |

### **BigBenchHard - web of lie**

| Methods | Qwen25-72b | Deepseek-r1-32b |
| ------ | ------ | ------ |
| FS | 0.58 | 0.98 |
| FSC | 1.00 | 1.00 |
| AES | 0.85 | 0.72 |
| RDS | 0.89 | 0.68 |
| ADA | 0.83 | 0.72 |
| RDES/B | 0.50 | 0.93 |
| RDES/C | 0.98 | 1.00 |
| RDES/PPO | 1.00 | 0.90 |

### **GSM-8K**

| Methods | Qwen25-72b | Deepseek-r1-32b |
| ------ | ------ | ------ |
| FS | 0.50 | 0.28 |
| FSC | 0.56 | 0.64 |
| AES | 0.92 | 0.08 |
| RDS | 0.90 | 0.48 |
| ADA | 0.98 | 0.36 |
| RDES/B | 0.87 | 0.37 |
| RDES/C | 0.92 | 0.73 |
| RDES/PPO | 0.94 | 0.48 |

Thank you once again for your insightful comments.

Best regards,
Authors of Submission 1061
Solving Probabilistic Verification Problems of Neural Networks using Branch and Bound
Accept (poster)
Summary: This paper is concerned with the quantitative (probabilistic) verification of neural networks. It proposes PV, a branch-and-bound algorithm that is correct and (under reasonable assumptions) complete. Additionally, the paper proposes a fairness benchmark for the probabilistic verification of NNs that is more challenging than previous work. ## update after rebuttal The authors clarified some aspects and adequately addressed my concerns in the rebuttal; I raised my score to accept. Claims And Evidence: *"Each probability distribution Px(i) needs to allow for computing the probability of a hyperrectangle in closed form. This requirement is satisfied by a large class of probability distributions..."* The class seems pretty small to me*. *"...including discrete distributions with a closed-form probability mass function and univariate continuous distributions with a closed-form cumulative density function, as well as Mixture Models and probabilistic graphical models (Bishop, 2007), such as Bayesian Networks, of such distributions."* *=I think that I might be misinterpreting what "closed-form" means here, as I'm assuming it is a notion related to the complexity of the task. In "The computational complexity of probabilistic inference using Bayesian belief networks", Cooper (AIJ, 1990) shows that SAT can be reduced to (a decision version of) inference on a binary BN. To the best of my understanding, each conditional can be computed in closed form given the parents, but the overall task is clearly NP-hard. Limiting inference to hyper-rectangles doesn't really fix that, since the hyperrectangle could fully contain the support of the BN. Is there something I'm missing? I checked Section A.4 only to be even more confused by this paragraph on expressivity: *"We first show how we can apply PV to multivariate normal distributions even though they do not admit a closed-form solution for the probability of a hyperrectangle. 
Consider a multivariate normal distribution Pz with mean μ and covariance Σ = AAᵀ. Let Px be a standard multivariate normal distribution. Now, z = Ax + μ is distributed according to Pz. Therefore, by prepending the linear transformation Ax + μ to net, we can apply PV to multivariate normal distributions, since the probability of a hyperrectangle under a standard multivariate normal distribution has a closed-form solution."* Isn't the conclusion contradicting the premise? Methods And Evaluation Criteria: The overall idea behind PV makes sense to me, but I didn't check the details of the proofs. I am not very convinced by the empirical evaluation (see below). Theoretical Claims: I did not carefully check the proofs. Experimental Designs Or Analyses: Despite introducing a novel benchmark, the empirical evaluation is not extremely convincing for a paper concerned with scalability. My main criticism is that it considers low-dimensional settings only, with at most 8 input variables. In their work [1], Zhang et al. argue that input splitting only works for low-dimensional settings and propose ReLU splitting as a better alternative when the input dimensionality grows. Since PV is based on input splitting, one is left wondering what happens when the dimensionality is > 8. Albeit restricted to uniform distributions, the experimental evaluation in [1] is more thorough, involving the verification of NN-based controllers (low input dimensionality) as well as MNIST classification robustness to patch attacks (up to 7x7 input variables). Another problem is that, despite the focus on verification with meaningful input priors, how the approach scales to more complex priors is not analyzed. My guess is that this is somehow dictated by the approach being restricted to very simple distributions. While I appreciate the novel setting introduced by this paper (MiniACSIncome), it seems to be tailored to PV's working assumptions. 
I wish it provided a more "neutral" perspective on the challenges of probabilistic verification of NNs. To the best of my understanding, it cannot be used for generating instances with input dimensionality > 8 and it cannot be used to test how a probabilistic verification algorithm scales to more expressive/complex input distributions (the BN seems to be fixed). Supplementary Material: I checked Sections A.4 and F.5. Relation To Broader Scientific Literature: The related work section is well written and thorough, with one exception (see section below). Essential References Not Discussed: [1] "Provable preimage under-approximation for neural networks" by Zhang et al. (TACAS24) is concerned with quantitative verification of NNs. Their scope is less general, focusing on uniformly distributed inputs and ReLU nets. The proposed approach, however, has many similarities with PV: 1) They also consider input (as well as ReLU) splitting in their work and propagate linear relaxations of the volumes that need to be quantified. To the best of my understanding, they also leverage AutoLiRPA. 2) They incrementally compute tighter approximations of the verification problem. Other Strengths And Weaknesses: The paper tackles an important problem in ML verification and contributes a novel algorithm as well as new benchmarks. I appreciated the related work section, which pointed me to relevant work that I missed. I found the problem definition in Section 3.1 (Eq. 3) quite involved. What are these "satisfaction functions" useful for? How does the definition relate to the example in Eq. 1 or to other problem definitions in similar works? Better characterizing the family of input distributions that is supported would be beneficial; the scope is not very clear at the moment. The empirical evaluation is mostly concerned with showing scalability to larger NNs. 
I would assume that larger NNs are needed when the number of inputs grows and their distribution becomes very complex and multimodal. The paper doesn't make a case for adopting complex NNs in relatively simple / low-dimensional input spaces. Other Comments Or Suggestions: NA Questions For Authors: 1) The case for employing deep neural network in the settings considered in the paper is not clear. How crucial is scaling to networks with hundreds or thousands of parameters when limited to low-dimensional input spaces? Or, alternatively, how does your approach scale to higher dimensional input spaces? 2) How does PV compare with [1] in uniform settings? Or is there anything preventing a direct comparison? 3) Can you elaborate on the complexity of computing the probability of hyperrectangles in BNs? What do you mean by closed-form inference exactly? Can you provide some examples? 4) Can you elaborate on the expressiveness of the supported family of distributions? Is it able to encode complex, multimodal distributions? Can you show that empirically? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer SiXd for their review. > Q3. Closed-form inference. By "closed-form", we wanted to express that **a terminating algorithm exists** for computing a probability exactly, not necessarily that this algorithm has polynomial runtime. We will state this more clearly in our paper. For example, we use a Bayesian Network (BN) in our MiniACSIncome experiment, for which exact inference is NP-complete, as you also pointed out. For practical applicability, the assumption is that computing the probability of a hyperrectangle should be significantly faster than solving the overall verification problem. This is the case for our BN (inference is NP-complete) and the MiniACSIncome probabilistic verification problem (#P-hard). > Q4. Expressiveness of the supported family of distributions. **Yes**, we are able to encode complex, multimodal distributions! For example, consider Figures 7 and 8 in our Appendix, which visualise the BN of our MiniACSIncome benchmark. Figure 7 shows that this distribution is multimodal. For example, the marginal distributions of "SCHL" and "AGEP" have several peaks. Regarding complexity, these figures also show that our BN is a reasonably good fit to real-world US census data (ACSIncome). Therefore, we believe that our MiniACSIncome experiment demonstrates that our approach **is applicable to complex, multimodal distributions.** > Multivariate normal distributions paragraph. We will clarify this paragraph in the updated version of the paper. The key point here is that while *general* multivariate normal distributions do not admit a closed-form solution, *standard* multivariate normal distributions (or, more generally, multivariate normal distributions with diagonal covariance matrices) do admit a closed-form solution. Our idea is to emulate a general multivariate normal distribution through a standard multivariate normal distribution and an additional linear layer in the network we want to verify. 
By doing this, we can apply PV to a (general) multivariate normal distribution, although we cannot use this multivariate normal distribution as an input distribution for PV directly. > Q2. How does PV compare with [1]? We would like to point out that **[1] does not provide *sound* coverage ratios for MNIST**. The coverage ratios they report are sampling approximations. While [1] also provides sound results in the "Quantitative Verification" section, these results are for 2d and 3d input spaces. Due to the unsoundness of the MNIST results, a comparison with our approach on this case study would be unfair, but we performed a comparison with [1] on the VCAS quantitative verification case study. Our PV algorithm is faster than the algorithm from [1] by two orders of magnitude for this case study.

| Algorithm | Runtime |
|--------------------------|---------|
| PreimageApproxForNNs [1] | 16.42s |
| PV (Ours) | 0.13s |

Both results were obtained on the hardware also used in our paper (HW1). The code for running both tools is available at https://drive.google.com/drive/folders/1KBbBwzxLvCXeufo24A-cZhuvwhGTB2MD?usp=sharing. > ReLU Splitting The problem with ReLU splitting for probabilistic verification is that it creates branches that are non-convex in the input space. The approach of [1] to underapproximate these branches using linear relaxations is interesting. However, these underapproximations are still polytopes and computing the volume of a high-dimensional polytope is very costly. The approach in [1] to use Monte Carlo estimates of such volumes is enticing, but it entails unsoundness. > Main criticism: only low-dimensional settings. Earlier works that consider non-uniform probability distributions (FairSquare and SpaceScanner) only scale to 2d input spaces. Although uniform input distributions significantly simplify verification, [1] and Marzari et al. (ProVe_SLR) only demonstrate sound #DNN-verification for up to 3 and 5 input dimensions, respectively. 
In light of this, **our ability to handle 7 input variables** while not assuming uniform input distributions **is a major step forward**. We would also like to point out that the actual input dimension of MiniACSIncome-7 is 67 (Table 7 in our Appendix). > Q1. Large networks for low-dim settings. ACAS Xu (Julian et al., 2018) is a good example of a low-dimensional application requiring a medium-sized neural network (> 1000 parameters). Other examples are the neural network controllers in [1]. > Satisfaction functions (Eq. 3) We agree that Equation (3) is somewhat intricate. This is because of the generality of our algorithm. The satisfaction functions encapsulate, for example, the fraction in Equation (1). The full satisfaction function for Equation (1) is given in Example A.1. Our $f_{Sat}$ function roughly corresponds to the formulae $\varphi_{\mathrm{post}}$ of Albarghouthi et al. (2017) and $\Gamma$ of Morrettin et al. (2024). --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I think that [1] should be cited and discussed in the paper. *" In light of this, our ability to handle 7 input variables while not assuming uniform input distributions is a major step forward. "* It is, but that also depends on how structurally complex the input distribution is. Nonetheless, this is solid work and I am happy to raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our contribution. We will cite [1] in our paper and add the comparison we performed for the rebuttal. We agree on the point regarding the complexity of the input distribution and will extend our discussion to this end as outlined in the rebuttal. Thank you for your valuable feedback on this issue.
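The multivariate-normal reduction discussed in the rebuttal above (emulating a general Gaussian prior by prepending the linear layer z = Ax + μ to the network under verification) can be checked numerically. A small sketch, assuming numpy is available; the matrix A and mean μ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.0], [1.0, 1.0]])
mu = np.array([1.0, -1.0])

# Push standard-normal samples through the prepended linear layer z = A x + mu.
# The result is distributed as N(mu, A A^T), so a general Gaussian prior can be
# handled while only ever integrating the *standard* normal over hyperrectangles.
x = rng.standard_normal((200_000, 2))
z = x @ A.T + mu

print("empirical mean:", z.mean(axis=0))  # close to mu
print("empirical cov:\n", np.cov(z.T))    # close to A @ A.T = [[4, 2], [2, 2]]
```

This is only a Monte Carlo sanity check of the distributional identity, not part of the (sound) verification procedure itself.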
Summary: This paper proposes a branch-and-bound and interval-propagation-based probabilistic verification method for neural networks. The paper provides a sound verification methodology and a speedup compared to previous neural network verification techniques on mainstream neural network verification benchmarks. Claims And Evidence: The main claim of this paper is the soundness of its verification technique. As the branch-and-bound technique and interval propagation are widely used in neural network verification, the soundness proof looks convincing to me. Methods And Evaluation Criteria: The benchmark datasets are widely used in neural network verification tasks, and shared by a body of work in this area. The benchmark is meaningful in this area. The proposed method follows the established verification workflow, which fits current verification applications. Theoretical Claims: The following parts are checked and I think they are ok: - Soundness of verification. - Completeness proof under certain conditions from Section 5.3. Experimental Designs Or Analyses: Yes, I have checked the experiment setting. 1. The benchmark selections align with previous works, and are well-known in this area. 2. The comparison between the baselines and the new method is confusing: I do not understand the data listed in the table, especially these percentages. This part needs to be refined and elaborated further. Supplementary Material: The supplementary material includes more detailed running results and example source code. Relation To Broader Scientific Literature: This paper is related to a body of work on neural network verification based on abstract interpretation and abstract refinement. The interval refinement can be viewed as a kind of abstract refinement. 
Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: - This paper proposes an efficient probabilistic verification technique for neural networks, which can exploit the parallelism of GPUs. - This method can be proved to be sound and complete (under some reasonable assumptions). Weaknesses: - The proposed idea is quite straightforward, based on interval propagation and branch-and-bound. - Lack of comparisons with non-probabilistic verification tools. Other Comments Or Suggestions: N/A Questions For Authors: 1. The updating strategy for upper and lower bounds based on probabilities needs more clarification. Why can we directly use the probability to refine the interval? 2. The branch-and-bound and interval propagation methods are widely used in neural network verification tools, and their properties have been explored a lot. This paper seems to merely merge them together, lacking new insights. 3. As previously noted, the experiments need more elaboration. I do not understand the experiment setting of Table 2. What is the meaning of the percentages in neural network verifications? Why not use the same setting for the baselines? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank reviewer 8dyc for their review. We address the questions posed in the review below. > 1. The updating strategy for upper and lower bounds based on probabilities needs more clarification. Why can we directly use the probability to refine the interval? We assume that this question refers to Lines 7 and 8 of Algorithm 2. Please clarify in case the question is referring to a different part of our paper. We can add the probability of $\mathcal{X}\_{\mathrm{Sat}}^{(t)}$ to the lower bound, since we know that $g\_{\mathrm{Sat}}(\mathbf{x}, \mathrm{net}(\mathbf{x})) \geq 0$ for $\mathbf{x} \in \mathcal{X}\_{\mathrm{Sat}}^{(t)}$. This is, in turn, due to the soundness of ComputeBounds (concretely, CROWN or interval arithmetic in our paper). We can compute $\mathbb{P}[\mathcal{X}\_{\mathrm{Sat}}^{(t)}]$ directly, since $\mathcal{X}\_{\mathrm{Sat}}^{(t)}$ is a union of disjoint hyperrectangles. Because of this, we can leverage the third axiom of probabilities (i.e. the probability of a countable union of disjoint sets is the sum of the probabilities of these sets). The probability of each hyperrectangle can be computed exactly due to our assumptions on the probability distributions that are described in Section 4.2. > 2. The branch-and-bound and interval propagation methods are widely used in neural network verification tools, and their properties have been explored a lot. This paper seems to merely merge them together, lacking new insights. Branch and bound and bound propagation are indeed widely used for deterministic neural network verification. However, this is not yet the case for verifying *probabilistic* specifications for neural networks. **We use these tools and apply them to a new problem** (solving *probabilistic* neural network verification problems). In this context, our paper provides the following new insights: 1. 
We apply branch and bound (BaB) and bound propagation (BP) for solving probabilistic neural network verification problems, a class of problems to which these techniques have not yet been applied. 2. We conduct a thorough theoretical analysis of our BaB and BP-based approach. To the best of our knowledge, there are no previous results on the theoretical properties of BaB and BP for *probabilistic* verification problems. 3. Our experiments demonstrate that our proposed approach (based on BaB and BP) provides a substantial runtime advantage for solving *probabilistic* verification problems. The magnitude of this runtime advantage is a surprising novel insight. > 3. As previously noted, the experiments need more elaboration. I do not understand the experiment setting of Table 2. What is the meaning of the percentages in neural network verifications? Why not use the same setting for the baselines? The percentages in Table 2 are probabilities given as percentages. We will clarify this in the table caption. The "VR" percentages in this table are the exact probabilities, that is, the verification result of quantitative verification. The probabilities for ProbabilityBounds are pairs of lower and upper bounds on these values. The percentages for $\varepsilon$-ProVe are 99.9% confidence upper bounds on these values. **Overall, these percentages are *results*, not *settings* of the verifiers.** If the remark regarding using the same settings for the baselines refers to the timeout for ProbabilityBounds (10s, 1m, 1h): these settings are not applicable for ProVe\_SLR and $\varepsilon$-ProVe, since these algorithms do not provide intermediate sound results. This is one of the advantages of ProbabilityBounds that we seek to demonstrate in Table 2. > [Weakness:] Lack of comparisons with non-probabilistic verification tools. 
Since **non-probabilistic verifiers can not be applied to probabilistic verification problems**, it is not possible to compare our approach to them experimentally. For example, it is impossible to apply $\alpha$-$\beta$-CROWN to the FairSquare benchmark, since $\alpha$-$\beta$-CROWN can not handle probabilities. To position our work more clearly in the context of other verification approaches, we provide the following table:

| Method | Deterministic Statements | Quantitative Statements (#DNN-Verification) | Group Fairness | General Probabilistic Statements |
|---|---|---|---|---|
| $\alpha,\beta$-CROWN, NNV | yes | no | no | no |
| ProVe_SLR (Marzari et al., 2023b) | no | yes | no | no |
| SpaceScanner (Converse et al., 2020) | no | yes | yes | yes |
| FairSquare (Albarghouthi et al., 2017) | no | no | yes | no |
| PV (Ours) | no | yes | yes | yes |

The rebuttal for reviewer BFqm includes an extended version of this table that we will add to our related work section. We hope that this clarifies the relation between our approach and non-probabilistic verifiers. If there remain further questions, we would be happy to address them in the second round of the author-reviewer discussion.
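To complement the lower-bound update described in the answer to Question 1 above (summing exact probabilities of disjoint hyperrectangles via the third axiom of probabilities), here is a minimal sketch; the uniform marginals and all function names are illustrative assumptions, not the paper's implementation:

```python
def box_probability(box, lows, highs):
    # Exact probability of an axis-aligned box under independent uniform
    # marginals on [lows[i], highs[i]] (an illustrative choice; the paper
    # supports more general distributions, see its Section 4.2).
    p = 1.0
    for (lo, hi), a, b in zip(box, lows, highs):
        p *= max(0.0, (min(hi, b) - max(lo, a)) / (b - a))
    return p

def union_probability(disjoint_boxes, lows, highs):
    # Third axiom of probabilities: for *disjoint* boxes, the probability
    # of the union is the sum of the individual probabilities.
    return sum(box_probability(b, lows, highs) for b in disjoint_boxes)

# Two disjoint boxes inside the unit square, each of probability 0.25,
# yielding a lower bound of 0.5 on the probability of satisfaction.
sat_regions = [
    [(0.0, 0.5), (0.0, 0.5)],  # lower-left quadrant
    [(0.5, 1.0), (0.5, 1.0)],  # upper-right quadrant
]
lower_bound = union_probability(sat_regions, [0.0, 0.0], [1.0, 1.0])
```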
Summary: The paper proposes a generic approach for the probabilistic verification of neural networks which can leverage pre-existing qualitative verification tools. To this end, the paper proposes to combine input space splitting with bound computation to compute bounds on (un)safe behavior. Purely safe/unsafe regions can then be probabilistically quantified. The paper also puts forward an argument for the completeness of the approach.

## update after rebuttal

The authors managed to lift my concern about the theoretical assumptions and promised to clarify this misunderstanding in the paper's final draft. I trust this will happen and am thus in favour of accepting the paper. Concerning the comments from other reviewers, I believe it's important to emphasize that the verification guarantees derived in the paper are meant to hold *globally*, i.e. on large parts of the input space. It is common knowledge that NN verification for large input spaces can (so far) only be scaled to small input dimensions and/or small NNs. Consequently, I am happy with the presented experiments. This can also be seen in other work that aims to derive global guarantees (e.g. [CAV24]).

[CAV24] https://link.springer.com/chapter/10.1007/978-3-031-65630-9_17

Claims And Evidence: The proposed approach comes with correctness and completeness claims (see Theoretical Claims) and is evaluated on multiple benchmark sets that compare the approach to competing techniques (see Methods and Evaluation Criteria). I find the empirical evaluation convincing; my concerns about the theoretical claims are discussed below.

Methods And Evaluation Criteria: The benchmarks seem well chosen to me. The work does not reproduce all results from competing papers on the same machine, but instead reuses the numbers of other authors while using a weaker machine for its own experiments.
While this is not ideal, it seems to me the evaluation is nonetheless sound and clearly demonstrates the superiority of the approach at hand.

Theoretical Claims: The paper makes two theoretical claims: Soundness (Corollary 5.2) and Completeness (Theorem 5.5). While I agree with the former, I am worried about the assumption necessary for completeness (Assumption 5.3). My hope is that we can resolve this issue through the rebuttal in which case I'm happy to raise my score: The verification problem as specified in (3) requires that $f(p_1,\dots,p_v)$ must be non-negative where $p_i$ is the probability that some $g^{(i)}(\dots)$ is non-negative. Assumption 5.3 now requires that $f$ is never zero and the probability of $g$ being zero is zero. It is then noted that this assumption is only "mildly restrictive" as we can strengthen (3) to be a constraint of the form $\geq \varepsilon$. It seems to me that this is beside the point: Of course, I can add $\varepsilon$ to $f$, but in most cases (especially the ones discussed in the paper) $f$ and $g$ are continuous and then there will be other inputs for which $f$ becomes zero (or for which $g$ becomes zero and which do not have zero probability). More intuitively: It seems to me that it is quite a natural phenomenon that the boundary between the safe and unsafe regions may not be axis-aligned. In this case, axis-aligned splitting as done in this paper *cannot* derive exact probability bounds but only approximate them. It seems to me a more natural assumption for Theorem 5.5 might be simply that it is complete for open properties (i.e. $>$ in both equations in (3))? Alternatively, it seems to me at the very least this restriction warrants further discussion.

Experimental Designs Or Analyses: From reading of Chapter 6 the experimental design and the derived conclusions seem sound to me.

Supplementary Material: No.
Relation To Broader Scientific Literature: There is relatively little literature on probabilistic verification of neural networks which, in my view, makes this contribution all the more welcome. The few papers that do exist are cited and are used for comparison in Section 6. The presented approach clearly outperforms alternative techniques.

Essential References Not Discussed: While it should probably be deemed concurrent literature, a recent ICSE paper by Kim et al. [1] proposes the same approach of bounding probabilities. In particular, just like PV (Algorithm 1), the paper also bounds probability by computing hyperrectangles which are fully safe/unsafe (see Algorithm 1 and Section IV in [1]). However, this paper is specific to fairness and hence less broad in its scope. Moreover, the paper was first published on arXiv on 5th September 2024...

[1] https://arxiv.org/abs/2409.03220

Other Strengths And Weaknesses: A major strength of the paper is its generality: The approach supports both discrete and continuous input spaces, a broad range of probability distributions and builds upon off-the-shelf tools for qualitative verification. In particular, this allows the approach to benefit from future improvements in verification technology.

Other Comments Or Suggestions: In Example A.1 you reformulated demographic parity. It seems to me that the problem would turn out to be computationally easier if the division was resolved by multiplying the divisor on both sides of the inequality (admissible due to non-negativity) and then subtracting $\gamma$*divisor. Is there any particular reason you did not do this?

Questions For Authors: **(Q1)** Am I missing something about Assumption 5.3, or is it indeed more restrictive than discussed in the paper? If so, might a focus on completeness for open properties be a more natural formulation? **(Q2)** You mention support for unbounded hyperrectangles.
I am under the impression that this is not something typically supported by the qualitative verification tools used by the approach. How do you handle this in practice? Did you have any benchmarks for this setting? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer GWj1 for their review, in particular their careful evaluation of our theoretical assumptions. > Theoretical Claims / Assumption 5.3 If the safe/unsafe region is not axis-aligned, indeed, Algorithm 2 can not compute the *exact* probability of safety. However, Algorithm 2 still computes a converging sequence of bounds on the unknown exact probability $p_i$. The question that motivates Assumption 5.3 is when converging bounds are sufficient for solving a probabilistic verification problem. For illustration, assume we want to show $y \geq 0$ and have converging sequences of bounds ${(\ell\_t)}\_{t \in \mathbb{N}}$ and ${(u\_t)}\_{t \in \mathbb{N}}$ with $\ell_t \leq y \leq u_t$ for each $t \in \mathbb{N}$ and $\lim_{t \to \infty} \ell_t = \lim_{t \to \infty} u_t = y$. If the actual (exact) $y$ is zero, sequences of bounds that only converge in the limit do not suffice to prove $y \geq 0$, since there may be no $T \in \mathbb{N}$ with $\ell_T = 0$. However, if the actual value of $y$ is different from zero, knowing a finite number of iterates of ${(\ell_t)}\_{t}$ and ${(u_t)}\_{t}$ always suffices to prove or disprove $y \geq 0$. Concretely, there will be a $T \in \mathbb{N}$, such that either $\ell_T > 0$ or $u_T < 0$, that proves, respectively, disproves $y \geq 0$. Unfortunately, the situation does not change fundamentally if we study $y > 0$ instead of $y \geq 0$. Before, if $y = 0$, our specification actually held, but we were not able to ever prove this. Now, our specification $y > 0$ is actually violated if $y = 0$, but we will never be able to disprove it. Concretely, if ${(u_t)}_{t}$ only converges in the limit, there may be no $T \in \mathbb{N}$, such that $u_T = 0$. Our assumption that $f\_{Sat}(p_1, \ldots, p_v) \neq 0$ corresponds to assuming $y \neq 0$ in the discussion above. Note that $f\_{Sat}(p_1, \ldots, p_v)$ is a constant in Equation (3). Our comment on studying $f\_{Sat}(...) 
\geq \varepsilon$ instead of $f\_{Sat}(...) \geq 0$ means solving a different verification problem with $f'\_{Sat}(...) = f\_{Sat}(...) - \varepsilon$. Practically, if you only want to show Equation (1) and PV seems not to be terminating for $\gamma = 0.8$, you can run PV again with $\gamma' = 0.81$ and find that your neural network either violates or satisfies a slightly stronger fairness specification. The motivation for requiring $p = \mathbb{P}[g\_{Sat}(...) = 0] = 0$ is similar. However, the reason why it is not restrictive to assume $p = 0$ is different. Unlike $f\_{Sat}(...)$, $g\_{Sat}(...)$ is not a constant in Equation (3). Here, our argumentation why the assumption that $p = 0$ is not restrictive is concerned with the level sets $G\_r = \\{\mathbf{x} \in \mathbb{R}^n \mid g\_{Sat}(\mathbf{x}, \mathrm{net}(\mathbf{x})) = r\\}$ of $g$ that have positive probability ($\mathbb{P}[G\_r] > 0$). If $\mathbb{P}[G\_r] > 0$ for some $r$, $G\_r$ also needs to have positive volume in $\mathbb{R}^n$ (we assume a continuous probability distribution here; for discrete distributions we actually do not require the assumption p = 0 due to splitting differently). A set $G\_r$ having positive volume means that the plot of $g\_{Sat}(...)$ has a "flat" part where $g\_{Sat}(...) = r$. Unless $g\_{Sat}$ has a continuous range of "flat parts", we can find a small $\varepsilon > 0$, such that $\mathbb{P}[g\_{Sat}(...) = \varepsilon] = 0$. While there may be pathological functions that have continuous ranges of "flat parts", neural networks with finitely many neurons with standard activation functions can not have such continuous ranges. We hope this discussion clarifies our assumption and why we consider it only mildly restrictive. We will use your feedback and the above discussion to improve the description of Assumption 5.3 in our paper. > Unbounded Input Spaces (Q2) Indeed, qualitative verifiers rarely support unbounded input spaces. 
In practice, we use $\underline{\mathbf{y}} = -\infty$ and $\overline{\mathbf{y}} = \infty$ for unbounded branches in line 5 of Algorithm 2. With this, unbounded branches are never pruned, and LongestEdge always first selects dimensions that are unbounded in a branch. We modify BaBSB to also first select dimensions that are unbounded in a branch. Unfortunately, this was not documented in our submission, but we will add this to the last paragraph in Section 4.1. The FairSquare benchmark (Section 6.1) features an unbounded input space (discussed in Appendix F.2.1).

> Related Work: Individual Fairness (Kim et al.)

Besides what was already noted by the reviewer, Kim et al. study individual fairness, which is a non-probabilistic fairness specification. Therefore, the work of Kim et al. is more closely related to the papers of Marzari et al. (2023a, 2023b, 2024) than to our approach. Nonetheless, this paper is important related work and we will incorporate it into our discussion of #DNN-verification in Section 2.

--- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. I believe what confused me about the (non-)restrictiveness of Assumption 5.3 was that it sounded to me as though for any NN and specification we can rephrase the specification *equivalently* such that Assumption 5.3 is satisfied and the completeness guarantee holds. This is not the case and also not what was meant by the authors. Instead, the idea is that there is a "semantically close" specification which evades the completeness issue. This is a statement I can get behind. I encourage the authors to update the draft to make this clearer and will, as promised, raise my score.

--- Reply to Comment 1.1.1: Comment: We are happy that we could clarify our assumptions in the rebuttal and will use the reviewer's feedback to improve the discussion of our assumptions in the paper. We want to thank the reviewer for their helpful comments and questions.
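The converging lower and upper bounds discussed in this thread can be illustrated with a toy one-dimensional branch-and-bound loop in the spirit of Algorithm 2; the uniform distribution, the exact interval bounds, and all names below are assumptions made for illustration only:

```python
def probability_bounds(g_bounds, box=(0.0, 1.0), rounds=20):
    # Branch-and-bound sketch: accumulate the probability mass of boxes
    # that certainly satisfy (g >= 0 everywhere) or certainly violate
    # (g < 0 everywhere); bisect undecided boxes.  A uniform distribution
    # on `box` is assumed, so probability equals interval length.
    sat_mass, viol_mass = 0.0, 0.0
    undecided = [box]
    for _ in range(rounds):
        nxt = []
        for a, b in undecided:
            g_lo, g_hi = g_bounds(a, b)  # sound bounds on g over [a, b]
            if g_lo >= 0:
                sat_mass += b - a        # probability of a certainly-sat box
            elif g_hi < 0:
                viol_mass += b - a       # probability of a certainly-unsat box
            else:
                m = (a + b) / 2          # bisect (LongestEdge in one dimension)
                nxt += [(a, m), (m, b)]
        undecided = nxt
    return sat_mass, 1.0 - viol_mass     # lower/upper bound on P[g(x) >= 0]

# g(x) = x - 0.3 on [0, 1]: interval bounds are exact since g is monotone.
lb, ub = probability_bounds(lambda a, b: (a - 0.3, b - 0.3))
```

Here the gap between the bounds halves each round, so any threshold different from the true probability 0.7 is decided after finitely many rounds, mirroring the role the assumption $f_{Sat}(p_1, \ldots, p_v) \neq 0$ plays in the discussion above.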
Summary: This paper proposes a probabilistic neural network verification method through lower and upper bounds of probabilities of outputs. By branch and bound and bound propagation, it can run faster than other verifiers.

Claims And Evidence: The definitions of soundness and completeness in this paper seem to be quite different from the commonly used ones in the neural network verification community, where soundness usually means that once the verifier gives true, the output specification will truly hold. In this sense, probabilistic verification seems to be a false proposition. I suggest the authors should clarify it clearly at the very beginning of the paper to avoid potential confusion.

Methods And Evaluation Criteria:
- In Eq (3), I wonder whether each support of $P_{x^{(i)}}$ is assumed to be not overlapped. If so, the joint distribution is assumed to be invariant to the permutation of each distribution, which should be explicitly discussed.
- In Alg 1, it is not clear if $PB_i$ is also related to $t$ on Line 5. It should be stated clearly on Line 2 as well.
- In Alg 2, Line 7 and Line 8 seem to conflict with the statement of the right column on Line 220, where $l^t$ is the probability of a union of some sets. However, it should follow the union bound with inequality. At the same time, in Alg 2, the iteration of lower and upper bounds is assumed to be with the equality condition of the union bound, which may not hold and cause correctness issues.

Theoretical Claims: The proof of Theorem 5.1 seems to hold with the assumption of uniform distribution and disjoint support of each single-value distribution. It holds in such special cases; however, more formal and general propositions should be derived based on formal arguments, e.g. through conformal prediction or hypothesis testing.

Experimental Designs Or Analyses: As a neural network verification method in machine learning, experiments are expected to involve adversarial machine learning tasks, e.g.
MNIST with adversarial perturbation. More can be found in VNN-COMP benchmarks in recent years. Besides, some important baselines are missing, e.g. several best non-probabilistic verifiers $\alpha$-$\beta$-CROWN, NNV, etc.

Supplementary Material: Yes

Relation To Broader Scientific Literature: Related to neural network verification.

Essential References Not Discussed: A key branch of probabilistic certification/verification methods in the literature is missing, i.e. randomized smoothing based neural network verification methods [1,2,3].

[1] Cohen et al. Certified Adversarial Robustness via Randomized Smoothing, 2019
[2] Yang et al. Randomized Smoothing of All Shapes and Sizes, 2020
[3] Li et al. SoK: Certified Robustness for Deep Neural Networks, 2020

Other Strengths And Weaknesses: See above

Other Comments Or Suggestions: See above

Questions For Authors: See above

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank reviewer BFqm for their review. We believe there has been a misunderstanding of our definition of soundness that underlies most of the review.

> The definitions of soundness [...] in this paper seem to be quite different from the commonly used ones in the neural network verification community [...]. In this sense, probabilistic verification seems to be a false proposition.

**We actually use the standard definition of soundness**, as expressed by Definition 3.2 in our paper. The review appears to assume that we provide what we call a "probably sound" algorithm, for which a verifier output of true means that the output specification *likely* holds, typically with a certain confidence level (PAC-style guarantee). We instead provide a ***definite* guarantee for a *probabilistic* statement**. We agree that the term "probabilistic verification" can be misleading. Therefore, we propose to change the title of our paper to "Solving Probabilistic Verification Problems of Neural Networks using Branch and Bound" and similarly modify our abstract and introduction to avoid this source of confusion. If there are further suggestions where our presentation can be improved in this regard, we would be happy to incorporate them. To summarise, our paper provides a **sound algorithm** for solving **probabilistic verification problems** (PVPs), that is, proving statements involving probabilities over neural networks, such as fairness statements of the type of Equation (1). Other methods, like hypothesis testing, provide PAC-style guarantees for PVPs that only hold with a certain probability. **Unlike these methods, our guarantees hold with certainty.**

> In Eq(3), I wonder whether each support of $P_{x^{(i)}}$ is assumed to be not overlapped.

> The proof of Theorem 5.1 seems to hold with the assumption of uniform distribution and disjoint support of each single-value distribution.

We do not require these assumptions.
Our assumptions are summarized in Section 4.2. If you have concerns regarding particular steps of the proofs (Appendix C), we would be happy to discuss them in the second round of the author-reviewer discussions.

> In Alg 1, it is not clear if $PB_i$ is also related to $t$ on Line 5

In Algorithm 1, $PB_i$ is not dependent on $t$. However, $PB_i$ is an instantiation of Algorithm 2 rather than a fixed value. Note that Algorithm 2 also has a loop counter $t$. While some $PB_i$ may advance faster than others in our implementation, the counters $t$ in Algorithms 1 and 2 are synchronized in our paper to simplify the exposition.

> In Alg 2, Line 7 and Line 8 [...] conflicted with [...] Line 220.

Our approach is **not based on the union bound** but rather on the third axiom of probabilities (the probability of a countable union of disjoint sets is the sum of the probabilities of these sets). The third axiom of probabilities holds with equality.

> Adversarial machine learning.

**We consider adversarial machine learning tasks in our ACAS Xu robustness experiment.** Unlike probably sound verifiers, sound verifiers for PVPs (such as PV) do not currently scale to image datasets, but our work provides major advances in scaling to larger input spaces. We discuss this in more depth in the rebuttal to reviewer SiXd.

> [...] some important baselines are missing, e.g. [...] $\alpha$-$\beta$-CROWN, NNV, etc.

Since **non-probabilistic verifiers can not be applied to PVPs**, it is not possible to compare our approach to these verifiers experimentally. For example, it is impossible to apply $\alpha$-$\beta$-CROWN to MiniACSIncome, since $\alpha$-$\beta$-CROWN can not handle probabilities.
To position our work more clearly in the context of other verification approaches, we will add the following table to our related work section:

| Method | Deterministic Statements | Quantitative Statements (#DNN-Verification) | Group Fairness | General PVPs | Type of Verifier Guarantee |
|---|---|---|---|---|---|
| $\alpha,\beta$-CROWN, NNV | yes | no | no | no | sound |
| Randomized Smoothing | yes | no | no | no | probably sound (i.e., PAC-style) |
| Hypothesis Testing | yes | yes | yes | yes | probably sound (i.e., PAC-style) |
| ProVe_SLR (Marzari et al., 2023b) | no | yes | no | no | sound |
| SpaceScanner (Converse et al., 2020) | no | yes | yes | yes | sound (quantitative statements), unsound (otherwise) |
| FairSquare (Albarghouthi et al., 2017) | no | no | yes | no | sound |
| PV (Ours) | no | yes | yes | yes | sound |

We hope that this clarifies the relation between our method (PV) and other verifiers solving deterministic (non-probabilistic) verification tasks, as well as other methods mentioned in the review.

> key [...] literature is missing.

Thank you for mentioning this. We will add the mentioned references to our discussion of probably sound approaches in Section 2 (third paragraph).

--- Rebuttal Comment 1.1: Comment: Thanks for the clarification of sound probabilistic verification and I will raise my score to 2. It seems the definite guarantee for a probabilistic statement can only be adopted in non-deterministic cases. However, a CROWN-like non-probabilistic verifier can be applied to the special case of the probabilistic verification problem with the probability of 1, which is supposed to be compared as a baseline regarding tightness and scalability. A comparison with a CROWN-like verifier with branch-and-bound is still expected under some fair settings.

--- Reply to Comment 1.1.1: Comment: *We edited this comment. Edits are typeset in italics.* Thank you for considering our rebuttal and raising your score.
Following your suggestion to perform an experimental comparison with a CROWN-based verifier, we experimentally compared our approach to $\alpha,\beta$-CROWN on the ACAS Xu safety benchmark described in Section 6.2 of our paper. *Our experiment shows that the tools are complementary and perform well at their respective tasks.* For non-probabilistic verification on ACAS Xu, $\alpha,\beta$-CROWN is the current state-of-the-art tool, as determined in VNN-COMP 2024 (Brix et al., 2024). *For this experiment, non-probabilistic verification corresponds to proving $\forall x: \neg(net(x) \text{ is unsafe})$, which is almost equivalent to $\mathbb{P}[net(x) \text{ is unsafe}] = 0$, except for the handling of probability-zero events.* *When ignoring probability-zero events, it is possible to disregard the probability distribution entirely to solve this verification problem.* *This simplifies the verification problem: non-probabilistic verification is NP-complete, while general probabilistic verification is #P-hard (see Section 3.1 in our paper).* *Due to the issue of probability-zero events and the different complexity classes, **a comparison of our approach with $\alpha,\beta$-CROWN is not entirely faithful and not entirely fair**.* *We performed the comparison regardless to demonstrate the differences between non-probabilistic verification and probabilistic verification.* The table below contains a selection of results from our experiment. Due to the character limit, we can not present the full results here. The entire table is available at https://drive.google.com/file/d/1OO8k-S-HCOLsGGolW_dxAPUm1V9gkDDA/view?usp=sharing. The results for our approach are taken from Table 4 in the Appendix of our paper. 
| net | Non-Probabilistic Verification Result ($\alpha,\beta$-CROWN) | Quantitative Verification Result ($\alpha,\beta$-CROWN) | Runtime ($\alpha,\beta$-CROWN) | Non-Probabilistic Verification Result (ProbabilityBounds, Ours, 60s timeout) | Quantitative Verification Result (ProbabilityBounds, Ours, 60s timeout) |
|---|---|---|---|---|---|
| $N_{3,1}$ | $\color{red}\times$ | $0\\%, 100\\%$ | $<1s$ | $\color{red}\times$ | $0.88\\%, 2.74\\%$ |
| $N_{3,2}$ | $\color{red}\times$ | $0\\%, 100\\%$ | $<1s$ | $?$ | $0.00\\%, 1.15\\%$ |
| $N_{3,3}$ | $\color{green}\checkmark$ | $0\\%, 0\\%$ | $451s$ | $?$ | $0.00\\%, 0.84\\%$ |
| $N_{3,4}$ | $\color{red}\times$ | $0\\%, 100\\%$ | $<1s$ | $\color{red}\times$ | $0.13\\%, 1.53\\%$ |
| $N_{3,5}$ | $\color{red}\times$ | $0\\%, 100\\%$ | $<1s$ | $\color{red}\times$ | $0.63\\%, 2.01\\%$ |
| $N_{3,6}$ | $\color{red}\times$ | $0\\%, 100\\%$ | $<1s$ | $\color{red}\times$ | $0.02\\%, 5.18\\%$ |
| $N_{3,7}$ | $\color{red}\times$ | $0\\%, 100\\%$ | $<1s$ | $?$ | $0.00\\%, 3.32\\%$ |
| $N_{3,8}$ | $\color{red}\times$ | $0\\%, 100\\%$ | $<1s$ | $\color{red}\times$ | $0.32\\%, 3.46\\%$ |
| $N_{3,9}$ | $\color{red}\times$ | $0\\%, 100\\%$ | $<1s$ | $\color{red}\times$ | $1.39\\%, 3.72\\%$ |

In this table, the "Non-Probabilistic Verification Result" columns indicate whether the respective verifier determined that the network is safe ($\color{green}\checkmark$), unsafe ($\color{red}\times$), or whether it did not provide a conclusive result ($?$). The "Quantitative Verification Result" columns contain a lower and an upper bound on the probability of an unsafe network output under a uniform input distribution, as computed by the verifiers. **This experiment demonstrates that $\alpha,\beta$-CROWN and our approach both shine in their own domains**.
While $\alpha,\beta$-CROWN is fast at solving the non-probabilistic verification problem, its results only entail trivial bounds on the probability of unsafety, since $\alpha,\beta$-CROWN is not conceived for performing quantitative verification. On the other hand, our approach is less effective at performing non-probabilistic verification (for which it was not conceived) but can provide insightful bounds on the probability of unsafety. **As such, the two approaches complement each other.** This is typically the idea behind quantitative verification: finding out the probability of unsafety, *after* a non-probabilistic verifier proved that there is unsafety. To summarize our results:

- For tightness, our approach outperforms $\alpha,\beta$-CROWN, since $\alpha,\beta$-CROWN can only provide trivial bounds on probabilities.
- For scalability, $\alpha,\beta$-CROWN outperforms our approach on non-probabilistic verification problems, since it is specialized to these problems.

We hope this comparison demonstrates the usefulness and strengths of our approach compared to CROWN-based verifiers. We kindly ask you to reevaluate our paper in light of these qualities.
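To make the semantics of the comparison table explicit: ignoring probability-zero events (the caveat noted earlier in this thread), a qualitative verdict can be read off from quantitative bounds on the probability of unsafety; the function name below is hypothetical:

```python
def verdict_from_bounds(lower, upper):
    # Map bounds on P[unsafe] to a qualitative verdict, ignoring
    # probability-zero events: safe iff unsafety provably has probability 0.
    if upper == 0.0:
        return "safe"     # the checkmark entries
    if lower > 0.0:
        return "unsafe"   # the cross entries
    return "unknown"      # the '?' entries

# N_{3,1} under ProbabilityBounds: bounds (0.88%, 2.74%) certify unsafety,
# while N_{3,3}'s bounds (0.00%, 0.84%) leave the qualitative verdict open.
```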
Dynamic Similarity Graph Construction with Kernel Density Estimation
Accept (poster)
Summary: In this paper, the authors deal with the kernel density estimation (KDE) problem. They propose a query hash data structure for dynamic graphs (it maintains the estimates for a set of query points as data points are added). They present experiments with different types and dimensionality of data, as well as some reference methods.

## update after rebuttal

I would like to thank the authors for providing answers to my concerns and questions, which reinforce my confidence in the positive rating I gave initially. I hope the authors will make the promised changes to the new version of their manuscript.

Claims And Evidence: The focus of the authors is amongst others on efficiency. They experimentally show that in the majority of cases their method obtains better times than the methods used in the comparisons. One thing is that their method does not usually yield the best result in terms of accuracy, so the authors should comment on it.

Methods And Evaluation Criteria: Rather, yes, the authors use quite standard metrics for evaluation, while one of the methods with which they compare their solution is the one that they base their work on, so it is good.

Theoretical Claims: The majority of theoretical analyses are taken/slightly modified from the works of Charikar et al. (2020) or Macgregor & Sun (2023) - the authors cite these papers in almost all lemmas and definitions used to introduce their method.

Experimental Designs Or Analyses:
* The authors perform the experiments on different, heterogeneous datasets, the datasets have a different dimensionality, which is good (+).
* In Tab. 2 for MNIST the authors mark their method as the best, because it has the lowest execution time. Nevertheless, kNN achieves much higher NMI (it needs more time, but provides much higher results) or, for other cases in Tab. 1, DYNAMICRS achieves lower error values. Could the authors comment on it?
* It would be nice to include in the analysis some image embeddings for networks trained on larger and more naturalistic datasets (CIFAR10 has only a few classes and its images have a low resolution). E.g., there are many trained ImageNet models available, so even an experiment with smaller datasets, such as mini-ImageNet, would be interesting.

Supplementary Material: I checked the Supplementary Materials. They look really good for reproducibility. The authors provide clear instructions there as well (this is a big plus).

Relation To Broader Scientific Literature: The authors should better relate their work to previous findings. E.g. they frequently cite the works like Charikar et al. (2020) or Macgregor & Sun (2023) while introducing their work, and it is not clear to me after reading the paper what has been introduced in Charikar et al. (2020) or Macgregor & Sun (2023) and what is the novelty; it should be better highlighted.

Essential References Not Discussed:
* The authors state in line 59: “there has been some recent progress to develop sub-linear query time algorithms (Charikar et al., 2020; Charikar & Siminelakis, 2017; 2019)” - it would be good to add some more recent papers here, as the newest one provided is from 5 years ago.
* A lot of works on incremental spectral clustering can be found, while the authors provide mostly old references (Dhanjal et al., 2014; Martin et al., 2018; Ning et al., 2007); one exception is Laenen & Sun, 2024. The authors should again focus on more recent works and discuss what their contribution is.
* A crucial thing is to differentiate the paper's contributions from Charikar et al. (2020), as it is cited in almost all lemmas and definitions used to define the method.

Other Strengths And Weaknesses: Additional strengths:
+ The figures created by the authors are very aesthetic.
+ The formulation of the problem and the methods is clear.
+ The Appendix and Supplementary Materials are helpful.
Additional weaknesses:
- No clear Conclusions section with main highlights; because of that, the paper seems unfinished.

Other Comments Or Suggestions: In general, the authors should better highlight how their method improves on the existing methods and better compare it to the existing literature.

Questions For Authors:
**Q1** The authors cite Charikar et al. (2020) or Macgregor & Sun (2023) in almost all parts of the method description. How does their work differ from them, and what does it contribute?
**Q2** Could the authors comment on the visible peaks in the execution times for their method?
**Q3** Could the authors write down their main conclusions? What is better, and what is worse, when their method is used?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed feedback and overall positive evaluation of our paper.

**Response to _Claims And Evidence and Experimental Design_**

It is known that kNN graphs achieve good empirical performance for spectral clustering. However, existing algorithms applicable for dynamic updates of kNN graphs lack rigorous theoretical guarantees. In contrast, our method provides such guarantees. On the DynamicRS result, one of the key strengths of our algorithm is its theoretical approximation guarantee. Meanwhile, for DynamicRS, one can only obtain a guarantee when sufficiently many samples are drawn with respect to the smallest KDE value in the dataset, which is dataset-dependent. DynamicRS performs well in many cases but degrades significantly on the msd dataset, whereas our algorithm maintains stable performance. We also appreciate the reviewer’s suggestion to run experiments on larger and more naturalistic datasets like mini-ImageNet. We highlight that our primary contributions are the theoretical guarantees and scalability, which are demonstrated in our current experiments. Nonetheless, we will include this analysis in future work.

**Response to _References Not Discussed_**

We thank the reviewer for the suggestion on discussing related works in more detail. We did not discuss the related work in more detail due to the page limit, and we will add more discussion in the next version of the paper. On citing more recent work on sub-linear query time algorithms for the KDE problem, we will include a reference to Charikar et al. (2024), which gives an improved static KDE algorithm breaking the quadratic dependence on $\varepsilon^{-1}$. Other than this, Charikar et al. (2020) remains the state-of-the-art for KDE. We will further include more references to incremental spectral clustering algorithms such as Sun et al. (2020), Kłopotek et al. (2024), and Zhou et al. (2021).
Compared to these, our work is the only one providing theoretical guarantees on the output clusters.

**Response to Q1**

The main difference between our work and theirs is that we study the dynamic setting in which data and query points are added over time. More specifically, while our initialisation and underlying static KDE methods share similarities with Charikar et al. (2020), our main novelty is the query hash data structure (Algo 4), enabling efficient updates of KDE estimates as data points arrive. With this, we show that the amortised update time for adding data points can be independent of $|Q|$, the number of maintained query points. To the best of our knowledge, this is the first such result. The novelty of the analysis lies in considering how often a query point $\mathbf{q}$ gets updated throughout the sequence of data point insertions (see Lemma B.7), as the value of $\mu_{\mathbf{q}}$ changes over time. See Section B.2, pages 17–20.

In addition, building on Macgregor & Sun (2023), we introduce the following techniques to handle the dynamic setting:
1. We carefully track all sampling paths for sampled edges in the tree.
2. We modify the reweighting factor of sampled edges (Lines 300–309) so that if the degree/KDE estimate of a neighbour changes, only a bounded number of edge weights must be updated (Lemma C.3).
3. We rigorously analyse the number of dynamically updated query points in hash buckets throughout our KDE sampling tree (Section 4.2).

These contributions are algorithmically novel and critical for dynamic KDE and similarity-graph construction. We will clarify this in the next version of the paper.

**Response to Q2**

The observed peaks in execution time result from the periodic reinitialisation of our KDE data structures, triggered when the dataset doubles in size (Line 19, Algo 1).
Notice that, while the reinitialisation step has time complexity $O(\varepsilon^{-2} \cdot n^{1+o(1)} \cdot \mathrm{cost}(k))$, it only occurs every $n$ updates, thus maintaining a sublinear amortised time complexity.

**Response to Q3**

In the next version, we will expand the following paragraph into a Conclusion section: This paper develops dynamic algorithms for dynamic KDE and similarity-graph construction. Compared with most heuristic methods, including the DynamicRS algorithm and dynamic kNN graphs, our designed algorithms have theoretically proven approximation guarantees.

We hope this addresses your questions, and we are happy to elaborate further in the discussion phase.

**References**
- Charikar, M., Kapralov, M., & Waingarten, E. 2024. A quasi-monte carlo data structure for smooth kernel evaluations. SODA
- Sun, G., Cong, Y., Wang, Q., Li, J., & Fu, Y. 2020. Lifelong spectral clustering. AAAI
- Kłopotek, M. A., Starosta, B., & Wierzchoń, S. T. 2024. Eigenvalue-based Incremental Spectral Clustering. Jour. of AI and Soft Computing Research
- Zhou, P., Shen, Y. D., Du, L., Ye, F., & Li, X. 2021. Incremental multi-view spectral clustering with sparse and connected graph learning. Neural Networks
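The amortisation argument in the response to Q2 can be illustrated with a toy doubling schedule. This is only a sketch under a simplified assumption: the rebuild cost here is taken to be linear in the current dataset size rather than the paper's $O(\varepsilon^{-2} \cdot n^{1+o(1)} \cdot \mathrm{cost}(k))$, and the function name is hypothetical.

```python
def total_reinit_work(n_insertions):
    # Rebuild (cost ~ current size, a simplifying assumption) whenever the
    # dataset doubles. The rebuild sizes are 1, 2, 4, ..., so their sum is
    # less than twice the final size: O(1) amortised cost per insertion for
    # a linear-cost rebuild, and sublinear amortised cost more generally.
    work, size, next_rebuild = 0, 0, 1
    for _ in range(n_insertions):
        size += 1
        if size == next_rebuild:
            work += size       # pay the full rebuild cost at this size
            next_rebuild *= 2  # next rebuild when the dataset doubles again
    return work
```

For 1024 insertions the rebuilds fire at sizes 1, 2, 4, ..., 1024, for a total of 2047 units of work, i.e. under 2 units amortised per insertion.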
Summary: This paper designs a dynamic similarity graph construction method based on kernel density estimation. The proposed method constructs graphs more efficiently than existing methods and can also approximate the clustering results of spectral clustering. The authors provide a detailed theoretical analysis of the proposed algorithm. Comprehensive experiments demonstrate its effectiveness.
Claims And Evidence:
1. Theorem 3.1 quantitatively analyzes the approximation bound of the proposed Algorithm 1 for kernel density estimation and its running time.
2. Theorem 4.1 provides the success probability of the constructed approximate graph and its running time.
The paper provides rigorous proofs for the above two theorems.
Methods And Evaluation Criteria: The dynamic graph construction method proposed in the paper can assist in designing online clustering algorithms. The proposed method performs well on several benchmark datasets. The authors also provide the algorithm's code, which facilitates further research.
Theoretical Claims: I have verified parts of the proofs, and they are all correct.
Experimental Designs Or Analyses: The width parameter of the Gaussian kernel function has a significant impact on the learning algorithm. How is this parameter set in Table 3, and what is the basis for its selection?
Supplementary Material: I have read the theorem proofs in the supplementary materials, and they are generally correct.
Relation To Broader Scientific Literature: The method proposed in the paper has broad connections with spectral clustering, online learning, and kernel density estimation.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The method proposed in the paper is highly innovative and can effectively improve the computational efficiency of spectral clustering algorithms.
2. The paper provides detailed proofs for the proposed theorems.
Weaknesses: The proof in the paper is relatively complex.
It would be best to provide a general outline of the proof before presenting the detailed proof.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their suggestions and positive evaluation of our paper, and we respond to their comments below:

>_**Question 1**: “The width parameter of the Gaussian kernel function has a significant impact on the learning algorithm. How is this parameter set in Table 3, and what is the basis for its selection?”_

**Response**: For each dataset, we set $\sigma$ of the Gaussian kernel such that the average kernel density $\mu_{\mathbf{q}}$ over all query points $\mathbf{q} \in Q$ is around $0.01$, as we describe on lines 433--436 of the paper submission. This follows the experimental setup of Karppa et al. (2022).

>_**Weakness 1**: “The proof in the paper is relatively complex. It would be best to provide a general outline of the proof before presenting the detailed proof.”_

**Response**: We appreciate the feedback from the reviewer. In the next version of the paper we will add proof outlines before the main proofs to improve the paper's exposition.

**References**
- Matti Karppa, Martin Aumüller, and Rasmus Pagh. Deann: Speeding up kernel-density estimation using approximate nearest neighbor search. In 25th International Conference on Artificial Intelligence and Statistics (AISTATS’22), pp. 3108–3137, 2022.
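The bandwidth-selection procedure described in the response to Question 1 can be sketched as a bisection on $\sigma$ until the average kernel density over the query points reaches roughly $0.01$. This is only an illustration under assumptions: the data is synthetic, the function names are hypothetical, and the normalisation of $\mu_{\mathbf{q}}$ (here an average of Gaussian kernel values) is a guess at the convention, for which the paper follows Karppa et al. (2022).

```python
import numpy as np

def mean_kernel_density(X, Q, sigma):
    # Average over q in Q of mu_q = (1/n) * sum_i exp(-||q - x_i||^2 / sigma^2).
    # (The 1/n normalisation is an assumption, not the paper's exact convention.)
    d2 = ((Q[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # (|Q|, n) squared distances
    return np.exp(-d2 / sigma**2).mean()

def pick_sigma(X, Q, target=0.01, lo=1e-3, hi=1e3, iters=100):
    # The mean density is monotone increasing in sigma (every squared distance
    # is positive), so a simple bisection finds sigma hitting the target.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_kernel_density(X, Q, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))   # data points
Q = rng.normal(size=(50, 5))    # query points
sigma = pick_sigma(X, Q)
```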
Summary: Suppose we are given a set $X = \{x_1,\dots,x_n\}\subseteq \mathbb{R}^d$ of $n$ data points, a set $Q = \{q_1,\dots,q_m\}\subseteq\mathbb{R}^d$ of query points, and a kernel $k : \mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}$. We would like to compute $\mu_q := \sum_{i=1}^n k(q,x_i)$ for all $q\in Q$. However, the brute-force approach computes these values in linear time per query. Hence, we want to construct a data structure that approximates these values in sublinear time. Moreover, the data structure supports data point updates and query point updates. As an application of this data structure, the authors showed that it can be used to construct a similarity graph.
## update after rebuttal
I have read the authors' rebuttal and will keep my current score.
Claims And Evidence: Yes. The proofs of the theorems are provided.
Methods And Evaluation Criteria: The authors provided details for the experimental study.
Theoretical Claims: I did not check all the details in the proofs.
Experimental Designs Or Analyses: Yes. I read the figures and the experiment setup.
Supplementary Material: I did not check all details in the supplementary material.
Relation To Broader Scientific Literature: .
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Estimating KDE is an important problem in machine learning. The effort devoted to pushing this direction forward should be appreciated.
Other Comments Or Suggestions: .
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 3
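The linear-time brute-force computation that the summary contrasts against can be sketched as follows. This is a minimal illustration of the baseline, not the paper's data structure; the Gaussian kernel and the function names are assumptions.

```python
import numpy as np

def brute_force_kde(X, Q, k):
    # mu_q = sum_{i=1}^{n} k(q, x_i): Theta(n) kernel evaluations per query
    # point, hence Theta(n * m) in total -- the cost a sublinear KDE data
    # structure is designed to avoid.
    return np.array([sum(k(q, x) for x in X) for q in Q])

# A Gaussian kernel, as one common choice of k.
gaussian = lambda q, x: np.exp(-np.sum((q - x) ** 2))

X = np.array([[0.0], [1.0]])
Q = np.array([[0.0]])
mu = brute_force_kde(X, Q, gaussian)  # mu_q = exp(0) + exp(-1)
```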
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review of our paper. The reviewer doesn’t leave specific questions for us to address. However, we’re happy to answer further questions raised by the reviewer during the later discussion phase.
A Sharper Global Convergence Analysis for Average Reward Reinforcement Learning via an Actor-Critic Approach
Accept (poster)
Summary: This paper investigates average-reward reinforcement learning with general policy parametrization. The authors propose a Multi-level Monte Carlo-based Natural Actor-Critic (MLMC-NAC) algorithm and provide a finite-sample analysis.
Update after rebuttal: The authors have satisfactorily addressed all of my concerns in the rebuttal, including the clarification regarding the single-loop versus double-loop structure. Based on their responses, I am increasing my score to 3.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: N/A
Supplementary Material: I read the supplementary material but did not carefully check the math.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths: The authors propose a Multi-level Monte Carlo-based Natural Actor-Critic algorithm that achieves a global convergence rate of $\tilde{O}(1/\sqrt{T})$. The algorithm eliminates the need for knowledge of mixing times. In addition, the finite-sample analysis provided in the paper extends to infinite state and action spaces. Weaknesses: Please refer to the Questions section.
Other Comments Or Suggestions: N/A
Questions For Authors: My main concern is about the high-level idea. Typically, addressing Markovian noise requires either a projection step or the assumption of a finite action space. However, it is unclear to me how the authors handle Markovian noise without either of these approaches. Could the authors provide a high-level explanation of their methodology? Besides, some other concerns are:
1. To establish Theorem 1, the objective function $J$ must be $L$-Lipschitz w.r.t. $\theta$. Can this property be directly inferred from existing assumptions? If not, this constitutes an additional assumption, and the authors should justify its reasonableness. Additionally, does $L$ depend on $|S|$, $|A|$, or $t_{\rm mix}$?
2. What step size is used in Theorem 1?
Is it a constant or a decaying step size? Furthermore, how does the step size scale with $T$ and the mixing time? This is a crucial aspect and should be explicitly stated in or before the theorem.
3. What are the definitions of $\Lambda_p$ and $\Lambda_q$ in Eq. (35)? Can they take arbitrary values?
4. The proposed algorithm operates on a two-timescale framework, as evident from Algorithm 1, where multiple critic updates (update on $\zeta$ in line 12) occur before a single actor update (update on $\theta$ in line 27). Two-timescale methods are generally easier to analyze since the outer loop can proceed after the inner loop converges. However, prior works such as NAC-CFA and MAC already focus on single-timescale algorithms, which are more commonly used in practice. In the comparison table, I suggest adding a column indicating whether each method follows a single- or two-timescale approach.
5. Given the previous point, the novelty of this work may be questionable unless the authors can demonstrate that the sample complexity analysis remains valid for a single-timescale version of Algorithm 1.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Markovian Noise**: We agree that applying projections or invoking the assumption of a finite action space is common in the literature. For example, the MAC algorithm derives bounds using MLMC estimates from (Dorfman and Levy 2022), which assume a finite state space and a bounded domain, thereby necessitating a projection step in the critic update. Our analysis models the NPG and critic updates as stochastic linear recursions with Markovian noise, which we show to converge without invoking the above preconditions. A similar result can be found in (Beznosikov et al. 2023). However, their update differs from ours since we deal with biased estimates, which are not considered in the cited paper.

**(Q1) Lipschitz Property**: Lipschitz continuity and smoothness of the function $J$ can be easily proven within the framework of discounted-reward MDPs (Lemma 2, Mondal and Aggarwal 2024). In this case, $L$ depends only on the discount factor. However, to the best of our knowledge, no such result is known in the average-reward case. We want to emphasize that nearly *all* papers analyzing general parameterized policy gradient methods for average-reward MDPs assume the stated properties of the $J$ function. These properties are expected to hold in practice since they ensure that a slight change in the parameter does not result in a large change in the $J$ function. In our assumption, $L$ is a constant that does not depend on other model-specific parameters.

**(Q2) Step Sizes**: The values of the step sizes $\beta$ and $\gamma$ are given in Theorems 3 and 4, respectively: $\beta = \frac{4\log H}{\lambda H}$, and $\gamma = \frac{2\log H}{\mu H}$. Moreover, the value of $\alpha$ is mentioned at the top of Page 20, line 1045: $\alpha = \frac{\mu^2}{4G_1^2L}$. Observe that none of these parameters depend on the mixing time, $t_{\mathrm{mix}}$, which is consistent with our claim that our algorithm does not require the knowledge of $t_{\mathrm{mix}}$.
In the revised version, we will explicitly state the values of these parameters in Theorem 1.

**(Q3) Regarding $\Lambda_P$ and $\Lambda_q$**: The terms $\Lambda_P$ and $\Lambda_q$ in (35) are parts of Theorem 2, which analyzes the convergence of a general stochastic linear recursion (SLR). Specifically, Eq. (35) forms a precondition for Theorem 2. We recognize that both NPG and critic updates can be formulated as SLRs. Moreover, these updates satisfy a precondition similar to (35). In particular, (53) and (54) allow us to choose $\Lambda_P=c_\beta+3$ and $\Lambda_q=c_\beta+1$ for the critic update, while (55) dictates $\Lambda_P = G_1^2$ and $\Lambda_q = \mathcal{O}(G_1t_{\mathrm{mix}})$ for the NPG update. Note that, although $\Lambda_q$ for the NPG update is dependent on $t_{\mathrm{mix}}$, it is not used as an input to our algorithm, thereby maintaining the claim that our algorithm does not require knowledge of the mixing time.

**(Q4) On Two-Timescale Approach**: We emphasize that no algorithm (be it single- or two-timescale) exists in the literature that achieves the optimal convergence rate of $\mathcal{O}(1/\sqrt{T})$ for average-reward MDPs with general parameterized policies via Markovian sampling. Our natural actor-critic-based algorithm achieves this goal, and that itself is a contribution worth highlighting. Secondly, the algorithms mentioned by the reviewer are not single-timescale. We note that the paper NAC-CFA (https://arxiv.org/pdf/2406.01762) mentions (page 5) that "Here, the AC and NAC algorithms in Algorithm 1 are single-loop, single sample trajectory and two timescale." Similarly, the MAC algorithm is also two-timescale, since the critic loop has an order-larger step size, which makes the algorithm two-timescale. Thus, we note that both the mentioned algorithms are two-timescale. We will mention this in the paper and list designing a single-timescale algorithm as future work.
**(Q5) Novelty**: Our main contribution is designing an algorithm for average-reward MDPs that achieves the optimal $\mathcal{O}(1/\sqrt{T})$ rate for general parameterized policies with Markovian sampling. This is a contribution worth highlighting because no algorithm, be it single- or two-timescale, achieves this feat. Moreover, as previously pointed out, all known actor-critic algorithms for average-reward MDPs with general parameterized policies are, to the best of our knowledge, two-timescale. In summary, we respectfully disagree with the reviewer's stance that not being a single-timescale algorithm is a good enough reason to discount the merit of our work. The optimization literature teaches us that single-timescale algorithms typically develop based on the intuitions provided by two-timescale algorithms. We are, therefore, hopeful that the intuitions provided by our algorithm will pave the way to designing the first single-timescale actor-critic algorithm for average-reward MDPs with general parameterized policies. We will mention this as future work in the revised paper.

---

Rebuttal Comment 1.1: Comment: Thank you for the authors’ response. The clarifications on Markovian noise, the Lipschitz and smoothness property, and the definitions of $\Lambda_P, \Lambda_Q$ make sense to me. I encourage the authors to explicitly include the clarification on the Lipschitz and smoothness properties in the paper. Additionally, I have noticed a slight difference in previous works regarding this assumption, as PHPG assumes the smoothness of $\nabla J$ rather than $J$ itself. I believe this distinction is worth highlighting in the paper as well.

Regarding the distinction between single- and two-timescale algorithms, I apologize for the confusion in my initial comment. My intent was to distinguish between single-loop and double-loop algorithms rather than single- and two-timescale methods.
It is evident that the algorithm proposed in this paper follows a single-loop structure, as multiple critic updates occur before a single actor update. However, I believe that comparing the analysis of single-loop and double-loop algorithms may not be entirely fair, as in double-loop algorithms, the outer loop proceeds only after the inner loop has converged, which simplifies the theoretical analysis. This distinction should be explicitly noted in the comparison table for clarity. --- Reply to Comment 1.1.1: Comment: Thank you for the clarification. We will definitely incorporate the suggested discussion on the smoothness assumption in the revised paper. Regarding the algorithmic novelty, our point still stands. There exists no algorithm in the literature (be it single- or double-loop) that achieves the order-optimal global convergence rate with general parameterized policies via Markovian sampling. Our proposed algorithm accomplishes this feat, making it a significant contribution worth highlighting. Additionally, the paper https://openreview.net/pdf?id=jh3UNSQK0l on single-timescale Actor-Critic (AC) highlights that single-loop two-timescale AC still shares the same drawbacks as double-loop AC. Specifically, even with single loop, the paper notes that these may be inefficient in practice (“However, it is still considered inefficient as the actor update is artificially slowed down") and relies on a decoupled analysis of the policy and critic updates (“ The two-timescale allows the critic to approximate the desired Q-value in an asymptotic way, which enables a decoupled convergence analysis of the actor and the critic”). That said, we acknowledge that analyzing a double-loop structure might be easier than its single-loop counterpart. We will include this point in the paper and outline the design of a single-loop algorithm that achieves the optimal convergence rate as a direction for future work. We hope our rebuttal has adequately addressed all of your concerns. 
Given these clarifications, we would greatly appreciate it if you could reconsider your evaluation and adjust the score accordingly to reflect your updated perception of the paper.
Summary: The paper establishes $O(\epsilon^{-2})$ sample complexity for the MLMC-NAC algorithm, claiming to significantly improve the existing result of $O(\epsilon^{-4})$. The state-space-independent result is not surprising, as NAC (in the exact-gradient case) is also state-independent.
Claims And Evidence: Yes, it seems so.
Methods And Evaluation Criteria: Yes, it seems so.
Theoretical Claims: I didn't verify the proofs.
Experimental Designs Or Analyses: No.
Supplementary Material: No
Relation To Broader Scientific Literature: In my personal opinion, these NAC algorithm guarantees are of theoretical and conceptual interest. I am not sure this flavour of NAC algorithms is used in large-scale problems.
Essential References Not Discussed: The paper doesn't cite [2], establishing a 1/T convergence rate for the average-reward case (for the exact-gradient case). The recent work [1] also addresses the convergence of dynamic programming techniques in robust average-reward MDPs and thus should be included in the related-works literature.
[1] Murthy, Yashaswini, Moharrami, Mehrdad, and Srikant, R. Modified policy iteration for exponential cost risk sensitive MDPs. Learning for Dynamics and Control Conference, pp. 395–406, PMLR, 2023.
[2] Kumar, Navdeep, Murthy, Yashaswini, Shufaro, Itai, Levy, Kfir Yehuda, Srikant, R., and Mannor, Shie. Global Convergence of Policy Gradient in Average Reward MDPs. The Thirteenth International Conference on Learning Representations, 2025. https://openreview.net/forum?id=2PRpcmJecX
Other Strengths And Weaknesses: Strengths: Good theoretical result.
Other Comments Or Suggestions: Suggestion: Presentation can be improved.
Questions For Authors: Question: How can this MLMC-NAC algorithm be used in large-scale problems? Does this paper have any message for practitioners?
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **(a) Related Works**: Thanks for mentioning references [1] and [2]. We will include them in the revised version of our paper. In addition to [1], we will also include some other works on robust MDPs.

**(b) Message for Practitioners**: Our algorithm is designed for large state and action spaces, with parameterized representations for both the actor and critic, making it well-suited for large-scale problems. While there are multiple practical insights, we highlight a few key ones below.
1. **Sampling**: Many existing policy gradient-type algorithms assume access to a simulator capable of generating independent trajectories at will from a given state distribution (e.g., see [1], [2]). Typically called i.i.d. sampling, the procedure of generating independent and identically distributed trajectories greatly simplifies the algorithm design and analysis. However, for many practical applications, building an accurate simulator is itself a difficult problem. Fortunately, our algorithm works on a single trajectory, eliminating the need for a simulator.
2. **Dependence on Unknown Parameters**: As stated in the paper, many previous works assume knowledge of the mixing time and hitting time [3], [4], which are difficult to obtain for most applications. Our algorithm does not require knowledge of the above parameters, thereby making it easier to adopt in practice.
3. **Memory/Space Complexity**: Our algorithm is memory-efficient. One can verify that the memory complexity of our algorithm is $\mathcal{O}(\max\\{\mathrm{d}, m\\})$, where $\mathrm{d}, m$ are the sizes of the policy parameter, $\theta$, and the critic parameter, $\zeta$, respectively. It is to be noted that although the Fisher matrix, $F(\theta)$ (and its estimates), requires $\mathcal{O}(\mathrm{d}^2)$ space, one only needs to store quantities of the form $F(\theta)\omega$, $\omega\in\mathbb{R}^{\mathrm{d}}$, which need $\mathcal{O}(\mathrm{d})$ space.
4. **Computational Complexity**: One way of computing the natural policy gradient (NPG) $\omega_\theta^* = [F(\theta)]^{-1}\nabla_{\theta}J(\theta)$ is to first obtain an estimate of the Fisher matrix $F(\theta)$, and then directly use its pseudo-inverse to compute $\omega_{\theta}^*$. Such a method was adopted in [5]. We, instead, pose the problem of computing the NPG as a stochastic least squares problem with Markovian noise. This eliminates the computationally expensive process of inverting a regularized Fisher matrix estimate (whose computational complexity is $\mathcal{O}(\mathrm{d}^3)$, where $\mathrm{d}$ denotes the size of the policy parameter). It can be checked that the computational complexity of our algorithm for a given iteration instance $(k, h)$ is $\mathcal{O}(\mathrm{d}^2+m^2)$, where $m$ is the size of the critic parameter.
5. **Convergence Rate**: Finally, the convergence rate of our algorithm is optimal in the horizon length, $T$. Practically, this indicates that our algorithm takes a relatively small number of training iterations to reach a given accuracy.

[1] Liu, Y., et al., An improved analysis of (variance-reduced) policy gradient and natural policy gradient methods. Advances in Neural Information Processing Systems, 33:7624–7636, 2020.
[2] Mondal, W. U. and Aggarwal, V. Improved sample complexity analysis of natural policy gradient algorithm with general parameterization for infinite horizon discounted reward Markov decision processes. International Conference on Artificial Intelligence and Statistics, 2024.
[3] Bai, Q., et al., Regret analysis of policy gradient algorithm for infinite horizon average reward Markov decision processes. Proceedings of the AAAI Conference on Artificial Intelligence, 2024.
[4] Ganesh, S. et al., Order-Optimal Regret with Novel Policy Gradient Approaches in Infinite Horizon Average Reward MDPs. In The 28th International Conference on Artificial Intelligence and Statistics, 2025.
[5] Xu, T., et al., Improving sample complexity bounds for (natural) actor-critic algorithms. Advances in Neural Information Processing Systems, 2020.
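The memory-saving observation in point 3 of the rebuttal above (storing $F(\theta)\omega$ rather than $F(\theta)$) is an instance of the standard Fisher-vector-product trick: since $F(\theta) = \mathbf{E}[s s^\top]$ for score samples $s = \nabla_\theta \log \pi_\theta(a|s)$, the product $F(\theta)\omega = \mathbf{E}[s\,(s^\top\omega)]$ never requires materialising the $d \times d$ matrix. A minimal sketch, where the score-sample matrix is a hypothetical stand-in for actual policy gradients:

```python
import numpy as np

def fisher_vector_product(scores, omega):
    # scores: (N, d) array of score-function samples grad_theta log pi(a|s).
    # F @ omega = (1/N) * sum_i s_i * (s_i . omega) is formed in O(N * d) time
    # and O(d) extra space -- the d x d Fisher matrix is never built.
    return (scores * (scores @ omega)[:, None]).mean(axis=0)

rng = np.random.default_rng(0)
scores = rng.normal(size=(64, 8))  # N = 64 samples, d = 8 parameters (toy sizes)
omega = rng.normal(size=8)
fv = fisher_vector_product(scores, omega)
```

The result matches multiplying by the explicitly formed empirical Fisher matrix `scores.T @ scores / N`, but with memory linear rather than quadratic in the parameter dimension.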
Summary: This paper studies the convergence of the Actor-Critic algorithm in the average-reward setting of reinforcement learning. The paper proposes a multi-level Monte Carlo-based natural actor-critic algorithm which achieves a $\tilde{O}(1/\sqrt{T})$ global convergence rate to a neighborhood of the optimal policy, where $T$ is the horizon length of the sampled trajectory.
### Update after rebuttal:
I appreciate the authors’ effort in the rebuttal. Although some of my concerns have been addressed, the remaining concern is with regard to the trajectory length. Throughout the paper and most of the rebuttal it seems to be $\max(2^{Q_{kh}}, T_{\max})$. However, it was pointed out in the last reply to a rebuttal comment that this was a typo. Various places (lines 6, 11, 23) of Algorithm 1 in the paper used the max operator rather than the min operator. It appears the paper can benefit from greater clarity in conveying the idea.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The proposed method makes sense. However, there are no empirical evaluations in the current version.
Theoretical Claims: The reviewer didn’t check the correctness of the proofs.
Experimental Designs Or Analyses: NA
Supplementary Material: The reviewer didn’t review the supplementary material.
Relation To Broader Scientific Literature: The key contributions relate to the convergence performance of the actor-critic algorithmic framework in RL, especially in the average-reward setting. The paper proposed a model-free approach with an order-wise comparable convergence result to the model-based approach.
Essential References Not Discussed: The work lacks a discussion of a very important branch of the sample-efficiency literature, for example [A] and references within. Instead of the convergence rate, which seems to only measure critic estimation and NPG estimation, the sample complexity is a more practical measure, as it represents the true resource consumption in terms of the samples needed.
In the context of the current paper, the analysis should include the policy-update component.
[A] Xu, T., Wang, Z., Liang, Y. Improving sample complexity bounds for (natural) actor-critic algorithms. Advances in Neural Information Processing Systems, 2020, 33: 4358–4369.
Other Strengths And Weaknesses: Strength: The paper studied the challenging average-reward setting with a finite-time convergence result. The analysis for this setting is in general non-trivial. Weakness: Sample complexity perspective: the paper seems to measure the sampled trajectory length for each pair of $k, h$ values. But in fact, the algorithm relies on $KH$ trajectories for the entire NAC algorithm to work. Based on Theorem 1, $KH = O(T)$ is large as well. This potentially poses sample inefficiency. Please elaborate or correct if the reviewer misunderstood.
Other Comments Or Suggestions: Potential typo: Line 320, left column, left-hand side of the inequality: should the subscript $d^{\pi*}$ be $\nu^{\pi*}$?
Questions For Authors: Please see the weakness point.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: **(a) Sample Complexity**: We note that there is a misunderstanding here. The convergence rate and the sample complexity are two faces of the same coin, and one can be derived from the other, as long as the error metric is taken to be the same. For example, consider our algorithm that uses a trajectory of length $\max\\{2^{Q_{kh}}, T_{\max}\\}$ where $Q_{kh}\sim \mathrm{Geo}(1/2)$ for each $k, h$. It is easy to check that $$\mathbf{E}[\max\\{2^{Q_{kh}}, T_{\max}\\}]\leq \mathcal{O}(\log T_{\max})$$ Since we have taken the number of iterations of $k, h$ to be $K=\Theta(\sqrt{T})$ and $H=\tilde{\Theta}(\sqrt{T})$, and $T_{\max} = H^2 = \tilde{\Theta}(T)$, the expected number of state transition samples used by our algorithm is $\mathcal{O}(KH\log T_{\max}) = \tilde{\Theta}(T)$. Theorem 1 exhibits that for such choices of parameters, our algorithm achieves $\tilde{\mathcal{O}}(1/\sqrt{T})$ global error (up to some additive factors of $\epsilon_{\mathrm{bias}}$ and $\epsilon_{\mathrm{app}}$). This establishes the convergence rate of our algorithm. Alternatively, if we want to ensure a global error of $\epsilon$ (up to the additive factors stated before), the expected number of state transition samples required would be $\tilde{\mathcal{O}}(\epsilon^{-2})$. This expresses the same result in terms of sample complexity. Although both notions are the same, the literature on average-reward MDP typically expresses the results in terms of the convergence rate, while the literature on discounted-reward MDP typically adopts the sample complexity framework. The cited paper [A] analyzes a discounted-reward setup where the common metric is the sample complexity. The same article has also been cited in our paper (line 118), where we mention its global convergence rate to be $\mathcal{O}(T^{-1/3})$. This is obtained from their equivalent sample complexity result of $\mathcal{O}(\epsilon^{-3})$. 
It is to be noted that even in the discounted MDP setup, there is no actor-critic algorithm that achieves $\mathcal{O}(\epsilon^{-2})$ sample complexity for general parameterization. **(b) Typo**: Thanks for pointing out the typo. We will correct it in the revised version. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the response. 1) However, the order-wise result in the response should be $$\mathbf{E}[\max \(2^{Q_{kh}},T_{\max} \)] \ge T_{\max} = \Theta(T_{\max})$$ instead of the stated result. 2) In the following literature [A], the sample complexity result is $\tilde{\mathcal{O}}(\epsilon^{-2})$ for discounted setting. [A] Xu, Tengyu, Zhe Wang, and Yingbin Liang. "Improving sample complexity bounds for (natural) actor-critic algorithms." Advances in Neural Information Processing Systems 33 (2020): 4358-4369. I would like to maintain the score. --- Reply to Comment 1.1.1: Comment: 1. We apologize for the typo. The length of the trajectory should be $\min\\{2^{Q_{kh}}, T_{\mathrm{max}}\\}$. In this case, one can see that $$ \begin{align} \mathbf{E}\left[\min\\{2^{Q_{kh}}, T_{\max}\\}\right] &= \sum_{q=1}^{\lfloor \log_2 T_{\max}\rfloor} 2^q \mathbf{Pr}(Q_{kh}=q)+ \sum_{q=\lfloor \log_2 T_{\max} \rfloor + 1}^{\infty} T_{\max} \mathbf{Pr}(Q_{kh}=q) \\\\ &= \sum_{q=1}^{\lfloor \log_2 T_{\max}\rfloor} 2^q\times 2^{-q} + \sum_{q=\lfloor \log_2 T_{\max} \rfloor + 1}^{\infty} T_{\max}2^{-q} \\\\ &= \lfloor \log_2 T_{\max}\rfloor + T_{\max} 2^{-\lfloor \log_2 T_{\max}\rfloor} \leq \lfloor \log_2 T_{\max}\rfloor +2 = \mathcal{O}\left(\log_2 T_{\max}\right) \end{align} $$ Therefore, our conclusions remain unchanged. 2. We note that the cited paper [A] has an incorrect sample complexity for NAC which was subsequently corrected in the arXiv version (https://arxiv.org/pdf/2004.12956). 
In this paper, the sample complexity $\mathcal{O}(\epsilon^{-2})$ established via the vanilla actor-critic (AC) algorithm is to ensure a first-order *local* or *stationary* convergence error of $\epsilon$ (please see the footnotes following the comparison table in the mentioned paper). In contrast, the same paper establishes a sample complexity of $\mathcal{O}(\epsilon^{-3})$ via a natural actor-critic (NAC) algorithm to ensure an $\epsilon$ *global* error. We reported the global convergence result in our paper. We hope the above clarification resolves all of your remaining concerns.
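As a sanity check on the truncated-geometric computation in the reply above, the expectation $\mathbf{E}[\min\\{2^{Q}, T_{\max}\\}]$ can be estimated by simulation and compared against the closed form $\lfloor \log_2 T_{\max}\rfloor + T_{\max} 2^{-\lfloor \log_2 T_{\max}\rfloor}$. This is an illustrative sketch only; the function and variable names are ours, not from the paper:

```python
import math
import random

def trajectory_length(t_max, rng):
    # Q ~ Geometric(1/2) on {1, 2, ...}: flip a fair coin until the first "heads"
    q = 1
    while rng.random() < 0.5:
        q += 1
    return min(2 ** q, t_max)

def mean_length(t_max, n_trials, seed=0):
    rng = random.Random(seed)
    return sum(trajectory_length(t_max, rng) for _ in range(n_trials)) / n_trials

t_max = 10_000
# Closed form from the derivation:
#   E[min(2^Q, T_max)] = floor(log2 T_max) + T_max * 2^(-floor(log2 T_max))
f = math.floor(math.log2(t_max))
exact = f + t_max * 2.0 ** (-f)
est = mean_length(t_max, n_trials=500_000)
```

The expected trajectory length stays logarithmic in $T_{\max}$ even though individual trajectories can be as long as $T_{\max}$, which is what keeps the total sample count at $\tilde{\Theta}(T)$.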
Summary: This work considers the average-reward RL setting with general policy parametrization. The authors improve over the state-of-the-art global convergence guarantee from a rate of $T^{-1/4}$ to $T^{-1/2}$ without requiring knowledge of the mixing or hitting times. This is done by adopting a multi-level MC procedure to estimate the relevant quantities and through a tighter analysis of the derived estimates. Claims And Evidence: The claims made in the submission are clear and supported by theoretical results. Concerning the table, the authors state that their approach works for infinite states and actions, and similar claims are made for some of the related works. However, by inspecting those works, it appears that their results are defined with respect to large but finite spaces. Can the authors comment on this? Methods And Evaluation Criteria: No experiments were presented in the work. Theoretical Claims: I checked the proof outline in Section 5 which seems correct to me. Experimental Designs Or Analyses: No experiments. Supplementary Material: I reviewed the supplementary material but I did not carefully check the proof of all the results. Relation To Broader Scientific Literature: The paper presents an improvement in terms of global convergence guarantees with respect to state-of-the-art approaches. A similar convergence result was achieved in Ganesh et al. (2024) but in a slightly different setting with finite states and actions and knowledge of the mixing time. A work closer in terms of assumptions and setting is the one from Patel et al. (2024) but achieves a worse convergence rate of $T^{-1/4}$. Essential References Not Discussed: In my opinion, the related literature has been thoroughly discussed. Other Strengths And Weaknesses: Among the strengths of the work, I mention the newly achieved result in terms of convergence for the considered setting without knowledge of the mixing time and with infinite states and actions. 
Concerning the weaknesses, the authors do not properly highlight the technical novelty in terms of theoretical analysis. Indeed, this work shares many similarities both in terms of assumptions and in terms of methods with the one of Patel et al. (2024). I believe that the work may benefit from a clearer comparison between the current work and the one just mentioned. Another weakness is the absence of numerical simulations. Other Comments Or Suggestions: See above Questions For Authors: See Sections above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Concerning the table, the authors state that their approach works for infinite states and actions, and similar claims are made for some of the related works. However, by inspecting those works, it appears that their results are defined with respect to large but finite spaces. Can the authors comment on this? We agree that some related works (NAC-CFA and MAC) assume large but finite state spaces, and we have mistakenly given them more credit by stating that their conclusions extend to infinite state spaces. We will make changes in the revised paper accordingly. In this context, we want to emphasize that our work does apply to infinite states and action spaces, and there is no mistake in that claim. > Concerning the weaknesses, the authors do not properly highlight the technical novelty in terms of theoretical analysis. Indeed, this work shares many similarities both in terms of assumptions and in terms of methods with the one of Patel et al. (2024). I believe that the work may benefit from a clearer comparison between the current work and the one just mentioned. The assumptions used in this work are commonly used in the literature and thus are similar to those used in multiple works including (Patel et al., 2024). Although our methods bear *some* similarities with that of (Patel et al., 2024), there are multiple novel components in the analysis that allow us to establish an order-optimal convergence rate of $\tilde{O}(T^{-1/2})$ for average reward MDPs without the knowledge of mixing time, a feat achieved by no other work in the literature. While existing AC analyses (including that in the cited paper) apply relatively loose bounds, we refine the analysis to get sharper results. The first step towards this goal is to establish that the global convergence error is bounded by terms proportional to the bias and second-order error in the NPG estimation (Lemma 1). 
In prior AC works (including the cited paper), a coarser bound was used instead of the NPG bias that led to an error of the form $\mathbb{E} ||\xi_t - \xi^\star||$ in the global convergence bound where $\xi_t$ is the critic estimate at time $t$ and $\xi^\star$ denotes the true value. Utilizing Lemma 1 and Theorem 3, our analysis refines this term to $||\mathbb{E}[\xi_t] - \xi^\star ||$, which can be significantly smaller than the previous estimate. It is to be noted that bounding $||\mathbb{E}[\xi_t] - \xi^\star||$ remains challenging due to Markovian sampling. Interestingly, we observe that both critic and NPG updates can be interpreted as linear recursions with Markovian noise. Efficient analysis of such a linear recursion is another novelty of our paper. In Theorem 2, we obtain a convergence rate for a generic stochastic linear recursion. This, along with the improved error estimates, forms the basis for Theorems 3 and 4, which finally leads to our desired result in Theorem 1. In summary, improving the error estimates, recognizing the NPG and critic updates as linear recursions, and performing a sharp convergence analysis of a general stochastic linear recursion are the novel cornerstones of our analysis. >Another weakness is the absence of numerical simulations. The focus of the paper is obtaining the first theoretical result for a state-action space size independent global convergence rate of $\tilde{\mathcal{O}}\left(T^{-1/2}\right)$ for general parameterized policies in average reward infinite-horizon MDPs, using a practical algorithm that does not require the knowledge of mixing times. Evaluation of this work on some practical applications is left as future work.
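To illustrate the kind of object such a result covers (a toy analogue, not the paper's actual critic or NPG update), here is a scalar linear recursion $\xi_{t+1} = \xi_t + \alpha_t(b(s_t) - A(s_t)\xi_t)$ driven by a two-state Markov chain; the iterate converges to the fixed point $\xi^\star = \bar{b}/\bar{A}$ of the stationary-averaged recursion. All constants below are made up for illustration:

```python
import random

# Two-state Markov chain with stationary distribution (2/3, 1/3).
P = {0: [0.9, 0.1], 1: [0.2, 0.8]}
A = {0: 1.5, 1: 3.0}   # per-state gain;   stationary average A_bar = 2.0
b = {0: 3.0, 1: 1.5}   # per-state target; stationary average b_bar = 2.5
xi_star = 2.5 / 2.0    # fixed point b_bar / A_bar of the averaged recursion

rng = random.Random(1)
s, xi = 0, 0.0
for t in range(200_000):
    alpha = 1.0 / (t + 100)                 # diminishing step size
    xi += alpha * (b[s] - A[s] * xi)        # linear recursion, Markovian noise
    s = 0 if rng.random() < P[s][0] else 1  # advance the chain
```

The noise here is Markovian rather than i.i.d., since successive $(A(s_t), b(s_t))$ pairs are correlated through the chain; this is the feature that makes the sharp analysis nontrivial.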
Distributed Parallel Gradient Stacking(DPGS): Solving Whole Slide Image Stacking Challenge in Multi-Instance Learning
Accept (poster)
Summary: This paper proposes a Distributed Parallel Gradient Stacking (DPGS) framework and a Deep Model Gradient Compression (DMGC) technique to address the non-stackable data issues caused by variable bag lengths in whole slide image (WSI) multiple instance learning (MIL). The core innovations include: 1) Enabling lossless batch processing through distributed sub-model parallel computation and gradient aggregation; 2) Jointly compressing gradients/parameters and leveraging gradient sparsity to update only sparse parameters, reducing communication overhead by 99.2% while maintaining convergence; 3) Experimental results demonstrate 31× acceleration in training speed and 9.3% improvement in accuracy on Camelyon16 and TCGA-Lung datasets. Claims And Evidence: Most of the claims made in the manuscript are supported by clear and convincing evidence. The shortcoming is that the motivation of this paper is not strongly substantiated. For instance, the article claims, "There are two critical bottlenecks: (1) Slow training speed: The inability to utilize GPU parallelism leads to prohibitively long training times for large-scale WSI datasets; (2) Unstable gradient estimation: Sequential gradient updates rely on single-bag statistics, introducing bias across non-identically and independently distributed (non-IID) bags and hindering model convergence." However, the second point lacks support from both references and experimental evidence. Methods And Evaluation Criteria: Yes, the proposed method(s) and/or evaluation criteria (e.g., benchmark datasets) are appropriately justified for the current problem or application. Theoretical Claims: Yes, I have examined the principles of the method proposed in the article, primarily the Deep Model-Gradient Compression outlined in Section 3.3. After reviewing the references cited in this section (Dean et al., Lin et al.), I find the principles underlying the proposed method to be sound and without issue. 
Experimental Designs Or Analyses: Yes, I have checked the validity of experimental designs and analyses from Section 4.1 to Section 4.3. The experimental design and analysis conducted for the proposed method in the article are methodologically sound and empirically valid. Supplementary Material: No, there is no supplementary material. Relation To Broader Scientific Literature: The Distributed Parallel Gradient Stacking (DPGS) framework proposed in this paper represents the first work to achieve simultaneous improvements in both speed and accuracy while addressing the issue of non-stackable data in Multiple Instance Learning (MIL). This holds significant importance for the current development of large-scale pre-trained pathology foundation models. The training method introduced in this paper has the potential to enhance the performance of existing Slide-level pre-trained foundation models, such as those referenced in [1][2][3]. [1] A whole-slide foundation model for digital pathology from real-world data (Nature 2024) [2] A pathology foundation model for cancer diagnosis and prognosis prediction (Nature 2024) [3] SlideChat: A Large Vision-Language Assistant for Whole-Slide Pathology Image Understanding (CVPR 2025) Essential References Not Discussed: To the best of my knowledge, the key contribution of the proposed Distributed Parallel Gradient Stacking (DPGS) framework in this paper is unique. It introduces the DPGS framework combined with Deep Model Gradient Compression (DMGC) technology to address the issue of non-stackable data caused by varying bag lengths in Whole Slide Image (WSI) multiple instance learning. The related work is comprehensively reviewed, but the motivation presented in the introduction may require further support from additional literature or experimental evidence. 
Other Strengths And Weaknesses: Strengths: 1. The paper introduces a novel approach (DPGS) to address the critical challenge of non-stackable data in MIL by leveraging distributed gradient stacking instead of traditional bag padding, enabling lossless MIL batch processing while improving both speed and accuracy. 2. The proposed method achieves up to 31× faster training and 9.3% accuracy gains on widely used medical datasets (Camelyon16, TCGA-Lung). This holds a certain significance for the development of foundational models that require large-scale pre-training in the field of computational pathology. Weaknesses: As I am not familiar with work in the field of distributed computing or training in the general machine learning area, I conservatively point out the following shortcomings of the work: 1. The introduction of the paper mentions a key bottleneck of unstable gradient estimation faced by traditional MIL model training. To address this issue, the paper designed experiments to compare the convergence times of different methods. However, it is not clear how “convergence” is defined, which may not be sufficient to support the claim that the instability of gradient estimation and the difficulty of convergence in previous methods have been resolved. 2. The explanation as to why existing distributed training frameworks (such as Megatron-LM, DeepSpeed) or gradient compression methods (such as DGC) cannot be directly applied to MIL models is not clear. Other Comments Or Suggestions: Since I am not familiar with work in the field of distributed computing or training in the general machine learning area, I do not have any other comments or suggestions. Questions For Authors: 1. Although bags of different lengths cannot be directly concatenated into a batch tensor, to my knowledge, the built-in collate_fn function in PyTorch’s DataLoader can load multiple bags within a single batch, which means that batch training is possible on a single GPU. 
What advantages does the proposed DPGS offer over this training approach? 2. How does DMGC ensure that gradients critical for early-stage convergence (e.g., large-magnitude or directionally consistent gradients) are retained during sparsification? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Q1: Most of the claims made in the manuscript are supported by clear and convincing evidence. The shortcoming is that the motivation of this paper is not strongly substantiated. A1: We sincerely appreciate the reviewer's valuable feedback. Regarding training efficiency, we clarify that "excessively long" was intended to highlight the relative efficiency difference between existing methods and our approach under the same conditions. We have added reference [1] to support this. For the second point, we agree that a clearer theoretical link between non-IID data and convergence difficulties is needed. We’ve cited [2], which proves: (1) non-IID data causes gradient bias, delaying or preventing convergence, and (2) appropriate gradient aggregation strategies (e.g., mean normalization) mitigate this effect. Our gradient aggregation mechanism is based on these insights. Q2: However, it is not clear how “convergence” is defined, which may not be sufficient to support the claim that the instability of gradient estimation and the difficulty of convergence in previous methods have been resolved. A2: We sincerely appreciate the reviewer's valuable feedback. We define convergence using the threshold-based criterion from [3], where the model is considered converged when the total loss remains consistently below a fixed threshold. This method aligns with standard practices and ensures objective stability assessment. We derived the threshold by analyzing the 95th percentile of the loss distribution across multiple trials, ensuring robustness. Q3: The explanation as to why existing distributed training frameworks (such as Megatron-LM, DeepSpeed) or gradient compression methods (such as DGC) cannot be directly applied to MIL models is not clear. A3: We sincerely appreciate the reviewer's valuable feedback. 
The existing distributed frameworks (e.g., Megatron-LM/DeepSpeed) indeed excel in traditional LLM scenarios but face two fundamental challenges when applied to MIL: (1) Architectural Compatibility: Current frameworks are primarily optimized for structured architectures like Transformers (e.g., tensor parallelism with intra-layer partitioning, as in Megatron-LM). However, MIL’s multi-instance interaction mechanism, due to its distinct design philosophy and structure, requires different parallelization strategies, making these frameworks suboptimal for MIL. (2) Data Characteristics: MIL’s hierarchical "bag-instance" data (variable-length, non-directly stackable) violates the conventional distributed training assumption of "batch homogeneity." This leads to inefficient data sharding and memory wastage when using existing frameworks. Our proposed method is specifically tailored to MIL’s unique computational paradigms and data topology. Thus, in the MIL domain, existing distributed training frameworks cannot be efficiently applied. Q4: Although bags of different lengths cannot be directly concatenated into a batch tensor, to my knowledge, the built-in collate_fn function in PyTorch’s DataLoader can load multiple bags within a single batch, which means that batch training is possible on a single GPU. What advantages does the proposed DPGS offer over this training approach? A4: We sincerely appreciate the reviewer's valuable feedback. Although PyTorch’s collate_fn provides flexibility, preprocessing steps like Bag Pooling are still needed to form batch tensors, potentially causing information loss and additional overhead. DPGS eliminates these steps, as shown in Table 2, where it significantly outperforms pooling-based methods in efficiency. Q5: How does DMGC ensure that gradients critical for early-stage convergence (e.g., large-magnitude or directionally consistent gradients) are retained during sparsification? 
A5: We sincerely appreciate the reviewer's valuable feedback. DMGC preserves essential gradients through: (1) Magnitude-based Gradient Filtering: We retain the top k% largest gradients, which dominate optimization (Table 4 shows that even retaining just 0.01% allows convergence). (2) Momentum Accumulation: Historical gradients stabilize training by suppressing noise and fluctuations. These mechanisms collectively preserve key gradients while minimizing ineffective updates. Thank you for your valuable feedback. If you have further questions, feel free to ask. We would really appreciate it if you find our revisions satisfactory and adjust your evaluation. [1]Wen, Jiangping, Jinyu Wen, and Emei Fang. "MsaMIL-Net: An End-to-End Multi-Scale Aware Multiple Instance Learning Network for Efficient Whole Slide Image Classification." arXiv preprint arXiv:2503.08581 (2025). [2]Li, Xiang, et al. "On the convergence of fedavg on non-iid data." arXiv preprint arXiv:1907.02189 (2019). [3]Jahani-Nasab, Mahyar, and Mohamad Ali Bijarchi. "Enhancing convergence speed with feature enforcing physics-informed neural networks using boundary conditions as prior knowledge." Scientific Reports 14.1 (2024): 23836. --- Rebuttal Comment 1.1: Comment: The rebuttal addresses my concerns and I'll retain the score.
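For readers wondering how gradient stacking can be "lossless" while collate_fn-style batching is not, a minimal numpy sketch (ours, not the authors' implementation) shows the identity this rests on: per-bag gradients computed independently on variable-length bags and then averaged coincide with the mini-batch gradient, with no padding or pooling of instances across bags. The MeanMIL-style linear model here is a stand-in for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
w = rng.normal(size=d)

# Variable-length "bags": instance-feature matrices that cannot be stacked
# into one rectangular tensor (3, 17, 5, and 41 instances, all d-dimensional).
bags = [rng.normal(size=(n, d)) for n in (3, 17, 5, 41)]
labels = rng.normal(size=len(bags))

def bag_grad(w, bag, y):
    # MeanMIL-style bag score: linear model on the mean instance feature,
    # squared-error loss; returns this single bag's gradient w.r.t. w.
    z = bag.mean(axis=0)
    return 2.0 * (z @ w - y) * z

# "Worker" gradients computed bag by bag, then averaged (gradient stacking).
stacked = np.mean([bag_grad(w, bag, y) for bag, y in zip(bags, labels)], axis=0)

# Reference mini-batch gradient of the mean loss over the same four bags.
Z = np.stack([bag.mean(axis=0) for bag in bags])  # bag-level features do stack
minibatch = 2.0 * Z.T @ (Z @ w - labels) / len(bags)
```

The two gradients agree to machine precision, which is why distributing bags across workers and averaging their gradients reproduces mini-batch training without padding.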
Summary: This paper proposes a framework called DPGS combined with DMGC to address the non-stacked data problem in MIL, particularly in WSI analysis. DPGS addresses the inefficiencies due to variable-length instance bags that prevent effective batch stacking by parallelizing MIL models and stacking gradients instead of raw data, thereby achieving significant training speedups. DMGC further enhances performance by compressing both gradients and model weights. Experimental results on Camelyon16 and TCGA-Lung datasets demonstrate up to 31× faster training and 9.3% accuracy improvement over baseline models. Claims And Evidence: The paper claims to enable lossless data stacking in MIL, accelerate training, and improve model accuracy. However, the experimental data appear to contain discrepancies. For instance, inconsistencies exist between the experimental data presented in Table 2 and the descriptions within the main text. Notably, the convergence time of the Bag padding method in Table 2 significantly exceeds that of the Classic method (non-parallel methods restricted to a single GPU), which appears to be anomalous. Furthermore, the precise methodology for computing the convergence time evaluation metric is not adequately elucidated. Methods And Evaluation Criteria: The methodology proposed in this manuscript offers the potential to accelerate the training of giga-pixel Whole Slide Image (WSI) analysis, thereby possessing significant research implications. Theoretical Claims: N/A. The demonstration of equivalent derivations between DPGS and traditional mini-batch training is straightforward. Experimental Designs Or Analyses: The experimental design and analysis are generally robust. However, the validation of model accuracy could benefit from the inclusion of stronger baselines. 
Furthermore, it is imperative to ensure the veracity of the experimental data, eliminating any typographical errors within the tables, as these significantly undermine the credibility of the reported results. Supplementary Material: N/A Relation To Broader Scientific Literature: The manuscript applies widely adopted techniques, such as gradient accumulation, distributed training, and gradient compression, to the task of giga-pixel WSI analysis. The connections to prior works are well-articulated. A discussion on how DPGS compares to recent federated MIL approaches would strengthen the literature review. Essential References Not Discussed: The paper provides a comprehensive overview of related works. Given the paper's emphasis on distributed training and gradient compression, incorporating recent studies on asynchronous distributed MIL and federated MIL frameworks would furnish a more comprehensive perspective of the field. Other Strengths And Weaknesses: Strengths: - The paper proposes DPGS to stack giga-pixel Whole Slide Images (WSIs) with inconsistent instance lengths, thereby accelerating training. - Experimental results demonstrate that the proposed method significantly enhances both training speed and accuracy. Weaknesses: - The novelty is somewhat incremental, as the method leverages established gradient accumulation, distributed training, and gradient compression techniques. - The computation methodology for metrics such as convergence time remains undefined. - Numerous data errors significantly compromise the credibility of the experimental results. Other Comments Or Suggestions: Clarify the trade-offs between gradient stacking and asynchronous updates. Questions For Authors: - Why is the Time data for the MEANMIL method unavailable in Table 2? - Why is the DPGS+DMGC method not compared under identical "B" conditions? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Q1: Table 2 shows Bag Padding's unexpectedly long convergence time versus Classic. The convergence time calculation method also requires clarification. A1: Thank you for reviewing. Bag Padding's longer convergence time (Table 2) stems from large packet length spans (C16 MULTISCALE: 223-57,318; C16 IMAGENET: 487-105,775), requiring excessive padding and computation time that outweighs batch benefits. Larger batches mitigate but don't eliminate this penalty. Q2: Comparing DPGS with recent federated MIL approaches and including asynchronous distributed MIL studies would enhance the literature review's comprehensiveness. A2: Thank you for reviewing. We acknowledge the need to compare with async distributed/federated MIL methods. Following your guidance, we systematically searched Web of Science using: "Federated Learning" AND "multi-instance learning"; "asynchronous distributed" AND "multi-instance learning". Both queries returned null results, further confirming our study's novelty. Q3: The method's novelty is limited, building on existing gradient accumulation, distributed training, and compression techniques. A3: Thank you for reviewing. We appreciate the reviewer's rigorous evaluation. While acknowledging prior work (gradient accumulation/distributed training), our core contribution addresses MIL's fundamental bottleneck - non-stackable data - which has constrained recent NN-based MIL studies to inefficient batch_size=1 training [2][3]. Our three-level innovation: Problem: first lossless MIL data stacking enabling batch_size scalability (orders-of-magnitude efficiency gain). Method: DMGC extends DGC via gradient sparsity for compressed weight distribution (lower communication overhead). Framework: first adaptation of distributed concepts to MIL's variable-length sequences (bridging general/MIL-specific methods). We fully agree these innovations build upon existing technologies, much like how Transformer builds upon self-attention mechanisms. 
However, proposing systematic solutions to domain-specific critical problems constitutes significant innovation in itself. Q4: While the experimental design is robust, adding stronger baselines would improve accuracy validation. A4: Thank you for reviewing. To address your suggestions and improve rigor, we added comparisons with two recent MIL models (RRTMIL, CVPR 2024 [2]; ACMIL, ECCV 2024 [3]). Preliminary results (below) show our framework retains significant training efficiency advantages over these models (ACMIL: 1.47% ACC and 19.1× time improvement; RRTMIL: 1.4% ACC and 3.99× time improvement). (Full table: https://drive.google.com/file/d/1VODIlXC0Qd1wXPao16yJFCReRAYj3MBV/view?usp=sharing) Ongoing experiments will be fully reported in the revision. Q5: DPGS+DMGC: Why no same-B comparison? MEANMIL: Why missing time data in Table 2? A5: Thank you for reviewing. Our tiered B-value design (C16: 1/4/8/16/32; TCGA-LUNG: 1/8/16/32/64) reflects real-world possible batch size selection. Classic only supports B=1 (no stacking); therefore, no comparison can be made. We'll correct Bag Padding labels and provide (https://drive.google.com/file/d/1_DUaRWWV7AcXjODUHF9RfFp_GsJFThd-/view?usp=sharing). For MeanMIL's missing timing: unstable convergence prevented standard timing calculation [1]; non-convergent cases used class priors (marked *) as conservative estimates; convergence time is shown as "-" with explanatory notes. Q6: Clarify the trade-offs between gradient stacking and asynchronous updates. A6: Thank you for reviewing. Our design employs synchronous distributed training rather than asynchronous updates due to two key factors: asynchronous updates cause stale gradients that led to significant training instability in our preliminary experiments, while synchronous updates require coordination overhead that our DMGC method minimizes while preserving stability through gradient stacking. We greatly appreciate your feedback. 
Should you have any additional questions, please don't hesitate to ask. We'd be most grateful if you find our revisions satisfactory and consider adjusting your evaluation. [1] Jahani-Nasab, Mahyar, and Mohamad Ali Bijarchi. "Enhancing convergence speed with feature enforcing physics-informed neural networks using boundary conditions as prior knowledge." Scientific Reports 14.1 (2024): 23836. [2] Tang, Wenhao, et al. "Feature re-embedding: Towards foundation model-level performance in computational pathology." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [3] Zhang, Yunlong, et al. "Attention-challenging multiple instance learning for whole slide image classification." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the responses, which have largely addressed my concerns. After careful consideration of all comments and the corresponding responses, I have decided to raise my score to weak accept.
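A3 above states that DMGC extends DGC via gradient sparsity. The exact DMGC procedure is not reproduced here, but the generic DGC-style mechanism it builds on, top-k magnitude filtering with local residual accumulation so that untransmitted gradient mass is carried forward rather than lost, can be sketched as follows (names and the keep fraction are our choices, not the paper's):

```python
import numpy as np

def sparsify_topk(grad, residual, keep_frac):
    # Add the fresh gradient to the local residual, transmit only the
    # largest-magnitude keep_frac entries, and carry the rest forward
    # (DGC-style residual/momentum accumulation).
    residual = residual + grad
    k = max(1, int(keep_frac * residual.size))
    idx = np.argpartition(np.abs(residual), -k)[-k:]
    sparse = np.zeros_like(residual)
    sparse[idx] = residual[idx]
    return sparse, residual - sparse

rng = np.random.default_rng(0)
g = rng.normal(size=1000)
sparse, res = sparsify_topk(g, np.zeros_like(g), keep_frac=0.01)
```

Only the largest-magnitude 1% of entries are communicated in each round, while the leftover residual accumulates locally and is eventually transmitted once it grows large enough, which is how convergence can survive extreme compression ratios.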
Summary: This paper introduces Distributed Parallel Gradient Stacking (DPGS), a framework designed to address the challenge of non-stackable data in Multiple Instance Learning (MIL) for Whole Slide Image (WSI) analysis. The authors propose two key components: (1) DPGS, which enables parallel processing of variable-length MIL bags by distributing them across multiple GPUs and aggregating their gradients, and (2) Deep Model-Gradient Compression (DMGC), which reduces communication overhead during distributed training through joint compression of gradients and model parameters. Experiments on Camelyon16 and TCGA-Lung datasets demonstrate significant improvements in both training speed (up to 31× faster) and classification accuracy (up to 9.3% increase) compared to baseline methods. Claims And Evidence: The paper makes several claims that are supported by experimental evidence but lack critical context: 1. Problem significance: The authors frame non-stackable MIL data as a critical bottleneck for training efficiency, claiming that sequential processing leads to "prohibitively long training times." However, from my experience, many current MIL methods can be trained on a single high-memory GPU (e.g., RTX3090 with 24GB memory) in reasonable timeframes (half a day) for the datasets used in this paper. 2. Acceleration and accuracy improvement: The paper claims up to 31× faster training and 9.3% accuracy improvement. While the experimental results in Table 2 support these numbers, the comparison is against sequential processing methods without consideration of simpler alternatives like uniform sampling approaches that could enable standard batch training. 3. Mathematical equivalence of DPGS to mini-batch training: The mathematical derivations showing DPGS is equivalent to mini-batch SGD are sound, but this equivalence raises questions about why standard mini-batch approaches with fixed-size bags (through patch sampling) couldn't achieve similar results with less complexity. 
4. Communication efficiency of DMGC: The ablation studies support the claim that DMGC achieves significant communication reduction. However, this raises another major question: existing works can be trained with a single GPU, yet the proposed method requires multiple GPUs for training. In clinical settings, researchers may not have access to multi-GPU setups. Methods And Evaluation Criteria: The idea of this work is timely and interesting, and the method design seems reasonable. However, the evaluation has several significant limitations: 1. Unclear feature extraction: The "multi-scale" features referenced for both datasets are not adequately defined, making it impossible to understand their relationship to current feature extraction approaches in computational pathology. 2. Outdated baselines: The paper considers older MIL methods (ABMIL, MeanMIL, TransMIL, CLAM-MB) while omitting more recent and advanced MIL approaches that represent the current state-of-the-art in computational pathology and pathology foundation models for feature extraction. 3. Missing comparisons to simpler alternatives: The paper did not evaluate simpler baselines such as uniform sampling that could potentially address the variable-length bag issue with much less complexity and computational overhead. 4. Limited dataset analysis: While the paper uses established datasets (Camelyon16 and TCGA-Lung), it could further benefit from adequately discussing the dataset characteristics or how they might influence the observed results. Theoretical Claims: The theoretical foundations of the paper are generally sound. Experimental Designs Or Analyses: Please address the questions in the Methods and Evaluation Criteria section. Additional questions: 1. Limited analysis of computational requirements: There is insufficient analysis of how results might change with different hardware configurations, particularly more accessible setups with fewer or less powerful GPUs. 2. 
Impact of foundation models: In the field of computational pathology, multiple foundation models have been proposed, yet I see none used in this paper. From the community’s experience, using foundation models as feature extractors and a simple MIL model (such as ABMIL), the performance on these two datasets could easily go up to near 100%. I believe it would be better to take these into consideration. Supplementary Material: No supplementary material is appended. Relation To Broader Scientific Literature: The proposed method contributes to the entire community of computational pathology, facilitating efficient and effective frameworks for whole slide image analysis. The speed-up in terms of training time and batch training are interesting and considered for the modern MILs for the first time. Essential References Not Discussed: The Related Work section in this paper is comprehensive, yet some minor discussion is needed: 1. The paper doesn't adequately discuss simpler alternatives to handling variable-length bags, such as uniform sampling strategies. 2. The MIL approaches used as baselines do not represent the current state-of-the-art. Other Strengths And Weaknesses: Strengths: 1. The mathematical derivations showing equivalence to mini-batch training are sound. 2. The gradient compression approach (DMGC) offers an interesting extension to existing gradient compression techniques. 3. The ablation studies provide useful insights into the factors affecting performance within their framework. Weaknesses: 1. High resource requirements: Although the speed-up performance is impressive, the method requires multiple GPUs and high-bandwidth connections, significantly limiting its practical applicability in many research and clinical settings. 2. Missing comparisons to simpler alternatives: The paper doesn't evaluate simple alternatives such as uniform sampling that could potentially achieve similar results with much less complexity. 3. 
Outdated baselines: The paper relies on comparisons with older MIL methods rather than current state-of-the-art approaches. 4. Unclear feature extraction: The "multi-scale" features referenced throughout the paper are not adequately defined. 5. Limited relevance given foundation models: The paper does not acknowledge or compare against foundation models that have demonstrated near-perfect performance on the same datasets. Other Comments Or Suggestions: 1. Typos should be carefully fixed: for example, in the first paragraph of the Introduction, "followed by classification.MIL framework". 2. Meanwhile, please check the indentation in the paper. For example, in the first paragraph of the Introduction, "exemplify high-accuracy solutions in this domain.However,". 3. Could you put Figure 1 at the top of Page 1? Questions For Authors: 1. Could you provide a clear definition of the "multi-scale" feature extraction process used in your experiments, including architectures and implementation details? 2. Have you compared your approach with simpler methods like uniform sampling from each WSI to create fixed-length bags that could be trained with standard batch processing? 3. Could you explain your experimental setting and the reason why you did not consider current foundation model-based approaches (UNI [1], CONCH [2], PLIP [3], etc.) that have demonstrated state-of-the-art performance on the same datasets? 4. What is the minimum hardware configuration required to achieve meaningful benefits from your approach compared to single-GPU training? 5. Why did you choose to compare against older MIL methods rather than more recent approaches that might represent stronger baselines? I will consider raising the overall recommendation score if these questions are resolved in the rebuttal phase. [1] Chen, Richard J., et al. "Towards a general-purpose foundation model for computational pathology." Nature Medicine 30.3 (2024): 850-862. [2] Lu, Ming Y., et al. 
"A visual-language foundation model for computational pathology." Nature Medicine 30.3 (2024): 863-874. [3] Huang, Zhi, et al. "A visual–language foundation model for pathology image analysis using medical Twitter." Nature Medicine 29.9 (2023): 2307-2316. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Q1: The paper calls non-stackable MIL data a bottleneck citing 'prohibitively long training times', yet current methods train adequately on single GPUs within half a day for these datasets. A1: Thank you for reviewing. Clarifications: the phrase "prohibitively long training times" was imprecise; our method shows superior efficiency under identical conditions. While some MIL methods can be trained on 24GB GPUs, repeated runs (tuning/ablation) require 10-100× more time, making efficiency critical. Current medical datasets (thousand-scale) remain smaller than natural image sets (e.g., ImageNet). As [1] shows, larger datasets improve generalization, suggesting medical data will grow. Our framework accelerates training through multi-node/single-GPU parallelism and enables federated learning for multi-center collaboration, getting ready for the future. Q2: The reported performance improvements lack comparisons with simpler alternatives like uniform sampling that could potentially achieve comparable results more efficiently. A2: Thank you for reviewing. For benchmarks, we added uniform sampling tests (currently ABMIL-only due to time constraints). Results show it speeds convergence but reduces accuracy and increases variance (see table: https://drive.google.com/file/d/1sjgW2z6Y9pbgwI9v6TwaJBZcwBd4PC4O/view?usp=sharing), compromising data integrity. It is mathematically distinct from standard mini-batching, as it breaks positive-bag completeness and can miss key instances, with larger batches worsening these effects [2]. Our method maintains better efficiency-accuracy tradeoffs. Q3: While DMGC enhances communication efficiency, its multi-GPU dependency and unproven performance on typical clinical hardware (especially low-GPU setups) pose practical concerns. A3: Thank you for reviewing. To clarify: our method supports single-GPU operation through multi-process parallelization (virtual nodes) to maximize resource utilization. This optimizes usage without multi-GPU setups. 
Intra-device communication enhances efficiency, as memory bandwidth exceeds network limits. We compared ABMIL performance using 4 nodes on a single 4060 8G GPU vs. multi-machine V100 32G configurations (https://drive.google.com/file/d/1biUVuB6COd79X3NADdsiwm9va2p1Mpyd/view?usp=sharing), with a detailed implementation analysis to be included in the revision. Q4: Unclear feature extraction: The "multi-scale" features referenced for both datasets are not adequately defined, obscuring their relation to current computational pathology feature extraction methods. A4: Thank you for reviewing. Multi-scale features were obtained from DS-MIL's repository (https://github.com/binli123/dsmil-wsi/issues/49), extracted at multiple magnifications (e.g., 20×, 5×) and concatenated into a feature pyramid. This maintains tissue details across scales while optimizing feature use via local attention, enhancing both classification and lesion localization. Q5: Outdated baselines: The paper uses older MIL methods but omits recent advanced MIL approaches representing the current state of the art in computational pathology, as well as pathology foundation models for feature extraction. A5: Thank you for reviewing. Key clarifications: our focus is a general MIL framework addressing non-stackable data bottlenecks, not a new model, hence the use of classical models (ABMIL, TransMIL) for comparison. New tests compare with recent models (RRT-MIL, ACMIL), showing training efficiency gains (ACMIL: 1.47% ACC and 19.1× time improvement; RRT-MIL: 1.4% ACC and 3.99× time improvement) (full table: https://drive.google.com/file/d/1VODIlXC0Qd1wXPao16yJFCReRAYj3MBV/view?usp=sharing). Due to time constraints, the full experiments will be provided in the revised version. Q6: Limited dataset analysis: While using established datasets, the paper should better discuss their characteristics and potential impact on results. 
A6: Key findings: bag length variation (Δ57k/105k for C16 multi-scale/ImageNet) affects speed, explaining Bag Pooling's lag. Dataset size dictates the optimal batch size (TCGA=32 vs. C16=16), showing the impact of dataset scale on hyperparameters. Method differences fade with easier features (TCGA) but intensify with harder ones (C16 ImageNet). Q7: The study's omission of current foundation models (e.g., UNI, CONCH, PLIP) that achieve near-perfect performance on these datasets with simple MIL architectures requires justification. A7: Thank you for reviewing. Our core contribution addresses MIL's data-stacking limitations through framework innovation (not feature extraction), justifying the initial internal comparisons. Following your advice, we have added comparisons (https://drive.google.com/file/d/1Xs2YqWd7SCp4Te8PaqnUbjXEzdf3p6Vd/view?usp=sharing); due to time constraints, the full experiments will be provided in the revised version. [1] Bailly, et al. "Effects of dataset size and interactions on the prediction performance of logistic regression and deep learning models." Computer Methods and Programs in Biomedicine 213 (2022): 106504. [2] Shapcott, M., et al. "Deep learning with sampling in colon cancer histology." Front Bioeng Biotechnol. 2019;7:52. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. The updated results in the rebuttal seem promising, with a 20× speed-up and a performance gain at the same time. I see that the authors have tried their best to answer the questions, and most of them are resolved. Although the writing quality of this paper requires further polishing, I have decided to raise the score to weak accept.
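The mini-batch equivalence that DPGS relies on, discussed in this review and rebuttal, can be checked numerically. The sketch below is a hypothetical illustration (not the authors' code), using a toy linear mean-pooling MIL model: summing gradients computed independently per variable-length bag equals the gradient of the summed loss over the whole mini-batch, so no padding is needed.

```python
import numpy as np

# Toy linear mean-pooling MIL model with variable-length bags.
rng = np.random.default_rng(0)
w = rng.normal(size=3)                               # model weights
bags = [rng.normal(size=(n, 3)) for n in (5, 9, 2)]  # bags of 5, 9, 2 instances
labels = [1.0, 0.0, 1.0]

def bag_loss_grad(w, X, y):
    z = X.mean(axis=0)          # mean-pool the instances of one bag
    pred = w @ z
    return (pred - y) ** 2, 2 * (pred - y) * z  # squared loss and its gradient

# Per-bag gradients, accumulated as separate worker processes would do.
stacked_grad = sum(bag_loss_grad(w, X, y)[1] for X, y in zip(bags, labels))

# Gradient of the summed (mini-batch) loss, via central finite differences.
def total_loss(w):
    return sum(bag_loss_grad(w, X, y)[0] for X, y in zip(bags, labels))

eps = 1e-6
fd_grad = np.array([(total_loss(w + eps * e) - total_loss(w - eps * e)) / (2 * eps)
                    for e in np.eye(3)])
assert np.allclose(stacked_grad, fd_grad, atol=1e-5)
```

The finite-difference check confirms that per-bag gradient accumulation and true mini-batch training give the same update direction for this toy model.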
Summary: The manuscript introduces a framework designed to overcome the challenges posed by variable-length bags in Multiple Instance Learning (MIL) for whole slide image analysis. The framework, called Distributed Parallel Gradient Stacking (DPGS), distributes gradient computation across multiple processes and stacks the gradients. This method simulates mini-batch training without the need for data padding. The authors also propose Deep Model-Gradient Compression (DMGC), which jointly compresses both gradients and model weights to reduce communication overhead significantly. Experiments on the Camelyon16 and TCGA-Lung datasets indicate improvements in training speed and accuracy over standard approaches. Claims And Evidence: The submission does support its claims with both theoretical derivations and empirical evidence. The authors provide theoretical justification that gradient stacking is mathematically equivalent to traditional mini-batch training. The empirical results on the Camelyon16 and TCGA-Lung datasets show improvements in convergence time and prediction accuracy. Methods And Evaluation Criteria: The combination of DPGS and DMGC addresses the challenge of handling variable-length bags in MIL. Experiments conducted on Camelyon16 and TCGA-Lung provide a measure of the proposed method's effectiveness in real-world scenarios. The evaluation criteria include both prediction accuracy and convergence time, which assesses not just the model's performance but also its efficiency. Theoretical Claims: The authors derive that aggregating gradients computed on separate bags (via DPGS) is mathematically equivalent to computing the gradient on a mini-batch formed by stacking the data. The authors also present a theoretical time complexity analysis that compares the parallel training time with sequential training. The derivation provides a reasonable estimate of the speedup and highlights potential scalability. 
Experimental Designs Or Analyses: The combination of DPGS and DMGC addresses the challenge of handling variable-length bags in MIL. Experiments conducted on Camelyon16 and TCGA-Lung provide a measure of the proposed method's effectiveness in real-world scenarios. The evaluation criteria include both prediction accuracy and convergence time, which assesses not just the model's performance but also its efficiency. Supplementary Material: No. Relation To Broader Scientific Literature: The paper bridges gaps between MIL-specific challenges and the broader advances in distributed deep learning and gradient compression. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper enables efficient parallel processing despite variable instance lengths in MIL. The combination of DPGS and DMGC improves training efficiency and model accuracy. The manuscript provides a derivation demonstrating the equivalence of gradient stacking to traditional mini-batch training. The time complexity analysis strengthens the theoretical foundation of the approach. The results, presented in tables and figures, effectively demonstrate both convergence speed improvements and accuracy gains. The integration between DPGS and DMGC could be better articulated. It is unclear how the compression interacts with the distributed gradient stacking. Additional details regarding hyperparameter choices (e.g., selection of compression thresholds, batch sizes in different settings) would further strengthen reproducibility. Other Comments Or Suggestions: No. Questions For Authors: 1. Could you clarify the integration between DPGS and DMGC, specifically how the compression interacts with the distributed gradient stacking process? 2. Could you provide additional details regarding your hyperparameter choices, such as the selection of compression thresholds and the batch sizes used in different settings? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Q1: Could you clarify the integration between DPGS and DMGC, specifically how the compression interacts with the distributed gradient stacking process? A1: We sincerely appreciate the reviewer's valuable feedback. We apologize for the lack of clarity in describing the relationship between the components in our original manuscript. Regarding your question, we provide the following clarification: The DMGC module is responsible for the compression and transmission of gradients within the gradient-based DPGS framework. Specifically, during distributed training, after worker nodes have completed their local gradient computations, the gradients are first processed by the DMGC module for compression before being transmitted to the main server. Consequently, the main server receives not the full gradients but rather the sparsified gradients compressed by DMGC. On the main server side, DPGS aggregates these compressed sparse gradients to form new global gradients. It is noteworthy that due to the high compression ratio of DMGC, the aggregated gradients remain sparse (the extent of sparsity is contingent on the keep rate). This inherent sparsity enables selective model updates, i.e., only a subset of weights requires updating rather than a full parameter update. The system leverages this property to transmit only the sparse weight updates back to the worker nodes, thereby significantly reducing communication overhead (this represents one of DMGC's key innovations compared to DGC [1]). We sincerely appreciate your valuable feedback. If you have any further questions about the paper, please do not hesitate to ask us. Q2: Could you provide additional details regarding your hyperparameter choices, such as the selection of compression thresholds and the batch sizes used in different settings? A2: We sincerely appreciate the reviewer's valuable feedback. 
Regarding your question, we provide the following clarification: Regarding the experiments presented in Table 2 and Figure 4, all DPGS+DMGC tests were conducted with 4 worker nodes and 1 master process, using a fixed gradient compression ratio of 10% (retaining 10% of gradients; results with different compression ratios can be found in Table 4). All frameworks (except the Classic framework which cannot support batch stacking) were evaluated with multiple batch sizes, employing a stepwise B-value configuration adapted to each dataset's scale (C16: B=1/4/8/16/32; TCGA-LUNG: B=1/8/16/32/64). The complete DPGS+DMGC results across different B-values are presented in Figure 4. Within the DPGS+DMGC framework shown in Table 2 and Figure 4, the master process maintains a fixed gradient buffer size of 4, while worker nodes dynamically adjust their gradient accumulation steps according to the batch size (e.g., when batch size is 16, both the gradient buffer and accumulation steps are set to 4, making the effective gradient update equivalent to batch size 16). All experiments used the Adam optimizer with default momentum parameters, with a base learning rate of 0.001 that follows the linear scaling rule (L=0.001×Batch_size) as established in Goyal et al.'s work on large minibatch SGD[2]. Regarding the experiments in Table 3, all tests employed the aforementioned learning rate configuration with a fixed batch size of 16. As described in the manuscript, we maintained consistent network bandwidth across all tests, and applied identical compression ratios (retaining 10% of gradients) for both DGC and DMGC methods. All other DPGS-related experimental settings remained unchanged from those previously specified. For the experiments depicted in Figure 3, DPGS maintained the same configuration while DMGC used a fixed 10% compression ratio, with identical learning rates and optimizer settings as described earlier. 
The only modification involved varying the number of nodes, and we recorded training time (not convergence time) at every 10-epoch interval, maintaining a constant batch size of 16 throughout these experiments. Concerning Table 4 and Figure 5, the DPGS configuration remained consistent with our baseline setup except for deliberate variations in two parameters: network bandwidth and compression ratio. These experiments were likewise conducted with a fixed batch size of 16. We sincerely appreciate your valuable feedback. If you have any further questions about the paper, please do not hesitate to ask us. We would be deeply grateful if you find our revisions satisfactory and might consider adjusting your overall evaluation. [1] Lin, Y., Han, S., Mao, H., Wang, Y., and Dally, W. J. Deep gradient compression: Reducing the communication bandwidth for distributed training. URL http://arxiv.org/abs/1712.01887. [2] Goyal, P., Dollar, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K. Accurate, large minibatch SGD: Training ImageNet in 1 hour. URL http://arxiv.org/abs/1706.02677.
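The top-k sparsification described in this rebuttal (a 10% keep rate, in the style of DGC) can be sketched minimally. The code below is a hypothetical illustration, not the DMGC implementation: each worker sends only its largest-magnitude gradient entries, and the server aggregates them, so the aggregated update stays sparse and only a subset of weights needs to be touched.

```python
import numpy as np

# Hypothetical DGC-style top-k gradient compression (illustrative only).
def compress(grad, keep_rate=0.1):
    """Keep only the top-k entries of a gradient by magnitude."""
    k = max(1, int(keep_rate * grad.size))
    idx = np.argsort(np.abs(grad))[-k:]   # indices of the k largest magnitudes
    return idx, grad[idx]

rng = np.random.default_rng(1)
worker_grads = [rng.normal(size=100) for _ in range(4)]  # 4 worker nodes

# Server-side sparse aggregation: only transmitted indices are updated.
agg = np.zeros(100)
for g in worker_grads:
    idx, vals = compress(g)
    agg[idx] += vals

sparsity = np.mean(agg == 0)   # with a 10% keep rate, most entries stay zero
assert sparsity >= 0.5         # the aggregated gradient remains sparse
```

Because the aggregate stays sparse, the server can send back only the changed weight entries, which is the communication saving the rebuttal describes.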
Learning Condensed Graph via Differentiable Atom Mapping for Reaction Yield Prediction
Accept (poster)
Summary: The paper introduces YIELDNET, a neural model designed to predict chemical reaction yields by learning a condensed graph representation of reactions. Unlike traditional methods that rely on quantum chemistry-based molecular descriptors, YIELDNET approximates atom mapping (the correspondence between reactant and product atoms) using a differentiable node alignment network. This mapping allows the construction of a Condensed Graph of Reaction (CGR), a supergraph that serves as a surrogate for the transition state, a crucial determinant of reaction yield. The model then processes CGR embeddings through a transformer-guided reaction path encoder to predict yields. A key advantage of YIELDNET is that it operates under distant supervision, meaning it does not require explicit atom mapping or transition state labels during training, yet it outperforms baseline models. This approach enhances inductive bias by integrating a differentiable approximation of the transition state, improving the accuracy of yield predictions in multi-step chemical reactions. Claims And Evidence: Several claims are stated in this paper, including: 1) The CGR serves as a surrogate for the TS: the claim is not rigorously validated. It relies on structural similarity but ignores electronic structure and activation energy, which are critical in TS determination. Transition states require potential energy surface calculations, which the authors do not provide. A surrogate for the TS must be benchmarked against computed TS structures to be credible. Even though the authors show in Figure 8 how the TS is represented, for more complicated reactions the middle state can also be an intermediate. This simplification of the whole reaction scheme should be emphasized. 2) The differentiable atom mapping is an effective approximation of the true atom mapping. The authors state that YIELDNET approximates atom mapping via a differentiable node alignment network (using Sinkhorn iterations). 
Experiments show that the model outputs a doubly stochastic alignment matrix that relaxes hard permutation constraints, which mitigates NP-hard graph matching challenges. However, the improvement compared to other mapping methods is marginal. Test cases where the mapping fails are not mentioned: does it work for rearrangements, symmetry-driven reactions, and pericyclic processes? What about other edge cases (e.g., radical shifts, metal-catalyzed bond formations)? 3) The authors claim that yield prediction can be learned without explicit TS information, supported by empirical comparisons with existing ML models for yield prediction. But thermodynamically, the yield is decided by: 1) reaction duration and activation energy height; 2) the comparison of activation energy barriers across multiple competing reaction pathways. The interpretation of the internal state does not guarantee any physical understanding or explanation of its performance. 4) The authors state that YIELDNET generalizes well across different reaction types. Methods And Evaluation Criteria: The authors use appropriate methods to design the model and organize a reaction dataset for the problem. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design and analysis in YIELDNET focus on reaction yield prediction using differentiable atom mapping and condensed graphs of reactions (CGRs). While the methodology is innovative, the experimental design has limitations in terms of chemical validity, dataset composition, benchmarking, and mechanistic interpretability. The paper claims to predict reaction yields by modeling CGR embeddings and atom mappings, but it does not explicitly explain the relationship between TSs, which are defined by the energy barrier, and the connectivity graphs used in the model. Besides, other factors that influence yield (reaction time, solvent, catalysts, ...) are ignored. 
Since yield is highly condition-dependent, the authors should discuss and analyze this limitation. The authors collected several datasets for training and testing YIELDNET, but the paper lacks detailed yield distribution visualizations and descriptions of the reactions that would help readers understand the differences between the datasets and any potential bias involved. Supplementary Material: The supplementary materials include model details, training parameters, and complementary experiments that validate the importance of atom mapping and the CGR. Relation To Broader Scientific Literature: YIELDNET could be a potentially useful model for yield prediction. However, the paper still lacks sufficient connection to fundamental chemistry, including physical organic chemistry principles such as transition state theory, kinetic control, and thermodynamics. To make it a more persuasive model with chemical understanding, the authors should report validation of model generalization on unseen reaction classes. Essential References Not Discussed: Related and up-to-date papers are properly cited. Other Strengths And Weaknesses: Strengths: The introduction of differentiable atom mapping for reaction yield prediction is an innovative step. Unlike traditional rule-based or SMILES-based atom mapping methods, YIELDNET relaxes atom mappings into a soft differentiable space using the Sinkhorn algorithm. Weaknesses: The model is only tested on a limited set of datasets, and its generalization ability is still a mystery. Besides, the paper does not analyze when and why YIELDNET fails or how missing reaction condition embeddings influence performance. Other Comments Or Suggestions: N/A Questions For Authors: 1. How does the Condensed Graph of Reaction (CGR) meaningfully approximate the transition state? The paper claims that the CGR serves as a surrogate for the transition state (TS), but it does not include quantum mechanical validation of this claim. 
If I can approximate the "intermediate" with 0.8*R+0.2*P, 0.5*R+0.5*P, or 0.2*R+0.8*P, would that be helpful? 2. How does YIELDNET compare to quantum-ML hybrid models (with QM-optimized TSs) for yield prediction? If it does learn the TS information, there should be similarities. 3. How does YIELDNET generalize across different reaction classes? 4. Why does YIELDNET ignore reaction conditions (temperature, solvent, catalyst effects)? How would you design the model to incorporate the situation where the condensed graph introduces a catalyst as part of the reagents during a specific step of the reaction? 5. What are the major failure modes of YIELDNET? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments, which we address as follows.

> *CGR meaningfully approximates TS?*

During the rebuttal, we performed quantum chemical validation analogous to that reported by Choi in 'Prediction of transition state structures of gas-phase chemical reactions via machine learning', Nature Commun. 2023. Here, we have used our CGR-based TS as the initial guess for quantum chemical optimization for reactions in the GP dataset. We observed that 80% of these converged to a TS. Subsequent QM calculations could produce the correct reactant and product from the TS in 60% of cases. In addition, we also compare CGR-based TSs with true (QM-computed) TSs using two metrics: (1) Accuracy: the fraction of total connections in the CGR-based TS which match the QM-computed TS. (2) RMSD: root mean squared deviation (lower is better) between atom positions obtained from the CGR-based TS and the QM-computed TS. Results on the GP dataset for our method and other interpolation-based methods are as follows. We observe that our method performs better.

| | Accuracy | RMSD |
|-|-|-|
| 0.8R+0.2P | 0.51 | 1.50 |
| 0.5R+0.5P | 0.71 | 1.24 |
| 0.2R+0.8P | 0.91 | 1.33 |
| Our | **0.95** | **0.71** |

Our CGR serves as a surrogate for the TS to enhance yield prediction, not to provide an accurate TS prediction. More accurate TS prediction requires TS supervision during training, which most yield datasets lack. Complex datasets like SC contain ~200 atoms per reaction. Here, QM calculations on TSs take a couple of days per reaction, assuming a good initial TS guess, which is even more time-consuming given the complexity of the dataset. Hence, a general evaluation of our CGR was infeasible. However, as the reviewer suggested, we performed experiments on reaction instances in the GP dataset, which show that despite the absence of supervision, our CGR shows better proximity to the TS than its alternatives. 
> *compare to quantum-ML models*

We feed the QM-optimized TS (TS*) into our model to compute the yield prediction error $E^*(r) = |y^*(r)-y _{TS^*}(r)|$ for each reaction $r$, and then compute the yield prediction error $E(r) = |y^*(r)-y _{CGR}(r)|$ provided by our CGR-based TS. The average difference between these errors is $<$ 8% of the QM-optimized-TS-based yield prediction error.

> *generalize to different reaction classes*

We compute MAE on test examples that include reactions from the SC dataset, whereas the model is trained on the DF dataset. We observe that our method shows better performance.

| Method | MAE |
|-|-|
| GCN | 13.40 |
| HGT | 25.38 |
| TAG | 18.52 |
| GIN | 17.03 |
| YB | 23.64 |
| Our | 10.60 |

> *ignore reaction conditions?*

We do incorporate both catalyst and solvent, when present, into the reactant set R. We will explicitly mention them in the revised paper. Temperature was excluded as it is invariant in the case of high-throughput experimentation datasets (e.g., the DF and NS datasets), which maintain consistent reaction conditions throughout, or varies mildly for others (e.g., SC). Integrating temperature into the node/edge features might enhance CGR or yield prediction quality.

> *a catalyst as a part of the reagent*

In the i-th step of a multi-step reaction, the reactant set $R_i$ is a disjoint graph of all the participating species, including catalyst, base, solvent, etc.

> *Failure mode*

Upon close inspection, we observe that our method could not do well on reactions that are very unusual/rare in terms of bond formation and breaking, due to their under-representation in the training samples. Poor predictions on such reactions are unlikely to make any adverse impact, since they are too rare. If the dataset labels (yield values) carry large measurement errors, then our model (and the baselines) would also perform poorly.

> *lack yield visualization*

yhist.pdf in https://bit.ly/rbynet shows that our predicted yield distribution mimics the true yield distribution very closely. 
> *yield for rearrangements, pericyclic processes*

Our datasets include rearrangements (present in GP), pericyclic processes (present in GP), metal-catalyzed bond formations (all reactions in SC), etc. The following are MAE-based comparisons.

| | Our | Closest baseline |
|-|-|-|
| Rearrangement | 21.82 | DeepReac+: 28.79 |
| Pericyclic | 23.26 | GCN: 24.16 |
| Metal-catalyzed | 8.75 | DeepReac+: 10.44 |

> *experimental design has limitations, dataset, benchmarking*

We evaluated over 10 SOTA datasets against six baselines across ten splits for all experiments, including the ablation study, to maintain consistency. We believe that our work conducts experiments with more rigor and comprehensiveness than any previous yield prediction baseline. We found that the pre-final layer embedding $z_r$ (Eq. 19) correlates with activation energies (corr.pdf in https://bit.ly/rbynet). Extracting mechanistic signals from yield alone is challenging, and no prior yield prediction work attempts it; they only focus on black-box yield predictors. Hence, our work makes notable progress, paving the way for future research.
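The interpolation baselines discussed in this rebuttal (TS guesses of the form a*R + (1-a)*P, scored by RMSD against a reference TS) can be sketched as follows. This is a hypothetical illustration with synthetic coordinates, not real QM data or the authors' evaluation code.

```python
import numpy as np

# Illustrative convex-combination TS guess and RMSD scoring (synthetic data).
def interpolate_guess(R, P, alpha):
    """TS guess as alpha*R + (1-alpha)*P over aligned atom coordinates."""
    return alpha * R + (1 - alpha) * P

def rmsd(A, B):
    """Root mean squared deviation between two (N, 3) coordinate sets."""
    return np.sqrt(np.mean(np.sum((A - B) ** 2, axis=1)))

rng = np.random.default_rng(2)
R = rng.normal(size=(10, 3))            # toy reactant atom positions (N x 3)
P = R + 0.5 * rng.normal(size=(10, 3))  # toy product positions
TS = 0.5 * (R + P)                      # toy "true" TS halfway along the path

errs = {a: rmsd(interpolate_guess(R, P, a), TS) for a in (0.8, 0.5, 0.2)}
# For this toy TS, the midpoint interpolation is (trivially) the closest guess.
assert errs[0.5] < errs[0.8] and errs[0.5] < errs[0.2]
```

On real reactions the true TS need not sit at any fixed interpolation fraction, which is why the rebuttal's learned CGR-based guess can beat all three fixed-alpha baselines.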
Summary: Predict the permutation matrix of a chemical reaction with GNN atom embeddings and iterative Sinkhorn normalization. Sampling from the permutation matrix yields the atom mapping, which is used by a Transformer to predict the yield. Claims And Evidence: - propose a differentiable approximation of the atom mapping: I would argue that e.g. RXNMapper is also a "differentiable approach to atom mapping", but in their evaluation they show that they outperform RXNMapper (Table 2). Here, +- standard deviation as in Table 1 would be nice as well, along with statistical significance tests (further, state which test and assumptions have been used). Their approach is novel and performant but seems memory intensive. - proposed YieldNet outperforms several baselines: experimentally validated in Table 1; code and data provided for reproduction as supplementary material; evaluated across multiple datasets via 10-fold cross-validation; (why is USPTO in the supplementary but not the main Table 1?) Methods And Evaluation Criteria: yes Theoretical Claims: Not checked. Experimental Designs Or Analyses: All evaluations are sound. Further evaluations on different splits might be interesting, but overall the analysis is done in a just manner. Supplementary Material: Reviewed the code as well as the supplementary PDF. Data and code are provided and seem extensive and well written. Relation To Broader Scientific Literature: Key contributions could be demarcated better from the scientific literature. Essential References Not Discussed: OK Other Strengths And Weaknesses: Original - yes Significance - yes Clarity - yes - the paper is very well explained (minor suggestions follow) Other Comments Or Suggestions: Suggestions to improve clarity: - you mention several times that activation energy directly impacts yield; what is the precise relationship? - it seems that the topic of yield in reality is much more nuanced than described in the manuscript: e.g. conditions such as the person executing the experiment, temperature, .... 
I guess it is fair to distinguish between theoretical idealized yield and practical yield. - For the reaction-path encoder: how is the atom mapping encoded here, and how does it find its way to the final yield? It seems that H_CGR is an embedding that holds this information for one step --> why not incorporate the whole reaction and perform attention over that? Questions For Authors: no further questions Code Of Conduct: Affirmed. Overall Recommendation: 4
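The Sinkhorn relaxation this review summarizes (a GNN cost matrix iteratively normalized into a doubly stochastic soft permutation) can be shown in a few lines. The sketch below is a minimal, hypothetical illustration of plain Sinkhorn normalization on a toy cost matrix, not the model's actual alignment network.

```python
import numpy as np

# Minimal Sinkhorn sketch: alternating row/column normalization of exp(-M)
# converges toward a doubly stochastic matrix, the soft relaxation of a
# hard atom-to-atom permutation.
def sinkhorn(M, n_iters=200):
    P = np.exp(-M)                           # positive matrix from costs
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # normalize rows
        P = P / P.sum(axis=0, keepdims=True)  # normalize columns
    return P

rng = np.random.default_rng(3)
M = rng.normal(size=(6, 6))      # toy cost matrix between 6 reactant/product atoms
P = sinkhorn(M)

# Rows and columns each sum (approximately) to 1: doubly stochastic.
assert np.allclose(P.sum(axis=1), 1, atol=1e-3)
assert np.allclose(P.sum(axis=0), 1, atol=1e-3)
```

Because every operation is differentiable, gradients can flow from the downstream yield loss back through P into the atom embeddings, which is what makes the atom mapping trainable under distant supervision.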
Rebuttal 1: Rebuttal: We thank the reviewer for their comments, which we address below.

> *results for atom mapping*

Appendix E.3 contains the table with +- std error. A paired t-test in the majority of cases revealed that the performance gain achieved by our method is statistically significant with $p=0.01$.

> *Approach is novel but memory intensive*

Our method spends most of its compute on computing the permutation matrix using Sinkhorn iterations, which has $O(N^2)$ complexity if $N$ is the total number of nodes. However, this is not a hard bottleneck and can be easily overcome using low-rank OT. Low-rank OT provides highly efficient Sinkhorn iterations (see Scetbon et al., Low-Rank Sinkhorn Factorization, ICML 2021). We first approximate $M = \sum _{i=1} ^d \max (H _R[u,i], H _I [u',i])\approx AB^T$ where $A$ and $B$ are $N \times d$ low-rank matrices. Similarly, we approximate the transport matrix $P$ as $P \approx Q D(1/g) R^T$ where $Q, R$ are $N \times r$ matrices, $g$ is $r \times 1$, and $D(1/g)$ is a diagonal matrix with entries $1/g_i$. To ensure $P$ is doubly stochastic, we enforce:

$$ Q\mathbf{1} = \mathbf{1}, \quad \mathbf{1}^T Q = g^T $$
$$ R\mathbf{1} = \mathbf{1}, \quad \mathbf{1}^T R = g^T $$
$$ g^T \mathbf{1} = 1, \quad Q, R, g > 0 $$

Instead of minimizing $Tr(P^T M)$, we minimize $Tr((Q D(1/g) R^T)^T AB^T)$ using alternating minimization w.r.t. $Q, R, g$. To optimize $Q$, we rewrite:

$$ Tr((Q D(1/g) R^T)^T AB^T) = \langle Q, AB^T R D(1/g) \rangle $$

We apply Sinkhorn iterations to $X = AB^T R D(1/g)$, computed in 3 steps. Note that the complexity of multiplying $m \times n$ and $n \times p$ matrices is $O(mnp)$.

1. Compute $R D(1/g)$, which is $O(Nr)$ since $R$ is $N \times r$ and $g$ is $r \times 1$.
2. Compute $B^T R D(1/g)$, which is $O(dNr)$ since $B^T: d \times N$ and $R D(1/g): N\times r$.
3. Compute $X = AB^T R D(1/g)$, which is $O(Ndr)$ since $A: N \times d$ and $B^T R D(1/g): d\times r$.

Thus, complexity reduces from $O(N^2)$ to $O(Ndr)$. 
The same efficiency gain applies when optimizing $R$ and $g$. Moreover, in terms of time efficiency, our model is comparable with some of the GNN-based models (HGT and TAG), and its inference time is far lower than that of YieldBERT. |Model|time (sec)| |-|-| |GCN|1.73| |HGT|4.68| |TAG|3.11| |GIN|1.69| |DeepReac+|0.9| |YieldBERT|84.98| |YIELDNET|3.44| > *USPTO ...not in the main Table 1?* We agree with the reviewer. We will revise Table 1 to include the USPTO results in the main text. > *....activation energy directly impacts yield; what is the precise relationship* For a single-step reaction assuming first-order kinetics, the relationship is as follows. Suppose $\Delta G^{\ddagger}$ represents the activation energy in J mol$^{-1}$. $k_B, h, R$ are the Boltzmann constant ($1.38\times 10^{-23}$ JK$^{-1}$), Planck constant ($6.626\times 10^{-34}$ Js), and universal gas constant ($8.314$ JK$^{-1}$mol$^{-1}$), respectively. Here, $t$ is the reaction time in sec. If "Temp" is the reaction temperature, then $k$, the first-order rate constant, is: $$ k = \dfrac{k_B \text{Temp}}{h} \exp\left(\dfrac{-\Delta G^{\ddagger}}{R\cdot\text{Temp}}\right),$$ Yield is given by: $$yield = (1 - \exp(-k t)) \times 100$$ > *conditions such as person executing the experiment, temperature, etc. .... I guess it's fair to distinguish between theoretical idealized yield and practical yield.* There are several factors that could influence yields. We do incorporate both catalyst and solvent, when present, into the reactant set R. Temperature was excluded as it is invariant in the case of high-throughput experimentation datasets (e.g., DF and NS datasets), which maintain consistent reaction conditions throughout, or varies mildly for others (e.g., SC). Other variabilities (e.g., purification procedure) are lacking in the state-of-the-art datasets, which is why we were unable to consider them. We discussed these factors in the conclusion. We will elaborate on them further if our paper gets accepted. 
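As a numerical sanity check of the kinetics relationship given earlier in this rebuttal (a minimal sketch using the constants quoted above; the function name is ours):

```python
import math

def predicted_yield(dG_act, temp, t):
    """Yield (%) of a single-step reaction under first-order kinetics,
    from activation energy dG_act (J/mol), temperature temp (K), time t (s)."""
    k_B = 1.38e-23    # Boltzmann constant, J/K
    h = 6.626e-34     # Planck constant, J s
    R = 8.314         # universal gas constant, J/(K mol)
    k = (k_B * temp / h) * math.exp(-dG_act / (R * temp))  # rate constant, 1/s
    return (1 - math.exp(-k * t)) * 100

# Higher activation energy -> slower reaction -> lower yield at fixed time
assert predicted_yield(100_000, 298, 3600) < predicted_yield(80_000, 298, 3600)
```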
> *why not incorporate the whole reaction and perform attention over that* Yes, $H_{CGR}$ captures the signals from atom mapping. We experimented with a transformer model where we incorporate the whole reaction and perform attention over all reaction components, without explicit CGR modeling. The following MAE values show that our method performs significantly better. | Method|NS1|NS2|NS3|SC| |-|-:|-:|-:|-:| |Whole-reaction-attention|11.356|9.459|9.342|10.686| |**YIELDNET**|**9.245**|**8.387**|**7.914**|**8.751**| Despite the increased model size, whole-reaction attention performs worse. We believe this is due to the fact that attention is non-injective in nature, i.e., multiple atoms from the reactants can be mapped to a single atom in the product. In contrast, permutations are injective: they provide a one-to-one mapping between atoms. One way to mitigate this problem of the transformer is to design a permutation-induced transformer, which we believe has strong potential in this direction. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for providing a detailed and comprehensive rebuttal. I appreciate the effort taken to address the questions raised in my review. The additional clarifications, analyses, and justifications provided were helpful and have certainly added clarity regarding the work presented. While these clarifications are valuable and appreciated, my overall assessment based on the points discussed leads me to maintain my current score. (If only OpenReview allowed for "comma point" increases – the added clarity certainly deserves acknowledgement, even if it doesn't shift the final recommendation!) --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for their positive review and encouraging comments.
Summary: The paper introduces YIELDNET, a neural yield prediction model designed to predict the yield of multi-step chemical reactions without explicit atom mapping supervision. The key contributions include a differentiable node alignment network to approximate atom mapping, the construction of a CGR as a surrogate for transition states, and a transformer-guided reaction path encoder to model multi-step reactions. The approach enables end-to-end learning with supervision only from yield values, outperforming all baselines across eight datasets. Claims And Evidence: The paper claims that YIELDNET can predict reaction yields more accurately than existing methods by leveraging an approximate transition state representation. This claim is supported by experimental comparisons showing significant improvements over baselines. The ablation studies on different components of the model, such as CGR representations, reaction path encoder components, and the regularizer, provide evidence for the importance of these elements in the model. Methods And Evaluation Criteria: The proposed methods make sense for the problem. The differentiable node alignment network is a novel approach to approximate atom mapping, which is a crucial step in the reaction yield prediction problem. The evaluation criteria, including using MAE and RMSE to measure the performance on test sets, are standard and reasonable for this type of prediction task. The datasets cover a variety of reaction types for evaluating the model's performance. Theoretical Claims: The paper does not present complex theoretical proofs. The design of the node alignment network is based on relaxation of a quadratic assignment problem (QAP) to a linear optimal transport (OT) problem. The authors claim that the alignment matrix obtained is the solution of the entropy regularized linear OT problem. 
While the theoretical background is well presented, it would be beneficial to have a more detailed proof or reference to a more in-depth theoretical analysis to ensure the correctness of this claim. Experimental Designs Or Analyses: The experimental setup is sound, with multiple datasets and comparisons against reasonable baselines. The paper provides quantitative evidence of YIELDNET's superiority. The ablation studies are well designed to analyze the impact of different components of the model. Supplementary Material: I reviewed the supplementary material, including Appendices A, B and E. Appendix A discusses the limitations of the work, which is useful for understanding the boundaries of the model. Appendix B provides a more comprehensive review of related work. Appendix E presents additional results, which are valuable for further analyzing the model's performance. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the broader scientific literature. In the area of reaction yield prediction, previous works mainly rely on quantum chemically computed molecular descriptors, molecular fingerprints, or SMILES-based representations. YIELDNET differentiates itself by leveraging graph neural networks and approximating atom mapping and transition states. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The combination of differentiable atom mapping, CGR approximation, and reaction path encoding in a single model for yield prediction is novel. 2. Strong empirical performance across multiple datasets. 3. The work has practical significance, as accurate yield prediction can help in chemical synthesis design and optimization. Weaknesses: The method may face challenges when applied to reactions involving larger molecules due to the $O(N^2)$ complexity of computing the alignment matrix $P$. Although the authors mention a possible mitigation strategy, it needs further verification. 
Other Comments Or Suggestions: No Questions For Authors: 1. What is the computational overhead of this method compared to baselines? The Sinkhorn operator involves iterative row and column normalization over a cost matrix. Given an $N \times N$ permutation matrix, the complexity is approximately $O(TN^2)$, where $T$ is the number of Sinkhorn iterations. Additionally, the gradient computation for backpropagation through Sinkhorn updates introduces further memory and compute costs. For a multi-step reaction with $n$ steps, the embeddings from each step are concatenated and passed through a transformer. 2. Would a lighter-weight model maintain accuracy while improving efficiency? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their suggestions, which we address as follows. > *theoretical analysis* We provide the following theoretical underpinning. We start with the QAP in Eq 8. If $\mathcal{P}$ is the set of permutations, then the QAP is $$\min _{P\in \mathcal{P}} \underbrace{\sum _{u,v} \max (Adj _R,P Adj _I P^{\top}) [u,v]} _{:=c(R,I;P)} --- (QAP) $$ The standard way to minimize it is to use Gromov-Wasserstein (GW) based projection [A1], where $P$ is updated as $$ P \leftarrow \text{argmin} _{P} \textrm{Trace}\left(P^T\nabla _{P}\ c(R,I;P) \right) -- (grad) $$ In our work, we provide a neural approximation of the gradient $\nabla _{P}\ c(R,I;P) \approx [\sum _{i=1} ^d \max (H _R[u,i], H _I [u',i])] _{u,u'}$. We use max to keep aligned with the max() in Eq (QAP). This approximates Eq (grad) as the following linear OT (Eq 9): $$ \min _{P\in \mathcal{P}} \underbrace{ \sum _{u,u'} \sum _{i=1} ^d \max(H _R[u,i], H _I [u',i]) P _{uu'}} _{F(P)} -- (a) $$ Eq (a) is an optimization over permutations, which may appear to be computationally hard. However, we can write it as an instance of the following optimization over the space of doubly stochastic matrices $B = (P: P \ge 0, P\mathbf{1} = P^T\mathbf{1} = \mathbf{1})$. $$ \min _{P}F(P), \text{ s.t. } P\in B --- (b) $$ Eq (b) is equivalent to Eq (a) since it is a linear program. The optimal solution of a linear program always coincides with some vertex of the polytope induced by the set of linear constraints. Thus, the optimal solution of Eq (b) is a permutation matrix $P^*$, as the permutation matrices are the vertices of $B$, the space of doubly stochastic matrices. However, $\arg\min _{P}F(P)$ in Eq (b) is non-differentiable. To enable differentiation, we approximate it using Sinkhorn iterations, analogous to how argmax is approximated using softmax. 
Given $n$ numbers $G_1,\dots,G_n$, $\ \mathbb{1}[i=\arg \min _{j} G_j] \approx p_i=\frac{e^{-G_i/\lambda}}{\sum _{j}e^{-G _{j}/\lambda}}$, where $\mathbb{1}$ is the indicator function. One can show that $p_i: i\in [n]$ minimizes the following entropy-regularized optimization problem: $$\min _{p} \sum _{j} p _j G _j +\lambda \sum _{j} p _j \log(p _j),\text{ such that } 0 \le p _j \le 1, \sum _{j} p _j =1$$ Using the same technique, we can show that the Sinkhorn mechanism minimizes the entropy-regularized linear OT problem $\min _P F(P) - \lambda \text{Entropy}(P)$ [A2]. [A1] Xu et al. Gromov-Wasserstein Learning for Graph Matching and Node Embedding, ICML 2019 [A2] Mena et al. Learning Latent Permutations with Gumbel-Sinkhorn Networks. ICLR 2018. > *Computational overhead* The following table shows the computational overhead in terms of inference time. It shows that our model is comparable with some of the GNN-based models (HGT, TAG), and its inference time is far lower than that of YieldBERT. |Model|time (sec)| |-|-| |GCN|1.73| |HGT|4.68| |TAG|3.11| |GIN|1.69| |DeepReac+|0.9| |YieldBERT|84.98| |YIELDNET|3.44| Moreover, the training time per epoch for our model is 8 seconds (s), which is comparable with baselines: 6s for HGT, 10s for YieldBERT (with the same batch size). *Efficiency enhancement:* Low-rank OT provides highly efficient Sinkhorn iterations; see Scetbon et al., Low-Rank Sinkhorn Factorization, ICML 2021. One first approximates $M = \sum _{i=1} ^d \max (H _R[u,i], H _I [u',i])\approx AB^T$ where $A$ and $B$ are $N \times d$ low-rank matrices. Similarly, we approximate the transport matrix $P$ as $P \approx Q D(1/g) R^T$ where $Q, R$ are $N \times r$ matrices, $g$ is $r \times 1$, and $D(1/g)$ is a diagonal matrix with entries $1/g_i$. 
To ensure $P$ is doubly stochastic, we enforce: $$Q\mathbf{1} = \mathbf{1}, \quad \mathbf{1}^T Q = g^T$$ $$R\mathbf{1} = \mathbf{1}, \quad \mathbf{1}^T R = g^T$$ $$g^T \mathbf{1} = 1, \quad Q, R, g > 0$$ Instead of minimizing $Tr(P^T M)$, we minimize: $Tr((Q D(1/g) R^T)^T AB^T)$ using alternating minimization wrt $Q, R, g$. To optimize $Q$, we rewrite: $$ Tr((Q D(1/g) R^T)^T AB^T) = \langle Q, AB^T R D(1/g) \rangle $$ We apply Sinkhorn iterations to: $X = AB^T R D(1/g)$, computed in 3 steps. Note that complexity of multiplying $m \times n$ and $n \times p$ matrices is $O(mnp)$ 1. Compute $R D(1/g)$, which is $O(Nr)$ since $R$ is $N \times r$, $g$ is $r \times 1$. 2. Compute $B^T R D(1/g)$ which is $O(dNr)$ since $B^T: d \times N$ and $R D(1/g): N\times r$ 3. Compute $X = AB^T R D(1/g)$ which is $O(Ndr)$ since $A: N \times d$ and $B^T R D(1/g): d\times r$ Thus, complexity reduces from $O(N^2)$ to $O(Ndr)$. The same efficiency gain applies when optimizing $R$ and $g$. > *lighter model* We replaced some complex components with simpler components in our model. MAE numbers are as follows. ||NS1|NS2|NS3| |-|-|-|-| |step-Aggr=SumAggr|10.19|8.86|8.35| |No-Transformer|9.47|8.68|8.16| |Our|9.25|8.39|7.91| We observe that there is some drop in performance. Depending on available resources, one can decrease the complexity to build a lighter model.
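For readers unfamiliar with the Sinkhorn mechanism referenced throughout this rebuttal, a minimal sketch (an illustrative toy, not the model's actual code):

```python
import numpy as np

def sinkhorn(C, lam=0.1, n_iters=100):
    """Differentiable surrogate for argmin over permutations: maps a cost
    matrix C to an (approximately) doubly stochastic matrix via repeated
    row and column normalization of exp(-C / lam)."""
    P = np.exp(-C / lam)
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # rows sum to 1
        P = P / P.sum(axis=0, keepdims=True)  # columns sum to 1
    return P

rng = np.random.default_rng(0)
P = sinkhorn(rng.random((6, 6)))
assert np.allclose(P.sum(axis=0), 1, atol=1e-3)
assert np.allclose(P.sum(axis=1), 1, atol=1e-3)
```

As $\lambda \to 0$, $P$ concentrates toward a hard permutation matrix, mirroring how softmax approaches argmax at low temperature.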
Improving Continual Learning Performance and Efficiency with Auxiliary Classifiers
Accept (poster)
Summary: The paper proposes a new method for continual learning, which trains auxiliary classifiers on top of the intermediate features of the deep neural network along with the main neural network. The method is motivated by the observation that, in the process of class incremental continual learning, intermediate features change less drastically than the final layer(s), and some of the auxiliary classifiers can achieve better performance than the final classifier. Results show that auxiliary classifiers can improve the performance of a number of competitive continual learning approaches, including replay, regularization, and architectural methods. With dynamic layer skipping, the ACs can also enable cheaper inference cost while keeping the same continual learning performance. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and/or evaluation criteria make sense for the problem at hand. CIFAR 100x10 and ImageNet 100x10 are popular and standard evaluation benchmarks for class incremental continual learning in computer vision and classification. The baseline methods that the authors choose are good representatives of different types of continual learning algorithms. Theoretical Claims: N/A. The paper does not include any proofs for theoretical claims in the main text. Experimental Designs Or Analyses: The experimental designs and analysis in the paper are sound and valid. The main experiment shown in table 1 demonstrate the effectiveness of the auxiliary classifiers across a range of different CL methods. The additional experiments shows in table 2 and figure 7 demonstrate the effectiveness of the auxiliary classifiers across different neural network architectures. Supplementary Material: I reviewed the sections of the supplementary material that are referred to in the main text, namely sections A, D, and E. 
Relation To Broader Scientific Literature: The proposed method is highly related to previous works on overthinking in neural networks and early-exit classifiers for cheaper inference in neural networks. This paper studies these topics in the context of continual learning. The observation that overthinking is more apparent in continual learning motivates the auxiliary classifier method. Early-exit classifiers motivate the dynamic layer skipping approach proposed in the paper for the inference cost vs. accuracy tradeoff. The paper is also highly related to previous works on using intermediate features in continual learning. The prior finding that intermediate features are more stable motivates the auxiliary classifier method. Essential References Not Discussed: The related works are sufficiently discussed in the paper. Other Strengths And Weaknesses: Strengths: - The proposed method is well motivated and draws inspiration from prior works in continual learning, neural network overthinking, and early-exit classifiers. It provides a natural tradeoff between final accuracy and inference cost. - The analysis of overthinking in continually-learned models is a novel contribution. Weaknesses: - If we just care about the optimal final accuracy (the point at 100% inference cost), the amount of improvement provided by auxiliary classifiers seems limited for more recent methods, especially considering the additional training overhead of these auxiliary classifiers. - (minor) The observation that intermediate features are more stable in continual learning methods is not a novel one, as mentioned in the paper in lines 186-189. Other Comments Or Suggestions: 1. The Riemer et al., 2018 paper is referred to as "ER" in the paper, which is a little bit confusing. I believe "ER" usually refers to naive Experience Replay (the FT+Ex setting in this paper), while the Riemer et al., 2018 paper is referred to as MER (Meta Experience Replay). 
In the provided anonymous code base, the "er.py" file also seems to only contain code for naive experience replay. I would like the authors to confirm that the "ER" columns in the tables actually refer to the method proposed in the Riemer et al., 2018 paper. 2. I have some difficulty understanding what Figure 4a is plotting exactly ("we plot overthinking normalized by the accuracy of the final classifier" seems a bit unclear). I would like the authors to more clearly define how the numbers plotted in Figure 4a are calculated. Questions For Authors: 1. In Figure 2, the earlier layers (such as AC1) change least during the process of continual learning, while AC6 changes most (closest to the final layer). However, in Figure 3, we notice that AC6 provides the most performance improvement for a majority of the tasks in non-trivial CL methods. Is there an intuition on why this is the case? This seems to contradict the motivation provided in Section 3.1 that the intermediate features are more useful for classification in CL because they are more stable. 2. Is there any justification for using the confidence score to choose which auxiliary classifier to use for the final prediction (instead of, say, a majority vote of all the auxiliary classifiers), especially given that neural networks are often not very well calibrated? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent on our work and the suggestions for improving its quality. Below, we address the points raised by the Reviewer. > If we just care about the optimal final accuracy (the point at 100% inference cost), the amount of improvement provided by auxiliary classifiers seems limited for more recent methods, especially considering the additional training overhead of these auxiliary classifiers. We do not consider ACs a standalone method, but rather an add-on method that enhances existing approaches. Our experiments demonstrate that ACs enable inference cost savings while providing moderate performance improvements, even for newer methods such as LODE, where we both improve the performance and save up to 20% computation without any performance degradation, or save 50% compute while keeping the same performance as the baseline (see Figure 1). > The observation that intermediate features are more stable in continual learning methods is not a novel one, as mentioned in the paper in lines 186-189. As the Reviewer notes, we acknowledge prior work on the stability of intermediate features, and we treat it as a motivation for our approach. To our knowledge, the idea of leveraging intermediate features to enhance both efficiency and performance in continual learning has not been explored before. Our work builds upon the known observations and introduces a novel approach to utilize them effectively. > The Riemer et al., 2018 paper is referred to as "ER" in the paper, which is a little bit confusing. (...) I would like the authors to confirm that the "ER" columns in the tables actually refer to the method proposed in the Riemer et al., 2018 paper. We thank the reviewer for pointing out this inaccuracy. The "ER" in our work refers to Experience Replay as described in [1], and we will update the paper to make this clear. 
To clarify the distinction between FT+Ex and ER: FT+Ex represents the most naive form of replay, where stored data is simply added to the training dataset for each task without modifying batching or sampling procedures. In contrast, ER ensures that each batch maintains a balanced number of samples from both old and new classes and involves oversampling from the memory. Therefore, ER is a more sophisticated replay method. > I have some difficulty understanding what Figure 4a is plotting exactly ("we plot overthinking normalized by the accuracy of the final classifier" seems a bit unclear). (...) We appreciate the Reviewer's feedback. Given the oracle accuracy $Acc_{oracle}$ and the final classifier accuracy $Acc_{final}$, the normalized accuracy in Figure 4a is simply $\frac{Acc_{oracle} - Acc_{final}}{Acc_{final}}$. We will clarify this in the updated version of the paper. > In Figure 2, the earlier layers (such as AC1) change least during the process of continual learning, while AC6 changes most (closest to the final layer). However, in Figure 3, we notice that AC6 provides the most performance improvement for a majority of the tasks in non-trivial CL methods. Is there an intuition on why this is the case? (...) Thank you for the question. Early feature stability for older tasks is a motivation behind our approach, but it does not conflict with the better discriminative abilities of the classifiers attached to the later layers, especially on the new data. Figure 2 demonstrates classifier performance in isolation; thus, later classifiers will exhibit higher downstream accuracy. Our approach leverages both of these properties to enhance overall performance in CL. Note that with different ACs we learn to classify samples based on diverse features (e.g., more primitive features in early layers and more sophisticated features in the later ones), which improves the robustness of our method (as evidenced by our analysis of overthinking). 
We will elaborate on this in more detail in the updated version of the paper. > Is there any justification on using the confidence score to choose which auxiliary classifier to use for the final prediction (...), especially given that neural networks are often not very well calibrated? Our choice of the dynamic inference aggregation rule was primarily empirical. Methods that combine multiple predictions (e.g., voting, weighted ensembling) introduce additional complexity without yielding clear improvements (see our analysis of cascading and ensembling classifiers in Appendix C.2). Therefore, we opted for the simpler approach of using a single classifier's output. We conducted an ablation study comparing the use of the final classifier prediction versus selecting the prediction with the highest confidence (Appendix E.5), and opted to use the best-performing method. See also the response to Reviewer FvJb about thresholding strategies. **References** [1] Chaudhry et al., *"Continual learning with tiny episodic memories."*, Workshop on Multi-Task and Lifelong Reinforcement Learning (2019).
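A minimal sketch of the single-classifier selection rule described above (hypothetical code; `predict_max_confidence` and its input layout are our naming, not the paper's implementation):

```python
import numpy as np

def predict_max_confidence(logits_per_ac):
    """For each sample, return the prediction of the classifier whose
    softmax confidence (max probability) is highest.
    logits_per_ac: list of (batch, n_classes) arrays, one per classifier."""
    probs = []
    for l in logits_per_ac:
        e = np.exp(l - l.max(axis=-1, keepdims=True))  # stable softmax
        probs.append(e / e.sum(axis=-1, keepdims=True))
    conf = np.stack([p.max(axis=-1) for p in probs])      # (n_acs, batch)
    preds = np.stack([p.argmax(axis=-1) for p in probs])  # (n_acs, batch)
    best = conf.argmax(axis=0)                            # (batch,)
    return preds[best, np.arange(preds.shape[1])]

# Two classifiers, two samples: each sample uses its most confident classifier
logits = [np.array([[10., 0.], [0., 1.]]),
          np.array([[0., 2.], [5., 0.]])]
assert predict_max_confidence(logits).tolist() == [0, 0]
```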
Summary: The paper shows that intermediate representations are less prone to forgetting in the continual learning paradigm, which is aligned with previous observations in the literature. Based on this finding, the authors propose using auxiliary classifiers in the intermediate layers and show that the method consistently outperforms a single classifier when combined with various continual learning techniques. ## Update after rebuttal: The authors have addressed my comments. I will keep my positive score. Claims And Evidence: Yes, the paper provides empirical evidence and analyses for its claims. The method is combined with various continual learning techniques and evaluated on two benchmarks with different numbers of tasks. Different architectures are also analyzed. Methods And Evaluation Criteria: Yes, though since the method focuses on forgetting, I expect to see a metric to measure it in the experimental results. However, only the final accuracy is reported. Theoretical Claims: The paper doesn't include theoretical claims. Experimental Designs Or Analyses: Yes. The experimental design is reasonable. One comparison that could make it stronger is a comparison against a shallow architecture (with perhaps wider layers), as it is also less prone to forgetting. Supplementary Material: Yes. Relation To Broader Scientific Literature: To my knowledge, exploring the idea of early-exit classifiers in the continual learning paradigm is novel, but it exists in other paradigms, as acknowledged in the paper. Essential References Not Discussed: To my knowledge, the paper covers the essential references. Other Strengths And Weaknesses: Strengths: - The method is simple, yet effective - Extensive experiments are provided - Generalization across different architectures is analyzed - Paper is well-written Weaknesses: - The forgetting metric is missing in the results - Since the performance relies on earlier layers, reporting the achieved accuracy of task t at time t is useful. 
Other Comments Or Suggestions: - CKA is mentioned in Section 1 without definition. - Figure 2 is not clear until one reads the details in Section 3.1 (despite being discussed in Section 1). Questions For Authors: 1. Have you compared against shallow networks? 2. Can you report the forgetting? 3. Do you think that these findings generalize to LLMs? And where would you place ACs in that case? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent on our work and the suggestions for improving its quality. Below, we address the points raised by the Reviewer. > Since the method focuses on forgetting, I expect to see a metric to measure it in the experimental results. We appreciate the Reviewer’s suggestion and have now included forgetting metrics for CIFAR-100. Our method generally helps in mitigating forgetting. Even in cases where we see more forgetting with ACs, it still outperforms the baseline in terms of accuracy (see Table 1). |||FT|FT+Ex|GDumb|ANCL|BiC|DER++|ER|EWC|LwF|LODE|SSIL|Avg| |:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-| |CIFAR100x5|Base|64.85$\pm$1.27|**44.80$\pm$0.79**|12.01$\pm$0.63|36.94$\pm$0.79|9.33$\pm$2.89|7.42$\pm$4.17|47.25$\pm$0.36|64.65$\pm$1.13|24.21$\pm$2.69|**20.61$\pm$1.14**|24.80$\pm$1.08|32.44$\pm$19.83| ||+AC|**50.04$\pm$1.18**|45.51$\pm$0.66|**10.00$\pm$0.47**|**26.32$\pm$1.15**|**9.26$\pm$4.00**|**3.68$\pm$4.05**|**40.31$\pm$0.84**|**48.04$\pm$1.34**|**19.95$\pm$0.57**|**20.61$\pm$0.96**|**19.95$\pm$1.12**|**26.70$\pm$15.90**| |CIFAR100x10|Base|70.35$\pm$2.42|49.57$\pm$1.25|14.11$\pm$1.08|40.49$\pm$1.55|9.31$\pm$2.43|11.88$\pm$4.27|51.77$\pm$1.11|68.40$\pm$2.74|**24.26$\pm$2.12**|**18.49$\pm$1.50**|21.44$\pm$2.74|34.55$\pm$21.51| ||+AC|**56.40$\pm$2.97**|**48.03$\pm$1.05**|**10.68$\pm$1.44**|**30.60$\pm$1.46**|**6.37$\pm$3.81**|**8.39$\pm$5.10**|**46.62$\pm$1.33**|**54.08$\pm$2.30**|29.12$\pm$1.01|20.24$\pm$2.12|**16.71$\pm$1.96**|**29.75$\pm$17.96**| Assuming we train on total of $T$ tasks, we compute forgetting as $\frac{1}{T-1} \sum_{t=1}^{T-1} (Acc_{t}^{t} - Acc_{t}^{T})$, where $Acc_{i}^{j}$ refers to the accuracy on the i-th task after training on j-th task (we skip T-th task, as it has zero forgetting according to this definition). We will add these results to the paper. > One comparison that could make it stronger is the comparison against shallow architecture (with perhaps wider layers). 
Thank you for the suggestion. To address this, we evaluated our method using WideResNet16 [1] on CIFAR100, which is shallower than the ResNet-32, but twice wider. Consistent with previous results, the AC-enhanced methods outperform the baselines. |||FT|FT+Ex|GDumb|ANCL|BiC|DER++|ER|EWC|LwF|LODE|SSIL|Avg| |:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-| |CIFAR100x5|Base|19.40$\pm$0.30|38.50$\pm$0.64|19.17$\pm$0.95|44.18$\pm$0.86|50.73$\pm$1.40|34.88$\pm$7.28|33.56$\pm$0.66|19.09$\pm$0.17|43.17$\pm$0.35|43.72$\pm$0.85|48.95$\pm$0.43|35.94$\pm$0.62| ||+AC|**29.79$\pm$1.94**|**40.67$\pm$0.37**|**24.09$\pm$1.16**|**46.09$\pm$0.75**|**53.38$\pm$0.68**|**46.02$\pm$1.08**|**39.61$\pm$0.64**|**32.58$\pm$0.37**|**44.26$\pm$0.02**|**50.83$\pm$0.30**|**51.35$\pm$0.53**|**41.70$\pm$0.23**| |CIFAR100x10|Base|9.13$\pm$0.74|34.39$\pm$0.80|21.94$\pm$0.26|33.32$\pm$0.50|46.02$\pm$0.29|34.28$\pm$3.80|30.56$\pm$0.50|10.10$\pm$0.35|32.59$\pm$1.05|40.22$\pm$0.27|45.58$\pm$0.28|30.74$\pm$0.52| ||+AC|**16.77$\pm$1.51**|**37.92$\pm$0.12**|**27.81$\pm$0.46**|**35.28$\pm$0.34**|**48.79$\pm$0.32**|**43.57$\pm$0.63**|**36.32$\pm$0.74**|**21.37$\pm$1.90**|**34.05$\pm$0.94**|**46.28$\pm$0.20**|**47.01$\pm$0.66**|**35.92$\pm$0.24**| > Since the performance relies on earlier layers, reporting the achieved accuracy of task t at time t is useful. Thank you for this suggestion. For the Reviewer’s convenience, we plotted such accuracies ([link](https://imgur.com/a/0O3btHy)). We will add those plots to the paper appendix. > CKA is mentioned in Section 1 without definition. Figure 2 is not clear (...). Thank you for the feedback. While CKA is a well-established metric in the literature, we acknowledge that adding a brief definition will improve clarity, and we will include it as suggested. > Do you think that these findings generalize to LLMs? And where to place ACs in that case? 
Prior work, such as [2], explores the usage of early exits in LLMs, suggesting that our approach could, in principle, be applicable in that context. The placement and architecture of ACs in LLMs could follow a similar strategy to what we describe for ViTs (see Appendix A). However, we are cautious about making direct claims regarding the transferability of our findings to LLMs. Our work explores class-incremental learning in vision, which differs fundamentally from language modeling, where tokenization constrains the class space. ACs might be applied to domain-incremental scenarios in LLMs, but investigating this requires further research beyond our current scope. **References**: [1] Zagoruyko et al., *"Wide Residual Networks"*, BMVC 2016 [2] Schuster et al., *"Confident Adaptive Language Modeling"*, NeurIPS 2022 --- Rebuttal Comment 1.1: Comment: Thank you for your response and for providing extra results addressing my comments. I would encourage the authors to include the forgetting for ImageNet as well in the revised version. Analyzing your new results on the shallow and wide architecture, it seems that just using 'WideResNet16 base' outperforms 'base ResNet-32 + AC' in multiple cases (like FT+Ex, ANCL, LwF, BiC), right? I am wondering what could be the motivation behind using the proposed method over shallow networks? --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our response. We will follow the Reviewer's advice to include forgetting for ImageNet in the revised version, but we need a bit more time to parse the results and re-compute some of the runs where we lost the model checkpoints. We agree that the introduction of these results should further improve our paper. Regarding shallow vs. wide architectures, we based our experimental setup on common class-incremental learning (CIL) benchmarks from the FACIL survey [1], which uses ResNet32 (0.47M parameters) for main experiments on CIFAR100 due to its balance between model size and performance. 
While deeper models like ResNet18 (11.2M) or VGG19 (39.3M) are sometimes used in CIL for CIFAR100, they offer no clear advantage while significantly increasing computational cost. WideResNets are less commonly used in CIL, mainly appearing in ablation studies. Upon some investigation, it seems that such wider networks are indeed overlooked in CIL: e.g., [2] argues that wider models are less prone to forgetting, and Table 1 from [3] in particular shows that, on CIFAR100, WideResNet16-2 (the 0.7M-parameter variant we used for the results in this rebuttal) performs similarly to ResNet18 (11.2M parameters). However, our primary goal is not to optimize architecture selection but to demonstrate the general applicability of ACs in continual learning. We thank the Reviewer for providing an insightful suggestion for another evaluation case for our idea, which further strengthens our paper. **References** [1] Masana et al., *"Class-incremental learning: survey and performance evaluation on image classification"*, TPAMI 2022 [2] Mirzadeh et al., *"Wide neural networks forget less catastrophically"*, ICML 2022 [3] Mirzadeh et al., *"Architecture Matters in Continual Learning"*, 2022 **(edit) Updated partial forgetting data on ImageNet100 with ResNet18** For the Reviewer's sake, we present forgetting data for the ImageNet100 runs computed so far. We will update the paper with the full table containing all the methods, but we were unable to do so in the short rebuttal period due to computational constraints. 
|Setting|Setup|FT|FT+Ex|GDumb|BiC|ER|EWC|LwF|SSIL|Avg| |:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-| |ImageNet100x5|Base|74.35$\pm$0.63|52.00$\pm$0.75|16.73$\pm$0.82|10.37$\pm$1.70|56.14$\pm$1.27|74.49$\pm$0.97|18.90$\pm$0.73|21.90$\pm$0.88|40.61$\pm$24.91| ||+AC|61.45$\pm$1.30|49.97$\pm$1.00|15.98$\pm$1.07|9.76$\pm$1.05|49.65$\pm$1.25|61.77$\pm$0.61|17.49$\pm$0.76|19.68$\pm$0.80|35.72$\pm$20.59| |ImageNet100x10|Base|77.50$\pm$1.54|57.50$\pm$1.40|19.67$\pm$1.37|12.73$\pm$3.38|60.38$\pm$1.42|76.70$\pm$1.40|23.76$\pm$1.49|19.75$\pm$1.69|43.50$\pm$25.51| ||+AC|68.44$\pm$1.54|54.67$\pm$1.73|18.12$\pm$1.47|12.84$\pm$2.71|56.06$\pm$1.74|68.13$\pm$1.18|18.17$\pm$0.90|18.90$\pm$1.80|39.42$\pm$22.94|
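For readers unfamiliar with how forgetting numbers like those in the table above are typically obtained, the common average-forgetting metric can be computed from a per-task accuracy matrix. The following is a generic pure-Python sketch of the standard definition, not the authors' evaluation code, and `acc` is a hypothetical lower-triangular accuracy matrix:

```python
def average_forgetting(acc):
    """acc[t][k] is the accuracy on task k after training on task t (k <= t).
    Forgetting of task k is the best accuracy ever achieved on task k
    before the final task, minus its accuracy after the final task."""
    T = len(acc)
    per_task = []
    for k in range(T - 1):
        best_earlier = max(acc[t][k] for t in range(k, T - 1))
        per_task.append(best_earlier - acc[T - 1][k])
    return sum(per_task) / len(per_task)

# Toy 3-task run: task 0 drops from 0.90 to 0.60, task 1 from 0.80 to 0.75.
acc = [
    [0.90],
    [0.70, 0.80],
    [0.60, 0.75, 0.85],
]
avg_f = average_forgetting(acc)  # (0.30 + 0.05) / 2 = 0.175
```

Backward transfer (BWT), mentioned elsewhere in this discussion, is under this convention simply the negative of the same quantity.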
Summary: The manuscript analyzes the effect of class-incremental training on the parameters of a neural network, finding that deeper layers tend to be more affected by catastrophic forgetting. To exploit this, it introduces a series of Auxiliary Classifiers (AC), finding that their combination can make the predictions significantly more robust. Claims And Evidence: The claims are convincing. Methods And Evaluation Criteria: The choice of benchmarks and competitors seems adequate. Theoretical Claims: N/A Experimental Designs Or Analyses: The choice of benchmarks and competitors is adequate. However, I found the use of a single seeded run for the results of the ViT-based models (lines 301-302) to be concerning. In my opinion, for a proper evaluation the tests should be repeated across multiple runs without seeds. Supplementary Material: I checked the code, the additional details regarding the experimental settings, and the additional results. Relation To Broader Scientific Literature: While the analysis of the change in the intermediate representations in a class-incremental environment has been conducted in several other works [1,2,3] (with [1,2] already cited in the manuscript), the use of auxiliary classifiers to take advantage of the smaller changes in parameters of shallower layers seems novel to me. [1]: Ramasesh, V. V. et al. “Anatomy of catastrophic forgetting: Hidden representations and task semantics.” In ICLR 2020. [2]: Zhao, Haiyan et al. “Does continual learning equally forget all parameters?." ICML 2023. [3]: Boschini et al. "Transfer without forgetting." ECCV 2022. Essential References Not Discussed: N/A Other Strengths And Weaknesses: - While the manuscript investigates the change in the intermediate representations for a backbone trained from scratch, it could be interesting to analyze the effect of AC on pre-trained backbones, especially in light of [1] which conducts the same initial tests with CKA for pre-trained networks.
- How much is the overhead of computing all gradients for the losses of the ACs (e.g., in terms of required GPU memory and training time with respect to the baseline)? [1]: Boschini et al. "Transfer without forgetting." ECCV 2022. Other Comments Or Suggestions: N/A Questions For Authors: See section on "Other Strengths And Weaknesses" Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent on our work and the suggestions for improving its quality. We appreciate the Reviewer’s mention of *Boschini et al., "Transfer without forgetting." ECCV 2022*; we will include this paper in the updated related works. Below, we address the points raised by the Reviewer. > I found the use of a single seeded run for the results of the ViT-based models (lines 301-302) to be concerning. In my opinion, for a proper evaluation the tests should be repeated across multiple runs without seeds. We appreciate the Reviewer’s concern. Due to computational constraints, we initially conducted these experiments with a single seed, as ViT-based models are particularly expensive to train. In response to the Reviewer’s suggestion, we have now rerun the ViT experiments from Figure 7 using two additional seeds (three in total). The updated figure, including confidence intervals, is available at [link](https://imgur.com/JTuI5wn), demonstrating that our findings remain consistent across multiple runs. We will incorporate these results into the revised version of our paper. > While the manuscript investigates the change in the intermediate representations for a backbone trained from scratch, it could be interesting to analyze the effect of AC on pre-trained backbones, especially in light of [1] which conducts the same initial tests with CKA for pre-trained networks. Thank you for the interesting suggestion. We conducted warm-start continual learning experiments (Appendix B.1) as a controlled alternative to pretraining, empirically demonstrating that our approach remains effective in such a setting. We consider the investigation of pre-trained models with ACs a promising future work direction. > How much is the overhead of computing all gradients for the losses of the ACs (e.g., in terms of required GPU memory and training time with respect to the baseline)? We appreciate the Reviewer’s question. 
We measured training times and peak memory usage for training ResNet32 with different numbers of ACs on CIFAR-100, summarized in the tables below. "0 ACs" refers to a standard network. We report training times in hours (mean from three runs) and peak memory usage in GB. For memory, we include one table, as it does not depend on the data split. ## Training times (in hours) - 5 tasks |ACs|ANCL|BiC|ER|EWC|FT|FT+Ex|GDUMB|LODE|LwF|SSIL|Avg| |-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:| |0|2.1|1.4|2.4|1.2|1.1|1.3|0.6|3|1.2|1.7|1.6| |3|2.7|1.8|2.8|1.5|1.3|1.5|0.8|3.6|1.5|2.1|2| |6|3.3|2.1|3.3|1.8|1.6|1.7|1.1|4.3|1.8|2.5|2.4| |12|4.5|2.9|4|2.5|2|2.3|1.5|5.5|2.5|3.2|3.1| ## Training times (in hours) - 10 tasks |ACs|ANCL|BiC|ER|EWC|FT|FT+Ex|GDUMB|LODE|LwF|SSIL|Avg| |-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:| |0|2.8|2|2.5|1.4|1.4|1.7|1.2|3.3|1.4|1.8|1.9| |3|3.6|2.5|3.2|1.9|1.6|2|1.5|4.2|1.9|2.3|2.5| |6|4.4|3|3.8|2.2|1.9|2.4|1.7|5.2|2.4|2.9|3| |12|6.8|4.2|4.8|3|2.4|3.3|2.3|6.9|3.5|3.9|4.1| # Peak GPU memory usage (GB) |ACs|ANCL|BiC|ER|EWC|FT|FT+Ex|GDUMB|LODE|LwF|SSIL|Avg| |-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:|-:| |0|2.1|2.09|2.1|2.1|2.1|2.1|2.39|2.1|2.1|2.18|2.14| |3|2.29|2.22|2.18|2.56|2.25|2.18|2.56|2.22|2.22|2.29|2.3| |6|2.47|2.32|2.27|2.73|2.41|2.27|2.73|2.31|2.3|2.41|2.42| |12|2.82|2.5|2.43|3.02|2.72|2.43|3.03|2.49|2.5|2.65|2.66| Our standard setup (6 ACs) results in approximately 50% training time and 10% memory overhead. The exact overheads will depend on the model, AC architecture, and training environment. However, we do not consider this overhead prohibitive for real-world use cases, particularly in offline class-incremental learning, where training resource constraints are not usually strict; therefore, we also did not optimize the code towards such constraints. As our primary concerns are efficiency and performance during the inference time, we consider the discussed overheads acceptable. 
We appreciate the Reviewer raising this important point, and we will incorporate this discussion in the paper.
Summary: This paper introduces Auxiliary Classifiers (ACs) to enhance performance and efficiency in Continual Learning (CL) by leveraging intermediate representations in neural networks. The primary challenge is catastrophic forgetting, where new knowledge acquisition disrupts previously learned information. The key conceptual idea is that intermediate neural network representations exhibit higher stability than final-layer representations, making them less prone to forgetting. The authors propose attaching auxiliary classifiers (ACs) to intermediate layers to improve accuracy by reducing overfitting in later layers, accelerate inference through early classification when confidence is high, and enhance generalization by leveraging more stable feature representations. The paper evaluates ACs across multiple continual learning methods using CIFAR100 and ImageNet100 datasets. Claims And Evidence: * The paper is mostly clearly written, making it easy to follow the authors' claims and methodologies. However: * Figure 3 is unclear. It appears that all auxiliary classifiers outperform the final classifier (non-negative accuracy differences). If this is not the intended message, the figure needs clarification or revision. * Figure 5 is not clearly presented. Specifically, what do the check marks and x-marks represent? Clarification is also needed on how this figure illustrates static inference as described in the accompanying paragraph. Methods And Evaluation Criteria: * The experimental design is thorough, valid, and effectively covers relevant scenarios. * Continual Learning literature commonly reports metrics such as average accuracy and average backward transfer (BWT). However, this paper primarily focuses on final-task accuracy. I believe Tables 1 and 2, and Figures 1 and 6, should include average accuracy and BWT.
* The authors provide inference costs, but training costs should also be reported, given that adding auxiliary classifiers, both with and without gradient detachment, could significantly impact computational overhead. Theoretical Claims: * The paper does not present explicit theoretical claims or proofs. Experimental Designs Or Analyses: * The experimental design is sound and comprehensive, covering several baseline comparisons and tasks relevant to continual learning. * However, clarity regarding the training procedures of auxiliary classifiers (e.g., incremental class additions, gradient propagation handling) needs improvement. For example, how are new classes incrementally added for each AC? Are ACs initialized with a fixed number of classes, or do classes expand progressively? Additionally, the authors mention "enabled gradient propagation" in Section 3.2, but it is unclear whether this explicitly means gradients are not detached. If this is the case, could the authors clarify the reasoning behind this choice? Moreover, could gradient propagation negatively affect intermediate representations, potentially causing performance degradation, as suggested by the LwF case in Figure 4(d)? Supplementary Material: * Read B.1, B.2, D.1, D.2, E.5 Relation To Broader Scientific Literature: The paper distinguishes the novel usage of intermediate auxiliary classifiers from existing CL techniques. Essential References Not Discussed: No essential missing references were identified in the current version of the paper. Other Strengths And Weaknesses: None Other Comments Or Suggestions: * Figures and their corresponding textual explanations often appear on separate pages, negatively impacting readability. Repositioning figures closer to their descriptions would greatly enhance readability. * For Figures 1 and 6, it should be stated whether the accuracy is average task accuracy or final-task accuracy.
* Including explicit accuracy values for each AC in Figure 4 would be helpful, as parts (b) and (c) currently show only relative performance. * Defining overthinking as "cases where samples correctly classified by early classifiers are misclassified by the final classifier" is intuitive but may oversimplify the phenomenon. A more nuanced justification or further context would strengthen this definition. Questions For Authors: * Do ACs also suffer catastrophic forgetting when trained with and without gradient detachment? If ACs are continually trained alongside the final classifier, it seems plausible that ACs would also forget older tasks. Could you discuss or demonstrate the robustness of ACs across sequential tasks? * While the paper demonstrates robustness in early representations, it also acknowledges later layers’ superior ability to classify larger subsets. What do you believe explains this phenomenon? * In Figure 4, "unique overthinking" appears relatively minor (around 10%). Could the authors elaborate further on the significance or implications of this proportion? * If confidence measures play a key role in your inference strategy, it would be valuable to analyze and report confidence metrics explicitly for each AC. * A common viewpoint in deep learning literature suggests that early layers learn general representations, while later layers acquire task-specific representations. While the authors claim that early layers do not "forget old tasks," it could also be possible that the apparent stability of early layers reflects learned shared characteristics (e.g., color, shape) rather than true resistance to forgetting. What is your perspective on this interpretation? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for all the suggestions. Below, we address the points mentioned by the Reviewer. We will include the clarifications and the Reviewer’s suggestions in the updated version of our paper. ## Accuracy metrics and backward transfer (BWT) In all our experiments, we report the average accuracy across all classes after the final task. BWT can be seen as the negative of forgetting, which we discussed in the response to Reviewer Y8XT. It is hard to observe BWT consistently across the whole training course in our setting and to interpret this metric, so, like most class-incremental learning works, we do not report it (the metric is mostly used in task-incremental settings). However, BWT can be derived from the per-task accuracy plots we added in response to Reviewer Y8XT ([link](https://imgur.com/a/0O3btHy)). ## Early vs. later layer representation stability We agree with the reviewer that feature-sharing is a useful concept when explaining the results observed in our paper. Early-layer representations tend to capture shared, low-level features that are less task-specific, making them more stable across tasks so that they suffer less from forgetting. The more complex later-layer features are more task-specific and consequently suffer from more forgetting (since they might not be required for current-task data). As a consequence of this phenomenon, during continual learning the ACs based on lower layers can potentially outperform those of the later layers. ## Forgetting in the ACs As noted by the Reviewer, our ACs are still classifiers and as such are also susceptible to forgetting. Our approach leverages the stability of earlier features shown in Figure 2 but also benefits from the diversity of features used by different ACs. Since we classify based on the information aggregated across multiple ACs, it is enough for just one classifier to return a confident prediction on a given sample to make the correct prediction.
Empirical results indicate that AC-based networks generally exhibit lower forgetting on a per-task basis across continual learning (see the forgetting plots under the [link](https://imgur.com/a/0O3btHy) and the response to Reviewer Y8XT), supporting our intuition that using multiple ACs results in more robust classification. ## Thresholding strategy The Reviewer correctly points out that our inference strategy is influenced by prediction confidence, which varies for each AC and the task. While confidence-aware inference strategies could improve performance, they are complex enough to warrant separate papers (e.g., Meronen et al., *“Fixing Overconfidence in Dynamic Neural Networks”*, WACV 2024) and often rely on calibration on a holdout dataset, which limits their applicability to exemplar-based methods. To maintain simplicity and generalizability, we use a shared threshold. The fact that our approach performs well without per-AC adaptation further highlights its robustness and broad applicability. ## Overthinking definition Our use of the term “overthinking” follows the SDN paper and is well-established in the literature, so we skipped a formal definition to keep the paper concise. Figures 4b and 4c provide a more nuanced analysis of the phenomenon. ## Unique overthinking "Unique overthinking" indicates how many samples can be correctly classified by only a single AC. It highlights that each classifier specializes in different subsets of the data in a non-redundant way. ## How are the ACs initialized? We follow the same FACIL protocol for both the main classifier and the auxiliary classifiers (ACs) - we add a new head for the classes introduced in each task. ## Gradient propagation in Section 3 We mention that gradients from ACs are detached from the backbone in Section 3.2 (L#201-204); in Section 3.4, we compare this with a setup where gradient propagation is enabled end-to-end to empirically evaluate which approach yields better results.
In all following experiments, we then used the setup with enabled gradient propagation. Gradient propagation can sometimes negatively affect individual classifier performance. However, while some individual classifiers may degrade slightly, the overall benefit outweighs these effects. In most cases, gradient propagation training leads to better results (see Appendix C.3.). In Appendix E, we also show that findings from our analysis are consistent across both detached and enabled gradients. ## Training times with ACs See the response to the Reviewer dDEJ. ## Figures clarification Figure 3 is intended to highlight which auxiliary classifiers outperform the final classifier. The check marks and x-marks in Figure 5 represent correct and incorrect predictions, respectively. The accuracy reported in Figures 1 and 6 is the average accuracy after the final task.
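The shared-threshold dynamic inference discussed in this rebuttal can be illustrated with a short sketch. This is a hypothetical pure-Python illustration of the general early-exit idea, not the authors' implementation: each classifier's softmax confidence is compared against one global threshold, and the first sufficiently confident prediction is returned, falling back to the final classifier otherwise.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def early_exit_predict(per_classifier_logits, threshold=0.9):
    """Return (predicted_class, exit_index). The same confidence threshold
    is shared by all classifiers; the last entry is the final classifier."""
    for i, logits in enumerate(per_classifier_logits):
        probs = softmax(logits)
        confidence = max(probs)
        if confidence >= threshold:
            return probs.index(confidence), i
    # No classifier was confident enough: fall back to the final one.
    probs = softmax(per_classifier_logits[-1])
    return probs.index(max(probs)), len(per_classifier_logits) - 1

# Toy example: the second classifier is already confident, so we exit early.
logits_per_ac = [
    [1.0, 1.1, 0.9],   # shallow AC: low confidence
    [5.0, 0.1, 0.2],   # deeper AC: confident
    [0.3, 4.0, 0.1],   # final classifier (not reached here)
]
pred, exit_at = early_exit_predict(logits_per_ac)  # pred == 0, exit_at == 1
```

The rebuttal also describes aggregating information across multiple ACs rather than exiting at the first confident one; this sketch shows only the simplest threshold-based variant.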
Calibrated Language Models and How to Find Them with Label Smoothing
Accept (poster)
Summary: The paper studies calibration in LLMs after the SFT stage. They mention that SFT leads to significant degradation in the calibration performance of LLMs due to the lack of diversity in learned feature embeddings. To address this, they propose to use label smoothing during the SFT stage for better calibration. They provide some theoretical justification for the rationale of using label smoothing by relating it to minimizing the logit distance, which is in some sense equivalent to MAP estimation of the softmax predictions. They also provide more ablations on the effectiveness of label smoothing, and point out that label smoothing might not work for models with larger vocabularies and small hidden dimension sizes, due to limited capacity and lack of "concentration" behavior. To practically implement label smoothing on models with large vocabularies, they also introduce a novel, memory-efficient GPU kernel implemented in Triton, with optimized memory usage and improved throughput. Claims And Evidence: The claims made in the paper are generally well-supported. However, I found that the argument for some claims could be significantly improved. For example, in the section "Why does Instruction Tuning Lead to Mis-calibration?", it views the SFT stage as solving an OOD problem. While this might be supported by some literature, it is stated as a fact without any self-contained justification here. Furthermore, the calibration issue of the SFT stage is not explained clearly; it simply mentions that SFT hurts the diversity of the features, which is related to LLM calibration. I feel this part could be strengthened by adding more discussion or making it more self-contained. Methods And Evaluation Criteria: Label smoothing is a well-known technique for adding regularization and improving model calibration. Given the observation that SFT leads to overconfidence in LLMs, applying label smoothing does make sense.
Another novelty comes from the development of the memory-efficient kernel; for models with large vocabularies in particular, this is nice as it makes the use of label smoothing more scalable. The paper evaluates on three widely used benchmarks, MMLU, HellaSwag, and ARC-Easy, which seems to be a diverse set of tasks. I am not sure whether there exists any dataset specifically tailored for evaluating calibration; I am also not sure whether TruthfulQA makes sense here. It would be good to add some comments/discussions on these. Theoretical Claims: I checked the claims in the main paper; most of them look reasonable [Lemma 4.1, Theorem 4.2]. For Proposition 3.3, it might be worth adding the prior distribution here when you argue for MAP. Experimental Designs Or Analyses: The experiment setup does make sense. Some suggestions: - It might be good to add more ablations on the smoothing factor $\beta$, the rule of thumb for its selection, and the calibration performance across a wide range of $\beta$. Furthermore, is there any adaptive label smoothing that adjusts the smoothing factor based on hidden size, vocab size, or training progress? - I am not sure whether it is within the scope of the paper, but it would be nice to understand the calibration issue after SFT under different training mixtures / datasets, and how they are correlated. - Actually, it seems some baselines are missing, such as temperature scaling? How do they perform? Supplementary Material: No Relation To Broader Scientific Literature: The paper builds upon the well-known label smoothing method and adapts it to LLMs with large vocabulary sizes; the findings should be relevant to the community studying the robustness / safety of LLMs, and also the general community studying calibration of neural networks.
Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: - The paper studies an important problem about calibration in LLMs with large vocabulary sizes, and the proposed method of using label smoothing does make sense. - The development of a custom GPU kernel is nice; it makes the use of label smoothing more memory efficient and more practical. - The paper is generally well-structured and easy to read. Weaknesses: - The motivation for using label smoothing is not so clear: there are simpler methods like temperature scaling for model calibration, so why focus on label smoothing? It would be great to add more discussion on why label smoothing was chosen rather than other model calibration techniques. - As noted earlier, the explanation in the section "Why does Instruction Tuning Lead to Miscalibration?" could be more self-contained and provide a clearer, more detailed explanation of the underlying mechanisms. Other Comments Or Suggestions: See the points mentioned in other threads. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are immensely grateful for the thorough review that the reviewer has provided for our work. First, we would like to express our appreciation for the comments regarding the well-supported claims, the usefulness of our custom kernel, the appropriateness of the theoretical claims, as well as the broader relationship of our work with the robustness/safety of LLMs. We also appreciate the comments raised about improving our work, which we hope the responses below can address. ----- > The calibration issue of the SFT stage is not explained clearly. We appreciate this comment from the reviewer; we will further add discussion relating to existing literature that has explored how tuning affects LLM representations (the representation of inputs within the feature space) [1, 2]. We agree that this merits a more self-contained paragraph or section that better details these claims, and will provide it within an updated manuscript. [3] also discusses this calibration issue in terms of how tuning in general (not limited to SFT) leads to feature diversity issues, which we will also discuss in further detail. > Does TruthfulQA make sense for evaluating calibration? It would be good to add some comments/discussions on these. Please see our responses to Reviewers C57F and LLNm, who ask whether evaluation on other tasks is possible. Results on TruthfulQA and Winogrande are provided here (https://anonymous.4open.science/r/Anonymous-0F38/README.md). > For Proposition 3.3, it might be worth adding the prior distribution here when you argue for MAP. We thank you for your suggestion and will include the following discussion in the final version. The conditional prior $p\left(z\mid x\right)=\mathrm{Dir}\left(\alpha_{x}\right)$ is a Dirichlet distribution with instance-specific parameter $\alpha_{x}$. > It might be good to add more ablations on the smoothing factor $\beta$.
Furthermore, is there any adaptive label smoothing that adjusts the smoothing factor based on hidden size, vocab size, or training progress? Thank you again for this question. Our response is two-fold. First, we test $\beta$ across a rather large range (0.0 to 0.5 in increments of 0.1). Generally, smoothing is always beneficial towards calibration, but too much smoothing can be harmful for accuracy. From our empirical results, a choice of 0.1 or 0.2 is generally most useful. There is no adaptive label-smoothing method that we are aware of, but we have considered this as a follow-up direction. Furthermore, we are attempting to derive a potential relationship between $\beta$ and the model; however, this is potentially out of scope for the current work. > It would be nice to understand the calibration issue after SFT under different training mixtures/datasets? We appreciate the comment. We point the reviewer to Table 1, which shows consistent results across three different SFT datasets. We have further tested a mixture of the datasets and observed insignificant differences, thus we did not include it due to a lack of novel insights. Nevertheless, we are happy to provide it within the Appendix as an ablation worth discussing. > Actually, it seems some baselines are missing, such as temperature scaling? We appreciate this mention. We have included a temperature scaling baseline here (https://anonymous.4open.science/r/Anonymous-0F38/README.md). As we can observe, temperature scaling can vary in effectiveness between datasets, which highlights the usefulness of label smoothing here. > Why do we need to focus on label smoothing [...] rather than other model calibration techniques. We thank the reviewer for their comment.
Label smoothing is particularly attractive as it is interpretable in terms of the objective and why it can benefit confidence calibration (see Section 2 and 3), while also being simple to incorporate within an inherent part of the learning regime, namely the training of models. On the other hand, temperature scaling requires a dataset to determine an appropriate temperature. ------ We appreciate the extensive feedback provided and the multitude of comments highlighting the strengths of our work as well as those asking for additional details to further highlight them. We hope that this rebuttal provides enough detail to address these needs for additional discussion, as well as better present the strengths of our work. If the reviewer shares the same opinion, we would be extremely grateful for an increased score. We are also willing to further engage in discussion to address any remaining questions and comments. ------ ### References [1] Murthy et al. One fish, two fish, but not the whole sea: Alignment reduces language models' conceptual diversity. _arXiv_ 2024 [2] Wang et al. Preference Optimization with Multi-Sample Comparisons. _arXiv_ 2024 [3] Oh et al. Towards Calibrated Robust Fine-Tuning of Vision-Language Models. _NeurIPS_ 2024.
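To make the objective discussed in this exchange concrete, a per-token label-smoothed cross-entropy mixes the one-hot target with a uniform distribution over the vocabulary. Below is a minimal pure-Python sketch of this standard formulation (illustrative only; the paper's contribution is a fused Triton kernel for large vocabularies, which this toy version does not reproduce):

```python
import math

def label_smoothed_ce(logits, target, beta=0.1):
    """Cross-entropy against the smoothed target distribution
    (1 - beta) * one_hot(target) + beta * uniform(V)."""
    V = len(logits)
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_z for x in logits]
    loss = 0.0
    for i, lp in enumerate(log_probs):
        weight = beta / V + ((1.0 - beta) if i == target else 0.0)
        loss -= weight * lp
    return loss

# beta = 0.0 recovers ordinary cross-entropy; smoothing increases the loss
# of a confident correct prediction, penalizing overconfidence.
loss_hard = label_smoothed_ce([2.0, 0.5, -1.0], target=0, beta=0.0)
loss_smooth = label_smoothed_ce([2.0, 0.5, -1.0], target=0, beta=0.2)
```

Note that some formulations instead spread $\beta$ only over the non-target classes; the uniform-over-vocabulary variant shown here matches the common convention.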
Summary: The authors proposed using label smoothing to improve language model calibration for supervised fine-tuning. They demonstrate through theoretical analysis and experiments that label smoothing is effective, though computation is heavy for large vocabulary LLMs. They also propose a memory-efficient algorithm without sacrificing much accuracy. Claims And Evidence: Both the theoretical analysis and experiment results demonstrate the effectiveness of the label smoothing approach. Methods And Evaluation Criteria: The proposed method is reasonable and convincing, as label smoothing ensures the model does not over-estimate a specific class. For evaluation, the authors apply metrics of accuracy, ECE, and RMS to balance between model performance and reliability. They also include various modern models including Alpaca, Tulu3, and Openhermes. Theoretical Claims: Looks good to me. Experimental Designs Or Analyses: The experiment design is standard and reasonable. Though they mainly focus on a classification setting, they evaluate on metrics of accuracy, ECE, and RMS to balance between model performance and reliability. They also include various modern models including Alpaca, Tulu3, and Openhermes. They also demonstrate that the time and memory usage of their method is more efficient than baselines. One thing is that the authors need to include their prompts and templates used in experiments in their appendix. Supplementary Material: Yes. More implementation details need to be revealed, such as training epochs, instruction templates, etc. Relation To Broader Scientific Literature: The paper focuses on improving language model calibration in supervised fine-tuning, which is an emerging topic in LLM reliability. It is interesting to see that label smoothing is helpful to improve language model calibration, as previous improvements mostly happen on verbalized confidence. Not sure if label smoothing is only restricted to logit-based confidence, and classification tasks.
Please further describe this in the paper. Essential References Not Discussed: Some references on LLM calibration are outdated. Below are a few works you may discuss: scaling-based: Improving model reliability via inter-model latent agreement. https://arxiv.org/abs/2305.01481 prompting-based: Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. https://arxiv.org/abs/2305.14975 SFT-based: Teaching models to express their uncertainty in words. https://arxiv.org/abs/2205.14334 RLHF-based: Taming Overconfidence in LLMs: Reward Calibration in RLHF. https://arxiv.org/abs/2410.09724 Other Strengths And Weaknesses: Please use less LLM polishing for your paper. There are many usages uncommon in academic papers, examples including but not limited to: (1) "settings remain where ...", (2) "Seeking a practical solution, we...", (3) "Growing vocabulary sizes ... consumed to materialize ..., making training difficult." For (3), why not just say "As the vocabulary size becomes larger, there will be increasing computational costs of... and training will become more expensive"? Other Comments Or Suggestions: N/A Questions For Authors: 1. Please discuss if your method is limited to solving classification tasks in the paper. 2. Please include implementation details (e.g., instruction templates for each task) in the paper or supplementary material. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are very grateful and would like to thank the reviewer for their positive assessment of our work. We are happy that they note the suitability of the theory, experiments, methods, and evaluation. To clarify their remaining questions and potential areas of concern, we hope the following discussion provides the necessary details. ----- > One thing is that the authors need to include their prompts and templates used in experiments in their Appendix [...] or supplementary material. We thank the reviewer for the comment and will include all specific details in the revised version. Some of these details are already provided in Section 3.1. We can provide the following experimental details: Training: We conducted a learning rate sweep over learning rates `[5e-6, 2e-5, 5e-5, 2e-4]` with a summing reduction. We further tested label smoothing hyperparameters `[0.0, 0.1, 0.2, 0.3, 0.4, 0.5]`, where `0.0` is no smoothing. We used the open-instruct repository (https://github.com/allenai/open-instruct) at commit `e363290` for our training setup, with modifications to account for our kernel as well as specific experimental hyper-parameter settings and baselines. Evaluation: Our implementation is based on the MMLU official repository (https://github.com/hendrycks/test). We first evaluate our models on MMLU and then further modify the files there to directly adapt the evaluation dataset to the other tasks at hand. We follow MMLU and use the following prompt for all tasks: `"The following are multiple choice questions (with answers) about {}.\n\n".format(query)`. We did not tune the template, to ensure a fair comparison; for models that required a chat template, we used `tokenizer.apply_chat_template` (supported by HuggingFace models) directly on the sequence. > It is interesting to see that label smoothing is helpful to improve language model calibration, as previous improvements mostly happen on verbalized confidence.
Not sure if label smoothing is only restricted to logit-based confidence, and classification tasks. [...] Please discuss if your method is limited to solving classification tasks in the paper. Thank you for this question; it is certainly worth further discussion to clarify inherent limitations in confidence calibration research. Implicitly, it is true that calibration is measurable only on classification-style tasks, or at the very least on tasks with a set of reference answers that can be ordered by correctness. Label smoothing is also generally only applicable to logit-based classification tasks, given that it is typically used within classification-based losses. However, similar concepts have been introduced in other domains, such as DPO preference-based losses (Chowdhury et al., 2024), which we are currently investigating as a possible follow-up direction of interest. For further discussion, we invite the reviewer to read our responses to reviewers C57F and Sf9v, who pose similar questions. > Some references on LLM calibration have been outdated. Below are a few works you may discuss. We sincerely appreciate the references and will include them in our related works and discussion. We agree with the relevance of these works in the broader scope of research on LLM calibration and will keep them in mind for future work as well. If there are further ways in which the reviewer would like to see these works discussed, we are happy to provide more detail on how we plan to incorporate them. > Please use less LLM polishing for your paper. Thank you for your comment. We would like to note that we actually did not use an LLM to polish our phrasing, but we agree strongly with the sentiment from the reviewer. If the phrasing resulted in any part of our work becoming less clear, we are happy to adjust such portions of the paper. 
------ Once again, we would like to express our sincerest gratitude to the reviewer for their extensive comments on our manuscript. We again appreciate their positive remarks on our methodology, experiments, and results. We also hope they will read our response regarding the areas in which they felt improvements could be made, and we are hopeful that this response has properly addressed them. If this sentiment is shared, we would be very grateful if it could be reflected in an improved score for our work. ------ References [1] Chowdhury et al. Provably Robust DPO: Aligning Language Models with Noisy Feedback. ICML 2024
Summary: In this work the authors examine the effect of instruction tuning on LLM calibration, i.e. when the model says it is 70% sure about a prediction does it actually get it right ~70% of the time? They find that the majority of available LLMs are reasonably well calibrated, but once instruction tuning is applied, they tend to become miscalibrated and end up overconfident in their predictions. Previous work has used label smoothing during instruction tuning as a way to reduce miscalibration. This work aims to explain why and identify settings, like when a large vocabulary is used, where label smoothing is not expected to help. They include theoretical results, such as how things like model and vocabulary size establish lower bounds on entropy and how that relates to calibration. Additionally, they develop and benchmark a new method for calculating logits and loss for large vocabularies while supporting label smoothing. Claims And Evidence: Their claims seem well founded. Their insight into model size and its effect on entropy is validated in their experiments, which show that label smoothing is less effective at calibrating smaller models. And their finding that LS helps calibration in most settings is in line with their claims and previous work. Additionally, their new LS kernel evaluation supports their claims about it being faster and more memory efficient. Methods And Evaluation Criteria: Evaluation of calibration is done using just 3 datasets; it would be more convincing if there were more, especially ones where more text had to be generated and there was some notion of confidence in the entire answer instead of just the first token. Theoretical Claims: I did not check the correctness of the proofs Experimental Designs Or Analyses: Their experimental design seemed reasonable. 
Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: They mention a lot of previous work, including the other works that first found that LS was helpful in terms of calibration. Additionally, they include multiple common metrics for calibration. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Figure 5 should probably be at the top of the column Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
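The calibration notion evaluated in this review (a model claiming 70% confidence should be right roughly 70% of the time) is typically quantified via expected calibration error (ECE). Below is a minimal sketch using a standard equal-width binning scheme; the binning choice is an illustrative assumption, not necessarily the paper's exact metric:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence; ECE is the absolute gap between
    average confidence and accuracy in each bin, weighted by the
    fraction of samples falling in that bin."""
    n = len(confidences)
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # conf == 1.0 goes in the last bin
        bins[idx].append((conf, ok))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1.0 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(acc - avg_conf)
    return ece
```

An overconfident model (high confidence, low accuracy) yields a large ECE; a perfectly calibrated one yields zero.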
Rebuttal 1: Rebuttal: First, we thank the reviewer for their review of our work and for the enthusiasm they express regarding it. We appreciate that they find the claims well founded, the insights to support these claims, and their agreement regarding our new kernel that incorporates label smoothing. ----- > Evaluations of calibration is done using just 3 datasets, it would be more convincing if there were more. To address the comment raised regarding the use of the different datasets, we are happy to provide additional results on different datasets here (https://anonymous.4open.science/r/Anonymous-0F38/README.md). We provide additional results on two datasets, WinoGrande and TruthfulQA, which demonstrate similar results to the prior three shown in our manuscript. > It would be more convincing if there were more [...] and there was some notion of confidence in the entire answer instead of just the first token. With respect to whether there is some notion of confidence in the entire answer instead of just the first token, we note that while such a concept exists, there may be limitations that are fundamental to the field of confidence calibration. Take for example two sequences A and B. We can measure a level of confidence on the two as the normalized model perplexity over the two options. However, the fundamental limitation is the need for these two sequences as references for the model, as otherwise the normalizing constant for the choices would be infinite, due to the infinitely many potential generations from the model. We will specifically mention this fundamental limitation of confidence/calibration measurements in an updated version of our manuscript, in order to make this point transparent. Further discussion of similar questions can be found in our responses to reviewers LLNm and Sf9v. > Figure 5 should probably be at the top of the column We also appreciate the mention of the placement of figures, which we will adjust. 
------ Again, we would like to express our appreciation to the reviewer for their positive response to our work and for highlighting the fundamental soundness of our methodology. We also acknowledge some minor comments that were left regarding how to discuss specific points related to the overarching problem being studied, which we believe have been addressed within this rebuttal. We are hopeful that the reviewer feels similarly, and if any additional questions or comments remain, we are ready to provide further responses to clarify them.
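The sequence-level confidence discussed in the rebuttal above (normalized perplexity over a fixed set of reference answers) could be sketched as follows; the length normalization and the softmax over candidates are illustrative choices, not the authors' exact formulation:

```python
import math

def answer_confidence(logprob_seqs):
    """Confidence over a fixed set of reference answers.

    Each entry is the list of per-token log-probabilities the model
    assigns to one candidate answer. We length-normalize (average
    log-prob, i.e. the log of inverse perplexity) and softmax across
    candidates. Without such a reference set, the normalizing
    constant would range over infinitely many possible generations,
    which is the fundamental limitation noted in the rebuttal.
    """
    scores = [sum(lp) / len(lp) for lp in logprob_seqs]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

For two candidates A and B, the function returns a two-element distribution; the more likely sequence (per token, on average) receives the higher confidence.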
Summary: This work focuses on mitigating mis-calibration of LLMs during SFT by incorporating label smoothing. It argues that label smoothing helps reduce model overconfidence by promoting equality among logits while enhancing learned feature embeddings. Through empirical experiments, this work demonstrates the effectiveness of label smoothing. Additionally, it designs custom kernels to enable label smoothing with significant memory and throughput improvements, without compromising performance. Claims And Evidence: 1. This work claims that label smoothing can mitigate mis-calibration by encouraging "diversity of feature embeddings". However, in the subsequent argument, this work only shows that label smoothing encourages equal logits, and it's unclear how this is connected with "diversity of feature embeddings" (which itself is not clearly defined in this work). 2. This work claims that label smoothing is less effective in improving calibration for "large vocabulary LLMs" because "large vocabulary LLMs" do not have the ability to be overconfident. However, this cannot explain why label smoothing has a huge improvement on Gemma2-2B (and a relatively small improvement on Llama3-3B/1B). Additionally, to support the claim, the authors should show that the calibration errors of Llama3-3B/1B are indeed smaller (or less over-confident) compared to Llama3-8B. However, these numbers are not provided. If one looks at Figure 6 in the appendix, it appears that Llama3-3B and 1B are both still over-confident. 3. In the introduction section, the authors state that "We further show how alternative techniques, such as temperature scaling and logit capping, explicitly act as a mechanism to steer models toward overconfidence, allowing the benefits of label smoothing to once again emerge." However, no actual experiments/statistics are conducted to support this statement. 
Methods And Evaluation Criteria: The proposed methods are well-suited to the problem, and the evaluation criteria appropriately align with the task. Theoretical Claims: See claims and evidence section. Experimental Designs Or Analyses: The empirical experiments conducted in this work are methodologically sound and effectively demonstrate: (1) the impact of label smoothing in mitigating model overconfidence in certain LLMs, and (2) the effectiveness of the custom kernels in enhancing memory efficiency and throughput. Supplementary Material: NA. Relation To Broader Scientific Literature: NA. Essential References Not Discussed: NA Other Strengths And Weaknesses: [-] The paper has several typos in its math symbols. E.g., in Eq (1), the index i should be n. "L is the length of a discrete input sequence" on page 3 should be N. [-] The paper would benefit from a more logical argument, as well as improved clarity and structure in its writing. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to thank the reviewer for providing a thorough evaluation of our work. We are delighted that the reviewer finds the work to have been positioned and set up in a manner that aligns with supporting our claims. We are hopeful that the responses below can sufficiently address their remaining concerns regarding our work, as well as provide the necessary details to clarify possible misunderstandings resulting from their absence. ----- > However, in subsequent arguments, this work only shows that label smoothing encourages equal logits, and it's unclear how this is connected with "diversity of feature embeddings" (which itself is not clearly defined in this work). Indeed, this section could have been better detailed; we may have overlooked this as a result of the page limit in the initial submission. Oh et al. (2024) provide a theorem (Theorem 3.1 of their work), which decomposes error into two components: a classification error (i.e., accuracy) and a calibration error. They show both errors depend on the reciprocal of the smallest singular value of the covariance matrix of the input $x$, with a smaller value leading to greater error. Mathematically, this means that the input features should exhibit more diverse and independent variation. Next, our Proposition 3.3 shows label smoothing to be a MAP estimation problem. Prior works have shown both empirically (Batmanghelich et al., 2014) and theoretically (Rajkowski, 2019) how this can lead to the learning of more diverse feature sets. First, Chi et al. (2024) show that the logits prior to the LM head are distributed as a Gaussian. 
Rajkowski (2019) proves that, under the assumption that samples are generated from a Gaussian process, MAP estimation divides the data into clusters whose convex hulls are disjoint (Proposition 1 in their work) and into clusters of limited size (Proposition 2), leading to distinct and limited-size clusters that make the features diverse and easy to distinguish (Corollary 3). Thus, the model learns more diverse features within its representations. In light of these details, we are happy to provide additional references and the associated lemmas and remarks within an updated manuscript. > If one looks at Figure 6 in the Appendix, it appears that Llama3-3B and 1B both are still over-confident. We agree that this could be clarified. Our analyses in Sections 3/4 establish that smaller models have a higher lower bound on entropy, which means that their predictions are bound to be less __concentrated__ than those of a larger model. However, this does not limit them to being under-confident or perfectly calibrated, as the lower bound still leaves the model the potential to become overconfident. Thus, we can see that as the model size increases, so does overconfidence, a signal that the entropy bound could in fact be influencing predictions. > Authors state that "alternative techniques explicitly steer models toward overconfidence, allowing the benefits of label smoothing to once again emerge." However, no actual experiments/statistics are conducted to support this statement. We provide clarification here. We can note the effect of soft-capping directly from comparisons between Gemma and Gemma2. Figure 8 in Appendix B.2 shows that Gemma is naturally less prone to overconfidence compared to Gemma2; however, the base models are both well calibrated. 
Neither model is publicly transparent about its raw pre-training data; however, the fact that both are well calibrated prior to fine-tuning indicates that they start at a roughly equivalent level. After SFT, however, Gemma2 models are naturally more overconfident than Gemma models, highlighting how the main difference between the two, the logit soft-capping, can lead to overconfidence, which in turn enables label smoothing to be more effective. ------ We extend our gratitude to the reviewer for their engagement with our work. We appreciate their feedback and are hopeful that the response above clarifies any remaining uncertainties and highlights the strengths of our work. We hope that the reviewer shares this impression and would be grateful if such an opinion could be reflected through an improved assessment. We also remain ready to provide any additional details for questions that remain. ------ ### References [1] Oh et al. Towards Calibrated Robust Fine-Tuning of Vision-Language Models. In NeurIPS, 2024. [2] Batmanghelich et al. Diversifying Sparsity Using Variational Determinantal Point Processes. arXiv, 2014. [3] Rajkowski, Łukasz. Analysis of the maximal posterior partition in the Dirichlet Process Gaussian Mixture Model. Bayesian Analysis, 2019. [4] Chi et al. Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation. In Findings of the Association for Computational Linguistics: NAACL-HLT, 2024.
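For reference, the logit soft-capping that distinguishes Gemma2 from Gemma can be sketched as below; the cap value of 30.0 matches the final-logit soft-cap commonly reported for Gemma2, though the exact constant should be treated as an assumption here:

```python
import math

def soft_cap(logits, cap=30.0):
    """Logit soft-capping in the Gemma2 style: squash each logit
    smoothly into (-cap, cap) via cap * tanh(x / cap).

    For |x| << cap the map is nearly the identity; for large |x| it
    saturates, bounding how extreme any single logit can get. The
    rebuttal argues this interacts with post-SFT overconfidence."""
    return [cap * math.tanh(x / cap) for x in logits]
```

Because the softmax is then taken over capped logits, no single token's probability can be driven arbitrarily close to 1 by an unbounded logit alone.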
Policy Optimization for CMDPs with Bandit Feedback: Learning Stochastic and Adversarial Constraints
Accept (poster)
Summary: The authors study constrained Markov decision processes (CMDPs) and they provide the first best-of-both-worlds algorithm for CMDPs with bandit feedback. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, to some extent. Experimental Designs Or Analyses: Yes, to some extent. Supplementary Material: No. Relation To Broader Scientific Literature: The main novelty is that the authors provide the first best-of-both-worlds algorithm for CMDPs with bandit feedback. Essential References Not Discussed: No. Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: N/A. Questions For Authors: Page 1: Can you please connect the study of constraint violation to the classical RL literature? Page 2: Can you please elaborate on the notion of episode-dependent learning rate? Page 3: In Algorithm 1, what are the inputs and the outputs? Page 4: Can you please further explain Condition 2.5? Page 5: Can you please explain Equation (2)? Page 6: What is the output of Algorithm 3? Page 7: Can you please elaborate on Lines 334--336? Page 8: Why are you defining OPT^\mathcal{w} in this way? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the Reviewer for the positive evaluation. > [1] While in the classical RL literature the only goal is to maximize a given reward function, in the constrained RL literature the problem becomes two-fold. Specifically, the agent has to additionally fulfill $m$ possible constraints while maximizing the reward function. This double objective strongly limits the possibility of exploring the decision space. > [2] By “episode-dependent learning rate” we mean a learning rate which changes over episodes. This is required to attain a linear dependence on the payoffs’ range in the primal algorithm's regret bound. Notice that, while attaining a linear dependence on the payoffs’ range is trivial when the payoffs’ range is known, this is not the case when the aforementioned quantity is unknown. > [3] Algorithm 1 does not have any input/output, since it simply shows the interaction between the learner and the environment. > [4] Condition 2.5 is a condition on the Slater’s parameter $\rho$ of the offline problem. As is standard for primal-dual methods, when $\rho$ is large enough, it is possible to achieve better regret guarantees, since the Lagrangian space can be bounded by a quantity which is “almost” independent of $T$. > [5] We apologize to the Reviewer, since Equation (2) contains a typo. The corrected version is $\ell_t(x_h,a_h)=\Gamma_t+ \sum_{i=1}^m \lambda_{t,i}g_{t,i}(x_h,a_h)-r_t(x_h,a_h)$, which is the definition of the Lagrangian losses fed to the primal algorithm. > [6] Algorithm 3 outputs a policy $\pi_{t+1}$, which is the policy that the primal algorithm suggests to play at the next episode based on the previously observed losses. > [7] One of the main challenges of our work is to build an algorithm which does not require the Slater’s parameter $\rho$ as input. Notice that, when $\rho$ is known, it is sufficient to optimally bound the Lagrange multipliers’ decision space to $H/\rho$ in order to obtain best-of-both-worlds guarantees. 
In contrast, when $\rho$ is not known, it is necessary to resort to different techniques to show that the Lagrange multipliers are bounded during the learning dynamics. We tackle this aspect by employing the no-interval regret property of the primal and dual regret minimizers. > [8] $OPT^\mathcal{w}$ is an alternative baseline for the setting with adversarial constraints. This baseline is computed considering the optimal fixed policy which satisfies the constraints at each episode (and not only on average). The introduction of this weaker baseline is motivated by the fact that, when competing against $OPT^\mathcal{w}$, it is possible to guarantee sublinear regret and constraint violation simultaneously.
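The corrected Lagrangian loss from point [5] above can be sketched numerically; all names here are illustrative, with `offset` standing in for the $\Gamma_t$ term exactly as it appears in the equation:

```python
def lagrangian_loss(reward, violations, multipliers, offset):
    """Lagrangian loss fed to the primal regret minimizer, following
    the corrected Equation (2) in the rebuttal:

        l_t(x_h, a_h) = Gamma_t
                        + sum_i lambda_{t,i} * g_{t,i}(x_h, a_h)
                        - r_t(x_h, a_h)

    `violations` holds g_{t,i}(x_h, a_h) for i = 1..m and
    `multipliers` holds the corresponding lambda_{t,i}."""
    return offset + sum(lam * g for lam, g in zip(multipliers, violations)) - reward
```

Larger Lagrange multipliers make constraint violations dominate the loss, steering the primal player toward feasible policies.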
Summary: This paper studies online constrained Markov decision processes with rewards and constraints that can be either stochastic or adversarial. It proposes the first algorithm for this setting with bandit feedback. Moreover, instead of considering a problem on occupancy measures, their method is based on a primal-dual policy optimization approach. The algorithm requires no knowledge of the Slater parameter. For this, they need a primal regret minimizer that satisfies the “no-interval-regret” property. To achieve this, the authors construct a policy-based algorithm with a fixed-share update. Claims And Evidence: All theoretical claims are followed by proofs in the appendix or the main paper. Methods And Evaluation Criteria: As a theoretical paper, the proposed algorithm is well suited to the problem. However, as one of the main advantages of using a policy optimization algorithm over an occupancy measure approach is, according to the authors, its efficiency, some showcase experiments would be useful to illustrate the algorithm in practice. Theoretical Claims: I have checked the theoretical claims concerning the primal regret minimizer algorithm. The dual regret proof seems to follow from existing proofs in the literature such as in Stradi et al. (2024b), so I did not check them in detail. Experimental Designs Or Analyses: not applicable. Supplementary Material: I have checked the proofs related to the primal regret algorithm. Relation To Broader Scientific Literature: This paper proposes the first approach to tackle stochastic and adversarial rewards and constraints in CMDPs with bandit feedback. Most previous work focused on the case where rewards are adversarial and constraints are stochastic. Stradi et al. (2024b) propose an approach for stochastic and adversarial rewards and constraints but with full information feedback. Essential References Not Discussed: All relevant references related to CMDPs seem to be discussed. 
As the paper also claims that they develop the first no-interval-regret (a.k.a., adaptive regret) algorithm for unconstrained MDPs under bandit feedback, the paper could benefit from a related-work discussion of existing approaches for adaptive regret in online learning and MDPs. Other Strengths And Weaknesses: **Strengths**: - The paper is mostly well-written. - The paper presents the first primal-dual approach using policy optimization to deal with both adversarial and stochastic rewards and constraints with bandit feedback. **Weaknesses**: The main weaknesses of this paper appear to be the lack of technical novelty in both the algorithm and its analysis, as outlined below: - Policy optimization is a Mirror Descent-like algorithm. Cesa-Bianchi et al. (2012) presents a general method for achieving adaptive regret in Mirror Descent approaches through a fixed-share update. Hence, it seems that applying this technique in this context is not novel. - The proof of Theorem 3.3 appears to be essentially the same as that of Theorem 4.1 from Luo et al. (2021), except for the fixed-share step, which is a standard technique for achieving adaptive regret in Mirror Descent approaches. - Once the primal and dual algorithms are designed to satisfy the no-interval-regret property, the bound on the Lagrange multiplier (Theorem 3.4) appears to follow in a very similar manner to Theorem 5.1 in Stradi et al. (2024b). - The challenge of handling bandit feedback instead of full information appears to be addressed using classical techniques from Online MDPs in the development of Algorithm 3 (see, for example, Luo et al. 2021).
But in this paper the authors deal with stochastic and adversarial constraints. - I recommend that the authors define the "no-interval-regret" property earlier in the paper, as it plays a crucial role in the final result and strongly influences the algorithm's design, and the reader may be unfamiliar with this notion. Additionally, this property has been referred to by other names in the literature, such as "adaptive regret," which could be worth mentioning to improve clarity for readers. - I would also advise the authors to give a proper mathematical definition of terms $\Gamma_t$ and $\Xi_t$ from Algorithm 2 to help the reader better understand their meaning in the proposed method. Questions For Authors: I would appreciate hearing the authors' responses to the concerns regarding the weaknesses of the paper. I also have some questions to better understand their approach: - The authors state that they do not assume Slater's condition (the existence of a strictly feasible solution). However, does requiring condition 2.5 to hold not implicitly imply the existence of such a solution? I would appreciate it if the authors could clarify this point. - One of the authors' justifications for the use of policy optimization is that mixing the occupancy measure produced by an occupancy-measure algorithm with the uniform one (as in the fixed-share update) would not work. However, would it be feasible to instead mix the policy induced by the occupancy measure in an occupancy-measure-based approach with the uniform policy and then use the occupancy measure induced by this mixed policy? This could make it possible to achieve the same results (dealing with stochastic and adversarial rewards and constraints with bandit feedback) also with an occupancy-measure approach. I would be interested in the authors' perspective on this point. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the Reviewer for the effort in evaluating our work. > On the novelty. We thank the Reviewer for the opportunity to clarify this aspect. In the following, we better highlight the contribution of our work. Indeed, while our primal-dual scheme follows the one of [Stradi et al. 2024] and the primal regret minimizer builds on [Luo et al. 2021], we believe that the contribution of our work is valuable. Specifically, our analysis and the one in [Luo et al., 2021] are substantially different in two fundamental aspects. First, we show that our algorithm works with a dynamic payoff range, paying a linear dependence only on the maximum range. While this linear dependence is standard in online algorithms, it is typically obtained because the payoff range is known a priori. This is not the case in our setting. To avoid a quadratic dependence on the payoff range, we had to suitably modify the learning rate. Moreover, our algorithm guarantees the no-interval regret property by employing the fixed-share update. While we agree that the fixed-share update has been shown to work in online convex optimization, we believe that extending this result to MDPs with bandit feedback is valuable. As a second aspect, notice that all the primal-dual analysis in [Stradi et al. 2024] is designed to work under full feedback. Indeed, the Lagrangian function given to both the primal and the dual in our work captures less information than the one in [Stradi et al. 2024]; namely, the Lagrangian is built only for the path traversed. Finally, notice that, differently from any prior work on CMDPs, we complement our analysis by studying the regret w.r.t. an optimal solution which satisfies the constraints at each episode. While this baseline is weaker than the first one we propose, it has been largely studied in settings different from ours, since it is the only one which allows no-regret guarantees when the constraints are adversarial. 
Specifically, this baseline has been studied in online convex optimization with constraints, thus in single-state environments with full feedback (see, e.g., “Tight Bounds for Online Convex Optimization with Adversarial Constraints”, 2024). Our results match theirs, showing that $\sqrt{T}$ regret and violations are attainable when only bandit feedback is available and the environment has an MDP structure. This analysis is novel and independent from prior works. > On the Slater's condition. The Reviewer is correct: Condition 2.5 implies the Slater’s condition; however, the focus of this work is the **lack of knowledge** of the Slater’s parameter $\rho$, not achieving theoretical guarantees when the Slater’s condition does not hold (although we do provide sublinear bounds for this case too, namely when Condition 2.5 does not hold). The motivation for this is that previous works in the literature with bandit feedback require the knowledge of $\rho$ in advance and assume $\rho$ to be a constant, ignoring the case where $\rho$ is very small (e.g., [1]). > On policy optimization. The point raised by the Reviewer is surely interesting. While a more rigorous analysis is required to precisely answer the question, our perspective is that technical challenges may arise when dealing with the biased loss estimator. Moreover, notice that policy optimization approaches are generally preferred to occupancy-measure ones, since occupancy-measure-based algorithms require convex programs to be solved at each episode. We believe that this is the main justification for policy optimization. Finally, we want to thank the Reviewer for the additional comments and suggestions. We will surely take them into account in the final version of the paper. [1] Castiglioni et al. 2022, “A Unifying Framework for Online Optimization with Long-Term Constraints”. --- Rebuttal Comment 1.1: Comment: - Slater condition: Thank you for clarifying this point. 
I would just advise the authors to rephrase the sentence "Notice that, in this work, we do not assume that the Slater’s condition holds. Indeed, our algorithm still works when a strictly feasible solution does not exist" in page 4, lines 187-189, column 1, to mention that you study both cases, when this condition holds and when it does not, and that in each case you obtain different bounds (even though this appears later in the paper). I also thank the authors for clarifying the novelty of their paper, and I agree that complementing the analysis with a different baseline, to overcome the impossibility result of adversarial constraints and the unknown Slater parameter in the original baseline, is interesting. However, I still have some concerns about the novelty of the techniques: - Bandit feedback: I understand that bandit feedback means less information in what is observed in the Lagrangian. However, this estimate of the Lagrangian is sent as a loss to the primal algorithm, which uses the same techniques as Luo et al. 2021 to deal with the fact that the feedback is bandit. - Learning rate: As I understand it, the problem here is that the maximum of the loss function is unknown because it depends on the Lagrange multipliers, and the Slater parameter is unknown. However, overcoming this problem with changing learning rates seems to be the same idea as in Stradi et al. 2024. Each mirror descent approach is indeed performed in a different space (occupancy measures vs. policy), but the question of bounding the loss function seems independent. --- Reply to Comment 1.1.1: Comment: We would like to thank the Reviewer for the additional feedback. > On the Slater’s condition. We will surely modify the sentence as suggested by the Reviewer in the final version of the paper. > On the novelty of the techniques. We believe that similar novelty arguments can be applied to most papers on MDPs and, more broadly, on online learning. 
Specifically, we agree with the Reviewer that our paper does not introduce “breakthrough” algorithmic techniques; however, combining all these techniques with the required modifications is not a straightforward task. For instance, the design of a no-interval regret minimizer for MDPs is of independent interest. Moreover, we believe that combining the algorithm of [Luo et al. (2021)] with fixed share, while additionally satisfying the requirements needed by our primal algorithm (such as an unknown range of losses and a variable learning rate), is not trivial. As concerns the bandit feedback, we agree with the Reviewer for what concerns the primal regret minimizer. Nonetheless, notice that the bandit feedback influences the dual regret minimizer too. Indeed, differently from [Stradi et al. 2024], our dual algorithm cannot receive the expected violation attained by playing the selected occupancy measure, but only the actual violation. We will surely include this discussion in the final version of the paper, and we hope to have properly addressed the Reviewer's concern.
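For readers unfamiliar with the fixed-share update at the center of this exchange, a minimal simplex-case sketch follows. It uses the standard Herbster-Warmuth / Cesa-Bianchi et al. formulation (exponential weights followed by mixing in a fraction of uniform mass), not necessarily the paper's exact policy-space version:

```python
import math

def fixed_share_update(probs, losses, eta, gamma):
    """One step of multiplicative weights with a fixed-share step.

    The exponential update is followed by mixing a gamma fraction of
    uniform mass back in, so no action's probability ever collapses
    below gamma/K; this is the standard route to interval (adaptive)
    regret guarantees discussed in the reviews above."""
    K = len(probs)
    w = [p * math.exp(-eta * l) for p, l in zip(probs, losses)]
    z = sum(w)
    mw = [x / z for x in w]
    return [(1.0 - gamma) * q + gamma / K for q in mw]
```

Because every coordinate retains at least `gamma / K` mass, the algorithm can quickly re-concentrate on an action that becomes good later, which is exactly what interval regret requires.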
Summary: This paper proposes the first best-of-both-worlds algorithm that can solve a CMDP with only bandit feedback and enjoys optimal regret and constraint-violation guarantees in both regimes. Furthermore, the proposed method does not require Slater's condition and employs a policy optimization approach, which is more efficient than occupancy-measure-based methods and amenable to neural network function approximation. ## update after rebuttal I thank the authors for the rebuttal responses. I think it is a good contribution to propose an algorithm for the bandit feedback setting, whose analysis is often non-trivial and sometimes requires interesting ideas, as explained in the rebuttal to Reviewer fRky. On the other hand, the obtained bounds do not seem tight, or it is at least unclear whether they are tight in other factors such as the state space size. Furthermore, the definition of constraint violation employed in this paper is a weak version, allowing violation cancellation. That said, this paper could be the first step towards an algorithm with completely zero violation. Weighing these aspects, I decided to keep my original score of "Weak Accept". Claims And Evidence: The paper claims that their proposed algorithm is the first best-of-both-worlds algorithm that can solve a CMDP only with bandit feedback and enjoys optimal regret and constraint-violation guarantees in both regimes. As far as I know, it is indeed the first best-of-both-worlds algorithm under the considered setting. All proofs in the appendix seem to be well-written and correct. Methods And Evaluation Criteria: Regret analysis is a reasonable technique for evaluating an algorithm from a theoretical perspective. Theoretical Claims: I have roughly read the appendix. All the derivation seems to be well-written and correct. Experimental Designs Or Analyses: There are no experiments. Supplementary Material: I have roughly read the appendix. All the derivation seems to be well-written. 
Relation To Broader Scientific Literature: Since reinforcement learning aims at solving various sequential decision making problems, deepening the theoretical understanding of RL algorithms in many settings has a broader impact on various fields. Reinforcement learning is also presumed to be implemented in the brain, and it is frequently used to model the decision making process of animals. Furthermore, this paper considers the constrained RL setting, which is gaining importance due to the increasing use of RL in real life. The bandit feedback setting is the most practical setting, and this paper solves an important problem. Furthermore, the analysis that does not require Slater's condition also seems valuable for other CMDP regret analyses. Essential References Not Discussed: I think the related work in the Appendix covers important references well. Other Strengths And Weaknesses: **Strengths** - The paper is well-structured and clearly written. The authors effectively present the algorithmic framework, motivation, and theoretical analysis, making it easy to follow. - The problem setting and the best-of-both-worlds contribution are important in the field of constrained MDPs. - Compared to prior work that relies on occupancy measures (and with full-information feedback), the proposed method is a policy-based approach and amenable to neural-network-based implementations. Furthermore, the proposed method does not require Slater's condition, which many other algorithms need. - To deal with the lack of Slater's condition, the authors propose several techniques to bound the Lagrange multipliers and the loss for updating policies, which are of independent interest for other researchers too. **Main Weaknesses and Concerns** - In the abstract, the authors claim that the regret bound is optimal. 
However, in CMDP settings, an existing algorithm achieves a $\sqrt{T}$ strong violation regret without assuming Slater's condition [(Ghosh et al., 2024)](https://proceedings.mlr.press/v238/ghosh24a/ghosh24a.pdf), which appears to provide a tighter bound than those stated in Theorems 4.4 and 4.5 of this paper. Could you explain how your results differ from theirs? Other Comments Or Suggestions: **Main Question** Most of the technical challenges and algorithmic contributions in this paper seem to arise due to the absence of Slater's condition rather than the bandit feedback difficulty. For instance, challenges such as **bounding the Lagrange multipliers, the fixed-share update, and the loss range issue** (as described on page 2) primarily arise due to the lack of Slater's condition. While I understand the importance of handling CMDPs without assuming Slater's condition, it is unclear how these challenges are inherently tied to the bandit feedback setting. Does the combination of **no Slater's condition and bandit feedback introduce additional technical difficulties**? If not, it would be nice to clarify the no-Slater's-condition contribution in the title and abstract. **Minor Weaknesses and Suggestions** - In line 3 of Algorithm 3, the phrase "set of all possible transitions" is somewhat unclear. Initially, I thought of it as a set storing next states, but upon further reading, I found that it represents the confidence set of transition functions. To avoid confusion, I suggest rewording it as "set of all possible transition functions." - The term "loss" in the introduction is ambiguous. To improve clarity, consider explicitly specifying it as "loss used to update the policy" or "Lagrangian loss" to help readers distinguish its role in the analysis. Due to the lack of important information (e.g., limitations, a comparison table), I set the score to 3, but I’m happy to increase the score if all the concerns are addressed. 
Questions For Authors: Please see the first question in the Other Comments Or Suggestions section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for the positive evaluation. > On Ghosh et al., 2024. We thank the Reviewer for the interesting point. First, we underline that we use the term best-of-both-worlds referring to the constraints definition. Thus, our algorithm achieves optimal regret guarantees both when the constraints are stochastic (and the loss may be adversarial) and when the constraints are adversarial. Notice that our definition of best-of-both-worlds in constrained settings is standard in the literature (e.g., see [1]). (Ghosh et al. 2024) deals with the stochastic reward and constraints setting only; thus, their algorithm does not admit adversarial losses, unlike our stochastic setting. Finally, notice that sublinear results employing the positive regret and violation definition do not imply that a better rate can be obtained employing our metrics; thus, our dependence on the number of episodes is still optimal. > On the contribution. We thank the Reviewer for the opportunity to clarify this aspect. In this work, we design a primal-dual algorithm that **does not require the Slater parameter $\rho$ as input**, and in addition, it does not need any assumption on $\rho$. Indeed, in the analysis, we study both the case where the Slater condition is verified (Condition 2.5 holds) and the case where it is not verified (namely, either $\rho=0$ or $\rho$ is too small compared to the horizon). Nonetheless, while the lack of the Slater condition leads to weaker guarantees, most of the difficulties in our analysis arise from the lack of knowledge of the Slater parameter $\rho$, which is considerably harder to tackle when the feedback is bandit. In particular, the bandit feedback affects the design of the primal algorithm and requires additional forms of regularization in the learning dynamic, such as the fixed-share update. We will surely include a comparison table in the final version of the paper to clarify these aspects. 
Since these aspects are crucial, please let us know if further discussion is necessary. We finally thank the Reviewer for the suggestions and we will surely implement them in the final version of the paper. [1] Castiglioni et al. 2022, “A Unifying Framework for Online Optimization with Long-Term Constraints”. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Your response clarifies the difference from [Ghosh et al. 2024]. “In particular the bandit feedback … like the fixed-share update” Clarifying this part in the paper should address my concern. ## Additional Minor Suggestion When the constraint regret allows for error cancellation (your Definition 2.4), I believe it is possible to achieve a tighter, zero constraint-violation regret with a minor modification (e.g., see Appendix H of https://arxiv.org/abs/2206.11889). Since the modification is almost trivial, I don't think the contribution of the paper degrades without the modification. Yet, mentioning it may help readers who are aiming for zero violation. --- Reply to Comment 1.1.1: Comment: We would like to thank the Reviewer for the suggestion. We will surely incorporate it in the final version of the paper.
Summary: This paper considers policy optimization for the CMDP framework with stochastic and adversarial constraints. Compared to the existing works, this paper considers bandit feedback rather than full feedback. The paper uses a primal-dual-based policy optimization method to find the policy directly rather than the state-action occupancy measure. Hence, it is more suitable in practice. The paper shows that one can achieve $O(\sqrt{T})$ regret and violation if the Slater's condition holds or $O(T^{3/4})$ regret and violation if the Slater's condition does not hold. The paper is the first one providing the best-of-both-worlds result in the CMDP setting. ### Update After going over the paper and the responses again, I have decided to decrease my score. I have explained the reasons below: The key contributions are the following: 1. Relaxing the full-information feedback setting compared to Stradi et al. 2. Obtaining results even when the Slater's condition does not hold (or, the strict feasibility can be very small). 3. Using a lower-computational-complexity approach to avoid solving a convex optimization problem to find the occupancy measure over all the possible transition probability sets. I agree that if someone combines all the tools in a genuinely novel, non-trivial way, then we should certainly accept the paper. However, I am not sure all these contributions are very novel. Let me elaborate on my points. If the paper is accepted, I suggest the authors consider the following: 1. Stradi et al. consider that the transition probability is unknown. Hence, I am not sure how much complexity the bandit approach adds. As is well known, most of the complexity comes from the unknown transition probability. In the adversarial setting, it may indeed matter; however, I am not sure that the tools are very novel. Perhaps some other reviewers can convince me otherwise. 2. 
I agree that the second claim is indeed novel; however, as I pointed out, there are other approaches that have already considered this in the stochastic setting. Hence, I am not so sure about the contributions. In particular, we do not know the lower bound; hence, the tightness cannot be assessed here. 3. The third contribution is interesting; however, it directly follows from Luo et al. It is not clear whether extending it to the constrained case is very complex. Claims And Evidence: The claims are clearly supported by the proofs. Methods And Evaluation Criteria: The paper is mainly theoretical in nature. The paper's claims are supported theoretically. Theoretical Claims: I briefly went over the proofs. There are no major concerns. Experimental Designs Or Analyses: N/A Supplementary Material: I have briefly gone over the proofs. Relation To Broader Scientific Literature: The paper has made a clear connection with the existing works. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. This is the first result that achieves sub-linear regret and violation using bandit feedback. Hence, in terms of results, the paper's contributions are significant. 2. The paper is relatively well-written, and the proofs are nice and clean. Weaknesses: 1. Even though the paper talks about policy optimization, it still relies on the state-action occupancy measure. In particular, the paper relies on computing the maximum state-action occupancy measure among the possible transition probability sets (line 5 in Algorithm 3). The proposed approach also needs to maximize the bonus term over the possible transition probability sets (line 6 in Algorithm 3). The paper did not discuss how easy or difficult it is to solve this problem. 2. The paper heavily relies on Luo et al. '21. The paper did not explain the technical novelties. 3. 
The approach is model-based and thus cannot be applied to large state spaces (even with linear function approximation). 4. The paper considers a weaker notion of violation where the violations can cancel each other. In particular, one can have a violation of +1 and then a violation of -1; altogether the violation is 0, yet the constraint is still violated in half of the episodes. Can the paper discuss a stronger (cancellation-free) notion of violation? Recent works in CMDP have addressed the issue. [A1]. Ghosh, Arnob, Xingyu Zhou, and Ness Shroff. "Towards achieving sub-linear regret and hard constraint violation in model-free rl." In International Conference on Artificial Intelligence and Statistics, pp. 1054-1062. PMLR, 2024. [A2]. Müller, Adrian, Pragnya Alatur, Volkan Cevher, Giorgia Ramponi, and Niao He. "Truly no-regret learning in constrained mdps." arXiv preprint arXiv:2402.15776 (2024). Other Comments Or Suggestions: N/A. Questions For Authors: 1. Can the authors comment on Weakness 1? 2. Can the authors point out the main technical novelties compared to Luo et al. '21? In particular, why is combining the results of Luo et al. '21 with the existing understanding of CMDPs not enough to obtain these results? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the Reviewer for the positive evaluation. > On W1 and Q1. As shown in [1], computing the maximum state-action occupancy measure among the possible transition probability sets (Line 5 in Algorithm 3) can be solved efficiently via a greedy algorithm. The same reasoning holds for the corresponding minimum and for the bonus quantity (see Luo et al. '21). We finally remark that performing the OMD update over the occupancy measure space (as in occupancy-based methods) would require solving a convex program at each episode, which is considerably heavier than computing upper occupancy bounds or bonus terms. > On W2 and Q2. The algorithm from [Luo et al. '21] cannot be employed as our primal regret minimizer as is. In particular, our primal regret minimizer must guarantee the no-interval regret property (needed to show that the Lagrange multipliers remain bounded during the learning dynamic), which requires the introduction of a fixed-share update. To the best of our knowledge, our primal regret minimizer is the first to attain the no-interval regret property for adversarial MDPs with bandit feedback. Furthermore, we had to improve the dependence of the regret bound on the loss range, introducing a dynamic learning rate. This learning rate improves the dependence on the payoff's range from quadratic to linear. Indeed, part of the complexity of designing the primal algorithm is that the range of the losses cannot be known a priori, as it is a Lagrangian loss and therefore proportional to the problem-specific Lagrange multipliers. > On W3. We agree with the Reviewer that extending the guarantees presented in this paper to model-free approaches scalable beyond the tabular CMDP setting would be of great interest. 
However, the results presented in this paper are novel even for the simplest tabular case, and we believe this work might be a good starting point for extending the results to more challenging scenarios in future works. > On W4. Guaranteeing sublinear positive violation requires the primal and the dual regret minimizers to attain convergence to the constrained optimum, namely, to have last-iterate convergence to the equilibrium of the Lagrangian game. To the best of our knowledge, no algorithm can guarantee last-iterate convergence in the setting faced by the primal regret minimizer. Finally, notice that, if future works find no-regret algorithms with last-iterate convergence in the settings faced by the primal and dual regret minimizers, sublinear positive violation (in the stochastic setting) would directly follow from the analysis of our primal-dual procedure. We underline that the papers provided by the Reviewer focus on stochastic settings only, where it is possible to exploit confidence intervals to estimate both rewards and constraints. This cannot be done in our setting, since it would prevent any algorithm from dealing with adversarial settings. [1] Jin et al. 2020, "Learning Adversarial Markov Decision Processes with Bandit Feedback and Unknown Transition". --- Rebuttal Comment 1.1: Comment: I appreciate the authors' replies. Regarding last-iterate convergence, there are works in the stochastic setting, [A1]. As another comment, when $\rho=0$, how do you bound the regret and the violation? The approach presented in [A2] also does not use the Slater's condition. [A1]. Ding, D., Wei, C. Y., Zhang, K., & Ribeiro, A. (2023). Last-iterate convergent policy gradient primal-dual methods for constrained mdps. Advances in Neural Information Processing Systems, 36, 66138-66200. [A2]. Ding, Y. and Lavaei, J., 2023, June. Provably efficient primal-dual reinforcement learning for cmdps with non-stationary objectives and constraints. 
In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 6, pp. 7396-7404). --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for the opportunity to clarify these aspects. > Regarding last-iterate convergence, there are works in the stochastic setting, [A1]. We certainly agree that there exist primal-dual procedures which attain last-iterate convergence to the constrained optimum, as in [A1]. Nonetheless, not only does [A1] focus on the stochastic setting only, but it also does not focus on regret minimization. Thus, their approach is completely different from ours, since no exploration-exploitation trade-off is involved, and it cannot be easily generalised to our setting. Moreover, please notice that, in order to achieve last-iterate convergence employing our primal-dual scheme, it is necessary to have access to (primal and dual) regret minimizers which attain last-iterate convergence under bandit feedback. To the best of our knowledge, the only algorithm which attains those guarantees is [1]. Nevertheless, its convergence rate is of order $t^{-1/8}$, which would lead to highly suboptimal regret and violation bounds. > As another comment, when $\rho=0$, how do you bound the regret and the violation? The approach presented in [A2] also does not use the Slater's condition. When $\rho=0$, we bound the regret and the violation as $\tilde O(T^{3/4})$, which is in line with [A2] when Slater's condition is not enforced (see their Theorem 9). This is done by suitably clipping the Lagrangian space as in Line 3 of Algorithm 2. [1] “Uncoupled and Convergent Learning in Two-Player Zero-Sum Markov Games with Bandit Feedback”, Cai et al. (2023)
Delay-DSGN: A Dynamic Spiking Graph Neural Network with Delay Mechanisms for Evolving Graph
Accept (poster)
Summary: This paper introduces Delay-DSGN, a novel dynamic spiking graph neural network that incorporates a learnable synaptic delay mechanism to enhance dynamic graph representation learning. The authors argue that existing SNN-based methods struggle with temporal information propagation and historical forgetting, and they propose a Gaussian delay kernel to address this issue. The model is evaluated on three large-scale dynamic graph datasets for node classification, demonstrating superior performance over eight state-of-the-art baselines. The authors also provide a theoretical analysis of the stability conditions for the delay kernel, ensuring the model avoids gradient explosion and vanishing issues. Claims And Evidence: The paper makes several key claims: 1. Delay mechanisms improve long-term temporal modeling in dynamic graphs. 2. Delay-DSGN outperforms existing dynamic graph models (e.g., Dy-SIGN, SpikeNet) in node classification. 3. Theoretical analysis ensures stable training of the Gaussian delay kernel. The evidence provided supports these claims reasonably well: 1. The experimental results on three large-scale datasets demonstrate consistent improvements over state-of-the-art methods. 2. The ablation study comparing learnable delays vs. fixed delays vs. no delay strengthens the argument that delayed information propagation is beneficial. 3. The theoretical section derives conditions for stable training, although additional empirical validation (e.g., loss landscape analysis) would further reinforce this claim. However, the paper does not provide enough analysis on the interpretability of learned delays—are the learned delays consistent across different datasets? Methods And Evaluation Criteria: The choice of node classification as the primary evaluation task and the focus on classification accuracy (Ma-F1, Mi-F1) is appropriate, as it aligns with common benchmarks in dynamic graph learning. 
However, the dataset-specific hyperparameter tuning is not fully discussed. Theoretical Claims: The paper provides a sound theoretical analysis of the conditions required for stable training of the Gaussian delay kernel, ensuring that the model avoids gradient explosion and vanishing issues. The proof (Appendix A) correctly applies the chain rule and leverages the properties of Gaussian functions to derive a well-structured bound on the standard deviation ($\sigma$) and kernel size ($K_s$). The derivation is mathematically reasonable, and the constraints appear well-motivated. One potential improvement would be an empirical validation of how training behaves when $\sigma$ is set outside the derived bounds. A brief numerical analysis confirming the theoretical stability conditions would further strengthen this claim. Experimental Designs Or Analyses: The authors claim that the delay mechanism effectively preserves historical information and mitigates the information forgetting issue in dynamic graphs. This claim is supported by comprehensive experimental results showing improvements in both macro and micro F1 scores, as well as a theoretical gradient stability proof provided in the appendix. The evidence is generally convincing, although some parameter choices could benefit from clearer justification. Supplementary Material: Yes, I reviewed the supplementary material. The appendix contains a proof for gradient stability, which represents a significant theoretical contribution to the paper. Relation To Broader Scientific Literature: This work aligns well with research in dynamic graph learning and spiking neural networks, building on prior studies in GNNs for dynamic graphs (TGAT, EvolveGCN, Dy-SIGN) and spiking neural networks for graphs (SpikeNet). Essential References Not Discussed: The paper should consider discussing: 1. “Temporal Graph Networks” (Rossi et al., 2020, ICLR), which also models long-term dependencies in dynamic graphs using memory-based approaches. 
2. “A Hawkes process-based graph neural network: TREND” (Wen et al., 2022, WWW), which provides an alternative view on modeling temporal dependencies. Other Strengths And Weaknesses: Strengths: 1. A novel delay-based temporal mechanism, inspired by biological synaptic plasticity, which offers a fresh perspective in GNN research. 2. Impressive empirical results on large-scale dynamic graphs, with meaningful comparisons to both non-spiking and spiking methods. 3. A solid mathematical foundation, providing a theoretical basis for stable training. Weaknesses: The interpretability of learned delays is somewhat limited—it’s not entirely clear what the model is learning and whether the delays correspond to real-world temporal phenomena. Other Comments Or Suggestions: 1. Typos: I. Page 2, Line 15: "effectively mitigating historical information forgetting" → "effectively mitigating the forgetting of historical information". II. Page 4, Equation (5): Should reference Equation (4) for clarity. 2. The paper mentions a comparison with 8 methods, but the experimental results only present 7. Is this due to an omission of one method or a typographical error? Questions For Authors: 1. Are the learned delays consistent across training runs, or do they vary significantly with different initializations? 2. How does Delay-DSGN compare to attention-based methods like TGN in terms of handling long-term dependencies? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your meticulous review of our work and the valuable feedback provided. **Response to Weaknesses and Question 1:** In our model, delay parameters are randomly initialized following a normal distribution within a specified range. For example, when the maximum delay is set to 5, the initial delays are sampled from a normal distribution over the interval $[0, 5]$. Training results on the DBLP dataset show that the learned delay distribution is slightly shifted to the right, indicating that academic collaborations and influences often require a longer time to become evident. For the Tmall dataset, the learned delay distribution is slightly skewed to the left, reflecting the typically rapid user responses and interactions in e-commerce environments. The Patent dataset exhibits larger delays, with a more uniform distribution toward the right, suggesting that the impact of patents generally takes a longer time to diffuse and manifest. Despite sharing the same initialization distribution, the model consistently learns dataset-specific delay values across multiple training runs, demonstrating its ability to adapt to inherent temporal characteristics.

| Delay Interval | Initialization Density | DBLP Post-Training Density | Tmall Post-Training Density | Patent Post-Training Density |
|----------------|------------------------|----------------------------|-----------------------------|------------------------------|
| 0-1 | 0.0 | 0.0 | 0.08 | 0.0 |
| 1-2 | 0.2 | 0.17 | 0.29 | 0.13 |
| 2-3 | 0.6 | 0.5 | 0.46 | 0.35 |
| 3-4 | 0.2 | 0.28 | 0.17 | 0.33 |
| 4-5 | 0.0 | 0.05 | 0.0 | 0.19 |

**Response to Theoretical Claims:** Thank you for your valuable feedback. We have added an extra analysis in the experimental section and visualized the training loss curves. Unfortunately, we are unable to display the figures here, so we provide the following explanation. When $\sigma$ is outside the derived range, the model's loss during training on the DBLP dataset remains largely unchanged. 
In contrast, when $\sigma$ is within the derived range, the training loss consistently decreases, indicating that the model is able to learn and optimize effectively. **Response to Essential References Not Discussed and Question 2:** We have added a new paragraph in the related work section as follows: "Unlike the memory module in TGN, our delay mechanism explicitly models multi-step dependencies through differentiable temporal kernels, thereby mitigating the gradient decay issues inherent in RNN architectures. Compared to the Hawkes process used in TREND, the biologically inspired spike-delay mechanism in Delay-DSGN is better suited for handling discretized timestep-based graph updates." We have compared our approach with attention-based methods such as TGAT. We will endeavor to include experimental comparisons with TGN to further demonstrate the effectiveness and advantages of our proposed method. **Response to Other Comments Or Suggestions:** Thank you for your valuable feedback and careful corrections on our paper. Based on your suggestions, we have carefully revised and improved the issues mentioned in the text as follows: * In line 15 on page 2, “effectively mitigating historical information forgetting” has been revised to “effectively mitigating the forgetting of historical information” for improved clarity and grammatical accuracy. * At the location of Equation (5) on page 4, we have added a reference to Equation (4) to better clarify the relationships between the different parts of the formulas. * Regarding the comparison with eight methods but only seven results being presented in the experiments, we confirmed that this was due to the omission of one method in the table. We have now updated the table to include all the mentioned methods. 
Due to character limitations, the results of the omitted comparison method (DeepWalk) on the three datasets are as follows:

| Dataset | Metric | Training Ratio | DeepWalk |
|---------|--------|----------------|----------|
| DBLP | Ma-F1 | 40% | 67.08 |
| | | 60% | 67.17 |
| | | 80% | 67.12 |
| | Mi-F1 | 40% | 66.53 |
| | | 60% | 66.89 |
| | | 80% | 66.38 |
| Tmall | Ma-F1 | 40% | 49.09 |
| | | 60% | 49.29 |
| | | 80% | 49.53 |
| | Mi-F1 | 40% | 57.11 |
| | | 60% | 57.34 |
| | | 80% | 57.88 |
| Patent | Ma-F1 | 40% | 72.32 |
| | | 60% | 72.25 |
| | | 80% | 72.05 |
| | Mi-F1 | 40% | 71.57 |
| | | 60% | 71.53 |
| | | 80% | 71.38 |
Summary: This paper introduces Delay-DSGN, a dynamic spiking graph neural network that incorporates a learnable delay mechanism to enhance the representation of evolving graphs. By modeling synaptic delays with a Gaussian kernel, the model effectively captures temporal dependencies and mitigates information forgetting. The paper provides theoretical guarantees to address gradient issues and demonstrates the model's effectiveness through experiments on three large-scale dynamic graph datasets. Overall, the paper offers a timely and promising contribution to dynamic graph learning and spiking neural networks. Claims And Evidence: This claim is supported by comprehensive experimental results showing improvements in both macro and micro F1 scores, as well as a theoretical gradient stability proof provided in the appendix. The evidence is generally convincing, although some parameter choices could benefit from clearer justification. Methods And Evaluation Criteria: The use of F1-macro and F1-micro as evaluation metrics is appropriate for the node classification tasks presented. The method details are mostly clear; however, some parts of the zero-padding and convolution process could be described in greater detail for complete reproducibility. Theoretical Claims: The proof assumes that the Gaussian kernel’s influence is stable across different graph structures, but real-world dynamic graphs may have highly irregular temporal patterns. Relevant discussion on the robustness of the theoretical bounds in these settings is necessary. Experimental Designs Or Analyses: >The experiments are well-structured and reproducible, but a few aspects could be improved: • The paper reports standard deviations, but statistical significance tests (e.g., t-tests) are not provided to confirm that the improvements are meaningful rather than random variation. 
• While the impact of $\sigma$ and $d_m$ is explored (Figure 5), the authors do not discuss the computational cost of larger delay windows. Supplementary Material: Yes, I have checked the detailed proof. Relation To Broader Scientific Literature: The paper is well-positioned within dynamic graph learning and spiking neural network literature. However, while the discussion on the biological plausibility of delay mechanisms is interesting, it could benefit from more references to neuromorphic computing models (e.g., event-driven SNNs). Additionally, there is limited comparison between delay-based mechanisms and other temporal modeling approaches, such as continuous-time point processes used in DyRep and TGN. Essential References Not Discussed: The paper lacks a discussion of some relevant temporal graph learning works, such as GC-LSTM, a model with an LSTM embedded in a GCN, and DyRep, which uses a temporal point process to model event-driven graph evolution. A brief discussion of these methods, particularly how they differ from Delay-DSGN, would enhance the paper’s contextual positioning. Other Strengths And Weaknesses: >Strengths: • The incorporation of a learnable delay mechanism in the spiking graph neural network framework is a novel idea, blending insights from neuroscience and graph learning. • The paper presents theoretical guarantees for the stability of the training process, ensuring the model avoids gradient explosion or vanishing, which is an important contribution. >Weaknesses: • The authors did not present the learning process or distribution characteristics of the delay mechanism in the graph data. Relevant evidence would provide a clearer explanation. Other Comments Or Suggestions: The network architecture is shown in Figure 2, but it lacks a more detailed explanation. The integration of Spiking Neural Networks with the graph neural network approach should be explained more clearly. 
How does the SNN layer interact with the graph convolution layer, and what benefits does this bring to dynamic graph representation learning? Questions For Authors: 1. Does the delay mechanism perform differently on irregular graphs, such as those with sparse or heterogeneous structures? 2. Additionally, the paper adopts a Gaussian delay kernel—what was the rationale behind this choice? ## update after rebuttal Thank you for the detailed response, which effectively addressed my main concerns. I appreciate the clarity provided and am happy to update my score accordingly. I recommend Accept. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your meticulous review of our work and the valuable feedback provided. **Response to Experimental Designs or Analyses 1:** In our experiments, we compared eight methods across three datasets using two metrics. To assess the statistical significance of improvements by our proposed method over the baselines, we conducted Friedman tests. The results are:

|Metric |F-Value |P-Value|
|-|-|-|
|Ma-F1|18.1170|0.0204|
|Mi-F1|18.6072|0.0171|

All P-values are less than the significance level $\alpha = 0.05$, indicating that our method's average ranking is significantly different from the baselines for both metrics. **Response to Essential References Not Discussed:** We have made additions to the "Related Work" section: "GC-LSTM integrates LSTM into GCN to model the temporal evolution of node features. It captures temporal dependencies through hidden state transitions. In contrast, Delay-DSGN introduces a delay convolution kernel to explicitly parameterize propagation delays. DyRep utilizes temporal point processes to model the triggering intensity and time intervals of events. Its core lies in describing the probabilistic patterns of event occurrences rather than capturing the dynamic delay effects of information propagation." **Response to Weaknesses:** Please refer to the first response to Reviewer WWHS. **Response to Concerns:** * **Regarding zero-padding, convolution processes:** For example, at the current timestep, two neurons from the previous layer aggregate information to one neuron in the current layer. The inputs are spike trains from these two neurons, each containing activity information across four timesteps (the current timestep and three historical timesteps). Neurons exhibit different delays. Before convolution, the input sequences undergo left-side zero-padding (with padding length equal to $K_s-1$) to accommodate the convolution kernel size. 
Convolution kernel construction: The hyperparameter $K_s$ represents the size of the delay convolution kernel, and $K_s−1$ is the maximum delay time. The values in the delay convolution kernel follow a Gaussian distribution, with the center position located at $K_s−d−1$. Subsequently, the padded sequences are convolved with the delay-specific kernels constructed between each pair of neurons. This operation fuses weighted information across timesteps, generating a new output spike sequence that aggregates information from two neurons to one. * **Regarding the computational cost of larger delay windows:** This does not increase the computational cost. A larger delay window means a larger delay convolution kernel, which results in more left zero-padding, but the number of convolution computations remains unchanged. * **Rationale for Choosing the Gaussian Kernel:** - Learnable Temporal Shift: The parameter $μ$ controls the central position of the delay, enabling the model to explicitly learn the delay magnitude. The smoothness of the Gaussian kernel ensures gradient stability during training. - Adjustable Receptive Field: The variance parameter $σ$ modulates the receptive field of the delay kernel, allowing it to capture multi-step historical information. * **Network Architecture:** The dynamic graph is divided into fixed-interval snapshots. For each snapshot, second-order neighborhood sampling captures multi-hop dependencies, and first-order aggregation integrates local features. Features are encoded into spike representations using SNNs, converting continuous data into spike trains. Second-order aggregation incorporates broader context. In the Delay-SNN layer, the model dynamically adjusts the Gaussian delay kernel's center to capture varying temporal delays, generating node delay representations. Multi-timescale fusion integrates representations across timesteps, producing final embeddings used for downstream tasks. 
* **SNN-GNN Interaction and Benefits of SNNs for Dynamic Graphs:** After traditional GNN-based neighborhood aggregation, the threshold-triggering mechanism of SNNs replaces conventional nonlinear activation functions. Advantages of SNNs for Dynamic Graphs: SNNs inherently accumulate membrane potentials across timesteps, aligning naturally with the discrete timestep nature of dynamic graphs. At each timestep, node information is stored in the membrane potential of SNN neurons. This potential is carried over to the next timestep, preserving historical states and enabling the modeling of graph evolution. **Response to Theoretical Claims:** Thank you for your valuable feedback. Your insights have provided us with an important perspective. To further explore this issue, we plan to conduct more detailed experiments focusing on: - Evaluating our method's performance on dynamic graphs with diverse temporal patterns and topological structures. - Testing our theoretical findings with additional real-world datasets to ensure their validity and applicability in practical scenarios.
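As a toy illustration of the membrane-potential carry-over described in the response above, here is a generic leaky-integrate-and-fire (LIF) update in Python (a common textbook formulation, not the authors' exact neuron model; the decay constant and threshold are placeholder values):

```python
def lif_step(potential, current, decay=0.9, threshold=1.0):
    """One LIF update: leak, integrate the input, fire, and reset on a spike."""
    potential = decay * potential + current  # history carried via the potential
    spike = 1.0 if potential >= threshold else 0.0
    if spike:
        potential = 0.0                      # hard reset after firing
    return potential, spike

# Aggregated node input over 5 snapshots; the potential persists across them.
inputs = [0.4, 0.4, 0.4, 0.1, 0.9]
v, spikes = 0.0, []
for i_t in inputs:
    v, s = lif_step(v, i_t)
    spikes.append(s)
# The spike at the third step only fires because sub-threshold input
# accumulated over the two preceding snapshots.
```

The point of the sketch is that no single input here exceeds the threshold; the spike is produced by state inherited across timesteps, which is the property the rebuttal argues makes SNNs a natural fit for discrete dynamic-graph snapshots.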
Summary: This paper proposes Delay-DSGN, a dynamic spiking graph neural network that incorporates a delay convolution kernel, which dynamically adjusts the weight of information at different time steps. Through the delay convolution kernel, Delay-DSGN captures temporal dependencies and historical influences on node representations, which are then fed into LIF neurons for spike generation. After temporal modeling, Delay-DSGN combines node representations across time steps into a unified representation and finally uses regularization and cross-entropy loss to prevent gradient explosion and vanishing issues. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: The experimental design is well-structured, but I have concerns regarding the analysis. Supplementary Material: yes Relation To Broader Scientific Literature: A new exploration of spiking neural networks for dynamic graph learning. Essential References Not Discussed: no Other Strengths And Weaknesses: **Strengths** 1. This paper considers the historical information in neighborhood aggregation, mitigating historical information forgetting. 2. The delay convolution kernel uses learnable synaptic delays and weights so that information from different time steps participates in the convolution with different weights. **Weaknesses** 1. The motivation for delaying information propagation is not clearly and convincingly explained. 2. The fixed random delay model with randomly initialized decay values is unreasonable, since earlier information is more likely to be weighted lower than more recent information. 3. The novelty of this paper is rather limited. Apart from the design of the delay convolution kernel, Delay-DSGN is similar to standard SNNs. Moreover, the extensibility of the delay convolution kernel is restricted to SNNs. Other Comments Or Suggestions: For the formulation, three different symbols are used; please check Eqs. 4, 9, and 15. 
Questions For Authors: 1. When adding a new edge, why doesn't it immediately affect the representations of the two nodes that the edge connects? How can the delay representation ensure that it does not cause information latency, leading to inaccurate node representations? 2. Why does the fixed random delay model with randomly initialized decay values outperform the no-delay model, and why is it sometimes competitive with Delay-DSGN? 3. Since the delay convolution kernel has already captured the temporal dependencies and historical influences on node representations, why does Delay-DSGN still need to combine node representations from different time steps into a unified feature? 4. Why is the number of neurons set to a higher value in the no-delay model? This will lead to an unfair performance comparison between Delay-DSGN and the no-delay model. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your meticulous review of our work and the valuable feedback provided. **Response to Weaknesses 1 and Question 1:** When a new edge is added to a graph, it may indeed represent an immediate interaction demand. However, in reality, there is usually a time lag between "establishing a connection" and "producing an effect." For instance, in social networks, after user A posts content, user B may take several days to respond; in traffic networks, congestion propagation requires time. These phenomena indicate that delays in information propagation are an inherent property of dynamic graph evolution. Therefore, our motivation is to explicitly model these delays to more accurately capture the dynamics and temporal dependencies. Delay-DSGN adaptively learns the delay parameters $\mu_{ij}$ within the delay kernel, dynamically determining appropriate delay values. This ensures that the speed of information propagation adapts to different application scenarios while preventing excessive delays. **Response to Weaknesses 2:** Neither our model nor the fixed random delay model includes a decay value for random initialization. The key difference lies in whether the delay parameters are learnable or remain fixed after initialization. **Response to Question 2:** When the number of synaptic connections in a neural network layer far exceeds the number of possible discrete delay positions, randomly initialized delays can cover the entire effective time range. For example, with 10 possible delay positions and 1000 synaptic connections, all delay positions will be covered. Thus, the necessity of moving delay positions away from their initial state diminishes. In this way, a fixed random delay model can achieve good performance by optimizing connection weights to capture temporal characteristics. 
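The coverage argument in the response above (many synaptic connections vs. few discrete delay positions) can be checked with a quick simulation; this sketch is illustrative only, with the counts taken from the example in the text:

```python
import random

random.seed(0)
positions, connections = 10, 1000

# Assign each synaptic connection a random discrete delay position.
delays = [random.randrange(positions) for _ in range(connections)]
covered = len(set(delays))
# With 1000 draws over only 10 positions, every delay position is covered,
# so a fixed random initialization already spans the full delay range.
```

This is why, as the authors argue, a fixed random delay model can remain competitive: the weights can compensate because every useful delay is already represented somewhere in the layer.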
**Response to Weaknesses 3:** * The core innovation of Delay-DSGN lies in its first-ever deep integration of biologically inspired delay mechanisms with dynamic graph modeling, along with rigorous theoretical guarantees for model stability. Specifically, Delay-DSGN explicitly models multi-step dependencies using learnable Gaussian delay kernels, capturing the relationship between topological connection strengths and signal propagation delays in dynamic graphs. Additionally, it provides strict upper bounds for gradient propagation in the spatiotemporal domain driven by delay kernels, offering theoretical guidance for model parameter selection. Compared to traditional SNNs and GNNs, Delay-DSGN not only enhances biological fidelity and temporal processing for SNNs but also introduces a groundbreaking delay perspective to dynamic graph modeling methods. * Although current research mainly focuses on SNNs, the delay mechanism can be regarded as a type of temporal convolution method. This approach shows potential for extension to other types of dynamic models when dealing with time-series data. Future work will further explore the application scope of this mechanism. **Response to Question 3:** The delay convolution kernel primarily addresses the weighted aggregation of historical information within a single time step. However, topological changes in dynamic graphs often span multiple time steps. By integrating node representations from different time steps into a unified feature, it becomes possible to better capture long-term dependencies and complex topological evolutions. Secondly, this approach compensates for the sparsity of spike signals, mitigating the information loss caused by their sparse nature. **Response to Question 4:** Since Delay-DSGN introduces additional delay time parameters, to balance the total number of parameters, we increased the number of hidden neurons in the no-delay model. 
Additionally, we conducted experiments on a standard SNN model without adding extra neurons. The comparative experimental results are as follows:

| | Metric | No-Delay SNN | Standard SNN |
|---------|----------|:--------------:|:--------------:|
| DBLP | Ma-F1 | 71.00 | 70.88 |
| | Mi-F1 | 71.65 | 71.98 |
| Tmall | Ma-F1 | 59.34 | 58.84 |
| | Mi-F1 | 63.55 | 63.52 |
| Patent | Ma-F1 | 83.57 | 83.53 |
| | Mi-F1 | 83.55 | 83.48 |

**Response to Other Comments or Suggestions:**
- Equation (4) describes the node features processed with delay, which are subsequently used as input in Equation (5).
- We understand your questions about Equations (9) and (15) mainly concern the notation $w$, and we apologize for not providing sufficient explanations in the original manuscript. Here are the specific modifications:
- For Equation (9), $w_t$ represents the weight of the features at time step $t$, which is obtained through an element-wise multiplication of $Z_v^t$ and $w_t$, followed by a row-wise summation.
- For Equation (15), $w_{ij}$ represents the weight between neuron $i$ and neuron $j$.
--- Rebuttal Comment 1.1: Comment: Thank you for your response. I have carefully reviewed your response and intend to increase my rating to 2 based on clarifying the motivation for the delay. AQ-1: You mentioned that "the delay mechanism can be regarded as a type of temporal convolution method." Could you clarify what you mean by "temporal" convolution? How is the term "temporal" defined in this context? AQ-2: Does this paper capture long-term dependencies? --- Reply to Comment 1.1.1: Comment: We are grateful for your feedback and the higher evaluation of our work. **Response to Question 1:** The data we process is time-series data, where neurons exhibit corresponding spike activity (0 or 1) at each time step. Arranging these spike activities in the order of time steps constitutes the spike sequences mentioned in the text. 
$$\widetilde{s_v^{j,\ t}} = \left[0,\ 0,\ \ldots,\ 0,\ s_v^{j,t}\right]$$

The delayed convolutional kernel is defined as:

$$k_{ij}\left[n\right] = w_{ij}\exp\left(\frac{-\left(n-\left(K_s-d_{ij}-1\right)\right)^2}{2\sigma^2}\right)$$

Here, $d_{ij}$ represents the delay amount, and $K_s-d_{ij}-1$ corresponds to the index in the convolutional kernel that assigns high weight. Temporal convolution refers to the process where the convolutional kernel slides over the spike sequence (along the time axis) to extract features, resulting in the input to the SNN:

$$I_v^{j,\ t} = k_{ij} \ast \widetilde{s_v^{j,\ t}}$$

This process is analogous to 2D convolution in image processing, except that in our work, it is performed on one-dimensional time-series data. **Response to Question 2:** Delay-DSGN successfully captures long-term dependencies. On the DBLP dataset, we observed the weights of 10 features across 27 time steps. The results indicate that the model assigns higher weights at earlier time steps, as shown in the table below. Moreover, compared to SpikeNet and Dy-SIGN, which only use information from the previous time step to update the membrane potential, our delayed kernel aggregates multi-step historical information to update the membrane potential of the current neuron. The results of comparative experiments also demonstrate that our method outperforms these two traditional SNN+GNN approaches in terms of performance. 
| Time Step | feature1-weight | feature2-weight | feature3-weight | feature4-weight | feature5-weight | feature6-weight | feature7-weight | feature8-weight | feature9-weight | feature10-weight |
|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|------------|
| 1 | 0.139 | 0.205 | 0.247 | -0.058 | 0.102 | 0.034 | 0.252 | 0.146 | 0.154 | 0.090 |
| 5 | 0.114 | 0.153 | 0.093 | 0.131 | -0.011 | -0.163 | -0.072 | -0.013 | 0.171 | 0.031 |
| 10 | 0.054 | 0.026 | 0.012 | 0.058 | 0.022 | 0.050 | 0.027 | 0.062 | 0.021 | 0.068 |
| 15 | -0.008 | -0.135 | 0.028 | -0.045 | 0.012 | 0.130 | -0.017 | 0.007 | 0.034 | -0.218 |
| 20 | 0.077 | -0.087 | -0.134 | 0.009 | 0.056 | 0.076 | -0.076 | -0.076 | 0.029 | -0.103 |
| 25 | 0.143 | 0.123 | 0.276 | 0.031 | -0.073 | 0.106 | -0.180 | -0.158 | -0.001 | 0.094 |
| 27 | 0.349 | 0.421 | 0.370 | -0.179 | 0.155 | 0.244 | -0.070 | -0.126 | -0.184 | 0.175 |

Thank you again for your recognition of our work.
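The left zero-padding and temporal convolution explained in this reply can be sketched in a few lines of NumPy (illustrative only, not the authors' code; the function names and parameter values are placeholders):

```python
import numpy as np

def delay_kernel(w_ij, d_ij, K_s, sigma):
    """Gaussian delay kernel of size K_s with its peak at index K_s - d_ij - 1."""
    n = np.arange(K_s)
    return w_ij * np.exp(-((n - (K_s - d_ij - 1)) ** 2) / (2 * sigma ** 2))

def delayed_input(spikes, w_ij, d_ij, K_s, sigma):
    """Left zero-pad the spike train by K_s - 1, then slide the kernel over it."""
    padded = np.concatenate([np.zeros(K_s - 1), spikes])
    k = delay_kernel(w_ij, d_ij, K_s, sigma)
    # np.convolve flips the kernel (true convolution, matching the * operator);
    # mode="valid" keeps the output length equal to the input spike train.
    return np.convolve(padded, k, mode="valid")

spikes = np.array([1.0, 0.0, 1.0, 1.0])  # activity over 4 timesteps
out = delayed_input(spikes, w_ij=1.0, d_ij=2, K_s=4, sigma=0.5)
# one weighted (delayed) input value per timestep
```

Because of the left padding of length $K_s-1$, each output value only mixes the current and earlier timesteps, so the kernel weights historical spikes without looking into the future.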
Nearly Optimal Sample Complexity for Learning with Label Proportions
Accept (poster)
Summary: This paper investigates the problem of Learning with Label Proportions (LLP), where training data is provided in groups (or bags) rather than individual instances, and only aggregate label proportions are available for each group. The goal is to infer the original instance-level labels using the provided proportion information. To tackle this problem, the paper adopts a statistical learning perspective and aims to minimize the excess risk under both realizable and non-realizable settings. The authors focus on empirical risk minimization (ERM) and stochastic gradient descent (SGD) and propose a variance reduction technique that theoretically achieves a tighter regret bound, improving the learning performance in LLP. The effectiveness of the proposed method is evaluated through experiments on CIFAR-10, MNIST, and Higgs5 datasets, demonstrating its superiority over existing baseline approaches. Claims And Evidence: Most of the claims are provided with evidence. Methods And Evaluation Criteria: No. Theoretical Claims: I have roughly read the proofs. Experimental Designs Or Analyses: No. Supplementary Material: I did not carefully read the supplementary material. Relation To Broader Scientific Literature: Theoretical contribution on a new framework for tackling LLP. Essential References Not Discussed: No additional references needed. Other Strengths And Weaknesses: Strengths - Theoretical Contributions: - The paper establishes solid theoretical guarantees for LLP, particularly by addressing the necessity and sufficiency of hypothesis bias in LLP learning. - The theoretical results provide valuable insights into how hypothesis bias impacts learning performance in LLP settings, which is crucial for both model design and practical implementation. - Generalizability of the Framework: - The proposed approach is not restricted to a specific learning paradigm but is applicable to both empirical risk minimization (ERM) and stochastic gradient descent (SGD). 
- This generalizability enhances the relevance and applicability of the proposed variance reduction technique, making it adaptable to different optimization and learning strategies. - Empirical Effectiveness: - The experimental results show significant improvements over baseline methods, indicating the practical viability of the proposed approach. - The study evaluates the method on diverse datasets, including CIFAR-10 (image classification), MNIST (digit recognition), and Higgs5 (scientific data analysis), demonstrating its broad applicability. Weaknesses and Areas for Improvement - Clarity and Writing Quality: - The paper’s writing could be improved, as there are several typos and unclear descriptions that affect readability. - Some key arguments, especially regarding the theoretical framework, could be more clearly articulated. - Motivation for the Proposed Learning Framework: - The paper lacks a clear motivation for why this particular learning framework is necessary. - While the authors demonstrate performance improvements, it remains unclear what specific challenges in existing LLP methods necessitate this new approach. - A more detailed discussion on how the proposed variance reduction technique uniquely addresses these challenges would strengthen the motivation. - Justification for the Choice of ERM and SGD: - The authors consider both ERM and SGD as realizations of the LLP problem but do not sufficiently justify this choice. - ERM is a learning principle, whereas SGD is an optimization algorithm—this distinction should be discussed more explicitly. - It is unclear why the proposed variance reduction technique is equally applicable to both. Providing additional justification or empirical comparisons between different optimization techniques would help clarify this choice. - Practical and Real-World Relevance: - The realistic implications of this study are not fully explored. 
- While the theoretical and empirical results are promising, the paper does not clearly articulate how LLP is applied in real-world scenarios. - Including concrete applications (e.g., medical diagnosis, fraud detection, or social science studies where label proportions are more accessible than instance-level labels) would strengthen the case for the practical impact of the work. Other Comments Or Suggestions: Please check the weaknesses. Questions For Authors: If the authors can address my concerns, I will reconsider the score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: - “... several typos and unclear descriptions… ”. We would appreciate pointers to typos, unclear descriptions, and key arguments that the reviewer found unclear so that we can improve them. Thank you! - On Motivation. As discussed in the right column of lines 23 – 32, a key modern motivation for the learning from label proportions framework is training conversion models for online advertising. Advertisers (or AdTechs working on their behalf) train models for predicting whether a user will convert (i.e., sign up for an account, buy a product, etc) if they click an advertisement. Since the ad click and conversion happen on different websites, collecting training data requires linking user behavior across the two sites. Historically, third party cookies or link decoration has made this straightforward. However, as web browsers move towards improving user privacy, there are several proposed APIs that allow for collecting cross-site data *in aggregate*. Many of these APIs can be used to directly measure the label proportion within a bag of examples. We view LLP as a principled technique for making the best use of these APIs for training conversion models. We will add more language along these lines to the introduction. - On challenges. The key challenge that we seek to address is to characterize the sample complexity of LLP: how much more data is required to train a model in the LLP framework than when all labels in the training set are available in the clear. We answer this question for the case of squared loss binary classification by giving the first (nearly) matching upper and lower bounds on the sample complexity of LLP. Now, given the fact that our loss estimate is almost unbiased (unlike those in Busa-Fekete et al, and Li et al.), the way in which the sample complexity can be improved over existing works is to improve the way the loss estimates concentrate at their expectation while keeping bias under control. 
It is intuitively clear that any method that selects a model by minimizing the estimated losses will be more reliable when the variance of this estimate is reduced (if the bias is also small). Our analysis leverages the lower variance of our estimates to derive improved sample complexity and convergence rates that are not possible with the methods of Busa-Fekete et al, and Li et al. Also, note that the theoretical best estimator is not even the one alluded to in Thm 4.1, but the *pairwise* (but not very practical) estimator of Thm 3.1, which is again a novel contribution of this paper. We will add text to the paper in order to better clarify these aspects. - Justification of ERM and SGD + “It is unclear why the proposed variance reduction technique is equally applicable to both.” We do not fully understand these comments. We focus on ERM and SGD because these are the most prominent learning paradigms. ERM is a learning principle – true – but the interpretation of our results should be rather clear: Any algorithm that performs ERM and operates with the loss function we designed for LLP enjoys the sample complexity guarantees we claim for ERM in Thm 4.1. Our results for ERM deal with the information-theoretic limits of LLP independently of increased computational difficulty in the underlying optimization problem. Our results for SGD study the impact of LLP on both the sample complexity and convergence rate of SGD for the case of linear hypotheses. In both cases, it is the design of an ad hoc loss function that enables improved results, not the learning algorithm itself. There are, on the other hand, some nuances, as this loss assumes prior knowledge of quantities that are usually unknown, and can only be estimated from data. We provide one such solution in Appendix B.2 in connection to ERM, and claim that similar fixes are possible for Sect. 3 (the pairwise estimator) and Sect. 5 (the SGD algorithm). - Practical and Real-World Relevance. 
As discussed above, we are mainly motivated by an extremely practical problem: training advertising conversion models from aggregate data collected via new privacy-preserving browser APIs. Improvements to the sample complexity of LLP in this context can directly lead to improved model performance. We will describe this more clearly in the introduction of the paper. As for experiments related to such data, this is a legitimate point that was also made by Reviewer YpnD. Unfortunately, most modern large-scale aggregated datasets (originating from online advertising) are proprietary. That led us to simulate aggregation environments on standard benchmark datasets. Yet, we believe our experimental environment does not make our experiments unrealistic. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my concerns; some of the replies helped me understand better. Hence, I am willing to raise my score but still stay borderline.
Summary: This paper provided the optimal sample complexity of LLP under square loss. Based on this analysis, the authors proposed an improved squared loss, which was reported to enjoy better classification accuracy on several binary classification datasets under LLP settings. Claims And Evidence: The claims made in the submission are generally supported by evidence and proofs. Methods And Evaluation Criteria: It is OK. However, due to the limitation to binary classification, many SoTA LLP solvers and dataset settings cannot be considered. Theoretical Claims: Yes, I have roughly reviewed the correctness of all theoretical proofs provided in the paper. Experimental Designs Or Analyses: Because the performance improvement of the proposed method is only evident for large bag sizes, I think its contribution to LLP algorithms is limited. Supplementary Material: No. There is no supplementary material uploaded. Relation To Broader Scientific Literature: This work is a seminal exploration of improving LLP under large bag sizes, which is a promising focus in practice. Essential References Not Discussed: An important seminal work [1] on multiclass LLP should be cited in the Introduction. [1] Liu J, Wang B, Qi Z, Tian Y, Shi Y. Learning from Label Proportions with Generative Adversarial Networks. In NeurIPS 2019. Other Strengths And Weaknesses: **Strengths:** The performance improvement of LLP for large bag sizes is significant. **Weaknesses:** 1. As we already have several works on binary classification, an analysis of multi-class scenarios would be worthwhile and desirable. 2. The squared loss is not common in current work. Analyzing other losses (e.g., the CE loss) would improve the practicability of this work. Other Comments Or Suggestions: 1. The variance comparison should be displayed in a clearer way (e.g., by providing quantitative results). 2. In fact, on some datasets, the improvement becomes significant only under the largest bag size, i.e., $k=2^9$. 
It would be better to show results under additional large bag sizes where the proposed method outperforms existing methods. Questions For Authors: 1. Can you discuss how your work fits the situation of varying bag sizes (i.e., when the bag size is not the same across bags)? Maybe a bounded interval for the bag size is a good assumption for further discussion. 2. In terms of hypothesis complexity, the discussion of linear predictors in Section 5 seems too restricted. How can you extend the results to more complex hypothesis spaces? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: - Extensions to Multi-class and Other Losses. We agree with the reviewer that extensions to the multiclass setting and losses other than the squared loss would be of interest. Having said that, our work is the first to provide matching upper and lower bounds on the sample complexity of learning from label proportions for binary classification with squared loss. In other words, if we go outside the (easy) noise-free realizable setting considered in Li et al. (2024), all prior work has suboptimal sample complexity even in this relatively simple setting. Our ongoing work explores extensions of our results to the multiclass setting and other losses. We expect the extension to multiclass to be relatively easy (but please see also the response to Reviewer Rv2q on this point). On the other hand, we feel that the extension to losses other than square loss will be more challenging (if we want to retain the minimum variance property of the estimators), thus belonging in a follow-up paper. - Essential References. Thank you for bringing to our attention the missed reference. We will include a citation in the introduction! - Varying Bag Size. This point was also raised by Reviewer Rv2q. Since our loss function produces an (almost) unbiased estimate of the per-example squared loss from a single bag, applying it to a collection of bags of varying sizes will still result in an overall (almost) unbiased estimate of the loss. And, compared to prior work, the variance of our per-bag estimates will all scale better with the size of the bags, so we expect to have lower variance and improved sample complexity even when the bags have varying sizes. Our restriction to fixed-size bags is only for clarity of presentation. We will add a comment to the paper on this point. As for the resulting sample complexity, please see the response to Reviewer Rv2q (“On uniform bag sizes”). - Variance Comparison. 
Following the reviewer’s suggestion, below is a simple scenario where the difference in variance between our estimator and Li et al.’s is readily assessable. Consider the simple setting when $x$ is uniform in $[0,1]$, $h^*(x) = x^2$, and $\hat h(x) = x$. We compute the label marginal $p$ exactly, and estimate $E[h(x)]$ in the same way as our learning experiments using SGD batches of size 1024 and a total of $n=2^{20}$ examples. From the table below, we see that the variance of our estimate stays roughly constant, while that of Li et al. grows as the bag size increases.

| bag size | Ours | Li et al. |
|-----------:|----------:|------------:|
| 2 | 0.0336042 | 0.0593875 |
| 4 | 0.0363739 | 0.0931322 |
| 8 | 0.037708 | 0.155778 |
| 16 | 0.0385156 | 0.282495 |
| 32 | 0.0392801 | 0.540059 |
| 64 | 0.0414068 | 1.06511 |
| 128 | 0.0478653 | 2.21354 |
| 256 | 0.0565216 | 4.84059 |

- Larger Bag Sizes: Following the reviewer’s suggestion, the following table reports final test accuracy for each method on each dataset using bags of size $k = 1024$. The number in the parenthesis is one standard error in the mean over 10 repetitions. The number of epochs is the same as the longer training run for each dataset, and we increased the SGD batch size to 2048 so that each batch contains more than one bag (which is necessary for our estimates of $E[h(x)]$). Otherwise, the experimental setup is identical to that of Section 6. On MNIST, Cifar-10, and Higgs, we see a large accuracy gap using our method compared to baselines, and on UCI Adult we slightly outperform all baselines, further supporting our theory.

| | mnist | cifar10 | uci adult | higgs |
|:----------|:-------------------|:-------------------|:-------------------|:-------------------|
| Ours | 0.876 (+/- 0.0047) | 0.796 (+/- 0.0115) | 0.770 (+/- 0.0024) | 0.628 (+/- 0.0013) |
| Li et al. | 0.789 (+/- 0.0158) | 0.628 (+/- 0.0182) | 0.710 (+/- 0.0445) | 0.602 (+/- 0.0024) |
| VanillaCE | 0.847 (+/- 0.0063) | 0.665 (+/- 0.0093) | 0.762 (+/- 0.0000) | 0.570 (+/- 0.0039) |
| VanillaSQ | 0.848 (+/- 0.0066) | 0.663 (+/- 0.0084) | 0.762 (+/- 0.0000) | 0.568 (+/- 0.0041) |
| EasyLLP | 0.783 (+/- 0.0158) | 0.617 (+/- 0.0174) | 0.709 (+/- 0.0485) | 0.603 (+/- 0.0033) |

- Hypothesis Complexity in Section 5 is too restrictive. We agree that restricting to linear hypotheses in Section 5 is restrictive. Still, remember that in order for SGD to have global convergence guarantees (which are the focus of this paper), the function $w \rightarrow E[\ell(h(x; w), y)]$ should be convex in $w$ (here $\ell$ is any convex loss function---in our case, the squared loss), and unless $h$ is a linear function in $w$, it is unlikely that this will be the case. And indeed, assuming a linear prediction rule is standard in convex, globally convergent analyses of SGD in Machine Learning contexts.
Summary: This paper addresses the challenge of Learning from Label Proportions (LLP) by analyzing the i.i.d. bag sampling setting. The authors provide improved sample complexity bounds in both realizable and non-realizable settings, using variance reduction methods, surpassing prior results. They further demonstrate the practical benefits of their approach through experiments, showing performance gains over baselines, especially in the critical regime of large bag sizes. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: The discussion of related work in Section 1.1 is appropriate. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: This paper is highly readable, contributes significantly to the field of LLPs, and presents novel results for the ICML community. I recommend acceptance, with the following suggestions for improvement. 1. The results rely on certain assumptions, such as the bag size assumption in Theorem 3.1. 2. The setting assumes a uniform bag size, which limits its practical applicability. 3. The results are limited to the squared loss. However, the square loss is often not ideal in practice. Exploring other loss functions, such as the log loss, would be more beneficial. 4. The observed gains compared to Li et al. (2024) are marginal for the majority of bag sizes. Could the authors discuss this observation? Other Comments Or Suggestions: N/A. Questions For Authors: 1. Could you please elaborate on the technical novelty and the challenges you faced when proving improved bounds, especially in comparison to Busa-Fekete et al. (2023) and Li et al. (2024)? 2. Could you elaborate on the challenges involved in generalizing your framework to multi-class classification? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: - On uniform bag sizes. We present our results using a uniform bag size for simplicity, but in principle the loss estimate we propose here can be applied even in situations where we have bags of different sizes, and we should still expect to see improvements due to the reduced variance at each bag. The corresponding sample complexity will replace the dependence on the uniform bag size $k$ (see lines 262-271, right column) with a dependence on the average bag size $(k_1 + k_2 + \dots + k_m)/m$, where $k_j$ is the size of the $j$-th bag. We will add this observation to the paper. - On the use of square loss. We agree that extensions to other losses would be beneficial (and this is the focus of our ongoing work), but we found the squared loss setting already quite challenging. - Observed gains compared to Li et al. (2024) marginal for the majority of bag sizes. The variance of our loss estimates has a substantially improved dependence on the bag size compared to Li et al.’s loss estimate. We expect the performance of both loss estimates to be similar for small $k$ (when the variances are comparable), and for very large $k$ but fixed $n$ (when the bags are so large that the LLP algorithms simply do not have enough signal). But for large values of $k$, where learning is still possible, we expect our loss to perform better. Please see also the extra experiments we ran in response to Reviewer 2x73, where we show that the gap between our method and Li et al.'s widens substantially as $k$ grows (e.g., for $k = 1024$). - On the technical novelty and the challenges in comparison to Busa-Fekete et al. (2023) and Li et al. (2024). The technical novelty is multi-faceted. 1. The main technical novelty is recognizing that the methods proposed in both Busa-Fekete et al. and Li et al. cannot provide the best sample complexity as a joint function of the regret bound $\beta$ and the bag size $k$ (this sample complexity is akin to a “privacy vs.
utility” tradeoff – the bigger $k$ the more private the labels), unless something substantially different is tried in the way the estimators are designed. The variance reduction methods we proposed based on clipping and centered variables come at the price of making our estimators *biased*, and we need to keep both bias and variance under control. For square loss, we have shown that this is the best one can hope for (up to log factors). 2. Further, the fast rates claimed in Li et al. only apply to the easy situation where the best in class has *zero* loss (thereby ruling out the possibility of noise in the labels) – this is more restricted than the standard *realizable* scenario, where the Bayes optimal predictor coincides with the best in class hypothesis. This requires a more careful analysis than Li et al.'s when estimating the unknown parameters $E[h(x)]$ and $p$ (see Appendix B.2). 3. Moreover, we show that, in principle, there is an even better algorithm for LLP (even for square loss), which is based on pairwise comparisons and eliminations, described in Sect. 3 (for the two-function case). That algorithm is better in that it does not have extra $\log k$ factors in its sample complexity bounds (at the cost of assuming $k$ large), but it is clearly not practical, since it requires doing comparisons for all pairs of functions in the hypothesis space. In passing, we also improve Li et al.'s lower bound by log factors (again, in the two-hypothesis case). 4. Last but not least, we have shown that the sample complexity improvements are tangible, in that they are reflected in our experimental evidence. - Extension to multi-class classification. We do not expect the extension to multiclass to be hard. The main technical challenge is to come up with an estimator whose variance will depend on the number of classes *linearly* (or even sublinearly, if at all possible), instead of quadratically.
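The clipping idea mentioned in point 1 of the rebuttal can be illustrated with a toy simulation. This is not the authors' estimator, only a generic sketch of the bias/variance tradeoff they describe: clipping a heavy-tailed unbiased estimator introduces a small bias but can cut the variance dramatically. The distribution of `X`, the clipping threshold, and the sample size are all illustrative assumptions.

```python
# Toy illustration (not the paper's estimator): X = 100 with probability 0.01
# and 0 otherwise, so E[X] = 1 but Var(X) = 99. Clipping at c = 10 biases the
# mean down to ~0.1 while shrinking the variance to ~0.99.
import random

random.seed(0)

def sample():
    return 100.0 if random.random() < 0.01 else 0.0

def mean_and_var(values):
    m = sum(values) / len(values)
    v = sum((x - m) ** 2 for x in values) / len(values)
    return m, v

n = 200_000
raw = [sample() for _ in range(n)]
clipped = [min(x, 10.0) for x in raw]  # clip at c = 10

raw_mean, raw_var = mean_and_var(raw)        # unbiased, large variance
clip_mean, clip_var = mean_and_var(clipped)  # biased, small variance

print(raw_mean, raw_var)
print(clip_mean, clip_var)
```

The point of the sketch is only qualitative: an analysis using the clipped estimator must control the (known, bounded) bias in exchange for the much smaller variance, which is the tradeoff the rebuttal says drives the improved sample complexity.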
Summary: The paper presents a theoretical analysis for the problem of learning with label proportions (LLP) where training examples are grouped into “bags” and only the aggregate label proportion of each bag is observed (rather than individual labels). The authors propose a theoretical analysis of LLP under the square loss, proving nearly optimal sample complexity results. Algorithmically, the paper introduces carefully designed variants of standard learning methods (Empirical Risk Minimization and Stochastic Gradient Descent) equipped with variance-reduction techniques. ## update after rebuttal Thanks to the authors for the clarifying answers. I am still inclined to keep the recommendation to weak accept, as the paper is still largely suited for a journal publication given that it is largely based on the proofs in the appendix and requires appropriate scrutiny. Claims And Evidence: Overall, the paper's claims seem generally reasonable. Some of the points as mentioned below are unclear: Overall, the paper is quite dense, and understanding it is heavily dependent on having to read the appendix, which contains long proofs. It is somewhat unreasonable to expect to read the appendix along with many other papers to review in the allocated time, with other constraints. In that sense, the paper should provide intuition for the main results in the main body, which it provides only partially, and hence the paper is largely suitable for a journal. On the other hand, I tried to check the claim of the scenario when $\Delta$ and $\beta$ are unknown (lines 190-195), and it seems there isn’t any proof in B.2. Also, I am not sure about the claims in lines 228-232: how can one estimate $E[h(x)]$ and $E[h^*(x)]$ from unlabeled data? Even in the standard supervised setting, when $h^*$ is unknown, how could it be possible to make such estimates? In the given settings, when bag size becomes larger, this estimate would become worse in my opinion, so these assumptions seem quite strong.
The sample complexity result of the realizable case, $O(k/\beta)$, is not of much practical importance. In the non-realizable scenario, the difference compared to previous results is not very significant: $O(k^2/\beta^2)$ in previous work vs. $O(k/\beta^2)$ in this work. However, this matters only for large bag sizes, in which case the estimates of $E[h(x)]$ and $E[h^*(x)]$ would become worse. Methods And Evaluation Criteria: The methods proposed in this paper are appropriate for the LLP problem and align with the theoretical claims. The authors build on standard learning algorithms – ERM and SGD – modifying them to handle bag-level supervision. Theoretical Claims: Beyond the points raised above (in claims and evidence), the paper’s theoretical contributions seem reasonably sound. It would be helpful if the authors could clarify the above. Experimental Designs Or Analyses: The experimental setup in the paper is generally reasonable, comparable to those used in previous related works mentioned in the paper, and appropriate for evaluating an LLP method. Supplementary Material: Only skimmed through. Relation To Broader Scientific Literature: The paper seems to have a good discussion of relevant existing methods. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: Strengths 1. Significance of Problem: The paper addresses an important problem setting (LLP) that is highly relevant for privacy-preserving machine learning. 2. Practical Algorithm and Empirical Validation: Unlike some theory-heavy papers, this work doesn’t stop at theory. The empirical results, showing improved accuracy with fewer samples, underscore that the theory has some practical merit. 3. Clarity and Organization: The paper is generally well-written and structured logically. The abstract clearly states the contributions, and the introduction provides sufficient background and motivation (mentioning real-world scenarios, which helps the reader care about LLP).
Weaknesses 1. Incremental Aspects: A potential weakness is that the paper’s main theoretical claim, while valuable, is somewhat incremental with respect to the very recent works of Li et al. (2024) and Busa-Fekete et al. (2023). 2. Focus on Square Loss: The analysis is done under square loss, which is a convenient surrogate for classification. This is a limitation in the sense that the theoretical guarantees are for squared error/regret, not directly for 0-1 classification error. It’s possible to have a small square loss but still misclassify some fraction of points. 3. Experimental Limitations: The empirical results are mostly on simulated LLP scenarios derived from standard datasets. The paper does not report results on an actual real-world aggregated-label dataset. 4. Clarity in Technical Sections: While the paper is overall well-written, the theoretical sections are dense and may be challenging for readers not deeply versed in learning theory. Other Comments Or Suggestions: NA Questions For Authors: As mentioned above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: - Claim in LL. 190-195 when $\Delta$ and $\beta$ are unknown + absence of proof in Appendix B.2. As quickly mentioned in L. 195-196, Appendix B.2 contains a proof of a similar statement that applies indeed to a harder situation (the one in Thm 4.1), where $E[h(x)]$ and $E[h^*(x)]$ are unknown. The idea here was just to point out that an argument similar to (but simpler than) the one contained in Appendix B.2 applies to the scenario of Thm 3.1. We will better clarify this in the paper. - Claims in LL. 228-232 on how to estimate $E[h(x)]$ and $E[h^*(x)]$ from unlabeled data + “when bag size becomes larger, this estimate would become worse”. First, our claim contains a typo, and we apologize for that. Whereas $E[h(x)]$ only needs unlabeled data to be estimated, our estimate for $E[h^*(x)]$ makes use of the label proportions (that is, the aggregate labels). To estimate $E[h(x)]$, first observe that LLP does not hide the $x$ within each bag from us, so we have many i.i.d. feature vectors to evaluate $h$ on. This observation also applies to the estimation of $\Delta$ and $\beta$ in Thm 3.1. It is important to observe that the accuracy of these estimates only depends on the total number of samples $n$, rather than the total number of bags $m$. So, no, the estimate does not become worse when the bag size gets larger. To estimate $E[h^*(x)]$, we use the fact that for a bag with label proportion $\alpha$, we have that $E[\alpha] = E[(y_1 + \dots + y_k)/k] = E[(h^*(x_1) + \dots + h^*(x_k))/k] = E[h^*(x)] = p$. So, by averaging the $\alpha$ values across many bags, we get an accurate estimate of $E[h^*(x)]$. Again, it is important to observe that this estimate does not degrade as the bag size $k$ gets larger, as its accuracy only depends on the number of samples $n$, instead of the number of bags $m$. Another way to see this is that the LLP data provides a collection of $m$ i.i.d. sums $\sum_i y_i$, each made up of $k$ terms. Now, an i.i.d.
sum of i.i.d. variables just gives a bigger sum of i.i.d. variables, so we are again able to accurately estimate $E[h^*(x)]$ with an accuracy that only depends on the total number of samples $n = m\times k$, instead of the total number of bags $m$. The way we made this work in Appendix B.2 is to simply separate the training set into two independent subsets, the first one for training and the second one for estimating $E[h(x)]$ and $E[h^*(x)]$. The accuracy at which $E[h(x)]$ and $E[h^*(x)]$ are estimated depends on $n$, not on $m$. So, even in this case, at a fixed $n$, the estimate does not become worse as the bag size gets larger. We will make all the above clearer in the main body of the paper. - On the significance of the bound improvements, $k^2/\beta^2$ vs. $k/\beta^2$, etc. We have to respectfully disagree on this point. It is just the large bag size scenario that matters the most in practice. You may view this as akin to the interplay between privacy (the bigger $k$ the more “private” the labels are) and utility (the regret bound $\beta$). Please see also the response to Reviewer UVyu about motivations. - On incrementality of the analysis. What we solved here was essentially stated as an open question by both Busa-Fekete et al. (2023) and Li et al. (2024) – see, e.g., the “Discussion” section in Li et al. This by itself should perhaps suggest that our work cannot be considered incremental. Technically, the sample complexity analysis is fairly different from Busa-Fekete et al., and closer to Li et al., but more involved, due to the effort to reduce the variance. - On the focus on square loss and the fact that squared error/regret does not directly translate into 0-1 classification error. Observe that we allow our models $h$ to have output in the interval $[0,1]$. As a special case, we can restrict the predictions of the models to be $0$ or $1$, making the square loss equivalent to the zero-one loss.
Then our loss estimates allow one to estimate directly the zero-one loss of the model. We do not emphasize this in the paper because the zero-one loss is difficult to optimize computationally. But in fact, our Thm 4.1 about ERM seamlessly applies to 0/1 loss regret as well. - Empirical results mostly on simulated LLP scenarios. We agree that including datasets with natively aggregated labels would be interesting. Unfortunately, most modern large-scale aggregated datasets (originating from online advertising) are proprietary. Yet, we do not feel that simulating aggregation on standard benchmark datasets makes the experiments unrealistic. - On denseness and readability. We will work on improving the readability of the technical sections in the paper. Thank you!
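The bag-averaging argument in the rebuttal above (averaging label proportions $\alpha$ to estimate $p = E[h^*(x)]$, with accuracy governed by $n$ rather than $m$) can be checked with a small simulation. The Bernoulli label model, the value of $p$, and the bag sizes below are illustrative choices, not taken from the paper.

```python
# Sketch: the mean of the per-bag label proportions estimates p with an
# accuracy that depends on the total sample count n = m * k, not on the
# number of bags m, so small and large bags give comparable accuracy.
import random

random.seed(1)

def estimate_p(n, k, p):
    m = n // k  # number of bags
    alphas = []
    for _ in range(m):
        labels = [1 if random.random() < p else 0 for _ in range(k)]
        alphas.append(sum(labels) / k)  # label proportion of one bag
    return sum(alphas) / m

p = 0.3
n = 1 << 17                         # 131072 samples in total
small_bags = estimate_p(n, 4, p)    # many small bags
large_bags = estimate_p(n, 256, p)  # few large bags
print(small_bags, large_bags)       # both close to 0.3
```

Both estimates have standard deviation $\sqrt{p(1-p)/n}$ regardless of how the $n$ samples are split into bags, matching the rebuttal's claim that the estimate of $E[h^*(x)]$ does not degrade as $k$ grows at fixed $n$.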
Plausible Token Amplification for Improving Accuracy of Differentially Private In-Context Learning Based on Implicit Bayesian Inference
Accept (poster)
Summary: The paper introduces Plausible Token Amplification (PTA), a novel approach designed to improve the accuracy of Differentially Private In-Context Learning (DP-ICL) by refining the process of generating differentially private synthetic demonstrations. The paper also presents a theoretical guarantee of the distinguishibility of the latent concepts when noises are injected into the next-token-prediction distribution to ensure DP properties. The proposed PTA is aimed to make the latent concepts more distinguishable, thus improving the performance of generated DP demonstrations. Empirical effectiveness has been shown on several text classification datasets. Claims And Evidence: Most of the claims are well-supported. I have a few concerns about the experimental results, which aim to show the superiority of PTA: 1. DBPedia has significantly higher performance without the proposed KL divergence, which is an essential part of the proposed PTA. There is no explanation for this. 2. Only one DP-ICL baseline is included. Not sure if there could be more baselines to compare. Methods And Evaluation Criteria: Yes. Though the datasets could be updated to more recent/relevant ones like MMLU. Theoretical Claims: The proofs are not fully/carefully checked. The claim in Equation (12) is pretty hand-waving without any analysis of the closeness of the approximation. The whole proposed algorithm is built on this equation. Experimental Designs Or Analyses: I have a few concerns about the experimental results, which aim to show the superiority of PTA: 1. DBPedia has significantly higher performance without the proposed KL divergence, which is an essential part of the proposed PTA. There is no explanation for this. 2. Only one DP-ICL baseline is included. Not sure if there could be more baselines to compare. Supplementary Material: I reviewed some parts of the proofs and found them reasonable. 
Relation To Broader Scientific Literature: The paper proposes a new DP in-context demonstration generation algorithm that performs better than the baseline. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is a nice combination of theoretical analysis and theory-inspired real-world algorithm. Other Comments Or Suggestions: N/A Questions For Authors: 1. Why does DBPedia have significantly higher performance without the proposed KL divergence? 2. Are there any other DP-ICL baselines that you can compare PTA with? 3. Can you provide some analysis on Equation (12) and the effect of having this approximation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Interpretation of numerical experimental results > Reviewer comments (Claims And Evidence, Questions For Authors): > 1. DBPedia has significantly higher performance without the proposed KL divergence, which is an essential part of the proposed PTA. There is no explanation for this. > 2. Why does DBPedia have significantly higher performance without the proposed KL divergence? Thank you for pointing this out. We agree that the performance degradation on DBPedia when using KL divergence requires clarification. As shown in Equation (11), PTA is fundamentally designed to amplify tokens distinctive to the ground-truth concept. The KL divergence term is introduced as an auxiliary regularizer to ensure that the modified token distribution does not deviate excessively from the original LLM next-token probability distribution. While not the core objective, this regularization can help stabilize training and preserve plausibility, contributing to improved performance on datasets such as AGNews and GINC. However, as with many regularization techniques, the KL term does not consistently improve accuracy. In some cases, it may act as a constraint that limits useful adaptation to the target distribution. We suspect this effect may be present in DBPedia, though further analysis would be needed to pinpoint the exact cause. We will add this clarification in the revised version. # Lacking baseline methods > Reviewer comments (Claims And Evidence, Questions For Authors): > 1. [...] Not sure if there could be more baselines to compare. > 2. [...] Are there any other DP-ICL baselines that you can compare PTA with? We agree with these comments. A similar suggestion was also raised by another reviewer (DiNj).
In response, we conducted additional experiments during the rebuttal period using DP-OPT (Hong et al., ICLR 2024), which is conceptually similar to our approach in that it generates differentially private prompts once and reuses them for evaluating test queries. For a detailed discussion, please see our response to Reviewer DiNj under “Missing comparison with related works.” We will include the new experimental results and their interpretation in the revised version of the paper. (Hong et al., ICLR 2024) DP-OPT: Make large language model your privacy-preserving prompt engineer. # Effect of approximation in Equation (12) > Reviewer comments (Claims And Evidence, Theoretical evidence): > 1. The claim in Equation (12) is pretty hand-waving without any analysis of the closeness of the approximation. [...] > 2. Can you provide some analysis on Equation (12) and the effect of having this approximation? This is a good question. As you noted, PTA is based on the following approximation in Equation~(12): $$ \log \frac{p(o = v \mid \theta^*)}{p(o = v \mid \theta)} \approx \log \frac{p(o = v \mid S_{\text{priv}}^{(i)})}{p(o = v \mid S_{\text{pub}})}. \hspace{10pt} (12) $$ We approximate the LHS of Equation (12) since it is intractable to compute directly. This intractability arises from the fact that $\theta$ and $\theta^*$ are not directly observable as they are latent variables, making it impossible to evaluate the likelihoods $ p(o = v \mid \theta^*) $ and $ p(o = v \mid \theta) $ explicitly. To address this, we approximate the LHS using the LLM’s next-token probability distribution conditioned on observable prompts, which are designed to reflect their corresponding latent concepts. To explain how we approximate, we decompose the LHS of Equation (12) as follows: $$ \log \frac{p(o = v \mid \theta^*)}{p(o = v \mid \theta)} = \log \frac{p(o = v \mid \theta^*)}{p(o = v \mid S_{\text{pub}})} + \log \frac{p(o = v \mid S_{\text{pub}})}{p(o = v \mid \theta)}.
$$ First, the first term on the RHS is then approximated by $ \log \frac{p(o = v \mid S_{\text{priv}}^{(i)})}{p(o = v \mid S_{\text{pub}})} $, replacing the intractable $ p(o = v \mid \theta^*) $ with a quantity that can be computed using the LLM prompted by the private prompt $S_{\text{priv}}^{(i)}$. As detailed in Section 4.1, this is justified by the Bayesian interpretation of in-context learning from Xie et al., where such prompting induces a posterior concentrated around $\theta^*$. Second, we omit the second term in RHS to focus on amplifying tokens distinctive to $\theta^*$, without explicitly penalizing competing concepts. This omission is plausible when the second term is small. This occurs when different concepts assign high probability to distinct sets of tokens. In such cases, tokens indicative of $\theta^*$ are also unlikely under both $\theta$ and the reference $p(o = v \mid S_{\text{pub}})$, making the omitted term negligible. To support this in practice, we construct $S_{\text{pub}}$ using instruction-only prompts that avoid concept-specific content. This yields a neutral reference distribution and helps maintain separation across concept token distributions. We will revise the paper to clarify this approximation strategy.
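The amplification described in this rebuttal can be sketched schematically. This is a simplification, not the exact objective of Equation (11): it shifts each log-probability of a public next-token distribution by a weighted log-ratio against the private distribution and renormalizes. The helper name `amplify`, the weight `lam`, and the toy three-token distributions are all illustrative assumptions.

```python
# Schematic sketch of PTA-style token amplification: tokens that are more
# likely under the private prompt than under the public (instruction-only)
# prompt get boosted, then the distribution is renormalized.
import math

def amplify(p_pub, p_priv, lam=0.5):
    logits = {v: math.log(p_pub[v]) + lam * math.log(p_priv[v] / p_pub[v])
              for v in p_pub}
    z = sum(math.exp(l) for l in logits.values())
    return {v: math.exp(l) / z for v, l in logits.items()}

# Token "a" is distinctive to the private (ground-truth) concept.
p_pub = {"a": 0.2, "b": 0.4, "c": 0.4}
p_priv = {"a": 0.6, "b": 0.2, "c": 0.2}
q = amplify(p_pub, p_priv, lam=0.5)
print(q)  # probability of "a" is amplified relative to p_pub
```

With `lam=0.5` this is a (normalized) geometric interpolation between the two distributions; `lam=0` recovers the public distribution and `lam=1` recovers the private one, which is one way to see the role of the KL regularizer discussed earlier in keeping the result close to the original LLM distribution.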
Summary: This manuscript explores Differentially Private In-Context Learning (DP-ICL) to mitigate leakage risks in ICL. The authors first provide a theoretical analysis of a prior work (Tang et al., 2024) using a Bayesian analysis, where Tang et al. studied generating synthetic demonstrations by adding variance-tuned noise to the next-token probability obtained from an LLM prompted with the original demonstrations. Based on the authors' theory, they identify how the added noise to ensure DP affects the LLM’s ability to infer the ground-truth concept. Accordingly, two insights for improving DP-ICL are derived: (i) reducing the vocabulary size lowers the noise-dependent threshold, and (ii) increasing the divergence between concepts, by employing another next-token probability distribution that enlarges the gap between the ground-truth and any other concept. The authors also propose Plausible Token Amplification (PTA) to improve the performance of DP-ICL and empirically verify its effectiveness. Claims And Evidence: The claims made in this submission are supported by convincing evidence. Methods And Evaluation Criteria: The proposed methods indeed make sense for the problem. Theoretical Claims: The proofs for the theoretical claims seem correct. Experimental Designs Or Analyses: The experimental designs and analyses are sound. Supplementary Material: I reviewed the "related works" and "additional experiments" in the supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths** 1. This manuscript is overall well-written and clearly organized. 2. The authors provide theoretical evidence (based on Implicit Bayesian Inference) supporting Tang et al.’s empirical method in Differentially Private In-Context Learning. 3. This paper also introduces a refined method, PTA, for modifying the next-token probability distribution according to their theoretical insights. 4.
Extensive experiments empirically demonstrate the effectiveness of PTA. **Weaknesses** 1. The term "concept" in the introduction needs clearer demonstration. It can improve the readability of the introduction. 2. Missing empirical comparison with:\ Wu et al. Privacy-preserving in-context learning for large language models. ICLR 2024\ Hong et al. DP-OPT: Make large language model your privacy-preserving prompt engineer. ICLR 2024 3. The authors verify their method on relatively small LLMs such as GPT-2 and Llama2-7B. How will PTA perform on modern LLMs such as GPT-4? Other Comments Or Suggestions: I have no more comments or suggestions. Questions For Authors: Please see the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Unclear definition of "concept" > Reviewer comment (Other Strengths And Weaknesses, Weakness): > 1. The term "concept" in the introduction needs clearer demonstration. It can improve the readability of the introduction. Thank you for pointing this out; we had not noticed this readability issue. In our context, a concept refers to the latent rule or semantic mapping that connects inputs to outputs in demonstrations. This underlying concept governs the token transitions in the demonstrations and is implicitly inferred by the language model during In-Context Learning (ICL). In the revised version, we carefully clarify and introduce the term “concept” in Section 1. # Missing comparison with related works > Reviewer comment (Other Strengths And Weaknesses, Weakness): > 1. Missing empirical comparison with: (1) Wu et al. (ICLR 2024) and (2) Hong et al. (ICLR 2024). Thank you for pointing out these relevant works. (1) Wu et al. (ICLR 2024) While their method is indeed related, it directly adds noise to the next-token probability distribution at test time for each classification query. As a result, the noise variance must scale with the number of test queries to ensure a fixed $(\varepsilon,\delta)$-DP. In contrast, once a prompt satisfying $(\varepsilon,\delta)$-DP is generated, the noise variance in our method (and Tang et al.'s) remains fixed regardless of the number of queries, thanks to the post-processing property of DP. This fundamental difference in design makes direct comparison challenging, and thus we did not include their method in our evaluation. (2) Hong et al. (ICLR 2024) We agree that DP-OPT, proposed by Hong et al., is conceptually similar to our approach, as it generates differentially private prompts once and reuses them to evaluate test queries. During the rebuttal period, we conducted an additional empirical comparison using the same Vicuna-7B-v1.5 model and TREC dataset setup as used in their study.
The comparison results are summarized in the table below. To ensure fair comparison, we report both our replicated results and the original values for DP-OPT from Table 3 of their paper, formatted as (replicated / original). This is because our replication was limited by computational constraints and a limited understanding of DP-OPT’s hyperparameters. We followed their appendix settings for $\varepsilon=8$ and extended them to $\varepsilon=1$ using $\epsilon_0 = 0.1$ from their Table 5. Additionally, differences in prompt format remain. Therefore, this comparison may not fully reflect the method’s optimal performance.

| $\varepsilon$ | Method | Accuracy |
|-|-|-|
| 1 | PTA (Ours) | $77.84_{\pm2.83}$ |
| | Tang et al. | $75.60_{\pm2.05}$ |
| | DP-OPT | $47.8_{\pm0.0}$ / N.A. |
| 8 | PTA (Ours) | $76.28_{\pm2.92}$ |
| | Tang et al. | $73.84_{\pm5.44}$ |
| | DP-OPT | $60.76_{\pm1.27}$ / $65.3_{\pm4.3}$ |

We present below our interpretation of the experimental results: - Even under the same evaluation setup, both our proposed PTA and Tang et al.'s DP-ICL achieve high accuracy. This confirms the competitive performance of PTA compared to a strong DP-ICL baseline. To further support the numerical results, we highlight a key behavioral difference between DP-OPT and our approach at $\varepsilon=1$. - As discussed in Section 5.2 of their paper, DP-OPT tends to output only the instruction without demonstrations as the private prompt satisfying the $(\varepsilon,\delta)$-DP guarantee with $\varepsilon=1$. In such cases, the method asymptotically converges to zero-shot prompting, limiting the practical utility of private demonstrations. - In contrast, our method generates in-context demonstrations that, while potentially noisy, remain informative beyond the instruction even with $\varepsilon=1$. This may allow our approach to consistently outperform zero-shot prompting, especially in high-privacy regimes where DP-OPT yields only marginal gains.
We will include these numerical results and their interpretation in the revised paper. # Performance investigation using GPT-4 > Reviewer comment (Other Strengths And Weaknesses, Weakness): > 1. [...] How will the PTA perform on modern LLMs such as GPT4? We believe that applying our method (as well as that of Tang et al.) to models such as GPT-4 accessed via the OpenAI API poses challenges from a privacy evaluation perspective. At this moment, the API does not provide access to the full token probability distribution, instead returning only a limited set of top tokens. Since the selection of these tokens depends on the full (unobservable) distribution, it may lead to unintended privacy leakage. Moreover, as this selection process is a black box, it is practically infeasible to assess its differential privacy guarantees. Given the current API constraints, we consider rigorous privacy evaluation using GPT-4 to be challenging at this time. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. My concerns have been addressed, and I will maintain my rating.
Summary: The paper presents an approximate Bayesian model of DP-ICL (differentially private in-context learning). DP-ICL in general combines a public LLM with a private LLM to generate a number of low-privacy-leakage prompts, then uses those prompts for in-context learning. A previous paper (Tang et al.) used a simple vocabulary restriction method to improve DP-ICL; this paper uses their Bayesian model to explain why vocabulary restriction works, then presents an improved Plausible Token Amplification method which achieves better empirical results.

## Update after rebuttal

I am happy with the authors' rebuttal, and continue to think the paper should be accepted. The authors have said they'll clarify the theorem statements to show that C_delim and G depend on some things, and after that my only reservation is that I expect there is a future cleaner paper that will result in cleaner and better algorithms as well. However, the correct course of action is to accept this paper in hopes it in fact encourages that future cleaner paper.

Claims And Evidence: (Caveat: Unfortunately I caught a cold over the weekend, and as a result have not had time to check the proofs. Therefore, this review is based on the main body of the paper and a quick skim of the proofs.)

# Theoretical evidence

1. The theoretical model seems plausibly correct given the assumptions: it is a reasonable generalisation of an existing Bayesian approximation to non-DP ICL. However, I have to admit I don't find it very elegant! Appendix C.2 has to introduce a whole pile of constants to approximate things about the Hidden Markov Model, so the C_delim "constant" in the main body theorem statement hides a bunch of things which are very far from constant, even in the approximate model. Similarly, G is also dependent on the LLMs involved.
2. Actually the non-constant nature of C_delim and G really does need to be called out in the main text.
When I was reading the main text I thought they really were constants, which is not correct, and so the theorem statements mislead the reader.
3. My guess is that there are cleaner theorems hiding under the surface of the theorems they stated, that might lead to improved methods. In particular, Theorem 2 has a term involving the vocabulary size, but my guess is that this is artificial, in order to present direct evidence for the success of Tang's method. If the prompts are sampled sufficiently closely to the public distribution on a reasonable metric, we should be able to get a bound that doesn't depend on the vocabulary size. The vocabulary size would play a role, but would not as directly show up in the theorem statement.
4. This vocabulary size appearance then contaminates the empirical method, which combines *both* vocabulary restriction and PTA. My guess is that there is a simpler, better method available if one more naturally combines the two features.

# Empirical evidence

1. The empirical evaluation is good, and does support the improvement of PTA over the baseline.
2. However, I would have preferred to see a graph showing the degradation in accuracy as the privacy budget is tightened: currently only two points along that curve are shown.

# Summary

Overall, I am happy with the paper! The fact that I think there is a better paper lurking under this one (with cleaner theory and empirics) still means this paper has good contributions, and I believe it should be accepted.

Methods And Evaluation Criteria: Yes, I am happy with the datasets chosen in section 5.

Theoretical Claims: See the Claims And Evidence section above.

Experimental Designs Or Analyses: The experimental designs are simple. The main issue with any DP method is that bugs could easily break the guarantees and result in fake, better results, but this is hard to rule out with certainty.

Supplementary Material: No.
Relation To Broader Scientific Literature: Due to catching a cold immediately prior to the deadline, I don't have capacity to do a literature search, and unfortunately I don't have significant existing knowledge of the DP literature. Thus, while the paper's internal discussion of the literature appears good, I am unable to check whether important things are missed. Essential References Not Discussed: Due to catching a cold immediately prior to the deadline, I don't have capacity to do a literature search, and unfortunately I don't have significant existing knowledge of the DP literature. Thus, while the paper's internal discussion of the literature appears good, I am unable to check whether important things are missed. Other Strengths And Weaknesses: Discussed above, but to summarise: both the theorems presented and the final method, while useful contributions, appear non-natural. I expect there are better versions of both theorems and methods, in particular by merging vocabulary restriction natively into some metric comparing public and private distributions. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

# Concerns in bounding constants

> Reviewer comments (Claims And Evidence, Theoretical evidence)
> 1. [...] introduce a whole pile of constants [...], so the C\_delim "constant" [...] hides a bunch of things which are very far from constant, [...]. Similarly, G is also dependent on the LLMs involved.
> 2. [...] non-constant nature of C\_delim and G really does need to be called out in the main text. [...]

This is a good question. We address $G(\sigma)$ and $C_{delim}$ separately.

First, $G(\sigma)$ is a function, not a constant, as defined in Line 1339 (Appendix C.7), though this was not clearly stated in the main text. We did not intend to hide this point, so we will include its definition and clarify its dependency on the LLM's next-token probability distribution in the revised paper. Notably, introducing the bound $G(\sigma)$ in Line 1339 is essential for a **tight** analysis of the noise error in Theorem 2. Under general settings without assuming any specific form of the LLM's next-token probability distribution, a tight analysis using $G(\sigma)$ in Line 1339 is established thanks to the Bretagnolle–Huber bound (Lemma 4, Appendix C.3).

Next, we clarify that the term $C_{delim}$, which is defined in Line 1009, is also a function, not a constant. It originated from the formulation in Xie et al. (ICLR 2022) and depends on the ground-truth concept $\theta^*$ and the other concept $\theta$. We will also clarify this dependency in the revised paper.

# Refinement of Theorem

> Reviewer comments (Claims And Evidence, Theoretical evidence):
> 1. [...] cleaner theorems hiding under the surface the theorems [...]. Theorem 2 has a term involving the vocabulary size, [...] this is artificial in order to present direct evidence for the success of Tang's method.
> 2. [...] If the prompts are sampled sufficiently closely to the public distribution [...], we should be able to get a bound that doesn't depend on the vocabulary size. [...]
Thank you for the insightful comment. We agree that more elegant theorems may arise under additional assumptions, e.g., when prompts are sampled closely from a public distribution, and this is a promising direction for future work. However, Theorem 2 is designed to hold under general settings without assuming a specific form of the next-token probability distribution. In practice, prompts often contain private demonstrations that may diverge from public distributions, making such assumptions unreliable.

To address your concern, we clarify why the $\log |V|$ term appears in Theorem 2. Our analysis (Section 3) examines how the noise impacts latent concept inference in DP-ICL. Since next-token probability distributions of LLMs can vary widely, especially with private prompts, we aim to derive a general bound that holds without assuming any specific form of the distribution. Applying Csiszár’s inequality (Lemma 3, Appendix C.3) naturally introduces the $\log |V|$ term as a bound on the maximum distributional shift. Further, as discussed in the previous section (Concerns in bounding constants), $G(\sigma)$ also results from a tight analysis. Together, $G(\sigma)\log |V|$ forms a general upper bound on the noise impact. We will clarify this motivation in the revised version.

# Refinement of empirical method

> Reviewer comments (Claims And Evidence, Empirical evidence, Other Strengths And Weaknesses):
> 1. [...] there is a simpler, better method available if one more naturally combines the two features, vocabulary restriction and PTA.
> 2. [...] by merging vocabulary restriction natively into some metric comparing public and private distributions.

This is an insightful point. We acknowledge that metrics comparing public and private distributions can directly inform vocabulary restriction. Our goal, however, was to theoretically support Tang et al.’s empirical vocabulary restriction and refine it through PTA.
Building on Theorem 2, we adopted a modular design to isolate each component’s role: vocabulary restriction narrows the output space to reduce the noise impact, while PTA amplifies token probabilities to increase the divergence between the ground-truth and other concepts. We appreciate the suggestion and see it as a promising future direction.

# Empirical evaluation

> Reviewer comment (Claims And Evidence, Empirical evidence):
> 1. [...] to see a graph showing the degradation in accuracy as the privacy budget is tightened [...]

Due to space constraints, we showed results for $\varepsilon=1$ and $8$ in the main paper. We agree broader coverage is important. Table 8 (Appendix F.4) includes results for $\varepsilon=1,2,4,8,\infty$ (following Tang et al., ICLR 2024), giving a clearer trend of how accuracy changes as privacy strengthens. Below is a summary for PTA on GINC:

|$\varepsilon=1$|$\varepsilon=2$|$\varepsilon=4$|$\varepsilon=8$|
|-|-|-|-|
|$94.55_{\pm 1.20}$|$95.17_{\pm 0.82}$|$96.76_{\pm 0.91}$|$96.85_{\pm 0.97}$|

We will include accuracy curves in the revision to better illustrate this trend.
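As a sanity check on the trend above, a minimal pure-Python sketch (mean accuracies copied from the table for PTA on GINC) that verifies the monotonic degradation as the privacy budget tightens:

```python
# Mean accuracy of PTA on GINC at each privacy budget (from the table above).
eps_to_acc = {1: 94.55, 2: 95.17, 4: 96.76, 8: 96.85}

# Smaller epsilon means stronger privacy, so accuracy should be
# non-decreasing as epsilon grows.
eps_sorted = sorted(eps_to_acc)
accs = [eps_to_acc[e] for e in eps_sorted]
assert all(a <= b for a, b in zip(accs, accs[1:]))

# Accuracy drop (in points) relative to the loosest budget, epsilon = 8.
drops = {e: round(eps_to_acc[8] - eps_to_acc[e], 2) for e in eps_sorted}
print(drops)  # {1: 2.3, 2: 1.68, 4: 0.09, 8: 0.0}
```

The same four points are what the promised accuracy curve would plot.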
Gamma Distribution PCA-Enhanced Feature Learning for Angle-Robust SAR Target Recognition
Accept (poster)
Summary: This paper proposes a Gamma-Distribution Principal Component Analysis (ΓPCA) method for angle-robust SAR target recognition. The key idea is to integrate the Gamma distribution into PCA to account for the statistical properties of SAR data, thereby enhancing feature extraction across varying observation angles. The authors claim that ΓPCA-derived convolution kernels can capture angle-invariant features without adding additional computational burden. The method is evaluated on the MSTAR dataset with ResNet and ViT backbones. Claims And Evidence: no Methods And Evaluation Criteria: The study is based entirely on MSTAR, which, while widely used, is a relatively small dataset that may not reflect real-world SAR conditions. The authors compare ΓPCA with ViT, ResNet, and a few SOTA models, but crucial SAR-specific baselines (e.g., UIU-Net, ASC-based models) are missing. No statistical significance tests (e.g., t-tests, Wilcoxon tests) are performed to validate improvements. Theoretical Claims: The paper extends PCA to a Gamma-distributed setting, claiming it better represents SAR image statistics. The derivation of ΓPCA is mathematically rigorous, but no theoretical proof is provided that ΓPCA is better than traditional PCA or other low-rank approximations. The study assumes that angle variation can be mitigated by low-rank feature extraction, but does not establish a formal theoretical link between angle-robustness and Gamma-PCA projections. Experimental Designs Or Analyses: The paper compares against ViT, ResNet, and some recent models but ignores SAR-specific methods such as UIU-Net or ASC-based approaches. The study shows ΓPCA with different architectures but does not isolate the contributions of different components (e.g., Gamma assumption vs. PCA). The study does not explore the effect of different kernel sizes, training strategies, or dataset variations. 
Real-World Relevance: No analysis is provided on how the method performs under different noise conditions, sensor types, or imaging artifacts. Supplementary Material: no Relation To Broader Scientific Literature: The paper references many relevant SAR and deep learning studies but fails to discuss critical related work in: Statistical modeling of SAR images (Weibull, K-distribution, etc.) Deep learning for SAR ATR (UIU-Net, few-shot SAR methods) Angle-invariant feature learning in computer vision Essential References Not Discussed: no Other Strengths And Weaknesses: no more. Other Comments Or Suggestions: no more Questions For Authors: no more Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s valuable comments. We will enhance the related work and include more experimental results in our final version. Below are our detailed responses to each point.

> Q1. MSTAR is small.

**A1:** We constructed a new dataset based on SAR-AIRcraft-1.0. A detailed description of the dataset and the experimental results can be found in **Answer1** for **Reviewer Otus**.

> Q2. SAR-specific baselines.

**A2:** We incorporate the ASC-based method MSNet-PIHA [R10] into our experiments. The comparison results are shown below.

|Methods|A-R Test|AD-R Test|Overall Acc. (%)|
|---|---|---|---|
|MSNet-PIHA|68.08±0.03|55.75±0.06|64.20±0.03|
|ΓPCA-Resnet|73.79±0.02|52.96±0.05|67.23±0.01|
|ΓPCA-ViT|78.48±0.01|70.67±0.16|76.02±0.01|

MSNet-PIHA performs well when trained on diverse azimuth angles [R10] but struggles with limited-angle data, which can lead to overfitting and suboptimal predictions. While ASC information enhances feature extraction with limited data, severe azimuth constraints remain challenging. In contrast, ΓPCA demonstrates better angle robustness.

[R10] Huang, Z., et al. Physics inspired hybrid attention for SAR target recognition. *ISPRS*, 2024.

> Q3. Statistical significance tests.

**A3:** We conduct significance tests by repeating all experiments in our paper three times and reporting the mean and variance to reduce the influence of randomness. Part of the updated experimental results is presented below.

Table R1. Partially updated recognition results of Table 1.

|Methods|A-R Test|AD-R Test|Overall Acc. (%)|
|---|---|---|---|
|Resnet|72.57±0.25|43.95±0.01|63.62±0.14|
|ViT|78.12±0.04|60.11±0.24|72.45±0.04|
|ΓPCA-Resnet|73.79±0.02|52.96±0.05|67.23±0.01|
|ΓPCA-ViT|78.48±0.01|70.67±0.16|76.02±0.01|

More results can be found at: https://anonymous.4open.science/r/ICML2025-E1F8.

> Q4. Theoretical proof that ΓPCA is better than PCA.
**A4:** Previous work in [R11] has proven that generalized PCA based on exponential family distributions theoretically outperforms traditional PCA. As a variant based on the Gamma distribution, our ΓPCA inherits this theoretical advantage.

[R11] Landgraf, A. J., et al. Generalized principal component analysis: Projection of saturated model parameters. *Technometrics*, 2020.

> Q5. Theoretical link between angle-robustness and ΓPCA projections.

**A5:** Under different observation angles, the statistics of the same class of SAR targets are similar. Based on this characteristic, ΓPCA assumes that the SAR target image follows the Gamma distribution and then derives the ΓPCA projection, which extracts the principal component information, i.e., the low-rank information, of these SAR targets. The projection matrix is then used to construct the convolution kernel to extract SAR target features, which are characterized by low sensitivity to angle.

> Q6. Comparison of different components.

**A6:** We provide an isolated comparison of different components (see Table 3, page 8, PCA-ViT vs. ΓPCA-ViT). As demonstrated in Table 3, ΓPCA consistently outperforms conventional PCA with the ViT backbone for SAR data. For example, with a kernel size of 17, PCA-ViT and ΓPCA-ViT achieve accuracies of 73.52\% and 76.01\%, respectively, demonstrating its effectiveness in non-Gaussian SAR data analysis.

> Q7. Effects of different settings.

**A7:** Kernel size analysis is presented in Sec. 4.3 (Page 8, Left Column, Lines 410–434), with quantitative comparisons in Table 3. To isolate the impact of ΓPCA, all experiments use the same training settings (optimizer and learning rate: the default configurations of the original backbones; batch size: 32), ensuring a fair comparison within the scope of the paper on fundamental ΓPCA theory for SAR data.

> Q8. Real-World Relevance.

**A8:** We conducted experiments under various noise conditions, focusing on multiplicative noise, i.e.
speckle, the most common noise type in SAR images. Noise levels are quantified by the Equivalent Number of Looks (ENL), defined as $\mu^2/\sigma^2$, where $\mu$ and $\sigma$ are the mean and standard deviation of pixel values. A higher ENL value indicates lower noise.

|ENL|ViT|ΓPCA+ViT|
|---|---|---|
|Noiseless|70.61|76.01|
|20|51.51|69.60|
|10|47.22|62.19|
|5|45.30|56.23|
|2|41.20|48.09|

More results are available at: https://anonymous.4open.science/r/ICML2025-E1F8.

These results show the anti-noise performance of ΓPCA, attributed to its ability to extract principal components while filtering out noise-dominated ones. Our study primarily addresses the challenge of angle-induced performance degradation in SAR ATR. Extensive experiments validate the dual robustness of ΓPCA to angular variations and speckle noise. While sensor types and imaging artifacts are also important for SAR ATR, these aspects fall beyond the scope of our current work, as they pose distinct research challenges. We will further clarify the influence of these two aspects in the final version.

---

Rebuttal Comment 1.1: Comment: Despite acknowledging the authors’ extensive efforts to address reviewer concerns, several major issues remain unresolved. In particular:

Justification of the Gamma Distribution: The authors do not convincingly establish the theoretical link between the Gamma distribution assumption and the underlying SAR imaging physics. Why is the Gamma distribution particularly well-suited for modeling SAR images, and how does this choice offer clear advantages over traditional or even more modern methods? The explanation provided is vague and does not substantiate the claim that this approach is better tailored to the specific noise characteristics and imaging artifacts in SAR data.

Reinventing an Established Method: PCA is a long-established technique and the improvement via a Gamma-distribution-based variant is, at best, incremental.
The authors have not provided compelling evidence that the proposed ΓPCA adds sufficient value over conventional PCA or other advanced dimensionality reduction methods. Given that numerous state-of-the-art approaches with robust theoretical foundations demonstrate superior performance, the reliance on a modified PCA framework appears to be a regression rather than an advancement. Comparative Performance with Recent State-of-the-Art: Although the rebuttal includes additional experimental results, the reported accuracies still do not convincingly outperform many recent methods. There is a lack of discussion regarding why a fundamental method like PCA, even in a generalized Gamma form, should be preferred over contemporary approaches that often incorporate more sophisticated theoretical developments and yield higher recognition performance. About angle information in SAR recognition: The focus on angle-induced variations in SAR image recognition appears overstated. While the manuscript emphasizes the importance of angle robustness, identifying SAR images fundamentally falls under the broader umbrella of image recognition, where standard data augmentation techniques can effectively mitigate variations introduced by different viewing angles. The authors have not provided compelling evidence that angle variations are a uniquely critical challenge for SAR recognition that cannot be addressed by simpler and more proven methods. Furthermore, the paper does not include any experimental validation to compare the proposed ΓPCA method against data augmentation baselines. Without demonstrating that simple augmentation strategies fail to yield comparable or superior results, the motivation for developing a Gamma-distribution-based variant of PCA becomes questionable, especially since PCA is a well-established method and its modification in this context appears incremental. 
Overall, while the authors have made a considerable effort to address specific reviewer points, the manuscript does not sufficiently justify the theoretical and practical merits of ΓPCA within the context of SAR target recognition. Given these unresolved issues, especially the unclear motivation for adopting the Gamma distribution and the reliance on an aging methodology in the face of modern alternatives, I remain inclined to recommend rejection.
Summary: The paper proposes a Gamma-Distribution Principal Component Analysis ($\Gamma$PCA) model to enhance the robustness of Synthetic Aperture Radar (SAR) target recognition against angle variations. The key idea is to leverage the Gamma distribution to extract low-rank features that are invariant to changes in azimuth and depression angles. The authors derive a consistency projection matrix from $\Gamma$PCA to construct convolutional kernels that capture angle-insensitive information. The proposed method is integrated into deep learning backbones (ResNet and ViT) and validated on the MSTAR dataset, demonstrating improved robustness and performance compared to baseline models.

Claims And Evidence:
* The authors take the non-Gaussian nature of SAR into consideration and develop the $\Gamma$PCA model accordingly.
* The experimental results on the MSTAR dataset demonstrate that $\Gamma$PCA can improve the robustness of ResNet and ViT models against angle variations.

Methods And Evaluation Criteria: The evaluation criteria look good.

Theoretical Claims: There is no proof of theoretical claims to be checked.

Experimental Designs Or Analyses:
* The authors conduct experiments on the MSTAR dataset to evaluate the azimuth robustness and azimuth & depression robustness of the proposed method.
* Experiments with state-of-the-art models (e.g., ViT and ResNet) further validate the effectiveness of the proposed method.

Supplementary Material: Supplementary material is not reviewed.

Relation To Broader Scientific Literature: The key contributions are not related to the broader scientific literature.

Essential References Not Discussed: Based on the information provided, there are no obvious essential references missing.

Other Strengths And Weaknesses: Strengths:
* The paper presents a novel approach to addressing a significant challenge in SAR target recognition: robustness to angle variations.
* The method is demonstrated to be effective across different deep learning architectures (ResNet and ViT), highlighting its generalizability. Weaknesses: * The experimental validation is limited to a single dataset (MSTAR), which may restrict the generalizability of the findings. Other Comments Or Suggestions: * Line 143, the formulation lacks ending period. * Figure 4 can be lifted to former pages. * Figures 5 and 6, the same label repeats 6 times, please simplify it. Also, the font can be larger. Questions For Authors: There is no further questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > Comment1. The experimental validation is limited to a single dataset (MSTAR). **Answer1:** Thanks for pointing out this important concern. We acknowledge this limitation. Unlike optical imagery, SAR data is challenging to obtain at scale due to sensor costs, operational constraints, and the factor of security sensitivities. Therefore, most studies [R3-R6] rely solely on MSTAR for evaluation. To enhance experimental validation, we construct a dataset based on SAR-AIRcraft-1.0, a widely used public SAR aircraft detection dataset. Compared to MSTAR, SAR-AIRcraft-1.0 offers richer data diversity and more complex operational scenarios. We extract individual targets from the original imagery to create a dedicated SAR ATR dataset. Below, we provide a detailed description of its composition. | Class | A220 | A320/321 | A330 | ARJ21 | Boeing737 | Boeing787 | other | all | | ------ | ---- | -------- | ---- | ----- | --------- | --------- | ----- | ---- | | Number | 2065 | 939 | 290 | 713 | 1495 | 1677 | 2041 | 9220 | Since SAR-AIRcraft-1.0 lacks diverse and separable imaging angles [R7-R9], a standard target recognition strategy and conducted experiments are implemented. Specifically, 80% of the data was used for training, and the remaining 20% for testing. The experimental results are outlined below: | Methods | Overall Acc. (%) | | -------------- | ---------------- | | Resnet101 | 97.33±0.02 | | ΓPCA+Resnet101 | _**98.67±0.01**_ | | ViT-B/16 | 97.05±0.03 | | ΓPCA+ViT-B/16 | _**98.12±0.02**_ | | Swin-B | 97.41±0.01 | | ΓPCA+Swin-B | _**98.32±0.01**_ | Compared to the MSTAR dataset, SAR-AIRcraft-1.0 is lack of imaging angle annotations, and thus the angle-related constraints are eliminated in fact. Consequently, in this experiment on SAR-AIRcraft-1.0, most of the evaluated networks exhibit superior performance. 
Notably, existing mainstream networks incorporating our ΓPCA consistently outperform their original backbones, even under this less constrained evaluation scenario. These results validate the generalizability of our method across diverse data characteristics and scenarios.

**Reference:**

[R3] Huang, Z., et al. Physics inspired hybrid attention for SAR target recognition. *ISPRS Journal of Photogrammetry and Remote Sensing*, 2024.

[R4] Zhang, L., et al. Optimal azimuth angle selection for limited SAR vehicle target recognition. *International Journal of Applied Earth Observation and Geoinformation*, 2024.

[R5] Wang, R., et al. MIGA-Net: Multi-view image information learning based on graph attention network for SAR target recognition. *IEEE Transactions on Circuits and Systems for Video Technology*, 2024.

[R6] Li, W., et al. Hierarchical disentanglement-alignment network for robust SAR vehicle recognition. *IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing*, 2023.

[R7] Zhirui, W., et al. SAR-AIRcraft-1.0: High-resolution SAR Aircraft Detection and Recognition Dataset. *Journal of Radars*, 2023.

[R8] Huang, B., et al. Scattering Enhancement and Feature Fusion Network for Aircraft Detection in SAR Images. *IEEE Transactions on Circuits and Systems for Video Technology*, 2024.

[R9] Zhou, J., et al. DiffDet4SAR: Diffusion-based aircraft target detection network for SAR images. *IEEE Geoscience and Remote Sensing Letters*, 2024.

---

> Suggestions and writing issues:

**Answer2:** Thank you for your careful review and valuable suggestions. We will thoroughly proofread and incorporate these revisions into the final version of the paper.
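The dataset composition and 80/20 protocol described in Answer 1 above can be sanity-checked with a small sketch. This is pure Python; the class counts are copied from the table, while the per-class split sizes are only illustrative, since the rebuttal does not state whether the split is stratified:

```python
# Class counts for the SAR ATR dataset built from SAR-AIRcraft-1.0
# (copied from the table in Answer 1).
counts = {
    "A220": 2065, "A320/321": 939, "A330": 290, "ARJ21": 713,
    "Boeing737": 1495, "Boeing787": 1677, "other": 2041,
}
assert sum(counts.values()) == 9220  # matches the reported total

# Illustrative per-class (train, test) sizes under an assumed stratified
# 80/20 split; the rebuttal only states the overall 80%/20% proportions.
split = {c: (int(0.8 * n), n - int(0.8 * n)) for c, n in counts.items()}
print(split["A330"])  # (232, 58)
```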
Summary: This paper proposes Gamma-PCA, a Gamma-distribution Principal Component Analysis model, to enhance angle-robust SAR target recognition by leveraging SAR's non-Gaussian statistics. The method derives consistent projection kernels to capture angle-invariant features without parameter updates, seamlessly integrating with CNN/Transformer backbones. Experiments on MSTAR demonstrate its effectiveness in mitigating performance degradation under significant azimuth/depression angle variations while maintaining computational efficiency. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: see the comments. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your positive feedback on our work and are truly grateful for the time and effort you have dedicated to reviewing our manuscript. Should you have any additional comments or suggestions, we would be delighted to engage in further discussion and improvement. --- Rebuttal Comment 1.1: Comment: I have no more comments about the manuscript.
Summary: This paper addresses a key challenge in SAR target recognition: the sensitivity of deep learning models to variations in azimuth and depression angles, which cause significant shifts in scattering characteristics. The authors propose a Gamma-Distribution Principal Component Analysis (ΓPCA) model to extract angle-invariant features by leveraging the statistical properties of SAR data. ΓPCA derives consistency convolution kernels without requiring parameter updates, thus adding no computational burden. The method is evaluated on the MSTAR dataset using ResNet and ViT backbones, demonstrating improved robustness to angle-induced distributional discrepancies. The key contributions include: (1) a novel ΓPCA framework tailored to SAR’s non-Gaussian statistics, (2) angle-insensitive feature extraction via a fixed projection matrix, and (3) seamless integration into existing architectures. While the approach shows promise, its generalizability to more diverse SAR targets and complex clutter scenarios remains to be further validated. The work stands out for its principled statistical modeling and parameter-free design, offering a potential advance over ad-hoc multiview fusion or data augmentation techniques. Claims And Evidence: The paper makes several strong claims, most of which are supported by empirical evidence, but some aspects could benefit from further clarification or validation. Methods And Evaluation Criteria: To verify the effectiveness of the proposed method, the authors conducted a variety of robustness experiments on the MSTAR benchmark dataset. Experimental results show that the ΓPCA model can significantly improve the performance of existing models when facing angle changes. In addition, the ΓPCA model does not require parameter updates, so it does not bring additional computational burden to the network. 
The proposed ΓPCA method and evaluation criteria (i.e., the MSTAR benchmark dataset) are meaningful for addressing the angle variation problem in SAR target recognition. The MSTAR dataset contains SAR images at different angles, which provides a suitable testing environment for evaluating the performance of the model at different observation angles. Therefore, it can be considered that the proposed model and evaluation method are designed for this problem or application and are reasonable. Theoretical Claims: N/A, there is no theoretical claims in the paper. Experimental Designs Or Analyses: The experimental analyses are sound. Supplementary Material: The supplementary material provides more information about the experimental setup and dataset, and provides some derivations. Relation To Broader Scientific Literature: The contribution of this paper is to propose a new feature extraction method that is particularly suitable for SAR data and can improve the robustness of the model in the face of angle changes. These contributions provide new perspectives and solutions compared to existing methods in the literature, especially in dealing with the non-Gaussian properties and angle changes of SAR data. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: Strong motivation: Tackles a well-known but underexplored problem (angle sensitivity) in SAR ATR. Novelty: ΓPCA extends PCA to Gamma distributions, aligning with SAR physics. Practicality: No added parameters or training overhead. Weaknesses: Dependence on Gamma distribution assumptions (may not hold for all targets/clutter). Limited discussion on computational efficiency vs. performance trade-offs. Other Comments Or Suggestions: N/A Questions For Authors: Please refer to the entries above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Comment1. Dependence on Gamma distribution assumptions (may not hold for all targets/clutter). **Answer1:** We appreciate this insightful observation regarding the distributional assumption. To provide clarification: 1. Theoretical and empirical validation: The applicability of the Gamma distribution to SAR image statistics is theoretically grounded and empirically validated, as evidenced by [R1, R2]. These works demonstrate its effectiveness in capturing the multiplicative speckle noise and scattering characteristics inherent to SAR systems. 2. Scope of the Gamma assumption: Our method adopts the Gamma distribution to model the reflection intensity of the entire SAR image (encompassing targets and clutter collectively). Of course, other statistical distributions such as the generalized Gaussian, Fisher, etc. could be used, but these distributions are complex, and the corresponding parameter estimation and derivation of the matrix $U$ are challenging to realize in practice. 3. Robustness of approach: By operating at this holistic statistical level, our method maintains robustness across diverse imaging scenarios, even when local target/clutter distributions vary. This aligns with established practices in SAR image analysis, where system-level statistical modeling often supersedes component-specific assumptions. **Reference:** [R1] Li, H. C., et al. On the empirical-statistical modeling of SAR images with generalized gamma distribution. *IEEE Journal of Selected Topics in Signal Processing*, 2011. [R2] Nascimento, A. D. C., et al. Compound truncated Poisson gamma distribution for understanding multimodal SAR intensities. *Journal of Applied Statistics*, 2023. ---- > Comment2. Limited discussion on computational efficiency vs. performance trade-offs. **Answer2:** Thanks for your detailed comment. Below, we analyze the computational complexity of our proposed ΓPCA method and compare it with standard PCA. 
The primary computational cost of ΓPCA arises from the derivation using the MM (majorization-minimization) algorithm. 1) Update the mean vector $\mu^{(t+1)}$ by equation (13) in our manuscript: The update involves matrix multiplications and a linear system solve. Dominant operations include: - $O(d^2k)$ for $U^{(t)}(U^{(t)})^T$ - $O(nd^2)$ for matrix multiplication of $H(·)(·)^T$ Here, $n$ is the number of samples, $d$ is the original feature dimension, and $k$ is the reduced dimension. Therefore, the total complexity for $\mu^{(t+1)}$ is $O(nd^2+d^2k)$. 2) Calculate the deviance $M(\boldsymbol{\Theta}|\boldsymbol{\Theta}^{(t)})$ by equation (27): Computes the Frobenius norm of an $n \times d$ matrix after rank-$k$ projections. Dominant terms: - $O(ndk)$ for matrix products of $\mathrm{H}+1_n\mu^{(t)}$ with $U^{(t)}$. - $O(d^{2}k)$ for $U^{(t)}(U^{(t)})^T$ operations. The total complexity for $M(\boldsymbol{\Theta}|\boldsymbol{\Theta}^{(t)})$ is $O(ndk+d^2k)$. 3) Calculate the matrix $F^{(t)}$ by equation (30) and update $U^{(t+1)}$ to the first $k$ eigenvectors of $F^{(t)}$: - $O(nd^{2})$ for the matrix product $(·)^{T}Q^{(t)}(·)$. - $O(d^{3})$ for singular value decomposition (SVD). Therefore, the total complexity for $U^{(t+1)}$ is $O(nd^{2}+d^3)$. From the above discussion, the overall computational complexity of the ΓPCA algorithm is approximately $O(nd^2+d^2k)+O(ndk+d^2k)+O(nd^2+d^3)\approx O(nd^2+d^2k+ndk+d^3).$ In practical applications, parameters typically exhibit the following relationships: $n \gg d \gg k$. Therefore, the overall computational complexity of the ΓPCA algorithm is $O(nd^2+d^3)$. In contrast, the computational complexity of the standard PCA method is mainly determined by two key operations: 1. Computation of the covariance matrix: $\Sigma=\frac{1}{n}\mathbf{X}^T\mathbf{X}$, which has a computational complexity of $O(nd^2)$. 2. 
Eigenvalue decomposition of the covariance matrix: $\Sigma=\mathrm{U}\Lambda\mathrm{U}^T$, with a computational complexity of $O(d^3)$, equivalent to the SVD operation in ΓPCA. Here, $\text{X}\in\mathbb{R}^{n\times d}$ is the data matrix, $U$ is the eigenvector matrix, and $\Lambda$ is the eigenvalue diagonal matrix. Consequently, the overall computational complexity of standard PCA is approximately $O(nd^2+d^3)$. Under the condition $n \gg d \gg k$, the computational complexity of ΓPCA is therefore identical to that of standard PCA. The overall computational cost of our method is on the same order of magnitude as standard PCA, making the new algorithm computationally acceptable.
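The two dominant operations of standard PCA referenced in this complexity comparison can be made concrete in a few lines. The following is an illustrative numpy sketch (names and shapes are ours, not taken from the paper's code), showing where the $O(nd^2)$ covariance computation and the $O(d^3)$ eigendecomposition occur:

```python
import numpy as np

# Illustrative sketch (not the paper's implementation) of standard PCA,
# making the two dominant costs explicit: forming the covariance matrix
# costs O(n d^2), and its eigendecomposition costs O(d^3).
def standard_pca(X, k):
    Xc = X - X.mean(axis=0)              # center the data
    Sigma = (Xc.T @ Xc) / len(Xc)        # covariance matrix: O(n d^2)
    eigvals, U = np.linalg.eigh(Sigma)   # eigendecomposition: O(d^3)
    top_k = U[:, ::-1][:, :k]            # eigh is ascending; take top-k eigenvectors
    return Xc @ top_k                    # project to k dimensions

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))           # n = 500 samples, d = 16 features
Z = standard_pca(X, k=3)
print(Z.shape)
```

Since the projection uses eigenvectors of the covariance matrix, the resulting components are uncorrelated, which is a quick sanity check on the sketch.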
Simplifying DINO via Coding Rate Regularization
Accept (poster)
Summary: The authors proposed the coding rate regularization technique to improve the training method of DINO and DINOv2, enhancing their performance. Claims And Evidence: 2-1. The careful review of the original DINO and DINOv2 provides an intuitive and well-explained reasoning for why collapse occurs. 2-2. It is impressive that a method for effectively selecting the necessary hyperparameters for the proposed regularization is presented, along with a well-supported justification. Methods And Evaluation Criteria: 3-1. As stated in the title, it is clear that simplifying DINO is the main focus of the paper. However, the proposed coding rate regularization seems applicable to other representation learning methods that use contrastive learning. Are there any experimental results demonstrating this? Theoretical Claims: 4-1. The proof provided in the Appendix has been reviewed, and no issues were found. Experimental Designs Or Analyses: 5-1. The intention to improve upon DINO and DINOv2 is clear, but why is there no comparison with other self-supervised learning methods? 5-2. Is there a reason why few-shot learning performance, which is commonly evaluated in most representation learning papers, is not included? Supplementary Material: 6-1. The proof provided in the Appendix has been roughly reviewed, and the section detailing the necessary hyperparameters for applying the proposed method has been examined. Relation To Broader Scientific Literature: 7-1. Self-supervised representation learning is an actively researched field. While various methods have been proposed, representation collapse remains a critical issue that needs to be addressed. This work provides a simple yet effective solution to this problem. Essential References Not Discussed: 8-1. There do not appear to be any missing key references. Other Strengths And Weaknesses: 10W-1. How does the actual regularization value change? Is there an analysis comparing cases where collapse occurs and where it does not? 
10W-2. Is there a performance comparison when applying other regularization methods? Other Comments Or Suggestions: I have no further comments. Questions For Authors: Please refer to the comments provided in Questions 2-11. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer ubYD, We deeply appreciate your insightful review and are pleased to see that you find our work intuitive to follow, and well-justified by extensive empirical studies. Here we attempt to address the concerns you have raised in the review. 3-1: This is a great suggestion. It is possible to apply coding rate to other representation learning methods, as it is a principled measure of non-collapse (we provide a sketch of theoretical justification in our response for Reviewer 5gcy). As you rightfully point out, we focus on simplifying DINO and DINOv2 in this work as they are widely used and attain SOTA performance. The study of extending the coding rate to other learning frameworks is interesting, but out of scope for this paper and we leave it for future work. 5-1: Thanks for your comment. As discussed in the previous point, this work focuses on the DINO family of models as they are very widely used and typically best-performing across various downstream tasks. In ImageNet classification results, we have also included additional baselines (e.g. MoCov3) for reference. With that said, we are happy to incorporate more baselines in the final version of the paper. 5-2: We would like to respectfully note that few-shot learning is not really necessary here, since we already obtain linearly separable features from DINO/SimDINO training (demonstrated by the superior $k$-NN performance). Due to this property, few-shot learning was not evaluated in the original DINOv2 paper. We hope this helps clarify and resolve your concern. 10-W1: Thank you for your question. We can attach training plots in the final version of the paper. As long as the balancing coefficient $\gamma$ is selected appropriately, the coding rate value will steadily increase, meaning that the features are spread out without collapse. If it’s too small, then the feature alignment loss will dominate and lead to collapse (coding rate approaches 0). 
If it’s too large, the opposite happens, where the coding rate dominates. In this case, the features don’t collapse but feature alignment between global and local views is no longer enforced. We will follow your suggestion and put these analyses in the paper. 10-W2: We appreciate your suggestion. In our preliminary test, using regularization such as direct contrastive loss leads to worse performance than coding rate regularization. We will run and include additional experiments with other regularization methods in the revised version of the paper. We are grateful to your suggestions and comments, which will no doubt improve the quality of our work. Please let us know if you have further questions or concerns, and we are happy to address them. --- Rebuttal Comment 1.1: Comment: Thank you for taking the time to address the questions. While I appreciate the effort to clarify the contributions, I believe that my concerns regarding the proposed regularization—one of the core aspects of the paper—remain insufficiently addressed. As such, I will need to revise my initial score. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your feedback, but we are not sure what aspects of your concerns regarding our proposed regularization are not adequately addressed. Nevertheless, we will try our best to further clarify and hopefully resolve your concerns. > Comparison with other regularization methods We greatly appreciate your suggestions and we have since tested on three different regularization methods to compare with our proposed coding rate regularization. The following results are also provided in our response to Reviewer nrJd. Concretely, we fix other settings to be the same as SimDINOv2 on ViT-L/16 and only vary the choice of the regularizer applied on the student features of global views for a fair comparison. We report their specific formulation and KNN performance as follows. - the vanilla contrastive objective (i.e. 
explicitly repelling negative samples and attracting positive ones) as in SimCLR [1]. Compared to our result, this setting results in a performance decrease of about 2 points (79.4 vs 81.1). - the uniform loss proposed in [2] that encourages the representation to be uniform on the unit sphere. This setting leads to a performance decrease of about 4 points (77.2 vs 81.1). - the Barlow Twins loss proposed in [3] that penalizes off-diagonal terms while promoting the on-diagonal terms on the covariance matrix of learned representations. This setting causes NaN in our experiments so far. It might be related to the sensitivity of the coefficient within the loss (as it needs to balance the off-diagonal and on-diagonal terms) and we are still investigating. > Evolution of coding rate term value We hope our original response addresses this specific concern. As we said previously, we will put training plots that show the evolution of our loss function values during training in different settings in our camera-ready version. Also, we will be open-sourcing our code for reproduction of our results. > Application to other representation learning frameworks We greatly appreciate your advice. As said in our earlier responses, this work focuses on DINO and DINOv2 as they represent the state-of-the-art methods at the moment. We would like to emphasize the non-trivial difficulty and computational cost of significantly changing an existing pipeline and tweaking it to obtain the best performance, and because of this we leave comprehensive investigation of applying our techniques to non-SoTA frameworks as future work. With that said, we strongly agree with you that the proposed regularization has great potential and applicability to other learning frameworks. Therefore, following your suggestion, we plan to include additional experiments where we apply the coding rate regularizer to some (simple and lightweight) frameworks such as SimSiam [4] and assess its effectiveness. 
We will put all these results and any more results we obtain (i.e., about Barlow Twins loss and SimSiam) in the final version of the paper. Furthermore, we will publish our implementation upon the acceptance of our work to make it accessible for any researcher to reproduce our results and apply our method to their own problems or datasets. Again, we thank you for your constructive feedback and we hope our response resolves your remaining concerns. Please let us know if you have any further questions or comments. [1] Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International conference on machine learning, 2020. [2] Wang, Tongzhou, and Phillip Isola. "Understanding contrastive representation learning through alignment and uniformity on the hypersphere." International conference on machine learning, 2020. [3] Zbontar, Jure, et al. "Barlow twins: Self-supervised learning via redundancy reduction." International conference on machine learning, 2021. [4] Chen, Xinlei, and Kaiming He. "Exploring simple siamese representation learning." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.
Summary: The paper reveals that many components in the pipelines and models of DINO and DINOv2 are used to avoid collapse of the learned representations, and thus attempts to simplify the training pipelines, architectures, and hyperparameters of DINO and DINOv2 by introducing an $\ell^2$ distance-based loss function plus a total coding rate regularization to replace the cross-entropy-based loss function. Such a simplification leads to simpler, more efficient, and more robust training pipelines and models, SimDINO and SimDINOv2, without the need for several manually designed components in models and tricks in training pipelines. Extensive experiments demonstrate that SimDINO and SimDINOv2 outperform the original DINO and DINOv2 on downstream tasks, including image classification, object detection and segmentation, semantic segmentation, and video object segmentation. Moreover, the experimental results also demonstrate stability and robustness against hyper-parameters. ## update after rebuttal: The reviewer read the responses from the authors and the other review comments, and believes that this submission is an excellent work and should be accepted. Claims And Evidence: The claims made in the paper are supported by extensive experiments on downstream tasks, including image classification, object detection and segmentation, semantic segmentation, and video object segmentation. Moreover, there is also partial theoretical analysis in the appendix to reveal possible reasons for the training instability in DINO. Methods And Evaluation Criteria: Yes. Simplifying DINO and DINOv2 does make a lot of sense for wide applications of unsupervised representation learning. Theoretical Claims: Yes, the proofs in the appendix are correct. Experimental Designs Or Analyses: Yes. The experimental designs are fair and sound. Supplementary Material: Yes. The materials in the appendix are correct and supportive. 
Relation To Broader Scientific Literature: The paper simplifies two very popular methods in unsupervised visual representation learning, DINO and DINOv2, and thus proposes two promising models: SimDINO and SimDINOv2. Since DINO and DINOv2 are widely applied in visual representation learning and have broad connections to the deep learning literature, the reviewer believes that the simplified models have significant potential to advance the practice of visual representation learning. Essential References Not Discussed: Essential references are discussed. Other Strengths And Weaknesses: Strengths: + Extensive experiments are provided to demonstrate that SimDINO and SimDINOv2 outperform the original DINO and DINOv2 on image classification, object detection and segmentation, semantic segmentation, and video object segmentation. + Experimental results also confirm the stability and robustness of SimDINO and SimDINOv2 against hyper-parameters. Weaknesses: - There is no theoretical justification of the mechanism by which adding the total coding rate maximization term avoids feature collapse. For example, what is the landscape of the optimal solutions when using the simplified loss function based on $\ell_2$ distance and the rate reduction regularization in SimDINO? Other Comments Or Suggestions: At L166 in right column: "Note that $d_{\ell^2}(z_c^{cls}, z_g^{cls}) = - (z_c^{cls})^\top z_g^{cls}$ since ....". This is not correct; a constant 2 is missing. Questions For Authors: Regarding the mechanism by which adding a total coding rate maximization term avoids collapse, is it possible to have some theoretical results? For example, what is the landscape of the optimal solutions when using the simplified loss function based on $\ell_2$ distance and the rate reduction regularization in SimDINO? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Dear Reviewer 5gcy, We deeply appreciate your insightful review and are pleased to see that you find our work backed by solid empirical studies, with potential to improve the practice of visual representation learning. Here we attempt to address the concerns you posed. > Theoretical justification for coding rate regularization Thanks for your suggestion. It can actually be shown that maximizing the coding rate induces the feature matrix $Z$ to be full-rank. This can be proved by a spectral analysis as $\log\det(I+ Z^\top Z) = \sum_{i=1}^{\min(n,d)}\log(1+\sigma_i^2)$, where $\sigma_i$ is the $i$-th singular value of $Z$. Maximizing the coding rate then leads to uniform non-zero singular values (as columns of $Z$ are $\ell^2$ normalized), making $Z$ full-rank. This means the learned features are spread on the unit sphere, avoiding a collapsed solution. On the other hand, the $\ell^2$ distance encourages the student features to be close to the teacher features on the same image, ensuring that we do not arbitrarily project different crops of the same image to large distances in the feature space. This is the extent of our added theoretical analysis for now; note that a full optimization theory-esque landscape analysis is very challenging and/or unreflective of practice due to the self-distillation (teacher/student) aspect, and we leave such analysis for future work. We hope these clarifications alleviate your concerns and we will put the relevant theoretical analysis in the final version of the paper. > The connection between MSE and dot product similarity Thank you for pointing out this minor algebraic error. Note that in eq. (7) we have included a factor of $\frac{1}{2}$, so the correct additive constant is +1. We will amend this in the final version of the paper. We greatly appreciate your time in evaluating our work and your suggestions will certainly help improve the quality of our work. 
Please let us know if you have more questions or concerns and we are happy to address them. --- Rebuttal Comment 1.1: Comment: The reviewer appreciates the authors' effort to carefully address the concerns and agrees that the self-distillation involving student and teacher networks makes the theoretical justification quite challenging. Due to the supportive experiments and the elegant way the feature collapse issue of DINO and DINOv2 is addressed, the reviewer believes that this is an excellent work and should be accepted. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your positive evaluation of our work and your valuable suggestions in improving our paper. Please let us know if you have any further comments or questions.
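The spectral identity invoked in the rebuttal above, $\log\det(I+Z^\top Z)=\sum_i\log(1+\sigma_i^2)$, is straightforward to verify numerically. The following is a minimal numpy check (shapes are illustrative; this is not the authors' code):

```python
import numpy as np

# Numerical check of log det(I + Z^T Z) = sum_i log(1 + sigma_i^2),
# where sigma_i are the singular values of Z. Shapes are illustrative.
rng = np.random.default_rng(0)
Z = rng.normal(size=(16, 8))
Z = Z / np.linalg.norm(Z, axis=0)      # l2-normalize columns, as in the rebuttal

m = Z.shape[1]
lhs = np.linalg.slogdet(np.eye(m) + Z.T @ Z)[1]   # log-determinant side
sigma = np.linalg.svd(Z, compute_uv=False)        # singular values of Z
rhs = np.sum(np.log1p(sigma ** 2))                # spectral side
print(np.isclose(lhs, rhs))
```

Because each term $\log(1+\sigma_i^2)$ vanishes only when $\sigma_i = 0$, driving this quantity up pushes all singular values away from zero, which is the full-rank (non-collapse) argument made in the rebuttal.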
Summary: The work aims to simplify the architectures of the DINO and DINOv2 families of models by adding an explicit coding rate term to their loss. The simplified architectures (referred to as SimDINO and SimDINOv2) are more robust to different design choices and offer a Pareto improvement over the DINO model families. The simplification, if effective, could be useful for the larger field of AI and encourage more computationally efficient approaches. Claims And Evidence: The contribution of this work can be briefly summarized as: methods to reduce the complexity of the DINO and DINOv2 families by reducing complexities in training pipelines and adding a coding rate regularizer, enabling simple, robust, and efficient training pipelines which result in improved performance in vision SSL. Methods And Evaluation Criteria: - In Figure 1, it is stated that the simplification lies in the removal of a linear layer and a softmax layer in DINO, and in the removal of the same layers along with one component of the loss in DINOv2. - The DINO pipeline is recapped (this is good for the readers) to highlight z^{cls}_c and z^{cls}_g obtained from local (student network) and global views (teacher network). It is argued that these features can be compared directly. However, the post-processing steps that are applied to prevent collapse result in the loss highlighted in Equation 4. The complexities in the updates of student and teacher parameters are further highlighted. It is explained how collapse is avoided during the EMA update, while mentioning that the exact mechanism is not clear. - The two main questions addressed are 'how to use z^{cls}_c and z^{cls}_g directly' and 'how to enforce non-collapse by efficient use of negative samples?'. The two questions, if addressed, could lead to removal of the linear and softmax layers along with the EMA update structure. 
Equation (9) is proposed as a solution (for DINO), applying a squared Euclidean distance and a rate distortion term with a hyperparameter γ; similarly, Equation (17) has been proposed for DINOv2. Guidance on the hyperparameter γ has also been provided. - The overall problem is well posed and clearly formulated. - For evaluation, the ImageNet-1K classification dataset has been used for all methods for ViT backbones pretrained on DINO and SimDINO variants, and COCOval2017, ADE20K and DAVIS-2017 for downstream detection and segmentation tasks. Theoretical Claims: Theoretical details have been listed for hyperparameter tuning in Appendix C, provided specifically to justify the choice of γ. I have not carefully verified the proof but the steps are consistent and I have not noticed any discrepancies in a brief look. Experimental Designs Or Analyses: - Experiments compare the DINO approaches with their SimDINO alternatives on - Classification: Improvements in the performance have been highlighted in Table 1. - Object Detection and Segmentation: Tables 2 and 3 have highlighted the performance improvements for SimDINO. - Segmentation maps for DINO and SimDINO in the supplementary material are comparable. Supplementary Material: The supplementary material formally describes global and local views, complexity in DINO, hyperparameter scaling justification, the SimDINO and SimDINOv2 algorithms, implementation details (model, pipeline, data, optimization hyperparameter choices), some ablation studies, and visualizations of attention maps. All the parts support the main content in the paper. Relation To Broader Scientific Literature: The DINO families have been introduced with the background of contrastive SSL, and the issue of representation collapse is discussed in the context of contrastive SSL. 
The use of negative samples (directly) in DINOv2 and (indirectly) in DINO models is highlighted, along with the complexity and instability of the models and the requirement of 'many tweaks' and careful hyperparameter selection for convergence. References to the corresponding sections and/or papers should be cited when mentioning the issues with the complexity of these models and the claim that all these are required for convergence. For instance, Appendix F.1, which highlights some ablation studies on the stability of DINO training, could be referenced. This work removes the 'many tweaks and hyperparameters' (figures and discussion could be referred to accordingly) from the DINO families by adding a coding rate regularizer to the optimization. Essential References Not Discussed: I am not aware of any missing essential references. Other Strengths And Weaknesses: The manuscript is well-written and maintains high standards for readability. The problem and framework are well posed. Experiments have been conducted to highlight the value of the proposed solutions. The design choices could potentially be further elaborated on by extending the experiments to include more variants of regularizers (including choices for coding rate terms), methods for improving efficiency, etc. This would help in establishing the value of the proposed solution over existing choices, which could motivate users toward adoption. Other Comments Or Suggestions: - The last part 'vision SSL' (page 2, line 88) could perhaps be replaced by the DINO and DINOv2 families, or more details can be provided on how it would apply to the larger vision SSL field. - Coding rate regularisation has been widely used in the literature, as mentioned in the Subsection 'Coding rate, and related regularizers'. 
Since the coding rate term (in Equations 9 and 17) is central to the contribution, it would be good to compare variants of coding rate terms in the literature and how they differ in implementation, evaluation, etc. - The improvements have been proposed over the existing DINO and DINOv2 pipelines. It would be good to summarize all the framework and training-style variants of the DINO and DINOv2 families which differ from the standard frameworks chosen for comparison. The contribution can be split into efficiency and improvement. Both of these are valuable. However, discussion and extension of experiments on design choices could strengthen the work. The literature and experiments could potentially be improved to include other common strategies that avoid representation collapse and how the proposed strategy is not only more efficient at avoiding the collapse but also more effective. - Discussion on tradeoffs and efficiency could be provided when using alternate strategies for representation collapse. Questions For Authors: - The use of the 'Pareto improvement' term in the abstract is slightly confusing to me. It is generally used when an improvement comes at the cost of something else (based on the Pareto definitions). In this work, I see that efficiency and performance are argued to improve together. - I have shared my comments in other sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer nrJd, Thank you for your insightful comments and suggestions. We are encouraged by your compliments on the simplification of the proposed method as well as the empirical evidence and presentation quality of our work. Here we attempt to address the concerns raised in your review. > Including more variants of regularizers and coding rate terms We appreciate your suggestion. First, we use coding rate as the anti-collapse loss here due to its simplicity and theoretical grounding (also refer to our response to Reviewer 5gcy for further theoretical justification). In terms of its computation, we would like to clarify that there do not exist many variants (i.e. all the cited works compute the exact same quantity). Other than coding rate, we agree it is possible to prevent collapse with other choices of regularizers and is an interesting direction to explore. We have experimented with directly using contrastive loss, but the results lag behind our choice. We will include these ablations in the camera-ready version of the paper. > Other variants of DINO and DINOv2 pipelines To the best of our knowledge, there aren’t any major variants of DINO and DINOv2 pipelines. Generally people either directly use the pretrained checkpoints or rely on the official implementations. We believe this can also be largely attributed to the complexity and fragility of the DINO pipeline, as we have demonstrated in Table 5 in the appendix. To compare our methods with the official pipeline, Figure 1 directly compares and summarizes the relevant changes. > Some issues on presentation Thank you for your comments. Regarding the use of "pareto improvement", we respectfully assert that a "Pareto improvement" is when the whole Pareto curve is moved forward (i.e., when improvements are gained without any cost). You may have been thinking of moving along the Pareto frontier, which involves tradeoffs. 
For "vision SSL" (page 2, line 88), we want to suggest that simplification can be very valuable to the larger field of vision SSL, and our work pushes this broader envelope. Rather than adding more tricks and complexities, we show that identifying the necessary components (i.e. feature alignment while avoiding collapse) naturally leads to a greatly simplified design and improved performance. We believe such practices are useful for the larger field of vision SSL. If this causes confusion, we are happy to revise it. We hope these clarifications can resolve your concerns. We again thank you for your time and effort in evaluating our work, which will no doubt improve its quality. We welcome your feedback and will be happy to engage in further discussion. --- Rebuttal Comment 1.1: Comment: I thank the authors for responding to my comments and providing clarifications accordingly. My concerns regarding the use of "pareto improvement" and DINO pipelines are addressed. However, it would still be good to provide brief explanations on the alternate choice of regularizers for preventing collapse. The authors have mentioned the comparison of 'directly using contrastive loss' with their approach. It would be good to mention what variants of contrastive loss have been used and a brief summary of the comparison itself. In my opinion, that would make the contribution stronger. I am willing to change my initial score if brief summaries of the comparison results (and a summary of possible regularizers) to be included in the updated manuscript are provided. --- Reply to Comment 1.1.1: Comment: We are grateful for your review and valuable suggestions. We here provide you a summary of what we have done so far in verifying the effectiveness of our proposed coding rate regularizer. Recall that we regularize the student features on the global views in our experiment via the coding rate objective. 
As alternatives to our design, we have since tested three different regularizers (with other settings fixed to be the same as SimDINOv2 on ViT-L/16) and we include their KNN accuracy results below: - the vanilla contrastive objective (i.e. explicitly repelling negative samples and attracting positive ones) as in SimCLR [1]. Compared to our result, this setting results in a performance decrease of about 2 points (79.4 vs 81.1). - the uniform loss proposed in [2] that encourages the representation to be uniform on the unit sphere. This setting leads to a performance decrease of about 4 points (77.2 vs 81.1). - the Barlow Twins loss proposed in [3] that penalizes off-diagonal terms while promoting the on-diagonal terms on the covariance matrix of learned representations. This setting causes NaN in our experiments so far. It might be related to the sensitivity of the coefficient within the loss (as it needs to balance the off-diagonal and on-diagonal terms) and we are still investigating. We will put all these results and any more results we obtain (e.g., about the Barlow Twins loss) in the final version of the paper. Overall, our experiments show that the coding rate is indeed a good choice in our setting, potentially due to the variance reduction reason proposed in the paper. We sincerely hope these results can help resolve your concerns. [1] Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International Conference on Machine Learning, 2020. [2] Wang, Tongzhou, and Phillip Isola. "Understanding contrastive representation learning through alignment and uniformity on the hypersphere." International Conference on Machine Learning, 2020. [3] Zbontar, Jure, et al. "Barlow twins: Self-supervised learning via redundancy reduction." International Conference on Machine Learning, 2021.
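Two of the alternative regularizers compared in the rebuttal above have simple closed-form objectives. The following is our hedged numpy rendering of the published formulations of the uniformity loss [2] and the Barlow Twins loss [3]; the hyperparameters `t` and `lam` and the shapes are illustrative, and this is not the code used in the reported experiments:

```python
import numpy as np

def uniformity_loss(Z, t=2.0):
    """Uniformity loss of Wang & Isola (2020): log of the mean pairwise
    Gaussian potential over l2-normalized features Z (rows are samples)."""
    sq_dists = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    mask = ~np.eye(len(Z), dtype=bool)            # exclude self-pairs
    return np.log(np.mean(np.exp(-t * sq_dists[mask])))

def barlow_twins_loss(Za, Zb, lam=5e-3):
    """Barlow Twins loss of Zbontar et al. (2021): push the cross-correlation
    matrix of two views toward the identity (diagonal -> 1, off-diagonal -> 0)."""
    Za = (Za - Za.mean(0)) / Za.std(0)            # standardize per dimension
    Zb = (Zb - Zb.mean(0)) / Zb.std(0)
    C = (Za.T @ Zb) / len(Za)                     # cross-correlation matrix
    on_diag = np.sum((np.diag(C) - 1.0) ** 2)
    off_diag = np.sum((C - np.diag(np.diag(C))) ** 2)
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
Z = rng.normal(size=(32, 8))
Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # unit-norm rows
print(uniformity_loss(Z), barlow_twins_loss(Z, Z))
```

Note that `lam` must trade off the on-diagonal and off-diagonal penalties, which is consistent with the rebuttal's observation that the coefficient inside the Barlow Twins loss is sensitive.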
Summary: The authors aim to improve the stability of self-supervised models DINO and DINOv2, while simplifying the training process. Collapse is a common problem in self-supervised learning (SSL), and it is usually avoided with careful hyperparameter tuning and architectural adaptations. In particular, to avoid collapse, DINO uses softmax temperature and centers the Teacher’s features, while DINOv2 adds the KoLeo regularizer to make the span of features more uniform. The authors propose the following two main changes, (1) remove the heads, and replace cross-entropy loss with MSE on the $l_2$ normalized features of the encoder, (2) add rate distortion regularization to reduce the correlation between feature dimensions, which mitigates collapse to a constant output. The authors experiment with ViT-S, B, and L models, which they pre-train on ImageNet-1K (IN-1K), and test on multiple downstream tasks, including classification, object detection, semantic segmentation, and video object segmentation. The proposed models, SimDINO and SimDINOv2, consistently outperform the DINO and DINOv2 baselines trained by the authors. They also show that hyperparameters from training ViT-B can be transferred to ViT-L, allowing for stable training. Finally, the authors provide ablations in the Appendix to study the effect of different hyperparameters and design choices, e.g., batch size. ## update after rebuttal Because of the discrepancy in the description of the DINO loss, which is even involved in derivations, I cannot recommend the paper for publication. However, the authors clarified/addressed a number of comments from my initial review, so, I increased my score from 1 to 2. Claims And Evidence: The authors make two main claims, (1) SimDINO and SimDINOv2 allow for simpler, more robust, and more computationally efficient training compared to DINO and DINOv2, (2) learned representations are of higher quality because of performance on downstream tasks. 
- Performance: In the conducted experiments, the proposed models outperform the baselines across different downstream tasks, and different pre-training datasets. However, I see a number of issues with the experimental design: - Training duration: In Table 1, all models are trained for 100 epochs, while in the DINO paper [1] they report performance after 800 epochs, and they train for 100 epochs only for ablations. Having said that, the proposed method may be able to outperform the baselines if all models are trained longer, but this should be verified experimentally, especially since SSL methods benefit from extended training. - The provided experiments are adequate to show relative differences between DINO and SimDINO models, but additional baselines are required to be able to estimate the impact of the proposed method in the literature. - Simplicity: - One of the experiments in support of the increased training stability is provided in Fig. 2, where SimDINO performance increases as training progresses, while DINO saturates and even experiences a decrease in accuracy. However, these results contradict the results reported in the original DINO paper, e.g., in Appendix Section D in [1], ViT-S improves its k-NN accuracy by a big margin as it is trained for 100, 300 and 800 epochs. This raises concerns about the training of the baselines, or about the adequacy of the training duration, since in Fig. 2 the models are trained only up to 100 epochs. - The authors claim that training is more efficient, however, training compute is not reported. The proposed pipeline is indeed simplified because the heads are removed, but a new loss term is introduced, which requires non-trivial computations. So, I think any gains in training efficiency should be quantified. [1] Caron, Mathilde, et al. "Emerging properties in self-supervised vision transformers." *Proceedings of the IEEE/CVF international conference on computer vision*. 2021. 
Methods And Evaluation Criteria: I think there are a number of issues with the description of the methods. - ln 92, col 1: $R^{D \times *}$ represents the space of finite sequences, but in $\bigcup_{T=1}^{\infty}$, $T$ goes to infinity. - What is the reason for defining a $d-1$ dim space instead of a $d$-dimensional one? For example, I could see how something like that would make sense for the sequence dimension $N$, since $N-1$ is the number of patches, and there is an additional cls token for a total sequence length of $N$, but why for $d$? Also, in ln 133, col 1, the heads are defined as functions $R^d \rightarrow R^m$, but for their inputs $z$, in ln 166, col 2, we have $z \in S_{d-1}$. - ln 99: $\Theta$ is not defined. - ln 101: In the definition of $f_{\theta}^{\text{cls}}$, I guess it should be $S_{d-1}$ instead of $S^{d-1}$. - ln 113-116, col 1: I think it would be good to make a distinction between a transformation that augments an image, and the output of such a transformation for a given image, which can be called a view. In ln 92-93, col 2, the term “view” is treated as the output of a transformation, i.e., “the feature corresponding to any view”, and in ln 113-116, col 1, as the transformation itself. The same in ln 121, col 1, where $X_c$ is called a “view”, while in the paragraph before, $v_c$ is defined as a “view”, and $X_c = v_c(X)$. SimCLR [2] has simple and clean notation for this issue. - Eq. 2, 3: The temperature values should be different, because they are not assumed the same in DINO [1]. - ln 112, col 2: It is mentioned “the expectation is over $X$”, while there is no $X$ in Eq. 4. - If I am not mistaken, the definition of DINO loss in Eq. 4, 5, 6 is incorrect. As can be seen in Algorithm 1 in [1], loss = H(t1, s2)/2 + H(t2, s1)/2 which is equivalent to $1/2 (CE(p_t(t_1), p_s(s_2)) + CE(p_t(t_2), p_s(s_1)))$, while based on Eq. 5 it should hold that $p_t(t_1) = p_s(s_1)$ and $p_s(s_2) = p_t(t_2)$, which is not true. 
- This affects derivations in Appendix B, where Eq. 5 is used. - Is this the loss used for DINO in the experiments? - Fig 1: In (a), it seems that only the teacher weights are updated through EMA, but this is true for the head weights as well. - Eq. 8 is not adequately explained. For example, in [3], which is cited by the authors, it is explained that “rate distortion $R(z, \epsilon)$ is the minimal number of binary bits needed to encode $z$ such that the expected decoding error is less than $\epsilon$”. This is the core contribution of the work, and the authors introduce $\epsilon$ as just a hyperparameter, without explaining its meaning. - Footnote 3, starting at ln 215, col 2: - The authors refer to a similarity term in Eq. 9. I guess they refer to the 1st term, however it is an MSE term, not a similarity term. - Their argument makes sense when the MSE is small, however, what if it is large? - ln 245-246, col 1: $Z^{\text{patch}}$ and $P^{\text{patch}}$ are mentioned, e.g., “$i^{\text{th}}$ column of $Z^{\text{patch}}$”, before they are defined. [2] Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." *International conference on machine learning*. PmLR, 2020. [3] Yu, Yaodong, et al. "Learning diverse and discriminative representations via the principle of maximal coding rate reduction." *Advances in neural information processing systems* 33 (2020): 9422-9434. Theoretical Claims: The authors provide theoretical results in Appendix Section B and C. I commented about Section B in the previous section. I went through the proof in Section C, but not in enough detail to be absolutely certain that it is correct. Experimental Designs Or Analyses: - One of the main contributions of the DINOv2 paper is performance at scale, both in terms of model and training data size. Based on that, the scale of the conducted experiments doesn’t allow for direct comparisons with the main claims of the DINOv2 paper. 
- After pre-training, the authors keep the Teacher model and discard the Student (ln 188-189, col 2), however, in the DINO paper the opposite holds true. Which model is kept for the reported baselines? Are there any noticeable differences in the performance of the Teacher and Student models? - It would be interesting to have experiments with CNN backbones, similar to the DINO paper, to see the effect of the proposed regularizer, however, I don’t think this is a must-have, since DINO models are reported to excel with Transformer backbones. Supplementary Material: - Section F.4.: Why is this called “DINO without Self-Distillation”? If I understand correctly, there is still a Teacher and a Student model, but instead of using momentum, the Teacher is a copy of the Student at every iteration, which is the “Student copy” setting explored in the original DINO paper. - In Appendix A, $X_l$ belongs to $R^{N \times D}$, and not to $R^{D \times T}$ used on page 2, which makes for inconsistent notation. Relation To Broader Scientific Literature: The proposed method builds directly on the DINO line of models, and offers increased stability and simplicity to the training process by introducing a rate distortion regularizer. As I mentioned earlier, this regularization term aims to reduce the correlation between feature dimensions, which is an idea already explored in SSL [4]. As a result, the proposed method is a combination of existing ideas in a modern setting, which I think has value, since SSL is an important research direction. [4] Zbontar, Jure, et al. "Barlow twins: Self-supervised learning via redundancy reduction." *International conference on machine learning*. PMLR, 2021. Essential References Not Discussed: There are references that could be added, e.g., [4], but in general, I think the authors provide adequate references. Other Strengths And Weaknesses: No additional comments. 
Other Comments Or Suggestions: There are some minor typos, for example: - ln 163, col 1: “receives a interpolated”, should be “an”. - ln 112, col 1: “During the pipeline”, this phrase doesn’t seem right. - ln 113-116, col 1: It is mentioned “we sample at random a view”, and then “selected randomly”, which seems unnecessary. Questions For Authors: No additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer S9wV, Thank you for the detailed review. Many comments, especially about presentation or typos, are helpful and will be incorporated in the camera-ready version; we do not address them here. Others require clarification or even stem from misunderstandings on your end, and we reply to these as follows. ## Claims and Evidence > Training duration Longer training usually leads to better performance, which is demonstrated in Appendix F.3. SimDINO still outperforms DINO with more training epochs. We are limited by compute and cannot adopt the exact same configurations as the original DINO(v2). With that said, we have since trained a ViT-B SimDINO model for 400 epochs with better performance in $k$-NN accuracy (76.6) than the official DINO model (76.1). This is still an unfair comparison since the official setting uses full fp32 precision while ours only uses fp16 due to memory constraints. We will include more such results in our final version. > Additional baselines Please see our response to Reviewer ubYD (5-1). > Training stability It may seem contradictory but is actually not. For example, checkpoints at the 100-th epoch in a 100-epoch run are very different from those in an 800-epoch run, due to differences in learning rate, teacher momentum, etc., and are incomparable. We also notice a similar performance dip in DINO trained for 200 epochs. > Computational efficiency The coding rate computation is efficient since covariance matrices are PSD. We will include quantitative analysis in the final version. ## Methods and Evaluation Criteria > Definition of $\mathbb{R}^{D \times *}$ $\bigcup_{t = 1}^{\infty}\mathbb{R}^{d \times t} = \{x \colon \exists t\ \text{s.t.}\ x \in \mathbb{R}^{d \times t}\}$ is the set of all finite-size matrices with $d$ rows. This is a standard definition for the symbol $\bigcup_{t = 1}^{\infty}$. 
> Definition of $\mathbb{S}_{d - 1}$ - The sphere in $\mathbb{R}^{d}$ is a $(d - 1)$-dimensional submanifold of $\mathbb{R}^{d}$. The definition including "$\mathbb{S}_{d - 1} \subseteq \mathbb{R}^{d}$" disambiguates this notation. > Definition of $\Theta$ Here $\Theta$ is the weight space. We will make this more explicit. > Multiple definitions of view As is relatively common in SSL, we identify the function $v$ and the object $v(X)$ with each other for conceptual ease. We will make this more explicit. > Expectations over $X$ The expectation is over $X$; the variables $z, p$, etc., are functions of $X$. We plan to streamline this notation to show dependencies. > Typo in the loss definition Thank you very much for pointing it out. We emphasize that **this is only a typographical error, and we use the official DINO(v2) losses in experiments**. We will fix this in the paper. However, it makes _very little_ difference in the main body and Appendix B, as the symmetry is not important for our argument. > Parameter $\epsilon$ In this work, we only care that a smaller $\epsilon$ makes the regularizer larger. However, for completeness we will add a short description of the information-theoretic role of $\epsilon$. > MSE and similarity Since the features are normalized on the sphere, an MSE term _is_ a similarity term. > Covariance approximation The provided argument applies only when the MSE term is small. Since we choose $\gamma$ to balance the terms’ relative sizes, a well-trained SimDINO(v2) model _will_ have a small MSE term (induced by the feature alignment loss term), so the argument will apply. ## Supplementary Material > Section name choice We use this section label precisely because there is no self-distillation in this setting; one just tries to minimize the distance between two views' features, instead of taking one model as a privileged teacher from which the other network is distilled. 
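The rebuttal's point that an MSE term on sphere-normalized features *is* a similarity term follows from the identity $\|u - v\|^2 = 2 - 2\,u^\top v$ for unit vectors. A quick numerical check of this identity (our own illustration, not code from the paper):

```python
import numpy as np

# For unit-norm u, v:
#   ||u - v||^2 = ||u||^2 + ||v||^2 - 2 u.v = 2 - 2 cos(u, v),
# so minimizing the MSE term is equivalent to maximizing cosine similarity.
rng = np.random.default_rng(1)
u = rng.normal(size=16)
u /= np.linalg.norm(u)
v = rng.normal(size=16)
v /= np.linalg.norm(v)

squared_dist = np.sum((u - v) ** 2)
cosine_sim = u @ v
assert np.isclose(squared_dist, 2.0 - 2.0 * cosine_sim)
```

This also frames the later reviewer comment: MSE is indeed a distance metric, but on the sphere it is an affine function of cosine similarity, so the two carry the same information.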
## Experimental Designs and Analyses > Performance at scale Our experiments (up to ViT-L and on ImageNet-1k) use very popular/standard settings in vision SSL. They are by no means considered limited in scale. We are bound by our compute resources, and leave further scaling-up of our approach as future work. > Teacher/student weights Your assertion is false; both DINO and our work use the teacher weights for evaluation. This is shown in the original DINO paper (Fig.6), where the teacher model consistently outperforms the student. > CNN backbones As you point out, DINO(v2) perform strongly with ViT models, which are our focus as well. In particular, DINOv2 does not provide training on CNN backbones either. ## Overall Thanks again for your valuable feedback. We will fix many raised issues about prose and typos in the camera-ready version. We hope that our responses clarify the strength and correctness of our work. Given that we have attempted to address all of your main concerns and clarified several critical misconceptions of yours, we humbly request that you reconsider your recommendation for this work, as we believe it has good value to the community. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for answering all my points. I think some of them are still not addressed, so, I would like to offer the following comments: - DINO loss symmetry in Eq.5 - I start with this issue because it is one of my main concerns. As I mentioned in my review, Eq. 5 has 2 distributions, $p$ and $q$, while there should have been 4. The authors claim that this is a typographical error, however, the symmetry that involves these 2 distributions in Eq. 5 drives the derivations in Appendix B, so, how is this a typo? Also, why is this not important in Appendix B, since Eq. 21 and Eq. 24 don't apply to DINO? The authors state that this is not important, but they don't explain why. 
- Training stability - I accept the authors' argument that some of the DINO, and especially DINOv2, experiments are very computationally expensive and should not be expected for new methods to always reach that scale. Having said that, the particular experiments in Fig. 2 have to do with training stability, and the original DINO was trained for 800 epochs, while most SSL methods train for much longer than 100 epochs. So, I am not claiming that the favorable behavior of SimDINO wouldn't extend to longer training times, but that there is insufficient evidence for this due to the limited number of training epochs. - Computational efficiency - I trust the authors that calculation of the proposed regularization term doesn't add a significant overhead, however, I would appreciate it if they could share the quantitative analysis that they plan to include in the updated manuscript. - Definition of $S_{d-1}$ - My question had to do with the reason d-dimensional latent variables belong to a (d-1)-dimensional sphere after normalization. It may be totally justified, I just said that the reason is not clear to me and I asked why this is. - MSE and similarity - Based on the authors' response, I think there is a misunderstanding here, so, to clarify, my intention was to make a distinction between similarity and distance metrics, e.g., cosine similarity and Euclidean distance. When two vectors become more similar, similarity metrics increase, and distance metrics decrease. MSE falls in the latter category, thus, I think it is appropriate to call it a distance metric instead of a similarity metric. - Teacher/student weights - I apologize for mentioning that the reported experiments in the DINO paper are with the Student model, this was my assumption, given that this is not explicitly mentioned in the paper. After more careful inspection of the official DINO github repo, I can confirm that the Teacher model is the one used in the reported DINO results. 
In summary, as I mentioned in my review, I think the authors work on an important research direction, i.e., stability of established SSL methods, however, due to the aforementioned issues I don't have the confidence to change my score. --- Reply to Comment 1.1.1: Comment: Thanks for your detailed response. We will try our best to resolve the remaining confusion. > DINO loss symmetry (or lack thereof) The original symmetrized objective was an inaccuracy in the presentation that occurred several times in the text, and we (again) thank you very much for pointing it out. We call it a "typographical" error to emphasize the fact that **it does not affect the experiments at all**. We are not sure what you mean by 4 vs 2 distributions, and you may be thinking of some other SSL method. In DINO there is the general (local or global) view taken by the student and the global view taken by the teacher, and that is it --- our notation gets this right, and in particular agrees with the DINO paper [1], which writes the loss as [quoted] $$\sum_{x \in \{x_{1}^{g}, x_{2}^{g}\}}\sum_{\substack{x^{\prime} \in V \\ x^{\prime} \neq x}}H(P_{t}(x), P_{s}(x^{\prime})),$$ which clearly shows the variables that we use. In terms of whether or not the symmetrization matters for our analysis in Appendix B, note that the optimality conditions of the CE loss remain the same for the symmetrized and non-symmetrized versions (i.e., $p = q$ and both are one-hot vectors), and Appendix B is only concerned with the optimality conditions, so the error does not substantively impact our argument at all, and the fixes required are trivial. > Training stability As pointed out in our previous response, the dip in performance in DINO happened not only in the 100-epoch run, but in other settings with more training epochs as well. Overall, our analysis on training stability does not depend on the number of training epochs, as shown in the last paragraph "More on Stability and Robustness" in Section 3.2. 
Specifically, our evidence reveals the high sensitivity of hyperparameters for DINO, as well as the difficulty associated with adapting DINO for different architectures (e.g. collapse in ViT-L) and different datasets (Fig.4). These results have nothing to do with the number of training epochs and demonstrate the superior stability and robustness of our proposed method. > Computational efficiency We again thank you for your valuable suggestion and will include the efficiency analysis in our final version. Concretely, we will measure the FLOPs and wall-clock time of our loss function and compare them with the original DINO loss (together with the linear head). > Definition of $\mathbb{S}_{d - 1}$ The $(d - 1)$-dimensional sphere $\mathbb{S}_{d - 1} \subseteq \mathbb{R}^{d}$ is the set of all unit-norm vectors in $\mathbb{R}^{d}$. By definition, if you take a nonzero vector in $\mathbb{R}^{d}$ and normalize it, it ends up on $\mathbb{S}_{d - 1}$. > MSE and similarity Sure, we can disambiguate this, thanks for pointing it out. But we emphasize (as you rightly point out) there is no conceptual or mathematical issue. We hope that our replies have clarified any remaining confusion. In light of our responses, we would kindly request that you re-evaluate the work. [1] Caron, Mathilde, et al. "Emerging properties in self-supervised vision transformers." Proceedings of the IEEE/CVF international conference on computer vision. 2021.
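The non-symmetrized DINO objective quoted in the reply above can be sketched numerically as follows. This is our own simplified illustration: the temperature values are assumed defaults, and the teacher centering and multi-crop batching of the actual DINO implementation are omitted.

```python
import numpy as np

def softmax(logits, temp):
    z = logits / temp
    z = z - z.max(axis=-1, keepdims=True)  # stabilize before exp
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dino_loss(teacher_logits, student_logits, t_temp=0.04, s_temp=0.1):
    """Average of H(P_t(x), P_s(x')) over teacher global views x and
    student views x' != x, following the quoted objective.

    teacher_logits: (2, K) logits for the two global views.
    student_logits: (V, K) logits for all V views; by convention the
    first two rows correspond to the same global views as the teacher.
    """
    total, count = 0.0, 0
    for i in range(teacher_logits.shape[0]):       # teacher global views
        p_t = softmax(teacher_logits[i], t_temp)
        for j in range(student_logits.shape[0]):   # all student views
            if j == i:  # skip x' == x
                continue
            log_p_s = np.log(softmax(student_logits[j], s_temp))
            total += -(p_t * log_p_s).sum()        # cross-entropy H(p_t, p_s)
            count += 1
    return total / count

# With all-zero logits every distribution is uniform over K classes,
# so each cross-entropy term equals log(K).
assert np.isclose(dino_loss(np.zeros((2, 5)), np.zeros((4, 5))), np.log(5))
```

Note that the sum runs over all (teacher view, student view) pairs with $x' \neq x$, which is the structure the rebuttal argues is correctly captured by the paper's notation.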
Task-Agnostic Pre-training and Task-Guided Fine-tuning for Versatile Diffusion Planner
Accept (poster)
Summary: This paper proposes a two-stage framework for training versatile diffusion policies. The core idea is to learn generalized diffusion policies from a large amount of low-quality multi-task trajectories, followed by task-specific online RL fine-tuning that quickly tailors the pre-trained diffusion policy to the downstream task. In particular, this paper uses a learning objective, similar to PPO, for fine-tuning the diffusion policy with online interaction. To enhance the learning stability, this paper further introduces a behavior cloning loss that biases the policy toward the distribution of high-reward actions. This paper evaluates the proposed method on MetaWorld 50 tasks, outperforming other offline/online RL baselines. Claims And Evidence: Yes, the claims are supported by the experimental results presented in Tables 1 and 2. Moreover, Table 3 demonstrates the efficacy of multi-task pre-training. Methods And Evaluation Criteria: This paper tests baselines and the proposed method with success rate, which is a commonly adopted metric for evaluating robot policies. Theoretical Claims: The theoretical proofs (A.1, A.2 and A.3) in the Appendix look correct to me. Experimental Designs Or Analyses: The experimental designs are reasonable Supplementary Material: I've read section A and B in the Appendix Relation To Broader Scientific Literature: This paper contributes to the field of fine-tuning diffusion generative models, a crucial aspect of image and video generation, as well as chemical and material discovery Essential References Not Discussed: I believe the references are complete Other Strengths And Weaknesses: Strengths: 1. One of the core ideas of this paper is to pre-train a generalized diffusion policy with scaled-up sub-optimal training data. The idea is practically reasonable, since it is much cheaper to collect non-expert trajectories than expert ones. 
This paper demonstrates that, by incorporating the knowledge from large-scale sub-optimal trajectories, the pre-trained policy adapts to downstream tasks more effectively and rapidly compared to the policy trained from scratch. 2. Another key insight of this paper is to enhance diffusion policies with online RL fine-tuning, which is also technically sound. It is commonly known that policies learned with behavior cloning have issues with distributional drift, where models perform poorly on unseen scenarios, and reinforcement learning could fix these issues. This paper proposes to enhance diffusion policies, which are often pre-trained with imitation learning, by online RL fine-tuning. With a simple PPO-like objective and behavior cloning-based regularization, the performance of the diffusion policy is improved by a large margin. 3. This paper is well-written and easy to follow. I can easily understand the high-level idea and low-level details from the paper. 4. This paper presents good visualization (e.g. Figure 2 and 6), helping readers understand the effect of each component. --- Weaknesses: 1. This paper lacks an analysis of the effect of separate components. I believe this paper has three main ideas: 1) pre-training models with large-scale sub-optimal trajectories, 2) fine-tuning models with online RL, and 3) diffusion modeling. This paper did a good job of analyzing the effect of large-scale pretraining and RL fine-tuning, but does not ablate the effectiveness of diffusion modeling. More specifically, the table compares to offline RL transformers (HarmoDT), offline RL diffusion models (MTDIFF), and offline+online RL baselines (which often use MLP Gaussian policies). This paper proposed an offline+online RL diffusion model. From the comparison to offline RL diffusion models, we know the effectiveness of online RL. From the comparison to offline+online RL baselines, we know diffusion models are better than MLP Gaussian policies. 
However, the comparison to offline RL transformers involves two factors (offline RL vs. offline+online RL and transformers vs. diffusion models). It is unclear to me whether the superior performance mainly results from online RL fine-tuning, or from the combination of diffusion modeling and online RL. In other words, would offline+online RL transformers work better? 2. This paper does not present the computational limitations of the proposed method. From Eqn 12, I suppose gradients are back-propagated across the whole denoising rollout, which may require longer training time and larger memory usage. It'd be great if the authors could provide an analysis of training and testing efficiency, compared to other methods. Other Comments Or Suggestions: N/A Questions For Authors: 1. What is the performance of pre-training a Decision transformer with the same sub-optimal trajectories and fine-tuning it with online RL? 2. What is the training and testing efficiency? 3. I'm wondering if the behavior cloning regularization is needed, if you consider a preference-based RL objective [1]? --- Reference: [1] Direct Preference Optimization: Your Language Model is Secretly a Reward Model. Rafailov et al. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 4vpc, We sincerely appreciate your precious time and constructive comments. In the following, we would like to answer your concerns separately. **W1**: would offline+online RL transformers work better? **R1**: We extend the multi-task Decision Transformer (DT) following [1] by pre-training it on the same sub-optimal data and fine-tuning each task online for 1M steps on the Meta-World 10 tasks. We consider two types of DT: (1) the vanilla DT, which uses a deterministic policy, and (2) the variant proposed in [1], which employs a stochastic policy. The results are presented below. |Methods|Success rate after pre-training |Success rate after fine-tuning| |-|-|-| |MT-ODT_deter|24.83 ± 0.10|40.07 ± 0.15| |MT-ODT_stoc|37.20 ± 0.12|50.47 ± 0.21| |SODP (ours)|31.60 ± 0.14|65.27 ± 0.21| **W2**: ...gradients are back-propagated across the whole denoising rollout... analysis on training and testing efficiency... **R2**: Similar to previous work [2], we do not require gradients to be back-propagated through the entire denoising chain at once. From Eq. (11) and (12), each step only requires the final reward $r(a^0)$ to compute the gradient. This allows us to back-propagate the gradient for each step separately, which improves memory efficiency. Furthermore, we employ DDIM sampling during the fine-tuning process, requiring only 10 denoising steps, which reduces both training and inference time. We present the computational efficiency results below. All results are obtained using a single 4090 GPU. Previous works on fine-tuning text-to-image diffusion models [3] propose a more efficient approach that directly back-propagates the reward gradient through the denoising chain. This method bypasses the need to model the denoising process as a MDP and reduces memory consumption by truncating backpropagation. These reward-backpropagation methods rely on differentiable rewards. 
However, such differentiable rewards are impractical in RL because reward functions vary across different environments and cannot be easily predicted by a differentiable reward model. Therefore, we use the raw, non-differentiable environment reward and employ PPO for fine-tuning. Recent studies [4,5] have improved RL-based diffusion model fine-tuning by simplifying the original PPO or introducing dense reward signals at each denoising step, leading to greater effectiveness and better performance. We leave the exploration of optimizing SODP for future work. |Methods|Fine-tuning time for each task over 1M environment steps| Inference time for each task over 50 episodes| |-|-|-| |RLPD|4.5 - 5.5 h|~ 37s| |IBRL|6 - 7.5 h|~ 39s| |Cal-QL|8.5 - 10 h|~ 40s| |SODP (ours)|11 - 13 h|~ 47s| **Q1**: ... the performance of pre-training a Decision transformer with the same sub-optimal trajectories and fine-tuning it with online RL **R3**: See R1. **Q2**: What is the training and testing efficiency? **R4**: See R2. **Q3**: I'm wondering if the behavior cloning regularization is needed, if you consider a preference-based RL objectives? **R5**: Preference serves as a weak label when the reward function is hard to obtain and must be converted into an explicit or implicit reward signal [6,7] for model fine-tuning. In our setting, the model interacts with online environments and can readily obtain rewards to evaluate the performance of given actions. Therefore, we can directly fine-tune the model using these ground-truth reward labels without the additional steps of constructing and translating preference labels. The recent success of DeepSeek-R1 [8] further demonstrates the effectiveness of using rewards to improve LLM performance. 
Whether through preferences or rewards, the objective of fine-tuning is a KL-constrained reward maximization problem [6,7], defined as: $L=\mathbb{E}_{s,a}[R(s,a)]-\beta D_{\rm KL}[\pi_\theta(a|s)\|\pi_{\rm ref}(a|s)].$ Preference-based methods such as DPO offer an alternative approach to injecting update gradients into the model but still require constraints to regulate the update step size. However, directly computing the KL-divergence term for diffusion models is not feasible due to their complexity. Therefore, we employ BC-regularization as an alternative to achieve similar effects, ensuring that the learned policy preserves the desirable properties of the reference policy while adapting to the specific task. [1] Online decision transformer. [2] DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models. [3] Directly fine-tuning diffusion models on differentiable rewards. [4] A Simple and Effective Reinforcement Learning Method for Text-to-Image Diffusion Fine-tuning. [5] Aligning Few-Step Diffusion Models with Dense Reward Difference Learning. [6] Training language models to follow instructions with human feedback. [7] Direct Preference Optimization: Your Language Model is Secretly a Reward Model. [8] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. --- Rebuttal Comment 1.1: Comment: The rebuttal has addressed my concerns, and I'm happy to keep my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your kind response. We sincerely thank you for your time and effort in providing valuable feedback to help us improve our work.
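The KL-constrained objective and the BC-regularized surrogate discussed in R5 above can be sketched as follows. This is an illustrative sketch with hypothetical function names, not SODP's actual implementation; the KL term is estimated from samples, and the BC surrogate stands in for it when policy log-probabilities are intractable.

```python
import numpy as np

def kl_objective(rewards, logp_theta, logp_ref, beta=0.1):
    """L = E[R(s, a)] - beta * KL[pi_theta || pi_ref], with the KL term
    estimated from on-policy samples as E[log pi_theta - log pi_ref]."""
    return np.mean(rewards) - beta * np.mean(logp_theta - logp_ref)

def bc_objective(rewards, actions, ref_actions, beta=0.1):
    """BC-regularized surrogate for policies (e.g. diffusion models)
    whose log-probabilities, and hence KL term, are intractable:
    penalize squared deviation from reference-policy actions instead."""
    bc_term = np.mean(np.sum((actions - ref_actions) ** 2, axis=-1))
    return np.mean(rewards) - beta * bc_term

# When the fine-tuned policy matches the reference, both objectives
# reduce to the mean reward.
r = np.array([1.0, 3.0])
lp = np.array([-0.5, -1.2])
a = np.zeros((2, 3))
assert np.isclose(kl_objective(r, lp, lp), 2.0)
assert np.isclose(bc_objective(r, a, a), 2.0)
```

Both variants share the same structure: a reward term pulling the policy toward high-return behavior, and a penalty (scaled by beta) keeping it close to the pre-trained reference.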
Summary: This paper proposes SODP, a two-stage framework for training versatile diffusion planners using sub-optimal data. SODP first pre-trains a foundation model on mixed sub-optimal trajectories without requiring expert demonstrations or reward labels, then employs RL-based fine-tuning with task-specific rewards and behavior cloning regularization. Experiments on Meta-World and Adroit benchmarks demonstrate superior performance over state-of-the-art methods with limited fine-tuning data. ## update after rebuttal As the rebuttal has addressed most of my concerns, I'd like to keep my score. Claims And Evidence: The main claims are supported by convincing evidence. SODP achieves 60.56% success rate on Meta-World benchmarks compared to 57.20% for the closest competitor. Ablation studies (especially Appendix C) demonstrate the effectiveness of BC regularization, importance of pre-training, and benefits of the two-stage approach. The efficiency claim is supported by strong performance with just 100k fine-tuning steps. Methods And Evaluation Criteria: The methods are appropriate, using established benchmarks (Meta-World and Adroit) and standard evaluation metrics (success rate). The approach comprehensively compares against both offline-online RL methods and multi-task RL baselines. The experimental design for testing data efficiency is sensible. Theoretical Claims: The paper provides theoretical derivations for the policy gradient, loss function, and behavior cloning loss. These derivations appear sound but don't offer novel theoretical insights beyond established techniques from diffusion models and policy optimization literature. Experimental Designs Or Analyses: The experimental designs are generally sound, with comprehensive comparisons and ablation studies. Visualizations effectively demonstrate how the method improves exploration. 
Limitations include:
- The computational costs of pre-training (500k steps) are omitted
- No experiments on entirely novel tasks to evaluate true generalization

Supplementary Material: Yes. The supplementary material includes detailed derivations, implementation details, extended results, ablation studies, and comparisons with concurrent work, providing valuable context for the main paper's claims. I think **Appendix C** is very important to support the claims. **The authors had better merge them with the results in the main paper.**

Relation To Broader Scientific Literature: The paper builds upon recent works using diffusion models for trajectory planning but innovates by learning from sub-optimal data rather than expert demonstrations. It effectively relates to multi-task RL methods and diffusion model fine-tuning approaches from other domains.

Essential References Not Discussed: None.

Other Strengths And Weaknesses:
Minor Strengths:
1. Novel use of sub-optimal data
2. Extension to image-based observations.
Minor Weaknesses:
1. Insufficient analysis of scaling with task complexity.

Other Comments Or Suggestions: None.

Questions For Authors:
1. How does SODP's computational efficiency compare to baselines in terms of training time and inference overhead?
2. From Tab. 2, it is interesting that with online data collection, the baseline HarmoDT remains stable when trained on the augmented dataset, whereas the performance of MTDIFF declines markedly. How could this happen? Is it because the new information is not distilled into the model in a good way?
3. Are there specific task categories where SODP struggles that would help identify the advantage of this paper?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Dear Reviewer aUqS,

We sincerely appreciate your precious time and constructive comments. In the following, we would like to answer your concerns separately.

**C1**: The computational costs of pre-training (500k steps) are omitted

**R1**: Thank you for your question. The data used for pre-training are sub-optimal trajectories. Compared to scarce expert demonstrations, collecting sub-optimal data is more cost-effective. Thus, the pre-training stage is data-efficient, allowing us to use relatively inexpensive training data to obtain a reasonable initial policy. The pre-training stage is a supervised learning process in which the diffusion model is trained offline. This setup enables the use of data parallelism to efficiently fit the action distribution. In contrast, online interaction is considerably more expensive because data must be collected and used iteratively to optimize the model. Consequently, our primary focus is on the RL fine-tuning process, aiming to improve the efficiency of data utilization and reduce the costs associated with online data collection. Overall, large-scale offline data can be leveraged for pre-training, making total computational cost less of a concern. The primary challenge in RL is not the overall computational expense but rather the efficiency of data utilization, as the cost of interaction in the fine-tuning stage is significantly higher.

**C2**: No experiments on entirely novel tasks to evaluate true generalization

**R2**: We fine-tune a model on the *hammer* task and evaluate it on tasks that were not included in the pre-training dataset. The results are presented below.

| | Pre-trained model | Fine-tuned model on *hammer* |
|-|-|-|
| *drawer-close* | 54.67 ± 0.06 | 75.33 ± 0.01 |
| *faucet-open* | 19.33 ± 0.01 | 32.67 ± 0.04 |

**C3**: I think Appendix C is very important to support the claims. The authors had better merge them with the results in main paper.

**R3**: Thank you for your suggestion.
Appendix C indeed provides further evidence supporting the advantages of our method. We conduct additional ablation studies on various fine-tuning strategies, such as fine-tuning with high-quality offline data and directly applying behavior cloning on online transitions. Additionally, we fine-tune models pre-trained on different datasets to validate the scalability of our fine-tuning approach. We will merge and discuss these ablation studies in the revised version to enhance clarity.

**W1**: Insufficient analysis of scaling with task complexity.

**R4**: As shown in Figure 8, all methods achieve a high success rate in the simple movement task *handle-pull-side*, which requires the arm to pull a handle upward. However, in more complex two-stage tasks such as *basketball* and *hammer*, which demand precise arm control to interact with small objects, both MTDIFF and HarmoDT exhibit a success rate below 10%. Despite this challenge, our method consistently outperforms the baselines, achieving higher performance (e.g., an 80% success rate in the *hammer* task). This improvement can be attributed to the broad action prior learned during pre-training, which enables effective knowledge transfer across tasks. This is particularly advantageous for challenging tasks where task-specific feedback is limited, requiring the model to leverage skills acquired from other tasks. As shown in Figure 7, fine-tuning from scratch results in a substantial decline in success rate for the *hammer* task, dropping from 80% to 20%.

**Q1**: How does SODP's computational efficiency compare to baselines in terms of training time and inference overhead?

**R5**: Please refer to R2 for Reviewer 4vpc.

**Q2**: the performance of MTDIFF declines markedly. How could it happen? Is it because we do not use a good way to distill the new information into the model?

**R6**: Thank you for your question.
We hypothesize that this is due to the 'mode shift' problem identified in previous theoretical work [1,2], which suggests that diffusion models struggle to fit the target distribution when distant modes exist. The original dataset was collected using an SAC agent with a Gaussian policy, while the newly introduced data was collected using our pre-trained model with a diffusion policy. As a result, the data distributions obtained by these policies differ significantly. When combined into a single dataset, the overall distribution contains two distant modes, which may severely hinder the model's ability to accurately capture such complex distributions.

**Q3**: Are there specific task categories where SODP struggles that would help identify the advantage of this paper?

**R7**: As shown in Fig. 8, other baselines often struggle with tasks that require the arm to pick and move a small object (e.g., *basketball*, *sweep*), whereas our method achieves higher performance.

[1] On the Generalization Properties of Diffusion Models. NeurIPS 2023.
[2] Statistical Efficiency of Score Matching: The View from Isoperimetry. arXiv:2210.00726.

---

Rebuttal Comment 1.1:
Comment: For Q3, my question was whether there is any task SODP cannot do well, to help identify the contribution of this paper. But all in all, the rebuttal has addressed most of my concerns. I'd like to keep my score.

---

Reply to Comment 1.1.1:
Comment: Thank you for your response. The main failure cases of SODP occur in challenging tasks that require precise manipulation of extremely small objects, such as the pick-place task, where the arm must pick up and place a puck, and the push task, which requires pushing the puck to a goal. We also observed that other strong baselines, such as HarmoDT, similarly struggle with these tasks. Once again, we sincerely thank you for your time and effort in providing valuable feedback to help us improve our work.
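The mode-averaging failure behind this hypothesis can be illustrated with a much simpler unimodal analogue. The sketch below is purely illustrative: a single Gaussian fit by MLE stands in for an under-expressive model, and all numbers are invented rather than taken from the paper.

```python
import random
import statistics

random.seed(0)

# Two distant behaviour modes -- a toy stand-in for merging data from a
# Gaussian SAC policy with data from a diffusion policy. All values are
# illustrative, not from the paper's experiments.
data = ([random.gauss(-4.0, 0.5) for _ in range(1000)] +
        [random.gauss(+4.0, 0.5) for _ in range(1000)])

# A model without enough capacity (a single Gaussian, fit by MLE) averages
# the two modes: its mean lands in a region the data barely visits.
mu = statistics.fmean(data)
mass_near_mu = sum(abs(x - mu) < 0.5 for x in data) / len(data)
mass_near_mode = sum(abs(x - 4.0) < 0.5 for x in data) / len(data)

print(f"fitted mean: {mu:.2f}")
print(f"data mass within 0.5 of fitted mean: {mass_near_mu:.3f}")
print(f"data mass within 0.5 of a true mode: {mass_near_mode:.3f}")
```

The fitted mean sits near zero, where almost no data lies, while each true mode keeps substantial mass; an expressive multi-modal model (such as a diffusion model with sufficient capacity) would need to cover both modes instead of averaging them, which is the harder fitting problem the cited theoretical work addresses.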
Summary: The paper proposes a two-stage framework that pre-trains a diffusion planner using a large number of suboptimal demonstrations and performs task-specific fine-tuning through reinforcement learning. The diffusion planner is first pre-trained on multi-task offline data of low quality to predict action sequences given historical states. To further optimize performance for particular tasks, the planner is fine-tuned with policy gradients using task rewards in an online manner. The proposed method achieves strong performance on Meta-World tasks, outperforming several offline-to-online and multi-task RL baselines.

## Update after rebuttal

I appreciate the authors' response and effort in the additional experiments. Most of my concerns and questions have been addressed. Please include the results above in the final version. I will change my score to 3.

Claims And Evidence: Please see the "Other Strengths And Weaknesses" and "Other Comments Or Suggestions" sections below.

Methods And Evaluation Criteria: Please see the "Other Strengths And Weaknesses" and "Other Comments Or Suggestions" sections below.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Please see the "Other Strengths And Weaknesses" and "Other Comments Or Suggestions" sections below.

Supplementary Material: The supplementary material provides more ablation studies regarding different fine-tuning and pre-training choices, as well as the image-based experiments, supporting the efficacy of the proposed method.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Strengths:
- The proposed method effectively leverages a large suboptimal dataset to learn a diverse behavior prior, which can facilitate the second stage of task-specific fine-tuning.
- Extensive ablation studies demonstrate the effectiveness of design choices such as multi-task pre-training and behavior cloning regularization, especially with limited fine-tuning steps.
- The paper is well-written and easy to read.

Weaknesses:
- Different sets of Meta-World tasks are selected for different ablation studies. It would be helpful to provide some context or rationale behind the choice of particular tasks, or to rigorously keep the same choice of tasks throughout the ablation studies.
- Table 1 should include more baselines that (1) are diffusion-based and (2) support offline (pre-)training to online RL fine-tuning procedures for better comparison. The bulk of the current baselines are trained offline on suboptimal data without fine-tuning, and the selected offline-to-online baselines do not use diffusion backbones, which renders the comparisons unfair.
- It appears that BC regularization is the essential component of the proposed method. It would be helpful to provide ablations on additional environments, such as image-based Adroit, to further strengthen the conclusion.

Other Comments Or Suggestions:
1. For offline-to-online settings, it would be helpful to show the performance before and after online fine-tuning (e.g., in Table 1).

Questions For Authors:
1. In Meta-World, are dense rewards or sparse success signals used for RL fine-tuning?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Dear Reviewer w6L1,

We sincerely appreciate your precious time and constructive comments. In the following, we would like to answer your concerns separately.

**W1**: Different sets of MetaWorld tasks are selected for different ablation studies ... provide some context information or rationale behind the choice of particular tasks ...

**R1**: Thank you for your question. Our main experiments were conducted on the Meta-World 50 tasks, demonstrating the effectiveness of our method compared to baseline approaches. For ablation studies, such as investigating the impact of different pre-training datasets and evaluating performance on image-based Meta-World, we conducted experiments on the Meta-World 10 tasks. This subset is a well-established benchmark that includes tasks of varying difficulty levels and is widely used in multi-task learning research [1,2]. We find it sufficient to assess the effect of the pre-training dataset and the model's generalization ability to high-dimensional image observations. For other ablation studies, we primarily selected tasks from those used to visualize the learning curves. These tasks were chosen to cover a range of difficulty levels, ensuring a comprehensive comparison of our method's performance.

**W2**: Table 1 should include more baselines that (1) are diffusion-based and (2) support offline (pre-)training to online RL fine-tuning procedures for better comparison

**R2**: We extend two diffusion-based methods, DQL [3] and IDQL [4], to multi-task offline-to-online training on the Meta-World 10 tasks, following [5]. Specifically, we first pre-train a diffusion-based actor using behavior cloning on an offline dataset and then fine-tune both the actor and critic in an online environment. However, as the results below show, the diffusion actor can still be misled by inaccurate value network estimations, causing rapid forgetting of actions learned from offline pre-training.
Furthermore, since goals in Meta-World are randomized, traditional methods struggle to learn how to complete them effectively.

| Methods | Success rate on Meta-World 10 tasks |
|-|-|
| MT-DQL | 10.80 ± 0.11 |
| MT-IDQL | 13.57 ± 0.13 |
| SODP (ours) | 65.27 ± 0.21 |

**W3**: ... provide ablations on additional environments, such as image-based Adroit ...

**R3**: Thank you for your suggestion. We conduct ablation studies on three Adroit environments, fine-tuning the same pre-trained model with and without BC regularization separately. As shown below, the performance improvement of the model fine-tuned without BC is marginal, whereas the model fine-tuned with BC achieves significantly better results. This indicates that BC can also facilitate learning in high-dimensional single-task settings. This improvement can be attributed to the update stride constraint imposed by BC, which ensures that updates are not too aggressive, thereby preventing the model from becoming trapped in a suboptimal region [6,7].

| | Pre-trained model | w/o BC | w/ BC |
|-|-|-|-|
| Hammer | 20 ± 8 | 38 ± 3 | 67 ± 6 |
| Door | 55 ± 8 | 63 ± 3 | 96 ± 1 |
| Pen | 28 ± 3 | 30 ± 5 | 59 ± 4 |

**S1**: For offline-to-online settings ... show the performance before and after online finetuning

**R4**: Thank you for your suggestion. We evaluate the performance of the pre-trained model over 50 episodes across three seeds and report the results below. The pre-training performance of SODP surpasses that of other baselines, demonstrating the advantage of diffusion models in capturing multi-modal action distributions. Additionally, with RL-based reward fine-tuning, the performance improvement exceeds that of traditional offline-online methods, highlighting the effectiveness of our fine-tuning strategy in generalizing to downstream tasks more efficiently.
| Methods | After pre-training | After fine-tuning |
|-|-|-|
| RLPD | 6.4 ± 0.03 | 10.16 ± 0.11 |
| IBRL | 14.68 ± 0.06 | 25.29 ± 0.22 |
| Cal-QL | 15.82 ± 0.06 | 35.09 ± 0.12 |
| SODP (ours) | 31.76 ± 0.11 | 60.56 ± 0.14 |

**Q1**: In MetaWorld, are dense rewards or sparse success signals used for RL finetuning?

**R5**: We use dense rewards for RL fine-tuning because it is necessary to provide the reward signal $r(a _ 0)$ for each sampling process to guide the update of $p _ \theta$, as shown in Eq. (12). However, as shown in Table 2, we achieve good performance with only 100k online steps, which is 10% of the unlabeled data used for pre-training.

[1] Hard Tasks First: Multi-Task Reinforcement Learning Through Task Scheduling. ICML 2024.
[2] Decomposed Prompt Decision Transformer for Efficient Unseen Task Generalization. NeurIPS 2024.
[3] Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning. ICLR 2023.
[4] IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies. arXiv:2304.10573.
[5] Diffusion Policy Policy Optimization. arXiv:2409.00588.
[6] Trust Region Policy Optimization. ICML 2015.
[7] Proximal Policy Optimization Algorithms. arXiv:1707.06347.

---

Rebuttal Comment 1.1:
Comment: I appreciate the authors' response and effort in the additional experiments. Most of my concerns and questions have been addressed. Please include the results above in the final version. I will change my score to 3.

---

Reply to Comment 1.1.1:
Comment: Thank you for your kind response. We will incorporate these results in the revised version. Once again, we sincerely thank you for your time and effort in providing valuable feedback to help us improve our work.
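R5's point that the terminal reward $r(a_0)$ guides the update of $p_\theta$ across the whole sampling process can be illustrated with a toy score-function sketch. Everything below is a hypothetical 1-D stand-in: a single scalar `theta` replaces the denoiser network, the reward is invented, and nothing here reproduces the paper's Eq. (12).

```python
import random

random.seed(0)

# Toy 1-D stand-in for fine-tuning a denoising chain with a terminal reward:
# K "denoising" steps, each a Gaussian whose mean is shifted by a single
# learnable scalar theta (replacing the denoiser network p_theta).
K, SIGMA, LR, BATCH = 5, 0.5, 0.01, 16

def reward(x0):
    return -(x0 - 1.0) ** 2  # hypothetical dense reward, peak at x0 = 1

def rollout(theta):
    """Run the K-step chain; return the final sample and summed score term."""
    x = random.gauss(0.0, 1.0)          # x_K drawn from the prior
    score = 0.0
    for _ in range(K):
        mean = x + theta                # "denoiser" predicted mean
        x_new = random.gauss(mean, SIGMA)
        # d/dtheta log N(x_new; mean, SIGMA^2) = (x_new - mean) / SIGMA^2
        score += (x_new - mean) / SIGMA ** 2
        x = x_new
    return x, score

theta = 0.0
for _ in range(3000):
    samples = [rollout(theta) for _ in range(BATCH)]
    baseline = sum(reward(x0) for x0, _ in samples) / BATCH
    # REINFORCE over the K-step denoising MDP: the terminal reward r(x_0)
    # (minus a variance-reducing baseline) weights the per-step score sum.
    grad = sum((reward(x0) - baseline) * s for x0, s in samples) / BATCH
    theta += LR * grad

# The chain adds K * theta on average, so theta should drift toward 1 / K.
print(f"theta after fine-tuning: {theta:.2f}")
```

The key structural point, mirroring the rebuttal's description, is that a single scalar reward on the clean sample propagates a learning signal to every denoising step through the summed log-probability gradients.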
Summary: The paper presents SODP (Sub-Optimal Diffusion Planner), a two-stage framework that leverages task-agnostic pre-training on sub-optimal trajectory data followed by reward-guided fine-tuning for multi-task reinforcement learning (RL). The approach aims to train a generalizable diffusion-based planner that can extract broad action distributions from large-scale, low-quality offline data and adapt efficiently to new tasks using policy gradient fine-tuning with behavior cloning (BC) regularization. The pre-training phase learns a guidance-free generative model that predicts future actions given past states, without requiring task-specific rewards or expert demonstrations. In the fine-tuning phase, the planner is refined using policy gradients to maximize task-specific returns while maintaining pre-trained knowledge via BC regularization. Empirical evaluations on Meta-World and Adroit show that SODP outperforms state-of-the-art multi-task RL and diffusion-based methods, demonstrating strong sample efficiency, generalization to unseen tasks, and robustness with limited fine-tuning steps. The paper claims that pre-training on diverse sub-optimal data enables the planner to acquire broad action priors, which accelerate task-specific adaptation and achieve higher rewards compared to traditional RL baselines.

Claims And Evidence: The claims in the submission are largely supported by clear empirical evidence, particularly those regarding SODP's superior performance over state-of-the-art multi-task RL and diffusion-based methods, as demonstrated through success rates in Meta-World, learning curves, and fine-tuning efficiency studies. The claim that BC regularization improves fine-tuning stability is also well-supported by ablation studies comparing different regularization techniques. However, some claims lack sufficient justification.
Specifically, the assertion that task-agnostic pre-training on sub-optimal data leads to strong generalization across tasks is not rigorously analyzed beyond empirical results, and the paper does not provide a theoretical explanation or deeper analysis of the learned priors. Additionally, while SODP is claimed to be efficient for real-world applications, the paper does not discuss the computational overhead of diffusion-based inference compared to standard policy networks. Lastly, the claim that BC regularization prevents catastrophic forgetting is not explicitly tested through a retention or forgetting analysis. Strengthening these aspects with additional theoretical discussion, computational cost analysis, and empirical studies on task transfer and model forgetting would make the paper's claims more convincing.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem of multi-task reinforcement learning using diffusion models. The use of Meta-World and Adroit as benchmark environments is appropriate, as they provide diverse manipulation tasks that test generalization and fine-tuning efficiency across different task distributions. The evaluation methodology effectively measures success rates, fine-tuning efficiency, and generalization to unseen tasks, which are key factors in assessing the effectiveness of a multi-task planner. Additionally, the comparison against state-of-the-art RL baselines, including diffusion-based planners (MTDIFF, HarmoDT) and offline-online RL methods (RLPD, IBRL, Cal-QL), ensures a fair assessment of SODP's performance. However, the evaluation could be further strengthened by including runtime comparisons to assess the computational feasibility of diffusion-based planning versus traditional policy networks.
Additionally, while success rate is a relevant metric, analyzing the stability of fine-tuning (e.g., performance degradation over long adaptation periods) and transfer efficiency between tasks would provide deeper insights. Overall, the experimental setup is well-structured, but adding computational efficiency analysis and long-term stability studies would enhance the evaluation.

Theoretical Claims: The paper primarily focuses on an empirical approach rather than providing extensive theoretical claims, but it does include a derivation of policy gradient updates for fine-tuning the diffusion planner in Appendix A. The derivation follows standard policy gradient principles and importance sampling, aligning with established reinforcement learning literature (e.g., PPO and policy optimization methods). The formulation of reward fine-tuning as a K-step Markov decision process (MDP) over the diffusion denoising process appears correct in structure, mapping the diffusion process to reinforcement learning updates in a reasonable way. However, the paper does not provide formal theoretical guarantees on why pre-training with sub-optimal data leads to better generalization, which remains an empirical claim rather than a proven result. Additionally, while the policy gradient derivation follows standard RL formulations, it would be useful to include a more detailed discussion of convergence properties or potential biases introduced by the BC regularization term. There were no critical errors in the derivations, but a more formal analysis of pre-training's impact on generalization and stability would strengthen the theoretical foundation.

Experimental Designs Or Analyses: I reviewed the soundness and validity of the experimental design and analyses presented in the paper. The benchmark selection (Meta-World, Adroit) and success rate evaluation are appropriate for assessing multi-task reinforcement learning performance.
The experimental design includes comparisons against strong baselines, such as diffusion-based RL methods (MTDIFF, HarmoDT) and offline-online RL baselines (RLPD, IBRL, Cal-QL), ensuring a fair evaluation. The ablation studies on BC regularization vs. KL and PL regularization, as well as the impact of pre-training on unseen tasks, further strengthen the empirical findings. However, some analyses are missing, such as a computational efficiency comparison and an analysis of forgetting during fine-tuning. While the learning curves and task success rates support the claims, a more detailed breakdown of failure cases and sensitivity analyses would enhance robustness. Overall, the experimental design is strong, but adding efficiency analysis and long-term stability studies would further validate the findings.

Supplementary Material: Yes. I briefly went through the derivations and extended results in the supplementary material.

Relation To Broader Scientific Literature: The key contributions of the paper build on prior work in diffusion models for reinforcement learning, multi-task RL, and offline-to-online fine-tuning. Diffusion-based models have been used in trajectory generation and planning (e.g., MTDIFF), but these approaches often rely on expert demonstrations or task-conditioned optimization. The paper extends this line of research by proposing a task-agnostic pre-training strategy using sub-optimal data, similar in spirit to unsupervised pre-training in language models, but applied to RL. The fine-tuning approach builds on policy-gradient-based RL fine-tuning of generative models, but introduces a behavior cloning (BC) regularization term to preserve pre-trained knowledge while optimizing for task-specific rewards. This aligns with prior findings that pre-trained policies can be refined through RL but may suffer from forgetting or policy collapse without regularization (e.g., RLHF for LLMs, reward-guided fine-tuning of generative models).
The work also connects to multi-task RL approaches that aim to learn task-invariant representations (e.g., MTSAC, HarmoDT), but differs by leveraging diffusion models to model broad action distributions without explicit task conditioning. While prior work has explored multi-task pre-training for policy learning, this paper specifically demonstrates that sub-optimal data can serve as a useful training signal for generalization. The findings contribute to the broader literature by showing that diffusion models can learn meaningful priors from noisy data and be efficiently fine-tuned for high-reward adaptation, a paradigm that could be extended to real-world robotic control and hierarchical RL settings.

Essential References Not Discussed: The paper includes references to and comparisons with most of the relevant works in this domain.

Other Strengths And Weaknesses:
Strengths:
- The paper demonstrates strong originality in its approach to training a diffusion-based planner using sub-optimal data, rather than relying on expert demonstrations or reward-conditioned learning, which is a notable departure from prior diffusion RL methods.
- The proposed two-stage framework (task-agnostic pre-training + reward-guided fine-tuning) is an interesting adaptation of the pre-training/fine-tuning paradigm used in language models and computer vision, applied here to multi-task reinforcement learning.
- The integration of behavior cloning (BC) regularization during fine-tuning is also an important contribution, as it effectively prevents policy degradation while allowing adaptation to high-reward behaviors.

Weaknesses:
- While the paper is methodologically strong, there are areas where it could improve in clarity and broader significance.
- Some explanations, particularly the theoretical motivation for pre-training on sub-optimal data, are not well justified beyond empirical results.
- While the experimental results are compelling, the paper lacks a computational efficiency analysis.
Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
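The K-step MDP construction this review's theory assessment refers to is standard in the diffusion fine-tuning literature (DDPO-style formulations); one common way to write it, with all notation assumed here rather than taken from the paper's Appendix A, is:

```latex
% Denoising as a K-step MDP (notation assumed, DDPO-style construction).
% State: the current noisy sample and step index; action: the next sample.
\begin{aligned}
s_k &\triangleq (x_k,\, k), &
a_k &\triangleq x_{k-1}, &
\pi_\theta(a_k \mid s_k) &\triangleq p_\theta(x_{k-1} \mid x_k), \\
r(s_k, a_k) &\triangleq
\begin{cases}
r(x_0), & k = 1 \quad \text{(terminal reward on the clean sample)} \\
0, & k > 1
\end{cases}
\end{aligned}
```

With only a terminal reward, the policy gradient reduces to $\nabla_\theta J(\theta)=\mathbb{E}\!\left[r(x_0)\sum_{k=1}^{K}\nabla_\theta \log p_\theta(x_{k-1}\mid x_k)\right]$, which is the form whose derivation the review discusses.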
Rebuttal 1:
Rebuttal: Dear Reviewer aKBX,

We sincerely appreciate your precious time and constructive comments. In the following, we would like to answer your concerns separately.

**W1**: ... there are areas where it could improve ...

**W1.1**: ... analyzing ... performance degradation over long adaptation periods

**R1.1**: As shown in Tables 1 and 2, increasing the fine-tuning steps from 100k to 1M does not degrade performance. After 1M steps, the learning curve converges and remains stable.

**W1.2**: BC regularization prevents catastrophic forgetting is not explicitly tested ...

**R1.2**: We fine-tune two models on the same *button-press-topdown* task using the same pre-trained model, with and without BC regularization, respectively. We then evaluate them on other tasks from the pre-training dataset. As shown below, the performance of the model fine-tuned with BC regularization does not decline significantly, whereas the model without BC exhibits a noticeable performance drop, indicating that it forgets the acquired skills.

| | Pre-trained model | w/o BC | w/ BC |
|-|-|-|-|
| *button-press-topdown-wall* | 52.67 ± 0.01 | 21.33 ± 0.05 | 65.33 ± 0.05 |
| *button-press-wall* | 45.33 ± 0.01 | 20.00 ± 0.07 | 52.67 ± 0.03 |
| *door-close* | 86.00 ± 0.05 | 47.33 ± 0.06 | 84.67 ± 0.04 |

**W1.3**: ... transfer efficiency between tasks

**R1.3**: As shown in R1.2, the model fine-tuned on the *button-press-topdown* task also improves its performance on *button-press-topdown-wall* and *button-press-wall*, as these tasks share similar goals, allowing the learned skills to be transferred.

**W2**: ... theoretical motivation for pre-training on sub-optimal data ...

**R2**: Thank you for your question. The theoretical motivation of our method is grounded in provably efficient reward-free exploration in RL [1,2]. This approach employs a two-stage learning process for developing an RL policy: a reward-free exploration phase and a reward-based optimization phase.
Specifically, in the reward-free stage, the agent optimizes a purely exploratory reward, namely the upper-confidence bound (UCB) term of the value function. Leveraging a UCB-based bonus, the agent is able to collect diverse state-action pairs with broad coverage, thereby providing sufficient information for the subsequent planning phase. In the second stage, the agent performs least-squares value iteration (LSVI) by interacting with the environment. The goal of this stage is to conduct value and policy updates for a specific task with a specific reward function. This two-stage learning paradigm, which combines reward-free exploration and reward-based optimization, can achieve polynomial sample complexity according to theoretical results [1]. We believe this approach provides a solid theoretical foundation for our method's empirical success.

Our algorithm essentially follows this theoretical motivation but incorporates more practical implementations. The reward-free exploration stage in [1] is similar to count-based exploration, which aims to collect diverse datasets in the first stage [3]. In SODP, we adopt an alternative exploration method (entropy-based exploration) via a non-expert SAC policy to provide sufficient exploration in the data collection stage. We also pre-train a diffusion planner on this exploratory dataset rather than directly performing reward-based policy optimization. We empirically demonstrate that SODP requires few online samples to achieve strong performance in the second stage. This finding verifies the theoretical results of [1,2], which suggest that such a two-stage learning paradigm is sample-efficient.

Concerning the BC regularization, we would like to provide further clarification.
We follow the principled RL-tuning formulation used in LLM fine-tuning [4], which is defined as: $L=\mathbb{E} _ {s,a}[R(s,a)]-\beta D _ {\rm KL}[\pi_\theta(a|s)\|\pi _ {\rm ref}(a|s)].$ Here, the first term aims to maximize the reward during fine-tuning, while the second term ensures that the learned policy $\pi _ \theta$ remains close to the reference policy $\pi _ {\rm ref}$. In the context of SODP, $\pi _ {\rm ref}$ is the diffusion planner trained on a large-scale non-expert dataset in the first stage, and $\pi_\theta$ is the policy learned for a specific task in the second stage. However, directly calculating the KL-divergence term for diffusion models is not feasible due to their complex nature. Therefore, we adopt BC regularization as an alternative to achieve similar effects, ensuring that the learned policy retains the desirable properties of the reference policy while adapting to the specific task.

**W3**: ... lacks computational efficiency analysis.

**R3**: Please refer to R2 for Reviewer 4vpc.

[1] On Reward-Free Reinforcement Learning with Linear Function Approximation.
[2] Reward-Free Exploration for Reinforcement Learning.
[3] Count-Based Exploration with Neural Density Models.
[4] Training Language Models to Follow Instructions with Human Feedback.

---

Rebuttal Comment 1.1:
Comment: I am satisfied with the rebuttal and would like to keep my score as is.

---

Reply to Comment 1.1.1:
Comment: Thank you for your kind response. We sincerely thank you for your time and effort in providing valuable feedback to help us improve our work.
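The substitution described in R2 above (a BC pull toward the reference policy in place of the intractable KL term) can be sketched in a minimal scalar example. All names, numbers, and the quadratic BC penalty below are illustrative assumptions, not the paper's implementation.

```python
import random

random.seed(1)

# Minimal scalar sketch of reward maximization with a BC pull toward a
# frozen reference policy, standing in for the KL term that is intractable
# for diffusion models. All values are illustrative.
THETA_REF = 0.0   # mean of the frozen pre-trained ("reference") policy
BETA, LR, BATCH = 0.5, 0.05, 64

def reward(a):
    return -(a - 2.0) ** 2  # hypothetical task reward, peak at a = 2

theta = THETA_REF
for _ in range(2000):
    # Score-function (REINFORCE) gradient of E_{a ~ N(theta, 1)}[R(a)],
    # with the batch-mean reward as a variance-reducing baseline.
    acts = [random.gauss(theta, 1.0) for _ in range(BATCH)]
    baseline = sum(reward(a) for a in acts) / BATCH
    grad_reward = sum((reward(a) - baseline) * (a - theta) for a in acts) / BATCH
    # BC regularizer: gradient of -0.5 * (theta - THETA_REF)^2, pulling the
    # fine-tuned policy back toward the reference behaviour.
    grad_bc = -(theta - THETA_REF)
    theta += LR * (grad_reward + BETA * grad_bc)

# Setting the exact gradients to zero, -2(theta - 2) - BETA * theta = 0,
# gives theta = 4 / (2 + BETA) = 1.6: between the reference optimum (0)
# and the pure-reward optimum (2), with BETA controlling the trade-off.
print(f"theta: {theta:.2f}")
```

The equilibrium sitting strictly between the reference-optimal and reward-optimal parameters mirrors the intended effect of the regularizer: larger `BETA` keeps the fine-tuned policy closer to the pre-trained behaviour, smaller `BETA` lets reward dominate.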